Science.gov

Sample records for luminance mapping technique

  1. Evaluation of High Dynamic Range Photography as a Luminance Mapping Technique

    SciTech Connect

    Inanici, Mehlika; Galvin, Jim

    2004-12-30

The potential, limitations, and applicability of the High Dynamic Range (HDR) photography technique are evaluated as a luminance mapping tool. Multiple-exposure photographs of static scenes are taken with a Nikon 5400 digital camera to capture the wide luminance variation within the scenes. The camera response function is computationally derived using the Photosphere software and is used to fuse the multiple photographs into HDR images. The vignetting effect and point spread function of the camera and lens system are determined. Laboratory and field studies have shown that the pixel values in the HDR photographs can correspond to the physical quantity of luminance with reasonable precision and repeatability.
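The fusion step described in the abstract can be sketched in a few lines. This is an illustrative multiple-exposure HDR merge with a hat weighting function, not Photosphere's actual algorithm; the function name, the default linear response, and the weighting are assumptions for the sketch.

```python
import numpy as np

def fuse_hdr(exposures, times, response=None):
    """Fuse multiple exposures (grayscale, pixel codes 0-255) of a
    static scene into a relative radiance map. `response` is the
    inverse camera response g (log exposure per pixel code); when
    none is supplied, a linear sensor is assumed. Hypothetical
    sketch of the multiple-exposure HDR technique."""
    if response is None:
        # linear response assumption: g(z) = ln(z / 255) for z > 0
        z = np.arange(256, dtype=float)
        response = np.log(np.clip(z, 1, 255) / 255.0)
    # hat weighting: trust mid-range pixel codes the most
    w = lambda z: 1.0 - np.abs(z / 127.5 - 1.0)
    num = np.zeros_like(exposures[0], dtype=float)
    den = np.zeros_like(num)
    for img, t in zip(exposures, times):
        wi = w(img.astype(float))
        # each exposure votes for log radiance = g(z) - ln(exposure time)
        num += wi * (response[img] - np.log(t))
        den += wi
    return np.exp(num / np.maximum(den, 1e-6))
```

With two simulated exposures of a uniform patch of radiance 0.5, the weighted average recovers the radiance to within quantization error.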

  2. A new device for dynamic luminance mapping and glare risk assessment in buildings

    NASA Astrophysics Data System (ADS)

    Borisuit, Apiparn; Münch, Mirjam; Deschamps, Laurent; Kämpf, Jérôme; Scartezzini, Jean-Louis

    2012-10-01

High dynamic range imaging has been shown to be a reliable tool to assess luminance maps and glare risk probability in buildings. However, image capturing time imposes some limitations, especially when dealing with highly dynamic and contrasted daylight situations. We used a newly developed prototype of a digital camera which contains a high dynamic range pixel array chip with logarithmic encoding. This type of camera makes it possible to assess luminance, contrast, and contrast direction effectively from a single image or from real-time recordings. The device was equipped with a fisheye lens and V-lambda filters to adapt the camera's spectral sensitivity to the human eye. After spectral and photometric calibration and vignetting correction, the device was tested by performing luminance mapping of real scenes. The results showed that luminance maps of a room can be efficiently assessed under dynamic daylight and mixed daylight and electric lighting conditions in a very short time (i.e. 100 ms) compared with classical HDR imaging techniques. This allows glare indices of a scene to be calculated simultaneously. The camera opens up a variety of new applications as a useful tool for architects, building designers, and lighting experts. The device can be used to easily monitor daylight availability and glare indices in existing buildings, and further developments for advanced (day-)lighting control can be envisaged.
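A logarithmic-response chip covers the full dynamic range in one frame because each code step corresponds to a constant luminance ratio. The decode below is a sketch of that idea only; the code range and luminance limits are hypothetical, not the prototype's calibration.

```python
import numpy as np

def decode_log_pixel(pixel, l_min=0.01, l_max=1e5, n_codes=1024):
    """Decode a logarithmically encoded pixel code into luminance
    (cd/m^2). Each code step multiplies luminance by a constant
    ratio, so n_codes steps span the whole range [l_min, l_max].
    The limits and code depth here are illustrative assumptions."""
    ratio = (l_max / l_min) ** (pixel / (n_codes - 1.0))
    return l_min * ratio
```

Code 0 maps to the minimum luminance and the top code to the maximum, with equal relative steps in between.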

  3. Covariance mapping techniques

    NASA Astrophysics Data System (ADS)

    Frasinski, Leszek J.

    2016-08-01

    Recent technological advances in the generation of intense femtosecond pulses have made covariance mapping an attractive analytical technique. The laser pulses available are so intense that often thousands of ionisation and Coulomb explosion events will occur within each pulse. To understand the physics of these processes the photoelectrons and photoions need to be correlated, and covariance mapping is well suited for operating at the high counting rates of these laser sources. Partial covariance is particularly useful in experiments with x-ray free electron lasers, because it is capable of suppressing pulse fluctuation effects. A variety of covariance mapping methods is described: simple, partial (single- and multi-parameter), sliced, contingent and multi-dimensional. The relationship to coincidence techniques is discussed. Covariance mapping has been used in many areas of science and technology: inner-shell excitation and Auger decay, multiphoton and multielectron ionisation, time-of-flight and angle-resolved spectrometry, infrared spectroscopy, nuclear magnetic resonance imaging, stimulated Raman scattering, directional gamma ray sensing, welding diagnostics and brain connectivity studies (connectomics). This review gives practical advice for implementing the technique and interpreting the results, including its limitations and instrumental constraints. It also summarises recent theoretical studies, highlights unsolved problems and outlines a personal view on the most promising research directions.
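The simple and single-parameter partial covariance maps described in the review reduce to short numpy expressions. This is a minimal sketch with hypothetical function names; real implementations must also handle detector noise and huge shot counts.

```python
import numpy as np

def covariance_map(shots):
    """Simple covariance map. `shots` is an (n_shots, n_channels)
    array of single-shot spectra (e.g. ion time-of-flight traces).
    Returns C[i, j] = <x_i x_j> - <x_i><x_j>; off-diagonal islands
    reveal correlated fragment pairs. Illustrative sketch."""
    X = np.asarray(shots, dtype=float)
    mean = X.mean(axis=0)
    return X.T @ X / X.shape[0] - np.outer(mean, mean)

def partial_covariance_map(shots, monitor):
    """Single-parameter partial covariance: subtract correlations
    induced by a fluctuating common parameter I (e.g. pulse
    energy), as used with x-ray free electron lasers:
    pC = C - cov(x, I) cov(I, x) / var(I)."""
    X = np.asarray(shots, dtype=float)
    I = np.asarray(monitor, dtype=float)
    C = covariance_map(X)
    # covariance of each channel with the monitored parameter
    cXI = ((X - X.mean(0)) * (I - I.mean())[:, None]).mean(0)
    return C - np.outer(cXI, cXI) / I.var()
```

When both channels are driven entirely by the monitored fluctuation, the partial covariance correctly vanishes while the plain covariance does not.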

  4. Nonlinear mapping of the luminance in dual-layer high dynamic range displays

    NASA Astrophysics Data System (ADS)

    Guarnieri, Gabriele; Ramponi, Giovanni; Bonfiglio, Silvio; Albani, Luigi

    2009-02-01

    It has long been known that the human visual system (HVS) has a nonlinear response to luminance. This nonlinearity can be quantified using the concept of just noticeable difference (JND), which represents the minimum amplitude of a specified test pattern an average observer can discern from a uniform background. The JND depends on the background luminance following a threshold versus intensity (TVI) function. It is possible to define a curve which maps physical luminances into a perceptually linearized domain. This mapping can be used to optimize a digital encoding, by minimizing the visibility of quantization noise. It is also commonly used in medical applications to display images adapting to the characteristics of the display device. High dynamic range (HDR) displays, which are beginning to appear on the market, can display luminance levels outside the range in which most standard mapping curves are defined. In particular, dual-layer LCD displays are able to extend the gamut of luminance offered by conventional liquid crystals towards the black region; in such areas suitable and HVS-compliant luminance transformations need to be determined. In this paper we propose a method, which is primarily targeted to the extension of the DICOM curve used in medical imaging, but also has a more general application. The method can be modified in order to compensate for the ambient light, which can be significantly greater than the black level of an HDR display and consequently reduce the visibility of the details in dark areas.
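The perceptual-linearization idea can be sketched by placing gray levels one JND apart. The block below assumes a simplified Weber-law TVI (constant relative threshold); it is not the actual DICOM grayscale standard display function, whose JND model is more elaborate, and the luminance range and Weber fraction are illustrative.

```python
import numpy as np

def jnd_index(L, l_min, weber=0.01):
    """Number of JND steps from l_min up to L under a simplified
    Weber-law TVI: each step multiplies luminance by (1 + weber)."""
    return np.log(L / l_min) / np.log1p(weber)

def perceptual_lut(l_min, l_max, n_levels, weber=0.01):
    """Display LUT with n_levels gray levels at equal JND spacing
    between l_min and l_max, so quantization noise is equally
    (in)visible everywhere. Sketch only, not the DICOM GSDF."""
    total = jnd_index(l_max, l_min, weber)
    j = np.linspace(0.0, total, n_levels)
    return l_min * (1.0 + weber) ** j
```

Under this model the LUT is geometric: successive levels keep a constant luminance ratio, which is exactly what "perceptually linear" means for a Weber-law observer.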

  5. C IV and C III] reverberation mapping of the luminous quasar PG 1247+267

    SciTech Connect

    Trevese, D.; Saturni, F. G.; Perna, M.; Dadina, M.

    2014-11-10

So far the masses of about 50 active galactic nuclei (AGNs) have been measured through the reverberation mapping technique (RM). Most measurements have been performed for objects of moderate luminosity and redshift, based on Hβ, which is also used to calibrate the scaling relation that allows single-epoch (SE) mass determination based on AGN luminosity and the width of different emission lines. Due to the complex structure and gas dynamics of the relevant emission region, the SE masses obtained from the C IV(1549 Å) line show a large spread around the mean values. Direct RM measurements of C IV exist for only six AGNs of low luminosity and redshift, and only one luminous quasar. Since 2003, we have collected photometric and spectroscopic observations of PG 1247+267, the most luminous quasar ever analyzed for RM. We provide light curves for the continuum and for C IV(1549 Å) and C III](1909 Å), and measurements of the reverberation time lags based on the SPEAR method. The sizes of the line emission regions yield a ratio R(C III])/R(C IV) ∼ 2, similar to the case of Seyfert galaxies, indicating for the first time a similar ionization stratification in a luminous quasar and in low-luminosity nuclei. Due to the relatively small size of the broad line region and the relatively narrow line widths, we estimate a small mass and an anomalously high Eddington ratio. We discuss the possibility that either the shape of the emission region or an amplification of the luminosity caused by gravitational lensing may be partly responsible for the result.
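The core of reverberation mapping is the time lag between the continuum and line light curves. The sketch below uses a simple interpolated cross-correlation to find that lag; it is not the SPEAR method used in the paper, and the function name and trial-lag grid are assumptions for illustration.

```python
import numpy as np

def ccf_lag(t, cont, line, lags):
    """Estimate a reverberation lag by interpolated cross-
    correlation: evaluate the continuum `lag` earlier (so the
    line responds to it), and pick the lag maximizing the
    Pearson correlation with the line light curve. Illustrative
    sketch of the reverberation mapping idea, not SPEAR."""
    corrs = []
    for lag in lags:
        c = np.interp(t - lag, t, cont)   # continuum, `lag` earlier
        corrs.append(np.corrcoef(c, line)[0, 1])
    return lags[int(np.argmax(corrs))]
```

With a synthetic continuum pulse and a line curve echoing it 10 days later, the estimator recovers the lag.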

  6. Luminance and contrast ideal balancing based tone mapping algorithm

    NASA Astrophysics Data System (ADS)

    Besrour, Amine; Abdelkefi, Fatma; Siala, Mohamed; Snoussi, Hichem

    2015-09-01

The tone mapping field represents a challenge for all HDR researchers. Indeed, this field is very important since it offers better display conditions for the end user. This paper details the design of a recent tone mapping operator used in high dynamic range imaging systems. The proposed operator is a local method that uses an adaptable factor combining the average neighbouring contrast and the brightness difference. Thanks to that, this solution provides good results with better brightness, contrast, and visibility, without producing undesired artifacts or shadow effects.
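For context, the basic HDR-to-display compression that any tone mapping operator builds on can be shown with a minimal global, Reinhard-style sketch. The paper's operator is local and adaptive; this block only illustrates the underlying compression, and the `key` parameter is an assumption.

```python
import numpy as np

def tonemap_global(lum, key=0.18, eps=1e-6):
    """Minimal global tone mapping sketch: scale scene luminance
    so its log-average lands at `key`, then compress with
    L / (1 + L) into the displayable range [0, 1). Not the local
    operator proposed in the paper."""
    log_avg = np.exp(np.mean(np.log(lum + eps)))   # geometric mean
    L = key * lum / log_avg
    return L / (1.0 + L)
```

The mapping is monotone, so the ordering of luminances (and hence scene structure) is preserved while the range is compressed.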

  7. Dynamic range optimization by flexible local contrast enhancement using luminance map

    NASA Astrophysics Data System (ADS)

    Lin, Chih-Hung; Lin, Shu-Yuan; Chen, Tzung-Her

    2013-07-01

The original dynamic range of a scene is generally limited by the capture sensors and display devices, and is shown with a low dynamic range. Therefore, it is difficult to display the details in both dark and bright areas simultaneously. This paper adopts flexible thresholds combined with a luminance map to improve the quality of images captured under non-ideal light distribution, based on simple computation. The implementation effectively adjusts image contrast to bring out both low-light and highlight details while avoiding the common quality losses due to halo artifacts, desaturation, and a greyish appearance.

  8. Visuoauditory mappings between high luminance and high pitch are shared by chimpanzees (Pan troglodytes) and humans.

    PubMed

    Ludwig, Vera U; Adachi, Ikuma; Matsuzawa, Tetsuro

    2011-12-20

    Humans share implicit preferences for certain cross-sensory combinations; for example, they consistently associate higher-pitched sounds with lighter colors, smaller size, and spikier shapes. In the condition of synesthesia, people may experience such cross-modal correspondences to a perceptual degree (e.g., literally seeing sounds). So far, no study has addressed the question whether nonhuman animals share cross-modal correspondences as well. To establish the evolutionary origins of cross-modal mappings, we tested whether chimpanzees (Pan troglodytes) also associate higher pitch with higher luminance. Thirty-three humans and six chimpanzees were required to classify black and white squares according to their color while hearing irrelevant background sounds that were either high-pitched or low-pitched. Both species performed better when the background sound was congruent (high-pitched for white, low-pitched for black) than when it was incongruent (low-pitched for white, high-pitched for black). An inherent tendency to pair high pitch with high luminance hence evolved before the human lineage split from that of chimpanzees. Rather than being a culturally learned or a linguistic phenomenon, this mapping constitutes a basic feature of the primate sensory system. PMID:22143791

  10. Brain Friendly Techniques: Mind Mapping

    ERIC Educational Resources Information Center

    Goldberg, Cristine

    2004-01-01

Mind Mapping can be called the Swiss Army Knife for the brain, a total visual thinking tool, or a multi-handed thought catcher. Invented by Tony Buzan in the early 1970s and used by millions around the world, it is a method that can be part of a repertoire of techniques when teaching information literacy, planning, presenting, thinking, and so…

  11. A MAP OF THE INTEGRATED SACHS-WOLFE SIGNAL FROM LUMINOUS RED GALAXIES

    SciTech Connect

    Granett, Benjamin R.; Neyrinck, Mark C.; Szapudi, Istvan

    2009-08-10

    We construct a map of the time derivative of the gravitational potential traced by Sloan Digital Sky Survey luminous red galaxies (LRGs). The potential decays on large scales due to cosmic acceleration, leaving an imprint on cosmic microwave background (CMB) radiation through the integrated Sachs-Wolfe (ISW) effect. With a template fit, we directly measure this signature on the CMB at a 2{sigma} confidence level. The measurement is consistent with the cross-correlation statistic, strengthening the claim that dark energy is indeed the cause of the correlation. This new approach potentially simplifies the cosmological interpretation. Our constructed linear ISW map shows no evidence for degree-scale cold and hot spots associated with supervoid and supercluster structures. This suggests that the linear ISW effect in a concordance {lambda}CDM cosmology is insufficient to explain the strong CMB imprints from these structures that we previously reported.

  12. Mapping luminous blue compact galaxies with VIRUS-P. Morphology, line ratios, and kinematics

    NASA Astrophysics Data System (ADS)

    Cairós, L. M.; Caon, N.; García Lorenzo, B.; Kelz, A.; Roth, M.; Papaderos, P.; Streicher, O.

    2012-11-01

Context. Blue compact galaxies (BCG) are narrow emission-line systems that undergo a violent burst of star formation. They are compact, low-luminosity galaxies, with blue colors and low chemical abundances, which offer us a unique opportunity to investigate collective star formation and its effects on galaxy evolution in a relatively simple, dynamically unperturbed environment. Spatially resolved spectrophotometric studies of BCGs are essential for a better understanding of the role of starburst-driven feedback processes on the kinematical and chemical evolution of low-mass galaxies near and far. Aims: We carry out an integral field spectroscopy (IFS) study of a sample of luminous BCGs, with the aim to probe the morphology, kinematics, dust extinction, and excitation mechanisms of their warm interstellar medium (ISM). Methods: We obtained IFS data for five luminous BCGs with VIRUS-P, the prototype instrument for the Visible Integral Field Replicable Unit Spectrograph, attached to the 2.7 m Harlan J. Smith Telescope at the McDonald Observatory. VIRUS-P consists of a square array of 247 optical fibers, which covers a 109″ × 109″ field of view, with a spatial sampling of 4.2″ and a 0.3 filling factor. We observed in the 3550-5850 Å spectral range, with a resolution of 5 Å FWHM. From these data we built two-dimensional maps of the continuum and the most prominent emission-lines ([O ii] λ3727, Hγ, Hβ and [O iii] λ5007), and investigated the morphology of diagnostic emission-line ratios and the extinction patterns in the ISM as well as stellar and gas kinematics. Additionally, from integrated spectra we inferred total line fluxes and luminosity-weighted extinction coefficients and gas-phase metallicities. Results: All galaxies exhibit an overall regular morphology in the stellar continuum, while their warm ISM morphology is more complex: in II Zw 33 and Mrk 314, the star-forming regions are aligned along a chain-structure; Haro 1, NGC 4670 and III Zw 102

  13. Window defect planar mapping technique

    NASA Technical Reports Server (NTRS)

    Minton, F. R.; Minton, U. O. (Inventor)

    1976-01-01

    A method of planar mapping defects in a window having an edge surface and a planar surface. The method is comprised of steps for mounting the window on a support surface. Then a light sensitive paper is placed adjacent to the window surface. A light source is positioned adjacent to the window edge. The window is then illuminated with the source of light for a predetermined interval of time. Defects on the surface of the glass, as well as in the interior of the glass are detected by analyzing the developed light sensitive paper. The light source must be in the form of optical fibers or a light tube whose light transmitting ends are placed near the edge surface of the window.

  14. Digital Mapping Techniques '07 - Workshop Proceedings

    USGS Publications Warehouse

    Soller, David R.

    2008-01-01

The Digital Mapping Techniques '07 (DMT'07) workshop was attended by 85 technical experts from 49 agencies, universities, and private companies, including representatives from 27 state geological surveys. This year's meeting, the tenth in the annual series, was hosted by the South Carolina Geological Survey, from May 20-23, 2007, on the University of South Carolina campus in Columbia, South Carolina. Each DMT workshop has been coordinated by the U.S. Geological Survey's National Geologic Map Database Project and the Association of American State Geologists (AASG). As in previous years' meetings, the objective was to foster informal discussion and exchange of technical information, principally in order to develop more efficient methods for digital mapping, cartography, GIS analysis, and information management. At this meeting, oral and poster presentations and special discussion sessions emphasized: 1) methods for creating and publishing map products (here, 'publishing' includes Web-based release); 2) field data capture software and techniques, including the use of LIDAR; 3) digital cartographic techniques; 4) migration of digital maps into ArcGIS Geodatabase format; 5) analytical GIS techniques; and 6) continued development of the National Geologic Map Database.

  15. New techniques in dark matter mapping

    NASA Astrophysics Data System (ADS)

    Lorenz, Suzanne

We have developed a new pipeline for mapping dark matter associated with clusters of galaxies via weak gravitational lensing. This method will be useful both with current datasets and future large optical survey telescopes, such as the Large Synoptic Survey Telescope (LSST). We use a novel source-finding technique based on a wavelet detection method. We then find known photometric and spectroscopic redshifts associated with our sources and measure the ellipticities of galaxies using a second-moment technique. The ellipticity and photometric redshift distributions are then converted to a dark matter map. We have represented the dark matter as smoothed particles to invert the ellipticity map. This has yielded both 2D and 3D dark matter distributions when applied to our Subaru archive image of Abell 2218.
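The second-moment ellipticity measurement at the heart of such a pipeline is compact enough to sketch. This is the unweighted textbook form only; real lensing pipelines weight the moments and correct for the PSF, and the function name is an assumption.

```python
import numpy as np

def ellipticity(image):
    """Galaxy ellipticity from unweighted second moments of the
    light distribution, the quantity that gets stacked into a
    shear (and hence dark matter) map:
        e1 = (Qxx - Qyy) / (Qxx + Qyy),  e2 = 2 Qxy / (Qxx + Qyy).
    Sketch of the core second-moment technique, without the
    weighting and PSF correction a real pipeline needs."""
    img = np.asarray(image, dtype=float)
    y, x = np.indices(img.shape)
    flux = img.sum()
    xc, yc = (x * img).sum() / flux, (y * img).sum() / flux
    qxx = ((x - xc) ** 2 * img).sum() / flux
    qyy = ((y - yc) ** 2 * img).sum() / flux
    qxy = ((x - xc) * (y - yc) * img).sum() / flux
    denom = qxx + qyy
    return (qxx - qyy) / denom, 2.0 * qxy / denom
```

For an elliptical Gaussian with axis variances 16 and 4 the expected e1 is (16 − 4)/(16 + 4) = 0.6 with e2 = 0.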

  16. Three-dimensional spectroscopy of local luminous compact blue galaxies: kinematic maps of a sample of 22 objects

    NASA Astrophysics Data System (ADS)

    Pérez-Gallego, J.; Guzmán, R.; Castillo-Morales, A.; Gallego, J.; Castander, F. J.; Garland, C. A.; Gruel, N.; Pisano, D. J.; Zamorano, J.

    2011-12-01

    We use three-dimensional optical spectroscopy observations of a sample of 22 local luminous compact blue galaxies (LCBGs) to create kinematic maps. By means of these, we classify the kinematics of these galaxies into three different classes: rotating disc (RD), perturbed rotation (PR) and complex kinematics (CK). We find 48 per cent are RDs, 28 per cent are PRs and 24 per cent are CKs. RDs show rotational velocities that range between ˜50 and ˜200 km s-1, and dynamical masses that range between ˜1 × 109 and ˜3 × 1010 M⊙. We also address the following two fundamental questions through the study of the kinematic maps: (i) What processes are triggering the current starburst in LCBGs? We search our maps of the galaxy velocity fields for signatures of recent interactions and close companions that may be responsible for the enhanced star formation in our sample. We find that 5 per cent of objects show evidence of a recent major merger, 10 per cent of a minor merger and 45 per cent of a companion. This argues in favour of ongoing interactions with close companions as a mechanism for the enhanced star formation activity in these galaxies. (ii) What processes may eventually quench the current starbust in LCBGs? Velocity and velocity width maps, together with emission line ratio maps, can reveal signatures of active galactic nuclei (AGNs) activity or supernova (SN)-driven galactic winds that could halt the current burst. We find only 5 per cent of objects with clear evidence of AGN activity and 27 per cent with kinematics consistent with SN-driven galactic winds. Therefore, a different mechanism may be responsible for quenching the star formation in LCBGs. Finally, from our analysis, we find that the velocity widths of RDs, rather than accounting exclusively for the rotational nature of these objects, may account as well for other kinematic components and may not be good tracers of their dynamical masses.

  17. Digital Mapping Techniques '06 - Workshop Proceedings

    USGS Publications Warehouse

    Soller, David R.

    2007-01-01

The Digital Mapping Techniques '06 (DMT'06) workshop was attended by more than 110 technical experts from 51 agencies, universities, and private companies, including representatives from 27 state geological surveys (see Appendix A of these Proceedings). This workshop was similar in nature to the previous nine meetings, which were held in Lawrence, Kansas (Soller, 1997), Champaign, Illinois (Soller, 1998), Madison, Wisconsin (Soller, 1999), Lexington, Kentucky (Soller, 2000), Tuscaloosa, Alabama (Soller, 2001), Salt Lake City, Utah (Soller, 2002), Millersville, Pennsylvania (Soller, 2003), Portland, Oregon (Soller, 2004), and Baton Rouge, Louisiana (Soller, 2005). This year's meeting was hosted by the Ohio Geological Survey, from June 11-14, 2006, on the Ohio State University campus in Columbus, Ohio. As in the previous meetings, the objective was to foster informal discussion and exchange of technical information. It is with great pleasure that I note that the objective was successfully met, as attendees continued to share and exchange knowledge and information, and renew friendships and collegial work begun at past DMT workshops. Each DMT workshop has been coordinated by the Association of American State Geologists (AASG) and U.S. Geological Survey (USGS) Data Capture Working Group, the latter of which was formed in August 1996 to support the AASG and the USGS in their effort to build a National Geologic Map Database (see Soller, this volume, and http://ngmdb.usgs.gov/info/standards/datacapt/). The Working Group was formed because increased production efficiencies, standardization, and quality of digital map products were needed for the database, and for the State and Federal geological surveys, to provide more high-quality digital maps to the public. At the 2006 meeting, oral and poster presentations and special discussion sessions emphasized: 1) methods for creating and publishing map products (here, 'publishing' includes Web-based release); 2) field data capture

  18. Fermi surface mapping: Techniques and visualization

    SciTech Connect

    Rotenberg, E.; Denlinger, J. D.; Kevan, S. D.; Goodman, K. W.; Tobin, J. G.; Mankey, G. J.

    1997-04-01

Angle-resolved photoemission (ARP) of valence bands is a mature technique that has achieved spectacular success in band-mapping metals, semiconductors, and insulators. The purpose of the present study was the development of experimental and analytical techniques in ARP which take advantage of third generation light sources. Here the authors studied the relatively simple Cu surface in preparation for other metals. Copper and related metals themselves are of current interest, especially due to copper's role as an interlayer in spin valves and other magnetic heterostructures. A major goal of this study was the development of a systematic technique to quickly (i.e. in a few hours of synchrotron beamtime) measure the Fermi surface (FS) and separate it into bulk and surface FSs. Often, one needs to avoid bulk features altogether, which one can achieve by carefully mapping their locations in k-space. The authors will also show how they systematically map Fermi surfaces throughout large volumes of k-space, and, by processing the resulting volume data sets, provide intuitive pictures of FSs, both bulk and surface.

  19. Investigation of light scattering as a technique for detecting discrete soot particles in a luminous flame

    NASA Technical Reports Server (NTRS)

    1978-01-01

The practicability of using a classical light-scattering technique, involving comparison of angular scattering intensity patterns with theoretically determined Mie and Rayleigh patterns, to detect discrete soot particles (diameter less than 50 nm) in premixed propane/air and propane/oxygen-helium flames is considered. The experimental apparatus employed in this investigation included a laser light source, a flat-flame burner, specially coated optics, a cooled photomultiplier detector, and a lock-in voltmeter readout. Although large, agglomerated soot particles were detected and sized, it was not possible to detect small, discrete particles. The limiting factor appears to be background scattering by the system's optics.
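The Rayleigh reference pattern that measured angular scans are compared against has a simple closed form. The block below is that textbook pattern for unpolarized light only; particles approaching the wavelength require the full Mie series, which is not sketched here.

```python
import numpy as np

def rayleigh_pattern(theta):
    """Angular intensity pattern for unpolarized light scattered
    by particles much smaller than the wavelength (Rayleigh
    regime): I(theta) proportional to (1 + cos^2 theta), here
    normalized to 1 at theta = 0. Small discrete soot particles
    should follow this forward/backward-symmetric pattern, while
    large agglomerates scatter preferentially forward (Mie)."""
    return (1.0 + np.cos(theta) ** 2) / 2.0
```

The symmetry check I(45°) = I(135°) is exactly the kind of dissymmetry test used to distinguish Rayleigh-sized particles from larger agglomerates.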

  20. Structural Modeling Using "Scanning and Mapping" Technique

    NASA Technical Reports Server (NTRS)

    Amos, Courtney L.; Dash, Gerald S.; Shen, J. Y.; Ferguson, Frederick; Noga, Donald F. (Technical Monitor)

    2000-01-01

Supported by NASA Glenn Center, we are in the process of developing a structural damage diagnostic and monitoring system for rocket engines, which consists of five modules: Structural Modeling, Measurement Data Pre-Processor, Structural System Identification, Damage Detection Criterion, and Computer Visualization. The function of the system is to detect damage as it is incurred by the engine structures. The scientific principle used to identify damage is to utilize the changes in vibrational properties between the pre-damaged and post-damaged structures. The vibrational properties of the pre-damaged structure can be obtained from an analytic computer model of the structure. Thus, as the first stage of the whole research plan, we currently focus on the first module, Structural Modeling. Three computer software packages have been selected and will be integrated for this purpose: PhotoModeler-Pro, AutoCAD-R14, and MSC/NASTRAN. AutoCAD is the most popular PC-CAD system currently available on the market. For our purpose, it serves as an interface to generate structural models of any particular engine parts or assembly, which are then passed to MSC/NASTRAN for extracting structural dynamic properties. Although AutoCAD is a powerful structural modeling tool, the complexity of engine components requires a further improvement in structural modeling techniques. We are working on a so-called "scanning and mapping" technique, which is relatively new. The basic idea is to produce a full and accurate 3D structural model by tracing over multiple overlapping photographs taken from different angles. There is no need to input point positions, angles, distances, or axes. Photographs can be taken by any type of camera with different lenses. With the integration of such a modeling technique, the capability of structural modeling will be enhanced. The prototypes of any complex structural components will be produced by PhotoModeler first based on existing similar

  1. Digital Mapping Techniques '11–12 workshop proceedings

    USGS Publications Warehouse

    Soller, David R.

    2014-01-01

    At these meetings, oral and poster presentations and special discussion sessions emphasized: (1) methods for creating and publishing map products (here, "publishing" includes Web-based release); (2) field data capture software and techniques, including the use of LiDAR; (3) digital cartographic techniques; (4) migration of digital maps into ArcGIS Geodatabase formats; (5) analytical GIS techniques; and (6) continued development of the National Geologic Map Database.

  2. Process Mapping: Tools, Techniques, & Critical Success Factors.

    ERIC Educational Resources Information Center

    Kalman, Howard K.

    2002-01-01

    Explains process mapping as an analytical tool and a process intervention that performance technologists can use to improve human performance by reducing error variance. Highlights include benefits of process mapping; and critical success factors, including organizational readiness, time commitment by participants, and the availability of a…

  3. Concept Mapping: A Critical Thinking Technique

    ERIC Educational Resources Information Center

    Harris, Charles M.; Zha, Shenghua

    2013-01-01

    Concept mapping, graphically depicting the structure of abstract concepts, is based on the observation that pictures and line drawings are often more easily comprehended than the words that represent an abstract concept. The efficacy of concept mapping for facilitating critical thinking was assessed in four sections of an introductory psychology…

  4. Freeform lens design for light-emitting diode uniform illumination by using a method of source-target luminous intensity mapping.

    PubMed

    Chen, Jin-Jia; Huang, Ze-Yu; Liu, Te-Shu; Tsai, Ming-Da; Huang, Kuang-Lung

    2015-10-01

We present a freeform lens for application to LED uniform illumination. This lens, which is designed with a method of simple source-target luminous intensity mapping, can produce irradiance uniformity of greater than 0.8 and optical efficiency above 90% with an arbitrary half-beam angle greater than 45 deg. Typically, as compared with a conventional source-target energy mapping method, this design method can achieve better optical performance of lenses for general LED lighting. When a non-Lambertian-type light source is employed, for example a chip-on-board LED, the use of the method can result in a compact LED lens without sacrificing the high irradiance uniformity and high optical efficiency yielded by lenses for Lambertian-type LED light sources. PMID:26479644

  5. Potential Benefits of the Information Mapping Technique.

    ERIC Educational Resources Information Center

    Schaffer, Eric M.

    1982-01-01

    A comparison of the performance of 10 employees using the current version of a Corporate Time Reporting Instruction and a revision prepared using Information Mapping (IM) indicates that subjects committed 54 percent fewer errors when using the IM version. Samples of both versions and evaluation forms used are included. (MER)

  6. Historical shoreline mapping (I): improving techniques and reducing positioning errors

    USGS Publications Warehouse

    Thieler, E. Robert; Danforth, William W.

    1994-01-01

    A critical need exists among coastal researchers and policy-makers for a precise method to obtain shoreline positions from historical maps and aerial photographs. A number of methods that vary widely in approach and accuracy have been developed to meet this need. None of the existing methods, however, address the entire range of cartographic and photogrammetric techniques required for accurate coastal mapping. Thus, their application to many typical shoreline mapping problems is limited. In addition, no shoreline mapping technique provides an adequate basis for quantifying the many errors inherent in shoreline mapping using maps and air photos. As a result, current assessments of errors in air photo mapping techniques generally (and falsely) assume that errors in shoreline positions are represented by the sum of a series of worst-case assumptions about digitizer operator resolution and ground control accuracy. These assessments also ignore altogether other errors that commonly approach ground distances of 10 m. This paper provides a conceptual and analytical framework for improved methods of extracting geographic data from maps and aerial photographs. We also present a new approach to shoreline mapping using air photos that revises and extends a number of photogrammetric techniques. These techniques include (1) developing spatially and temporally overlapping control networks for large groups of photos; (2) digitizing air photos for use in shoreline mapping; (3) preprocessing digitized photos to remove lens distortion and film deformation effects; (4) simultaneous aerotriangulation of large groups of spatially and temporally overlapping photos; and (5) using a single-ray intersection technique to determine geographic shoreline coordinates and express the horizontal and vertical error associated with a given digitized shoreline. As long as historical maps and air photos are used in studies of shoreline change, there will be a considerable amount of error (on the

  7. A calibration-free evapotranspiration mapping technique

    NASA Astrophysics Data System (ADS)

    Szilagyi, J.

    2010-12-01

    With the availability of Moderate Resolution Imaging Spectroradiometer (MODIS) data the spatial distribution of the resulting daytime land surface temperature (Ts) can be tracked at a resolution of about 1-km. A simple, self-calibrating linear transformation of the Ts values into evapotranspiration (ET) rates is possible if the following criteria are met: a) the vertical gradient of the air temperature near the surface is directly proportional to Ts; b) net energy available for sensible and latent heat transfer at the surface is quasi-constant in space; c) heat conduction into the soil is negligible, and; d) land-surface properties do not change drastically over space. The validity of a) has been proved by such models as SEBAL and METRIC. Requirement b) is fulfilled over a flat or rolling terrain provided the probability distribution of the surface albedo values of the MODIS cells has a narrow spread, which is the case for the two study areas (Hungary and Nebraska) with a characteristic vegetation-period mean of about 16% and a standard deviation of 1.4%. Heat conduction into the soil can be considered negligible for periods longer than a day, thus the 8-day composited Ts values employed in the present study comply with this requirement. Finally, for periods longer than a day, the assumption of near-neutral atmospheric conditions is justified which entails that spatial variations in surface properties have a significantly dampened effect on the flux-transfer coefficient (i.e., aerodynamic resistance) value which therefore can be considered as quasi-constant in space. The linear transformation of the Ts values into ET rates in this study has been performed on a monthly basis. The transformation requires specifying two anchor points in the Ts - ET plane with the help of standard atmospheric variables, such as air temperature and humidity, as well as incident global radiation, or in lieu of it, sunshine duration. 
From March to November ET has been mapped for Hungary
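The linear transformation between the two anchor points in the Ts-ET plane can be sketched as follows. The anchor values below (a cool/wet pixel at maximum ET, a hot/dry pixel at negligible ET) are hypothetical placeholders, not values from the study:

```python
def et_from_ts(ts, ts_wet, et_wet, ts_dry, et_dry=0.0):
    """Linearly transform a surface temperature (Ts) into an ET rate using
    two anchor points in the Ts-ET plane: a cool/wet pixel mapped to the
    maximum ET rate and a hot/dry pixel mapped to near-zero ET."""
    slope = (et_dry - et_wet) / (ts_dry - ts_wet)
    return et_wet + slope * (ts - ts_wet)

# Hypothetical monthly anchors: 295 K wet pixel at 120 mm/month, 315 K dry pixel at 0.
et = et_from_ts(305.0, ts_wet=295.0, et_wet=120.0, ts_dry=315.0)  # midpoint pixel
```

A pixel halfway between the anchors receives half the maximum ET, which is the self-calibrating property the abstract describes: no ground flux measurements are needed, only the two anchors derived from standard atmospheric variables.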

  8. Mapping the climate: guidance on appropriate techniques to map climate variables and their uncertainty

    NASA Astrophysics Data System (ADS)

    Kaye, N. R.; Hartley, A.; Hemming, D.

    2012-02-01

    Maps are a crucial asset in communicating climate science to a diverse audience, and there is a wealth of software available to analyse and visualise climate information. However, this availability makes it easy to create poor maps as users often lack an underlying cartographic knowledge. Unlike traditional cartography, where many known standards allow maps to be interpreted easily, there is no standard mapping approach used to represent uncertainty (in climate or other information). Consequently, a wide range of techniques have been applied for this purpose, and users may spend unnecessary time trying to understand the mapping approach rather than interpreting the information presented. Furthermore, communicating and visualising uncertainties in climate data and climate change projections, using for example ensemble based approaches, presents additional challenges for mapping that require careful consideration. The aim of this paper is to provide background information and guidance on suitable techniques for mapping climate variables, including uncertainty. We assess a range of existing and novel techniques for mapping variables and uncertainties, comparing "intrinsic" approaches that use colour in much the same way as conventional thematic maps with "extrinsic" approaches that incorporate additional geometry such as points or features. Using cartographic knowledge and lessons learned from mapping in different disciplines we propose the following 6 general mapping guidelines to develop a suitable mapping technique that represents both magnitude and uncertainty in climate data:

    - use a sensible sequential or diverging colour scheme;

    - use appropriate colour symbolism if it is applicable;

    - ensure the map is usable by colour blind people;

    - use a data classification scheme that does not misrepresent the data;

    - use a map
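The guideline on classification schemes that do not misrepresent the data can be illustrated with a quantile (equal-count) scheme, one common choice for skewed climate variables. This is a generic sketch, not a method proposed by the paper:

```python
def quantile_breaks(values, n_classes):
    """Equal-count (quantile) class breaks: each class holds roughly the
    same number of observations, so no class visually dominates the map
    simply because of an uneven value distribution."""
    s = sorted(values)
    return [s[round(i * (len(s) - 1) / n_classes)] for i in range(1, n_classes)]

# Four classes over 100 values produce three interior break points.
breaks = quantile_breaks(list(range(1, 101)), 4)
```

Equal-interval breaks on the same data would be [25, 50, 75] as well for a uniform distribution, but for skewed data the two schemes diverge sharply, which is why the choice matters.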

  9. Map Your Way to Speech Success! Employing Mind Mapping as a Speech Preparation Technique

    ERIC Educational Resources Information Center

    Paxman, Christina G.

    2011-01-01

    Mind mapping has gained considerable credibility recently in corporations such as Boeing and Nabisco, as well as in the classroom in terms of preparing for examinations and preparing for speeches. A mind map is a graphic technique for organizing an individual's thoughts and other information. It harnesses the full range of cortical skills--word,…

  10. Automated thermal mapping techniques using chromatic image analysis

    NASA Technical Reports Server (NTRS)

    Buck, Gregory M.

    1989-01-01

    Thermal imaging techniques are introduced using a chromatic image analysis system and temperature sensitive coatings. These techniques are used for thermal mapping and surface heat transfer measurements on aerothermodynamic test models in hypersonic wind tunnels. Measurements are made on complex vehicle configurations in a timely manner and at minimal expense. The image analysis system uses separate wavelength filtered images to analyze surface spectral intensity data. The system was initially developed for quantitative surface temperature mapping using two-color thermographic phosphors but was found useful in interpreting phase change paint and liquid crystal data as well.
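The two-color thermographic-phosphor principle is that the ratio of emission intensities at two wavelengths varies monotonically with temperature, so a calibration curve converts measured ratios to surface temperature. A sketch with an invented calibration table (the curve below is illustrative, not the paper's calibration):

```python
import numpy as np

def temperature_from_ratio(i_short, i_long, calib_ratio, calib_temp):
    """Look up temperature from the two-color intensity ratio via a
    monotonic calibration curve (ratio vs. temperature)."""
    ratio = i_short / i_long
    return float(np.interp(ratio, calib_ratio, calib_temp))

# Hypothetical calibration: ratio grows monotonically with temperature (K).
calib_temp = np.array([300.0, 400.0, 500.0, 600.0])
calib_ratio = np.array([0.2, 0.5, 1.1, 2.0])

t = temperature_from_ratio(0.8, 1.0, calib_ratio, calib_temp)
```

Because the ratio cancels first-order variations in illumination and viewing geometry, per-pixel application of this lookup yields a temperature map from two filtered images.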

  11. Comparison of the Reliability and Validity of Scores from Two Concept-Mapping Techniques.

    ERIC Educational Resources Information Center

    Ruiz-Primo, Maria Araceli; Schultz, Susan E.; Li, Min; Shavelson, Richard J.

    2001-01-01

    Reports the results of a study that compared two concept-mapping techniques, one high-directed, "fill-in-the-map" and one low-directed, "construct-a-map-from-scratch". Examines whether: (1) skeleton map scores were sensitive to the sample; (2) the two types of skeleton maps were equivalent; and (3) the two mapping techniques provided similar…

  12. Extended mapping and characteristics techniques for inverse aerodynamic design

    NASA Technical Reports Server (NTRS)

    Sobieczky, H.; Qian, Y. J.

    1991-01-01

    Some ideas for using hodograph theory, mapping techniques and methods of characteristics to formulate typical aerodynamic design boundary value problems are developed. The inverse method of characteristics is shown to be a fast tool for design of transonic flow elements as well as supersonic flows with given shock waves.

  13. Uncertainty in mapping urban air quality using crowdsourcing techniques

    NASA Astrophysics Data System (ADS)

    Schneider, Philipp; Castell, Nuria; Lahoz, William; Bartonova, Alena

    2016-04-01

    Small and low-cost sensors measuring various air pollutants have become available in recent years owing to advances in sensor technology. Such sensors have significant potential for improving high-resolution mapping of air quality in the urban environment as they can be deployed in comparatively large numbers and therefore are able to provide information at unprecedented spatial detail. However, such sensor devices are subject to significant and currently little understood uncertainties that affect their usability. Not only do these devices exhibit random errors and biases of occasionally substantial magnitudes, but these errors may also shift over time. In addition, there often tends to be significant inter-sensor variability even when supposedly identical sensors from the same manufacturer are used. We need to quantify accurately these uncertainties to make proper use of the information they provide. Furthermore, when making use of the data and producing derived products such as maps, the measurement uncertainties that propagate throughout the analysis need to be clearly communicated to the scientific and non-scientific users of the map products. Based on recent experiences within the EU-funded projects CITI-SENSE and hackAIR we discuss the uncertainties along the entire processing chain when using crowdsourcing techniques for mapping urban air quality. Starting with the uncertainties exhibited by the sensors themselves, we present ways of quantifying the error characteristics of a network of low-cost microsensors and show suitable statistical metrics for summarizing them. Subsequently, we briefly present a data-fusion-based method for mapping air quality in the urban environment and illustrate how we propagate the uncertainties of the individual sensors throughout the mapping system, resulting in detailed maps that document the pixel-level uncertainty for each concentration field. Finally, we present methods for communicating the resulting spatial uncertainty
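The sensor-error quantification step described above typically reduces to co-location statistics against a reference instrument. A minimal sketch of the standard metrics (bias, RMSE, correlation) on synthetic readings; the numbers are invented for illustration:

```python
import numpy as np

def sensor_error_metrics(sensor, reference):
    """Bias, RMSE and Pearson r of a low-cost sensor against a
    co-located reference instrument."""
    d = sensor - reference
    bias = d.mean()
    rmse = np.sqrt((d ** 2).mean())
    r = np.corrcoef(sensor, reference)[0, 1]
    return bias, rmse, r

ref = np.array([10.0, 20.0, 30.0, 40.0])          # reference concentrations
raw = ref + np.array([2.0, -1.0, 3.0, 0.0])       # synthetic sensor readings
bias, rmse, r = sensor_error_metrics(raw, ref)
```

Computing these per sensor across a network gives exactly the inter-sensor variability summary the abstract calls for, and the per-sensor RMSE can then be carried forward as the input uncertainty for the data-fusion mapping step.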

  14. Luminous efficiency functions at higher intensities

    NASA Astrophysics Data System (ADS)

    Harrington, Lawrence Kent

    Two psychophysical measurement techniques, flicker photometry and successive heterochromatic brightness matching, were used to measure changes in luminous efficiency functions with increasing levels of light adaptation. Both measurement techniques were performed using the same optical system and the same seven healthy adults as subjects. Measurements were taken at four reference stimulus intensities: 1, 10, 100 and 1000 foot-lamberts. Luminous efficiency was found to depend on both the technique and the reference stimulus intensity with which the measurements were taken. For heterochromatic brightness matching, luminous efficiency increased for longer wavelengths as reference intensity increased. Peak luminous efficiency shifted from approximately 540 nm to greater than 600 nm with increasing intensity for all seven subjects. Peak luminous efficiency was constant for flicker photometry across all intensities, but the function narrowed slightly at 100 foot-lamberts.

  15. Strain mapping accuracy improvement using super-resolution techniques.

    PubMed

    Bárcena-González, G; Guerrero-Lebrero, M P; Guerrero, E; Fernández-Reyes, D; González, D; Mayoral, A; Utrilla, A D; Ulloa, J M; Galindo, P L

    2016-04-01

    Super-resolution (SR) software-based techniques aim at generating a final image by combining several noisy frames with lower resolution from the same scene. A comparative study on high-resolution high-angle annular dark field images of InAs/GaAs QDs has been carried out in order to evaluate the performance of the SR technique. The obtained SR images present enhanced resolution, a higher signal-to-noise ratio (SNR), and greater sharpness relative to the experimental images. In addition, SR is also applied in the field of strain analysis using digital image processing applications such as geometrical phase analysis and peak pairs analysis. The precision of the strain maps can be improved when SR methodologies are applied to experimental images.
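One ingredient of multi-frame SR is that combining N registered noisy frames reduces uncorrelated noise by roughly sqrt(N). A minimal numpy sketch of just that noise-averaging step on a synthetic scene (this omits the sub-pixel registration and upsampling that full SR also performs):

```python
import numpy as np

rng = np.random.default_rng(0)
truth = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))   # synthetic noise-free scene

# 16 registered frames of the same scene, each with additive Gaussian noise.
frames = [truth + rng.normal(0.0, 0.1, truth.shape) for _ in range(16)]

fused = np.mean(frames, axis=0)   # noise std drops by about sqrt(16) = 4x

err_single = np.abs(frames[0] - truth).mean()
err_fused = np.abs(fused - truth).mean()
```

The fused image's mean error is markedly lower than any single frame's, which is the SNR gain the abstract reports carrying through to the strain maps.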

  16. NO plume mapping by laser-radar techniques.

    PubMed

    Edner, H; Sunesson, A; Svanberg, S

    1988-09-01

    Mapping of NO plumes by using laser-radar techniques has been demonstrated with a mobile differential absorption lidar system. The system was equipped with a narrow-linewidth Nd:YAG-pumped dye laser that, with doubling and mixing, generated pulse energies of 3-5 mJ at 226 nm, with a linewidth of 1 pm. This permitted range-resolved measurements of NO, with a range of about 500 m. The detection limit was estimated at 3 µg/m^3, with an integration interval of 350 m. Spectroscopic studies on the γ(0,0) bandhead near 226.8 nm were performed with 1-pm resolution, and the differential absorption cross section was determined to be (6.6 ± 0.6) × 10^-22 m^2, with a wavelength difference of 12 pm.
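The DIAL retrieval behind such measurements derives number density from the differential decay of the on-resonance and off-resonance returns over a range interval. A sketch of the standard two-wavelength DIAL equation with invented return powers (only the cross-section scale is taken from the abstract):

```python
import math

def dial_concentration(p_on_1, p_on_2, p_off_1, p_off_2, dsigma, dr):
    """Mean number density (per m^3) between ranges R1 and R2 from the
    standard DIAL equation:
        n = ln[(P_on(R1) * P_off(R2)) / (P_on(R2) * P_off(R1))] / (2 * dsigma * dr)
    where dsigma is the differential absorption cross section (m^2) and
    dr = R2 - R1 is the range interval (m)."""
    return math.log((p_on_1 * p_off_2) / (p_on_2 * p_off_1)) / (2.0 * dsigma * dr)

# Hypothetical lidar returns; dsigma matches the abstract's 6.6e-22 m^2 scale,
# dr matches its 350 m integration interval.
n = dial_concentration(1.0, 0.8, 1.0, 0.9, dsigma=6.6e-22, dr=350.0)
```

Because the on/off wavelengths are only 12 pm apart, atmospheric backscatter and extinction cancel in the ratio, leaving the NO absorption as the dominant term.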

  17. Applications of Mapping and Tomographic Techniques in Gem Sciences

    NASA Astrophysics Data System (ADS)

    Shen, A. H.

    2014-12-01

    Gem Sciences are scientific studies of gemstones - their genesis, provenance, synthesis, enhancement, treatment and identification. As high quality forms of specific minerals, the gemstones exhibit unusual physical properties that are usually unseen in the regular counterparts. Most gemstones are colored by trace elements incorporated in the crystal lattice during various growth stages; forming coloration zones of various scales. Studying the spectral and chemical contrast across color zones helps elucidating the origins of colors. These are done by UV-visible spectrometers with microscope and LA-ICPMS in modern gemological laboratories. In the case of diamonds, their colored zones arise from various structural defects incorporated in different growth zones and are studied with FTIR spectrometers with IR microscope and laser photoluminescence spectrometers. Advancement in modern synthetic techniques such as chemical vapor deposition (CVD) has created some problem for identification. Some exploratory experiments in carbon isotope mapping were done on diamonds using SIMS. The most important issue in pearls is to identify one particular pearl being a cultured one or a natural pearl. The price difference can be enormous. Classical way of such identification is done by x-ray radiographs, which clearly show the bead and the nacre. Modern cultured pearl advancement has eliminated the need for an artificial bead, but a small piece of tissue instead. Nowadays, computer x-ray tomography (CT) scanning devices are used to depict the clear image of the interior of a pearl. In the Chinese jade market, filling fissures with epoxy and/or wax are very commonly seen. We are currently exploring Magnetic Resonance Imaging (MRI) technique to map the distribution of artificial resin within a polycrystalline aggregates.

  18. Gluing for Raman lidar systems using the lamp mapping technique.

    PubMed

    Walker, Monique; Venable, Demetrius; Whiteman, David N

    2014-12-20

    In the context of combined analog and photon counting (PC) data acquisition in a Lidar system, glue coefficients are defined as constants used for converting an analog signal into a virtual PC signal. The coefficients are typically calculated using Lidar profile data taken under clear, nighttime conditions since, in the presence of clouds or high solar background, it is difficult to obtain accurate glue coefficients from Lidar backscattered data. Here we introduce a new method in which we use the lamp mapping technique (LMT) to determine glue coefficients in a manner that does not require atmospheric profiles to be acquired and permits accurate glue coefficients to be calculated when adequate Lidar profile data are not available. The LMT involves scanning a halogen lamp over the aperture of a Lidar receiver telescope such that the optical efficiency of the entire detection system is characterized. The studies shown here involve two Raman lidar systems; the first from Howard University and the second from NASA/Goddard Space Flight Center. The glue coefficients determined using the LMT and the Lidar backscattered method agreed within 1.2% for the water vapor channel and within 2.5% for the nitrogen channel for both Lidar systems. We believe this to be the first instance of the use of laboratory techniques for determining the glue coefficients for Lidar data analysis. PMID:25608203
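Whatever the source of the calibration data (lidar profiles or the LMT), the gluing step itself is a linear fit of the photon-counting signal against the analog signal in the regime where the PC channel is still linear. A generic sketch of that fit on synthetic data (the threshold and data are illustrative, not from the paper):

```python
import numpy as np

def glue_coefficients(analog, photon_count, pc_max=20.0):
    """Least-squares glue coefficients (slope, intercept) mapping the analog
    signal onto a virtual photon-counting signal, using only samples where
    the PC channel is below an assumed nonlinearity threshold."""
    mask = photon_count < pc_max
    slope, intercept = np.polyfit(analog[mask], photon_count[mask], 1)
    return slope, intercept

analog = np.linspace(0.0, 100.0, 50)
pc = 0.3 * analog + 1.0          # synthetic, perfectly linear overlap region
a, b = glue_coefficients(analog, pc)

virtual_pc = a * analog + b      # glued signal usable beyond PC saturation
```

The LMT's contribution is supplying clean (analog, PC) pairs for this fit from a lamp scan in the laboratory, so the fit no longer depends on favorable atmospheric conditions.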

  19. Large areas elemental mapping by ion beam analysis techniques

    NASA Astrophysics Data System (ADS)

    Silva, T. F.; Rodrigues, C. L.; Curado, J. F.; Allegro, P.; Moro, M. V.; Campos, P. H. O. V.; Santos, S. B.; Kajiya, E. A. M.; Rizzutto, M. A.; Added, N.; Tabacniks, M. H.

    2015-07-01

    The external beam line of the Laboratory for Material Analysis with Ion Beams (LAMFI) is a versatile setup for multi-technique analysis. X-ray detectors for Particle Induced X-ray Emission (PIXE) measurements, a gamma-ray detector for Particle Induced Gamma-ray Emission (PIGE), and a particle detector for scattering analysis, such as Rutherford Backscattering Spectrometry (RBS), were already installed. In this work, we present some results using a large (60-cm range) XYZ computer-controlled sample positioning system, completely developed and built in our laboratory. The XYZ stage was installed at the external beam line, and its high spatial resolution (better than 5 μm over the full range) enables positioning the sample with high accuracy and high reproducibility. The combination of a sub-millimeter beam with the large-range XYZ robotic stage is being used to produce elemental maps of large areas in samples like paintings, ceramics, stones, fossils, and all sorts of samples. Due to its particular characteristics, this is a unique device for multi-technique analysis of large areas. With the continuous development of the external beam line at LAMFI, coupled to the robotic XYZ stage, it is becoming a robust and reliable option for regular analysis of trace elements (Z > 5), competing with traditional in-vacuum ion-beam analysis with the advantage of automatic rastering.

  1. Cognitive Mapping Techniques: Implications for Research in Engineering and Technology Education

    ERIC Educational Resources Information Center

    Dixon, Raymond A.; Lammi, Matthew

    2014-01-01

    The primary goal of this paper is to present the theoretical basis and application of two types of cognitive maps, concept map and mind map, and explain how they can be used by educational researchers in engineering design research. Cognitive mapping techniques can be useful to researchers as they study students' problem solving strategies…

  2. Duodenal luminal nutrient sensing

    PubMed Central

    Rønnestad, Ivar; Akiba, Yasutada; Kaji, Izumi; Kaunitz, Jonathan D

    2016-01-01

    The gastrointestinal mucosa is exposed to numerous chemical substances and microorganisms, including macronutrients, micronutrients, bacteria, endogenous ions, and proteins. The regulation of mucosal protection, digestion, absorption and motility is signaled in part by luminal solutes. Therefore, luminal chemosensing is an important mechanism enabling the mucosa to monitor luminal conditions, such as pH, ion concentrations, nutrient quantity, and microflora. The duodenal mucosa shares luminal nutrient receptors with lingual taste receptors in order to detect the five basic tastes, in addition to essential nutrients, and unwanted chemicals. The recent ‘de-orphanization’ of nutrient-sensing G protein-coupled receptors provides an essential component of the mechanism by which the mucosa senses luminal nutrients. In this review, we provide an update on the mechanisms of luminal nutrient sensing and on their underlying physiological and pathological roles, with a main focus on the duodenal mucosa. PMID:25113991

  3. Hydrocarbon microseepage mapping using signature based target detection techniques

    NASA Astrophysics Data System (ADS)

    Soydan, Hilal; Koz, Alper; Şebnem Düzgün, H.; Aydin Alatan, A.

    2015-10-01

    In this paper, we compare the conventional methods for detecting hydrocarbon seepage anomalies with signature based detection algorithms. The Crosta technique [1] is selected as a baseline for the conventional approach in the experimental comparisons. The Crosta technique utilizes the characteristic bands of the searched target for principal component transformation in order to determine the components characterizing the target of interest. Desired Target Detection and Classification Algorithm (DTDCA), Spectral Matched Filter (SMF), and Normalized Correlation (NC) are employed for signature based target detection. Signature based target detection algorithms are applied to the whole spectrum, benefiting from the information stored in all spectral bands. The selected methods are applied to a multispectral Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) image of the study region, with an atmospheric correction applied prior to running the algorithms. ASTER provides multispectral bands covering the visible, shortwave, and thermal infrared regions, which serves as a useful tool for the interpretation of areas with hydrocarbon anomalies. The exploration area is the Gemrik Anticline, located in South East Anatolia, Adıyaman, in the Bozova Oil Field, where microseeps can be observed with almost no vegetation cover. The spectral signatures collected with an Analytical Spectral Devices Inc. (ASD) spectrometer from the reference valley [2] have been utilized as input to the signature based detection algorithms. The experiments have indicated that DTDCA and SMF outperform the Crosta technique by locating the microseepage patterns along the migration pathways with better contrast. On the other hand, NC has not been able to map the searched target with a visible distinction. It is concluded that the signature based algorithms can be more effective than the conventional methods for the detection of microseepage induced anomalies.
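The spectral matched filter named above scores each pixel by projecting its background-whitened spectrum onto the target signature. A minimal numpy sketch of the SMF idea on a synthetic 4-band cube with an invented target signature (not the paper's data or exact implementation):

```python
import numpy as np

def spectral_matched_filter(cube, target):
    """Per-pixel matched-filter score for a (rows, cols, bands) cube:
    score(x) = (x - mu)^T C^-1 (t - mu), normalized so that the target
    spectrum itself scores 1.0; mu and C are the background mean and
    covariance estimated from the cube."""
    pixels = cube.reshape(-1, cube.shape[-1]).astype(float)
    mu = pixels.mean(axis=0)
    c_inv = np.linalg.inv(np.cov(pixels, rowvar=False))
    w = c_inv @ (target - mu)
    scores = (pixels - mu) @ w / ((target - mu) @ w)
    return scores.reshape(cube.shape[:-1])

rng = np.random.default_rng(42)
cube = rng.normal(0.0, 1.0, (10, 10, 4))   # synthetic 4-band background
target = np.array([5.0, 5.0, 5.0, 5.0])    # hypothetical seep signature
cube[0, 0] = target                        # plant the signature in one pixel
scores = spectral_matched_filter(cube, target)
```

The planted pixel scores 1.0 while background pixels score near zero, which is the contrast behavior the abstract attributes to SMF relative to the Crosta technique.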

  4. Semantic Data And Visualization Techniques Applied To Geologic Field Mapping

    NASA Astrophysics Data System (ADS)

    Houser, P. I. Q.; Royo-Leon, M.; Munoz, R.; Estrada, E.; Villanueva-Rosales, N.; Pennington, D. D.

    2015-12-01

    Geologic field mapping involves the use of technology before, during, and after visiting a site. Geologists utilize hardware such as Global Positioning Systems (GPS) connected to mobile computing platforms such as tablets, with software such as ESRI's ArcPad, to produce maps and figures for a final analysis and report. Hand-written field notes contain important information and drawings or sketches of specific areas within the field study. Our goal is to collect and geo-tag final and raw field data into a cyber-infrastructure environment with an ontology that allows for large-scale data processing, visualization, sharing, and searching, aiding in connecting field research with prior research in the same area and/or aiding experiment replication. Online searches of a specific field area return results such as weather data from NOAA and QuakeML seismic data from USGS. These results can then be saved to a field mobile device and searched while in the field where there is no Internet connection. To accomplish this we created the GeoField ontology service using the Web Ontology Language (OWL) and Protégé software. Advanced queries on the dataset can be made using reasoning capabilities that go beyond those of a standard database service. These improvements include the automated discovery of data relevant to a specific field site and visualization techniques aimed at enhancing analysis and collaboration while in the field by draping data over mobile views of the site using augmented reality. A case study is being performed at the University of Texas at El Paso's Indio Mountains Research Station located near Van Horn, Texas, an active multi-disciplinary field study site. The user can interactively move the camera around the study site and view their data digitally. Geologists can check their data against the site in real time and improve collaboration, as both parties have the same interactive view of the data.

  5. Phase reversal technique decreases cortical stimulation time during motor mapping.

    PubMed

    Simon, Mirela V; Sheth, Sameer A; Eckhardt, Christine A; Kilbride, Ronan D; Braver, Diana; Williams, Ziv; Curry, William; Cahill, Dan; Eskandar, Emad N

    2014-06-01

    Neurophysiologic mapping of the primary motor cortex (PMC) is commonly used in supratentorial surgery. Electrical cortical stimulation is guided by anatomic landmarks towards the precentral gyrus, with recording of the triggered primary motor responses (TPMR) in the contralateral hemibody. Thus, factors such as distortion of the pericentral anatomy, small surgical fields, brain shifts and miscalibrated neuronavigational systems may lengthen the process and result in unnecessary stimulations, increasing the probability of triggering seizures. We hypothesized that central sulcus localization via the median somatosensory evoked potentials phase reversal technique (MSSEP PRT) accurately guides the surgeon, resulting in prompt identification of the PMC with minimal electrical stimulation. Multivariate Cox regression was used to study the impact of MSSEP PRT on time spent performing electrical cortical stimulation to TPMR. The analysis was adjusted for presence of increased cortical excitability, high motor thresholds, lesions close to PMC and fMRI data, in 100 consecutive standardized motor mapping procedures for brain tumor resection and epilepsy surgery. Phase reversal and a change in morphology of the recorded somatosensory evoked potentials quadrupled (hazard ratio [HR] 4.13, p<0.0001) and doubled (HR 2.14, p=0.02) the rate of obtaining TPMR, respectively. A 1 mA increase in motor threshold decreased the rate by 9% (HR 0.91, p=0.0002). Afterdischarges triggered before TPMR and lesions in close proximity to PMC decreased the rate of TPMR by 76% (HR 0.23, p<0.0001) and 48% (HR 0.52, p=0.04), respectively. Informative PRT decreases stimulation time. Afterdischarges triggered before TPMR, high motor thresholds and lesions close to the PMC increase it. PMID:24679940

  6. Interaction-Point Phase-Space Characterization using Single-Beam and Luminous-Region Measurements at PEP-II

    SciTech Connect

    Kozanecki, W.; Bevan, A. J.; Viaud, B. F.; Cai, Y.; Fisher, A. S.; O'Grady, C.; Lindquist, B.; Roodman, A.; Thompson, J. M.; Weaver, M.; /SLAC

    2008-09-09

    We present an extensive experimental characterization of the e± phase space at the interaction point of the SLAC PEP-II B-Factory that combines a detailed mapping of luminous-region observables using the BABAR detector with stored-beam measurements by accelerator techniques.

  7. Optical coherence tomography investigations of ceramic lumineers

    NASA Astrophysics Data System (ADS)

    Fernandes, Luana O.; Graça, Natalia D. R. L.; Melo, Luciana S. A.; Silva, Claudio H. V.; Gomes, Anderson S. L.

    2016-02-01

    Lumineers are veneer laminates used as an alternative for aesthetic dental solutions of the highest quality, but the only current means of assessing their performance is visual inspection. The objective of this study was to use the Optical Coherence Tomography (OCT) technique, working in the spectral domain, to analyze in vivo, in a single patient, 14 lumineers 180 days after cementation. It was possible to observe various kinds of changes in the cementing line and the laminate. It was concluded that OCT is an effective and promising method for clinical evaluation of the cementing line in lumineers.

  8. Digital Mapping Techniques '09-Workshop Proceedings, Morgantown, West Virginia, May 10-13, 2009

    USGS Publications Warehouse

    Soller, David R.

    2011-01-01

    As in the previous years' meetings, the objective was to foster informal discussion and exchange of technical information, principally in order to develop more efficient methods for digital mapping, cartography, GIS analysis, and information management. At this meeting, oral and poster presentations and special discussion sessions emphasized (1) methods for creating and publishing map products (here, "publishing" includes Web-based release); (2) field data capture software and techniques, including the use of LiDAR; (3) digital cartographic techniques; (4) migration of digital maps into ArcGIS Geodatabase format; (5) analytical GIS techniques; and (6) continued development of the National Geologic Map Database.

  9. Techniques, problems and uses of mega-geomorphological mapping

    NASA Technical Reports Server (NTRS)

    Embleton, C.

    1985-01-01

    A plea for a program of global geomorphological mapping based on remote sensing data is presented. It is argued that the program is a necessary step in bringing together the rapidly evolving concepts of plate tectonics with the science of geomorphology. Geomorphologists are urged to bring temporal scales into their subject and to abandon their recent isolation from tectonics and geological history. It is suggested that a start be made with a new geomorphological map of Europe, utilizing the latest space technology.

  10. A close-range photogrammetric technique for mapping neotectonic features in trenches

    USGS Publications Warehouse

    Fairer, G.M.; Whitney, J.W.; Coe, J.A.

    1989-01-01

    Close-range photogrammetric techniques and newly available computerized plotting equipment were used to map exploratory trench walls that expose Quaternary faults in the vicinity of Yucca Mountain, Nevada. Small-scale structural, lithologic, and stratigraphic features can be rapidly mapped by the photogrammetric method. This method is more accurate and significantly more rapid than conventional trench-mapping methods, and the analytical plotter is capable of producing cartographic definition of high resolution when detailed trench maps are necessary. -from Authors

  11. Boson mapping techniques applied to constant gauge fields in QCD

    NASA Technical Reports Server (NTRS)

    Hess, Peter Otto; Lopez, J. C.

    1995-01-01

    Pairs of coordinates and derivatives of the constant gluon modes are mapped to new gluon-pair fields and their derivatives. Applying this mapping to the Hamiltonian of constant gluon fields yields, for large coupling constants, an effective Hamiltonian which separates into one part describing a scalar field and another describing a field with spin two. The ground state is dominated by pairs of gluons coupled to color and spin zero, with slight admixtures of color-zero, spin-two pairs. SU(2) was used as the color group.

  12. A comparison of two conformal mapping techniques applied to an aerobrake body

    NASA Technical Reports Server (NTRS)

    Hommel, Mark J.

    1987-01-01

    Conformal mapping is a classical technique which has been utilized for solving problems in aerodynamics and hydrodynamics. Conformal mapping has been successfully applied in the construction of grids around airfoils, engine inlets and other aircraft configurations. Conformal mapping techniques were applied to an aerobrake body having an axis of symmetry. Two different approaches were utilized: (1) the Karman-Trefftz transformation; and (2) a point-wise Schwarz-Christoffel transformation. In both cases, the aerobrake body was mapped onto a near circle, and a grid was generated in the mapped plane. The mapped body and grid were then mapped back into physical space and the properties of the associated grids were examined. Advantages and disadvantages of both approaches are discussed.
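The Karman-Trefftz transformation mentioned above has a compact closed form; for exponent n = 2 it reduces to the Joukowski map ζ + 1/ζ, which makes it easy to sanity-check. A minimal sketch (generic form of the transform, not the paper's specific aerobrake parameterization):

```python
def karman_trefftz(zeta, n):
    """Karman-Trefftz conformal map
        z = n * [(1 + 1/zeta)^n + (1 - 1/zeta)^n] / [(1 + 1/zeta)^n - (1 - 1/zeta)^n],
    where n = 2 - beta/pi and beta is the trailing-edge angle.
    For n = 2 this reduces exactly to the Joukowski map zeta + 1/zeta."""
    a = (1 + 1 / zeta) ** n
    b = (1 - 1 / zeta) ** n
    return n * (a + b) / (a - b)

z = karman_trefftz(0.5 + 1.2j, 2.0)   # should equal the Joukowski image
```

Applying the map to a circle in the ζ-plane (and its inverse to a grid around the near circle) is the grid-generation pattern the abstract describes.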

  13. Simulating and mapping spatial complexity using multi-scale techniques

    USGS Publications Warehouse

    De Cola, L.

    1994-01-01

    A central problem in spatial analysis is the mapping of data for complex spatial fields using relatively simple data structures, such as those of a conventional GIS. This complexity can be measured using such indices as multi-scale variance, which reflects spatial autocorrelation, and multi-fractal dimension, which characterizes the values of fields. These indices are computed for three spatial processes: Gaussian noise, a simple mathematical function, and data for a random walk. Fractal analysis is then used to produce a vegetation map of the central region of California based on a satellite image. This analysis suggests that real world data lie on a continuum between the simple and the random, and that a major GIS challenge is the scientific representation and understanding of rapidly changing multi-scale fields. -Author
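The multi-scale variance index can be illustrated by aggregating a field into successively coarser blocks and tracking the variance at each scale. This toy sketch uses a 2x2 aggregation scheme, which is an assumption for illustration rather than De Cola's exact definition:

```python
import numpy as np

def multiscale_variance(field, levels=4):
    """Variance of the field after repeated 2x2 block aggregation.
    Uncorrelated noise loses ~3/4 of its variance per level; spatial
    autocorrelation makes the decay slower."""
    variances = []
    f = field
    for _ in range(levels):
        variances.append(float(f.var()))
        n, m = (f.shape[0] // 2) * 2, (f.shape[1] // 2) * 2
        f = f[:n, :m].reshape(n // 2, 2, m // 2, 2).mean(axis=(1, 3))
    return variances

rng = np.random.default_rng(0)
noise = rng.standard_normal((128, 128))
vs = multiscale_variance(noise)  # rapid decay: the 'simple random' end of the continuum
```

A spatially autocorrelated field (e.g. smoothed noise) would show a much slower decay, which is what the index is designed to detect.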

  14. Two techniques for mapping and area estimation of small grains in California using Landsat digital data

    NASA Technical Reports Server (NTRS)

    Sheffner, E. J.; Hlavka, C. A.; Bauer, E. M.

    1984-01-01

Two techniques have been developed for the mapping and area estimation of small grains in California from Landsat digital data. The two techniques are Band Ratio Thresholding, a semi-automated version of a manual procedure, and LCLS, a layered classification technique which can be fully automated and is based on established clustering and classification technology. Preliminary evaluation results indicate that the two techniques have potential for providing map products that can be incorporated into existing inventory procedures, as well as automated alternatives both to traditional inventory techniques and to those that currently employ Landsat imagery.
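Band Ratio Thresholding can be sketched in a few lines: compute the ratio of two spectral bands per pixel and keep pixels whose ratio falls in a class-specific interval. The band names, ratio bounds, and toy values below are purely illustrative:

```python
import numpy as np

def band_ratio_threshold(band_a, band_b, low, high):
    """Mask pixels whose band ratio falls inside [low, high]; the floor on
    band_b guards against division by zero in dark pixels."""
    ratio = band_a / np.maximum(band_b, 1e-6)
    return (ratio >= low) & (ratio <= high)

# Toy 2-band scene: 'small grain' pixels assumed to show a high NIR/red ratio.
nir = np.array([[120.0, 30.0], [110.0, 40.0]])
red = np.array([[40.0, 60.0], [50.0, 45.0]])
grain_mask = band_ratio_threshold(nir, red, 2.0, 4.0)
```

Area estimation then reduces to counting masked pixels and multiplying by the pixel ground area.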

  15. Mapping Diffuse Seismicity Using Empirical Matched Field Processing Techniques

    SciTech Connect

    Wang, J; Templeton, D C; Harris, D B

    2011-01-21

The objective of this project is to detect and locate more microearthquakes using the empirical matched field processing (MFP) method than can be detected using only conventional earthquake detection techniques. We propose that empirical MFP can complement existing catalogs and techniques. We test our method on continuous seismic data collected at the Salton Sea Geothermal Field during November 2009 and January 2010. In the Southern California Earthquake Data Center (SCEDC) earthquake catalog, 619 events were identified in our study area during this time frame, and our MFP technique identified 1094 events. Therefore, we believe that the empirical MFP method combined with conventional methods significantly improves the network detection ability in an efficient manner.
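Empirical matched field processing calibrates detectors from observed waveforms across a whole array; the single-channel normalized cross-correlation below is a heavily simplified stand-in that conveys only the template-matching idea. All signals and parameters are synthetic:

```python
import numpy as np

def template_detect(trace, template):
    """Slide the template over the trace; return the sample index and
    Pearson correlation of the best match."""
    t = (template - template.mean()) / (template.std() * len(template))
    best_i, best_c = 0, -2.0
    for i in range(len(trace) - len(template) + 1):
        w = trace[i:i + len(template)]
        c = float(np.dot(t, (w - w.mean()) / (w.std() + 1e-12)))
        if c > best_c:
            best_i, best_c = i, c
    return best_i, best_c

rng = np.random.default_rng(1)
template = np.sin(np.linspace(0.0, 6.0 * np.pi, 60))
trace = 0.1 * rng.standard_normal(500)
trace[200:260] += template  # a small event buried in noise
idx, score = template_detect(trace, template)
```

Detectors like this find repeats of known events that amplitude-threshold pickers miss, which is how template-based catalogs grow beyond conventional ones.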

  16. Middle Atmosphere Program. Handbook for MAP. Volume 15: Balloon techniques

    NASA Technical Reports Server (NTRS)

    Murcray, D. G. (Editor)

    1985-01-01

    Some techniques employed by investigators using balloons to obtain data on the properties of the middle atmosphere are discussed. Much effort has gone into developing instruments which could be used on small balloons to measure temperature and variable species. These efforts are discussed. Remote sensing techniques used to obtain data on atmospheric composition are described. Measurement of stratospheric ions and stratospheric aerosols are also discussed.

  17. Development of predictive mapping techniques for soil survey and salinity mapping

    NASA Astrophysics Data System (ADS)

    Elnaggar, Abdelhamid A.

Conventional soil maps represent a valuable source of information about soil characteristics; however, they are subjective, very expensive, and time-consuming to prepare. They also include no explicit information about the conceptual mental model used in developing them, about their accuracy, or about the error associated with them. Decision tree analysis (DTA) was successfully used in retrieving the expert knowledge embedded in old soil survey data. This knowledge was efficiently used in developing predictive soil maps for the study areas in Benton and Malheur Counties, Oregon, and in assessing their consistency. A retrieved soil-landscape model from a reference area in Harney County was extrapolated to develop a preliminary soil map for the neighboring unmapped part of Malheur County. The developed map had a low prediction accuracy, and only a few soil map units (SMUs) were predicted with significant accuracy, mostly shallow SMUs that either have a lithic contact with the bedrock or developed on a duripan. On the other hand, the soil map developed from field data was predicted with very high accuracy (overall about 97%). Salt-affected areas of the Malheur County study area are indicated by their high spectral reflectance, and they are easily discriminated in the remote sensing data. However, remote sensing data fail to distinguish between the different classes of soil salinity. Using the DTA method, five classes of soil salinity were successfully predicted with an overall accuracy of about 99%. Moreover, the calculated area of salt-affected soil was overestimated when mapped using remote sensing data compared with that predicted by DTA. Hence, DTA can be a very helpful approach for developing soil survey and soil salinity maps in a more objective, effective, less expensive, and quicker way based on field data.
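The heart of decision tree analysis is the repeated search for the attribute threshold that best separates classes. A minimal single-split sketch follows; the conductivity values and class labels are toy data, not the Oregon datasets:

```python
import numpy as np

def best_split(x, y):
    """Exhaustively search thresholds on one attribute and return the one
    with the fewest misclassifications (rule direction chosen freely) -
    the core step a decision-tree analysis applies recursively."""
    xs = np.sort(x)
    best_t, best_err = xs[0], len(y)
    for i in range(1, len(xs)):
        t = 0.5 * (xs[i - 1] + xs[i])
        pred = x > t
        err = min(int(np.sum(pred != y)), int(np.sum(pred == y)))
        if err < best_err:
            best_t, best_err = t, err
    return best_t, best_err

# Toy data: soil electrical conductivity (dS/m) vs saline (1) / non-saline (0).
ec = np.array([0.5, 0.8, 1.1, 1.3, 4.2, 5.0, 6.1, 7.3])
saline = np.array([0, 0, 0, 0, 1, 1, 1, 1])
threshold, misclassified = best_split(ec, saline)
```

Recursing on each side of the split, over many attributes, yields the interpretable if-then rules that make DTA attractive for retrieving expert soil-survey knowledge.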

  18. S.M.A.R.T. map: Site map attribute retrieval technique

    SciTech Connect

    Brown-Rall, M.

    1995-03-29

Plant Engineering's Space and Site Planning (S&SP) organization at Lawrence Livermore National Laboratory (LLNL) has developed a new tool: a computerized mapping system that can graphically illustrate facility characteristics. The current "base" map being used is the LLNL Site Map prepared by Plant Engineering's CADD Support group. Using current information in the Facility Information Tracking System (FITS) database, a Microsoft Excel spreadsheet, an electronic sort can be made, tying the AutoCAD-generated site map to specific database fields. This link is accomplished by using a software overlay called the CadPLUS InfoEngine. The fields in the database include such things as facility number, occupant program, population, facility age, facility quality, and security level. By selecting one or a combination of the fields, a map is generated, illustrating in color and hatch patterns the facilities or entities that are associated with the chosen fields. This process can be very useful for seeing the LLNL site at a glance, with highlighted characteristics for particular areas of interest. The generation of large complex graphics, using large-scale databases selectively, can be accomplished quickly. These extractions and links between data and graphics create a S.M.A.R.T. Map.

  19. Quantification of Geologic Lineaments by Manual and Machine Processing Techniques. [Landsat satellites - mapping/geological faults

    NASA Technical Reports Server (NTRS)

    Podwysocki, M. H.; Moik, J. G.; Shoup, W. C.

    1975-01-01

    The effect of operator variability and subjectivity in lineament mapping and methods to minimize or eliminate these problems by use of several machine preprocessing methods was studied. Mapped lineaments of a test landmass were used and the results were compared statistically. The total number of fractures mapped by the operators and their average lengths varied considerably, although comparison of lineament directions revealed some consensus. A summary map (785 linears) produced by overlaying the maps generated by the four operators shows that only 0.4 percent were recognized by all four operators, 4.7 percent by three, 17.8 percent by two, and 77 percent by one operator. Similar results were obtained in comparing these results with another independent group. This large amount of variability suggests a need for the standardization of mapping techniques, which might be accomplished by a machine aided procedure. Two methods of machine aided mapping were tested, both simulating directional filters.

  20. Formal results regarding metric space techniques for the study of astrophysical maps

    NASA Technical Reports Server (NTRS)

    Adams, Fred C.; Wiseman, Jennifer J.

    1994-01-01

We extend a newly developed formal system for the description of astrophysical maps. In this formalism, we consider the difference between maps to be the distance between elements of a pseudometric space (the space of all such maps). This Ansatz allows us to measure quantitatively the difference between any two maps and to order the space of all maps. For each physical characteristic of interest, this technique assigns an 'output' function to each map; the difference between the maps is then determined from the difference between their corresponding output functions. In the present study, we show that the results of this procedure are invariant under a class of transformations of the maps and the domains of the maps. In addition, we study the propagation of errors (observational uncertainties) through this formalism. We show that the uncertainties in the output functions can be controlled provided that the signal to noise ratios in the original astrophysical maps are sufficiently high. The results of this paper thus increase the effectiveness of this formal system for the description, classification, and analysis of astrophysical maps.
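The pseudometric idea can be made concrete with a toy choice of output function; a value histogram is used below purely as an assumed stand-in for the formalism's physical output functions. Identical maps, or maps related by a domain transformation that preserves pixel values, then sit at distance zero, illustrating both the pseudometric property and the invariance result:

```python
import numpy as np

def output_function(map_data, bins=16):
    """Reduce a map to a normalized value histogram - a toy stand-in for
    the formalism's physical output functions."""
    h, _ = np.histogram(map_data, bins=bins, range=(0.0, 1.0))
    return h / h.sum()

def map_distance(m1, m2):
    """Pseudometric: L1 distance between the two maps' output functions."""
    return float(np.abs(output_function(m1) - output_function(m2)).sum())

a = np.random.default_rng(2).random((32, 32))
b = a.T                         # domain transform: same pixel values
c = np.clip(a + 0.3, 0.0, 1.0)  # genuinely different map
```

Distinct maps can share an output function, which is why the space is pseudometric rather than metric.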

  1. THE TOTAL LUMINOUS EFFICIENCY OF LUMINOUS BACTERIA.

    PubMed

    Harvey, E N

    1925-09-18

    Methods are described for measuring the light emitted by an emulsion of luminous bacteria of given thickness, and calculating the light emitted by a single bacterium, measuring 1.1 x 2.2 micra, provided there is no absorption of light in the emulsion. At the same time, the oxygen consumed by a single bacterium was measured by recording the time for the bacteria to use up .9 of the oxygen dissolved in sea water from air (20 per cent oxygen). The luminescence intensity does not diminish until the oxygen concentration falls below 2 per cent, when the luminescence diminishes rapidly. Above 2 per cent oxygen (when the oxygen dissolving in sea water from pure oxygen at 760 mm. Hg pressure = 100 per cent) the bacteria use equal amounts of oxygen in equal times, while below 2 per cent oxygen it seems very likely that rate of oxygen absorption is proportional to oxygen concentration. By measuring the time for a tube of luminous bacteria of known concentration saturated with air (20 per cent oxygen) to begin to darken (2 per cent oxygen) we can calculate the oxygen absorbed by one bacterium per second. The bacteria per cc. are counted on a blood counting slide or by a centrifugal method, after measuring the volume of a single bacterium (1.695 x 10(-12) cc.). Both methods gave results in good agreement with each other. The maximum value for the light from a single bacterium was 24 x 10(-14) lumens or 1.9 x 10(-14) candles. The maximum value for lumen-seconds per mg. of oxygen absorbed was 14. The average value for lumen-seconds per mg. O(2) was 9.25. The maximum values were selected in calculating the efficiency of light production, since some of the bacteria counted may not be producing light, although they may still be using oxygen. The "diet" of the bacteria was 60 per cent glycerol and 40 per cent peptone. To oxidize this mixture each mg. of oxygen would yield 3.38 gm. calories or 14.1 watts per second. 1 lumen per watt is therefore produced by a normal bacterium which
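The closing efficiency figure can be re-traced directly from the numbers in the abstract ("14.1 watts per second" is evidently watt-seconds, i.e. joules); the calorie-to-joule conversion factor is assumed:

```python
# Values as stated in the abstract; 4.186 J per gram-calorie is assumed.
cal_per_mg_o2 = 3.38                      # gram-calories released per mg O2
joules_per_mg_o2 = cal_per_mg_o2 * 4.186  # ~14.1 J, i.e. watt-seconds
lumen_seconds_per_mg_o2 = 14.0            # maximum measured light output
efficiency = lumen_seconds_per_mg_o2 / joules_per_mg_o2  # lumens per watt
```

The quotient comes out at essentially 1 lumen per watt, matching the abstract's conclusion.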

  2. An Image Morphing Technique Based on Optimal Mass Preserving Mapping

    PubMed Central

    Zhu, Lei; Yang, Yan; Haker, Steven; Tannenbaum, Allen

    2013-01-01

    Image morphing, or image interpolation in the time domain, deals with the metamorphosis of one image into another. In this paper, a new class of image morphing algorithms is proposed based on the theory of optimal mass transport. The L2 mass moving energy functional is modified by adding an intensity penalizing term, in order to reduce the undesired double exposure effect. It is an intensity-based approach and, thus, is parameter free. The optimal warping function is computed using an iterative gradient descent approach. This proposed morphing method is also extended to doubly connected domains using a harmonic parameterization technique, along with finite-element methods. PMID:17547128

  3. An image morphing technique based on optimal mass preserving mapping.

    PubMed

    Zhu, Lei; Yang, Yan; Haker, Steven; Tannenbaum, Allen

    2007-06-01

    Image morphing, or image interpolation in the time domain, deals with the metamorphosis of one image into another. In this paper, a new class of image morphing algorithms is proposed based on the theory of optimal mass transport. The L(2) mass moving energy functional is modified by adding an intensity penalizing term, in order to reduce the undesired double exposure effect. It is an intensity-based approach and, thus, is parameter free. The optimal warping function is computed using an iterative gradient descent approach. This proposed morphing method is also extended to doubly connected domains using a harmonic parameterization technique, along with finite-element methods. PMID:17547128

  4. My Solar System: A Developmentally Adapted Eco-Mapping Technique for Children

    ERIC Educational Resources Information Center

    Curry, Jennifer R.; Fazio-Griffith, Laura J.; Rohr, Shannon N.

    2008-01-01

    Counseling children requires specific skills and techniques, such as play therapy and expressive arts, to address developmental manifestations and to facilitate the understanding of presenting problems. This article outlines an adapted eco-mapping activity that can be used as a creative counseling technique with children in order to promote…

  5. New and emerging imaging techniques for mapping brain circuitry.

    PubMed

    Pan, Hong; Epstein, Jane; Silbersweig, David A; Stern, Emily

    2011-06-24

    This article provides a survey of major methodological and analytic developments in the use of functional neuroimaging to study complex structural and functional brain activity and connectivity, including data analysis methods devised to distill network properties from high-dimensional and multimodal datasets. The goal of this survey is to provide those in the broader neuroscientific community with an understanding of these developments sufficient to facilitate an informed reading of the literature, and a thoughtful approach to their use in the investigation of questions in their own areas of interest. Practical methodological considerations for assessing and designing functional neuroimaging studies are provided, as are examples of the types of questions that can be addressed by various techniques.

  6. Digitizing zone maps, using modified LARSYS program. [computer graphics and computer techniques for mapping

    NASA Technical Reports Server (NTRS)

    Giddings, L.; Boston, S.

    1976-01-01

    A method for digitizing zone maps is presented, starting with colored images and producing a final one-channel digitized tape. This method automates the work previously done interactively on the Image-100 and Data Analysis System computers of the Johnson Space Center (JSC) Earth Observations Division (EOD). A color-coded map was digitized through color filters on a scanner to form a digital tape in LARSYS-2 or JSC Universal format. The taped image was classified by the EOD LARSYS program on the basis of training fields included in the image. Numerical values were assigned to all pixels in a given class, and the resulting coded zone map was written on a LARSYS or Universal tape. A unique spatial filter option permitted zones to be made homogeneous and edges of zones to be abrupt transitions from one zone to the next. A zoom option allowed the output image to have arbitrary dimensions in terms of number of lines and number of samples on a line. Printouts of the computer program are given and the images that were digitized are shown.
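The spatial filter option that makes zones homogeneous while keeping abrupt zone edges behaves like a majority (mode) filter; the sketch below illustrates that behaviour. The 3x3 window and the toy labels are assumptions, not details of the LARSYS implementation:

```python
import numpy as np

def majority_filter(zones, size=3):
    """Replace each pixel by the most common label in its neighbourhood:
    zones become homogeneous and edges remain abrupt label transitions."""
    pad = size // 2
    padded = np.pad(zones, pad, mode='edge')
    out = np.empty_like(zones)
    for r in range(zones.shape[0]):
        for c in range(zones.shape[1]):
            window = padded[r:r + size, c:c + size].ravel()
            out[r, c] = np.bincount(window).argmax()
    return out

zone_map = np.array([[1, 1, 1, 2],
                     [1, 9, 1, 2],   # 9 = a lone misclassified pixel
                     [1, 1, 2, 2]])
cleaned = majority_filter(zone_map)
```

Unlike a mean filter, the mode filter never invents intermediate labels, so the output is still a valid zone map.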

  7. Digital Mapping Techniques '08—Workshop Proceedings, Moscow, Idaho, May 18–21, 2008

    USGS Publications Warehouse

    Soller, David R.

    2009-01-01

    The Digital Mapping Techniques '08 (DMT'08) workshop was attended by more than 100 technical experts from 40 agencies, universities, and private companies, including representatives from 24 State geological surveys. This year's meeting, the twelfth in the annual series, was hosted by the Idaho Geological Survey, from May 18-21, 2008, on the University of Idaho campus in Moscow, Idaho. Each DMT workshop has been coordinated by the U.S. Geological Survey's National Geologic Map Database Project and the Association of American State Geologists (AASG). As in previous years' meetings, the objective was to foster informal discussion and exchange of technical information, principally in order to develop more efficient methods for digital mapping, cartography, GIS analysis, and information management. At this meeting, oral and poster presentations and special discussion sessions emphasized (1) methods for creating and publishing map products (here, "publishing" includes Web-based release); (2) field data capture software and techniques, including the use of LiDAR; (3) digital cartographic techniques; (4) migration of digital maps into ArcGIS Geodatabase format; (5) analytical GIS techniques; and (6) continued development of the National Geologic Map Database.

  8. Improving predictive mapping of deep-water habitats: Considering multiple model outputs and ensemble techniques

    NASA Astrophysics Data System (ADS)

    Robert, Katleen; Jones, Daniel O. B.; Roberts, J. Murray; Huvenne, Veerle A. I.

    2016-07-01

In the deep sea, biological data are often sparse; hence models capturing relationships between observed fauna and environmental variables (acquired via acoustic mapping techniques) are often used to produce full-coverage species assemblage maps. Many statistical modelling techniques are being developed, but there remains a need to determine the most appropriate mapping techniques. Predictive habitat modelling approaches (redundancy analysis, maximum entropy and random forest) were applied to a heterogeneous section of seabed on Rockall Bank, NE Atlantic, for which landscape indices describing the spatial arrangement of habitat patches were calculated. The predictive maps were based on remotely operated vehicle (ROV) imagery transects and high-resolution autonomous underwater vehicle (AUV) sidescan backscatter maps. Area under the curve (AUC) and accuracy indicated similar performances for the three models tested, but performance varied by species assemblage, with the transitional species assemblage showing the weakest predictive performances. Spatial predictions of habitat suitability differed between statistical approaches, but niche similarity metrics showed redundancy analysis and random forest predictions to be most similar. As one statistical technique could not be found to outperform the others when all assemblages were considered, ensemble mapping techniques, where the outputs of many models are combined, were applied and showed higher accuracy than any single model. Different statistical approaches for predictive habitat modelling possess varied strengths and weaknesses, and by examining the outputs of a range of modelling techniques and their differences, more robust predictions, with better described variation and areas of uncertainty, can be achieved. As improvements to prediction outputs can be achieved without additional costly data collection, ensemble mapping approaches have clear value for spatial management.
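The benefit of ensembles can be seen even in a toy setting: averaging two imperfect suitability surfaces can correct each model's individual errors. All values below are synthetic and the unweighted mean is only one of several ensemble schemes:

```python
import numpy as np

def ensemble_mean(*suitability_maps):
    """Unweighted average of habitat-suitability surfaces from several models."""
    return np.mean(np.stack(suitability_maps), axis=0)

def accuracy(pred, truth, threshold=0.5):
    return float(np.mean((pred >= threshold) == truth))

truth = np.array([0, 0, 1, 1, 1, 0], dtype=bool)   # presence/absence
m1 = np.array([0.2, 0.40, 0.9, 0.6, 0.4, 0.1])     # misses the 5th sample
m2 = np.array([0.3, 0.55, 0.7, 0.8, 0.9, 0.2])     # false alarm on the 2nd
ens = ensemble_mean(m1, m2)
```

Because the two models err on different samples, the averaged surface classifies every sample correctly while neither member does.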

  9. Development and Comparison of Techniques for Generating Permeability Maps using Independent Experimental Approaches

    NASA Astrophysics Data System (ADS)

    Hingerl, Ferdinand; Romanenko, Konstantin; Pini, Ronny; Balcom, Bruce; Benson, Sally

    2014-05-01

We have developed and evaluated methods for creating voxel-based 3D permeability maps of a heterogeneous sandstone sample using independent experimental data from single phase flow (Magnetic Resonance Imaging, MRI) and two-phase flow (X-ray Computed Tomography, CT) measurements. Fluid velocities computed from the generated permeability maps using computational fluid dynamics simulations fit measured velocities very well and significantly outperform empirical porosity-permeability relations, such as the Kozeny-Carman equation. Acquiring meso-scale images of porous rocks using MRI had until recently been a great challenge, due to short spin relaxation times and large field gradients within the sample. The combination of the 13-interval Alternating-Pulsed-Gradient Stimulated-Echo (APGSTE) scheme with three-dimensional Single Point Ramped Imaging with T1 Enhancement (SPRITE) - a technique recently developed at the UNB MRI Center - can overcome these challenges and enables quantitative three-dimensional mapping of porosities and fluid velocities. Using porosity and (single-phase) velocity maps from MRI and (multi-phase) saturation maps from CT measurements, we employed three different techniques to obtain permeability maps. In the first approach, we applied the Kozeny-Carman relationship to porosities measured using MRI. In the second approach, we computed permeabilities using a J-Leverett scaling method, which is based on saturation maps obtained from N2-H2O multi-phase experiments. The third set of permeabilities was generated using a new inverse iterative-updating technique, which is based on porosities and measured velocities obtained in single-phase flow experiments. The resulting three permeability maps then provided input for computational fluid dynamics simulations - employing the Stanford CFD code AD-GPRS - to generate velocity maps, which were compared to velocity maps measured by MRI.
The J-Leverett scaling method and the iterative-updating method
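The Kozeny-Carman relationship used in the first approach estimates permeability from porosity alone. A sketch in one common grain-size form follows; the constant 180 and the 100-micron grain diameter are assumed for illustration, not taken from the paper:

```python
import numpy as np

def kozeny_carman(porosity, grain_diameter=1e-4):
    """Permeability (m^2) from porosity via k = d^2 phi^3 / (180 (1-phi)^2);
    the constant 180 and the grain diameter d (m) are assumed values."""
    phi = np.asarray(porosity, dtype=float)
    return grain_diameter ** 2 * phi ** 3 / (180.0 * (1.0 - phi) ** 2)

porosity_map = np.array([[0.15, 0.20],
                         [0.25, 0.30]])
perm_map = kozeny_carman(porosity_map)  # strongly nonlinear in porosity
```

The strong nonlinearity in porosity is visible even on this 2x2 map, and it is precisely where such empirical relations break down in heterogeneous rock that the paper's velocity-constrained inversion pays off.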

  10. Digital Mapping Techniques '10-Workshop Proceedings, Sacramento, California, May 16-19, 2010

    USGS Publications Warehouse

    Soller, David R.; Soller, David R.

    2012-01-01

    The Digital Mapping Techniques '10 (DMT'10) workshop was attended by 110 technical experts from 40 agencies, universities, and private companies, including representatives from 19 State geological surveys (see Appendix A). This workshop, hosted by the California Geological Survey, May 16-19, 2010, in Sacramento, California, was similar in nature to the previous 13 meetings (see Appendix B). The meeting was coordinated by the U.S. Geological Survey's (USGS) National Geologic Map Database project. As in the previous meetings, the objective was to foster informal discussion and exchange of technical information. It is with great pleasure that I note that the objective was again successfully met, as attendees continued to share and exchange knowledge and information, and renew friendships and collegial work begun at past DMT workshops. At this meeting, oral and poster presentations and special discussion sessions emphasized (1) methods for creating and publishing map products ("publishing" includes Web-based release); (2) field data capture software and techniques, including the use of LiDAR; (3) digital cartographic techniques; (4) migration of digital maps into ArcGIS Geodatabase format; (5) analytical GIS techniques; and (6) continued development of the National Geologic Map Database.

  11. Evaluation of Landslide Mapping Techniques and LiDAR-based Conditioning Factors

    NASA Astrophysics Data System (ADS)

    Mahalingam, R.; Olsen, M. J.

    2014-12-01

Landslides are a major geohazard that results in significant human, infrastructure, and economic losses. Landslide susceptibility mapping can help communities plan and prepare for these damaging events. Mapping landslide-susceptible locations using GIS and remote sensing techniques has gained popularity over the past three decades. These efforts use a wide variety of procedures and consider a wide range of factors. Unfortunately, each study is often completed differently and independently of others. Further, the quality of the datasets used varies in terms of source, data collection, and generation, which can propagate errors or inconsistencies into the resulting output maps. Light detection and ranging (LiDAR) has proved to have higher accuracy in representing the continuous topographic surface, which can help minimize this uncertainty. The primary objectives of this paper are to investigate the applicability and performance of terrain factors in landslide hazard mapping, determine whether LiDAR-derived datasets (slope, slope roughness, terrain roughness, stream power index, and compound topographic index) can be used for predictive mapping without data representing other common landslide conditioning factors, and evaluate the differences in landslide susceptibility mapping using widely used statistical approaches. The aforementioned factors were used to produce landslide susceptibility maps for a 140 km2 study area in northwest Oregon using six representative techniques: frequency ratio, weights of evidence, logistic regression, discriminant analysis, artificial neural network, and support vector machine. Most notably, the research showed an advantage in selecting fewer critical conditioning factors. The most reliable factors could all be derived from a single LiDAR DEM, reducing the need for laborious and costly data gathering. Most of the six techniques showed similar statistical results; however, ANN showed less accuracy for predictive mapping.
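Frequency ratio, the simplest of the six techniques, compares each factor class's share of landslide pixels with its share of all pixels. A minimal sketch on synthetic binned-slope data (the class labels and landslide inventory are toy values):

```python
import numpy as np

def frequency_ratio(factor_class, landslide_mask):
    """Per-class frequency ratio: (share of landslide pixels in the class) /
    (share of all pixels in the class). FR > 1 flags classes associated
    with instability."""
    n_total = factor_class.size
    n_slides = landslide_mask.sum()
    fr = {}
    for c in np.unique(factor_class):
        in_class = factor_class == c
        fr[int(c)] = (landslide_mask[in_class].sum() / n_slides) / (in_class.sum() / n_total)
    return fr

slope_class = np.array([0, 0, 0, 1, 1, 1, 2, 2, 2, 2])  # binned slope (toy)
slides = np.array([0, 0, 0, 0, 1, 0, 1, 1, 1, 0], dtype=bool)
fr = frequency_ratio(slope_class, slides)
```

Summing the FR values of a pixel's classes across all conditioning factors yields its susceptibility index, which is then binned into the final map.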

  12. The Most Luminous Supernovae

    NASA Astrophysics Data System (ADS)

    Sukhbold, Tuguldur; Woosley, S. E.

    2016-04-01

Recent observations have revealed a stunning diversity of extremely luminous supernovae, seemingly increasing in radiant energy without bound. We consider simple approximate limits for what existing models can provide for the peak luminosity and total radiated energy of non-relativistic, isotropic stellar explosions. The brightest possible supernova is a Type I explosion powered by a sub-millisecond magnetar with field strength B ~ few × 10^13 G. In extreme cases, such models might reach a peak luminosity of 2 × 10^46 erg s^-1 and radiate a total energy of up to 4 × 10^52 erg. Other less luminous models are also explored, including prompt hyper-energetic explosions in red supergiants, pulsational pair-instability supernovae, pair-instability supernovae, and colliding shells. Approximate analytic expressions and limits are given for each case. Excluding magnetars, the peak luminosity is near 3 × 10^44 erg s^-1 for the brightest models and the corresponding limit on total radiated energy is 3 × 10^51 erg. Barring new physics, supernovae with a light output over 3 × 10^51 erg must be rotationally powered, either during the explosion itself or after, the most obvious candidate being a rapidly rotating magnetar. A magnetar-based model for the recent transient event ASASSN-15lh is presented that strains, but does not exceed, the limits of what the model can provide.
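The magnetar ceiling quoted above follows from the rotational energy reservoir of a neutron star; a back-of-envelope check, assuming a typical moment of inertia I ≈ 10^45 g cm²:

```latex
E_{\mathrm{rot}} = \tfrac{1}{2} I \Omega^{2} = \frac{2\pi^{2} I}{P^{2}}
\approx 2\times10^{52}\,\mathrm{erg}
\left(\frac{I}{10^{45}\,\mathrm{g\,cm^{2}}}\right)
\left(\frac{P}{1\,\mathrm{ms}}\right)^{-2}
```

A sub-millisecond spin period P therefore supplies the ~4 × 10^52 erg upper bound on total radiated energy.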

  13. Techniques for Down-Sampling a Measured Surface Height Map for Model Validation

    NASA Technical Reports Server (NTRS)

    Sidick, Erkin

    2012-01-01

This software allows one to down-sample a measured surface map for model validation without introducing re-sampling errors, while also eliminating existing measurement noise and measurement errors. The software tool, which implements two new techniques, can be used in all optical model validation processes involving large space optical surfaces.
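Down-sampling at an integer factor by block averaging introduces no interpolation (hence no re-sampling error) and attenuates random measurement noise by the number of averaged samples. The sketch below illustrates that general idea; the specific scheme is an assumption, not necessarily this tool's algorithm:

```python
import numpy as np

def downsample_surface(height_map, factor):
    """Integer-factor block averaging: no interpolation is performed and
    random measurement noise is averaged over factor^2 samples per pixel."""
    r, c = height_map.shape
    assert r % factor == 0 and c % factor == 0
    return height_map.reshape(r // factor, factor, c // factor, factor).mean(axis=(1, 3))

rng = np.random.default_rng(3)
surface = np.outer(np.linspace(0.0, 1.0, 64), np.ones(64))  # smooth height map
noisy = surface + 1e-3 * rng.standard_normal((64, 64))      # + sensor noise
small = downsample_surface(noisy, 4)
```

Averaging a 4x4 block reduces uncorrelated noise by a factor of 4 in standard deviation, which is why the down-sampled map can be cleaner than the measurement it came from.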

  14. Accuracy of vertical radial plume mapping technique in measuring lagoon gas emission

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Recently, the U.S. Environmental Protection Agency (USEPA) posted a ground-based optical remote sensing method on its website called OTM 10 for measuring fugitive gas emission flux from area sources such as closed landfills. The OTM 10 utilizes the vertical radial plume mapping (VRPM) technique to c...

  15. Comparing Computer-Supported Dynamic Modeling and "Paper & Pencil" Concept Mapping Technique in Students' Collaborative Activity

    ERIC Educational Resources Information Center

    Komis, Vassilis; Ergazaki, Marida; Zogza, Vassiliki

    2007-01-01

    This study aims at highlighting the collaborative activity of two high school students (age 14) in the cases of modeling the complex biological process of plant growth with two different tools: the "paper & pencil" concept mapping technique and the computer-supported educational environment "ModelsCreator". Students' shared activity in both cases…

  16. Information Landscaping: Information Mapping, Charting, Querying and Reporting Techniques for Total Quality Knowledge Management.

    ERIC Educational Resources Information Center

    Tsai, Bor-sheng

    2003-01-01

    Total quality management and knowledge management are merged and used as a conceptual model to direct and develop information landscaping techniques through the coordination of information mapping, charting, querying, and reporting. Goals included: merge citation analysis and data mining, and apply data visualization and information architecture…

  17. Acoustic emission source location in complex structures using full automatic delta T mapping technique

    NASA Astrophysics Data System (ADS)

    Al-Jumaili, Safaa Kh.; Pearson, Matthew R.; Holford, Karen M.; Eaton, Mark J.; Pullin, Rhys

    2016-05-01

    An easy to use, fast to apply, cost-effective, and very accurate non-destructive testing (NDT) technique for damage localisation in complex structures is key for the uptake of structural health monitoring systems (SHM). Acoustic emission (AE) is a viable technique that can be used for SHM and one of the most attractive features is the ability to locate AE sources. The time of arrival (TOA) technique is traditionally used to locate AE sources, and relies on the assumption of constant wave speed within the material and uninterrupted propagation path between the source and the sensor. In complex structural geometries and complex materials such as composites, this assumption is no longer valid. Delta T mapping was developed in Cardiff in order to overcome these limitations; this technique uses artificial sources on an area of interest to create training maps. These are used to locate subsequent AE sources. However operator expertise is required to select the best data from the training maps and to choose the correct parameter to locate the sources, which can be a time consuming process. This paper presents a new and improved fully automatic delta T mapping technique where a clustering algorithm is used to automatically identify and select the highly correlated events at each grid point whilst the "Minimum Difference" approach is used to determine the source location. This removes the requirement for operator expertise, saving time and preventing human errors. A thorough assessment is conducted to evaluate the performance and the robustness of the new technique. In the initial test, the results showed excellent reduction in running time as well as improved accuracy of locating AE sources, as a result of the automatic selection of the training data. Furthermore, because the process is performed automatically, this is now a very simple and reliable technique due to the prevention of the potential source of error related to manual manipulation.
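The delta T idea can be sketched on a toy grid: arrival-time differences are stored at each node (built from artificial sources in practice), and a new event is placed at the node whose stored delta T differs least from the measured one. Constant wave speed is assumed here only to synthesize the training data; the delta T map itself needs no such assumption:

```python
import numpy as np

sensors = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # m
speed = 5000.0  # m/s, used only to generate synthetic training data

def delta_t(point):
    """Arrival-time differences relative to the first sensor."""
    t = np.linalg.norm(sensors - point, axis=1) / speed
    return t[1:] - t[0]

# Training map: delta T stored at every node of a grid over the structure
# (obtained from artificial sources, e.g. pencil-lead breaks, in practice).
xs = np.linspace(0.0, 1.0, 21)
grid = np.array([[x, y] for x in xs for y in xs])
training = np.array([delta_t(p) for p in grid])

def locate(measured):
    """'Minimum difference': the node whose stored delta T best matches."""
    return grid[np.argmin(np.sum((training - measured) ** 2, axis=1))]

estimate = locate(delta_t(np.array([0.30, 0.70])))
```

Because the map is empirical, holes, thickness changes, and anisotropy are baked into the training data rather than modelled, which is what lets the method work where plain TOA fails.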

  18. A wafer mapping technique for residual stress in surface micromachined films

    NASA Astrophysics Data System (ADS)

    Schiavone, G.; Murray, J.; Smith, S.; Desmulliez, M. P. Y.; Mount, A. R.; Walton, A. J.

    2016-09-01

The design of MEMS devices employing movable structures is crucially dependent on the mechanical behaviour of the deposited materials. It is therefore important to be able to fully characterize the micromachined films and predict with confidence the mechanical properties of patterned structures. This paper presents a characterization technique that enables the residual stress in MEMS films to be mapped at the wafer level by using microstructures released by surface micromachining. These dedicated MEMS test structures and the associated measurement techniques are used to extract localized information on the strain and Young’s modulus of the film under investigation. The residual stress is then determined by numerically coupling this data with a finite element analysis of the structure. This paper illustrates the measurement routine and demonstrates it with a case study using electrochemically deposited alloys of nickel and iron, particularly prone to develop high levels of residual stress. The results show that the technique enables wafer mapping of film non-uniformities and identifies wafer-to-wafer differences. A comparison between the results obtained from the mapping technique and conventional wafer bow measurements highlights the benefits of using a procedure tailored to films that are non-uniform, patterned and surface-micromachined, as opposed to simple standard stress extraction methods. The presented technique reveals detailed information that is generally unexplored when using conventional stress extraction methods such as wafer bow measurements.
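At its simplest, the residual stress at each die site follows from the locally extracted strain and Young's modulus via uniaxial Hooke's law; the paper couples this with finite element analysis, and the film values below are illustrative assumptions:

```python
import numpy as np

def residual_stress_map(strain_map, modulus_map):
    """Uniaxial Hooke's law, sigma = E * epsilon, applied site by site -
    a simplification of the paper's FEA-coupled extraction."""
    return modulus_map * strain_map

# Illustrative NiFe film values (assumed): E in Pa, strain dimensionless.
E = np.array([[160.0, 158.0], [162.0, 159.0]]) * 1e9
strain = np.array([[1.2e-3, 1.4e-3], [1.1e-3, 1.5e-3]])
stress = residual_stress_map(strain, E)  # Pa; ~0.2 GPa tensile
```

Mapping both strain and modulus per site, rather than assuming a wafer-wide constant, is what exposes the non-uniformities that a single wafer-bow number averages away.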

  19. Parametric techniques for characterizing myocardial tissue by magnetic resonance imaging (part 1): T1 mapping.

    PubMed

    Perea Palazón, R J; Ortiz Pérez, J T; Prat González, S; de Caralt Robira, T M; Cibeira López, M T; Solé Arqués, M

    2016-01-01

    The development of myocardial fibrosis is a common process in the appearance of ventricular dysfunction in many heart diseases. Magnetic resonance imaging makes it possible to accurately evaluate the structure and function of the heart, and its role in the macroscopic characterization of myocardial fibrosis by late enhancement techniques has been widely validated clinically. Recent studies have demonstrated that T1-mapping techniques can quantify diffuse myocardial fibrosis and the expansion of the myocardial extracellular space in absolute terms. However, further studies are necessary to validate the usefulness of this technique in the early detection of tissue remodeling at a time when implementing early treatment would improve a patient's prognosis. This article reviews the state of the art for T1 mapping of the myocardium, its clinical applications, and its limitations.

  1. Evaluation of mesh morphing and mapping techniques in patient specific modelling of the human pelvis.

    PubMed

    Salo, Zoryana; Beek, Maarten; Whyne, Cari Marisa

    2012-08-01

    Robust generation of pelvic finite element models is necessary to understand variation in mechanical behaviour resulting from differences in gender, aging, disease and injury. The objective of this study was to apply and evaluate mesh morphing and mapping techniques to facilitate the creation and structural analysis of specimen-specific finite element (FE) models of the pelvis. A specimen-specific pelvic FE model (source mesh) was generated following a traditional user-intensive meshing scheme. The source mesh was morphed onto a computed tomography scan generated target surface of a second pelvis using a landmarked-based approach, in which exterior source nodes were shifted to target surface vertices, while constrained along a normal. A second copy of the morphed model was further refined through mesh mapping, in which surface nodes of the initial morphed model were selected in patches and remapped onto the surfaces of the target model. Computed tomography intensity-based material properties were assigned to each model. The source, target, morphed and mapped models were analyzed under axial compression using linear static FE analysis, and their strain distributions were evaluated. Morphing and mapping techniques were effectively applied to generate good quality and geometrically complex specimen-specific pelvic FE models. Mapping significantly improved strain concurrence with the target pelvis FE model.

  2. Evaluation of mesh morphing and mapping techniques in patient specific modeling of the human pelvis.

    PubMed

    Salo, Zoryana; Beek, Maarten; Whyne, Cari Marisa

    2013-01-01

    Robust generation of pelvic finite element models is necessary to understand the variation in mechanical behaviour resulting from differences in gender, aging, disease and injury. The objective of this study was to apply and evaluate mesh morphing and mapping techniques to facilitate the creation and structural analysis of specimen-specific finite element (FE) models of the pelvis. A specimen-specific pelvic FE model (source mesh) was generated following a traditional user-intensive meshing scheme. The source mesh was morphed onto a computed tomography scan generated target surface of a second pelvis using a landmarked-based approach, in which exterior source nodes were shifted to target surface vertices, while constrained along a normal. A second copy of the morphed model was further refined through mesh mapping, in which surface nodes of the initial morphed model were selected in patches and remapped onto the surfaces of the target model. Computed tomography intensity based material properties were assigned to each model. The source, target, morphed and mapped models were analyzed under axial compression using linear static FE analysis and their strain distributions evaluated. Morphing and mapping techniques were effectively applied to generate good quality geometrically complex specimen-specific pelvic FE models. Mapping significantly improved strain concurrence with the target pelvis FE model.

  3. Lidar Luminance Quantizer

    NASA Technical Reports Server (NTRS)

    Quilligan, Gerard; DeMonthier, Jeffrey; Suarez, George

    2011-01-01

    This innovation addresses challenges in lidar imaging, particularly with the detection scheme and the shapes of the detected signals. Ideally, the echoed pulse widths should be extremely narrow to resolve fine detail at high event rates. However, narrow pulses require wideband detection circuitry with increased power dissipation to minimize thermal noise. Filtering is also required to shape each received signal into a form suitable for processing by a constant fraction discriminator (CFD) followed by a time-to-digital converter (TDC). As the intervals between the echoes decrease, the finite bandwidth of the shaping circuits blends the pulses into an analog signal (luminance) with multiple modes, reducing the ability of the CFD to discriminate individual events.
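The constant fraction discriminator (CFD) mentioned above can be illustrated with a toy sketch: rather than triggering at a fixed threshold, it triggers where the shaped pulse crosses a fixed fraction of its own peak, making the timing largely independent of pulse amplitude. (A hardware CFD operates on delayed and attenuated analog copies of the signal; this sampled version is only a conceptual analogue.)

```python
# Conceptual sketch of constant-fraction timing on a sampled pulse:
# trigger where the pulse crosses a fixed fraction of its own peak.

def cfd_crossing(samples, fraction=0.5):
    """Return the index of the first sample at or above fraction * peak."""
    threshold = fraction * max(samples)
    for i, v in enumerate(samples):
        if v >= threshold:
            return i
    return None

small_pulse = [0, 1, 3, 6, 10, 6, 3, 1, 0]
big_pulse   = [0, 2, 6, 12, 20, 12, 6, 2, 0]   # same shape, 2x amplitude
# Both cross 50% of their own peak at the same sample index, which is why
# constant-fraction timing is insensitive to amplitude:
print(cfd_crossing(small_pulse), cfd_crossing(big_pulse))  # -> 3 3
```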

  4. MODIS Snow Cover Mapping Decision Tree Technique: Snow and Cloud Discrimination

    NASA Technical Reports Server (NTRS)

    Riggs, George A.; Hall, Dorothy K.

    2010-01-01

    Accurate mapping of snow cover continues to challenge cryospheric scientists and modelers. The Moderate-Resolution Imaging Spectroradiometer (MODIS) snow data products have been used since 2000 by many investigators to map and monitor snow cover extent for various applications. Users have reported on the utility of the products and also on problems encountered. Three problems or hindrances in the use of the MODIS snow data products have been reported in the literature: cloud obscuration, snow/cloud confusion, and snow omission errors in thin or sparse snow cover conditions. Implementation of the MODIS snow algorithm in a decision tree technique using surface reflectance input to mitigate those problems is being investigated. The objective of this work is to use a decision tree structure for the snow algorithm. This should alleviate snow/cloud confusion and omission errors and provide a snow map with classes that convey information on how snow was detected, e.g. snow under clear sky or snow under cloud, to give users flexibility in interpreting and deriving a snow map. Results of a snow cover decision tree algorithm are compared to the standard MODIS snow map and found to exhibit improved ability to alleviate snow/cloud confusion in some situations, allowing up to about a 5% increase in mapped snow cover extent, and thus accuracy, in some scenes.
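A decision-tree structure of the kind described can be sketched with the standard normalized-difference snow index, NDSI = (green - SWIR) / (green + SWIR). The thresholds and the cloud branch below are illustrative placeholders, not the MODIS algorithm's actual tests.

```python
# Toy decision tree in the spirit of the snow/cloud discrimination above.
# The NDSI formula is standard; the 0.4 and 0.6 thresholds and the cloud
# branch are invented for illustration, not MODIS values.

def classify_pixel(green, swir, cloud_flag):
    ndsi = (green - swir) / (green + swir)
    if cloud_flag:
        # Snow may still be detectable through some cloud conditions.
        return "snow under cloud" if ndsi > 0.6 else "cloud"
    return "snow under clear sky" if ndsi > 0.4 else "not snow"

print(classify_pixel(green=0.8, swir=0.1, cloud_flag=False))  # -> snow under clear sky
print(classify_pixel(green=0.8, swir=0.1, cloud_flag=True))   # -> snow under cloud
print(classify_pixel(green=0.3, swir=0.4, cloud_flag=False))  # -> not snow
```

Labelled leaves like "snow under cloud" are what give users the interpretive flexibility the abstract describes.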

  5. Digital Mapping Techniques '05--Workshop Proceedings, Baton Rouge, Louisiana, April 24-27, 2005

    USGS Publications Warehouse

    Soller, David R.

    2005-01-01

    Introduction: The Digital Mapping Techniques '05 (DMT'05) workshop was attended by more than 100 technical experts from 47 agencies, universities, and private companies, including representatives from 25 state geological surveys (see Appendix A). This workshop was similar in nature to the previous eight meetings, held in Lawrence, Kansas (Soller, 1997), in Champaign, Illinois (Soller, 1998), in Madison, Wisconsin (Soller, 1999), in Lexington, Kentucky (Soller, 2000), in Tuscaloosa, Alabama (Soller, 2001), in Salt Lake City, Utah (Soller, 2002), in Millersville, Pennsylvania (Soller, 2003), and in Portland, Oregon (Soller, 2004). This year's meeting was hosted by the Louisiana Geological Survey, from April 24-27, 2005, on the Louisiana State University campus in Baton Rouge, Louisiana. As in the previous meetings, the objective was to foster informal discussion and exchange of technical information. It is with great pleasure that I note the objective was successfully met, as attendees continued to share and exchange knowledge and information, and to renew friendships and collegial work begun at past DMT workshops. Each DMT workshop has been coordinated by the Association of American State Geologists (AASG) and U.S. Geological Survey (USGS) Data Capture Working Group, which was formed in August 1996 to support the AASG and the USGS in their effort to build a National Geologic Map Database (see Soller and Berg, this volume, and http://ngmdb.usgs.gov/info/standards/datacapt/). The Working Group was formed because increased production efficiencies, standardization, and quality of digital map products were needed for the database, and for the State and Federal geological surveys, to provide more high-quality digital maps to the public. At the 2005 meeting, oral and poster presentations and special discussion sessions emphasized: 1) methods for creating and publishing map products (here, 'publishing' includes Web-based release); 2) field data capture software and

  6. Mapping

    ERIC Educational Resources Information Center

    Kinney, Douglas M.; McIntosh, Willard L.

    1978-01-01

    Geologic mapping in the United States increased by about one-quarter in the past year. Examinations of mapping trends were in the following categories: (1) Mapping at scales of 1:100,000; (2) Metric-scale base maps; (3) International mapping; and (4) Planetary mapping. (MA)

  7. Using a Concept Mapping Tool with a Photograph Association Technique (CoMPAT) to Elicit Children's Ideas about Microbial Activity

    ERIC Educational Resources Information Center

    Byrne, Jenny; Grace, Marcus

    2010-01-01

    Concept mapping is a technique used to provide a visual representation of an individual's ideas about a concept or set of related concepts. This paper describes a concept mapping tool using a photograph association technique (CoMPAT) that is considered to be a novel way of eliciting children's ideas. What children at 11 years of age know about…

  8. Digital mapping techniques '00, workshop proceedings - May 17-20, 2000, Lexington, Kentucky

    USGS Publications Warehouse

    Soller, David R.

    2000-01-01

    Introduction: The Digital Mapping Techniques '00 (DMT'00) workshop was attended by 99 technical experts from 42 agencies, universities, and private companies, including representatives from 28 state geological surveys (see Appendix A). This workshop was similar in nature to the first three meetings, held in June, 1997, in Lawrence, Kansas (Soller, 1997), in May, 1998, in Champaign, Illinois (Soller, 1998a), and in May, 1999, in Madison, Wisconsin (Soller, 1999). This year's meeting was hosted by the Kentucky Geological Survey, from May 17 to 20, 2000, on the University of Kentucky campus in Lexington. As in the previous meetings, the objective was to foster informal discussion and exchange of technical information. When, based on discussions at the workshop, an attendee adopts or modifies a newly learned technique, the workshop clearly has met that objective. Evidence of learning and cooperation among participating agencies continued to be a highlight of the DMT workshops (see example in Soller, 1998b, and various papers in this volume). The meeting's general goal was to help move the state geological surveys and the USGS toward development of more cost-effective, flexible, and useful systems for digital mapping and geographic information systems (GIS) analysis. Through oral and poster presentations and special discussion sessions, emphasis was given to: 1) methods for creating and publishing map products (here, 'publishing' includes Web-based release); 2) continued development of the National Geologic Map Database; 3) progress toward building a standard geologic map data model; 4) field data-collection systems; and 5) map citation and authorship guidelines. Four representatives of the GIS hardware and software vendor community were invited to participate. 
The four annual DMT workshops were coordinated by the AASG/USGS Data Capture Working Group, which was formed in August, 1996, to support the Association of American State Geologists and the USGS in their effort

  9. Breast density mapping based upon system calibration, x-ray techniques, and FFDM images

    NASA Astrophysics Data System (ADS)

    Chen, Biao; Smith, Andrew P.; Jing, Zhenxue; Wu, Tao

    2007-03-01

    Clinical studies have correlated high breast density with a woman's risk of breast cancer. A breast density measurement that can quantitatively depict the volume distribution and percentage of dense tissue in the breast would be very useful for breast cancer risk assessment, and might be more predictive of risk than the common but subjective and coarse 4-point BIRADS scale. This paper proposes a neural-network mapping to compute breast density information from system calibration data, x-ray techniques, and Full Field Digital Mammography (FFDM) images. The mapping consists of four modules: system calibration, a beam-quality generator, a normalized-absorption generator, and a multi-layer feed-forward neural network. As the core of the breast density mapping, the network accepts the x-ray target/filter combination, normalized x-ray absorption, a pixel-wise breast thickness map, and the x-ray beam quality during image acquisition as inputs, and outputs a pixel-wise breast density distribution and a single breast density percentage for the imaged breast. Training and testing data sets for the design and verification of the network were formulated from calibrated x-ray beam quality, imaging data acquired with a step-wedge phantom under a variety of x-ray imaging techniques, and nominal breast densities of tissue-equivalent materials. The network was trained with a back-propagation learning method based on the Levenberg-Marquardt algorithm. Phantom studies across various thicknesses and glandular densities were performed with clinical x-ray techniques. Preliminary results showed that the neural-network mapping is promising for accurately computing glandular density distribution and breast density percentage.

  10. Arsenic risk mapping in Bangladesh: a simulation technique of cokriging estimation from regional count data.

    PubMed

    Hassan, M Manzurul; Atkins, Peter J

    2007-10-01

    Risk analysis with spatial interpolation methods from a regional database on to a continuous surface is of contemporary interest. Groundwater arsenic poisoning in Bangladesh and its impact on human health has been one of the "biggest environmental health disasters" of recent years. It is ironic that so many tubewells have been installed in recent times to provide pathogen-free drinking water, yet the water pumped is often contaminated with toxic levels of arsenic. This paper seeks to analyse the spatial pattern of arsenic risk by mapping composite "problem regions" in southwest Bangladesh. It also examines the cokriging interpolation method in analysing the suitability of isopleth maps for different risk areas. GIS-based data processing and spatial analysis were used for this research, along with state-of-the-art decision-making techniques. Apart from the GIS-based buffering and overlay mapping operations, a cokriging interpolation method was adopted because of its exact interpolation capacity. The paper presents an interpolation of regional estimates of arsenic data for spatial risk mapping that overcomes the areal bias problem of administrative boundaries. Moreover, the functionality of the cokriging method demonstrates the suitability of isopleth maps that are easy to read.

  11. GIS-based statistical mapping technique for block-and-ash pyroclastic flow and surge hazards

    NASA Astrophysics Data System (ADS)

    Widiwijayanti, C.; Voight, B.; Hidayat, D.; Schilling, S.

    2008-12-01

    Assessments of pyroclastic flow (PF) hazards are commonly based on mapping of PF and surge deposits and estimations of inundation limits, and/or computer models of varying degrees of sophistication. In volcanic crises a PF hazard map may be sorely needed, but limited time, exposures, or safety aspects may preclude fieldwork, and insufficient time or baseline data may be available for reliable dynamic simulations. We have developed a statistically constrained simulation model for block-and-ash PFs to estimate potential areas of inundation by adapting methodology from Iverson et al. (1998) for lahars. The predictive equations for block-and-ash PFs are calibrated with data from many volcanoes and given by A = (0.05-0.1)V^(2/3) and B = (35-40)V^(2/3), where A is the cross-sectional area of inundation, B is the planimetric area, and V is the deposit volume. The proportionality coefficients were obtained from regression analyses and comparison of simulations to mapped deposits. The method embeds the predictive equations in a GIS program coupled with DEM topography, using the LAHARZ program of Schilling (1998). Although the method is objective and reproducible, any PF hazard zone so computed should be considered an approximate guide only, due to uncertainties in the coefficients applicable to individual PFs, DEM details, and release volumes. Gradational nested hazard maps produced by these simulations reflect, in a sense, these uncertainties. The model does not explicitly consider dynamic behavior, which can be important. Surge impacts must be extended beyond PF hazard zones, and we have explored several approaches to do this. The method has been used to supply PF hazard maps in two crises: Merapi 2006 and Montserrat 2006-2007. We have also compared our hazard maps to actual recent PF deposits and to maps generated by several other model techniques.
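The predictive equations quoted above can be evaluated directly. The coefficients below are midpoints of the stated ranges (0.05-0.1 and 35-40), chosen for illustration only; a real hazard assessment would sweep the full coefficient ranges and a set of release volumes to produce the nested hazard zones.

```python
# Evaluating the calibrated inundation equations for block-and-ash flows:
# A = c_a * V^(2/3) (cross-sectional area), B = c_b * V^(2/3) (planimetric
# area). The midpoint coefficients are my choice within the quoted ranges.

def inundation_areas(volume_m3, c_a=0.075, c_b=37.5):
    """Return (cross-sectional area, planimetric area) in m^2 for a
    block-and-ash pyroclastic flow of the given deposit volume."""
    v23 = volume_m3 ** (2.0 / 3.0)
    return c_a * v23, c_b * v23

a, b = inundation_areas(1e6)   # a 10^6 m^3 flow
print(round(a), round(b))      # -> 750 375000
```

In LAHARZ-style use, B constrains how far downstream the hazard zone extends over the DEM, while A constrains its local cross-section in each valley section.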

  12. New mapping techniques help assess the health of Hawaii's coral reefs

    USGS Publications Warehouse

    Field, M.E.; Chavez, P.S., Jr.; Evans, K.R.; Cochran, S.A.

    2001-01-01

    The U.S. Geological Survey (USGS) is working closely with academic institutions and state and Federal agencies to assess the factors that affect the health of Hawaii's and our Nation's coral reefs. In order to establish a basis from which scientists can objectively detect changes in reef health, the USGS and its cooperators are applying many new techniques to the mapping and monitoring of coral reefs in Hawaii.

  13. Mineral classification map using MF and SAM techniques: A case study in the Nohwa Island, Korea

    SciTech Connect

    Son, Young-Sun; Yoon, Wang-Jung

    2015-03-10

    The purpose of this study is to map pyrophyllite distribution at the surface of the Nohwa deposit, Korea, using Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) data. For this, a combined Spectral Angle Mapper (SAM) and Matched Filtering (MF) technique based on a mathematical algorithm was applied. The regional distribution of high-grade and low-grade pyrophyllite in the Nohwa deposit area could be differentiated by this method. The results of this study show that ASTER data analysis using a combination of SAM and MF techniques will assist in the exploration of pyrophyllite at the exposed surface.
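The SAM component of the combined method can be sketched as the angle between a pixel spectrum and a reference (endmember) spectrum, with a pixel accepted when the angle falls below a chosen threshold. The spectra and the threshold below are toy values, not ASTER bands.

```python
# Spectral Angle Mapper sketch: the angle between two spectra treated as
# vectors. Scaling a spectrum (illumination change) leaves the angle
# unchanged, which is SAM's main appeal.
import math

def spectral_angle(pixel, reference):
    dot = sum(p * r for p, r in zip(pixel, reference))
    norm = math.sqrt(sum(p * p for p in pixel)) * math.sqrt(sum(r * r for r in reference))
    # Clamp against floating-point rounding before acos.
    return math.acos(max(-1.0, min(1.0, dot / norm)))  # radians

reference = [0.2, 0.5, 0.9, 0.4]   # stand-in "pyrophyllite" endmember
candidate = [0.4, 1.0, 1.8, 0.8]   # same shape, scaled -> angle near 0
unrelated = [0.9, 0.1, 0.1, 0.9]
print(spectral_angle(candidate, reference) < 0.1)   # -> True
print(spectral_angle(unrelated, reference) < 0.1)   # -> False
```

The MF step would then score the matched pixels by estimated endmember abundance; combining the two suppresses the false positives either method produces alone.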

  14. A semi-quantitative technique for mapping potential aquifer productivity on the national scale: example of England and Wales (UK)

    NASA Astrophysics Data System (ADS)

    Abesser, Corinna; Lewis, Melinda

    2015-12-01

    The development and validation of aquifer productivity and depth-to-source maps for England and Wales are described. Aquifer productivity maps can provide valuable support for the assessment, planning and management of groundwater and renewable heat energy resources. Aquifer productivity is often mapped using geostatistical interpolation techniques such as kriging, but these techniques tend to be unsuitable for mapping at the national scale due to the high data (and time) demands. A methodology is outlined for mapping aquifer productivity at the national scale using existing national-scale data sets. Pumping test data are used to characterise the potential borehole yields that different geological formations of varying lithologies and ages can provide. Based on this analysis and using expert knowledge, the corresponding map codes on the geological map are assigned to potential productivity classes. The subsurface (concealed) extent of aquifer units is mapped from geophysical data, and together with the attributed geological map, provide the bedrock-aquifer productivity map. Drilling and pumping costs can be an important consideration when evaluating the feasibility of developing a groundwater source. Thus, a map of the approximate depth to source is developed alongside the aquifer productivity map. The maps are validated using independent data sets, and map performance is compared against performance from maps derived by random and uniform attribution. The results show that the maps successfully predict potential productivity and approximate depth to the water source, although utility of the depth-to-source map could be improved by increasing the vertical discretisation at which depth intervals are mapped.

  15. Dynamics of adaptation at high luminances: adaptation is faster after luminance decrements than after luminance increments.

    PubMed

    Poot, L; Snippe, H P; van Hateren, J H

    1997-09-01

    As is well known, dark adaptation in the human visual system is much slower than is recovery from darkness. We show that at high photopic luminances the situation is exactly opposite. First, we study detection thresholds for a small light flash, at various delays from decrement and increment steps in background luminance. Light adaptation is nearly complete within 100 ms after luminance decrements but takes much longer after luminance increments. Second, we compare sensitivity after equally visible pulses or steps in the adaptation luminance and find that detectability is initially the same but recovers much faster for pulses than for increment steps. This suggests that, whereas any residual threshold elevation after a step shows the incomplete luminance adaptation, the initial threshold elevation is caused by the temporal contrast of the background steps and pulses. This hypothesis is further substantiated in a third experiment, whereby we show that manipulating the contrast of a transition between luminances affects only the initial part of the threshold curve, and not later stages.

  16. Bioclimatic and vegetation mapping of a topographically complex oceanic island applying different interpolation techniques.

    PubMed

    Garzón-Machado, Víctor; Otto, Rüdiger; del Arco Aguilar, Marcelino José

    2014-07-01

    Different spatial interpolation techniques have been applied to construct objective bioclimatic maps of La Palma, Canary Islands. Interpolation of climatic data on this topographically complex island with strong elevation and climatic gradients represents a challenge. Furthermore, meteorological stations are not evenly distributed over the island, with few stations at high elevations. We carried out spatial interpolations of the compensated thermicity index (Itc) and the annual ombrothermic index (Io), in order to obtain appropriate bioclimatic maps by using automatic interpolation procedures, and to establish their relation to potential vegetation units for constructing a climatophilous potential natural vegetation map (CPNV). For this purpose, we used five interpolation techniques implemented in a GIS: inverse distance weighting (IDW), ordinary kriging (OK), ordinary cokriging (OCK), multiple linear regression (MLR) and MLR followed by ordinary kriging of the regression residuals. Two topographic variables (elevation and aspect), derived from a high-resolution digital elevation model (DEM), were included in OCK and MLR. The accuracy of the interpolation techniques was examined through error statistics on test data, derived from comparison of the predicted and measured values. The best results for both bioclimatic indices were obtained with the MLR method with interpolation of the residuals, which showed the highest R^2 of the regression between observed and predicted values and the lowest root mean square errors. MLR with correction of interpolated residuals is an attractive interpolation method for bioclimatic mapping on this oceanic island since it permits one to fully account for easily available geographic information while also taking into account local variation of climatic data.
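Of the five techniques compared, inverse distance weighting (IDW) is the simplest to sketch: the value at an unsampled point is the distance-weighted mean of the station values. The stations and the power parameter below are invented for illustration.

```python
# Minimal IDW sketch: weight each station by 1 / distance^power, so nearby
# stations dominate. This is one of the five techniques the study compares;
# kriging and cokriging additionally model spatial covariance.

def idw(stations, x, y, power=2.0):
    """stations: list of (sx, sy, value); returns the interpolated value at (x, y)."""
    num = den = 0.0
    for sx, sy, value in stations:
        d2 = (x - sx) ** 2 + (y - sy) ** 2
        if d2 == 0.0:
            return value          # exactly on a station: exact interpolator
        w = 1.0 / d2 ** (power / 2.0)
        num += w * value
        den += w
    return num / den

stations = [(0, 0, 10.0), (10, 0, 20.0)]
print(round(idw(stations, 5, 0), 6))   # midpoint of equal weights -> 15.0
print(idw(stations, 0, 0))             # on a station -> 10.0
```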

  17. Testing the PV-Theta Mapping Technique in a 3-D CTM Model Simulation

    NASA Technical Reports Server (NTRS)

    Frith, Stacey M.

    2004-01-01

    Mapping lower stratospheric ozone into potential vorticity (PV)- potential temperature (Theta) coordinates is a common technique employed to analyze sparse data sets. Ozone transformed into a flow-following dynamical coordinate system is insensitive to meteorological variations. Therefore data from a wide range of times/locations can be compared, so long as the measurements were made in the same airmass (as defined by PV). Moreover, once a relationship between ozone and PV/Theta is established, a full 3D ozone field can be estimated from this relationship and the 3D analyzed PV field. However, ozone data mapped in this fashion can be hampered by noisy PV fields, or "mis-matches" in the resolution and/or exact location of the ozone and PV measurements. In this study, we investigate the PV-ozone relationship using output from a recent 50-year run of the Goddard 3D chemical transport model (CTM). Model constituents are transported using off-line dynamics from the finite volume general circulation model (FVGCM). By using the internally consistent model PV and ozone fields, we minimize noise due to mis-matching and resolution issues. We calculate correlations between model ozone and PV throughout the stratosphere, and test the sensitivity of the technique to initial data resolution. To do this we degrade the model data to that of various satellite instruments, then compare the mapped fields derived from the sub-sampled data to the full resolution model data. With these studies we can determine appropriate limits for the PV-theta mapping technique in latitude, altitude, and as a function of original data resolution.
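The mapping idea can be sketched on a single theta surface: build a lookup of mean ozone per PV bin from sparse matched observations, then estimate a full field by applying the lookup to an analyzed PV field. The data below are synthetic and the binning is deliberately crude.

```python
# Sketch of PV-theta mapping on one theta surface, with synthetic data:
# sparse (PV, ozone) pairs define a mean-ozone-per-PV-bin lookup, which is
# then applied to an analyzed PV field to reconstruct a full ozone field.

def build_pv_ozone_map(pv_obs, o3_obs, bin_width=1.0):
    """Mean ozone per PV bin from sparse matched observations."""
    sums, counts = {}, {}
    for pv, o3 in zip(pv_obs, o3_obs):
        b = int(pv // bin_width)
        sums[b] = sums.get(b, 0.0) + o3
        counts[b] = counts.get(b, 0) + 1
    return {b: sums[b] / counts[b] for b in sums}

def map_ozone(pv_field, pv_o3_map, bin_width=1.0):
    """Estimate ozone everywhere from analyzed PV via the lookup table."""
    return [pv_o3_map[int(pv // bin_width)] for pv in pv_field]

# Synthetic: ozone increases with PV on this theta surface.
pv_obs = [1.2, 1.8, 2.3, 2.9, 3.5]
o3_obs = [1.0, 1.2, 2.0, 2.2, 3.1]
table = build_pv_ozone_map(pv_obs, o3_obs)
print(map_ozone([1.5, 2.5, 3.5], table))  # -> [1.1, 2.1, 3.1]
```

The noise and mis-matching issues the abstract describes show up here as PV bins populated by observations that did not actually share an airmass; the model study avoids that by using internally consistent PV and ozone fields.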

  18. Luminance requirements for lighted signage

    NASA Astrophysics Data System (ADS)

    Freyssinier, Jean Paul; Narendran, Nadarajah; Bullough, John D.

    2006-08-01

    Light-emitting diode (LED) technology is presently targeted to displace traditional light sources in backlighted signage. The literature shows that brightness and contrast are perhaps the two most important elements of a sign that determine its attention-getting capabilities and its legibility. Presently, there are no luminance standards for signage, and the practice of developing brighter signs to compete with signs in adjacent businesses is becoming more commonplace. Sign luminances in such cases may far exceed what people usually need for identifying and reading a sign. Furthermore, the practice of higher sign luminance than needed has many negative consequences, including higher energy use and light pollution. To move toward development of a recommendation for lighted signage, several laboratory human factors evaluations were conducted. A scale model of a storefront was used to present human subjects with a typical red channel-letter sign at luminances ranging from 8 cd/m2 to 1512 cd/m2 under four background luminances typical of nighttime outdoor and daytime inside-mall conditions (1, 100, 300, 1000 cd/m2), from three scaled viewing distances (30, 60, 340 ft), and either in isolation or adjacent to two similar signs. Subjects rated the brightness, acceptability, and ease of reading of the test sign for each combination of sign and background luminances and scaled viewing distances.

  19. Applying data fusion techniques for benthic habitat mapping and monitoring in a coral reef ecosystem

    NASA Astrophysics Data System (ADS)

    Zhang, Caiyun

    2015-06-01

    Accurate mapping and effective monitoring of benthic habitat in the Florida Keys are critical in developing management strategies for this valuable coral reef ecosystem. For this study, a framework was designed for automated benthic habitat mapping by combining multiple data sources (hyperspectral, aerial photography, and bathymetry data) and four contemporary imagery processing techniques (data fusion, Object-based Image Analysis (OBIA), machine learning, and ensemble analysis). In the framework, 1-m digital aerial photograph was first merged with 17-m hyperspectral imagery and 10-m bathymetry data using a pixel/feature-level fusion strategy. The fused dataset was then preclassified by three machine learning algorithms (Random Forest, Support Vector Machines, and k-Nearest Neighbor). Final object-based habitat maps were produced through ensemble analysis of outcomes from three classifiers. The framework was tested for classifying a group-level (3-class) and code-level (9-class) habitats in a portion of the Florida Keys. Informative and accurate habitat maps were achieved with an overall accuracy of 88.5% and 83.5% for the group-level and code-level classifications, respectively.
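The ensemble-analysis step can be sketched as a per-object majority vote across the three classifiers' habitat labels. The classifier outputs below are invented; a real implementation would also need a tie-breaking rule (e.g. deferring to the most accurate classifier).

```python
# Majority-vote ensemble over per-object habitat labels from three
# classifiers (Random Forest, SVM, k-NN in the study; the labels here are
# toy values). Counter.most_common(1) returns the most frequent label.
from collections import Counter

def ensemble_vote(predictions):
    """predictions: list of per-classifier label lists, one label per object."""
    fused = []
    for labels in zip(*predictions):
        fused.append(Counter(labels).most_common(1)[0][0])
    return fused

rf  = ["coral", "seagrass", "sand"]
svm = ["coral", "sand",     "sand"]
knn = ["algae", "seagrass", "sand"]
print(ensemble_vote([rf, svm, knn]))  # -> ['coral', 'seagrass', 'sand']
```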

  20. Urban Mapping and Growth Prediction using Remote Sensing and GIS Techniques, Pune, India

    NASA Astrophysics Data System (ADS)

    Sivakumar, V.

    2014-11-01

    This study aims to map the urban area in and around the Pune region between 1991 and 2010, and to predict its probable future growth using remote sensing and GIS techniques. Landsat TM and ETM+ satellite images from 1991, 2001 and 2010 were used for analysing the urban land use class. The urban class was extracted/mapped using a supervised classification technique with a maximum likelihood classifier. An accuracy assessment was carried out for the classified maps; the overall accuracy and Kappa statistic achieved were 86.33% and 0.76, respectively. A transition probability matrix and area change were obtained from the different classified images. A plug-in based on the Markov chain model algorithm was developed in QGIS software (open source) for predicting probable urban growth for the year 2021. Based on the available data set, the result shows that the urban area is expected to be considerably larger in 2021 than in 2010. This study provides insight into urban growth and aids subsequent infrastructure planning, management and decision-making.
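The Markov chain projection amounts to multiplying the current land-cover proportions by the transition probability matrix estimated from the two classified dates. The matrix and proportions below are invented for illustration (one step here stands for roughly one decade).

```python
# Markov chain land-cover projection sketch: next-state proportions are the
# current proportions times the transition probability matrix. Classes and
# probabilities are invented, not the study's values.

def project(state, transition):
    """state[i]: current fraction of class i; transition[i][j]: P(i -> j)."""
    n = len(state)
    return [sum(state[i] * transition[i][j] for i in range(n)) for j in range(n)]

# Classes: [urban, non-urban]; e.g. 10% of non-urban converts per step.
state_2010 = [0.30, 0.70]
transition = [[0.98, 0.02],
              [0.10, 0.90]]
state_2021 = project(state_2010, transition)
print([round(s, 3) for s in state_2021])  # -> [0.364, 0.636]
```

Iterating `project` gives multi-step forecasts; a spatially explicit version (as in the QGIS plug-in) additionally allocates the projected change to pixels using suitability or neighbourhood rules.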

  1. Validating Flood Mapping Products Using a Digital Elevation Model Comparison Technique

    NASA Astrophysics Data System (ADS)

    Fayne, J.

    2014-12-01

    This preliminary study assessed the validity of a pixel analysis elevation comparison technique and determined necessary steps for improvement. The pixel analysis sought to assess the probability of a flood occurring in a particular area by comparing the spatial extent of flood mapping products to the local elevation. The method was developed to determine if the physical relationship between elevation and floods as shown in satellite images is accurately represented in a flood mapping product. The data incorporated in this study are raster digital elevation model (DEM) tiles, a scene from Landsat 5 during a flood period, and a scene from the NASA DEVELOP Flood Disasters Team Flood Product. Pixels representing flooded areas were compared to the elevation height pixels using horizontal transect lines to create pixel value profiles across a 727 km transect of Vietnam and Cambodia. The elevation model comparison validates the Flood Product by depicting water presence in alignment with areas of low elevation. Initial findings indicate that the technique can be used to improve the assessment of flood mapping products in transects less than 10 km. Future research will focus on streamlining the pixel analysis process to yield comprehensive results for larger areas.

  2. Performance analysis of the Microsoft Kinect sensor for 2D Simultaneous Localization and Mapping (SLAM) techniques.

    PubMed

    Kamarudin, Kamarulzaman; Mamduh, Syed Muhammad; Shakaff, Ali Yeon Md; Zakaria, Ammar

    2014-12-05

    This paper presents a performance analysis of two open-source, laser scanner-based Simultaneous Localization and Mapping (SLAM) techniques (i.e., Gmapping and Hector SLAM) using a Microsoft Kinect to replace the laser sensor. Furthermore, the paper proposes a new system integration approach whereby a Linux virtual machine is used to run the open-source SLAM algorithms. The experiments were conducted in two different environments: a small room with no features and a typical office corridor with desks and chairs. Using the data logged from real-time experiments, each SLAM technique was simulated and tested with different parameter settings. The results show that the system is able to achieve real-time SLAM operation. The system implementation offers a simple and reliable way to compare the performance of a Windows-based SLAM algorithm with the algorithms typically implemented in a Robot Operating System (ROS). The results also indicate that certain modifications to the default laser scanner-based parameters can improve map accuracy. However, the limited field of view and range of the Kinect's depth sensor often cause the map to be inaccurate, especially in featureless areas; the Kinect is therefore not a direct replacement for a laser scanner, but it offers a feasible alternative for 2D SLAM tasks.

  3. Novel Techniques of Real-time Blood Flow and Functional Mapping: Technical Note

    PubMed Central

    KAMADA, Kyousuke; OGAWA, Hiroshi; SAITO, Masato; TAMURA, Yukie; ANEI, Ryogo; KAPELLER, Christoph; HAYASHI, Hideaki; PRUECKL, Robert; GUGER, Christoph

    2014-01-01

    There are two main approaches to intraoperative monitoring in neurosurgery. One approach is related to fluorescent phenomena and the other is related to oscillatory neuronal activity. We developed novel techniques to visualize blood flow (BF) conditions in real time, based on indocyanine green videography (ICG-VG) and the electrophysiological phenomenon of high gamma activity (HGA). We investigated the use of ICG-VG in four patients with moyamoya disease and two with arteriovenous malformation (AVM), and we investigated the use of real-time HGA mapping in four patients with brain tumors who underwent lesion resection with awake craniotomy. Real-time data processing of ICG-VG was based on perfusion imaging, which generated parameters including arrival time (AT), mean transit time (MTT), and BF of brain surface vessels. During awake craniotomy, we analyzed the frequency components of brain oscillation and performed real-time HGA mapping to identify functional areas. Processed results were projected on a wireless monitor linked to the operating microscope. After revascularization for moyamoya disease, AT and BF were significantly shortened and increased, respectively, suggesting hyperperfusion. Real-time fusion images on the wireless monitor provided anatomical, BF, and functional information simultaneously, and allowed the resection of AVMs under the microscope. Real-time HGA mapping during awake craniotomy rapidly indicated the eloquent areas of motor and language function and significantly shortened the operation time. These novel techniques, which we introduced, might improve the reliability of intraoperative monitoring and enable the development of rational and objective surgical strategies. PMID:25263624

  6. Micro-scale strain mapping technique: a tool to quantify strain partitioning during creep deformation

    NASA Astrophysics Data System (ADS)

    Quintanilla-Terminel, Alejandra; Zimmerman, Mark; Evans, Brian; Kohlstedt, David

    2016-04-01

    Several deformation mechanisms interact to accommodate plastic deformation. Quantifying the contribution of each to the total strain is necessary for establishing a better link between observed microstructures and mechanical data, as well as to allow more confident extrapolation from laboratory to natural conditions. In this contribution, we present the experimental and computational techniques involved in micro-scale strain mapping (MSSM). The MSSM technique relies on analyzing the relative displacement of initially regularly spaced markers after deformation. We present several microfabrication techniques that permit us to pattern various rocks with micrometric and nanometric metal markers, as well as the challenges faced in working at high temperatures and pressures. A Hough transform algorithm was used to detect the markers and automate the strain analysis as much as possible. The von Mises strain is calculated for a set of n points and their relative displacements, which allows us to map the strain at different length scales. We applied the MSSM technique to study strain partitioning during creep deformation of Carrara marble and San Carlos olivine at a confining pressure, Pc, of 300 MPa and homologous temperatures of 0.3 to 0.6. We measured the local strain and strain heterogeneity produced during creep deformation of split cylinders of Carrara marble under conventional triaxial loading to inelastic strains of 11 to 36% at a strain rate of 3×10⁻⁵ s⁻¹, Pc = 300 MPa and 400 °C < T < 700 °C. We conclude that the evolution of deformation structures in marble takes place over a substantial interval in strain and that the duration of this interval depends on strain rate, temperature, and pressure. Our first results on strain mapping of olivine deformed at T = 1150 °C and Pc = 300 MPa demonstrate promise for characterizing intragranular strain and better defining the contribution of grain boundary sliding to the total strain.
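The strain computation behind the MSSM approach can be sketched as follows, assuming small strains and central differences on a 3×3 patch of marker displacements; the displacement field below is a synthetic simple shear, not experimental data.

```python
import math

# Sketch of local strain from marker displacements, plus the von Mises
# equivalent strain used to map strain magnitude (synthetic displacements).

def von_mises_strain(exx, eyy, exy, ezz=0.0):
    """Equivalent (von Mises) strain: sqrt(2/3 * e_dev : e_dev)."""
    m = (exx + eyy + ezz) / 3.0            # mean (volumetric) strain
    dev = ((exx - m) ** 2 + (eyy - m) ** 2 + (ezz - m) ** 2 + 2.0 * exy ** 2)
    return math.sqrt(2.0 / 3.0 * dev)

def local_strain(ux, uy, h):
    """Strain at the centre of a 3x3 marker-displacement patch.

    ux, uy: 3x3 arrays (row = y index) of marker displacements.
    h: undeformed marker spacing.
    """
    exx = (ux[1][2] - ux[1][0]) / (2 * h)          # du_x/dx
    eyy = (uy[2][1] - uy[0][1]) / (2 * h)          # du_y/dy
    exy = 0.5 * ((ux[2][1] - ux[0][1]) / (2 * h)   # 0.5*(du_x/dy + du_y/dx)
                 + (uy[1][2] - uy[1][0]) / (2 * h))
    return exx, eyy, exy

# Simple shear: u_x = gamma * y, u_y = 0, markers spaced h = 1.
gamma = 0.3
ux = [[gamma * y for _ in range(3)] for y in range(3)]
uy = [[0.0] * 3 for _ in range(3)]
exx, eyy, exy = local_strain(ux, uy, 1.0)
eq = von_mises_strain(exx, eyy, exy)     # = gamma / sqrt(3) for simple shear
```

Evaluating this at every marker patch yields the strain map; in practice the marker positions would come from the Hough-transform detection step.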

  8. Making Accurate Topographic Maps of the Schoolyard Using Ideas and Techniques Learned and Adapted from Multi-beam Sonar Mapping of the Arctic Ocean

    NASA Astrophysics Data System (ADS)

    Fuerst, S. I.; Roberts, J. D.

    2010-12-01

    Having participated in a University of Rhode Island Project Armada expedition to join the University of New Hampshire Center for Coastal and Oceanographic Studies in making multi-beam sonar contour maps of the Arctic Ocean floor, I was able to bring the principles learned from this trip to my earth science high school students and create a project in our "mapping the earth" unit. Students learn basic surveying techniques and create authentic, accurately detailed topographic maps of the schoolyard. Models of their maps are then constructed of either Styrofoam or wood, which enables the students to make the transition from a 2-dimensional map to a 3-dimensional representation. Even though our maps are created using sticks, line levels, compasses and GPS, the scientific concepts of using location and elevation data to draw contour lines are identical to those used in underwater mapping. Once the students understand the science in mapping and creating contour maps to scale on graph paper by hand, they are easily able to relate this knowledge to what I was doing onboard ship using multi-beam sonar and computer mapping programs. We would like to share the lab and techniques that we have developed to make this activity possible with minimal materials and simple technology. As a background extension, it is also possible to replicate sonar measurements using an aquarium, food coloring, and a surface grid to map the topography of a teacher-created landscape on the aquarium bottom.
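The hand-contouring rule the students apply between two survey stations, locating where a contour elevation crosses by linear interpolation, can be sketched as follows (the stations and elevations are made up for illustration):

```python
# Sketch of linear interpolation of a contour crossing between two stations.

def contour_crossing(d0, z0, d1, z1, z_contour):
    """Distance along the tape from station 0 at which elevation z_contour
    is crossed, assuming a constant slope between the two stations."""
    if not (min(z0, z1) <= z_contour <= max(z0, z1)):
        return None                       # contour does not cross this leg
    if z1 == z0:
        return d0                         # flat leg: crossing is degenerate
    t = (z_contour - z0) / (z1 - z0)
    return d0 + t * (d1 - d0)

# Stations at 0 m and 20 m along a tape, elevations 101.0 m and 105.0 m:
# the 103 m contour crosses halfway along the leg.
d = contour_crossing(0.0, 101.0, 20.0, 105.0, 103.0)   # -> 10.0
```

The same interpolation underlies both the schoolyard exercise and gridded bathymetric contouring; only the source of the elevation samples differs.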

  9. Measurement of breast lesion display luminance and overall image display luminance relative to optimum luminance for contrast perception

    NASA Astrophysics Data System (ADS)

    Rawashdeh, Mohammad; Lee, Warwick; Brennan, Patrick; Reed, Warren; McEntee, Mark; Bourne, Roger

    2011-03-01

    Introduction: To minimize fatigue due to eye adaptation and maximize contrast perception, it has been suggested that lesion luminance be matched to overall image luminance to perceive the greatest number of grey level differences. This work examines whether lesion display luminance matches the overall image and breast tissue display luminance and whether these factors are positioned within the optimum luminance for maximal contrast sensitivity. Methods: A set of 42 mammograms, collected from 21 patients and containing 15 malignant and 6 benign lesions, was used to assess overall image luminance. Each image displayed on the monitor was divided into 16 equal regions. The luminance at the midpoint of each region was measured using a calibrated photometer and the overall image luminance was calculated. Average breast tissue display luminance was calculated from the subset of regions containing only breast tissue. Lesion display luminance was compared with both overall image display luminance and average breast tissue display luminance. Results: Statistically significant differences (p<0.0001) were noted between overall image display luminance (4.3 ± 0.7 cd/m²) and lesion display luminance (15.0 ± 6.8 cd/m²); and between average breast tissue display luminance (6.8 ± 1.3 cd/m²) and lesion display luminance (p<0.002). Conclusions: Lesion luminance was significantly higher than the overall image and breast tissue luminance. Luminance of lesions and general breast tissue fell below the optimum luminance range for contrast perception. Breast lesion detection sensitivity and specificity may be enhanced by use of brighter monitor displays.
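The luminance summary described in the Methods, dividing the display into 16 regions, measuring midpoint luminance, and averaging overall and over breast-only regions, can be sketched as follows; the cd/m² readings and region indices are invented.

```python
# Sketch of the 16-region luminance averaging step (invented readings).

def mean_luminance(readings, keep=None):
    """Mean of midpoint luminance readings; 'keep' optionally restricts
    the average to a subset of region indices (e.g. breast-only regions)."""
    if keep is not None:
        readings = [readings[i] for i in keep]
    return sum(readings) / len(readings)

# Hypothetical photometer readings (cd/m^2) at the 16 region midpoints.
readings = [3.9, 4.1, 4.4, 4.0, 6.5, 7.0, 6.8, 7.3,
            4.2, 4.5, 3.8, 4.1, 6.9, 7.1, 4.3, 4.0]
overall = mean_luminance(readings)                             # whole image
breast  = mean_luminance(readings, keep=[4, 5, 6, 7, 12, 13])  # tissue only
```

Comparing these two means against a measured lesion luminance reproduces the comparisons reported in the Results.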

  10. Mapping forest vegetation with ERTS-1 MSS data and automatic data processing techniques

    NASA Technical Reports Server (NTRS)

    Messmore, J.; Copeland, G. E.; Levy, G. F.

    1975-01-01

    This study was undertaken with the intent of elucidating the forest mapping capabilities of ERTS-1 MSS data when analyzed with the aid of LARS' automatic data processing techniques. The site for this investigation was the Great Dismal Swamp, a 210,000 acre wilderness area located on the Middle Atlantic coastal plain. Due to inadequate ground truth information on the distribution of vegetation within the swamp, an unsupervised classification scheme was utilized. Initially pictureprints, resembling low resolution photographs, were generated in each of the four ERTS-1 channels. Data found within rectangular training fields was then clustered into 13 spectral groups and defined statistically. Using a maximum likelihood classification scheme, the unknown data points were subsequently classified into one of the designated training classes. Training field data was classified with a high degree of accuracy (greater than 95 percent), and progress is being made towards identifying the mapped spectral classes.
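The maximum likelihood assignment step described above can be sketched as follows. Each spectral class is modelled as a Gaussian (diagonal covariance here for brevity) and a pixel is assigned to the class with the highest log-likelihood; the class statistics are invented, not LARS output.

```python
import math

# Sketch of maximum likelihood classification of multispectral pixels.

def log_likelihood(x, mean, var):
    """Log of a diagonal-covariance Gaussian density at pixel vector x."""
    return sum(-0.5 * math.log(2 * math.pi * v) - (xi - m) ** 2 / (2 * v)
               for xi, m, v in zip(x, mean, var))

def classify(x, classes):
    """Return the class label maximising the log-likelihood of x."""
    return max(classes, key=lambda c: log_likelihood(x, *classes[c]))

# Two spectral classes in a 2-band space: (mean per band, variance per band).
classes = {
    "water":  ([10.0, 5.0],  [4.0, 4.0]),
    "forest": ([40.0, 60.0], [25.0, 25.0]),
}
label = classify([12.0, 7.0], classes)    # falls near the water statistics
```

In the study the 13 class statistics come from clustering the training fields; the full method also uses complete covariance matrices rather than the diagonal simplification above.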

  11. A New Active Cavitation Mapping Technique for Pulsed HIFU Applications – Bubble Doppler

    PubMed Central

    Li, Tong; Khokhlova, Tatiana; Sapozhnikov, Oleg; Hwang, Joo Ha; O’Donnell, Matthew

    2015-01-01

    In this work, a new active cavitation mapping technique for pulsed high-intensity focused ultrasound (pHIFU) applications termed bubble Doppler is proposed and its feasibility tested in tissue-mimicking gel phantoms. pHIFU therapy uses short pulses, delivered at low pulse repetition frequency, to cause transient bubble activity that has been shown to enhance drug and gene delivery to tissues. The current gold standard for detecting and monitoring cavitation activity during pHIFU treatments is passive cavitation detection (PCD), which provides minimal information on the spatial distribution of the bubbles. B-mode imaging can detect hyperecho formation, but has very limited sensitivity, especially to small, transient microbubbles. The bubble Doppler method proposed here is based on a fusion of the adaptations of three Doppler techniques that had been previously developed for imaging of ultrasound contrast agents – color Doppler, pulse inversion Doppler, and decorrelation Doppler. Doppler ensemble pulses were interleaved with therapeutic pHIFU pulses using three different pulse sequences and standard Doppler processing was applied to the received echoes. The information yielded by each of the techniques on the distribution and characteristics of pHIFU-induced cavitation bubbles was evaluated separately, and found to be complementary. The unified approach - bubble Doppler – was then proposed to both spatially map the presence of transient bubbles and to estimate their sizes and the degree of nonlinearity. PMID:25265178
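One ingredient of the bubble Doppler approach, the color Doppler estimate of the mean inter-pulse phase shift across a slow-time ensemble, can be sketched with the standard lag-1 autocorrelator. The echo ensemble below is synthetic, and this is only one of the three Doppler components the paper fuses.

```python
import cmath

# Sketch of the lag-1 autocorrelation (color Doppler) phase estimator
# applied to a slow-time ensemble of complex echoes (synthetic data).

def lag1_phase(ensemble):
    """Mean inter-pulse phase shift (radians) of a slow-time echo ensemble."""
    r1 = sum(a.conjugate() * b for a, b in zip(ensemble, ensemble[1:]))
    return cmath.phase(r1)

# Synthetic ensemble: a scatterer imposing 0.4 rad of phase per pulse,
# as a moving bubble or streaming fluid would.
shift = 0.4
echoes = [cmath.exp(1j * shift * n) for n in range(8)]
est = lag1_phase(echoes)                 # recovers the 0.4 rad shift
```

Large phase shifts or loss of correlation between pulses, rather than the velocity itself, are what flag transient bubble activity in this application.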

  12. Tropical Cyclone Vulnerability Mapping Using Geospatial Techniques: Application to a Coastal Upazila in Bangladesh

    NASA Astrophysics Data System (ADS)

    Hoque, M. A. A.; Phinn, S. R.; Roelfsema, C. M.; Childs, I.

    2015-12-01

    Cyclones are among the most catastrophic natural disasters. Globally, many coastal regions are vulnerable to different categories of cyclones. In Bangladesh, disasters from tropical cyclones are annual occurrences in coastal areas, and the intensity and extent of the damage they cause are very high. An appropriate mapping approach is essential for producing detailed vulnerability assessments that deliver useful information for reducing the impacts of cyclones on people, property and environment. The present study developed and tested a vulnerability mapping approach for tropical cyclone impacts in Sarankhola upazila, a 151 km2 local government area located in coastal Bangladesh. The study applied the approach by integrating remote sensing, field data and multi-criteria evaluation at regional scales covering <1000 km2. Seven criteria related to cyclone impacts were considered: elevation, slope, geomorphology, proximity to coastline, proximity to cyclone track, land use and population density. Thematic raster map layers were prepared for every criterion and weighted using the Analytical Hierarchy Process (AHP) with sensitivity analysis. The Weighted Linear Combination (WLC) technique was used to overlay the standardized criterion maps with their weights to produce the vulnerability map. Our results indicated that 15% of the study area had very high vulnerability, mostly close to the river and in densely populated areas, with 40% classified as high vulnerability, largely on cropland. Around 25% of the area was classified as moderately vulnerable, covering most of the forests; the low and very low vulnerability areas account for 12% and 8%, respectively. The approach provided very promising results, which were verified by field survey, demonstrating its strong applicability for assessing the vulnerability of coastal Bangladesh to tropical cyclones.
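The AHP weighting and WLC overlay steps can be sketched as follows in a three-criterion toy case; the pairwise judgements and cell scores are invented, and the study's seven criteria and consistency checking are omitted.

```python
# Sketch of AHP priority weights (column-normalisation approximation)
# followed by a Weighted Linear Combination for one raster cell.

def ahp_weights(pairwise):
    """Approximate AHP priority vector: normalise each column of the
    pairwise comparison matrix, then average across each row."""
    n = len(pairwise)
    col_sums = [sum(pairwise[i][j] for i in range(n)) for j in range(n)]
    return [sum(pairwise[i][j] / col_sums[j] for j in range(n)) / n
            for i in range(n)]

def wlc(cell_scores, weights):
    """Weighted Linear Combination of standardised criterion scores (0-1)."""
    return sum(s * w for s, w in zip(cell_scores, weights))

# Invented judgements: criterion 1 is 3x as important as criterion 2, etc.
pairwise = [[1.0, 3.0, 5.0],
            [1 / 3, 1.0, 3.0],
            [1 / 5, 1 / 3, 1.0]]
w = ahp_weights(pairwise)                # sums to 1, first weight largest
vulnerability = wlc([0.9, 0.4, 0.2], w)  # composite score for one cell
```

Applying `wlc` to every cell of the standardized criterion rasters produces the vulnerability surface, which is then sliced into the very high to very low classes.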

  13. Luminous Phenomena - A Scientific Investigation of Anomalous Luminous Atmospheric Phenomena

    NASA Astrophysics Data System (ADS)

    Teodorani, M.

    2003-12-01

    Anomalous atmospheric luminous phenomena reoccur in several locations on Earth, in the form of multi-color light balls characterized by large dimensions, erratic motion, long duration and a correlated electromagnetic field. This book, organized as a selection of the author's technical and popularizing papers and seminars, describes and discusses all the efforts made by the author (an astrophysicist) over 10 years, through several missions and a massive data analysis, to obtain a scientific explanation of this kind of anomaly, in particular the Hessdalen anomaly in Norway. The following topics are treated in the book: a) a geographic archive of the areas of Earth where such phenomena are known to reoccur most often; b) the observational techniques of astrophysical kind that have been used to acquire the data; c) the main scientific results obtained so far; d) physical interpretation and the natural hypothesis vs. the ETV hypothesis; e) historical and chronological issues; f) the importance of harnessing new energy sources; g) the importance of keeping distance from any kind of "ufology". An unpublished chapter is entirely devoted to a detailed scientific investigation project of light phenomena reoccurring on Lake Ontario; the chosen new-generation multi-wavelength sensing instrumentation that is planned to be used in future missions in that specific area is described together with the scientific rationale and planned procedures. The main results obtained in other areas of the world, such as the Arizona desert, USA, and the Sibillini Mountains, Italy, are also briefly mentioned. One chapter is entirely dedicated to extensive abstracts of technical papers by the author on this specific subject. The book is accompanied by a rich source of bibliographic references.

  14. Equivalent Dynamic Stiffness Mapping technique for identifying nonlinear structural elements from frequency response functions

    NASA Astrophysics Data System (ADS)

    Wang, X.; Zheng, G. T.

    2016-02-01

    A simple and general Equivalent Dynamic Stiffness Mapping technique is proposed for identifying the parameters or the mathematical model of a nonlinear structural element with steady-state primary harmonic frequency response functions (FRFs). The Equivalent Dynamic Stiffness is defined as the complex ratio between the internal force and the displacement response of the unknown element. Obtained from the test data of response frequencies and amplitudes, the real and imaginary parts of the Equivalent Dynamic Stiffness are plotted as discrete points in a three-dimensional space over the displacement amplitude and the frequency; these are called the real and the imaginary Equivalent Dynamic Stiffness maps, respectively. These points form a repeatable surface, as the Equivalent Dynamic Stiffness is only a function of the corresponding data, as derived in the paper. The mathematical model of the unknown element can then be obtained by surface-fitting these points with special functions selected from a priori knowledge of the nonlinear type, or with ordinary polynomials if the type of nonlinearity is not known in advance. An important merit of this technique is its capability of dealing with strong nonlinearities exhibiting complicated frequency response behaviors such as jumps and breaks in resonance curves. In addition, this technique can also greatly simplify the test procedure. Besides eliminating the need to pre-identify the underlying linear parameters, the method uses the measured data of excitation forces and responses without requiring strict control of the excitation force during the test. The proposed technique is demonstrated and validated with four classical single-degree-of-freedom (SDOF) numerical examples and one experimental example. An application of this technique for identification of nonlinearity from multiple-degree-of-freedom (MDOF) systems is also illustrated.
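The computation of a single Equivalent Dynamic Stiffness point, the complex force/displacement ratio at one measured amplitude and frequency, can be sketched as follows. The values are synthetic and correspond to a linear spring-damper, for which the real part should recover k and the imaginary part c·omega, which serves as a sanity check.

```python
import cmath

# Sketch of one Equivalent Dynamic Stiffness point from a steady-state
# primary-harmonic measurement (synthetic linear spring-damper data).

def equivalent_dynamic_stiffness(force_amp, disp_amp, disp_phase):
    """Complex force/displacement ratio at the excitation frequency.

    disp_phase: phase of the displacement relative to the force
    (radians, negative = lagging).
    """
    X = disp_amp * cmath.exp(1j * disp_phase)
    return force_amp / X

k, c, omega = 1.0e4, 20.0, 50.0           # linear element, omega in rad/s
K_true = complex(k, c * omega)            # true dynamic stiffness
X = 1.0 / K_true                          # response to unit force amplitude
est = equivalent_dynamic_stiffness(1.0, abs(X), cmath.phase(X))
# est.real ~ k and est.imag ~ c*omega; collecting such points over many
# (amplitude, frequency) pairs builds the real and imaginary stiffness maps.
```

For a nonlinear element the points would instead trace an amplitude-dependent surface, which is what the surface-fitting step identifies.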

  15. Spatial Field Variability Mapping of Rice Crop using Clustering Technique from Space Borne Hyperspectral Data

    NASA Astrophysics Data System (ADS)

    Moharana, S.; Dutta, S.

    2015-12-01

    Precision farming refers to field-specific management of an agricultural crop at a spatial scale with the aim of obtaining the highest achievable yield; to achieve this, spatial information on field variability is essential. Spatial variability occurring within an agricultural field can be revealed by employing spectral techniques on hyperspectral imagery rather than multispectral imagery. However, an advanced algorithm needs to be developed to make full use of the rich information content in hyperspectral data. In the present study, the potential of hyperspectral data acquired from a space platform was examined to map the field variation of paddy crop and discriminate its species. The high-dimensional Hyperion L1R product, comprising 242 narrow spectral bands at 30 m ground resolution and acquired over Assam, India (30 September and 3 October 2014), underwent the necessary pre-processing steps, followed by geometric correction using the Hyperion L1GST product. Finally, an atmospherically corrected and spatially reduced image consisting of 112 bands was obtained. By employing an advanced clustering algorithm, 12 different clusters of spectral waveforms of the crop were generated from six paddy fields for each image. The findings showed that some clusters were well discriminated, representing specific rice genotypes, while other clusters were mixed and had to be treated as a single rice genotype. As the vegetation index (VI) is the best indicator for vegetation mapping, three ratio-based VI maps were also generated and unsupervised classification was performed on them. The 12 paddy-crop clusters so obtained were mapped spatially against the derived VI maps. From these findings, heterogeneity was clearly captured in one of the six rice plots (plot no. 1) and was also observed in the remaining five plots; the degree of heterogeneity was highest in plot no. 6 compared with the other plots. Subsequently, spatial variability of paddy field was
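A ratio-based vegetation index of the kind used to support the cluster maps can be sketched with NDVI computed from a red and a near-infrared band; the reflectance values below are invented, not Hyperion measurements.

```python
# Sketch of a ratio-based vegetation index (NDVI) per pixel.

def ndvi(red, nir):
    """Normalised Difference Vegetation Index, in [-1, 1]."""
    return (nir - red) / (nir + red)

# Invented (red, nir) reflectance pairs for three pixels: dense canopy,
# moderate cover, and sparse/stressed cover.
pixels = [(0.05, 0.45), (0.08, 0.40), (0.20, 0.22)]
vi_map = [ndvi(r, n) for r, n in pixels]
# Dense paddy canopy -> high NDVI; sparse or stressed cover -> low NDVI.
```

Mapping the spectral-waveform clusters onto such VI rasters is what exposes the within-field heterogeneity the study reports.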

  16. Mapping Glauconite Units Using Remote Sensing Techniques in North East of Iran

    NASA Astrophysics Data System (ADS)

    Ahmadirouhani, R.; Samiee, S.

    2014-10-01

    Glauconite is a greenish ferric-iron silicate mineral with a micaceous structure, characteristically formed in shallow marine environments. Glauconite has been used as a pigmentation agent for oil paint, as a contaminant remover in environmental studies, as a source of potassium in plant fertilizers, and in other industries. The Koppeh-dagh basin extends across Iran, Afghanistan and Turkmenistan, and glauconite units exist within it. In this research, to enhance and map glauconitic units in the Koppeh-dagh structural zone in north-east Iran, remote sensing techniques such as Spectral Angle Mapper (SAM) classification, band ratios and band composition methods were applied to SPOT, ASTER and Landsat data in three steps.
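The Spectral Angle Mapper rule mentioned above can be sketched as follows: a pixel spectrum is assigned to the reference (endmember) spectrum with which it subtends the smallest angle, a measure that is insensitive to overall brightness because it ignores vector length. The reference spectra below are invented, not glauconite signatures.

```python
import math

# Sketch of Spectral Angle Mapper (SAM) classification (invented spectra).

def spectral_angle(a, b):
    """Angle (radians) between two spectra treated as vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return math.acos(max(-1.0, min(1.0, dot / (na * nb))))

def sam_classify(pixel, references):
    """Label of the reference spectrum at minimum spectral angle."""
    return min(references, key=lambda k: spectral_angle(pixel, references[k]))

references = {
    "glauconite": [0.10, 0.25, 0.30, 0.20],
    "background": [0.30, 0.30, 0.30, 0.30],
}
# A brighter pixel with the same spectral shape still maps to glauconite.
label = sam_classify([0.20, 0.50, 0.60, 0.40], references)
```

In practice a maximum-angle threshold is usually added so that pixels far from every reference remain unclassified.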

  17. A Study on Brain Mapping Technique Based on Hierarchical Decomposition Analysis

    NASA Astrophysics Data System (ADS)

    Oura, Kunihiko

    In this paper, a brain functional mapping method based on hierarchical decomposition analysis (HDA) is proposed. HDA is a multi-dimensional AR modeling method well known for its validity in detecting temporal lobe seizures. The author transforms the estimated AR model into a transfer function from the inner blood-flow signal to the cerebral cortex. The signal for HDA is the oxygenated hemoglobin density (HbO), measured by near-infrared spectroscopy (NIRS). By comparing two tasks that use arithmetic sense, the difference in brain activity becomes clear with the proposed technique.

  18. Brain-Mapping Techniques for Evaluating Poststroke Recovery and Rehabilitation: A Review

    PubMed Central

    Eliassen, James C.; Boespflug, Erin L.; Lamy, Martine; Allendorfer, Jane; Chu, Wen-Jang; Szaflarski, Jerzy P.

    2009-01-01

    Brain-mapping techniques have proven to be vital in understanding the molecular, cellular, and functional mechanisms of recovery after stroke. This article briefly summarizes the current molecular and functional concepts of stroke recovery and addresses how various neuroimaging techniques can be used to observe these changes. The authors provide an overview of various techniques including diffusion-tensor imaging (DTI), magnetic resonance spectroscopy (MRS), ligand-based positron emission tomography (PET), single-photon emission computed tomography (SPECT), regional cerebral blood flow (rCBF) and regional metabolic rate of glucose (rCMRglc) PET and SPECT, functional magnetic resonance imaging (fMRI), near infrared spectroscopy (NIRS), electroencephalography (EEG), magnetoencephalography (MEG), and transcranial magnetic stimulation (TMS). Discussion in the context of poststroke recovery research informs about the applications and limitations of the techniques in the area of rehabilitation research. The authors also provide suggestions on using these techniques in tandem to more thoroughly address the outstanding questions in the field. PMID:19008203

  19. Induced contrast asynchronies may be useful for luminance photometry.

    PubMed

    Shapiro, Arthur G; D'Antona, Anthony; Smith, Jared B; Belano, Lindsay A; Charles, Justin P

    2004-01-01

    Shapiro et al. (2004) introduced a new visual effect (the induced contrast asynchrony) that demonstrates a perceptual separation between the response to a modulated light and the response to contrast of the light relative to background. The effect is composed of two physically identical disks, one surrounded by a dark annulus and the other by a light annulus. The luminance levels of both central disks were modulated in time, producing a stimulus with in-phase luminance modulation and antiphase contrast modulation. Observers primarily perceived the disks to be modulating asynchronously (i.e. they perceived the contrast), but at low temporal frequencies could also track the luminance level. Here we document that the induced contrast asynchrony disappears when the surrounds are achromatic and the center lights are modulated near the equiluminant axis. Observers viewed 1-deg-diameter disks embedded in 2-deg-diameter achromatic surrounds. The chromaticity of the disks was modulated in time (1 Hz) along lines in an S versus Luminance cardinal color plane and an L-M versus Luminance cardinal color plane; observers responded as to whether the modulation appeared in phase. For all observers and both color planes, the lights appeared in phase most frequently at angles near the standard observer's equiluminant line and out of phase at angles further away from that line. Observers differed in the range of angles that produce the appearance of in-phase modulation. The results suggest that induced contrast asynchronies may be useful as a technique for equating luminance of disparate lights.

  20. Link-specific adaptation in the luminance and chromatic channels.

    PubMed

    Ahn, S J; MacLeod, D I

    1993-11-01

    We present evidence that adaptation occurs in the separate links that connect the receptors to the luminance and the chromatic channels. The relative effectiveness of the L and M cone inputs to the luminance and red/green chromatic channels was determined by using, respectively, heterochromatic flicker photometry and a cancellation technique which maintained a pure yellow. The findings provide evidence for a link-specific adaptation model, wherein one receptor system provides two separate links to the luminance and chromatic channels and the outputs of an individual cone to these two postreceptoral channels can be separately attenuated or weighted at each of these links by colored adapting lights. One line of evidence for link-specific adaptation is that colored adapting fields selectively suppress L and M cone inputs to the red/green chromatic channel by a smaller factor than the luminance channel. A second line of evidence is that there is not only a magnitude difference but also a dynamic difference between adaptive processes operating in the luminance and chromatic channels: the luminance channel has a faster gain change and recovery from adaptation than does the chromatic channel. The results, together with other evidence, make it plausible that an important component of light adaptation in cone vision occurs at the cone-bipolar synapse.

  1. The creation of a digital soil map for Cyprus using decision-tree classification techniques

    NASA Astrophysics Data System (ADS)

    Camera, Corrado; Zomeni, Zomenia; Bruggeman, Adriana; Noller, Joy; Zissimos, Andreas

    2014-05-01

    Considering the increasing threats soils are experiencing, especially in semi-arid Mediterranean environments like Cyprus (erosion, contamination, sealing and salinisation), producing a high-resolution, reliable soil map is essential for further soil conservation studies. This study aims to create a 1:50,000 soil map covering the area under the direct control of the Republic of Cyprus (5,760 km2). The study consists of two major steps. The first is the creation of a raster database of predictive variables selected according to the scorpan formula (McBratney et al., 2003). Of particular interest is the possibility of using, as soil properties, data coming from three older island-wide soil maps and the recently published geochemical atlas of Cyprus (Cohen et al., 2011). Ten highly characterizing elements were selected and used as predictors in the present study. For the other factors the usual variables were used: temperature and aridity index for climate; total loss on ignition, vegetation and forestry type maps for organic matter; the DEM and related relief derivatives (slope, aspect, curvature, landscape units) for relief; bedrock, surficial geology and geomorphology (Noller, 2009) for parent material and age; and a sub-watershed map to better bound location related to parent material sources. In the second step, the digital soil map is created using the Random Forests package in R. Random Forests is a decision-tree classification technique in which many trees, instead of a single one, are developed and compared to increase the stability and reliability of the prediction. The model is trained and verified on areas where 1:25,000 published soil maps obtained from field work are available, and it is then applied for predictive mapping to the other areas. Preliminary results obtained in a small area in the plain around the city of Lefkosia, where eight different soil classes are present, show very good capacities of the method. The Random Forest approach leads to reproduce soil
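
The decision-tree ensemble idea described above can be sketched in a few lines. The study used the Random Forests package in R; the following is an equivalent illustrative sketch in Python with scikit-learn, on synthetic stand-in data (the covariates, class count and sample sizes are placeholders, not the study's data):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in for the scorpan predictor stack: each row is a raster
# cell, each column a covariate (e.g. elevation, slope, aridity index,
# geochemical element concentration). Labels stand in for soil classes.
X_train = rng.normal(size=(500, 6))
y_train = rng.integers(0, 8, size=500)      # eight classes, as in the pilot area

# Many trees are grown on bootstrap samples and their votes aggregated,
# which is what stabilises the prediction relative to a single tree.
clf = RandomForestClassifier(n_estimators=500, oob_score=True, random_state=0)
clf.fit(X_train, y_train)

# Predict the soil class of unmapped cells from the same covariate stack.
X_new = rng.normal(size=(10, 6))
predicted_classes = clf.predict(X_new)
```

The out-of-bag score (`clf.oob_score_`) plays the role of the train/verify split mentioned in the abstract: each tree is validated on the samples it never saw.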

  2. Robust biological parametric mapping: an improved technique for multimodal brain image analysis

    NASA Astrophysics Data System (ADS)

    Yang, Xue; Beason-Held, Lori; Resnick, Susan M.; Landman, Bennett A.

    2011-03-01

    Mapping the quantitative relationship between structure and function in the human brain is an important and challenging problem. Numerous volumetric, surface, region of interest and voxelwise image processing techniques have been developed to statistically assess potential correlations between imaging and non-imaging metrics. Recently, biological parametric mapping has extended the widely popular statistical parametric approach to enable application of the general linear model to multiple image modalities (both for regressors and regressands) along with scalar valued observations. This approach offers great promise for direct, voxelwise assessment of structural and functional relationships with multiple imaging modalities. However, as presented, the biological parametric mapping approach is not robust to outliers and may lead to invalid inferences (e.g., artifactual low p-values) due to slight mis-registration or variation in anatomy between subjects. To enable widespread application of this approach, we introduce robust regression and robust inference in the neuroimaging context of application of the general linear model. Through simulation and empirical studies, we demonstrate that our robust approach reduces sensitivity to outliers without substantial degradation in power. The robust approach and associated software package provides a reliable way to quantitatively assess voxelwise correlations between structural and functional neuroimaging modalities.
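
The core idea of the robust variant, a voxelwise regression that is not dragged by a single mis-registered subject, can be sketched as follows. This is an illustrative single-voxel example using a Huber estimator as one common robust choice, with synthetic data; it is not the authors' implementation:

```python
import numpy as np
from sklearn.linear_model import HuberRegressor, LinearRegression

rng = np.random.default_rng(1)

# Hypothetical single-voxel illustration: regress a functional metric on a
# structural metric across 40 subjects; one subject is badly mis-registered.
x = rng.normal(size=(40, 1))
x[0, 0] = 3.0                                   # high-leverage point
y = 2.0 * x[:, 0] + rng.normal(scale=0.1, size=40)
y[0] = -20.0                                    # artifactual outlier

ols = LinearRegression().fit(x, y)
robust = HuberRegressor().fit(x, y)             # downweights large residuals

# The robust slope stays near the true value of 2.0, while ordinary least
# squares is dragged away by the single outlier, which in the mapping
# context would produce an artifactual p-value at that voxel.
```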

  3. Proceedings of a workshop on digital mapping techniques; methods for geologic map data capture, management, and publication - June 2 - 5, 1997, Lawrence, Kansas

    USGS Publications Warehouse

    Soller, David R.

    1997-01-01

    Introduction: From June 2-5, 1997, selected technical representatives of the USGS and State geological surveys participated in the 'AASG/USGS Digital Mapping Techniques' workshop in Lawrence, Kansas. The workshop was initiated by the AASG/USGS Data Capture Working Group, and was hosted by the Kansas Geological Survey (KGS). With a focus on methods for data capture and digital map production, the goal was to help move the state surveys and the USGS toward development of more cost-effective, flexible, and useful systems for digital mapping and GIS analysis.

  4. Mapping Inundation Uncertainty with a Standard Score (Z-Score) Technique

    NASA Astrophysics Data System (ADS)

    Hadley, B. C.; Schmid, K. A.; Waters, K. J.; Marcy, D. C.

    2010-12-01

    Vertical error in the topographic data is the most important factor affecting the accuracy of single value surface model inundation maps (NRC, 2009). A single value surface model, often referred to as a ‘bathtub’ model, requires two primary topographic input variables: (1) the water surface (i.e. tidal datum + inundation level), and (2) the ground elevation. Unfortunately, both variables include spatially varying vertical error that introduces uncertainty into the resultant map for a given inundation scenario. More sophisticated hydraulic and geomorphic models have their own error budgets, which can be quite complex depending on model assumptions. Standard scores, or z-scores, measure the number of standard deviations an observation falls above or below the mean. This investigation employs z-scores to map the uncertainty introduced by the propagated error associated with the topographic variables. The technique permits greater flexibility than existing uncertainty methods which map the horizontal extension of the elevation data at the 95% confidence level. The vertical error in the water surface variable is due to uncertainties and spatial variability in the hydrodynamic models which drive the tidal datum conversions. The National Geodetic Survey’s (NGS) Vertical Datum Transformation Tool (VDatum) software can be used to convert between tidally referenced and orthometric elevations, but depending on location, results in errors on the order of 5-20 cm. An additional source of uncertainty is the elevation data itself. Most inundation mapping applications employ Digital Elevation Models (DEM) derived from topographic lidar data. Although lidar is among the most accurate large area elevation collection techniques, it has limitations in certain land cover types (e.g. forest or estuarine marsh), and its vertical accuracy can vary both within and between collections. To quantify this variability, accuracy assessments are performed to determine the vertical root mean
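
The z-score construction the abstract describes can be sketched in a few lines; the DEM values and error magnitudes below are illustrative placeholders, not figures from the study:

```python
import numpy as np

# Hypothetical DEM tile (metres above datum) and a single-value water surface
# (tidal datum + inundation increment) for one scenario.
dem = np.array([[0.2, 0.5],
                [0.9, 1.4]])
water_surface = 1.0

# Propagated vertical error: lidar DEM RMSE and VDatum transformation error
# combined in quadrature (illustrative values).
sigma_dem = 0.15        # m
sigma_vdatum = 0.10     # m
sigma_total = np.hypot(sigma_dem, sigma_vdatum)

# z-score: how many standard deviations each cell sits below the water
# surface. Large positive z -> confidently wet, large negative z ->
# confidently dry, |z| near zero -> the wet/dry call is uncertain.
z = (water_surface - dem) / sigma_total
```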

  5. Fingerprint detection and mapping using a phase shifted coherent gradient sensing technique.

    PubMed

    Dhanotia, Jitendra; Prakash, Satya; Bhatia, Vimal; Prakash, Shashi

    2016-07-10

    In this paper, a full field technique for mapping a latent fingerprint using a coherent gradient sensing (CGS) sensor is proposed. Collimated light from an He-Ne laser illuminates a specimen comprising a fingerprint implanted onto a reflecting surface. Reflected light from the specimen is analyzed using the CGS sensor comprising a pair of gratings. Reflected light carries information regarding the depth and orientation of furrows and ridges in the fingerprint. The topological information of the fingerprint is retrieved using four-step phase shifting interferometry. Well-defined 2D and 3D phase plots have been reconstructed to map the topography of the human fingerprint. The recorded slope data reconstructs the information regarding the separation and depth of the ridges in the latent fingerprint. The proposed technique is noninvasive and full field and does not require any kind of chemical or physical treatment. The sensor is very simple, yields interferometric sensitivity, and has the advantages of easy alignment, compactness, and low cost. PMID:27409305
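
The four-step phase-shifting retrieval used above follows a standard formula: with frames shifted by multiples of π/2, the wrapped phase is atan2(I4 − I2, I1 − I3). A minimal synthetic check (not the authors' data or code):

```python
import numpy as np

# Build four interferograms of a known phase map with pi/2 shifts, then
# recover the phase with the standard four-step formula.
x = np.linspace(0, 1, 64)
phase_true = 2 * np.pi * x          # hypothetical phase profile
A, B = 1.0, 0.5                     # background and modulation amplitude

I1, I2, I3, I4 = (A + B * np.cos(phase_true + k * np.pi / 2) for k in range(4))

# phase = atan2(I4 - I2, I1 - I3), wrapped to (-pi, pi].
phase_rec = np.arctan2(I4 - I2, I1 - I3)

# Agreement with the true phase, modulo 2*pi wrapping:
err = np.angle(np.exp(1j * (phase_rec - phase_true)))
```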

  6. Computer controlled techniques for high emission density mapping of thermionic cathodes

    NASA Astrophysics Data System (ADS)

    Gibson, J. W.; Thomas, R. E.

    1985-12-01

    Some of the techniques commonly used (e.g. SLEEP and the thermionic emission microscope) for measuring emission or work function uniformity of thermionic cathode surfaces require the use of very low or near-zero current densities; thus the cathode is characterized at current densities and temperatures much lower than those of a normally operating cathode. The system reported here uses a high-voltage pulse technique and is capable of measuring emission densities in the range 1 to 80 A/cm² at normal cathode operating temperatures. The cathode surface is scanned with an anode having a 0.025 mm aperture whose position is controlled by computer-operated stepping motors. The current through the aperture to a collector electrode is measured using a sample-and-hold amplifier. Pulsing and sampling are computer-synchronized with the scanning, and data for each pulse are accumulated and can be processed and displayed in several ways using the computer, including a detailed "three-dimensional" map of either the electron emission density or work function variations. The entire surface of the cathode or any portion of it can be mapped in steps as small as 0.001 mm (1 μm), but typically steps of 5-100 μm were used. Measurements are presented illustrating the uniformity or nonuniformity of the electron emission densities and work functions for type-B and type-M cathodes.
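
Converting a scanned emission-density map into a work-function map, as described above, is conventionally done by inverting the Richardson-Dushman equation J = A·T²·exp(−φ/kT). The sketch below uses the theoretical Richardson constant and an illustrative temperature and current-density patch; none of the numbers are from the paper:

```python
import numpy as np

K_BOLTZ_EV = 8.617e-5        # Boltzmann constant, eV/K
A_RICH = 120.0               # Richardson constant, A/(cm^2 K^2), theoretical value
T = 1300.0                   # cathode temperature, K (illustrative)

# Hypothetical 3x3 patch of measured emission current densities, A/cm^2,
# standing in for one region of the stepped scan.
j_map = np.array([[2.0, 4.0, 8.0],
                  [4.0, 8.0, 16.0],
                  [8.0, 16.0, 32.0]])

# Invert J = A*T^2*exp(-phi/(k*T)) for the apparent work function, in eV.
# Higher emission density maps to a lower apparent work function.
phi_map = K_BOLTZ_EV * T * np.log(A_RICH * T**2 / j_map)
```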

  8. Creep deformation behavior in eutectic Sn-Ag solder joints using a novel mapping technique

    SciTech Connect

    Lucas, J.P.; Guo, F.; McDougall, J.; Bieler, T.R.; Subramanian, K.N.; Park, J.K.

    1999-11-01

    Creep deformation behavior was measured for 60-100 μm thick solder joints. The solder joints investigated consisted of: (1) non-composite solder joints made with eutectic Sn-Ag solder, and (2) composite solder joints with eutectic Sn-Ag solder containing 20 vol.%, 5 μm diameter in-situ Cu₆Sn₅ intermetallic reinforcements. All creep testing in this study was carried out at room temperature. Qualitative and quantitative assessment of creep deformation was characterized on the solder joints. Creep deformation was analyzed using a novel mapping technique in which a geometrically regular line pattern was etched over the entire solder joint using excimer laser ablation. During creep, the laser-ablation (LA) pattern becomes distorted due to deformation in the solder joint. By imaging the distortion of laser-ablation patterns using the SEM, actual deformation mapping for the entire solder joint is revealed. The technique involves sequential optical/digital imaging of the deformation versus time history during creep. By tracing and recording the deformation of the LA patterns on the solder over intervals of time, local creep data are obtained in many locations in the joint. This analysis enables global and localized creep shear strains and strain rates to be determined.

  9. Spectropolarimetry of hot, luminous stars

    NASA Technical Reports Server (NTRS)

    Schulte-Ladbeck, Regina E.

    1994-01-01

    I review polarimetric observations of presumably single, hot luminous stars. The stellar types discussed are OB stars, B(e) supergiants, Luminous Blue Variables (LBV), Wolf-Rayet (W-R) stars, and type II supernovae (SN). It is shown that variable, intrinsic polarization is a common phenomenon in that part of the Hertzsprung-Russell (HR) diagram which these stars occupy. However, much observational work remains to be done before we can answer the most basic, statistical questions about the polarimetric properties of different groups of hot, luminous stars. Insight into the diagnostic power of polarization observations has been gained, but cannot be exploited without detailed models. Thus, while polarimetric observations do tell us that the mass-loss processes of all types of massive stars are time-dependent and anisotropic, the significance that this might have for the accuracy of their stellar parameters and evolutionary paths remains elusive.

  10. Mapping aboveground woody biomass using forest inventory, remote sensing and geostatistical techniques.

    PubMed

    Yadav, Bechu K V; Nandy, S

    2015-05-01

    Mapping forest biomass is fundamental for estimating CO₂ emissions, and planning and monitoring of forests and ecosystem productivity. The present study attempted to map aboveground woody biomass (AGWB) integrating forest inventory, remote sensing and geostatistical techniques, viz., direct radiometric relationships (DRR), k-nearest neighbours (k-NN) and cokriging (CoK), and to evaluate their accuracy. A part of the Timli Forest Range of Kalsi Soil and Water Conservation Division, Uttarakhand, India was selected for the present study. Stratified random sampling was used to collect biophysical data from 36 sample plots of 0.1 ha (31.62 m × 31.62 m) size. Species-specific volumetric equations were used for calculating volume, which was multiplied by specific gravity to get biomass. Three forest-type density classes, viz. 10-40, 40-70 and >70% of Shorea robusta forest, and four non-forest classes were delineated using on-screen visual interpretation of IRS P6 LISS-III data of December 2012. The volume in different strata of forest-type density ranged from 189.84 to 484.36 m³ ha⁻¹. The total growing stock of the forest was found to be 2,024,652.88 m³. The AGWB ranged from 143 to 421 Mg ha⁻¹. Spectral bands and vegetation indices were used as independent variables and biomass as the dependent variable for DRR, k-NN and CoK. After validation and comparison, the k-NN method with Mahalanobis distance (root mean square error (RMSE) = 42.25 Mg ha⁻¹) was found to be the best method, followed by fuzzy distance and Euclidean distance with RMSE of 44.23 and 45.13 Mg ha⁻¹, respectively. DRR was found to be the least accurate method, with RMSE of 67.17 Mg ha⁻¹. The study highlighted the potential of integrating forest inventory, remote sensing and geostatistical techniques for forest biomass mapping.
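
The k-NN imputation step above, predicting per-pixel biomass as an average over the spectrally nearest sample plots, can be sketched as follows. This is an illustrative sketch on synthetic stand-in data (plot count matches the abstract; covariates, coefficients and noise are invented):

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(2)

# Synthetic stand-in for the 36 sample plots: spectral bands / vegetation
# indices as predictors, field-measured AGWB (Mg/ha) as the response.
X_plots = rng.uniform(size=(36, 4))
agwb = 143.0 + 278.0 * X_plots[:, 0] + rng.normal(scale=10.0, size=36)

# Euclidean distance is scikit-learn's default; Mahalanobis distance (the
# best performer in the study) can be selected with metric="mahalanobis"
# plus metric_params={"VI": inverse_covariance}.
knn = KNeighborsRegressor(n_neighbors=5)
knn.fit(X_plots, agwb)

# Predict biomass for unsampled pixels from their spectral covariates.
X_pixels = rng.uniform(size=(8, 4))
agwb_pred = knn.predict(X_pixels)
```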

  11. Mapping.

    ERIC Educational Resources Information Center

    Kinney, Douglas M.; McIntosh, Willard L.

    1979-01-01

    The area of geological mapping in the United States in 1978 increased greatly over that reported in 1977; state geological maps were added for California, Idaho, Nevada, and Alaska last year. (Author/BB)

  12. Use of Technology-Assisted Techniques of Mind Mapping and Concept Mapping in Science Education: A Constructivist Study

    ERIC Educational Resources Information Center

    Balim, Ali Günay

    2013-01-01

    The study aims to investigate the effects of using mind maps and concept maps on students' learning of concepts in science courses. A total of 51 students participated in this study which used a quasi-experimental research design with pre-test/post-test control groups. The constructivist-inspired study was carried out in the sixth-grade…

  13. MRI technique for the snapshot imaging of quantitative velocity maps using RARE.

    PubMed

    Shiko, G; Sederman, A J; Gladden, L F

    2012-03-01

    A quantitative PGSE-RARE pulse sequence was developed and successfully applied to the in situ dissolution of two pharmaceutical formulations dissolving over a range of timescales. The new technique was chosen over other existing fast velocity imaging techniques because it is T₂-weighted, not T₂*-weighted, and is, therefore, robust for imaging time-varying interfaces and flow in magnetically heterogeneous systems. The complex signal was preserved intact by separating odd and even echoes to obtain two phase maps which are then averaged in post-processing. Initially, the validity of the technique was shown when imaging laminar flow in a pipe. Subsequently, the dissolution of two drugs was followed in situ, where the technique enables the imaging and quantification of changes in the form of the tablet and the flow field surrounding it at high spatial and temporal resolution. First, the complete 3D velocity field around an eroding salicylic acid tablet was acquired at a resolution of 98 × 49 μm², within 20 min, and monitored over ∼13 h. The tablet was observed to experience a heterogeneous flow field and, hence, a heterogeneous shear field, which resulted in the non-symmetric erosion of the tablet. Second, the dissolution of a fast-dissolving immediate release tablet was followed using one-shot 2D velocity images acquired every 5.2 s at a resolution of 390 × 390 μm². The quantitative nature of the technique and fast acquisition times provided invaluable information on the dissolution behaviour of this tablet, which had not been attainable previously with conventional quantitative MRI techniques.

  15. Study of lumineers' interfaces by means of optical coherence tomography

    NASA Astrophysics Data System (ADS)

    de Andrade Borges, Erica; Fernandes Cassimiro-Silva, Patrícia; Osório Fernandes, Luana; Leônidas Gomes, Anderson Stevens

    2015-06-01

    OCT has been used to evaluate dental materials, and is employed here to evaluate lumineers for the first time. Lumineers are used as esthetic indirect restorations; after wear and aging, several undesirable features such as gaps, bubbles and mismatch can appear which would otherwise only be seen by invasive analysis. A spectral-domain OCT system (SD-OCT, 930 nm central wavelength) was used to evaluate noninvasively the lumineer-cement-tooth interface. We analyzed 20 lumineer-tooth specimens that were prepared in bovine teeth and randomly allocated to 4 experimental groups (n=5) with two different cementation techniques and two different types of cementing agent (RelyX U200 and RelyX Veneer, 3M ESPE, with the adhesive recommended by the manufacturer). The lumineers were made of lithium disilicate and obtained using a vacuum injection technique. The analysis was performed using 2D and 3D OCT images obtained before and after cementing and after thermal cycling to simulate thermal stress in the oral cavity. Initial measurements showed that the SD-OCT was able to see through the 500 μm thick lumineer, as delivered by the manufacturer, and internal stress was observed. Failures were found in the cementing process and also after ageing simulation by thermal cycling. Adhesive failures such as bubbles, gaps and degradation of the cementation line are the natural precursors of other defects reported by several clinical follow-up studies (detachments, fractures and cracks). Bubble dimensions ranging from 146 μm to 1427 μm were measured, and OCT was validated as a precise investigative tool for evaluation of the lumineer-cement-tooth interface.

  16. Rethinking Program Assessment through the Use of Program Alignment Mapping Technique

    ERIC Educational Resources Information Center

    Liu, Min; Wrobbel, Duff; Blankson, Isaac

    2010-01-01

    Curriculum mapping is a well-known assessment tool used to articulate and align a curriculum. The authors present an improved method of curriculum alignment that combines the traditional curriculum mapping with what is often called prerequisite mapping. This improved method of curriculum alignment mapping we label Program Alignment Mapping (PAM)…

  17. Implementation of quantitative perfusion imaging techniques for functional brain mapping using pulsed arterial spin labeling.

    PubMed

    Wong, E C; Buxton, R B; Frank, L R

    1997-01-01

    We describe here experimental considerations in the implementation of quantitative perfusion imaging techniques for functional MRI using pulsed arterial spin labeling. Three tagging techniques (EPISTAR, PICORE, and FAIR) are found to give very similar perfusion results despite large differences in static tissue contrast. Two major sources of systematic error in the perfusion measurement are identified: the transit delay from the tagging region to the imaging slice, and the inclusion of intravascular tagged signal. A modified technique called QUIPSS II is described that decreases sensitivity to these effects by explicitly controlling the time width of the tag bolus and imaging after the bolus is entirely deposited into the slice. With appropriate saturation pulses the pulse sequence can be arranged so as to allow for simultaneous collection of perfusion and BOLD data that can be cleanly separated. Such perfusion and BOLD signals reveal differences in spatial location and dynamics that may be useful both for functional brain mapping and for study of the BOLD contrast mechanism. The implementation of multislice perfusion imaging introduces additional complications, primarily in the elimination of signal from static tissue. In pulsed ASL, this appears to be related to the slice profile of the inversion tag pulse in the presence of relaxation, rather than to magnetization transfer effects as in continuous arterial spin labeling, and can be alleviated with careful adjustment of inversion pulse parameters. PMID:9430354

  18. Updating Landsat-derived land-cover maps using change detection and masking techniques

    NASA Technical Reports Server (NTRS)

    Likens, W.; Maw, K.

    1982-01-01

    The California Integrated Remote Sensing System's San Bernardino County Project was devised to study the utilization of a data base at a number of jurisdictional levels. The present paper discusses the implementation of change-detection and masking techniques in the updating of Landsat-derived land-cover maps. A baseline land-cover classification was first created from a 1976 image, then the adjusted 1976 image was compared with a 1979 scene by the techniques of (1) multidate image classification, (2) difference-image distribution-tails thresholding, (3) difference-image classification, and (4) multi-dimensional chi-square analysis of a difference image. The union of the results of methods 1, 3 and 4 was used to create a mask of possible change areas between 1976 and 1979, which served to limit analysis of the update image and reduce comparison errors in unchanged areas. The techniques of spatially smoothing change-detection products and of combining the results of different change-detection algorithms are also shown to improve Landsat change-detection accuracies.
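
Of the four techniques listed, the difference-image tails-thresholding step is simple enough to sketch directly; the scene values, noise levels and 1.5-sigma threshold below are illustrative, not the paper's parameters:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic single-band scenes: the second date repeats the first plus sensor
# noise, except for a small patch of genuine land-cover change.
band_1976 = rng.normal(100.0, 5.0, size=(50, 50))
band_1979 = band_1976 + rng.normal(0.0, 2.0, size=(50, 50))
band_1979[10:15, 10:15] -= 40.0          # changed patch

diff = band_1979 - band_1976

# Flag pixels in the tails of the difference-image distribution as candidate
# change; unioning such masks across methods then limits the update
# classification to likely-changed areas, as described above.
mu, sigma = diff.mean(), diff.std()
change_mask = np.abs(diff - mu) > 1.5 * sigma
```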

  19. Probabilistic flood inundation mapping of ungauged rivers: Linking GIS techniques and frequency analysis

    NASA Astrophysics Data System (ADS)

    Sarhadi, Ali; Soltani, Saeed; Modarres, Reza

    2012-08-01

    This study presents an exhaustive methodology of floodplain mapping at ungauged rivers. To present our methodology, we selected the Halilrud basin and Jiroft city in southeastern Iran as an example of hazardous regions. To estimate flood quantiles in different return periods at ungauged reaches, we used regional flood frequency analysis. By using the well-known L-moments approach and related criteria, a homogeneous region was formed and the 3-parameter Log normal distribution was identified as the robust regional frequency distribution. The hydro-geomorphic characteristics and the land use properties of the catchments were then extracted using RS&GIS techniques to establish multivariate regional regression models between hydro-geomorphic characteristics and flood quantiles. After delineation of the catchments for the ungauged reaches, flood quantiles as an important factor in flood inundation at outlets of these reaches with different probabilities were estimated using the regional regression models. To delineate flood hazard maps and to enhance the accuracy of the hydraulic modeling, we applied satellite stereoscope images of Cartosat-1 along with the Rational Polynomial Coefficients to extract a high resolution DTM and detailed parameterization of the channel required by 1D hydraulic model HEC-RAS. The GIS-based HEC-Geo RAS pre- and post-processor were also used for careful optimization of the geometry features for real visualization of the flood prone areas. Information of some historical flood events was also used to evaluate the hydraulic model performance in predicting flood inundations. Finally, vulnerable areas were crossed with extracted land use mapping from IRS-P6 satellite images to differentiate the critical infrastructures and the valuable land use classes affected by floods in different return periods.

  20. Mapping air pollution using Earth observation techniques for cultural heritage sites

    NASA Astrophysics Data System (ADS)

    Agapiou, Athos; Nisantzi, Argyro; Lysandrou, Vasiliki; Mamouri, Rodanthi; Alexakis, Dimitrios D.; Themistocleous, Kyriacos; Sarris, Apostolos; Hadjimitsis, Diofantos G.

    2013-08-01

    Air pollutants, together with climatic parameters, are of major importance for the deterioration of cultural heritage monuments. Atmospheric pollution is widely recognized as one of the major anthropogenic threats to architectural cultural heritage, in particular when associated with water absorption phenomena. Atmospheric particle deposition on the surfaces of monuments of cultural heritage interest may cause an aesthetic impact induced by a series of chemical reactions. Therefore, there is a need for systematic monitoring and mapping of air pollution for areas where important archaeological sites and monuments are found. Earth observation techniques, such as the use of satellite images for the retrieval of Aerosol Optical Thickness (AOT), are ideal for this purpose. In this paper, all important monuments of the Paphos District, listed by the Department of Antiquities of Cyprus, have been mapped using Geographical Information Systems. Several recent (2012) MODIS satellite images (both Aqua and Terra) have been used to extract the AOT values in this area. Multi-temporal analysis was performed to identify areas of high risk where AOT values are considered to be high. In situ observations have been also carried out to verify the results.

  1. Quantitative mapping of pore fraction variations in silicon nitride using an ultrasonic contact scan technique

    NASA Technical Reports Server (NTRS)

    Roth, Don J.; Kiser, James D.; Swickard, Suzanne M.; Szatmary, Steven A.; Kerwin, David P.

    1993-01-01

    An ultrasonic scan procedure using the pulse-echo contact configuration was employed to obtain maps of pore fraction variations in sintered silicon nitride samples in terms of ultrasonic material properties. Ultrasonic velocity, attenuation coefficient, and reflection coefficient images were obtained simultaneously over a broad band of frequencies (e.g., 30 to 110 MHz) by using spectroscopic analysis. Liquid and membrane (dry) coupling techniques and longitudinal and shear-wave energies were used. The major results include the following: Ultrasonic velocity (longitudinal and shear wave) images revealed and correlated with the extent of average through-thickness pore fraction variations in the silicon nitride disks. Attenuation coefficient images revealed pore fraction nonuniformity due to the scattering that occurred at boundaries between regions of high and low pore fraction. Velocity and attenuation coefficient images were each nearly identical for machined and polished disks, making the method readily applicable to machined materials. Velocity images were similar for wet and membrane coupling. Maps of apparent Poisson's ratio constructed from longitudinal and shear-wave velocities quantified Poisson's ratio variations across a silicon nitride disk. Thermal wave images of a disk indicated transient thermal behavior variations that correlated with observed variations in pore fraction and velocity and attenuation coefficients.

  2. Mapping the temporal pole with a specialized electrode array: technique and preliminary results

    PubMed Central

    Abel, Taylor J.; Rhone, Ariane E.; Nourski, Kirill V.; Granner, Mark A.; Oya, Hiroyuki; Griffiths, Timothy D.; Tranel, Daniel T.; Kawasaki, Hiroto; Howard, Matthew A.

    2014-01-01

    Temporopolar cortex plays a crucial role in the pathogenesis of temporal lobe epilepsy and subserves important cognitive functions. Because of its shape and position in the middle cranial fossa, complete electrode coverage of the temporal pole (TP) is difficult to achieve using existing devices. We designed a novel TP electrode array that conforms to the surface of temporopolar cortex and achieves dense electrode coverage of this important brain region. A multi-pronged electrode array was designed that can be placed over the surface of the TP using a straightforward insertion technique. Twelve patients with medically intractable epilepsy were implanted with the TP electrode array for purposes of seizure localization. Select patients underwent cognitive mapping by electrocorticographic (ECoG) recording from the TP during a naming task. Use of the array resulted in excellent TP electrode coverage in all patients. High quality ECoG data were consistently obtained for purposes of delineating seizure activity and functional mapping. During a naming task, significant increases in ECoG power were observed within localized subregions of the TP. One patient developed a transient neurological deficit thought to be related to the mass effect of multiple intracranial recording arrays, including the TP array. This deficit resolved following removal of all electrodes. The TP electrode array overcomes limitations of existing devices and enables clinicians and researchers to obtain optimal multi-site recordings from this important brain region. PMID:24480831

  3. Inventory and mapping of flood inundation using interactive digital image analysis techniques

    USGS Publications Warehouse

    Rohde, Wayne G.; Nelson, Charles A.; Taranik, J.V.

    1979-01-01

    LANDSAT digital data and color infra-red photographs were used in a multiphase sampling scheme to estimate the area of agricultural land affected by a flood. The LANDSAT data were classified with a maximum likelihood algorithm. Stratification of the LANDSAT data prior to classification greatly reduced misclassification errors. The classification results were used to prepare a map overlay showing the areal extent of flooding. These data also provided the statistics required to estimate sample size in a two-phase sampling scheme, and provided quick, accurate estimates of flooded area for the first phase. The measurements made in the second phase, based on ground data and photo-interpretation, were used with two-phase sampling statistics to estimate the area of agricultural land affected by flooding. These results show that LANDSAT digital data can be used to prepare map overlays showing the extent of flooding on agricultural land and, with two-phase sampling procedures, can provide acreage estimates with sampling errors of about 5 percent. This procedure provides a technique for rapidly assessing the areal extent of flood conditions on agricultural land and would provide a basis for designing a sampling framework to estimate the impact of flooding on crop production.
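
    The two-phase (double) sampling scheme described above can be sketched numerically. The snippet below uses synthetic numbers, not the study's data, and assumes the standard regression-estimator form of double sampling; all variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Phase 1: LANDSAT-classified flooded area per land segment (hypothetical
# synthetic values, in hectares) for a large sample of segments.
x_phase1 = rng.uniform(50.0, 150.0, size=400)

# Phase 2: a small subsample re-measured with ground data / photo-interpretation.
idx = rng.choice(400, size=40, replace=False)
x_phase2 = x_phase1[idx]
y_phase2 = 0.9 * x_phase2 + rng.normal(0.0, 5.0, size=40)  # "true" flooded area

# Double-sampling regression estimator of mean flooded area per segment:
#   y_reg = ybar_2 + b * (xbar_1 - xbar_2)
b = np.polyfit(x_phase2, y_phase2, 1)[0]
y_reg = float(y_phase2.mean() + b * (x_phase1.mean() - x_phase2.mean()))

# Approximate relative sampling error (finite-population corrections ignored).
resid = y_phase2 - (y_phase2.mean() + b * (x_phase2 - x_phase2.mean()))
var = resid.var(ddof=2) / len(y_phase2) + y_phase2.var(ddof=1) / len(x_phase1)
rel_error = float(np.sqrt(var) / y_reg)
print(round(y_reg, 1), round(100 * rel_error, 1))
```

    Because the regression estimator exploits the cheap phase-1 classification, the relative sampling error comes out far below what the 40 ground samples alone would give.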

  4. Performance prediction of four-contact vertical Hall-devices using a conformal mapping technique

    NASA Astrophysics Data System (ADS)

    Yang, Huang; Yue, Xu; Yufeng, Guo

    2015-12-01

    Instead of the conventional design with five contacts in the sensor active area, innovative vertical Hall devices (VHDs) with four or six contacts are asymmetrical in structure but symmetrical in current flow, which makes them well suited to the spinning-current technique for offset elimination. In this article, a conformal mapping calculation method is used to predict the performance of an asymmetrical four-contact VHD embedded in a deep n-well. To make the calculation more accurate, the junction field effect is also incorporated into the conformal mapping method. The error between calculated and simulated results is less than 5% for the current-related sensitivity and approximately 13% for the voltage-related sensitivity. This shows that such calculations can be used to predict the optimal structure of vertical Hall devices. Project supported by the Natural Science Foundation of Jiangsu Province, China (Nos. BK20131379, BK20141431) and the Graduate Research and Innovation Projects of Jiangsu Province (No. SJLX_0373).

  5. Pilot Application of 3d Underwater Imaging Techniques for Mapping Posidonia Oceanica (L.) Delile Meadows

    NASA Astrophysics Data System (ADS)

    Rende, F. S.; Irving, A. D.; Lagudi, A.; Bruno, F.; Scalise, S.; Cappa, P.; Montefalcone, M.; Bacci, T.; Penna, M.; Trabucco, B.; Di Mento, R.; Cicero, A. M.

    2015-04-01

    Seagrass communities are considered one of the most productive and complex marine ecosystems. Seagrasses belong to a small group of 66 species that can form extensive meadows in all coastal areas of our planet. Posidonia oceanica beds are the most characteristic ecosystem of the Mediterranean Sea and should be constantly monitored, preserved, and maintained, as specified by the EU Habitats Directive for priority habitats. Underwater 3D imaging by means of still or video cameras allows a detailed analysis of the temporal evolution of these meadows, as well as of the seafloor morphology and integrity. Video-photographic devices and open-source software for acquiring and managing 3D optical data have rapidly become more effective and economically viable, making underwater 3D mapping an easier task to carry out. A 3D reconstruction of the underwater scene can be obtained with photogrammetric techniques that require just one or more digital cameras, optionally in a stereo configuration. In this work we present the preliminary results of a pilot 3D mapping project applied to the P. oceanica meadow in the Marine Protected Area of Capo Rizzuto (KR, Calabria Region - Italy).

  6. Supersonic shear imaging: a new technique for soft tissue elasticity mapping.

    PubMed

    Bercoff, Jérémy; Tanter, Mickaël; Fink, Mathias

    2004-04-01

    Supersonic shear imaging (SSI) is a new ultrasound-based technique for real-time visualization of soft tissue viscoelastic properties. Using ultrasonic focused beams, it is possible to remotely generate mechanical vibration sources radiating low-frequency shear waves inside tissue. SSI proposes to create such a source and move it at a supersonic speed. In analogy with the "sonic boom" created by a supersonic aircraft, the resulting shear waves interfere constructively along a Mach cone, creating two intense plane shear waves. These waves propagate through the medium and are progressively distorted by tissue heterogeneities. An ultrafast scanner prototype is able both to generate this supersonic source and to image (at 5000 frames/s) the propagation of the resulting shear waves. Using inversion algorithms, the shear elasticity of the medium can be mapped quantitatively from this propagation movie. SSI enables tissue elasticity mapping in less than 20 ms, even in strongly viscous media like the breast. Modalities such as shear compounding can be implemented by tilting the shear waves in different directions, improving the elasticity estimation. Results validating SSI in heterogeneous phantoms are presented. The first in vivo investigations, made on healthy volunteers, emphasize the potential clinical applicability of SSI for breast cancer detection. PMID:15139541
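
    The final inversion step rests on the standard relation between local shear-wave speed and elasticity. A minimal sketch, assuming a uniform soft-tissue density and the incompressible-tissue approximation E ≈ 3μ = 3ρc² (the speed map below is hypothetical, e.g. estimated from time-of-flight between pixels in the propagation movie):

```python
import numpy as np

# Hypothetical map of locally estimated shear-wave speeds (m/s).
c_s = np.array([[1.5, 1.6, 3.0],
                [1.5, 2.9, 3.1],
                [1.4, 1.5, 1.6]])   # stiffer inclusion toward the upper right

rho = 1000.0                 # assumed soft-tissue density, kg/m^3

mu = rho * c_s**2            # shear modulus (Pa): mu = rho * c^2
E = 3.0 * mu                 # Young's modulus, E ~ 3*mu for incompressible tissue

print(np.round(E / 1e3, 1))  # elasticity map in kPa
```

    A speed of 1.5 m/s thus maps to roughly 7 kPa (soft tissue) and 3 m/s to 27 kPa, which is the kind of contrast SSI exploits for lesion detection.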

  7. The Development of Luminance- and Texture-Defined Form Perception during the School-Aged Years

    ERIC Educational Resources Information Center

    Bertone, Armando; Hanck, Julie; Guy, Jacalyn; Cornish, Kim

    2010-01-01

    The objective of the present study was to assess the development of luminance- and texture-defined static form perception in school-aged children. This was done using an adapted Landolt-C technique where C-optotypes were defined by either luminance or texture information, the latter necessitating extra-striate neural processing to be perceived.…

  8. A Multi-Resolution Nonlinear Mapping Technique for Design and Analysis Application

    NASA Technical Reports Server (NTRS)

    Phan, Minh Q.

    1997-01-01

    This report describes a nonlinear mapping technique where the unknown static or dynamic system is approximated by a sum of dimensionally increasing functions (one-dimensional curves, two-dimensional surfaces, etc.). These lower dimensional functions are synthesized from a set of multi-resolution basis functions, where the resolutions specify the level of details at which the nonlinear system is approximated. The basis functions also cause the parameter estimation step to become linear. This feature is taken advantage of to derive a systematic procedure to determine and eliminate basis functions that are less significant for the particular system under identification. The number of unknown parameters that must be estimated is thus reduced and compact models obtained. The lower dimensional functions (identified curves and surfaces) permit a kind of "visualization" into the complexity of the nonlinearity itself.

  9. A Multi-Resolution Nonlinear Mapping Technique for Design and Analysis Applications

    NASA Technical Reports Server (NTRS)

    Phan, Minh Q.

    1998-01-01

    This report describes a nonlinear mapping technique where the unknown static or dynamic system is approximated by a sum of dimensionally increasing functions (one-dimensional curves, two-dimensional surfaces, etc.). These lower dimensional functions are synthesized from a set of multi-resolution basis functions, where the resolutions specify the level of details at which the nonlinear system is approximated. The basis functions also cause the parameter estimation step to become linear. This feature is taken advantage of to derive a systematic procedure to determine and eliminate basis functions that are less significant for the particular system under identification. The number of unknown parameters that must be estimated is thus reduced and compact models obtained. The lower dimensional functions (identified curves and surfaces) permit a kind of "visualization" into the complexity of the nonlinearity itself.

  10. Application of the lamp mapping technique for overlap function for Raman lidar systems.

    PubMed

    Walker, Monique; Venable, Demetrius; Whiteman, David N; Sakai, Tetsu

    2016-04-01

    Traditionally, the lidar water vapor mixing ratio (WVMR) is corrected for overlap using data from another instrument, such as a radiosonde. Here we introduce a new experimental method to determine the overlap function using the lamp mapping technique (LMT), which relies on the lidar optics and detection system. The LMT discussed here involves a standard halogen lamp being scanned over the aperture of a Raman lidar telescope in synchronization with the lidar detection system [Appl. Opt. 50, 4622 (2011), 10.1364/AO.50.004622; Appl. Opt. 53, 8535 (2014), 10.1364/AO.53.008535]. In this paper, we show results for an LMT-determined overlap function for individual channels, as well as a WVMR overlap function. We found that the LMT-determined WVMR overlap functions deviate within 5% of the traditional radiosonde-determined overlap. PMID:27139656
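
    As a rough sketch of how a WVMR overlap correction follows from per-channel overlap functions: since the mixing ratio is formed from the ratio of the water-vapor and nitrogen channel signals, the combined overlap is the ratio of the two channel overlaps. The functional shapes below are assumptions for illustration, not the LMT-measured curves.

```python
import numpy as np

# Hypothetical single-channel overlap functions (fraction of the ideal
# far-field response recovered at each range).
r = np.linspace(50.0, 1000.0, 20)       # range (m), assumed grid
O_h2o = 1.0 - np.exp(-r / 200.0)        # water-vapor channel overlap (assumed)
O_n2 = 1.0 - np.exp(-r / 180.0)         # nitrogen channel overlap (assumed)

# WVMR overlap correction: ratio of the two channel overlap functions,
# because WVMR ~ S_H2O / S_N2 and each signal carries its own overlap.
O_wvmr = O_h2o / O_n2

print(np.round(O_wvmr[0], 3), np.round(O_wvmr[-1], 3))
```

    With these assumed shapes the correction starts around 0.9 in the near field and approaches unity once both channels reach full overlap, which is why only the near range needs correcting.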

  11. DIGITAL PROCESSING TECHNIQUES FOR IMAGE MAPPING WITH LANDSAT TM AND SPOT SIMULATOR DATA.

    USGS Publications Warehouse

    Chavez, Pat S., Jr.

    1984-01-01

    To overcome certain problems associated with the visual selection of Landsat TM bands for image mapping, the author used a quantitative technique that ranks the 20 possible three-band combinations by their information content. Standard deviations and correlation coefficients can be used to compute a value called the Optimum Index Factor (OIF) for each of the 20 possible combinations. SPOT simulator images were digitally processed and compared with Landsat-4 Thematic Mapper (TM) images covering a semi-arid region in northern Arizona and a highly vegetated urban area near Washington, D.C. Statistical comparisons indicate that more radiometric or color information exists in certain TM three-band combinations than in the three SPOT bands.
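
    The OIF ranking can be sketched as follows. The band statistics below are synthetic stand-ins; the index is conventionally computed as the sum of the three band standard deviations divided by the sum of the absolute values of the three pairwise correlation coefficients, so high-variance, weakly correlated triplets rank first.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 6-band image flattened to (pixels, bands); six reflective TM
# bands give C(6, 3) = 20 possible three-band combinations.
base = rng.normal(0.0, 1.0, size=(5000, 1))          # shared scene component
bands = 0.6 * base + rng.normal(0.0, 1.0, size=(5000, 6))
bands *= np.array([5.0, 9.0, 11.0, 20.0, 16.0, 13.0])  # per-band contrast

std = bands.std(axis=0)
corr = np.corrcoef(bands, rowvar=False)

def oif(triplet):
    """OIF = (sum of band std devs) / (sum of |pairwise correlations|)."""
    i, j, k = triplet
    return (std[i] + std[j] + std[k]) / (
        abs(corr[i, j]) + abs(corr[i, k]) + abs(corr[j, k]))

combos = list(itertools.combinations(range(6), 3))
ranked = sorted(combos, key=oif, reverse=True)
print(len(combos), ranked[0])
```

    The top-ranked triplet is the candidate for a color composite; on real TM data the statistics would come from the scene itself rather than a random generator.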

  13. Application of Geologic Mapping Techniques and Autonomous Feature Detection to Future Exploration of Europa

    NASA Astrophysics Data System (ADS)

    Bunte, M. K.; Tanaka, K. L.; Doggett, T.; Figueredo, P. H.; Lin, Y.; Greeley, R.; Saripalli, S.; Bell, J. F.

    2013-12-01

    Europa's extremely young surface age, evidence for extensive resurfacing, and indications of a sub-surface ocean elevate its astrobiological potential for habitable environments and make it a compelling focus for study. Knowledge of the global distribution and timing of Europan geologic units is a key step in understanding the history of the satellite and in identifying areas relevant for exploration. I have produced a 1:15M scale global geologic map of Europa which represents a proportionate distribution of four unit types and associated features: plains, linea, chaos, and crater materials. The mapping techniques differ somewhat from those used for other planetary maps, but they provide a method to establish stratigraphic markers and to illustrate the surface history through four periods of formation as a function of framework-lineament cross-cutting relationships. Correlations of observed features on Europa with Earth analogs support a multi-process theory of formation rather than the typical reliance on the principle of parsimony. Lenticulae and microchaos are genetically similar and most likely form by diapirism. Platy and blocky chaos units, endmembers of archetypical chaos, are best explained by brine mobilization. Ridges account for the majority of lineaments and may form by a number of methods indicative of local conditions; most form by either tidal pumping or shear heating. The variety of morphologies exhibited by bands indicates that multiple formation mechanisms apply once fracturing of the brittle surface over a ductile subsurface is initiated. Mapping results support the interpretation that Europa's shell has thickened over time, resulting in changes in the style and intensity of deformation. Mapping serves as an index for change detection and classification, aids in pre-encounter targeting, and supports the selection of potential landing sites. Highest priority target areas are those which indicate geophysical activity by the presence of volcanic plumes, outgassing, or

  14. 78 FR 66785 - Luminant Generation Company, LLC

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-06

    ... consecutive weeks of a combined license (COL) application from Luminant Generation Company, LLC. (Luminant... applications for COLs with the NRC, pursuant to Section 103 of the Atomic Energy Act of 1954, as amended, and... for Nuclear Power Plants:'' 1. On September 19, 2008, Luminant submitted an application for COLs...

  15. A self-trained classification technique for producing 30 m percent-water maps from Landsat data

    USGS Publications Warehouse

    Rover, Jennifer R.; Wylie, Bruce K.; Ji, Lei

    2010-01-01

    Small bodies of water can be mapped with moderate-resolution satellite data using methods where water is mapped as subpixel fractions using field measurements or high-resolution images as training datasets. A new method, developed from a regression-tree technique, uses a 30 m Landsat image for training the regression tree that, in turn, is applied to the same image to map subpixel water. The self-trained method was evaluated by comparing the percent-water map with three other maps generated from established percent-water mapping methods: (1) a regression-tree model trained with a 5 m SPOT 5 image, (2) a regression-tree model based on endmembers and (3) a linear unmixing classification technique. The results suggest that subpixel water fractions can be accurately estimated when high-resolution satellite data or intensively interpreted training datasets are not available, which increases our ability to map small water bodies or small changes in lake size at a regional scale.
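
    The self-training idea (deriving training labels from the same image that is then mapped) can be illustrated with a toy single-band image. Ordinary least squares stands in for the paper's regression tree, and every threshold, coefficient, and name below is hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical single-band 30 m "image": low values = water, high = land.
truth_frac = rng.uniform(0.0, 1.0, size=(20, 20))      # unknown subpixel water
pixels = 0.05 + 0.25 * (1.0 - truth_frac) + rng.normal(0.0, 0.01, (20, 20))

# Step 1 (self-training): label only the clearest pixels from the image itself.
water_like = pixels < 0.10        # nearly pure water pixels -> fraction 1
land_like = pixels > 0.25         # nearly pure land pixels  -> fraction 0
X = np.r_[pixels[water_like], pixels[land_like]]
y = np.r_[np.ones(water_like.sum()), np.zeros(land_like.sum())]

# Step 2: fit a model mapping reflectance -> water fraction.
# (The paper grows a regression tree; least squares stands in here.)
A = np.c_[X, np.ones_like(X)]
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Step 3: apply the model back to every pixel of the same image.
frac_map = np.clip(coef[0] * pixels + coef[1], 0.0, 1.0)
mae = float(np.abs(frac_map - truth_frac).mean())
print(round(mae, 3))
```

    Even this crude stand-in recovers subpixel fractions with small mean error on the synthetic scene, which is the essence of the self-trained approach: no SPOT image or field campaign is needed for training.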

  16. IUE observations of NGC 1068 - The extremely luminous starburst knots

    NASA Technical Reports Server (NTRS)

    Bruhweiler, F. C.; Truong, K. Q.; Altner, B.

    1991-01-01

    A complete investigation of the UV characteristics of the starburst knots in the inner 3 kpc region of NGC 1068 mapped by the IUE is presented. It is noted that these knots probably represent the most luminous H II regions yet studied at optical and UV wavelengths. Comparisons suggest that the brightest knot, knot 1, is 30 times brighter than NGC 604 in M33 and has a total luminosity of not less than 10^43 erg/s.

  17. Spatial epidemiological techniques in cholera mapping and analysis towards a local scale predictive modelling

    NASA Astrophysics Data System (ADS)

    Rasam, A. R. A.; Ghazali, R.; Noor, A. M. M.; Mohd, W. M. N. W.; Hamid, J. R. A.; Bazlan, M. J.; Ahmad, N.

    2014-02-01

    Cholera spatial epidemiology is the study of the spread and control of the disease's spatial pattern and epidemics. Previous studies have shown that multi-factorial causes such as human behaviour, ecology, and other infectious risk factors influence disease outbreaks. Understanding the spatial pattern of the outbreaks and the factors possibly interrelated with them therefore warrants in-depth study. This study focuses on integrating geographical information system (GIS) and epidemiological techniques in an exploratory analysis of the cholera spatial pattern and distribution in a selected district of Sabah. The Spatial Statistics and Pattern tools in ArcGIS and Microsoft Excel were utilized to map and analyze the reported cholera cases and other data used. Meanwhile, a cohort study, an epidemiological technique, was applied to investigate multiple outcomes of disease exposure. The general spatial pattern of cholera was highly clustered, showing that the disease spreads easily from a place or person to others, especially within 1500 meters of an infected person or location. Although the cholera outbreaks in the districts are not critical, the disease could become endemic in crowded areas, unhygienic environments, and places close to contaminated water. It was also strongly believed that the coastal water of the study areas has a possible relationship with cholera transmission and phytoplankton blooms, since these areas recorded higher case counts. GIS demonstrates a vital spatial epidemiological capability for determining the distribution pattern of the disease and generating hypotheses about it. Future research will apply more advanced geo-analysis methods and other disease risk factors to produce a significant local-scale predictive risk model of the disease in Malaysia.
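
    One common way to quantify a "highly clustered" point pattern, the average nearest-neighbour index used by ArcGIS-style spatial statistics, can be sketched as follows, with hypothetical case coordinates:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical case coordinates (metres): two tight clusters plus background
# scatter, mimicking cases concentrated around infected locations.
cluster1 = rng.normal([2000.0, 2000.0], 150.0, size=(30, 2))
cluster2 = rng.normal([7000.0, 6000.0], 150.0, size=(30, 2))
scatter = rng.uniform(0.0, 10000.0, size=(10, 2))
pts = np.vstack([cluster1, cluster2, scatter])

# Average nearest-neighbour index:
#   NNI = observed mean NN distance / expected mean under complete spatial
#   randomness, where expected = 0.5 / sqrt(n / A).  NNI << 1 => clustered.
d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
np.fill_diagonal(d, np.inf)                 # ignore self-distances
observed = d.min(axis=1).mean()
area = 10000.0 * 10000.0                    # assumed study-area extent (m^2)
expected = 0.5 / np.sqrt(len(pts) / area)
nni = float(observed / expected)
print(round(nni, 2))                        # well below 1 for this pattern
```

    An index well below 1, as here, is what the abstract's "highly clustered" finding corresponds to; ArcGIS additionally reports a z-score for significance, which this sketch omits.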

  18. Threading Dislocation Characterization and Stress Mapping Depth Profiling via Ray Tracing Technique

    NASA Astrophysics Data System (ADS)

    Zhou, Tianyi

    Zinc oxide (ZnO) is well known as a transparent, dielectric, piezoelectric, and wide band gap material. Its capabilities have been demonstrated for a wide range of applications such as piezoelectric transducers, gas sensors, optical waveguides, and transparent electrodes. It can also be applied as a substrate material for GaN-based devices. However, while some applications have already been realized, issues relating to crystalline defects remain a barrier to the successful realization of several others. In this thesis, the central focus of Chapter II is to characterize threading dislocations in hydrothermally grown ZnO substrates through simulation work as well as other techniques. The goal of this study is to find the origin of threading dislocations and design strategies to mitigate their negative effects by either reducing their densities or completely eliminating them. In Chapter III, the technique of SMART (stress mapping analysis via ray tracing) is discussed in detail as a means to measure residual stress in packaged silicon circuits. Residual stress plays an important role in the performance and lifetime of single-crystal device material. SMART has two main advantages over other techniques: (a) all six components of the stress tensor can be evaluated, and (b) it is non-destructive, leaving no damaging trace on the sample. In this study, our goal is to build a relationship between stress distribution and depth. The concept of penetration depth is critically important here, and its value strongly affects the inferred real-space stress distribution. A new function is applied to obtain better fitting curves. Data in this study are obtained at various penetration depths; each measurement represents an exponentially decaying weighted average of the actual stress, i.e., the measured profile is the Laplace transform of the real stress profile. A mathematical procedure is described to determine the real stress profile from the Laplace profile. Experiment
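
    The penetration-depth weighting described above can be illustrated numerically: a measurement at penetration depth tau returns an exponentially decaying weighted average of the stress-depth profile. The profile shape and depths below are assumed values for illustration, not the thesis data:

```python
import numpy as np

# Assumed "true" residual-stress depth profile sigma(z) (MPa); a measurement
# at penetration depth tau returns the weighted average
#   sigma_meas(tau) = sum(sigma(z) * e^(-z/tau)) / sum(e^(-z/tau)),
# i.e. a Laplace-transform-like probe of the real profile.
z = np.linspace(0.0, 200e-6, 4001)           # depth grid, 0-200 um
sigma = 100.0 * np.exp(-z / 50e-6)           # hypothetical decaying stress

def measured(tau):
    w = np.exp(-z / tau)                     # penetration-depth weighting
    return float((sigma * w).sum() / w.sum())

# Deeper-penetrating measurements mix in more relaxed material,
# so the apparent stress falls as tau grows.
for tau in (5e-6, 20e-6, 80e-6):
    print(round(measured(tau), 1))
```

    Recovering sigma(z) from measured(tau) is then a numerical Laplace inversion, which is the ill-posed step the thesis's fitting function addresses.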

  19. Comparison of the Biological Impacts of the Fluoride Compounds by Graphical Risk Visualization Map Technique.

    PubMed

    Usuda, Kan; Kono, Rei; Ueno, Takaaki; Ito, Yuichi; Dote, Tomotaro; Yokoyama, Hirotaka; Kono, Koichi; Tamaki, Junko

    2015-09-01

    Various fluoride compounds are widely used in industry. The present risk assessment study was conducted using a series of inorganic binary fluorides of the type XFn, where X(n) = Na(+), K(+), Li(+), Mg(2+), Ca(2+), Sr(2+), Ba(2+), Al(3+), Nd(3+), La(3+), Ce(3+), Sm(3+), Gd(3+), Y(3+), Yb(2+), and Zn(2+). The aqueous solutions of these salts were orally administered to 16 experimental groups (one for each of the salts tested). The levels of fluoride and N-acetyl-β-D-glucosaminidase in cumulative 24-h urine samples, together with creatinine clearance, were measured to assess possible acute renal damage. The levels of fluoride, alanine aminotransferase, and aspartate aminotransferase were also determined in serum samples to assess possible acute hepatic damage. The results reveal that sodium fluoride (NaF), potassium fluoride (KF), and zinc fluoride tetrahydrate (ZnF2·4H2O) can carry the fluoride ion into the bloodstream and that it is excreted via urine more readily than with the other compounds tested. These fluorides were assigned the highest risk impact factor. Most of the rare earth fluorides are insoluble in water, while those of groups 2 and 13 of the periodic table are slightly soluble, so they do not pose a significant risk. These findings suggest that the biological impact of fluoride depends on the accompanying counter ion and its solubility. The risk map obtained in the present study shows that the graphical visualization map technique employed is a valuable new tool to assess the toxicological risk of chemical compounds.

  20. Contrast adaptation to luminance and brightness modulations.

    PubMed

    Nagai, Takehiro; Nakayama, Kazuki; Kawashima, Yuki; Yamauchi, Yasuki

    2016-03-01

    Perceptual brightness and color contrast decrease after viewing a light temporally modulated along a certain direction in a color space, a phenomenon known as contrast adaptation. We investigated whether contrast adaptation along the luminance direction arises from modulation of luminance signals or of apparent brightness signals. The stimulus consisted of two circles on a gray background presented on a CRT monitor. In the adaptation phase, the luminance and chromaticity of one circle were temporally modulated, while the other circle was kept at a constant luminance and a color metameric with an equal-energy white. We employed two types of temporal modulation, in luminance and in brightness. Chromaticity was sinusoidally modulated along the L-M axis, leading to dissociation between luminance and brightness (the Helmholtz-Kohlrausch effect). In addition, luminance modulation was minimized in the brightness modulation, while brightness modulation was minimized in the luminance modulation. In the test phase, an asymmetric matching method was used to measure the magnitude of contrast adaptation for both modulations. Our results showed that, although contrast adaptation along the luminance direction occurred for both modulations, contrast adaptation for luminance modulation was significantly stronger than that for brightness modulation, regardless of the temporal frequency of the adaptation modulation. These results suggest that luminance modulation is more influential in contrast adaptation than brightness modulation.

  1. Low Altitude AVIRIS Data for Mapping Land Cover in Yellowstone National Park: Use of Isodata Clustering Techniques

    NASA Technical Reports Server (NTRS)

    Spruce, Joe

    2001-01-01

    Yellowstone National Park (YNP) contains a diversity of land cover. YNP managers need site-specific land cover maps, which may be produced more effectively using high-resolution hyperspectral imagery. ISODATA clustering techniques have aided operational multispectral image classification and may benefit certain hyperspectral data applications if optimally applied. In response, a study was performed for an area in northeast YNP using 11 select bands of low-altitude AVIRIS data calibrated to ground reflectance. These data were subjected to ISODATA clustering and Maximum Likelihood Classification techniques to produce a moderately detailed land cover map. The latter has good apparent overall agreement with field surveys and aerial photo interpretation.
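
    A minimal ISODATA-style clustering loop (k-means assignment plus a merge heuristic; the full algorithm also splits high-variance clusters and discards small ones) might look like the sketch below, run on synthetic multi-band spectra rather than AVIRIS data:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical 11-band pixel spectra drawn from three land-cover classes.
pixels = np.vstack([rng.normal(m, 0.05, size=(300, 11))
                    for m in (0.2, 0.5, 0.8)])

def isodata(X, k=6, merge_dist=0.15, iters=10):
    """Simplified ISODATA sketch: iterate nearest-center assignment and
    cluster-mean updates, merging near-duplicate centers as we go."""
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # assign each pixel to its nearest cluster center
        d = np.linalg.norm(X[:, None] - centers[None, :], axis=-1)
        labels = d.argmin(axis=1)
        centers = np.array([X[labels == i].mean(axis=0)
                            for i in range(len(centers)) if (labels == i).any()])
        # merge the closest pair of centers if they are too similar
        dc = np.linalg.norm(centers[:, None] - centers[None, :], axis=-1)
        np.fill_diagonal(dc, np.inf)
        i, j = np.unravel_index(dc.argmin(), dc.shape)
        if dc[i, j] < merge_dist:
            merged = (centers[i] + centers[j]) / 2.0
            centers = np.vstack([np.delete(centers, [i, j], axis=0), merged])
    d = np.linalg.norm(X[:, None] - centers[None, :], axis=-1)
    return centers, d.argmin(axis=1)

centers, labels = isodata(pixels)
print(len(centers), len(labels))
```

    In an operational workflow the resulting spectral clusters would then be assigned land-cover names by an analyst, or used as training classes for a maximum likelihood classifier as in the study above.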

  2. Can Social Semantic Web Techniques Foster Collaborative Curriculum Mapping In Medicine?

    PubMed Central

    Finsterer, Sonja; Cremer, Jan; Schenkat, Hennig

    2013-01-01

    Background Curriculum mapping, which is aimed at the systematic realignment of the planned, taught, and learned curriculum, is considered a challenging and ongoing effort in medical education. Second-generation curriculum managing systems foster knowledge management processes including curriculum mapping in order to give comprehensive support to learners, teachers, and administrators. The large quantity of custom-built software in this field indicates a shortcoming of available IT tools and standards. Objective The project reported here aims at the systematic adoption of techniques and standards of the Social Semantic Web to implement collaborative curriculum mapping for a complete medical model curriculum. Methods A semantic MediaWiki (SMW)-based Web application has been introduced as a platform for the elicitation and revision process of the Aachen Catalogue of Learning Objectives (ACLO). The semantic wiki uses a domain model of the curricular context and offers structured (form-based) data entry, multiple views, structured querying, semantic indexing, and commenting for learning objectives (“LOs”). Semantic indexing of learning objectives relies on both a controlled vocabulary of international medical classifications (ICD, MeSH) and a folksonomy maintained by the users. An additional module supporting the global checking of consistency complements the semantic wiki. Statements of the Object Constraint Language define the consistency criteria. We evaluated the application by a scenario-based formative usability study, where the participants solved tasks in the (fictional) context of 7 typical situations and answered a questionnaire containing Likert-scaled items and free-text questions. Results At present, ACLO contains roughly 5350 operational (ie, specific and measurable) objectives acquired during the last 25 months. The wiki-based user interface uses 13 online forms for data entry and 4 online forms for flexible searches of LOs, and all the forms are

  3. Comparing and Reconciling Traditional Field and Photogeologic Mapping Techniques: Lessons from the San Francisco Volcanic Field, Arizona

    NASA Technical Reports Server (NTRS)

    Skinner, J. A., Jr.; Eppler, D. B.; Bleacher, J. E.; Evans, C. A.; Feng, W.; Gruener, J.; Hurwitz, D. M.; Janoiko, B.; Whitson, P.

    2014-01-01

    Cartographic products and, specifically, geologic maps provide critical assistance for establishing physical and temporal frameworks of planetary surfaces. The technical methods that result in the creation of geologic maps vary depending on how observations are made as well as on the overall intent of the final products [1-3]. These methods tend to follow a common linear workflow, including the identification and delineation of spatially and temporally discrete materials (units), the documentation of their primary (emplacement) and secondary (erosional) characteristics, analysis of the relative and absolute age relationships between these materials, and the collation of observations and interpretations into an objective map product. The "objectivity" of a map is critical for cross-comparison with overlapping maps and topical studies, as well as for its relevance to scientific posterity. However, the "accuracy" and "correctness" of a geologic map are subject to debate, as evidenced by comparing existing geologic maps at various scales, particularly those compiled through field- and remote-based mapping efforts. Our study focuses on comparing the fidelity of (1) "Apollo-style" geologic investigations, where typically non-geologist crew members follow static traverse routes established through pre-mission planning, and (2) "traditional" field-based investigations, where geologists are given free rein to observe without preplanned routes. This abstract summarizes the regional geology wherein our study was conducted, presents the geologic map created from traditional field mapping techniques, and offers basic insights into how geologic maps created from different tactics can be reconciled in support of exploratory missions. Additional abstracts [4-6] from this study discuss various exploration and science results of these efforts.

  4. Mapping Tamarix: New techniques for field measurements, spatial modeling and remote sensing

    NASA Astrophysics Data System (ADS)

    Evangelista, Paul H.

    peak growing months. These studies demonstrate that new techniques can further our understanding of tamarisk's impacts on ecosystem processes, predict potential distribution and new invasions, and improve our ability to detect occurrence using remote sensing techniques. Collectively, the results of my studies may increase our ability to map tamarisk distributions and better quantify its impacts over multiple spatial and temporal scales.

  5. Classification of a set of vectors using self-organizing map- and rule-based technique

    NASA Astrophysics Data System (ADS)

    Ae, Tadashi; Okaniwa, Kaishirou; Nosaka, Kenzaburou

    2005-02-01

    Various objects, such as pictures, music, and texts, exist in our environment. We form a view of these objects by looking, reading, or listening. Our view is deeply connected to our behaviors and is very important for understanding them. We form a view of an object and then decide the next action (data selection, etc.) based on that view. Such a series of actions constructs a sequence. Therefore, we propose a method that acquires a view as a vector derived from several words describing the view, and we apply the vector to sequence generation. We focus on sequences of data that a user selects from a multimedia database containing pictures, music, movies, etc. These data cannot be stereotyped, because each user's view of them differs. Therefore, we represent the structure of the multimedia database with the vector representing the user's view and the stereotyped vector, and we acquire sequences containing this structure as elements. Such vectors can be classified by a self-organizing map (SOM). A hidden Markov model (HMM) is a method for generating sequences. We therefore use an HMM in which each state corresponds to a representative vector of the user's view, and we acquire sequences reflecting changes in the user's view. We call this the Vector-state Markov Model (VMM). We introduce rough set theory as a rule-based technique, which plays the role of classifying sets of data such as the sets of "Tour".
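
    A minimal sketch of classifying such view vectors with a one-dimensional self-organizing map follows; the vectors and all parameters are synthetic, and this is a generic SOM, not the authors' exact configuration:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical "view vectors": 5-dimensional scores derived from words a
# user associates with an object, drawn from three view types.
views = np.vstack([rng.normal(m, 0.1, size=(50, 5)) for m in (0.1, 0.5, 0.9)])

# 1-D self-organizing map: nodes compete for each input, and the winner
# plus its neighbours move toward the input.
nodes = rng.uniform(0.0, 1.0, size=(10, 5))
for t, v in enumerate(rng.permutation(views)):
    lr = 0.5 * (1.0 - t / len(views))           # decaying learning rate
    win = np.linalg.norm(nodes - v, axis=1).argmin()
    for i in range(len(nodes)):
        h = np.exp(-((i - win) ** 2) / 2.0)     # neighbourhood kernel
        nodes[i] += lr * h * (v - nodes[i])

# Each view vector is then classified by its best-matching node.
labels = np.linalg.norm(views[:, None] - nodes[None, :], axis=-1).argmin(axis=1)
print(len(set(labels.tolist())))
```

    The best-matching nodes play the role of the representative view vectors that become the states of the Vector-state Markov Model described above.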

  6. Novel Technique for Cardiac Electromechanical Mapping with Magnetic Resonance Imaging Tagging and an Epicardial Electrode Sock

    PubMed Central

    Faris, Owen P.; Evans, Frank J.; Ennis, Daniel B.; Helm, Patrick A.; Taylor, Joni L.; Chesnick, A. Scott; Guttman, Michael A.; Ozturk, Cengizhan; Mcveigh, Elliot R.

    2005-01-01

    Near-simultaneous measurements of electrical and mechanical activation over the entire ventricular surface are now possible using magnetic resonance imaging tagging and a multielectrode epicardial sock. This new electromechanical mapping technique is demonstrated in the ventricularly paced canine heart. A 128-electrode epicardial sock and pacing electrodes were placed on the hearts of four anesthetized dogs. In the magnetic resonance scanner, tagged cine images (8–15 ms/frame) and sock electrode recordings (1000 Hz) were acquired under right-ventricular pacing and temporally referenced to the pacing stimulus. Electrical recordings were obtained during intermittent breaks in image acquisition, so that both data sets represented the same physiologic state. Since the electrodes were not visible in the images, electrode recordings and cine images were spatially registered with Gd-DTPA markers attached to the sock. Circumferential strain was calculated at locations corresponding to electrodes. For each electrode location, electrical and mechanical activation times were calculated and relationships between the two activation patterns were demonstrated. This method holds promise for improving understanding of the relationships between the patterns of electrical activation and contraction in the heart. PMID:12723684

  7. Studying AGN Feedback with Galactic Outflows in Luminous Obscured Quasar

    NASA Astrophysics Data System (ADS)

    Sun, Ai-Lei

    2016-01-01

    Feedback from active galactic nuclei (AGN) has been proposed as an important quenching mechanism to suppress star formation in massive galaxies. We investigate the most direct form of AGN feedback - galactic outflows - in the most luminous obscured AGN (L > 10^45 erg/s) from the SDSS sample in the nearby universe (z < 0.2). Using ALMA and Magellan observations to target molecular and ionized outflows, we find that luminous AGN can impact the dynamics and phase of the galactic medium, confirming the complex multi-phase and multi-scale nature of the feedback phenomenon. In particular, we find that most of these luminous AGN host ionized outflows. The outflow size, velocity, and energetics correlate with the AGN luminosity, and can be very extended (r > 10 kpc) and fast (v > 1000 km/s) for the most luminous objects. I end by presenting a new technique to find extended ionized outflows using broadband imaging surveys and to characterize their occurrence rate, morphology, size distribution, and dependence on AGN luminosity. This technique will open a new window for feedback studies in the era of large-scale optical imaging surveys, e.g., HSC and then LSST.

  8. Mapping of different structures on large area of granite sample using laser-ablation based analytical techniques, an exploratory study

    NASA Astrophysics Data System (ADS)

    Novotný, K.; Kaiser, J.; Galiová, M.; Konečná, V.; Novotný, J.; Malina, R.; Liška, M.; Kanický, V.; Otruba, V.

    2008-10-01

    Laser-ablation based analytical techniques represent a simple way for fast chemical analysis of different materials. In this work, an exploratory study of multi-element (Ca, Al, Fe, Mn) mapping of a granite sample surface was performed by laser-induced breakdown spectroscopy (LIBS) and subsequently by laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS) analysis. The operating parameters (e.g. pulse energy, ablation-crater size) were optimized for both techniques in order to achieve the appropriate conditions for two-dimensional high-resolution compositional mapping of mineral microstructures over large sample areas. The sample was scanned with 100 × 100 individual sample points to map an area of 20 × 20 mm². The normalized signals were used to construct contour plots, which were colored according to the local distribution of the selected elements. The results of the two laser-based methods were compared and found to be similar.
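
The mapping workflow the record describes (per-point signals normalized, then arranged on the 100 × 100 raster grid) might be sketched as follows. The internal-standard normalization and all intensity values are assumptions for illustration, not details from the study:

```python
import numpy as np

# Hypothetical raw intensities for one element at 100 x 100 sample
# points, acquired in raster order (values are illustrative only)
rng = np.random.default_rng(1)
raw = rng.uniform(50, 150, size=100 * 100)
internal_std = rng.uniform(900, 1100, size=100 * 100)  # e.g. a matrix line

# Normalize each point by an internal-standard signal to damp
# shot-to-shot laser fluctuation, then reshape to the 20 x 20 mm grid
norm = raw / internal_std
element_map = norm.reshape(100, 100)  # one pixel per 0.2 mm step
print(element_map.shape)  # → (100, 100)
```

The resulting 2D array is what a contour-plotting routine would consume to produce the colored elemental distribution maps mentioned in the abstract.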

  9. The Effect of Using the Story-Mapping Technique on Developing Tenth Grade Students' Short Story Writing Skills in EFL

    ERIC Educational Resources Information Center

    Ibnian, Salem Saleh Khalaf

    2010-01-01

    The current study aimed at investigating the effect of using the story-mapping technique on developing tenth grade students' short story writing skills in EFL. The study attempted to answer the following questions: (1) What are the short story writing skills needed for tenth grade students in EFL?; and (2) What is the effect of using the…

  10. Use of Portfolio Assessment Technique in Teaching Map Sketching and Location in Secondary School Geography in Jos, Nigeria

    ERIC Educational Resources Information Center

    Ugodulunwa, Christiana; Wakjissa, Sayita

    2015-01-01

    This study investigated the use of portfolio assessment technique in teaching map sketching and location in geography in Jos, Nigeria. It adopted a quasi-experimental design. Two schools were selected using a table of random numbers from a population of 51 schools in Jos South and assigned to each of experimental and control group. The…

  11. A first look at the application of signal extraction techniques to the analysis of body surface potential maps

    NASA Technical Reports Server (NTRS)

    Weinstein, S. B.; Mcneel, M. L.; Matthews, E.; Fischmann, E. J.

    1976-01-01

    Partial body surface potential maps from both normal subjects and subjects with independently diagnosed myocardial infarcts are visually compared from superimposed plots. A correlation test is devised to distinguish the two groups, with the reference waveform determined by means of a gradient-search algorithm. The results are encouraging, and suggest further investigation of these techniques as a future diagnostic tool.

  12. Notes for the improvement of a remote sensing multispectral data non-supervised classification and mapping technique

    NASA Technical Reports Server (NTRS)

    Dalton, C. C.

    1973-01-01

    Examined are: (1) the sequential clustering technique for the unsupervised automatic classification and mapping of earth resources satellite data, (2) theoretical analysis of the tests which were used, and (3) derivation of an alternative set of tests and their necessary algorithm.

  13. The Ozone Mapping and Profiler Suite: Extending the BUV Technique to Meet Future Ozone Measurement Requirements

    NASA Astrophysics Data System (ADS)

    Seftor, C. J.; Larsen, J. C.; Remund, Q.; Rodriguez, J. V.; Flynn, L. E.; Hilsenrath, E.

    2003-12-01

    Measurements from the original TOMS and SBUV instruments onboard the Nimbus-7 spacecraft form one of the cornerstones of satellite-based studies of long-term ozone trends. These sensors established the use of the backscattered ultraviolet (BUV) technique, along with calibration techniques based on the measurement of solar flux, to determine and monitor total column and profile ozone amounts on a global, daily basis. They also provided the foundation for the design and development of the Meteor-3, ADEOS, and Earth Probe TOMS sensors and NOAA's successful SBUV/2 series of ozone sensors, whose total column and profile ozone measurements continue through today to extend the set of ozone measurements begun by Nimbus-7. The Ozone Mapping and Profiler Suite (OMPS) is a new generation of hyperspectral BUV sensors currently in development for the National Polar-orbiting Operational Environmental Satellite System (NPOESS). These sensors, whose first launch will be onboard the NPOESS Preparatory Project (NPP) satellite in 2006, are designed to continue this long-term data set while meeting more stringent accuracy, precision, and other requirements than any of the total column and profile ozone retrieval systems currently in orbit. In developing the OMPS suite to meet the NPOESS requirements, we systematically analyzed the performance of current BUV systems. We determined that the TOMS sensor and algorithm provided a strong starting point for the development of the OMPS total column ozone retrieval system, and we identified areas where design improvements in both would lead to the performance necessary to meet the NPOESS requirements.
Since we also determined that an SBUV-type nadir-looking sensor would not meet the NPOESS profile ozone requirements, the OMPS system includes both a nadir profiling sensor to provide measurements that directly link to the SBUV and SBUV/2 heritage dataset and a sensor-algorithm system that uses the limb-scattered BUV/Visible technique pioneered by SOLSE

  14. Soil zymography - A novel technique for mapping enzyme activity in the rhizosphere

    NASA Astrophysics Data System (ADS)

    Spohn, Marie

    2014-05-01

    The effect of plant roots on microbial activity in soil at the millimeter scale is poorly understood. One reason for this is that spatially explicit methods for the study of microbial activity in soil are limited. Here we present a quantitative in situ technique for mapping the distribution of exoenzymes in soil, along with some results on the effects of roots on exoenzyme activity in soil. In the first study we showed that both acid and alkaline phosphatase activity were up to 5.4-times larger in the rhizosphere of Lupinus albus than in the bulk soil. While acid phosphatase activity (produced by roots and microorganisms) was closely associated with roots, alkaline phosphatase activity (produced only by microorganisms) was more widely distributed, leading to a 2.5-times larger area of activity of alkaline than of acid phosphatase. These results indicate a spatial differentiation of different ecophysiological groups of organic-phosphorus-mineralizing organisms in the rhizosphere, which might alleviate a potential competition for phosphorus between them. In a second study, cellulase, chitinase and phosphatase activities were analyzed in the presence of living Lupinus polyphyllus roots and dead/dying roots (in the same soils 10, 20 and 30 days after cutting the L. polyphyllus shoots). The activity of all three enzymes was 9.0 to 13.9-times higher at the living roots than in the bulk soil. Microhotspots of cellulase, chitinase and phosphatase activity in the soil were found up to 60 mm away from the living roots. Ten days after shoot cutting, the areas of high cellulase and phosphatase activity extended up to 55 mm away from the nearest root, while the extension of the area of chitinase activity did not change significantly. At the root, cellulase and chitinase activity increased first at the root tips after shoot cutting and showed maximal activity 20 days after shoot cutting. 
The number and activity of microhotspots of chitinase activity was maximal 10

  15. Side-scan sonar mapping: Pseudo-real-time processing and mosaicking techniques

    SciTech Connect

    Danforth, W.W.; Schwab, W.C.; O'Brien, T.F.; Karl, H.

    1990-05-01

    The US Geological Survey (USGS) surveyed 1,000 km² of the continental shelf off San Francisco during a 17-day cruise, using a 120-kHz side-scan sonar system, and produced a digitally processed sonar mosaic of the survey area. The data were processed and mosaicked in real time using software developed at the Lamont-Doherty Geological Observatory and modified by the USGS, a substantial task due to the enormous amount of data produced by high-resolution side-scan systems. Approximately 33 megabytes of data were acquired every 1.5 hr. The real-time sonar images were displayed on a PC-based workstation and the data were transferred to a UNIX minicomputer where the sonar images were slant-range corrected, enhanced using an averaging method of desampling and a linear-contrast stretch, merged with navigation, geographically oriented at a user-selected scale, and finally output to a thermal printer. The hard-copy output was then used to construct a mosaic of the survey area. The final product of this technique is a UTM-projected map-mosaic of sea-floor backscatter variations, which could be used, for example, to locate appropriate sites for sediment sampling to ground truth the sonar imagery while still at sea. More importantly, reconnaissance surveys of this type allow for the analysis and interpretation of the mosaic during a cruise, thus greatly reducing the preparation time needed for planning follow-up studies of a particular area.

  16. Enhancing Local Climate Projections of Precipitation: Assets and Limitations of Quantile Mapping Techniques for Statistical Downscaling

    NASA Astrophysics Data System (ADS)

    Ivanov, Martin; Kotlarski, Sven; Schär, Christoph

    2015-04-01

    The Swiss CH2011 scenarios provide a portfolio of climate change scenarios for the region of Switzerland, specifically tailored for use in climate impact research. Although widely applied by a variety of end-users, these scenarios are subject to several limitations related to the underlying delta change methodology. Examples are difficulties to appropriately account for changes in the spatio-temporal variability of meteorological fields and for changes in extreme events. The recently launched ELAPSE project (Enhancing local and regional climate change projections for Switzerland) is connected to the EU COST Action VALUE (www.value-cost.eu) and aims at complementing CH2011 with further scenario products, including a bias-corrected version of daily scenarios at the site scale. For this purpose the well-established empirical quantile mapping (QM) methodology is employed. Here, daily temperature and precipitation output of 15 GCM-RCM model chains of the ENSEMBLES project is downscaled and bias-corrected to match observations at weather stations in Switzerland. We consider established QM techniques based on all empirical quantiles or on linear interpolation between the empirical percentiles. In an attempt to improve the downscaling of extreme precipitation events, we also apply, for the first time in the context of QM, a parametric approximation of the daily precipitation distribution by a dynamically weighted mixture of a Gamma distribution for the bulk and a Pareto distribution for the right tail. All techniques are evaluated and intercompared in a cross-validation framework. The statistical downscaling substantially improves virtually all considered distributional and temporal characteristics as well as their spatial distribution. The empirical methods have, in general, very similar performance. The parametric method does not show an improvement over the empirical ones. Critical sites and seasons are highlighted and discussed. 
Special emphasis is placed on investigating the
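
The empirical quantile-mapping step described in this record can be sketched in a few lines. This is a generic illustration on synthetic data, not the CH2011/ELAPSE implementation:

```python
import numpy as np

def empirical_quantile_map(model_hist, obs_hist, model_target):
    """Map model_target through the transfer function that aligns the
    historical model quantiles with the observed quantiles."""
    p = np.linspace(1, 99, 99)              # percentile levels
    model_q = np.percentile(model_hist, p)  # model quantiles
    obs_q = np.percentile(obs_hist, p)      # observed quantiles
    # Linear interpolation between percentiles; values outside the
    # calibration range are clamped to the outermost quantile
    return np.interp(model_target, model_q, obs_q)

# Toy example: a model that runs 2 degrees too warm everywhere
rng = np.random.default_rng(0)
obs = rng.normal(10.0, 3.0, 5000)   # "observed" daily temperatures
model = obs + 2.0                   # biased model series
corrected = empirical_quantile_map(model, obs, model)
print(abs(corrected.mean() - obs.mean()) < 0.1)  # → True
```

The parametric variant mentioned in the abstract would replace the empirical percentiles with quantiles of a fitted Gamma-Pareto mixture, which matters mainly for extrapolating beyond the calibration range in the precipitation tail.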

  17. The Art of Teaching Map and Compass: Instructional Techniques, Curricular Formats and Practical Field Exercises.

    ERIC Educational Resources Information Center

    Watters, Ron

    This paper discusses the value of teaching map and compass skills in the classroom or an outdoor situation. Navigation is the most basic of all outdoor skills. A map and compass curriculum can be taught to anyone, is inexpensive, and is easily incorporated in a variety of educational situations. General teaching principles are outlined: (1) start…

  18. Applications of remote sensing techniques to county land use and flood hazard mapping

    NASA Technical Reports Server (NTRS)

    Clark, R. B.; Conn, J. S.; Miller, D. A.; Mouat, D. A.

    1975-01-01

    The application of remote sensing in Arizona is discussed. Land use and flood hazard mapping completed by the Applied Remote Sensing Program is described. Areas subject to periodic flood inundation are delineated and land use maps monitoring the growth within specific counties are provided.

  19. A regional ionospheric TEC mapping technique over China and adjacent areas on the basis of data assimilation

    NASA Astrophysics Data System (ADS)

    Aa, Ercha; Huang, Wengeng; Yu, Shimei; Liu, Siqing; Shi, Liqin; Gong, Jiancun; Chen, Yanhong; Shen, Hua

    2015-06-01

    In this paper, a regional total electron content (TEC) mapping technique over China and adjacent areas (70°E-140°E and 15°N-55°N) is developed on the basis of a Kalman filter data assimilation scheme driven by Global Navigation Satellite Systems (GNSS) data from the Crustal Movement Observation Network of China and International GNSS Service. The regional TEC maps can be generated accordingly with the spatial and temporal resolution being 1°×1° and 5 min, respectively. The accuracy and quality of the TEC mapping technique have been validated through the comparison with GNSS observations, the International Reference Ionosphere model values, the global ionosphere maps from Center for Orbit Determination of Europe, and the Massachusetts Institute of Technology Automated Processing of GPS TEC data from Madrigal database. The verification results indicate that great systematic improvements can be obtained when data are assimilated into the background model, which demonstrates the effectiveness of this technique in providing accurate regional specification of the ionospheric TEC over China and adjacent areas.
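
The Kalman-filter analysis step at the heart of such a data-assimilation scheme can be illustrated for a single scalar TEC value. The numbers are invented, and the real scheme operates on a full regional state vector rather than one pixel:

```python
def kalman_update(x_b, P_b, y, R, H=1.0):
    """One Kalman analysis step for a scalar state.
    x_b, P_b: background (model) TEC and its error variance
    y, R: observation and its error variance
    H: observation operator (identity here)."""
    K = P_b * H / (H * P_b * H + R)   # Kalman gain
    x_a = x_b + K * (y - H * x_b)     # analysis state
    P_a = (1.0 - K * H) * P_b         # analysis error variance
    return x_a, P_a

# Background model says 20 TECU with large uncertainty; a GNSS-derived
# observation of 28 TECU is more trusted, so the analysis moves toward it
x_a, P_a = kalman_update(x_b=20.0, P_b=16.0, y=28.0, R=4.0)
print(round(x_a, 2), round(P_a, 2))  # → 26.4 3.2
```

Note how the analysis error variance (3.2) is smaller than both the background (16) and observation (4) variances, which is why assimilating GNSS data systematically improves on the background model alone.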

  20. Benthic habitat mapping: A review of progress towards improved understanding of the spatial ecology of the seafloor using acoustic techniques

    NASA Astrophysics Data System (ADS)

    Brown, Craig J.; Smith, Stephen J.; Lawton, Peter; Anderson, John T.

    2011-05-01

    This review examines the various strategies and methods used to produce benthic habitat maps using acoustic remote sensing techniques, coupled with in situ sampling. The applications of three acoustic survey techniques are examined in detail: single-beam acoustic ground discrimination systems, sidescan sonar systems, and multi-beam echo sounders. Over the past decade we have witnessed the nascence of the field of benthic habitat mapping and, on the evidence of the literature reviewed in this paper, have seen a rapid evolution in the level of sophistication in our ability to image and thus map seafloor habitats. As acoustic survey tools have become ever more complex, new methods have been tested to segment, classify and combine these data with biological ground truth sample data. Although the specific methods used to derive habitat maps vary considerably, the review indicates that studies can generally be categorized into one of three over-arching strategies: 1) Abiotic surrogate mapping; 2) Assemble first, predict later (unsupervised classification); 3) Predict first, assemble later (supervised classification). Whilst there is still no widely accepted agreement on the best way to produce benthic habitat maps, all three strategies provide valuable map resources to support management objectives. Whilst there is still considerable work to be done before we can answer many of the outstanding technological, methodological, ecological and theoretical questions that have been raised here, the review concludes that the advent of spatial ecological studies founded on high-resolution environmental data sets will undoubtedly help us to examine patterns in community and species distributions. This is a vital first step in unraveling ecological complexities and thus providing improved spatial information for management of marine systems.

  1. Facilitating the exploitation of ERTS-1 imagery using snow enhancement techniques. [geological fault maps of Massachusetts and Connecticut

    NASA Technical Reports Server (NTRS)

    Wobber, F. J. (Principal Investigator); Martin, K. R.; Amato, R. V.; Leshendok, T.

    1973-01-01

    The author has identified the following significant results. The applicability of ERTS-1 imagery for geological fracture mapping regardless of season has been repeatedly confirmed. The enhancement provided by a differential cover of snow increases the number and length of fracture-lineaments which can be detected with ERTS-1 data and accelerates the fracture mapping process for a variety of practical applications. The geological mapping benefits of the program will be realized in geographic areas where data are most needed - complex glaciated terrain and areas of deep residual soils. ERTS-1 derived fracture-lineament maps, which provide detail well in excess of existing geological maps, are now available in the Massachusetts-Connecticut area. The large quantity of new data provided by ERTS-1 may accelerate and improve field mapping now in progress in the area. Numerous other user groups have requested data on the techniques. This represents a major change in operating philosophy for groups who to date judged that snow obscured geological detail.

  2. Impact of Intestinal Microbiota on Intestinal Luminal Metabolome

    PubMed Central

    Matsumoto, Mitsuharu; Kibe, Ryoko; Ooga, Takushi; Aiba, Yuji; Kurihara, Shin; Sawaki, Emiko; Koga, Yasuhiro; Benno, Yoshimi

    2012-01-01

    Low–molecular-weight metabolites produced by the intestinal microbiota play a direct role in health and disease. In this study, we analyzed the colonic luminal metabolome using capillary electrophoresis with time-of-flight mass spectrometry (CE-TOFMS), a novel technique for analyzing and differentially displaying metabolic profiles, in order to clarify the metabolite profiles in the intestinal lumen. CE-TOFMS identified 179 metabolites in the colonic luminal metabolome. Of these, 48 were present in significantly higher concentrations and/or incidence in germ-free (GF) mice than in Ex-GF mice (p < 0.05), 77 were present in significantly lower concentrations and/or incidence in GF mice (p < 0.05), and 56 showed no difference in concentration or incidence between GF and Ex-GF mice. These results indicate that the intestinal microbiota strongly influences the colonic luminal metabolome, and that a comprehensive understanding of the intestinal luminal metabolome is critical for clarifying host-intestinal bacterial interactions. PMID:22724057

  3. Comparison of manually produced and automated cross country movement maps using digital image processing techniques

    NASA Technical Reports Server (NTRS)

    Wynn, L. K.

    1985-01-01

    The Image-Based Information System (IBIS) was used to automate the cross country movement (CCM) mapping model developed by the Defense Mapping Agency (DMA). Existing terrain factor overlays and a CCM map, produced by DMA for the Fort Lewis, Washington area, were digitized and reformatted into geometrically registered images. Terrain factor data from Slope, Soils, and Vegetation overlays were entered into IBIS, and were then combined utilizing IBIS-programmed equations to implement the DMA CCM model. The resulting IBIS-generated CCM map was then compared with the digitized manually produced map to test similarity. The numbers of pixels comprising each CCM region were compared between the two map images, and percent agreement between each pair of regional counts was computed. The mean percent agreement equalled 86.21%, with an areally weighted standard deviation of 11.11%. Calculation of Pearson's correlation coefficient yielded +0.997. In some cases, the IBIS-calculated map code differed from the DMA codes: analysis revealed that IBIS had calculated the codes correctly. These highly positive results demonstrate the power and accuracy of IBIS in automating models which synthesize a variety of thematic geographic data.
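
A per-region agreement comparison of two co-registered categorical maps, in the spirit of the study's pixel-count comparison, might look like the sketch below (toy data; the DMA/IBIS study's exact agreement metric may differ from this intersection-over-union form):

```python
import numpy as np

def percent_agreement(map_a, map_b):
    """Per-class percent agreement between two co-registered
    categorical map images (e.g. CCM region codes)."""
    assert map_a.shape == map_b.shape
    agreement = {}
    for code in np.union1d(map_a, map_b):
        in_a = map_a == code
        in_b = map_b == code
        union = np.logical_or(in_a, in_b).sum()
        inter = np.logical_and(in_a, in_b).sum()
        agreement[int(code)] = 100.0 * inter / union
    return agreement

# Two toy 4x4 "maps" of CCM codes that differ in a single pixel
manual = np.array([[1, 1, 2, 2]] * 4)
auto = manual.copy()
auto[0, 0] = 2
print(percent_agreement(manual, auto))  # code 1: 87.5, code 2: ≈88.9
```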

  4. Analysis of terrain map matching using multisensing techniques for applications to autonomous vehicle navigation

    NASA Technical Reports Server (NTRS)

    Page, Lance; Shen, C. N.

    1991-01-01

    This paper describes skyline-based terrain matching, a new method for locating the vantage point of laser range-finding measurements on a global map previously prepared by satellite or aerial mapping. Skylines can be extracted from the range-finding measurements and modelled from the global map, and are represented in parametric, cylindrical form with azimuth angle as the independent variable. The three translational parameters of the vantage point are determined with a three-dimensional matching of these two sets of skylines.

  5. Properties of unusually luminous supernovae

    NASA Astrophysics Data System (ADS)

    Pan, Tony Shih Arng

    This thesis is a theoretical study of the progenitors, event rates, and observational properties of unusually luminous supernovae (SNe), and aims to identify promising directions for future observations. In Chapter 2, we present model light curves and spectra of pair-instability supernovae (PISNe) over a range of progenitor masses and envelope structures for Pop III stars. We calculate the rates and detectability of PISNe, core-collapse supernovae (CCSNe), and Type Ia SNe at the Epoch of Reionization with the James Webb Space Telescope (JWST), which can be used to determine the contribution of Pop III versus Pop II stars toward ionizing the universe. Although CCSNe are the least intrinsically luminous supernovae, Chapter 5 shows that a JWST survey targeting known galaxy clusters with Einstein radii > 35" should discover gravitationally lensed CCSNe at redshifts exceeding z = 7--8. In Chapter 3, we explain how the Pop II/I progenitors of observed PISNe in the local universe can be created via mergers in runaway collisions in young, dense star clusters, despite copious mass loss via line-driven winds. The PISN rate from this mechanism is consistent with the observed volumetric rate, and the Large Synoptic Survey Telescope could discover ~10^2 such PISNe per year. In Chapter 4, we identify 10 star clusters which may host PISN progenitors with masses up to 600 solar masses formed via runaway collisions. We estimate the probabilities of these very massive stars being in eclipsing binaries to be ≳ 30%, and find that their transits can be detected even under the contamination of the background cluster light, due to mean transit depths of ~10^6 solar luminosities. In Chapter 6, we show that there could be X-ray analogues of optically super-luminous SNe that are powered by the conversion of the kinetic energy of SN ejecta into radiation upon its collision with a dense but optically-thin circumstellar shell. 
We find shell configurations that can convert a large fraction of the SN

  6. An automated technique for detailed μ-FTIR mapping of diamond and spectral deconvolution

    NASA Astrophysics Data System (ADS)

    Howell, Dan; Griffin, Bill; O'Neill, Craig; O'Reilly, Suzanne; Pearson, Norman; Handley, Heather

    2010-05-01

    other commonly found defects and impurities, whether intrinsic defects such as platelets, extrinsic defects such as hydrogen or boron atoms, or inclusions of minerals or fluids. Recent technological developments in the field of spectroscopy allow detailed μ-FTIR analysis to be performed rapidly in an automated fashion. The Nicolet iN10 microscope has an integrated design that maximises signal throughput and allows spectra to be collected with greater efficiency than is possible with conventional μ-FTIR spectrometer-microscope systems. Combining this with a computer-controlled x-y stage allows several thousand spectra to be measured automatically in only a few hours. This affords us the ability to record 2D IR maps of diamond plates with minimal effort, but has created the need for an automated technique to process the large quantities of IR spectra and obtain quantitative data from them. We will present new software routines that can process large batches of IR spectra, including baselining, conversion to absorption coefficient, and deconvolution to identify and quantify the various nitrogen components. Possible sources of error in each step of the process will be highlighted so that the data produced can be critically assessed. The end result will be the production of various false-colour 2D maps that show the distribution of nitrogen concentrations and aggregation states, as well as other identifiable components.
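
The batch baselining step mentioned in this record can be illustrated with a deliberately simple linear-baseline correction. Real diamond IR spectra need more sophisticated baselines, so this is only a sketch of the idea, with a synthetic spectrum:

```python
import numpy as np

def linear_baseline_correct(wavenumbers, spectrum, anchors=(0, -1)):
    """Subtract a straight-line baseline drawn between two anchor
    points of the spectrum (a stand-in for the batch baselining step)."""
    i, j = anchors
    slope = (spectrum[j] - spectrum[i]) / (wavenumbers[j] - wavenumbers[i])
    baseline = spectrum[i] + slope * (wavenumbers - wavenumbers[i])
    return spectrum - baseline

# Synthetic spectrum: a linear tilt plus one Gaussian "absorption peak"
wn = np.linspace(600, 4000, 1000)
spec = 0.001 * wn + np.exp(-((wn - 1282) / 30) ** 2)
corrected = linear_baseline_correct(wn, spec)
# After correction the endpoints sit at zero and only the peak remains
print(bool(np.isclose(corrected[0], 0.0) and np.isclose(corrected[-1], 0.0)))  # → True
```

In a real pipeline this per-spectrum correction would run over every spectrum in the map before conversion to absorption coefficient and nitrogen-component deconvolution.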

  7. Thermal mapping of mountain slopes on Mars by application of a Differential Apparent Thermal Inertia technique

    NASA Astrophysics Data System (ADS)

    Kubiak, Marta; Mège, Daniel; Gurgurewicz, Joanna; Ciazela, Jakub

    2015-04-01

    Thermal inertia (P) is an important property of geologic surfaces that essentially describes the resistance to temperature (T) change as heat is added. Most remote sensing data describe the surface only. P is a volume property that is sensitive to the composition of the subsurface, down to the depth reached by the diurnal heating wave. As direct measurement of P is not possible on Mars, thermal inertia models (Fergason et al., 2006) and deductive methods (the Apparent Thermal Inertia, ATI, and Differential Apparent Thermal Inertia, DATI) are used to estimate it. ATI is computed as (1 - A) / (Tday - Tnight), where A is albedo. Due to the lack of thermal daytime images with maximum land surface temperature (LST) and nighttime images with minimum LST in the Valles Marineris region, the ATI method is difficult to apply. Instead, we have explored the DATI technique (Sabol et al., 2006). DATI is calculated over shorter time (t) intervals with a high |ΔT/Δt| gradient (in the morning or in the afternoon) and is proportional to the day/night temperature difference (ATI), and hence to P. Mars, which exhibits exceptionally high |ΔT/Δt| gradients due to the lack of vegetation and its thin atmosphere, is especially suitable for the DATI approach. Here we present a new deductive method for high-resolution differential apparent thermal inertia (DATI) mapping for areas of highly contrasted relief (e.g., Valles Marineris). Contrary to thermal inertia models, our method takes local relief characteristics (slopes and aspects) into account. This is crucial, as topography highly influences A and ΔT measurements. In spite of the different approach, DATI values in the flat areas are in the same range as the values obtained by Fergason et al. (2006). They provide, however, more accurate information for geological interpretations of hilly or mountainous terrains. Sabol, D. E., Gillespie, A. R., McDonald, E., and Danilina, I., 2006. Differential Thermal Inertia of Geological Surfaces. In
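
The ATI formula quoted in the record, ATI = (1 - A) / (Tday - Tnight), is straightforward to apply per pixel. The example values below are invented for illustration:

```python
import numpy as np

def apparent_thermal_inertia(albedo, t_day, t_night):
    """ATI = (1 - A) / (Tday - Tnight), computed per pixel.
    albedo: broadband albedo map (0-1); t_day, t_night: temperature
    rasters (K) from the warmest and coldest overpasses."""
    dT = t_day - t_night
    # Mask pixels with no usable day/night temperature contrast
    return np.where(dT > 0,
                    (1.0 - albedo) / np.where(dT > 0, dT, 1.0),
                    np.nan)

# Two pixels with the same albedo: the one with the smaller diurnal
# temperature swing has the higher apparent thermal inertia
A = np.array([0.2, 0.2])
t_day = np.array([290.0, 270.0])
t_night = np.array([250.0, 250.0])
print(apparent_thermal_inertia(A, t_day, t_night))  # → [0.02 0.04]
```

DATI follows the same pattern but uses temperatures from two closely spaced acquisitions within a high |ΔT/Δt| interval instead of the full day/night pair, which is why it remains usable where maximum/minimum LST images are unavailable.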

  8. A Hydrogeologic Map of the Death Valley Region, Nevada and California, Developed Using GIS Techniques

    USGS Publications Warehouse

    Faunt, Claudia C.; D'Agnese, Frank A.; Turner, A. Keith

    1997-01-01

    In support of Yucca Mountain site characterization studies, a hydrogeologic framework was developed, and a hydrogeologic map was constructed for the Death Valley region. The region, covering approximately 100,000 km² along the Nevada-California border near Las Vegas, is characterized by isolated mountain ranges juxtaposed against broad, alluvium-filled valleys. Geologic conditions are typical of the Basin and Range Province; a variety of sedimentary and igneous intrusive and extrusive rocks have been subjected to both compressional and extensional deformation. The regional ground-water flow system can best be described as a series of connected intermontane basins in which ground-water flow occurs in basin-fill deposits, carbonate rocks, clastic rocks, and volcanic rocks. Previous investigations have developed more site-specific hydrogeologic relationships; however, few have described all the lithologies within the Death Valley regional ground-water flow system. Information required to characterize the hydrogeologic units in the region was obtained from regional geologic maps and reports. Map data were digitized from regional geologic maps and combined into a composite map using a geographic information system. This map was simplified to show 10 laterally extensive hydrogeologic units with distinct hydrologic properties. The hydraulic conductivity values for the hydrogeologic units range over 15 orders of magnitude due to the variability in burial depth and degree of fracturing.

  9. A hydrogeologic map of the Death Valley region, Nevada, and California, developed using GIS techniques

    SciTech Connect

    Faunt, C.C.; D'Agnese, F.A.; Turner, A.K.

    1997-12-31

    In support of Yucca Mountain site characterization studies, a hydrogeologic framework was developed, and a hydrogeologic map was constructed for the Death Valley region. The region, covering approximately 100,000 km² along the Nevada-California border near Las Vegas, is characterized by isolated mountain ranges juxtaposed against broad, alluvium-filled valleys. Geologic conditions are typical of the Basin and Range Province; a variety of sedimentary and igneous intrusive and extrusive rocks have been subjected to both compressional and extensional deformation. The regional ground-water flow system can best be described as a series of connected intermontane basins in which ground-water flow occurs in basin-fill deposits, carbonate rocks, clastic rocks, and volcanic rocks. Previous investigations have developed more site-specific hydrogeologic relationships; however, few have described all the lithologies within the Death Valley regional ground-water flow system. Information required to characterize the hydrogeologic units in the region was obtained from regional geologic maps and reports. Map data were digitized from regional geologic maps and combined into a composite map using a geographic information system. This map was simplified to show 10 laterally extensive hydrogeologic units with distinct hydrologic properties. The hydraulic conductivity values for the hydrogeologic units range over 15 orders of magnitude due to the variability in burial depth and degree of fracturing.

  10. Apparent speed increases at low luminance

    PubMed Central

    Vaziri-Pashkam, Maryam; Cavanagh, Patrick

    2009-01-01

    To investigate the effect of luminance on apparent speed, subjects adjusted the speed of a low-luminance rotating grating (0.31 cd/m²) to match that of a high-luminance one (1260 cd/m²). Above 4 Hz, subjects overestimated the speed of the low-luminance grating. This overestimation increased as a function of temporal rate and reached 30% at temporal rates around 10 Hz. The speed overestimation became significant once the lower luminance was 2.4 log units below the high-luminance comparison. Next, the role of motion smear in speed overestimation was examined. First, it was shown that the length of the perceived motion smear increased at low luminances. Second, the length of the visible smear was manipulated by changing the presentation time of the stimuli. Speed overestimation was reduced at shorter presentation times. Third, the speed of a blurred stimulus was compared to that of a stimulus with sharp edges, and the blurred stimulus was judged to move faster. These results indicate that the length of motion smear following a target contributes to its perceived speed and that this leads to speed overestimation at low luminance, where motion traces lengthen because of increased persistence. PMID:19146275
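
    The "2.4 log units" threshold above is a base-10 logarithm of the luminance ratio; a quick check with the abstract's two grating luminances shows the experimental pair sits well beyond that threshold:

```python
import math

def log_unit_difference(high_lum, low_lum):
    """Difference between two luminances in base-10 log units."""
    return math.log10(high_lum / low_lum)

# Grating luminances from the abstract (cd/m²)
diff = log_unit_difference(1260, 0.31)   # ≈ 3.6 log units, above the 2.4 threshold
```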

  11. 78 FR 68100 - Luminant Generation Company, LLC

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-13

    ... consecutive weeks of a combined license (COL) application from Luminant Generation Company, LLC. (Luminant... ML082680250. The application is also available at http://www.nrc.gov/reactors/new-reactors/col.html . NRC's... filed applications for COLs with the NRC, pursuant to Section 103 of the Atomic Energy Act of 1954,...

  12. 78 FR 69710 - Luminant Generation Company, LLC

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-20

    ... consecutive weeks of a combined license (COL) application from Luminant Generation Company, LLC. (Luminant... ML082680250. The application is also available at http://www.nrc.gov/reactors/new-reactors/col.html . NRC's... filed applications for COLs with the NRC, pursuant to Section 103 of the Atomic Energy Act of 1954,...

  13. Using a Concept Mapping Tool with a Photograph Association Technique (CoMPAT) to Elicit Children's Ideas about Microbial Activity

    NASA Astrophysics Data System (ADS)

    Byrne, Jenny; Grace, Marcus

    2010-03-01

    Concept mapping is a technique used to provide a visual representation of an individual's ideas about a concept or set of related concepts. This paper describes a concept mapping tool using a photograph association technique (CoMPAT) that is considered to be a novel way of eliciting children's ideas. What children at 11 years of age know about particular concepts related to microbial activity is presented and discussed in relation to the effectiveness of CoMPAT as a tool to draw out their ideas. It is proposed that this tool could be used to explore ideas about this and other science concepts from children in other age groups, and where language may be a barrier to participation.

  14. Nonlinear mapping technique for data visualization and clustering assessment of LIBS data: application to ChemCam data.

    PubMed

    Lasue, J; Wiens, R C; Stepinski, T F; Forni, O; Clegg, S M; Maurice, S

    2011-07-01

    ChemCam is a remote laser-induced breakdown spectroscopy (LIBS) instrument that will arrive on Mars in 2012, on-board the Mars Science Laboratory Rover. The LIBS technique is crucial to accurately identify samples and quantify elemental abundances at various distances from the rover. In this study, we compare different linear and nonlinear multivariate techniques to visualize and discriminate clusters in two dimensions (2D) from the data obtained with ChemCam. We have used principal components analysis (PCA) and independent components analysis (ICA) for the linear tools and compared them with the nonlinear Sammon's map projection technique. We demonstrate that the Sammon's map gives the best 2D representation of the data set, with optimization values from 2.8% to 4.3% (0% is a perfect representation), together with an entropy value of 0.81 for the purity of the clustering analysis. The linear 2D projections result in three (ICA) and five times (PCA) more stress, and their clustering is less pure, with entropy values of about 1.8, more than twice that of the Sammon's map. We show that the Sammon's map algorithm is faster and gives a slightly better representation of the data set if the initial conditions are taken from the ICA projection rather than the PCA projection. We conclude that the nonlinear Sammon's map projection is the best technique for combining data visualization and clustering assessment of the ChemCam LIBS data in 2D. PCA and ICA projections on more dimensions would improve on these numbers at the cost of the intuitive interpretation of the 2D projection by a human operator.
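
    Sammon's stress, the quantity the nonlinear projection above minimizes, can be sketched in a few lines. This toy implementation only evaluates the stress of a given 2D layout against the original high-dimensional distances; it is not the ChemCam pipeline:

```python
import math

def sammon_stress(high_d, low_d):
    """Sammon's stress: 0 means the low-dimensional layout preserves
    every pairwise distance of the original data exactly."""
    n = len(high_d)
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    d_star = [dist(high_d[i], high_d[j]) for i, j in pairs]  # original space
    d = [dist(low_d[i], low_d[j]) for i, j in pairs]         # 2D layout
    c = sum(d_star)
    return sum((ds - dp) ** 2 / ds for ds, dp in zip(d_star, d)) / c

# Three hypothetical 4-band "spectra" and two hand-made 2D layouts
X = [[0, 0, 0, 0], [1, 0, 0, 0], [0, 1, 0, 0]]
Y_perfect = [[0, 0], [1, 0], [0, 1]]   # preserves all pairwise distances
Y_poor = [[0, 0], [2, 0], [0, 2]]      # doubles all pairwise distances

print(sammon_stress(X, Y_perfect))  # 0.0
```

A uniform doubling of all distances, as in Y_poor, yields a stress of exactly 1 under this definition.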

  15. Nonlinear mapping technique for data visualization and clustering assessment of LIBS data: application to ChemCam data

    NASA Astrophysics Data System (ADS)

    Lasue, Jeremie; Wiens, Roger; Stepinski, Tom; Forni, Olivier; Clegg, Samuel; Maurice, Sylvestre; Chemcam Team

    2011-02-01

    ChemCam is a remote laser-induced breakdown spectroscopy (LIBS) instrument that will arrive on Mars in 2012, on-board the Mars Science Laboratory Rover. The LIBS technique is crucial to accurately identify samples and quantify elemental abundances at various distances from the rover. In this study, we compare different linear and nonlinear multivariate techniques to visualize and discriminate clusters in two dimensions (2D) from the data obtained with ChemCam. We have used principal components analysis (PCA) and independent components analysis (ICA) for the linear tools and compared them with the nonlinear Sammon's map projection technique. We demonstrate that the Sammon's map gives the best 2D representation of the data set, with optimization values from 2.8% to 4.3% (0% is a perfect representation), together with an entropy value of 0.81 for the purity of the clustering analysis. The linear 2D projections result in three (ICA) and five times (PCA) more stress, and their clustering is less pure, with entropy values of about 1.8, more than twice that of the Sammon's map. We show that the Sammon's map algorithm is faster and gives a slightly better representation of the data set if the initial conditions are taken from the ICA projection rather than the PCA projection. We conclude that the nonlinear Sammon's map projection is the best technique for combining data visualization and clustering assessment of the ChemCam LIBS data in 2D. PCA and ICA projections on more dimensions would improve on these numbers at the cost of the intuitive interpretation of the 2D projection by a human operator.

  16. General comparison of ozone vertical profiles obtained by various techniques during the 1983 MAP/GLOBUS campaign

    NASA Technical Reports Server (NTRS)

    Matthews, W. A.; Aimedieu, P.; Megie, G.; Pelon, J.; Attmannspacher, W.; Komhyr, W.; Marche, P.; De La Noe, J.; Rigaud, P.; Robbins, D. E.

    1987-01-01

    As part of the 1983 MAP/GLOBUS campaign, atmospheric ozone profile measurements were made using a large variety of different techniques both from balloon platforms and the ground. It is shown that, for most techniques, the measured height distributions agree to within ±5 percent with the exception of the remote visible absorption method. This ±5 percent uncertainty is of the order of the individual intersystem accuracy. It is suggested that since the differences with the visible absorption method are in magnitude rather than in form, the absorption cross-section data could be the possible cause for the discrepancy.

  17. Mapping the petroleum system - An investigative technique to explore the hydrocarbon fluid system

    USGS Publications Warehouse

    Magoon, L.B.; Dow, W.G.

    2000-01-01

    Creating a petroleum system map includes a series of logical steps that require specific information to explain the origin in time and space of discovered hydrocarbon occurrences. If used creatively, this map provides a basis on which to develop complementary plays and prospects. The logical steps include the characterization of a petroleum system (that is, to identify, map, and name the hydrocarbon fluid system) and the summary of these results on a folio sheet. A petroleum system map is based on the understanding that there are several levels of certainty from "guessing" to "knowing" that specific oil and gas accumulations emanated from a particular pod of active source rock. Levels of certainty start with the close geographic proximity of two or more accumulations, continue with close stratigraphic proximity, followed by similarities in bulk properties, and then detailed geochemical properties. The highest level of certainty includes the positive geochemical correlation of the hydrocarbon fluid in the accumulations to the extract of the active source rock. A petroleum system map is created when the following logic is implemented. Implementation starts when the oil and gas accumulations of a petroleum province are grouped stratigraphically and geographically. Bulk and geochemical properties are used to further refine the groups through the determination of genetically related oil and gas types. To this basic map, surface seeps and well shows are added. Similarly, the active source rock responsible for these hydrocarbon occurrences is mapped to further define the extent of the system. A folio sheet constructed for a hypothetical case study of the Deer-Boar(.) petroleum system illustrates this methodology.

  18. Applying Value Stream Mapping Technique for Production Improvement in a Manufacturing Company: A Case Study

    NASA Astrophysics Data System (ADS)

    Jeyaraj, K. L.; Muralidharan, C.; Mahalingam, R.; Deshmukh, S. G.

    2013-01-01

    The purpose of this paper is to explain how value stream mapping (VSM) is helpful in lean implementation and to develop the road map to tackle improvement areas to bridge the gap between the existing state and the proposed state of a manufacturing firm. Through this case study, the existing stage of manufacturing is mapped with the help of VSM process symbols, and the biggest improvement areas, such as excessive TAKT time, production, and lead time, are identified. Some modifications in the current state map are suggested, and with these modifications a future state map is prepared. Further, TAKT time is calculated to set the pace of production processes. This paper compares the current and future states of a manufacturing firm and reports a 20% reduction in TAKT time, 22.5% reduction in processing time, 4.8% reduction in lead time, 20% improvement in production, 9% improvement in machine utilization, 7% improvement in man power utilization, objective improvement in workers' skill level, and no change in the product and semi-finished product inventory level. The findings are limited due to the focused nature of the case study. This case study shows that VSM is a powerful tool for lean implementation and allows the industry to understand and continuously improve towards lean manufacturing.
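
    Takt time, the pace-setting calculation mentioned above, is simply available production time divided by customer demand. A minimal sketch with hypothetical figures (not taken from the case study):

```python
def takt_time(available_minutes_per_day, daily_demand_units):
    """Takt time: the pace at which one unit must be completed
    to exactly meet customer demand."""
    return available_minutes_per_day / daily_demand_units

# Hypothetical figures: one 480-minute shift, demand of 120 units/day
current = takt_time(480, 120)    # 4.0 minutes per unit
# A 20% takt-time reduction, like the one reported, would give:
improved = current * 0.8         # 3.2 minutes per unit
```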

  19. Mapping of Primary Instructional Methods and Teaching Techniques for Regularly Scheduled, Formal Teaching Sessions in an Anesthesia Residency Program.

    PubMed

    Vested Madsen, Matias; Macario, Alex; Yamamoto, Satoshi; Tanaka, Pedro

    2016-06-01

    In this study, we examined the regularly scheduled, formal teaching sessions in a single anesthesiology residency program to (1) map the most common primary instructional methods, (2) map the use of 10 known teaching techniques, and (3) assess if residents scored sessions that incorporated active learning as higher quality than sessions with little or no verbal interaction between teacher and learner. A modified Delphi process was used to identify useful teaching techniques. A representative sample of each of the formal teaching session types was mapped, and residents anonymously completed a 5-question written survey rating the session. The most common primary instructional methods were computer slides-based classroom lectures (66%), workshops (15%), simulations (5%), and journal club (5%). The number of teaching techniques used per formal teaching session averaged 5.31 (SD, 1.92; median, 5; range, 0-9). Clinical applicability (85%) and attention grabbers (85%) were the 2 most common teaching techniques. Thirty-eight percent of the sessions defined learning objectives, and one-third of sessions engaged in active learning. The overall survey response rate equaled 42%, and passive sessions had a mean score of 8.44 (range, 5-10; median, 9; SD, 1.2) compared with a mean score of 8.63 (range, 5-10; median, 9; SD, 1.1) for active sessions (P = 0.63). Slides-based classroom lectures were the most common instructional method, and faculty used an average of 5 known teaching techniques per formal teaching session. The overall education scores of the sessions as rated by the residents were high.

  20. Processing techniques for the production of an experimental computer-generated shaded-relief map

    USGS Publications Warehouse

    Judd, Damon D.

    1986-01-01

    The data consisted of forty-eight 1° by 1° blocks of resampled digital elevation model (DEM) data. These data were digitally mosaicked and assigned colors based on intervals of elevation values. The color-coded data set was then used to create a shaded-relief image that was photographically composited with cartographic line information to produce a shaded-relief map. The majority of the processing was completed at the National Mapping Division EROS Data Center in Sioux Falls, South Dakota.
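
    The shaded-relief step applies a standard illumination model to each DEM cell. A minimal sketch of that computation (central-difference slope and aspect plus the usual hillshade formula; the grid values and cell size are hypothetical, and this is not the EROS Data Center's production code):

```python
import math

def hillshade(z, cell, azimuth_deg=315.0, altitude_deg=45.0):
    """Shaded-relief intensity (0-1) for interior cells of elevation
    grid z, lit from the given sun azimuth and altitude."""
    az = math.radians(360.0 - azimuth_deg + 90.0)   # compass → math angle
    zen = math.radians(90.0 - altitude_deg)          # sun zenith angle
    out = []
    for i in range(1, len(z) - 1):
        row = []
        for j in range(1, len(z[0]) - 1):
            dzdx = (z[i][j + 1] - z[i][j - 1]) / (2 * cell)
            dzdy = (z[i + 1][j] - z[i - 1][j]) / (2 * cell)
            slope = math.atan(math.hypot(dzdx, dzdy))
            aspect = math.atan2(dzdy, -dzdx)
            s = (math.cos(zen) * math.cos(slope)
                 + math.sin(zen) * math.sin(slope) * math.cos(az - aspect))
            row.append(max(0.0, s))
        out.append(row)
    return out

# Flat terrain: every interior cell receives cos(zenith) = cos(45°) ≈ 0.707
flat = [[100.0] * 3 for _ in range(3)]
shade = hillshade(flat, cell=30.0)
```

Color-coded elevation intervals would then be modulated by these intensities to produce the composite shaded-relief image.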

  1. Low-Altitude AVIRIS Data for Mapping Land Cover in Yellowstone National Park: Use of Isodata Clustering Techniques

    NASA Technical Reports Server (NTRS)

    Spruce, Joseph P.

    2001-01-01

    Northeast Yellowstone National Park (YNP) has a diversity of forest, range, and wetland cover types. Several remote sensing studies have recently been done in this area, including the NASA Earth Observations Commercial Applications Program (EOCAP) hyperspectral project conducted by Yellowstone Ecosystems Studies (YES) on the use of hyperspectral imaging for assessing riparian and in-stream habitats. In 1999, YES and NASA's Commercial Remote Sensing Program Office began a collaborative study of this area, assessing the potential of synergistic use of hyperspectral, synthetic aperture radar (SAR), and multiband thermal data for mapping forest, range, and wetland land cover. Since the beginning, a quality 'reference' land cover map has been desired as a tool for developing and validating other land cover maps produced during the project. This paper recounts an effort to produce such a reference land cover map using low-altitude Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) data and unsupervised classification techniques. The main objective of this study is to assess ISODATA classification for mapping land cover in Northeast YNP using select bands of low-altitude AVIRIS data. A secondary, more long-term objective is to assess the potential for improving ISODATA-based classification of land cover through use of principal components analysis and minimum noise fraction (MNF) techniques. This paper will primarily report on work regarding the primary research objective. This study focuses on an AVIRIS cube acquired on July 23, 1999, by the confluence of Soda Butte Creek with the Lamar River. Range and wetland habitats dominate the image with forested habitats being a comparatively minor component of the scene. The scene generally tracks from southwest to northeast. Most of the scene is valley bottom with some lower side slopes occurring on the western portion. Elevations within the AVIRIS scene range from approximately 1998 to 2165 m above sea level, based on US
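
    ISODATA classification builds on the k-means loop of assigning pixels to the nearest cluster mean and recomputing the means, adding split/merge rules on top. A minimal sketch of just that core loop on hypothetical two-band reflectance values (not AVIRIS data; full ISODATA's cluster splitting and merging is omitted):

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Core of an ISODATA-style unsupervised classifier: repeatedly
    assign each pixel to the nearest cluster mean, then update means."""
    rng = random.Random(seed)
    means = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, means[c])))
            clusters[i].append(p)
        for c, members in enumerate(clusters):
            if members:  # keep the old mean if a cluster empties out
                means[c] = tuple(sum(col) / len(members)
                                 for col in zip(*members))
    return means

# Hypothetical two-band "reflectance" pixels with two obvious groups
pixels = [(0.1, 0.1), (0.12, 0.09), (0.9, 0.85), (0.88, 0.9)]
centers = sorted(kmeans(pixels, 2))   # ≈ (0.11, 0.095) and (0.89, 0.875)
```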

  2. Comparison of regression and kriging techniques for mapping the average annual precipitation of Turkey

    NASA Astrophysics Data System (ADS)

    Bostan, P. A.; Heuvelink, G. B. M.; Akyurek, S. Z.

    2012-10-01

    Accurate mapping of the spatial distribution of annual precipitation is important for many applications in hydrology, climatology, agronomy, ecology and other environmental sciences. In this study, we compared five different statistical methods to predict spatially the average annual precipitation of Turkey using point observations of annual precipitation at meteorological stations and spatially exhaustive covariate data (i.e. elevation, aspect, surface roughness, distance to coast, land use and eco-region). The methods compared were multiple linear regression (MLR), ordinary kriging (OK), regression kriging (RK), universal kriging (UK), and geographically weighted regression (GWR). Average annual precipitation of Turkey from 1970 to 2006 was measured at 225 meteorological stations that are fairly uniformly distributed across the country, with a somewhat higher spatial density along the coastline. The observed annual precipitation varied between 255 mm and 2209 mm with an average of 628 mm. The annual precipitation was highest along the southern and northern coasts and low in the centre of the country, except for the area near the Van Lake, Keban and Ataturk Dams. To compare the performance of the interpolation techniques the total dataset was first randomly split in ten equally sized test datasets. Next, for each test data set the remaining 90% of the data comprised the training dataset. Each training dataset was then used to calibrate and apply the spatial prediction model. Predictions at the test dataset locations were compared with the observed test data. Validation was done by calculating the Root Mean Squared Error (RMSE), R-square and Standardized MSE (SMSE) values. According to these criteria, universal kriging is the most accurate with an RMSE of 178 mm, an R-square of 0.61 and an SMSE of 1.06, whilst multiple linear regression performed worst (RMSE of 222 mm, R-square of 0.39, and SMSE of 1.44). Ordinary kriging, UK using only elevation and geographically
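
    The validation criteria used above can be computed directly from a held-out test fold. A minimal sketch of RMSE and R-square on hypothetical precipitation values (SMSE additionally requires the kriging prediction variances, so it is omitted here):

```python
import math

def rmse(obs, pred):
    """Root Mean Squared Error between observed and predicted values."""
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))

def r_square(obs, pred):
    """R-square: 1 minus residual sum of squares over total sum of squares."""
    mean_o = sum(obs) / len(obs)
    ss_res = sum((o - p) ** 2 for o, p in zip(obs, pred))
    ss_tot = sum((o - mean_o) ** 2 for o in obs)
    return 1 - ss_res / ss_tot

# Hypothetical test-fold annual precipitation values (mm)
obs = [600.0, 800.0, 1200.0, 400.0]
pred = [650.0, 780.0, 1100.0, 450.0]
error = rmse(obs, pred)       # ≈ 62 mm
fit = r_square(obs, pred)     # ≈ 0.956
```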

  3. Creating High-Resolution Maps of Leaf Water Isotopes Using IM-CRDS and IRMS Techniques

    NASA Astrophysics Data System (ADS)

    Gerlein-Safdi, C.; Sinkler, C. J.; Caylor, K. K.

    2014-12-01

    Since the development of isotope ratio infrared spectroscopy (IRIS), the applications of water isotope analysis have been increasing. Here, we present a new protocol to create high-resolution maps of the leaf water isotopes 18O and 2H. We use the Picarro induction module (IM-CRDS) combined with an isotope analyzer (L2130-i) to sample up to 25 locations in one half of each leaf. Each sampling location corresponds to four samples (6 mm outside-diameter punched holes) punched next to each other. In the induction module, an induction coil heats a metal holder containing the leaf sample. The sample releases water vapor that is then sent to the isotope analyzer. The IM-CRDS allows us to significantly reduce the sample size and therefore increase the sample density compared to the traditional cryogenic extraction method. Using spatial analysis tools, we create high-resolution spatial maps of each isotope as well as d-excess maps. The water in the second half of the leaf is extracted by cryogenic extraction and analyzed using both IRIS and isotope ratio mass spectrometry. The isotopic composition of the extracted water is compared to the average composition calculated from the maps and used for calibration. We present applications of this protocol to the analysis of the spatio-temporal evolution of foliar uptake in Colocasia esculenta under laboratory conditions.
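
    The d-excess maps mentioned above follow from Dansgaard's definition, d = δ2H − 8·δ18O. A minimal sketch with hypothetical leaf-water values:

```python
def d_excess(delta_2h, delta_18o):
    """Deuterium excess (Dansgaard): d = δ2H - 8·δ18O, in per mil."""
    return delta_2h - 8.0 * delta_18o

# Hypothetical leaf-water delta values (per mil, relative to VSMOW)
d = d_excess(-40.0, -6.0)   # -40 - 8*(-6) = 8.0 per mil
```

Applied per sampling location, this yields the d-excess map from the two isotope maps.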

  4. Colony mapping: A new technique for monitoring crevice-nesting seabirds

    USGS Publications Warehouse

    Renner, H.M.; Renner, M.; Reynolds, J.H.; Harping, A.M.A.; Jones, I.L.; Irons, D.B.; Byrd, G.V.

    2006-01-01

    Monitoring populations of auklets and other crevice-nesting seabirds remains problematic, although numerous methods have been attempted since the mid-1960s. Anecdotal evidence suggests several large auklet colonies have recently decreased in both abundance and extent, concurrently with vegetation encroachment and succession. Quantifying changes in the geographical extent of auklet colonies may be a useful alternative to monitoring population size directly. We propose a standardized method for colony mapping using a randomized systematic grid survey with two components: a simple presence/absence survey and an auklet evidence density survey. A quantitative auklet evidence density index was derived from the frequency of droppings and feathers. This new method was used to map the colony on St. George Island in the southeastern Bering Sea and results were compared to previous colony mapping efforts. Auklet presence was detected in 62 of 201 grid cells (each grid cell = 2500 m²) by sampling a randomly placed 16 m² plot in each cell; estimated colony area = 155 000 m². The auklet evidence density index varied by two orders of magnitude across the colony and was strongly correlated with means of replicated counts of birds socializing on the colony surface. Quantitatively mapping all large auklet colonies is logistically feasible using this method and would provide an important baseline for monitoring colony status. Regularly monitoring select colonies using this method may be the best means of detecting changes in distribution and population size of crevice-nesting seabirds. © The Cooper Ornithological Society 2006.
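
    The colony-area estimate above follows directly from the grid design: cells with detected presence times the area of one grid cell. Reproducing the abstract's figure:

```python
def estimated_colony_area(occupied_cells, cell_area_m2):
    """Grid-based colony area estimate: occupied cells × cell area."""
    return occupied_cells * cell_area_m2

# Figures from the abstract: presence in 62 grid cells of 2500 m² each
area = estimated_colony_area(62, 2500)   # 155 000 m²
```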

  5. Ionic Current Mapping Techniques and Applications to Aluminum-Copper Corrosion

    SciTech Connect

    Isaacs, H. S.; Jeffcoate, C. S.; Missert, N. A.; Barbour, J. C.

    1999-10-17

    Measurements have been made of the aluminum/metal galvanic couple. A wide range of geometries was investigated, varying the areas of anodic and cathodic surfaces and employing specially designed galvanic cells with crevices. In situ ionic current density mapping was used to monitor galvanic corrosion, and the currents flowing between separated metals were measured.

  6. Metateaching and the Instructional Map. Teaching Techniques/Strategies Series, Volume 1.

    ERIC Educational Resources Information Center

    Timpson, William M.

    This book describes a conceptual framework, the "Instructional Map"--a metaphor for envisioning the interconnectedness of teacher and student, teaching and learning, and content and process--that can help teachers plan, sort their way through course material and instructional options, interact with students, and reflect upon progress made and what…

  7. ERTS-1 imagery interpretation techniques in the Tennessee Valley. [land use and soil mapping

    NASA Technical Reports Server (NTRS)

    Bodenheimer, R. E. (Principal Investigator)

    1974-01-01

    The author has identified the following significant results. The feasibility of delineating major soil associations and land uses through computerized analyses is discussed. Useful and potential applications in detecting landscape change and land use mapping are described. Recommendations for improving the data processing effort in a multidisciplinary program are presented.

  8. Detection and mapping of illicit drugs and their metabolites in fingermarks by MALDI MS and compatibility with forensic techniques

    NASA Astrophysics Data System (ADS)

    Groeneveld, G.; de Puit, M.; Bleay, S.; Bradshaw, R.; Francese, S.

    2015-06-01

    Despite the proven capabilities of Matrix Assisted Laser Desorption Ionisation Mass Spectrometry (MALDI MS) in laboratory settings, research is still needed to integrate this technique into current forensic fingerprinting practice. Optimised protocols enabling the compatible application of MALDI to developed fingermarks will allow additional intelligence to be gathered around a suspect’s lifestyle and activities prior to the deposition of their fingermarks while committing a crime. The detection and mapping of illicit drugs and metabolites in latent fingermarks would provide intelligence that is beneficial for both police investigations and court cases. This study investigated MALDI MS detection and mapping capabilities for a large range of drugs of abuse and their metabolites in fingermarks; the detection and mapping of a mixture of these drugs in marks, with and without prior development with cyanoacrylate fuming or Vacuum Metal Deposition, was also examined. Our findings indicate the versatility of MALDI technology and its ability to retrieve chemical intelligence either by detecting the compounds investigated or by using their ion signals to reconstruct 2D maps of fingermark ridge details.
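
    Reconstructing 2D maps of ridge detail from ion signals, as described above, amounts to rasterizing per-pixel intensities for one target m/z channel. A minimal sketch (the raster size, coordinates and intensities are hypothetical, not MALDI instrument output):

```python
def ion_image(pixels, width, height):
    """Build a 2D ion-intensity map from per-pixel signals of one
    m/z channel; `pixels` is a list of (x, y, intensity) tuples."""
    grid = [[0.0] * width for _ in range(height)]
    for x, y, intensity in pixels:
        grid[y][x] = intensity
    return grid

# Hypothetical 3×2 raster: higher signal where the analyte sits on a ridge
scan = [(0, 0, 0.1), (1, 0, 0.9), (2, 0, 0.2),
        (0, 1, 0.8), (1, 1, 0.1), (2, 1, 0.7)]
image = ion_image(scan, width=3, height=2)
```

Plotting one such grid per drug or metabolite channel gives the 2D maps of fingermark ridge detail the abstract describes.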

  9. Detection and mapping of illicit drugs and their metabolites in fingermarks by MALDI MS and compatibility with forensic techniques.

    PubMed

    Groeneveld, G; de Puit, M; Bleay, S; Bradshaw, R; Francese, S

    2015-01-01

    Despite the proven capabilities of Matrix Assisted Laser Desorption Ionisation Mass Spectrometry (MALDI MS) in laboratory settings, research is still needed to integrate this technique into current forensic fingerprinting practice. Optimised protocols enabling the compatible application of MALDI to developed fingermarks will allow additional intelligence to be gathered around a suspect's lifestyle and activities prior to the deposition of their fingermarks while committing a crime. The detection and mapping of illicit drugs and metabolites in latent fingermarks would provide intelligence that is beneficial for both police investigations and court cases. This study investigated MALDI MS detection and mapping capabilities for a large range of drugs of abuse and their metabolites in fingermarks; the detection and mapping of a mixture of these drugs in marks, with and without prior development with cyanoacrylate fuming or Vacuum Metal Deposition, was also examined. Our findings indicate the versatility of MALDI technology and its ability to retrieve chemical intelligence either by detecting the compounds investigated or by using their ion signals to reconstruct 2D maps of fingermark ridge details. PMID:26118853

  10. Detection and mapping of illicit drugs and their metabolites in fingermarks by MALDI MS and compatibility with forensic techniques

    PubMed Central

    Groeneveld, G.; de Puit, M.; Bleay, S.; Bradshaw, R.; Francese, S.

    2015-01-01

    Despite the proven capabilities of Matrix Assisted Laser Desorption Ionisation Mass Spectrometry (MALDI MS) in laboratory settings, research is still needed to integrate this technique into current forensic fingerprinting practice. Optimised protocols enabling the compatible application of MALDI to developed fingermarks will allow additional intelligence to be gathered around a suspect’s lifestyle and activities prior to the deposition of their fingermarks while committing a crime. The detection and mapping of illicit drugs and metabolites in latent fingermarks would provide intelligence that is beneficial for both police investigations and court cases. This study investigated MALDI MS detection and mapping capabilities for a large range of drugs of abuse and their metabolites in fingermarks; the detection and mapping of a mixture of these drugs in marks, with and without prior development with cyanoacrylate fuming or Vacuum Metal Deposition, was also examined. Our findings indicate the versatility of MALDI technology and its ability to retrieve chemical intelligence either by detecting the compounds investigated or by using their ion signals to reconstruct 2D maps of fingermark ridge details. PMID:26118853

  11. Use and abuse of Pb-isotope fingerprinting technique and GIS mapping data to assess lead in environmental studies.

    PubMed

    Duzgoren-Aydin, N S; Weiss, A L

    2008-12-01

    The danger to health, especially of young children, from lead-bearing particulates in the surface environment is still a valid concern. Given the multifaceted uses of lead today as well as historically, the sources of lead must be investigated to distinguish the etiology of lead contamination in the environment. The lead isotope fingerprinting technique based on isotopic ratio analyses (IRA) is one of the methods used commonly for provenancing lead sources in environmental studies throughout the world. This technique, however, has some inherent shortcomings. Therefore, caution must be used in the interpretation of the results, as this technique can lead to overestimations as well as underestimations of the true source etiology/apportionment of lead in specific environments. This paper illustrates this with some examples. Data on geographical information systems (GIS) mapping in urban regions are also presented in this paper.
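
    A common use of Pb isotope ratios in such studies is a linear two-endmember mixing model; its simplicity is part of what the authors caution about, since real samples rarely mix only two well-characterized sources. A minimal sketch with hypothetical 206Pb/207Pb values:

```python
def endmember_fraction(r_sample, r_source_a, r_source_b):
    """Fraction attributable to source A under a simple linear
    two-endmember mixing model of an isotope ratio (e.g. 206Pb/207Pb).
    The linear form itself is one of the approximations to be wary of."""
    return (r_sample - r_source_b) / (r_source_a - r_source_b)

# Hypothetical ratios: anthropogenic endmember vs natural background
f_a = endmember_fraction(1.14, 1.10, 1.20)   # ≈ 0.6 from source A
```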

  12. Dust near luminous ultraviolet stars

    NASA Technical Reports Server (NTRS)

    Henry, Richard C.

    1992-01-01

    More than 700 luminous stars in the infrared astronomical satellite (IRAS) Skyflux plates were examined for the presence of dust heated by a nearby star. This dust may be distinguished from the ubiquitous cool cirrus by its higher temperature and thus enhanced 60 micron emission. More than 120 dust clouds were found around only 106 of the stars with a volume filling factor of 0.006 and an intercloud separation of 46 pc. A region of dust smoothly distributed through the volume of space heated by the star could not be found and hence an upper limit of 0.05 cm⁻³ is placed on the equivalent gas density in the intercloud regions. The clouds have an average density of 0.22 cm⁻³ and a radius of 1.9 pc, albeit with wide variations in their properties. Two different scale heights of 140 and 540 pc were found. This was interpreted as evidence for different distributions of dust in and out of the galactic disk.

  13. Dust near luminous ultraviolet stars

    NASA Technical Reports Server (NTRS)

    Henry, Richard C.

    1993-01-01

    This report describes research activities related to the Infrared Astronomical Satellite (IRAS) sky survey. About 745 luminous stars were examined for the presence of interstellar dust heated by a nearby star. The 'cirrus' discovered by IRAS is thermal radiation from interstellar dust at moderate and high galactic latitudes. IRAS locates the dust, which must (at some level) scatter ultraviolet starlight. Although it was expected that thermal emission would be found around virtually every star, most stars showed no detectable emission, and the emission that was found is not uniform. It is not that a star is embedded in 'an interstellar medium'; rather, what is found are discrete clouds that are heated by starlight. An exception is the dearth of clouds near the very hottest stars, implying that the very hottest stars play an active role in destroying or substantially modifying the dust clouds over time. The other possibility is simply that the hottest stars are located in regions lacking in dust, which is counter-intuitive. A bibliography of related journal articles is attached.

  14. An optical Moiré technique for cell traction force mapping

    NASA Astrophysics Data System (ADS)

    Zheng, Xiaoyu; Zhang, Xin

    2008-12-01

    Cells alter their shape and morphology and interact with their surrounding environment. Mechanical forces exerted by cells on their surrounding environments are fundamental to many physiological processes, such as cell growth, division, migration and apoptosis. In this paper, a novel optical Moiré based biomechanical force sensor was developed for cell traction force mapping. We utilized coherent laser beams to illuminate periodic polymeric substrates where isolated cells were cultured. We demonstrated one-dimensional and two-dimensional traction force mapping via optical Moiré for both cardiac myocytes and vascular smooth muscle cells. The magnification effect of the Moiré fringe pattern permits real-time monitoring of the mechanical interaction between isolated cells and their underlying periodic polymeric structures.
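
    The magnification effect noted above comes from the Moiré fringe-spacing relation P = p1·p2/|p1 − p2|: a small change in grating pitch produces widely spaced, easily measured fringes. A minimal sketch with hypothetical grating pitches:

```python
def moire_fringe_spacing(p_ref, p_def):
    """Spacing of Moiré fringes formed by a reference grating (pitch
    p_ref) and a slightly deformed specimen grating (pitch p_def)."""
    return p_ref * p_def / abs(p_ref - p_def)

# Hypothetical pitches in micrometres: a 1% stretch of the substrate
spacing = moire_fringe_spacing(2.00, 2.02)   # 202 µm fringes from 2 µm gratings
```

The 1% pitch change is thus amplified into fringes roughly 100 times coarser than the grating, which is what allows real-time optical readout of small substrate deformations.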

  15. Phase Space Tomography: A Simple, Portable and Accurate Technique to Map Phase Spaces of Beams with Space Charge

    SciTech Connect

    Stratakis, D.; Kishek, R. A.; Bernal, S.; Walter, M.; Haber, I.; Fiorito, R.; Thangaraj, J. C. T.; Quinn, B.; Reiser, M.; O'Shea, P. G.; Li, H.

    2006-11-27

    In order to understand charged-particle dynamics, e.g. halo formation, emittance growth, and x-y energy transfer and coupling, knowledge of the actual phase space is needed. Over the past decade an increasing number of articles have used tomography to map the beam phase space and measure the beam emittance. These studies were performed at high-energy facilities where the effect of space charge was negligible and therefore not considered in the analysis. This work extends the tomography technique to beams with space charge. In order to simplify the analysis, linear forces were assumed. By carefully modeling the tomography process using the particle-in-cell code WARP, we test the validity of our assumptions and the accuracy of the reconstructed phase space. Finally, we report experimental results of phase space mapping at the University of Maryland Electron Ring (UMER) using tomography.

  16. Displaying and evaluating engineering properties and natural hazards using geomorphic mapping techniques: Telluride, Colorado

    SciTech Connect

    Gunawan, I.; Giardino, J.R.; Tchakerian, V.P. . Geography Dept.)

    1992-01-01

    Telluride, located in the San Juan mountains of southwestern Colorado, is situated in a glacially carved, fluvially modified alpine valley. Today this chic setting is experiencing rapid urban development resulting from flourishing tourist traffic during both the winter ski season and the summer vacation period. A new development, Mountain Village, is being built on an extensive and complex landslide that has only received superficial scrutiny. Recent fast growth is placing considerable pressure on pristine, undeveloped land. This timely quandary incorporates the interaction between prospective development, geomorphic processes, engineering factors, economic feasibility, and land-use adjudication. In an attempt to respond to these issues the State of Colorado enacted Senate Bill 35 (1972) and House Bills 1034 (1974) and 1041 (1974), all mandating assessment of the natural hazards of an area, preparatory to development. The key to evaluating the natural hazards is to comprehend the geomorphic processes. The area is highly faulted with associated mineralization. Whereas the upper slopes are composed of massive rhyodacitic-tuff breccias and flows, the valley is sculpted from shales, sandstones, and conglomerates. Several periods of glaciation occurred in the area. Glacial till, talus slopes, avalanche chutes and cones, rock glaciers, alluvium, and landslides have been identified in the field and mapped on aerial photographs. Many of the slopes in the area are active. The authors have constructed a geomorphic map (1:12,500) that shows geology, landforms, geomorphic processes and engineering properties. This map can be used by regulatory agencies in identifying areas of natural hazards potentially sensitive to development.

  17. Mosquito habitat and dengue risk potential in Kenya: alternative methods to traditional risk mapping techniques.

    PubMed

    Attaway, David F; Jacobsen, Kathryn H; Falconer, Allan; Manca, Germana; Rosenshein Bennett, Lauren; Waters, Nigel M

    2014-11-01

    Outbreaks, epidemics and endemic conditions make dengue a disease that has emerged as a major threat in tropical and sub-tropical countries over the past 30 years. Dengue fever creates a growing burden for public health systems and has the potential to affect over 40% of the world population. The problem being investigated is to identify the areas of highest and lowest dengue risk. This paper presents "Similarity Search", a geospatial analysis aimed at identifying these locations within Kenya. Similarity Search develops a risk map by combining environmental susceptibility analysis and geographical information systems, and then compares areas with dengue prevalence to all other locations. Kenya has had outbreaks of dengue during the past 3 years, and we identified the areas most susceptible to dengue infection using bioclimatic variables, elevation and mosquito habitat as input to the model. Comparison of the modelled risk map with reported dengue epidemic cases, obtained from open-source reporting (ProMED) and government news reports from 1982-2013, confirmed the high-risk locations that were used as the Similarity Search presence cells. Developing the risk model from the bioclimatic variables, elevation and mosquito habitat increased the efficiency and effectiveness of the dengue fever risk mapping process.
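    The "Similarity Search" idea can be sketched as follows; this is an illustrative reconstruction, not the authors' implementation: standardize each environmental layer, then score every grid cell by its distance to the mean standardized profile of the known dengue-presence cells.

```python
import numpy as np

def similarity_map(layers, presence_mask):
    """layers: (n_vars, rows, cols) environmental grids (e.g. bioclimatic
    variables, elevation, mosquito habitat); presence_mask: boolean
    (rows, cols) grid of known dengue cells.
    Returns a (rows, cols) similarity score in (0, 1]."""
    n, r, c = layers.shape
    flat = layers.reshape(n, -1)
    # Standardize each variable so distances are comparable.
    mu = flat.mean(axis=1, keepdims=True)
    sd = flat.std(axis=1, keepdims=True) + 1e-12
    z = (flat - mu) / sd
    # Mean standardized profile of the presence cells.
    target = z[:, presence_mask.ravel()].mean(axis=1, keepdims=True)
    dist = np.sqrt(((z - target) ** 2).sum(axis=0))
    score = 1.0 / (1.0 + dist)  # high score = most similar to presence cells
    return score.reshape(r, c)
```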

  18. Subsurface object recognition by means of regularization techniques for mapping coastal waters floor

    NASA Astrophysics Data System (ADS)

    Jiménez-Rodríguez, Luis O.; Umana-Diaz, Alejandra; Diaz-Santos, Jose; Neira-Carolina, Gerardino; Morales-Morales, Javier; Rodriguez, Eladio

    2005-10-01

    A fundamental challenge to remote sensing is mapping the ocean floor in coastal shallow waters, where variability due to the interaction between the coast and the sea can bring significant disparity in the optical properties of the water column. The objects to be detected, coral reefs, sands and submerged aquatic vegetation, have weak signals with temporal and spatial variation. In real scenarios the absorption and backscattering coefficients have spatial variation due to different sources of variability (river discharge, different depths of shallow waters, water currents) and temporal fluctuations. This paper presents the development of algorithms for retrieving information and their application to the recognition, classification and mapping of objects under coastal shallow waters. A mathematical model that simplifies the radiative transfer equation was used to quantify the interaction between the object of interest, the medium and the sensor. The retrieval of information requires the development of mathematical models and processing tools in the areas of inversion, image reconstruction and detection. The algorithms developed were applied to one set of remotely sensed data: a high-resolution HYPERION hyperspectral image. An inverse problem arises as this spectral data is used for mapping the ocean shallow-water floor. The Tikhonov method of regularization was used in the inversion process to estimate the bottom albedo of the ocean floor, using a priori information in the form of stored, previously measured spectral signatures of objects of interest such as sand, corals, and sea grass.
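    The Tikhonov step can be illustrated generically. For a linear forward model A mapping bottom albedo to observed spectra, the regularized inversion minimizes ||Ax - b||^2 + lam * ||x - x0||^2, where x0 carries the a priori information (e.g. a stored spectral signature). A sketch under that assumption, not the authors' code:

```python
import numpy as np

def tikhonov_solve(A, b, lam, x0=None):
    """Minimize ||A x - b||^2 + lam * ||x - x0||^2.
    x0 is the a priori estimate (e.g. a stored spectral signature of
    sand, coral or sea grass); defaults to zero."""
    n = A.shape[1]
    if x0 is None:
        x0 = np.zeros(n)
    # Normal equations: (A^T A + lam I) x = A^T b + lam x0
    lhs = A.T @ A + lam * np.eye(n)
    rhs = A.T @ b + lam * x0
    return np.linalg.solve(lhs, rhs)
```

    Larger `lam` pulls the estimate toward the a priori signature, which is what stabilizes the ill-posed shallow-water inversion.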

  19. Object based technique for delineating and mapping 15 tree species using VHR WorldView-2 imagery

    NASA Astrophysics Data System (ADS)

    Mustafa, Yaseen T.; Habeeb, Hindav N.

    2014-10-01

    Monitoring and analyzing forests and trees are required tasks for managing and establishing a good plan for forest sustainability. To achieve such a task, information and data about the trees must be collected. The fastest and relatively low-cost technique is satellite remote sensing. In this study, we proposed an approach to identify and map 15 tree species in the Mangish sub-district, Kurdistan Region-Iraq. Image-objects (IOs) were used as the tree species mapping unit. This is achieved using the shadow index, normalized difference vegetation index and texture measurements. Four classification methods (Maximum Likelihood, Mahalanobis Distance, Neural Network, and Spectral Angle Mapper) were used to classify IOs using selected IO features derived from WorldView-2 imagery. Results showed that overall accuracy increased by 5-8% using the Neural Network method compared with the other methods, with a Kappa coefficient of 69%. This technique gives reasonable results for various tree species classifications by applying the Neural Network method with IO techniques on WorldView-2 imagery.
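    The normalized difference vegetation index used as an IO feature is the standard band ratio of near-infrared and red reflectance; a minimal sketch (the band arrays are hypothetical):

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red).
    Values near +1 indicate dense green vegetation; the small epsilon
    guards against division by zero over dark pixels."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + 1e-12)
```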

  20. A new strategy for snow-cover mapping using remote sensing data and ensemble based systems techniques

    NASA Astrophysics Data System (ADS)

    Roberge, S.; Chokmani, K.; De Sève, D.

    2012-04-01

    The snow cover plays an important role in the hydrological cycle of Quebec (Eastern Canada). Consequently, evaluating its spatial extent interests the authorities responsible for the management of water resources, especially hydropower companies. The main objective of this study is the development of a snow-cover mapping strategy using remote sensing data and ensemble-based systems techniques. Planned to be tested in a near real-time operational mode, this snow-cover mapping strategy has the advantage of providing the probability that a pixel is snow covered, along with its uncertainty. Ensemble systems are made of two key components. First, a method is needed to build an ensemble of classifiers that is as diverse as possible. Second, an approach is required to combine the outputs of the individual classifiers that make up the ensemble in such a way that correct decisions are amplified and incorrect ones are cancelled out. In this study, we demonstrate the potential of ensemble systems for snow-cover mapping using remote sensing data. The chosen classifier is a sequential-thresholds algorithm using NOAA-AVHRR data adapted to conditions over Eastern Canada. Its special feature is the use of a combination of six sequential thresholds varying according to the day in the winter season. Two versions of the snow-cover mapping algorithm have been developed: one specific to autumn (from October 1st to December 31st) and the other to spring (from March 16th to May 31st). In order to build the ensemble-based system, different versions of the algorithm are created by randomly varying its parameters; one hundred versions are included in the ensemble. The probability of a pixel being snow, no-snow or cloud covered corresponds to the proportion of votes the pixel received for that class from all classifiers. The overall performance of ensemble-based mapping is compared to the overall performance of the chosen classifier, and also with ground observations at meteorological
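    The voting scheme described above can be sketched as follows; `classify` is a hypothetical stand-in for the sequential-thresholds NOAA-AVHRR algorithm, and the 5% perturbation level is illustrative, not from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

def ensemble_probability(pixel_features, base_thresholds, n_members=100):
    """Each ensemble member is the same threshold classifier with
    randomly perturbed thresholds; the snow probability of a pixel is
    the fraction of members voting 'snow'."""
    votes = 0
    for _ in range(n_members):
        perturbed = base_thresholds * rng.normal(1.0, 0.05,
                                                 base_thresholds.size)
        if classify(pixel_features, perturbed) == "snow":
            votes += 1
    return votes / n_members

def classify(features, thresholds):
    # Toy stand-in for the sequential-thresholds algorithm:
    # vote 'snow' when every feature exceeds its threshold.
    return "snow" if np.all(features > thresholds) else "no-snow"
```

    The spread of votes across members is what provides the per-pixel uncertainty mentioned in the abstract.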

  1. Planning and Measuring Luminance Contrast in Staircases.

    PubMed

    Houck, Leif D; Gundersen, Kristoffer; Strengen, Ola

    2016-01-01

    Norwegian legislation has requirements concerning luminance contrast for different elements in staircases. This paper investigates how architects work to meet the requirements and how to measure the actual built luminance contrasts; finally, 21 staircases are measured using two different methods. The results show that some architects do not reflect on luminance contrasts at all, some use their "experience" and some try to measure the reflectance value of different materials during planning. The investigations also reveal that there is no official predefined way to control luminance contrast, and this investigation shows that different approaches will give different results. To perform the measurement of the built staircases, it has been necessary to develop a defined measuring method. The results of the measurements generally show that only a few of the staircases studied fully meet the legislation requirements. PMID:27534331
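    Luminance contrast between two adjacent surfaces (e.g. step nosing and tread) is commonly expressed as a Weber-type ratio; a minimal sketch of one such definition, with the caveat that the paper's point is precisely that several definitions and measurement approaches are in circulation and give different results:

```python
def luminance_contrast(l_a: float, l_b: float) -> float:
    """Weber-type luminance contrast K = (L1 - L2) / L1, with L1 the
    lighter of the two measured luminances (cd/m^2). Returns a value
    in [0, 1); higher means more visible contrast."""
    l1, l2 = max(l_a, l_b), min(l_a, l_b)
    if l1 <= 0:
        raise ValueError("luminances must be positive")
    return (l1 - l2) / l1
```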

  2. Investigating the Relationship of Luminosity and Curvature Using the Luminous Convolution Model for Spiral Galaxy Rotation Curves

    NASA Astrophysics Data System (ADS)

    Crowley, Meagan

    2016-03-01

    The Luminous Convolution Model maps velocities of galaxies given by data of visible matter with respect to the relative curvature of the emitter and receiver galaxy, using five different models of the Milky Way. This model purports that observations of the luminous profiles of galaxies do not take the relative curvatures of the emitter and receiver galaxies into account; it therefore maps the luminous profile onto the curvature using Lorentz transformations, and then back into the flat frame where local observations are made. The five models of the Milky Way used to compile galaxy data are those proposed by Klypin (2002) A and B, Xue (2008), Sofue (2013), and a mixture of the Xue and Sofue data. The Luminous Convolution Model has been able to accurately describe the rotation of spiral galaxies through this method without the need for dark matter. In each fitting of a given galaxy, the luminous profile graph exhibits a crossing with the graph of the curvature component, suggesting a correlation between the two. This correlation is currently under investigation as being related to phenomena apparent within each galaxy. To determine the correlation between the luminous profile and the curvature component, a functional analysis of the Luminous Convolution Model will be presented.

  3. STAR FORMATION IN TWO LUMINOUS SPIRAL GALAXIES

    SciTech Connect

    Hunter, Deidre A.; Ashburn, Allison; Wright, Teresa; Elmegreen, Bruce G.; Rubin, Vera C.; Józsa, Gyula I. G.; Struve, Christian

    2013-10-01

    We examined star formation in two very luminous (M_V = –22 to –23) Sc-type spiral galaxies, NGC 801 and UGC 2885, using ultra-deep Hα images. We combine these Hα images with UBV and Two-Micron All-Sky Survey JHK images and H I maps to explore the star formation characteristics of disk galaxies at high luminosity. Hα traces star formation in these galaxies to 4-6 disk scale lengths, but the lack of detection of Hα further out is likely due to the loss of Lyman continuum photons. Considering gravitational instabilities alone, we find that the gas and stars in the outer regions are marginally stable in an average sense, but considering dissipative gas and radial and azimuthal forcing, the outer regions are marginally unstable to forming spiral arms. Star formation is taking place in spiral arms, which are regions of locally higher gas densities. Furthermore, we have traced smooth exponential stellar disks over four magnitudes in V-band surface brightness and 4-6 disk scale lengths, in spite of a highly variable gravitational instability parameter. Thus, gravitational instability thresholds do not seem relevant to the stellar disk. One possibility for creating an exponential disk is that the molecular cloud densities and star formation rates have exponential profiles and this fact forces the stellar disk to build up such a profile. Another possibility is that the stellar disk is continuously adjusted to an exponential shape regardless of the star formation profile, for example, through global dynamical processes that scatter stars. However, such scattering processes are only known to operate in spiral systems, in which case they cannot explain the same dilemma of smooth exponential disks observed in dwarf irregular galaxies.
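    Gravitational-instability arguments of this kind are usually framed with the Toomre parameter Q = sigma * kappa / (pi * G * Sigma); a generic sketch of that standard criterion (the inputs are placeholders, not the paper's data):

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def toomre_q_gas(sigma_v, kappa, surface_density):
    """Toomre stability parameter for a gas disk:
    Q = sigma_v * kappa / (pi * G * Sigma),
    with sigma_v the gas velocity dispersion (m/s), kappa the epicyclic
    frequency (1/s) and Sigma the gas surface density (kg/m^2).
    Q > 1 indicates (marginal) stability against axisymmetric collapse;
    Q < 1 indicates instability."""
    return sigma_v * kappa / (math.pi * G * surface_density)
```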

  4. Intravascular photoacoustic imaging of exogenously labeled atherosclerotic plaque through luminal blood

    PubMed Central

    Yeager, Doug; Karpiouk, Andrei; Wang, Bo; Amirian, James; Sokolov, Konstantin; Smalling, Richard

    2012-01-01

    Combined intravascular ultrasound and intravascular photoacoustic (IVUS/IVPA) imaging has been previously established as a viable means for assessing atherosclerotic plaque morphological and compositional characteristics using both endogenous and exogenous contrast. In this study, IVUS/IVPA imaging of atherosclerotic rabbit aortas following systemic injection of gold nanorods (AUNRs) with peak absorbance within the tissue optical window is performed. Ex vivo imaging results reveal a high photoacoustic signal from localized AUNRs in regions with atherosclerotic plaques. Corresponding histological staining further confirms the preferential extravasation of AUNRs in atherosclerotic regions with compromised luminal endothelium and acute inflammation. The ability to detect AUNRs using combined IVUS and photoacoustic imaging in the presence of luminal saline and luminal blood is evaluated using both spectroscopic and single wavelength IVPA imaging techniques. Results demonstrate that AUNR detection within the arterial wall can be achieved using both methods, even in the case of imaging through luminal blood. PMID:23224013

  5. Orientation and spatial frequency selectivity of adaptation to color and luminance gratings.

    PubMed

    Bradley, A; Switkes, E; De Valois, K

    1988-01-01

    Prolonged viewing of sinusoidal luminance gratings produces elevated contrast detection thresholds for test gratings that are similar in spatial frequency and orientation to the adaptation stimulus. We have used this technique to investigate orientation and spatial frequency selectivity in the processing of color contrast information. Adaptation to isoluminant red-green gratings produces elevated color contrast thresholds that are selective for grating orientation and spatial frequency. Only small elevations in color contrast thresholds occur after adaptation to luminance gratings, and vice versa. Although the color adaptation effects appear slightly less selective than those for luminance, our results suggest similar spatial processing of color and luminance contrast patterns by early stages of the human visual system.
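    A sinusoidal luminance grating of given contrast, spatial frequency and orientation, of the kind used in such adaptation experiments, can be generated as follows (all parameter values are hypothetical, not taken from the study):

```python
import numpy as np

def luminance_grating(size, cycles_per_image, orientation_deg,
                      contrast=0.5, mean_luminance=50.0):
    """size x size luminance map L(x, y) = L0 * (1 + C * sin(2*pi*f*u)),
    where u is the coordinate along the grating's modulation direction,
    f the spatial frequency (cycles per image) and C the Michelson
    contrast. Mean luminance L0 in cd/m^2."""
    theta = np.deg2rad(orientation_deg)
    x = np.linspace(0.0, 1.0, size, endpoint=False)
    xx, yy = np.meshgrid(x, x)
    u = xx * np.cos(theta) + yy * np.sin(theta)
    phase = 2.0 * np.pi * cycles_per_image * u
    return mean_luminance * (1.0 + contrast * np.sin(phase))
```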

  6. Eta Carinae and Other Luminous Blue Variables

    NASA Technical Reports Server (NTRS)

    Corcoran, M. F.

    2006-01-01

    Luminous Blue Variables (LBVs) are believed to be evolved, extremely massive stars close to the Eddington Limit and hence prone to bouts of large-scale, unstable mass loss. I discuss current understanding of the evolutionary state of these objects, the role duplicity may play and known physical characteristics of these stars using the X-ray luminous LBVs Eta Carinae and HD 5980 as test cases.

  7. An Evaluation of Image Based Techniques for Early Wildfire Detection and Fuel Mapping

    SciTech Connect

    Gabbert, Dustin W.

    2015-05-01

    Few events can cause the catastrophic impact to ecology, infrastructure, and human safety of a wildland fire along the wildland-urban interface. The suppression of natural wildland fires over the past decade has caused a buildup of dry, dead surface fuels: a condition that, coupled with the right weather conditions, can cause large destructive wildfires capable of threatening both ancient tree stands and manmade infrastructure. Firefighters use fire danger models to determine staffing needs on high fire risk days; however, models are only as effective as the spatial and temporal density of their observations. OKFIRE, an Oklahoma initiative created by a partnership between Oklahoma State University and the University of Oklahoma, has proven that fire danger assessments close to the fire, both geographically and temporally, can give firefighters a significant increase in their situational awareness while fighting a wildland fire. This paper investigates several possible solutions for a small Unmanned Aerial System (UAS) which could gather information useful for detecting ground fires and constructing fire danger maps. Multiple fire detection and fuel mapping programs utilize satellites, manned aircraft, and large UAS equipped with hyperspectral sensors to gather useful information. Their success provides convincing proof of the utility that could be gained from low-altitude UAS gathering information at the exact time and place firefighters and land managers are interested in. Close proximity, both geographically and operationally, to the end user can reduce latency times below what could ever be possible with satellite observation. This paper expands on recent advances in computer vision, photogrammetry, and infrared and color imagery to develop a framework for a next-generation UAS which can assess fire danger and aid firefighters in real time as they observe, contain, or extinguish wildland fires. It also investigates the impact information gained by this

  8. Village Level Tsunami Threat Maps for Tamil Nadu, SE Coast of India: Numerical Modeling Technique

    NASA Astrophysics Data System (ADS)

    MP, J.; Kulangara Madham Subrahmanian, D.; V, R. M.

    2014-12-01

    The Indian Ocean tsunami (IOT) devastated several countries of the North Indian Ocean. India is one of the worst affected countries after Indonesia and Sri Lanka. In India, Tamil Nadu suffered the most, with fatalities exceeding 8,000 people. Historical records show that tsunamis have invaded the shores of Tamil Nadu in the past, making people realize that the tsunami threat looms over Tamil Nadu and that it is necessary to evolve strategies for tsunami threat management. The IOT brought to light that tsunami inundation and runup varied within short distances, and tsunami disaster management therefore requires large-scale maps identifying the areas that are likely to be affected by future tsunamis. Threat assessment has accordingly been carried out for six villages in the northern part of the Tamil Nadu state of India, including Mamallapuram (also called Mahabalipuram), which is famous for its rock-cut temples, and threat maps categorizing the coast into areas of different degrees of threat have been prepared. The threat was assessed by numerical modeling using the TUNAMI N2 code, considering different tsunamigenic sources along the Andaman-Sumatra trench. GEBCO and C-Map data were used for bathymetry, while land elevation data were generated by an RTK-GPS survey for a distance of 1 km from the shore and by SRTM for the inland areas. The model results show that in addition to the Sumatra source, which generated the IOT in 2004, earthquakes originating in Car Nicobar and North Andaman can inflict more damage. The North Andaman source can generate a massive tsunami, and an earthquake of magnitude greater than Mw 9 can affect not only Tamil Nadu but the entire south-east coast of India. The runup water level is used to demarcate the tsunami threat zones in the villages using GIS.

  9. Simultaneous chromatic and luminance human electroretinogram responses.

    PubMed

    Parry, Neil R A; Murray, Ian J; Panorgias, Athanasios; McKeefry, Declan J; Lee, Barry B; Kremers, Jan

    2012-07-01

    The parallel processing of information forms an important organisational principle of the primate visual system. Here we describe experiments which use a novel chromatic–achromatic temporal compound stimulus to simultaneously identify colour and luminance specific signals in the human electroretinogram (ERG). Luminance and chromatic components are separated in the stimulus; the luminance modulation has twice the temporal frequency of the chromatic modulation. ERGs were recorded from four trichromatic and two dichromatic subjects (1 deuteranope and 1 protanope). At isoluminance, the fundamental (first harmonic) response was elicited by the chromatic component in the stimulus. The trichromatic ERGs possessed low-pass temporal tuning characteristics, reflecting the activity of parvocellular post-receptoral mechanisms. There was very little first harmonic response in the dichromats' ERGs. The second harmonic response was elicited by the luminance modulation in the compound stimulus and showed, in all subjects, band-pass temporal tuning characteristic of magnocellular activity. Thus it is possible to concurrently elicit ERG responses from the human retina which reflect processing in both chromatic and luminance pathways. As well as providing a clear demonstration of the parallel nature of chromatic and luminance processing in the human retina, the differences that exist between ERGs from trichromatic and dichromatic subjects point to the existence of interactions between afferent post-receptoral pathways that are in operation from the earliest stages of visual processing. PMID:22586211
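    The frequency-tagging idea behind the compound stimulus can be sketched numerically: with the chromatic component modulated at f and the luminance component at 2f, harmonic analysis separates the two contributions. For illustration the analysis below is applied to the stimulus itself; in the experiment it would be applied to the recorded ERG (the amplitudes and the 4 Hz frequency are hypothetical):

```python
import numpy as np

fs = 1000.0    # sampling rate, Hz
f_chrom = 4.0  # chromatic modulation frequency (hypothetical)
t = np.arange(0.0, 1.0, 1.0 / fs)

# Compound stimulus: chromatic component at f, luminance at 2f.
stimulus = 0.8 * np.sin(2 * np.pi * f_chrom * t) \
         + 0.5 * np.sin(2 * np.pi * 2 * f_chrom * t)

# Harmonic analysis separates the two frequency-tagged components.
spectrum = np.fft.rfft(stimulus) / (len(t) / 2)  # amplitude-normalized
freqs = np.fft.rfftfreq(len(t), 1.0 / fs)
first_harmonic = abs(spectrum[freqs == f_chrom][0])       # chromatic
second_harmonic = abs(spectrum[freqs == 2 * f_chrom][0])  # luminance
```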

  11. A new technique for selective identification and mapping of enhancers within long genomic sequences.

    PubMed

    Chernov, Igor; Stukacheva, Elena; Akopov, Sergey; Didych, Dmitry; Nikolaev, Lev; Sverdlov, Eugene

    2008-05-01

    We report a new experimental method for the direct selection, identification, and mapping of potential enhancer sequences within extended stretches of genomic DNA. The method allows simultaneous cloning of many sequences instead of tedious screening of separate ones, thus providing a robust and high-throughput approach to the mapping of enhancers. The selection procedure is based on the ability of such sequences to activate a minimal promoter that drives expression of a selectable gene. To this end, a mixture of short DNA fragments derived from the segment of interest was cloned into a retroviral vector containing the neomycin phosphotransferase II gene under the control of a cytomegalovirus (CMV) minimal promoter. The pool of retroviruses obtained was used to infect HeLa cells and then to select neomycin-resistant colonies containing constructs with enhancer-like sequences. The pool of genomic fragments was rescued by PCR and cloned, forming a library of potential enhancers. Fifteen enhancer-like fragments were selected from a 1-Mb human genome locus, and the enhancer activity of 13 of them was verified in a transient transfection reporter gene assay. The sequences selected were found to be predominantly located near the 5' regions of genes or within gene introns. PMID:18476831

  12. Nanoscale structural and functional mapping of nacre by scanning probe microscopy techniques

    NASA Astrophysics Data System (ADS)

    Zhou, Xilong; Miao, Hongchen; Li, Faxin

    2013-11-01

    Nacre has received great attention due to its nanoscale hierarchical structure and extraordinary mechanical properties. Meanwhile, the nanoscale piezoelectric properties of nacre have also been investigated but the structure-function relationship has never been addressed. In this work, firstly we realized quantitative nanomechanical mapping of nacre of a green abalone using atomic force acoustic microscopy (AFAM). The modulus of the mineral tablets is determined to be ~80 GPa and that of the organic biopolymer no more than 23 GPa, and the organic-inorganic interface width is determined to be about 34 +/- 9 nm. Then, we conducted both AFAM and piezoresponse force microscopy (PFM) mapping in the same scanning area to explore the correlations between the nanomechanical and piezoelectric properties. The PFM testing shows that the organic biopolymer exhibits a significantly stronger piezoresponse than the mineral tablets, and they permeate each other, which is very difficult to reproduce in artificial materials. Finally, the phase hysteresis loops and amplitude butterfly loops were also observed using switching spectroscopy PFM, implying that nacre may also be a bio-ferroelectric material. The obtained nanoscale structural and functional properties of nacre could be very helpful in understanding its deformation mechanism and designing biomimetic materials of extraordinary properties.

  13. High-Resolution Debris Flow Volume Mapping with Unmanned Aerial Systems (uas) and Photogrammetric Techniques

    NASA Astrophysics Data System (ADS)

    Adams, M. S.; Fromm, R.; Lechner, V.

    2016-06-01

    Debris flows cause an average of €30 million in damage and 1-2 fatalities every year in Austria. Detailed documentation of their extent and magnitude is essential for understanding, preventing and mitigating these natural hazard events. The recent development of unmanned aerial systems (UAS) has provided a new possibility for on-demand high-resolution monitoring and mapping. Here, we present a study where the spatial extent and volume of a large debris flow event were mapped with different UAS fitted with commercial off-the-shelf sensors. Orthophotos and digital terrain models (DTM) were calculated using structure-from-motion photogrammetry software. Terrain height differences caused by the debris flow in the catchment and on the valley floor were derived by subtracting the pre-event airborne laser scanning (ALS) DTM from a post-event UAS-DTM. The analysis of the volumetric sediment budget showed that approximately 265,000 m³ of material was mobilised in the catchment, of which 45,000 m³ settled there; of the material that reached the valley floor, 120,000 m³ was deposited, while another 10,000 m³ was eroded. The UAS results were validated against ALS data and imagery from a traditional manned-aircraft photogrammetry campaign. In conclusion, UAS data can reach an accuracy and precision comparable to manned-aircraft data, but with the added benefits of higher flexibility, easier repeatability, fewer operational constraints and higher spatial resolution.
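    The volumetric sediment budget comes from differencing co-registered pre- and post-event DTMs and summing positive and negative height changes separately; a minimal sketch (the grids and cell size are hypothetical):

```python
import numpy as np

def sediment_budget(dtm_pre, dtm_post, cell_size):
    """Deposition and erosion volumes (m^3) from two co-registered DTMs
    (heights in m) on square cells of side cell_size (m).
    Positive height change = deposition, negative = erosion."""
    dz = dtm_post - dtm_pre
    cell_area = cell_size ** 2
    deposition = dz[dz > 0].sum() * cell_area
    erosion = -dz[dz < 0].sum() * cell_area
    return deposition, erosion
```

    In practice a minimum-detection threshold on dz (derived from the DTM error budget) would be applied before summing, so that photogrammetric noise is not counted as volume change.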

  14. Comparison of VTEC from ground-based space geodetic techniques based on ray-traced mapping factors

    NASA Astrophysics Data System (ADS)

    Heinkelmann, Robert; Alizadeh, M. Mahdi; Schuh, Harald; Deng, Zhiguo; Zus, Florian; Etemadfard, M. Hossein

    2016-07-01

    For the derivation of vertical total electron content (VTEC) from slant total electron content (STEC), usually a standard approach is used based on mapping functions that assume a single-layer model of the ionosphere (e.g. IERS Conventions 2010). In our study we test the standard approach against a recently developed alternative which is based on station specific ray-traced mapping factors. For the evaluation of this new mapping concept, we compute VTEC at selected Very Long Baseline Interferometry (VLBI) stations using the dispersive delays and the corresponding formal errors obtained by observing extra-galactic radio sources at two radio frequencies in S- and X-bands by the permanent geodetic/astrometric program organized by the IVS (International VLBI Service for Geodesy and Astrometry). Additionally, by applying synchronous sampling and a consistent analysis configuration, we determine VTEC at Global Navigation Satellite System (GNSS) antennas using GPS (Global Positioning System) and/or GLONASS (Globalnaja nawigazionnaja sputnikowaja Sistema) observations provided by the IGS (International GNSS Service) that are operated in the vicinity of the VLBI antennas. We compare the VTEC time series obtained by the individual techniques over a period of about twenty years and describe their characteristics qualitatively and statistically. The length of the time series allows us to assess the long-term climatology of ionospheric VTEC during the last twenty years.
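    For reference, the standard single-layer conversion that the ray-traced mapping factors are tested against relates STEC to VTEC through the zenith angle at the ionospheric pierce point; a sketch, where the 450 km shell height is a common conventional choice rather than a value taken from this study:

```python
import math

R_E = 6371.0   # mean Earth radius, km
H_ION = 450.0  # single-layer ionosphere shell height, km (common choice)

def vtec_from_stec(stec, zenith_deg):
    """Single-layer model: VTEC = STEC * cos(z'), where z' is the
    zenith angle at the pierce point, given by
    sin(z') = R_E / (R_E + H_ION) * sin(z) for station zenith angle z."""
    z = math.radians(zenith_deg)
    sin_zp = R_E / (R_E + H_ION) * math.sin(z)
    return stec * math.sqrt(1.0 - sin_zp ** 2)
```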

  15. Making sense of crowdsourced observations: Data fusion techniques for real-time mapping of urban air quality

    NASA Astrophysics Data System (ADS)

    Schneider, Philipp; Castell, Nuria; Vogt, Matthias; Lahoz, William; Bartonova, Alena

    2015-04-01

    With an ever-increasing amount of environmental observations available through crowdsourcing, one of the major emerging challenges is how to best make sense of the vast amount of collected observations and how to provide citizens and other end-users with a relevant value-added product. Regarding air quality, a high-density network of low-cost sensors provided by crowdsourcing has significant potential for improving spatial mapping in general and in urban areas in particular. However, most datasets of observations made within a crowdsourcing framework contain substantial data gaps, and the observations are generally point measurements, which are only representative of a relatively small area. This poses a significant challenge for mapping applications. One way to overcome these issues is to combine the crowdsourced data with spatially continuous data from a model. We present a novel data fusion-based technique for combining real-time crowdsourced observations with model output that makes it possible to provide highly detailed, up-to-date maps of urban air quality. The EU-funded CITI-SENSE project is deploying a dense network of low-cost sensors measuring air quality in eight cities around Europe. These crowdsourced observations are used for mapping urban air quality in real time by fusing them with data obtained from statistical or deterministic air quality models. Data fusion techniques allow for combining observations with model data in a mathematically objective way and therefore provide a means of adding value to both the observations and the model. The observations are improved by filling spatio-temporal gaps in the data, and the model is improved by constraining it with observations. The model further provides detailed spatial patterns in areas where no observations are available. As such, data fusion of observations from high-density low-cost sensor networks together with models can contribute to significantly improving urban-scale air quality mapping. The data fusion
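
    A common way to realize the observation-model fusion described here is to compute observation-minus-model residuals at the sensor locations, interpolate them over the grid, and add the correction back to the model field. A simplified sketch using inverse-distance weighting of the residuals (the project's actual fusion scheme may use geostatistical methods instead; all names and the grid are illustrative):

```python
import numpy as np

def fuse_with_observations(model, grid_x, grid_y, obs, obs_x, obs_y, power=2.0):
    """Data-fusion sketch: inverse-distance interpolation of the
    observation-minus-model residuals, added back onto the model field.

    model: 2D array of modelled concentrations on the grid.
    grid_x, grid_y: 2D coordinate arrays (same shape as model).
    obs, obs_x, obs_y: 1D arrays of point observations and coordinates.
    """
    # residual of each observation against the nearest grid node
    residuals = []
    for o, ox, oy in zip(obs, obs_x, obs_y):
        d2 = (grid_x - ox) ** 2 + (grid_y - oy) ** 2
        i, j = np.unravel_index(np.argmin(d2), d2.shape)
        residuals.append(o - model[i, j])

    # spread residuals over the whole grid by inverse-distance weighting
    correction = np.zeros_like(model, dtype=float)
    weight_sum = np.zeros_like(model, dtype=float)
    for r, ox, oy in zip(residuals, obs_x, obs_y):
        d = np.hypot(grid_x - ox, grid_y - oy)
        w = 1.0 / np.maximum(d, 1e-9) ** power
        correction += w * r
        weight_sum += w
    return model + correction / weight_sum
```

    This captures the two benefits named in the abstract: near a sensor the fused field is pulled toward the observation, while far from any sensor it falls back on the model's spatial pattern.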

  16. Precise calibration of linear camera equipped with cylindrical lenses using a radial basis function-based mapping technique.

    PubMed

    Liu, Haiqing; Yang, Linghui; Guo, Yin; Guan, Ruifen; Zhu, Jigui

    2015-02-01

    The linear camera equipped with cylindrical lenses has prominent advantages in high-precision coordinate measurement and dynamic position tracking. However, the severe distortion of the cylindrical lenses limits the application of this camera. To overcome this obstacle, a precise two-step calibration method is developed. In the first step, a radial basis function-based (RBF-based) mapping technique is employed to recover the projection mapping of the imaging system by interpolating the correspondence between incident rays and image points. For an object point in 3D space, the plane passing through the object point in the camera coordinate frame can be calculated accurately by this technique. The second step is the calibration of extrinsic parameters, which realizes the coordinate transformation from the camera coordinate frame to the world coordinate frame. The proposed method has three advantages. Firstly, this method (black-box calibration) is still effective even if the distortion is high and asymmetric. Secondly, the coupling between extrinsic parameters and other parameters, which normally occurs and may lead to the failure of calibration, is avoided, because this method simplifies the pinhole model and only extrinsic parameters appear in the simplified model. Thirdly, the nonlinear optimization, which is widely used to refine camera parameters, is better conditioned, since fewer parameters are needed and a more accurate initial value is available for the iteration. Both simulated and real experiments have been carried out, and good results have been obtained.
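
    The first calibration step recovers a smooth mapping by RBF interpolation of known correspondences. A generic sketch of exact RBF interpolation with a Gaussian kernel (the abstract does not state which basis function the authors use; kernel, parameter, and function names are illustrative):

```python
import numpy as np

def rbf_fit(centers, values, epsilon=1.0):
    """Fit Gaussian-RBF weights so the interpolant reproduces `values`
    exactly at `centers` (the 'training' correspondences)."""
    d2 = ((centers[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    phi = np.exp(-epsilon * d2)          # kernel (Gram) matrix
    return np.linalg.solve(phi, values)  # interpolation weights

def rbf_eval(centers, weights, points, epsilon=1.0):
    """Evaluate the fitted RBF interpolant at new points."""
    d2 = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-epsilon * d2) @ weights
```

    Because the interpolant is a black box fitted to measured correspondences, it can absorb high and asymmetric distortion without any parametric lens model, which is the property the paper exploits.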

  17. Evaluation of historical museum interior lighting system using fully immersive virtual luminous environment

    NASA Astrophysics Data System (ADS)

    Navvab, Mojtaba; Bisegna, Fabio; Gugliermetti, Franco

    2013-05-01

    Saint Rocco Museum, a historical building in Venice, Italy, is used as a case study to explore the performance of its lighting system and the impact of visible light on viewing the large-size artworks. The transition from three-dimensional architectural rendering to three-dimensional virtual luminance mapping and visualization within a virtual environment is described as an integrated optical method for its application toward preservation of the cultural heritage of the space. Lighting simulation programs represent color as RGB triplets in a device-dependent color space such as ITU-R BT.709. A prerequisite for this is a 3D model, which can be created within this computer-aided virtual environment. The onsite measured surface luminance, chromaticity and spectral data were used as input to an established real-time indirect illumination algorithm and physically based algorithms to produce the best approximation of RGB values for generating images of the objects. Conversion of RGB to and from spectra has been a major undertaking, since an infinite number of spectra can create the same colors defined by RGB in the program. The ability to simulate light intensity, candle power and spectral power distributions provides an opportunity to examine the impact of color inter-reflections on historical paintings. VR offers an effective technique to quantify the impact of visible light on human visual performance under precisely controlled representations of the light spectrum that can be experienced in 3D format in a virtual environment, as well as in historical visual archives. The system can easily be expanded to include other measurements and stimuli.

  18. An endoscope for simultaneous macroscopic navigation and microscopic inspection of luminal sidewalls

    NASA Astrophysics Data System (ADS)

    Leavesley, Silas; Sturgis, Jennifer; Robinson, J. Paul

    2008-02-01

    Endoscopic techniques are commonly used for esophageal and gastrointestinal screening. In this process, atypical regions are identified by gross visual and morphological changes. These regions are then biopsied for pathological confirmation prior to determining treatment. In an effort to increase the sensitivity of endoscopic screening, many groups have worked on developing microscopic endoscopes capable of inspecting tissues at a cellular level. These microscopic endoscopes are generally implemented either as a stand-alone fiber or through the working channel of a traditional endoscope, and are oriented in a manner similar to traditional flexible endoscopes, imaging the region directly ahead of the endoscope with a wide-angle lens. However, this may not be the optimum configuration for microscopic inspection of luminal sidewalls. We present a novel optical configuration for an endoscope that can simultaneously function as a traditional forward-viewing macroscopic endoscope and as a sidewall-viewing microscopic endoscope. With the first prototype, we have realized a water-immersion microscope that is capable of imaging tissues at the single-cell level. In addition, the microscopic side-port configuration enables efficient mapping of the luminal wall. Utilizing simultaneous macroscopic and microscopic imaging, we are developing software for image registration and analysis that will enable localization of microscopic features within a macroscopic frame of reference. Through a combination of microscopic sidewall imaging and software for image analysis, we aim to provide the clinician with the equivalent of an in vivo biopsy, increasing screening effectiveness and decreasing discomfort and costs related to performing multiple biopsies of suspected regions.

  19. Application of satellite data and LARS's data processing techniques to mapping vegetation of the Dismal Swamp. M.S. Thesis - Old Dominion Univ.

    NASA Technical Reports Server (NTRS)

    Messmore, J. A.

    1976-01-01

    The feasibility of using digital satellite imagery and automatic data processing techniques as a means of mapping swamp forest vegetation was considered, using multispectral scanner data acquired by the LANDSAT-1 satellite. The site for this investigation was the Dismal Swamp, a 210,000 acre swamp forest located south of Suffolk, Va. on the Virginia-North Carolina border. Two basic classification strategies were employed. The initial classification utilized unsupervised techniques which produced a map of the swamp indicating the distribution of thirteen forest spectral classes. These classes were later combined into three informational categories: Atlantic white cedar (Chamaecyparis thyoides), Loblolly pine (Pinus taeda), and deciduous forest. The subsequent classification employed supervised techniques which mapped Atlantic white cedar, Loblolly pine, deciduous forest, water and agriculture within the study site. A classification accuracy of 82.5% was produced by unsupervised techniques compared with 89% accuracy using supervised techniques.

  20. Mapping the mean annual precipitation of China using local interpolation techniques

    NASA Astrophysics Data System (ADS)

    Sun, Wei; Zhu, Yunqiang; Huang, Shengli; Guo, Chunxia

    2015-01-01

    Spatially explicit precipitation data are required in research in hydrology, agriculture, ecology, and the environmental sciences. In this study, two established techniques of local ordinary linear regression (OLR) and geographically weighted regression (GWR) and two new local hybrid interpolation techniques of local regression-kriging (LRK) and geographically weighted regression kriging (GWRK) were compared to predict the spatial distribution of mean annual precipitation of China. Precipitation data from 684 meteorological stations were used in the analysis, and a stepwise regression analysis was used to select six covariates, including longitude, latitude, elevation, slope, surface roughness, and river density. The four spatial prediction methods (OLR, GWR, LRK, and GWRK) were implemented with local regression techniques with different numbers of neighbors (50, 100, 150, and 200). The prediction accuracy was assessed at validation sites with the root mean squared deviation, mean estimation error, and R-square values. The results showed that LRK outperforms OLR and GWRK outperforms GWR, indicating that adding the kriging of regression residuals can help improve the prediction performance. GWRK gives the best prediction, but the accuracy of estimation varies with the number of neighborhood points used for modeling. Although LRK is outperformed by GWRK, LRK is still recommended as a powerful and practical interpolation method given its computational efficiency. However, if LRK and GWRK are used to extrapolate prediction values, post-processing in the areal interpolation will be needed.
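
    The GWR component fits a separate weighted regression at each prediction location, with station weights decaying with distance. A one-point sketch using a Gaussian kernel (the bandwidth and all names are assumptions; the study selects local stations by neighbor count rather than by a fixed kernel bandwidth):

```python
import numpy as np

def gwr_predict(pred_xy, pred_x_row, station_xy, station_X, y, bandwidth):
    """One-point GWR prediction: weighted least squares in which the
    weight of each station decays with its distance from the
    prediction location (Gaussian kernel).

    pred_x_row, station_X: covariate rows including an intercept column.
    """
    d = np.hypot(*(station_xy - pred_xy).T)        # station distances
    w = np.exp(-0.5 * (d / bandwidth) ** 2)        # spatial kernel weights
    W = np.diag(w)
    beta = np.linalg.solve(station_X.T @ W @ station_X,
                           station_X.T @ W @ y)    # local coefficients
    return pred_x_row @ beta
```

    LRK and GWRK then krige the residuals of such local regressions and add the kriged residual surface to the regression prediction, which is the step the abstract credits for the accuracy gain.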

  1. Three-dimensional electron paramagnetic resonance imaging technique for mapping porosity in ceramics

    SciTech Connect

    Kordas, G.; Kang, Y.H. )

    1991-04-01

    This paper reports on a three-dimensional (3D) electron paramagnetic resonance imaging (EPRI) method which was developed to probe the structure and size of pores in ceramic materials. The imaging device that was added to the EPR instrument consisted of a computer-controlled current source and magnetic field gradient. This add-on facility was tested using a well-defined diphenylpicrylhydrazyl phantom sample. Pumice was then used to demonstrate the potential of the technique. This stone was immersed in a 0.5 mM {sup 15}N-substituted perdeuterated tempone water solution to fill the pores with spin labels. Images were reconstructed using a filtered back-projection technique. A two-dimensional (2D) imaging plane was constructed by collecting 33 projection planes over 180 {degrees}. A 3D image was derived from 22 planes, each constructed from 22 projections. At present, the facility allows a resolution of 69 and 46 {mu}m for 2D and 3D imaging, respectively. Advancements in the imaging apparatus, software, and line width of the spin labels will be needed to enhance the resolution of this technique.

  2. Duration expansion at low luminance levels.

    PubMed

    Bruno, Aurelio; Ayhan, Inci; Johnston, Alan

    2011-01-01

    Duration distortions have been shown to occur at the time of saccades and following high temporal frequency or contrast adaptation. Under all these conditions, changes in the temporal tuning of M neurons also occur, suggesting that there might be a link between the two phenomena. In order to explore this relationship further, we measured the apparent duration of visual stimuli in the dark, where the temporal impulse response has been reported to lengthen. We first measured a progressive shift and reduction of the occurrence of an apparent motion reversal as we decreased the luminance level, indicating a lengthening of the temporal impulse response. We then measured perceived duration at these luminance levels (0.75, 3, and 50 cd/m(2)) after matching for apparent contrast and temporal frequency. While perceived temporal frequency did not substantially differ across luminance levels, duration appeared expanded at the lowest luminance level relative to the highest by approximately 60 ms. Thus, we have shown that reduced luminance is associated with both a lengthening of the temporal impulse response and a duration expansion, linking the two and providing further evidence for a relationship between changes in the neuronal tuning in the early stages of the visual system and time perception.

  3. Dynamic coding of temporal luminance variation.

    PubMed

    Kristjánsson, Árni

    2012-06-01

    The range of variation in environmental stimuli is much larger than the visual system can represent. It is therefore sensible for the system to adjust its responses to the momentary input statistics of the environment, such as when our pupils contract to limit the light entering the eye. Previous evidence indicates that the visual system increasingly centers responses on the mean of the visual input and scales responses to its variation during adaptation. To what degree does adaptation to a stimulus varying in luminance over time result in such adjustment of responses? The first two experiments were designed to test whether sensitivity to changes in the amplitude and the mean of a 9.6° central patch varying sinusoidally in luminance at 0.6 Hz would increase or decrease with adaptation. This was also tested for a dynamic peripheral stimulus (random patches rotating on the screen) to test to what extent the effects uncovered in the first two experiments reflect retinotopic mechanisms. Sensitivity to changes in mean and amplitude of the temporal luminance variation increased sharply the longer the adaptation to the variation, both for the large patch and the peripheral patches. Adaptation to luminance variation leads to increased sensitivity to temporal luminance variation for both central and peripheral presentation, the latter result ruling retinotopic mechanisms out as sole explanations for the adaptation effects.

  4. Karst features detection and mapping using airphotos, DSMs and GIS techniques

    NASA Astrophysics Data System (ADS)

    Kakavas, M. P.; Nikolakopoulos, K. G.; Zagana, E.

    2015-10-01

    The aim of this work is to detect and qualify natural karst depressions in the Aitoloakarnania Prefecture, Western Greece, using remote sensing data in conjunction with Geographical Information Systems (GIS). The study area is a part of the Ionian geotectonic zone, and its geological background consists of the Triassic evaporites. The Triassic carbonate breccias were formed as a result of the tectonic and orogenetic setting of the external Hellenides and the diapir phenomena of the Triassic evaporites. The landscape is characterized by exokarst features: closed depressions in the Triassic carbonate breccias. At the outset of this study, an in situ observation was performed in order to identify dolines and swallow holes. The creation of sinkholes, in general, is based on the collapse of the surface layer due to chemical dissolution of carbonate rocks. In the current study, airphoto stereopairs, DSMs and GIS were combined in order to detect and map the karst features. Thirty-seven airphotos were imported into Leica Photogrammetry Suite and a stereo model of the study area was created. Then, in 3D view, possible karst features were detected and digitized. Those sites were verified during the in situ survey. The ASTER GDEM, the SRTM DEM, a high-resolution airphoto DSM created from the Greek Cadastral and a DEM from digitized contours of the 1/50,000 topographic map were also evaluated in a GIS environment for the automatic detection of the karst depressions. The results are presented in this study.
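
    One standard way to automate the detection of closed depressions in a DEM, as evaluated here, is depression filling: raise every cell to its spill level and flag cells where the filled surface sits above the original terrain. A priority-flood sketch (an illustrative generic algorithm, not the authors' specific workflow):

```python
import heapq
import numpy as np

def fill_depressions(dem):
    """Priority-flood depression filling. Water can only leave the grid
    over its border, so every cell is raised to the lowest spill level
    reachable from the border. Closed depressions (candidate dolines or
    sinkholes) appear where (filled - dem) > 0."""
    nrows, ncols = dem.shape
    filled = np.full_like(dem, np.inf, dtype=float)
    heap = []
    # seed the queue with the border cells at their own elevation
    for r in range(nrows):
        for c in range(ncols):
            if r in (0, nrows - 1) or c in (0, ncols - 1):
                filled[r, c] = dem[r, c]
                heapq.heappush(heap, (float(dem[r, c]), r, c))
    # grow inward from the lowest known spill level
    while heap:
        level, r, c = heapq.heappop(heap)
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < nrows and 0 <= cc < ncols and np.isinf(filled[rr, cc]):
                filled[rr, cc] = max(float(dem[rr, cc]), level)
                heapq.heappush(heap, (filled[rr, cc], rr, cc))
    return filled
```

    Thresholding the fill depth and vectorizing the flagged cells yields candidate depression polygons that can then be verified in the field, as in the study's in situ survey.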

  5. Remote sensing and object-based techniques for mapping fine-scale industrial disturbances

    NASA Astrophysics Data System (ADS)

    Powers, Ryan P.; Hermosilla, Txomin; Coops, Nicholas C.; Chen, Gang

    2015-02-01

    Remote sensing provides an important data source for the detection and monitoring of disturbances; however, using these data to recognize fine-spatial-resolution industrial disturbances dispersed across extensive areas presents unique challenges (e.g., accurate delineation and identification) and deserves further investigation. In this study, we present and assess a geographic object-based image analysis (GEOBIA) approach with high-spatial-resolution imagery (SPOT 5) to map industrial disturbances, using the oil sands region of Alberta's northeastern boreal forest as a case study. Key components of this study were (i) the development of additional spectral, texture, and geometrical descriptors for characterizing image-objects (groups of alike pixels) and their contextual properties, and (ii) the introduction of decision trees with boosting to perform the object-based land cover classification. Results indicate that the approach achieved an overall accuracy of 88%, and that all descriptor groups provided relevant information for the classification. Although challenges remain (e.g., distinguishing between spectrally similar classes, or placing discrete boundaries), the approach was able to effectively delineate and classify fine-spatial-resolution industrial disturbances.

  6. Basic forest cover mapping using digitized remote sensor data and automated data processing techniques

    NASA Technical Reports Server (NTRS)

    Coggeshall, M. E.; Hoffer, R. M.

    1973-01-01

    Remote sensing equipment and automatic data processing techniques were employed as aids in the institution of improved forest resource management methods. On the basis of automatically calculated statistics derived from manually selected training samples, the feature selection processor of LARSYS selected, upon consideration of various groups of the four available spectral regions, a series of channel combinations whose automatic classification performances (for six cover types, including both deciduous and coniferous forest) were tested, analyzed, and further compared with automatic classification results obtained from digitized color infrared photography.

  7. A perceptual hashing method based on luminance features

    NASA Astrophysics Data System (ADS)

    Luo, Siqing

    2011-02-01

    With the rapid development of multimedia technology, content-based searching and image authentication have become strong requirements. Image hashing techniques have been proposed to meet them. In this paper, an RST (Rotation, Scaling, and Translation) resistant image hash algorithm is presented. In this method, the geometric distortions are extracted and adjusted by normalization. The features of the image are generated from the high-rank moments of the luminance distribution. With the help of the efficient image representation capability of high-rank moments, the robustness and discrimination of the proposed method are improved. The experimental results show that the proposed method is better than some existing methods in robustness under rotation attack.
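
    The idea of hashing high-rank moments of the luminance distribution can be illustrated with a toy version: normalize the luminance values, compute several high-order moments (which depend only on the histogram, so any spatial rearrangement of pixels, including rotation, leaves them unchanged), and threshold them into bits. This is a simplified stand-in for the paper's method, whose details (geometric normalization, moment ranks, quantization) are not given in the abstract:

```python
import numpy as np

def luminance_moment_hash(gray, ranks=range(2, 10)):
    """Toy perceptual hash from high-rank central moments of the
    luminance distribution, thresholded against their median."""
    x = gray.astype(float).ravel()
    x = (x - x.mean()) / (x.std() + 1e-12)           # normalize luminance
    moments = np.array([np.mean(x ** k) for k in ranks])
    bits = (moments > np.median(moments)).astype(int)
    return "".join(map(str, bits))
```

    Because the moments ignore pixel positions, the toy hash is already invariant to 90° rotation and flips; the paper's geometric normalization is what extends robustness to arbitrary RST distortions.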

  8. Lessons in modern digital field geology: Open source software, 3D techniques, and the new world of digital mapping

    NASA Astrophysics Data System (ADS)

    Pavlis, Terry; Hurtado, Jose; Langford, Richard; Serpa, Laura

    2014-05-01

    Although many geologists refuse to admit it, it is time to put paper-based geologic mapping into the historical archives and move to the full potential of digital mapping techniques. For our group, flat map digital geologic mapping is now a routine operation in both research and instruction. Several software options are available, and basic proficiency with the software can be learned in a few hours of instruction and practice. The first practical field GIS software, ArcPad, remains a viable, stable option on Windows-based systems. However, the vendor seems to be moving away from ArcPad in favor of mobile software solutions that are difficult to implement without GIS specialists. Thus, we have pursued a second software option based on the open source program QGIS. Our QGIS system uses the same shapefile-centric data structure as our ArcPad system, including similar pop-up data entry forms and generic graphics for easy data management in the field. The advantage of QGIS is that the same software runs on virtually all common platforms except iOS, although the Android version remains unstable as of this writing. A third software option we are experimenting with for flat map-based field work is Fieldmove, a derivative of the 3D-capable program Move developed by Midland Valley. Our initial experiments with Fieldmove are positive, particularly with the new, inexpensive (<300 Euros) Windows tablets. However, the lack of flexibility in data structure makes for cumbersome workflows when trying to interface our existing shapefile-centric data structures to Move. Nonetheless, in spring 2014 we will experiment with full-3D immersion in the field using the full Move software package in combination with ground-based LiDAR and photogrammetry. One new workflow suggested by our initial experiments is that field geologists should consider using photogrammetry software to capture 3D visualizations of key outcrops. This process is now straightforward in several software packages, and

  9. West Java Snack Mapping based on Snack Types, Main Ingredients, and Processing Techniques

    NASA Astrophysics Data System (ADS)

    Nurani, A. S.; Subekti, S.; Ana

    2016-04-01

    The research was motivated by the lack of literature on archipelago snacks, especially those from West Java. It aims to explore the snack types, the processing techniques, and the main ingredients in order to plan learning material on archipelago cakes, especially from West Java. The research methods used are descriptive observations and interviews. The samples were randomly chosen from all regions in West Java. The findings identify traditional snacks from West Java, including: 1. snack types which are similar across all sampled regions, namely opak, rangginang, nagasari, aliagrem, cuhcur, keripik, semprong, wajit, dodol, kecimpring, combro, tape ketan, and surabi; typical regional snack types include burayot (Garut), simping kaum (Purwakarta), surabi hejo (Karawang), papais cisaat (Subang), papais moyong and opak bakar (Kuningan), opak oded and ranggesing (Sumedang), gapit and tapel (Cirebon), gulampo and kue aci (Tasikmalaya), wajit cililin and gurilem (West Bandung), and borondong (Bandung District); 2. various processing techniques, namely steaming, boiling, frying, caramelizing, baking, grilling, roasting, and sugaring; 3. various main ingredients, namely rice, local glutinous rice, rice flour, glutinous rice flour, starch, wheat flour, hunkue flour, cassava, sweet potato, banana, nuts, and corn; 4. a snack classification for West Java, namely (1) traditional snacks, (2) creation snacks, (3) modification snacks, and (4) outside-influence snacks.

  10. A comparison of two-dimensional techniques for converting magnetocardiogram maps into effective current source distributions

    NASA Astrophysics Data System (ADS)

    Ogata, K.; Kandori, A.; Miyashita, T.; Sekihara, K.; Tsukada, K.

    2011-01-01

    The aim of this study was to develop a method for converting the pseudo two-dimensional current given by a current-arrow map (CAM) into the physical current. The physical current distribution is obtained as the optimal solution in a least mean square sense with Tikhonov regularization (LMSTR). In the current dipole simulation, the current pattern differences (ΔJ) between the results of the CAM and the LMSTR with several regularization parameters (α = 10⁻¹-10⁻¹⁵) are calculated. In magnetocardiographic (MCG) analysis, the depth (zd) of a reconstruction plane is chosen by using the coordinates of the sinus node, which is estimated from MCG signals at the early p-wave. The ΔJs at p-wave peaks, QRS-complex peaks, and T-wave peaks of MCG signals for healthy subjects are calculated. Furthermore, correlation coefficients and regression lines are also calculated from the current values of the CAM and the LMSTR during p-waves, QRS-complexes, and T-waves of MCG signals. In the simulation, the ΔJs had a minimal value at α ≈ 10⁻¹⁰. The ΔJs (α = 10⁻¹⁰) at p-wave peaks, QRS-complex peaks, and T-wave peaks of MCG signals for healthy subjects also had minimal values. The correlation coefficients of the current values given by the CAM and the LMSTR (α = 10⁻¹⁰) were greater than 0.9. Furthermore, the slopes (y) of the regression lines are correlated with the depth (zd) (r = -0.93). Consequently, the CAM value can be transformed into the LMSTR current value by multiplying it by the slope (y) obtained from the depth (zd). In conclusion, the result given by the CAM can be converted into an effective physical current distribution by using the depth (zd).
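
    The LMSTR solution referred to above is the closed-form Tikhonov-regularized least-squares estimate. A generic sketch (the lead-field matrix L and data vector b stand in for the MCG forward model and the measured field; they are illustrative placeholders):

```python
import numpy as np

def tikhonov_solve(L, b, alpha):
    """Tikhonov-regularized least squares:
    minimize ||L j - b||^2 + alpha * ||j||^2, with closed form
    j = (L^T L + alpha * I)^(-1) L^T b."""
    n = L.shape[1]
    return np.linalg.solve(L.T @ L + alpha * np.eye(n), L.T @ b)
```

    Scanning alpha over a range such as 10⁻¹ to 10⁻¹⁵, as in the simulation above, trades data fit against the size of the reconstructed current; the study finds the current-pattern difference minimal near alpha ≈ 10⁻¹⁰.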

  11. Redox and speciation mapping of rock thin sections using high spatial resolution full-field imaging technique

    NASA Astrophysics Data System (ADS)

    de Andrade, V.; Susini, J.; Salomé, M.; Beraldin, O.; Heymes, T.; Lewin, E.

    2009-04-01

    Because of their complex genesis, natural rocks are most often heterogeneous systems, with heterogeneities at various scales in both chemistry and structure. In the last decade, the dramatic improvements of hyperspectral imaging techniques have provided new tools for accurate material characterisation. Most of these micro- and nano-analytical techniques rely on scanning instruments, which offer high spatial resolution but suffer from long acquisition times, imposing practical limits on the field of view. Conversely, full-field imaging techniques rely on fast parallel acquisition but have limited resolution. Although soft X-ray full-field microscopes based on Fresnel zone plates are commonly used for high-resolution imaging, their combination with spectroscopy is challenging and 2D chemical mapping is still difficult. For harder X-rays, a lensless X-ray microscope based on a simple propagation geometry is easier to implement and can be readily used for 2D spectro-microscopy. A full-field experimental setup was optimized at the ESRF-ID21 beamline to image iron redox and speciation distributions in rock thin sections. The setup comprises a Si111 or Si220 (ΔE = 0.4 eV) monochromator, a special sample stage and a sensitive camera associated with a new GGG:Eu light-conversion scintillator and high-magnification visible light optics. The pixel size ranges from 1.6 to 0.16 µm according to the optics used. This instrument was used to analyse phyllosilicates and oxides of metamorphic sediments coming from the Aspromonte nappe pile in Calabria. Iron chemical state distributions were derived - from images of 1000 × 2000 × 30 µm³ rock thin sections - by subtraction of absorption images above and below the Fe K-edge. Using automatic stitching reconstruction, a wide-field image (4 × 3 mm² with a 1 µm² resolution, for a total of about 12 million pixels) of the total Fe elemental distribution was produced. Moreover, µ-XANES analyses (more than 1 million individual µ-XANES spectra) were performed

  12. The Views of the Teachers about the Mind Mapping Technique in the Elementary Life Science and Social Studies Lessons Based on the Constructivist Method

    ERIC Educational Resources Information Center

    Seyihoglu, Aysegul; Kartal, Ayca

    2010-01-01

    The purpose of this study is to reveal the opinions of teachers on using the mind mapping technique in Life Science and Social Studies lessons. The participants of the study are 20 primary education teachers. In this study, a semi-structured interview technique was used. For content analysis, the themes and codes were defined, based on the views…

  13. FBGs cascade interrogation technique based on wavelength-to-delay mapping and KLT analysis

    NASA Astrophysics Data System (ADS)

    Hervás, J.; Barrera, D.; Fernández-Pousa, Carlos R.; Sales, S.

    2016-05-01

    The Karhunen-Loeve transform (KLT) is applied to the coarsely sampled impulse response generated by an FBG cascade in order to calculate the temperature change experienced by the FBGs. Thanks to a dispersive medium, the wavelength shift caused by the temperature change produces a delay shift in the samples generated by an FBG; this delay shift is recorded in the eigenvalues calculated by the KLT routine, allowing the temperature variation to be measured. Although each FBG's samples are represented by only four points, a continuous temperature measurement can be performed thanks to the KLT algorithm. This amounts to a three-order-of-magnitude reduction in the number of points, giving the method low computational complexity. Simulations are performed to validate the interrogation technique and estimate its performance, and an experimental example is provided to demonstrate real operation.
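
    The KLT step can be illustrated by taking the eigenvalues of the outer-product (autocorrelation) matrix built from the four-point sample vector; as the pulse delay shifts across the sample grid, the dominant eigenvalue moves continuously. A toy sketch (the Gaussian pulse standing in for the FBG reflection and the four-sample grid are illustrative assumptions, not the paper's signal model):

```python
import numpy as np

def klt_eigenvalues(samples):
    """KLT of a coarsely sampled trace: eigen-decomposition of the
    outer-product matrix built from the sample vector. The eigenvalues
    condense the trace into a few numbers that vary continuously as the
    underlying pulse moves across the sample grid."""
    x = np.asarray(samples, dtype=float)
    K = np.outer(x, x)                  # rank-1 autocorrelation matrix
    return np.linalg.eigvalsh(K)        # ascending eigenvalues

def pulse_samples(delay, grid=np.arange(4.0), width=1.0):
    """Four-point sampling of a Gaussian pulse centred at `delay`
    (a stand-in for one FBG's reflection in the dispersive line)."""
    return np.exp(-0.5 * ((grid - delay) / width) ** 2)
```

    Because the eigenvalue is a smooth function of the sub-sample delay, tracking it gives a continuous temperature readout even though each FBG contributes only four samples.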

  14. Mapping of soot particles in a weakly sooting diffusion flame by aerosol techniques

    SciTech Connect

    Hepp, H.; Siegmann, K.

    1998-10-01

    The evolution of detailed particle size distributions has been measured along the centerline of an axisymmetric diffusion flame of CH{sub 4} + Ar burning in air at 1 atm. Soot particles with mean diameters of 3--18 nm were observed. Changes in the size distribution exhibited zones where either nucleation, coagulation, or destruction of soot particles dominated. These highly sensitive measurements were made by microprobe sampling with an immediate dilution of 1:400, to quench the aerosol, and by subsequent application of aerosol measurement techniques. In parallel, the yield of photoemitted electrons from size-selected particles was determined. The yield shows a characteristic dependence on location in the flame, indicating changes of the particle's surface. Multiphoton, time-of-flight mass spectrometry was used to investigate the correlation between polycyclic aromatic hydrocarbons in the flame and enhanced photoemission yield from the soot particles.

  15. Mapping the developing human brain in utero using quantitative MR imaging techniques.

    PubMed

    Studholme, Colin

    2015-03-01

    Magnetic resonance imaging of the human fetal brain has been a clinical tool for many years and provides valuable additional information to complement more common ultrasound studies. Advances in both MRI acquisition and post-processing over the last 10 years have enabled full 3D imaging and the accurate combination of data acquired in different head positions to create improved geometric integrity, tissue contrast, and resolution. This research is now motivating the development of new quantitative MRI-based techniques for clinical imaging that can more accurately characterize brain development and detect abnormalities. In this article, we will review some of the key areas that are driving changes in our understanding of fetal brain growth using quantitative measures derived from in utero MRI, and the possible directions for its increased use in improving the evaluation of pregnancies and the accurate characterization of abnormal brain growth.

  16. A microwave technique for mapping ice temperature in the Arctic seasonal sea ice zone

    SciTech Connect

    St. Germain, K.M.; Cavalieri, D.J.

    1997-07-01

    A technique for deriving ice temperature in the Arctic seasonal sea ice zone from passive microwave radiances has been developed. The algorithm operates on brightness temperatures derived from the Special Sensor Microwave/Imager (SSM/I) and uses ice concentration and type from a previously developed thin ice algorithm to estimate the surface emissivity. Comparisons of the microwave derived temperatures with estimates derived from infrared imagery of the Bering Strait yield a correlation coefficient of 0.93 and an RMS difference of 2.1 K when coastal and cloud contaminated pixels are removed. SSM/I temperatures were also compared with a time series of air temperature observations from Gambell on St. Lawrence Island and from Point Barrow, AK weather stations. These comparisons indicate that the relationship between the air temperature and the ice temperature depends on ice type.
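The core of such a retrieval is inverting the radiative relation T_B ≈ ε·T_ice for a mixed pixel. A minimal sketch follows; the surface types, concentrations, and emissivity values are illustrative assumptions, not the published SSM/I algorithm coefficients:

```python
# Sketch of an emissivity-weighted ice temperature retrieval from a
# passive-microwave brightness temperature. All numeric parameters used
# in the example are hypothetical, chosen only to illustrate the inversion.

def ice_temperature(tb, emissivity):
    """Invert T_B = emissivity * T_ice for a single surface type."""
    return tb / emissivity

def mixed_pixel_temperature(tb, c_thin, c_thick, e_thin, e_thick,
                            e_water, t_water=271.2):
    """Estimate a single ice temperature for a pixel containing open water
    plus thin and thick ice, given per-type concentrations and emissivities.
    The near-freezing open-water contribution is subtracted first."""
    c_water = 1.0 - c_thin - c_thick
    tb_ice = tb - c_water * e_water * t_water
    eff_emissivity = c_thin * e_thin + c_thick * e_thick
    return tb_ice / eff_emissivity
```

A round trip (build T_B from a known ice temperature, then invert) recovers the input, which is a useful sanity check for any such linear mixing model.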

  17. A snapshot of radiation therapy techniques and technology in Queensland: An aid to mapping undergraduate curriculum

    SciTech Connect

    Bridge, Pete; Carmichael, Mary-Ann; Brady, Carole; Dry, Allison

    2013-03-15

    Undergraduate students studying the Bachelor of Radiation Therapy at Queensland University of Technology (QUT) attend clinical placements in a number of department sites across Queensland. To ensure that the curriculum prepares students for the most common treatments and current techniques in use in these departments, a curriculum matching exercise was performed. A cross-sectional census was performed on a pre-determined “Snapshot” date in 2012. This was undertaken by the clinical education staff in each department who used a standardized proforma to count the number of patients as well as prescription, equipment, and technique data for a list of tumour site categories. This information was combined into aggregate anonymized data. All 12 Queensland radiation therapy clinical sites participated in the Snapshot data collection exercise to produce a comprehensive overview of clinical practice on the chosen day. A total of 59 different tumour sites were treated on the chosen day and as expected the most common treatment sites were prostate and breast, comprising 46% of patients treated. Data analysis also indicated that intensity-modulated radiotherapy (IMRT) use is relatively high with 19.6% of patients receiving IMRT treatment on the chosen day. Both IMRT and image-guided radiotherapy (IGRT) indications matched recommendations from the evidence. The Snapshot method proved to be a feasible and efficient method of gathering useful data to inform curriculum matching. Frequency of IMRT use in Queensland matches or possibly exceeds that indicated in the literature. It is recommended that future repetition of the study be undertaken in order to monitor trends in referral patterns and new technology implementation.

  18. Mapping Proxy Sensitivity: A New Technique for Compositional Analysis of Cultured Biominerals and Inorganically Precipitated Materials

    NASA Astrophysics Data System (ADS)

    Gagnon, A. C.; DePaolo, D. J.; DeYoreo, J.; Spero, H. J.; Russell, A. D.

    2011-12-01

    Mineral composition is controlled by a host of environmental factors during precipitation. To build accurate paleo-reconstructions we need to separate the impact of each parameter on proxy behavior and use these data to build a chemical-scale understanding of mineral growth. Biomineral culture and inorganic precipitation experiments, where growth parameters can be manipulated independently, are uniquely suited to calibrate proxies and probe mechanism. Culture and precipitation experiments often involve overgrowth of an initial material. For example, seed crystals are used to control mineralogy and avoid nucleation during inorganic precipitation, while culture experiments in marine organisms typically start with wild specimens. New growth corresponding to the experimental conditions must be resolved from the initial material. Separation is typically achieved using microanalysis, skeletal dissection, or estimates of the initial mass and composition. Each approach imposes limits on the accuracy, precision or types of materials that can be analyzed. Slow growth rates and complicated geometries can make these techniques especially challenging when applied to biominerals. We present a method of compositional analysis for use in biological culture and inorganic growth experiments that overcomes many of these challenges. This method relies on growth in a mixed element stable isotope spike, requires neither the initial mass nor the initial composition to be known, harnesses the precision and sensitivity of bulk analysis, and applies even when it is impossible to physically identify newly grown material. Error analysis suggests this method can significantly improve the precision of metal/calcium measurements in experimentally grown material compared to current methods. Furthermore, the method can isolate different events through time, separating, for example, the impact of day and night cycles on biomineral composition. 
We will present metal/calcium ratios measured using the

  19. Mesopic luminance assessed with minimum motion photometry.

    PubMed

    Raphael, Sabine; MacLeod, Donald I A

    2011-08-25

    We measured the relative contribution of rods and cones to luminance across a range of photopic, mesopic, and scotopic adaptation levels and at various retinal eccentricities. We isolated the luminance channel by setting motion-based luminance nulls (minimum motion photometry) using annular stimuli. Luminance nulls between differently colored stimuli require equality in a weighted sum of rod and cone excitations. The relative cone weight increases smoothly from the scotopic range, where rods dominate, to photopic levels, where rod influence becomes negligible. The change from rod to cone vision does not occur uniformly over the visual field. The more peripheral the stimulus location, the higher is the light level required for cones to participate strongly. The relative cone contribution can be described by a sigmoid function of intensity, with two parameters that each depend on the eccentricity and spatial frequency of the stimulus. One parameter determines the "meso-mesopic" luminance--the center of the mesopic range, at which rod and cone contributions are balanced. This increases with eccentricity, reflecting an increase in the meso-mesopic luminance from 0.04 scotopic cd/m² at 2° eccentricity to 0.44 scotopic cd/m² at 18°. The second parameter represents the slope of the log-log threshold-versus-intensity curve (TVI curve) for rod vision. This parameter inversely scales the width of the mesopic range and increases only slightly with eccentricity (from 0.73 at 2° to 0.78 for vision at 18° off-axis).
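The two-parameter description above can be sketched as a sigmoid in log intensity. The exact functional form below is an assumption; the meso-mesopic level and slope are the values the abstract reports for 2° eccentricity:

```python
import math

def cone_weight(intensity, meso_luminance, slope):
    """Relative cone contribution to luminance, modeled as a sigmoid in log
    intensity: weight = 0.5 at the meso-mesopic level, and `slope` inversely
    scales the width of the mesopic range. The functional form is an
    illustrative assumption, not the fitted model from the study."""
    return 1.0 / (1.0 + (meso_luminance / intensity) ** slope)

# Parameters reported in the abstract for 2 deg eccentricity:
MESO_2DEG = 0.04    # meso-mesopic luminance, scotopic cd/m^2
SLOPE_2DEG = 0.73   # slope of the rod TVI curve
```

At the meso-mesopic luminance the weight is exactly 0.5; well above it cones dominate, well below it rods do.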

  20. Surface Electrical Conductivity Prediction by Soil Moisture and Electromagnetic Mapping Techniques: Implication for Landmine Detection Technologies

    NASA Astrophysics Data System (ADS)

    Katsube, J.; McNairn, H.; Keating, P. K.; Das, Y.; Dyke, L.; Best, M. E.; Singhroy, V.; Connell-Madore, S.; Hunter, J.; Klassen, R.; Dilabio, R.; Moore, A.

    2004-05-01

    Electrical conductivity (EC) can be a source of significant signal interference in landmine detection, implying that there is a necessity for soil EC prediction in order to carry out safe demining operations in landmine affected countries in the world. A fundamental study on soil EC mechanisms and their relationship to moisture content has been carried out in order to increase the soil EC prediction accuracy when using data from various sensors, such as remote sensing, airborne and surficial electromagnetic (EM) methods. Results indicate that soil moisture consists of free water filling pore spaces and bound water which forms adsorbed water layers on the grain surfaces. The response of these two water phases to drying rates and EC is very different, to the extent that a moist clay poor soil may have low EC but a dry clay rich soil may have higher EC. This is a result of not only the bound water layers being a significant source of EC, but of the capillary component of the free water reacting differently to the different grain-sizes of the soil. The capillary water forms important electrical conductive bridges between the adsorbed water layers on the grains that constitute the soil. This implies that information on soil texture, mineralogy and their distribution are required for accurate EC prediction. Whereas information on these soil characteristics may be acquired by remote sensing and soil maps, soil moisture content is likely to vary from the time of data acquisition to that of demining operations, implying methods to predict these changes are required. In addition, soil type inhomogeneity, such as vertical and horizontal variation, can also be a source of inaccuracies in moisture and EC predictions. However, these investigations also indicate that a wide band electrical frequency signal may have the possibility of providing information on, not only metallic mineral content, but on pore space, clay mineral type and water content. In addition, applications of

  1. Regional Mapping of Coupled Fluxes of Carbon and Water Using Multi-Sensor Fusion Techniques

    NASA Astrophysics Data System (ADS)

    Schull, M. A.; Anderson, M. C.; Semmens, K. A.; Yang, Y.; Gao, F.; Hain, C.; Houborg, R.

    2014-12-01

    In an ever-changing climate there is an increasing need to measure the fluxes of water, energy and carbon for decision makers to implement policies that will help mitigate the effects of climate change. In an effort to improve drought monitoring, water resource management and agriculture assessment capabilities, a multi-scale and multi-sensor framework for routine mapping of land-surface fluxes of water and energy at field to regional scales has been established. The framework uses the ALEXI (Atmosphere Land Exchange Inverse)/DisALEXI (Disaggregated ALEXI) suite of land-surface models forced by remotely sensed data from Landsat, MODIS (MODerate resolution Imaging Spectroradiometer), and GOES (Geostationary Operational Environmental Satellite). Land-surface temperature (LST) can be an effective substitute for in-situ surface moisture observations and a valuable metric for constraining land-surface fluxes at sub-field scales. The adopted multi-scale thermal-based land surface modeling framework facilitates regional to local downscaling of water and energy fluxes by using a combination of shortwave reflective and thermal infrared (TIR) imagery from GOES (4-10 km; hourly), MODIS (1 km; daily), and Landsat (30-100 m; bi-weekly). In this research the ALEXI/DisALEXI modeling suite is modified to incorporate carbon fluxes using a stomatal resistance module, which replaces the Priestley-Taylor latent heat approximation. In the module, canopy level nominal light-use-efficiency (βn) is the parameter that modulates the flux of water and carbon in and out of the canopy. Leaf chlorophyll (Chl) is a key parameter for quantifying variability in photosynthetic efficiency to facilitate the spatial distribution of coupled carbon and water retrievals. Spatial distributions of Chl are retrieved from Landsat (30 m) using a surface reflectance dataset as input to the REGularized canopy reFLECtance (REGFLEC) tool. The modified ALEXI/DisALEXI suite is applied to regions of rain fed and

  2. Tracking Steady Light Sources Amid Luminous Transients

    NASA Technical Reports Server (NTRS)

    Kissh, Frank; Fowski, Walter; Miklus, Kenneth; Abreu, Rene; Bolin, Kenneth; Flynn, David

    1994-01-01

    The Transient Event Rejection for Acquisition and Tracking (TERAT) algorithm governs operation of image-data-acquisition and -processing system. TERAT processes digitized image data to acquire (that is, identify) candidate steady source of light, validate candidate source, and track validated source, all in presence of real or apparent luminous transients represented in image data. Source of light tracked could be star or distant luminous beacon. Transients caused by impacts of ionizing radiation on imaging array of photodetectors or by unsteady light sources not meant to be tracked. TERAT functions with limited data-processing resources. TERAT algorithm currently operational on NASA's TOPEX mission.

  3. An easy-to-use technique to characterize cardiodynamics from first-return maps on ΔRR-intervals

    NASA Astrophysics Data System (ADS)

    Fresnel, Emeline; Yacoub, Emad; Freitas, Ubiratan; Kerfourn, Adrien; Messager, Valérie; Mallet, Eric; Muir, Jean-François; Letellier, Christophe

    2015-08-01

    Heart rate variability analysis using 24-h Holter monitoring is frequently performed to assess the cardiovascular status of a patient. The present retrospective study is based on the beat-to-beat interval variations or ΔRR, which offer a better view of the underlying structures governing the cardiodynamics than the common RR-intervals. By investigating data for three groups of adults (with normal sinus rhythm, congestive heart failure, and atrial fibrillation, respectively), we showed that the first-return maps built on ΔRR can be classified according to three structures: (i) a moderate central disk, (ii) a reduced central disk with well-defined segments, and (iii) a large triangular shape. These three very different structures can be distinguished by computing a Shannon entropy based on a symbolic dynamics and an asymmetry coefficient, here introduced to quantify the balance between accelerations and decelerations in the cardiac rhythm. The probability P111111 of successive heart beats without large beat-to-beat fluctuations allows assessment of the regularity of the cardiodynamics. A characteristic time scale, corresponding to the partition inducing the largest Shannon entropy, was also introduced to quantify the ability of the heart to modulate its rhythm: it was significantly different for the three structures of first-return maps. A blind validation was performed to validate the technique.
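The quantities described above (ΔRR, symbolic coding, Shannon entropy, asymmetry coefficient, and P111111) can be sketched in a few lines. The symbolization threshold and word length below are illustrative assumptions, not the partition used in the study:

```python
import math
from collections import Counter

def delta_rr(rr):
    """Beat-to-beat interval variations dRR (ms) from an RR-interval series."""
    return [b - a for a, b in zip(rr, rr[1:])]

def symbolize(drr, threshold=25.0):
    """Code each dRR as '1' (small fluctuation, |dRR| < threshold),
    '0' (large deceleration, dRR > 0) or '2' (large acceleration, dRR < 0).
    The 25 ms threshold is a hypothetical choice for illustration."""
    return ['1' if abs(d) < threshold else ('0' if d > 0 else '2')
            for d in drr]

def shannon_entropy(symbols, word_len=3):
    """Shannon entropy (bits) of overlapping symbolic words of length word_len."""
    words = [''.join(symbols[i:i + word_len])
             for i in range(len(symbols) - word_len + 1)]
    n = len(words)
    return -sum((c / n) * math.log2(c / n) for c in Counter(words).values())

def asymmetry(drr):
    """Balance between decelerations (dRR > 0) and accelerations (dRR < 0)."""
    dec = sum(1 for d in drr if d > 0)
    acc = sum(1 for d in drr if d < 0)
    return (dec - acc) / (dec + acc) if dec + acc else 0.0

def p111111(symbols):
    """Probability of six successive beats without large fluctuations."""
    words = [''.join(symbols[i:i + 6]) for i in range(len(symbols) - 5)]
    return words.count('111111') / len(words) if words else 0.0
```

For a perfectly regular rhythm, the entropy is 0 and P111111 is 1; increasingly erratic dynamics raise the entropy and drive P111111 toward 0.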

  4. Scatterer size and concentration estimation technique based on a 3D acoustic impedance map from histologic sections

    NASA Astrophysics Data System (ADS)

    Mamou, Jonathan; Oelze, Michael L.; O'Brien, William D.; Zachary, James F.

    2001-05-01

    Accurate estimates of scatterer parameters (size and acoustic concentration) are beneficial adjuncts to characterize disease from ultrasonic backscatterer measurements. An estimation technique was developed to obtain parameter estimates from the Fourier transform of the spatial autocorrelation function (SAF). A 3D impedance map (3DZM) is used to obtain the SAF of tissue. 3DZMs are obtained by aligning digitized light microscope images from histologic preparations of tissue. Estimates were obtained for simulated 3DZMs containing spherical scatterers randomly located: relative errors were less than 3%. Estimates were also obtained from a rat fibroadenoma and a 4T1 mouse mammary tumor (MMT). Tissues were fixed (10% neutral-buffered formalin), embedded in paraffin, serially sectioned and stained with H&E. 3DZM results were compared to estimates obtained independently from ultrasonic backscatter measurements. For the fibroadenoma and MMT, average scatterer diameters were 91 and 31.5 μm, respectively. Ultrasonic measurements yielded average scatterer diameters of 105 and 30 μm, respectively. The 3DZM estimation scheme showed results similar to those obtained by the independent ultrasonic measurements. The 3D impedance maps show promise as a powerful tool to characterize ultrasonic scattering sites of tissue. [Work supported by the University of Illinois Research Board.]
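The SAF step can be sketched with a toy 3DZM. The impedance values and geometry below are illustrative assumptions, and the autocorrelation is computed via the Wiener-Khinchin theorem rather than the authors' pipeline:

```python
import numpy as np

def spatial_autocorrelation(zmap):
    """Spatial autocorrelation function (SAF) of a 3D impedance map via the
    Wiener-Khinchin theorem: SAF = IFFT(|FFT(z - mean)|^2), normalized so
    the zero-lag value is 1. Lags wrap circularly (FFT convention)."""
    z = zmap - zmap.mean()
    power = np.abs(np.fft.fftn(z)) ** 2
    saf = np.fft.ifftn(power).real
    return saf / saf.flat[0]

def make_3dzm(shape=(32, 32, 32), radius=4, z_bg=1.58, z_sc=1.65):
    """Toy 3D impedance map: uniform background with one spherical scatterer
    at the center. The impedance values (MRayl) are hypothetical."""
    zz, yy, xx = np.indices(shape)
    c = np.array(shape) // 2
    sphere = (zz - c[0]) ** 2 + (yy - c[1]) ** 2 + (xx - c[2]) ** 2 <= radius ** 2
    zmap = np.full(shape, z_bg)
    zmap[sphere] = z_sc
    return zmap
```

The correlation is high at lags shorter than the scatterer radius and falls to near zero beyond one diameter, which is the property the size-estimation scheme exploits.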

  5. [Working hypothesis of simplified techniques for the first mapping of occupational hazards in handicrafts. Part two: physical and chemical risks].

    PubMed

    Montomoli, L; Di Leone, G; Aprea, M C; Sciarra, G; Pinto, I; Sartorelli, P

    2009-01-01

    Craft features a strong link with tradition which preserves and perpetuates techniques created and consolidated over time, in a particular historical, cultural and geographical situation. Italy is home to excellent craftsmanship: fashion, furniture, giftware, design. The framework law for the craft n. 433/1985 defines the craft business carried on by the craftsman for a predominant objective of producing goods, including semi-finished or services. The D.L.gs 81/2008 defines the manner in which the employer must conduct risk assessment in the workplace and process the risk assessment document. The aim of the study is to provide employers with a simple methodology that allows the drafting of the first mapping of hazards/dangers, identifying risk sources in the working process. The developed methodology is expressed as a model of information gathering, which leads to a first overview of all major risk indicators that may arise in the craftsmanship.

  6. An interpretation of a geologic map of Fannin County, Texas, prepared by ADP techniques from ERTS MSS data

    NASA Technical Reports Server (NTRS)

    Henderson, J. A., Jr.; Gardner, J. J.; Cipra, J. E.

    1973-01-01

    Multispectral scanner data, collected by ERTS-1 from an area in northeast Texas underlain by structurally simple cretaceous sedimentary rocks, were analyzed using several automatic data processing techniques. Training areas for the computer analysis were selected in three ways using: (1) an unsupervised classifier, (2) the unsupervised classifier to further refine training areas, and (3) available ground information. The first two methods utilize the capability of the unsupervised classifier to group resolution elements with similar spectral properties. With the imagery produced by these procedures, several geologic features can be identified. The most easily recognizable features are streams and other water bodies. The approximate location of contacts between several rock units can be mapped.

  7. Modelling of high-frequency structure-borne sound transmission on FEM grids using the Discrete Flow Mapping technique

    NASA Astrophysics Data System (ADS)

    Hartmann, Timo; Tanner, Gregor; Xie, Gang; Chappell, David; Bajars, Janis

    2016-09-01

    Dynamical Energy Analysis (DEA) combined with the Discrete Flow Mapping technique (DFM) has recently been introduced as a mesh-based high frequency method modelling structure borne sound for complex built-up structures. This has proven to enhance vibro-acoustic simulations considerably by making it possible to work directly on existing finite element meshes circumventing time-consuming and costly re-modelling strategies. In addition, DFM provides detailed spatial information about the vibrational energy distribution within a complex structure in the mid-to-high frequency range. We will present here progress in the development of the DEA method towards handling complex FEM-meshes including Rigid Body Elements. In addition, structure borne transmission paths due to spot welds are considered. We will present applications for a car floor structure.

  8. A Comparison of Techniques for Mapping the Distribution of Sediment on the Bed of the Colorado River in Grand Canyon

    NASA Astrophysics Data System (ADS)

    Fuller, E.; Kaplinski, M.; Rubin, D. M.

    2004-12-01

    The Grand Canyon Monitoring and Research Center is charged with establishing and implementing monitoring projects to provide scientific information to the Glen Canyon Dam Adaptive Management Program (GCDAMP) on the effects of operating Glen Canyon Dam on the downstream resources of the Colorado River ecosystem. One primary resource of concern to the GCDAMP is fine-grained sediment. Glen Canyon Dam traps approximately 94% of the pre-dam sand supply to the Colorado River in Grand Canyon, resulting in a decline in the size of eddy sand bars (25% decline in surface area over the past 15 years). Sand bars are an important resource because they provide habitat for endangered native fish, protect archeological sites, provide substrate for vegetation, are used as campsites and are a distinctive feature of the pre-dam environment. A combination of traditional survey techniques and multi-beam bathymetry has been used to determine the size and elevation of sandbars and to obtain topographic maps of the riverbed. These techniques have proven useful in evaluating the spatial changes and channel morphology along the Colorado River ecosystem. While previous studies have been very effective in measuring volumetric and spatial changes, a method is needed to map the distribution of sediment along the submerged portion of the river channel. The distribution of fine-grained sediment is needed to evaluate the potential for deposition onto high elevation sand bars during proposed experimental high flows. This study used high-resolution multibeam bathymetry, acoustic backscatter and underwater video images collected on expeditions in 2002 and 2004 to evaluate the different methodologies. The purpose of this study was to evaluate possible technologies to be used in determining the distribution of sediment along the bed of the Colorado River in Grand Canyon. These technologies include: 1) visual interpretation of shaded relief images produced from multibeam bathymetry; 2) visual

  9. Efficient calculation of luminance variation of a luminaire that uses LED light sources

    NASA Astrophysics Data System (ADS)

    Goldstein, Peter

    2007-09-01

    Many luminaires have an array of LEDs that illuminate a lenslet-array diffuser in order to create the appearance of a single, extended source with a smooth luminance distribution. Designing such a system is challenging because luminance calculations for a lenslet array generally involve tracing millions of rays per LED, which is computationally intensive and time-consuming. This paper presents a technique for calculating an on-axis luminance distribution by tracing only one ray per LED per lenslet. A multiple-LED system is simulated with this method, and with Monte Carlo ray-tracing software for comparison. Accuracy improves, and computation time decreases by at least five orders of magnitude with this technique, which has applications in LED-based signage, displays, and general illumination.

  10. Visualizing Mars Using Virtual Reality: A State of the Art Mapping Technique Used on Mars Pathfinder

    NASA Technical Reports Server (NTRS)

    Stoker, C.; Zbinden, E.; Blackmon, T.; Nguyen, L.

    1999-01-01

    We describe an interactive terrain visualization system which rapidly generates and interactively displays photorealistic three-dimensional (3-D) models produced from stereo images. This product, first demonstrated in Mars Pathfinder, is interactive, 3-D, and can be viewed in an immersive display which qualifies it for the name Virtual Reality (VR). The use of this technology on Mars Pathfinder was the first use of VR for geologic analysis. A primary benefit of using VR to display geologic information is that it provides an improved perception of depth and spatial layout of the remote site. The VR aspect of the display allows an operator to move freely in the environment, unconstrained by the physical limitations of the perspective from which the data were acquired. Virtual Reality offers a way to archive and retrieve information in a way that is intuitively obvious. Combining VR models with stereo display systems can give the user a sense of presence at the remote location. The capability to interactively perform measurements from within the VR model offers unprecedented ease in performing operations that are normally time consuming and difficult using other techniques. Thus, Virtual Reality can be a powerful cartographic tool. Additional information is contained in the original extended abstract.

  11. Mapping of groundwater potential zones in Salem Chalk Hills, Tamil Nadu, India, using remote sensing and GIS techniques.

    PubMed

    Thilagavathi, N; Subramani, T; Suresh, M; Karunanidhi, D

    2015-04-01

    This study proposes to introduce the remote sensing and geographic information system (GIS) techniques in mapping the groundwater potential zones. Remote sensing and GIS techniques have been used to map the groundwater potential zones in Salem Chalk Hills, Tamil Nadu, India. Charnockites and fissile hornblende biotite gneiss are the major rock types in this region. Dunites and peridotites are the ultramafic rocks which cut across the foliation planes of the gneisses and are highly weathered. It comprises magnesite and chromite deposits which are excavated by five mining companies by adopting bench mining. The thickness of weathered and fracture zone varies from 2.2 to 50 m in gneissic formation and 5.8 to 55 m in charnockite. At the contacts of gneiss and charnockite, the thickness ranges from 9.0 to 90.8 m favoring good groundwater potential. The mine lease area is underlain by fractured and sheared hornblende biotite gneiss where groundwater potential is good. Water catchment tanks in this area of 5 km radius are small to moderate in size and are only seasonal. They remain dry during summer seasons. As perennial water resources are remote, the domestic and agricultural activities in this region depend mainly upon the groundwater resources. The mines are located in a gently sloping area, and accumulation of water is not observed except in mine pits even during the monsoon period. Therefore, it is essential to map the groundwater potential zones for proper management of the aquifer system. Satellite imageries were also used to extract lineaments, hydrogeomorphic landforms, drainage patterns, and land use, which are the major controlling factors for the occurrence of groundwater. Various thematic layers pertaining to groundwater existence such as geology, geomorphology, land use/land cover, lineament, lineament density, drainage, drainage density, slope, and soil were generated using GIS tools. By integrating all the above thematic layers based on the ranks and
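The final integration step, combining ranked thematic layers into a potential index, can be sketched as a weighted overlay. The layer names, ranks, and weights below are illustrative assumptions, not the values assigned in the study:

```python
import numpy as np

def groundwater_potential(layers, weights):
    """Weighted-overlay index: each thematic layer is a 2-D array of ranks
    (higher = more favorable for groundwater); the index is the weighted
    sum of the layers, rescaled to the range 0-1."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()  # normalize weights
    idx = sum(wi * np.asarray(layer, dtype=float)
              for wi, layer in zip(w, layers))
    lo, hi = idx.min(), idx.max()
    return (idx - lo) / (hi - lo) if hi > lo else np.zeros_like(idx)

# Hypothetical example: three 2x2 rank layers (geology, lineament density, slope)
geology   = [[3, 2], [1, 3]]
lineament = [[2, 2], [1, 3]]
slope     = [[1, 3], [2, 3]]
potential = groundwater_potential([geology, lineament, slope], [0.5, 0.3, 0.2])
```

The most favorable cell maps to 1 and the least favorable to 0, so the output can be thresholded into potential-zone classes.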

  13. Surface charge microscopy: novel technique for mapping charge-mosaic surfaces in electrolyte solutions.

    PubMed

    Yin, Xihui; Drelich, Jaroslaw

    2008-08-01

    The effective surface potential, called the zeta potential, is commonly determined from electrophoretic mobility measurements for particles moving in a solution in response to an electric field applied between two electrodes. The situation can be reversed, with the solution being forced to flow through a plug of packed particles, and the streaming potential of the particles can be calculated. A significant limitation of these electrokinetic measurements is that only an average value of the zeta potential/streaming potential is measured--regardless of whether the surface charge distribution is homogeneous or otherwise. However, in real-world situations, nearly all solids (and liquids) of technological significance exhibit surface heterogeneities. To detect heterogeneities in surface charge, analytical tools which provide accurate and spatially resolved information about the material surface potential--particularly at microscopic and submicroscopic resolutions--are needed. In this study, atomic force microscopy (AFM) was used to measure the surface interaction forces between a silicon nitride AFM cantilever and a multiphase volcanic rock. The experiments were conducted in electrolyte solutions with different ionic strengths and pH values. The colloidal force measurements were carried out stepwise across the boundary between adjacent phases. At each location, the force-distance curves were recorded. Surface charge densities were then calculated by fitting the experimental data with a DLVO theoretical model. Significant differences between the surface charge densities of the two phases and gradual transitions in the surface charge density at the interface were observed. It is demonstrated that this novel technique can be applied to examine one- and two-dimensional distributions of the surface potential. PMID:18620435
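DLVO fits of force-distance curves hinge on the Debye screening length set by the ionic strength of the electrolyte. A sketch of that standard calculation, together with a simplified exponential double-layer force model (not the authors' fitting code), follows:

```python
import math

# Physical constants (SI, CODATA values)
E_CHARGE = 1.602176634e-19   # elementary charge, C
KB = 1.380649e-23            # Boltzmann constant, J/K
NA = 6.02214076e23           # Avogadro constant, 1/mol
EPS0 = 8.8541878128e-12      # vacuum permittivity, F/m

def debye_length(ionic_strength_molar, rel_permittivity=78.4, temp=298.15):
    """Debye screening length (m) of an electrolyte solution, where the
    ionic strength I = 0.5 * sum(c_i * z_i^2) is given in mol/L."""
    n = ionic_strength_molar * 1000.0 * NA  # mol/L -> ions/m^3
    return math.sqrt(rel_permittivity * EPS0 * KB * temp /
                     (2.0 * n * E_CHARGE ** 2))

def double_layer_force(distance, amplitude, kappa_inv):
    """Exponentially screened double-layer force F(D) = A * exp(-D / lambda_D),
    a simplified model often used to fit AFM force curves in the DLVO
    framework; the amplitude A carries the surface charge information."""
    return amplitude * math.exp(-distance / kappa_inv)
```

For a 1:1 electrolyte in water at 25 °C this gives roughly 0.96 nm at 0.1 M and 9.6 nm at 1 mM, which is why force curves decay much faster at high ionic strength.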

  15. 78 FR 70964 - Luminant Generation Company, LLC

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-27

    ... for four consecutive weeks of a combined license (COL) application from Luminant Generation Company.../reactors/new-reactors/col.html . NRC's PDR: You may examine and purchase copies of public documents at the... INFORMATION: The following party has filed applications for COLs with the NRC, pursuant to Section 103 of...

  16. CO excitation in four IR luminous galaxies

    NASA Technical Reports Server (NTRS)

    Radford, Simon J. E.; Solomon, P. M.; Downes, Dennis

    1990-01-01

    The correlation between the CO and far-infrared luminosities of spiral galaxies is well established. The luminosity ratio, L_FIR/L_CO, in IR-luminous active galaxies is, however, systematically five to ten times higher than in ordinary spirals and molecular clouds in our Galaxy. Furthermore, the masses of molecular hydrogen in luminous galaxies are large, M(H2) approx. equals 10^10 solar masses, which indicates the observed luminosity ratios are due to an excess of infrared output, rather than a deficiency of molecular gas. These large amounts of molecular gas may fuel luminous galaxies through either star formation or nuclear activity. This interpretation rests on applying the M(H2)/L_CO ratio calibrated in our Galaxy to galaxies with strikingly different luminosity ratios. But are the physical conditions of the molecular gas different in galaxies with different luminosity ratios? And, if so, does the proportionality between CO and H2 also vary among galaxies? To investigate these questions, researchers observed CO (2-1) and (1-0) emission from four luminous galaxies with the Institute for Radio Astronomy in the Millimeter range (IRAM) 30 m telescope. Researchers conclude that most of the CO emission from these galaxies, Arp 193, Arp 220, and Mrk 231, arises in regions with moderate ambient densities similar to the clouds in the Milky Way molecular ring. The emission is neither from dense hot cloud cores nor from the cold low-density gas characteristic of the envelopes of dark clouds.

  17. Kinematics of luminous blue compact galaxies

    NASA Astrophysics Data System (ADS)

    Östlin, Göran; Amram, Philippe; Boulesteix, Jacques; Bergvall, Nils; Masegosa, Josefa; Márquez, Isabel

    We present results from a Fabry-Perot study of the Hα velocity fields and morphologies of a sample of luminous blue compact galaxies. We estimate masses from photometry and kinematics and show that many of these BCGs are not rotationally supported. Mergers or strong interactions appear to be the triggering mechanism of the extreme starbursts seen in these galaxies.

  18. Techniques for Revealing 3d Hidden Archeological Features: Morphological Residual Models as Virtual-Polynomial Texture Maps

    NASA Astrophysics Data System (ADS)

    Pires, H.; Martínez Rubio, J.; Elorza Arana, A.

    2015-02-01

    Recent developments in 3D scanning technologies have not been accompanied by comparable advances in visualization interfaces. We are still using the same types of visual codes as when maps and drawings were made by hand. The information available in 3D scanning data sets is not being fully exploited by current visualization techniques. In this paper we present recent developments regarding the use of 3D scanning data sets for revealing invisible information from archaeological sites. These sites are affected by a common problem: decay processes, such as erosion, that never cease their action and endanger the persistence of the last vestiges of some peoples and cultures. Rock art engravings and epigraphical inscriptions are among those most affected by these processes because they are, by their very nature, carved into the surface of rocks often exposed to climatic agents. The study and interpretation of these motifs and texts are strongly conditioned by the degree of conservation of the imprints left by our ancestors. Every single detail in the remaining carvings can make a huge difference in the conclusions drawn by specialists. We have selected two case studies severely affected by erosion to present the results of ongoing work dedicated to exploring in new ways the information contained in 3D scanning data sets. A new method for depicting subtle morphological features on the surface of objects or sites has been developed. It brings into contrast human-made patterns still present at the surface but invisible to the naked eye or to any other archaeological inspection technique. It was called the Morphological Residual Model (MRM) because of its ability to contrast the shallowest morphological details, to which we refer as residuals, contained in the wider forms of the backdrop. Afterwards, we simulated the process of building Polynomial Texture Maps, a widespread technique that has been contributing to archaeological studies for some years, in a 3D virtual environment using the results of MRM

  19. Luminance in computer-aided lighting design

    SciTech Connect

    Ward, G.J.; Rubinstein, F.M.; Grynberg, A.

    1987-08-01

    Traditionally, the lighting engineering community has emphasized illuminance, the amount of light reaching a surface, as the primary design goal. The Illuminating Engineering Society (IES) provides tables of illuminances for different types of tasks which lighting engineers consult in designing lighting systems. Illuminance has proven to be a popular metric because it corresponds closely to the amount of energy needed to light a building as well as the initial cost of the lighting system. Perhaps more importantly, illuminance is easy to calculate, especially in simple unobstructed spaces with direct lighting. However, illuminance is not well correlated with visual performance, which is the real reason for installing a lighting system in the first place. Visual performance is a psychophysiological quantity that has been tied to physical quantities such as contrast, size, and adaptation level by subject experiments. These physical quantities can be approximated from illuminance using a host of assumptions about the environment, or derived directly from the distribution of luminance. Luminance is the quantity of light traveling through a point in a certain direction, and it is this quantity that the eye actually "sees". However, the difficulty of calculating luminance for common tasks has made it an unpopular metric. Despite its importance to lighting design, luminance is rarely used because there is a lack of the necessary computational tools. In this paper, we will demonstrate a computer calculation of luminance that has significant advantages for lighting design. As well as providing an immediate evaluation of visual quality for task performance, less quantifiable factors such as aesthetics can be studied in synthetic images produced by the program.

  1. The effect of spatial luminance distribution on dark adaptation.

    PubMed

    Stokkermans, Mariska G M; Vogels, Ingrid M L C; Heynderickx, Ingrid E J

    2016-06-01

    Recent studies show that dark adaptation in the visual system depends on local luminance levels surrounding the viewing direction. These studies, however, do not explain to what extent veiling luminance is responsible for the outcome. To address the latter, in this study dark adaptation was measured for three different spatial luminance distributions surrounding a target to be detected, while keeping the veiling luminance at the location of the target equivalent. The results show that a background with bright areas close to the viewing direction yields longer adaptation times than a background with bright areas at a larger visual angle. Therefore, we conclude that dark adaptation is affected to a great extent by local luminance, even when controlling for veiling luminance. Based on our results, a simple but adequate model is proposed to predict the adaptation luminance threshold for backgrounds having a nonuniform luminance distribution.
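    A hypothetical sketch in the spirit of the model this record proposes: the effective adaptation luminance is taken as a weighted average of background patch luminances, with weights decaying with the patch's visual angle from the viewing direction. The Gaussian weighting and its 10-degree width are assumptions for illustration, not the authors' fitted model.

```python
import numpy as np

def adaptation_luminance(patch_lum, patch_angle_deg, sigma_deg=10.0):
    """Effective adaptation luminance (cd/m^2) from patch luminances and
    their angular distances from the viewing direction (degrees).
    Gaussian angular weighting is an illustrative assumption."""
    w = np.exp(-0.5 * (np.asarray(patch_angle_deg, float) / sigma_deg) ** 2)
    return np.sum(w * np.asarray(patch_lum, float)) / np.sum(w)

# A bright area close to the viewing direction dominates the adaptation level,
# consistent with the longer adaptation times reported above.
near = adaptation_luminance([1000, 10, 10], [5, 30, 60])   # bright patch at 5 deg
far  = adaptation_luminance([10, 10, 1000], [5, 30, 60])   # bright patch at 60 deg
print(near > far)
```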

  2. Luminal perfusion of isolated gastric glands.

    PubMed

    Waisbren, S J; Geibel, J; Boron, W F; Modlin, I M

    1994-04-01

    We have extended to rabbit gastric glands the technique for perfusing single isolated renal tubules. We isolated glands by hand dissection and used concentric glass pipettes to hold them and perfuse their lumina. Parietal cells (PCs), which tended to be located toward the gland opening, were identified by their pyramidal shape, large size, and autofluorescence. Chief cells (CCs) were identified by their round shape and smaller size. In some experiments, we perfused the lumen with hydroxypyrenetrisulfonate, a pH-sensitive fluorophore, at pH 7.4 and used digital image processing to monitor luminal pH (pH1). Solutions were buffered with N-2-hydroxyethylpiperazine-N'-2-ethanesulfonic acid to pH 7.4 at 37 degrees C. With fast perfusion, we found no evidence of decreased pH1, even with stimulation by 10 microM carbachol. With slow perfusion, pH1 often fell below the dye's sensitive range (pH < 5), especially at low perfusate buffering power. In other experiments, we loaded cells with the pH-sensitive dye 2',7'-bis(2-carboxyethyl)-5(6)-carboxyfluorescein and monitored intracellular pH (pHi) in multiple individual PCs and CCs in a single gland. Mean pHi was 7.21 +/- 0.02 (n = 136 cells) for PCs and 7.27 +/- 0.03 (n = 103) for CCs. To examine the response to decreased pH1 and basolateral pH (pHb), we lowered pHb to 6.4 or lowered pH1 to 3.4 or 1.4. Lowering pHb to 6.4 for approximately 1 min caused pHi to fall reversibly by 0.39 +/- 0.05 (n = 53) in PCs and 0.58 +/- 0.03 (n = 50) in CCs. Lowering pH1 to 3.4 or 1.4 caused no significant pHi changes in PCs (n = 38 and 82) or in CCs (n = 44 and 77). Carbachol did not affect the response to changes in pH1 or pHb. We conclude that the apical surfaces of PCs and CCs are unusually resistant to extreme pH gradients.

  3. BIA interpretation techniques for vegetation mapping using thematic mapper false color composites (interim report for San Carlos Reservation)

    USGS Publications Warehouse

    Bonner, W.J.; English, T.C.; Haas, R.H.; Feagan, T.R.; McKinley, R.A.

    1987-01-01

    The Bureau of Indian Affairs (BIA) is responsible for the natural resource management of approximately 52 million acres of Trust lands in the contiguous United States. The lands are distributed in a "patchwork" fashion throughout the country. Management responsibilities on these areas include minerals, range, timber, fish and wildlife, agricultural, cultural, and archaeological resources. In an age of decreasing natural resources and increasing natural resource values, effective multiple resource management is critical. BIA has adopted a "systems approach" to natural resource management which utilizes Geographic Information System (GIS) technology. The GIS encompasses a continuum of spatial and relational data elements and includes functional capabilities such as data collection, data entry, data base development, data analysis, data base management, display, and report generation. In support of database development activities, BIA and BLM/TGS conducted a cooperative effort to investigate the potential of 1:100,000-scale Thematic Mapper (TM) False Color Composites (FCCs) for providing vegetation information suitable for input to the GIS, to later be incorporated into a generalized Bureau-wide land cover map. Land cover information is critical, as the majority of reservations currently have no land cover information in either map or digital form. This poster outlines an approach which includes the manual interpretation of land cover using TM FCCs, the digitizing of interpreted polygons, and the editing of digital data, based upon ground-truthing exercises. An efficient and cost-effective methodology for generating large-area land cover information is illustrated for the Mineral Strip area on the San Carlos Indian Reservation in Arizona. Techniques which capitalize on the knowledge of local natural resources professionals, while minimizing machine processing requirements, are suggested.

  4. Mapping forested wetlands in the Great Zhan River Basin through integrating optical, radar, and topographical data classification techniques.

    PubMed

    Na, X D; Zang, S Y; Wu, C S; Li, W L

    2015-11-01

    Knowledge of the spatial extent of forested wetlands is essential to many studies, including wetland functioning assessment, greenhouse gas flux estimation, and wildlife suitable habitat identification. For discriminating forested wetlands from their adjacent land cover types, researchers have resorted to image analysis techniques applied to numerous remotely sensed data. While these efforts have met with some success, there is still no consensus on the optimal approaches for mapping forested wetlands. To address this problem, we examined two machine learning approaches, random forest (RF) and K-nearest neighbor (KNN) algorithms, and applied these two approaches within the framework of pixel-based and object-based classifications. The RF and KNN algorithms were constructed using predictors derived from Landsat 8 imagery, Radarsat-2 advanced synthetic aperture radar (SAR), and topographical indices. The results show that the object-based classifications performed better than per-pixel classifications using the same algorithm (RF) in terms of overall accuracy, and the difference in their kappa coefficients is statistically significant (p<0.01). There were noticeable omissions for forested and herbaceous wetlands in the per-pixel classifications using the RF algorithm. As for the object-based image analysis, there were also statistically significant differences (p<0.01) in kappa coefficients between results based on the RF and KNN algorithms. The object-based classification using RF provided a more visually adequate distribution of the land cover types of interest, while the object classifications based on the KNN algorithm showed noticeable commissions for forested wetlands and omissions for agricultural land. This research proves that object-based classification with RF using optical, radar, and topographical data improved the mapping accuracy of land covers and provided a feasible approach to discriminating forested wetlands from the other land cover types in forested areas.
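    The pixel-based RF-versus-KNN comparison described in this record can be sketched with scikit-learn. The synthetic six-feature samples below merely stand in for stacked Landsat 8 bands, Radarsat-2 SAR backscatter, and topographic indices; the class rule and hyperparameters are illustrative assumptions, not the study's configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import cohen_kappa_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-ins for stacked predictors: 6 features per pixel,
# 3 land-cover classes defined deterministically from two of the features.
rng = np.random.default_rng(42)
X = rng.normal(size=(600, 6))
y = (X[:, 0] + 0.5 * X[:, 1] > 0.5).astype(int) + (X[:, 0] > 1.2).astype(int)

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
kappas = {}
for name, clf in [("RF", RandomForestClassifier(n_estimators=200, random_state=0)),
                  ("KNN", KNeighborsClassifier(n_neighbors=5))]:
    # kappa coefficient on the held-out pixels, as in the study's comparison
    kappas[name] = cohen_kappa_score(yte, clf.fit(Xtr, ytr).predict(Xte))
    print(f"{name}: kappa = {kappas[name]:.2f}")
```

    The object-based variant would first segment the image and classify per-segment feature statistics rather than per-pixel vectors; the comparison machinery is otherwise the same.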


  6. Lithostratigraphic interpretation from joint analysis of seismic tomography and magnetotelluric resistivity models using self-organizing map techniques

    NASA Astrophysics Data System (ADS)

    Bauer, K.; Muñoz, G.; Moeck, I.

    2012-12-01

    The combined interpretation of different models as derived from seismic tomography and magnetotelluric (MT) inversion represents a more efficient approach to determine the lithology of the subsurface compared with the separate treatment of each discipline. Such models can be developed independently or by application of joint inversion strategies. After the step of model generation using different geophysical methodologies, a joint interpretation work flow includes the following steps: (1) adjustment of a joint earth model based on the adapted, identical model geometry for the different methods, (2) classification of the model components (e.g. model blocks described by a set of geophysical parameters), and (3) re-mapping of the classified rock types to visualise their distribution within the earth model, and petrophysical characterization and interpretation. One possible approach for the classification of multi-parameter models is based on statistical pattern recognition, where different models are combined and translated into probability density functions. Classes of rock types are identified in these methods as isolated clusters with high probability density function values. Such techniques are well-established for the analysis of two-parameter models. Alternatively we apply self-organizing map (SOM) techniques, which have no limitations in the number of parameters to be analysed in the joint interpretation. Our SOM work flow includes (1) generation of a joint earth model described by so-called data vectors, (2) unsupervised learning or training, (3) analysis of the feature map by adopting image processing techniques, and (4) application of the knowledge to derive a lithological model which is based on the different geophysical parameters. We show the usage of the SOM work flow for a synthetic and a real data case study. Both tests rely on three geophysical properties: P velocity and vertical velocity gradient from seismic tomography, and electrical resistivity
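    A minimal self-organizing map in the spirit of the work flow above: each model cell is a data vector (P velocity, vertical velocity gradient, log10 resistivity), and unsupervised training arranges the map so that cells with similar parameters share nearby best-matching units, which can then be grouped into rock classes. Grid size, learning schedule, and the two synthetic "lithologies" are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
# Two synthetic lithologies: slow/conductive vs fast/resistive model cells,
# each described by (Vp in km/s, vertical Vp gradient, log10 resistivity).
data = np.vstack([rng.normal([2.0, 0.5, 1.0], 0.1, (100, 3)),
                  rng.normal([5.5, 1.5, 3.0], 0.1, (100, 3))])

grid = rng.normal(size=(4, 4, 3))            # 4x4 map of weight vectors
for t in range(2000):                        # unsupervised training
    v = data[rng.integers(len(data))]
    d = np.linalg.norm(grid - v, axis=2)
    bi, bj = np.unravel_index(np.argmin(d), d.shape)  # best-matching unit
    lr = 0.5 * (1 - t / 2000)                # decaying learning rate
    sigma = 2.0 * (1 - t / 2000) + 0.5       # shrinking neighborhood
    ii, jj = np.meshgrid(np.arange(4), np.arange(4), indexing="ij")
    h = np.exp(-((ii - bi) ** 2 + (jj - bj) ** 2) / (2 * sigma ** 2))
    grid += lr * h[:, :, None] * (v - grid)  # pull neighborhood toward v

# Cells from the two lithologies should map to different units on the grid
bmu = lambda v: np.unravel_index(np.argmin(np.linalg.norm(grid - v, axis=2)),
                                 (4, 4))
print(bmu(data[0]), bmu(data[150]))
```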

  7. Superluminal, luminal, and subluminal nondiffracting pulses applied to free-space optical systems: theoretical description.

    PubMed

    Garay-Avendaño, Roger L; Zamboni-Rached, Michel

    2016-03-01

    In this paper, we theoretically describe nondiffracting pulses with arbitrary peak velocities that are suitable for distortion-free data signal transmission over long distances using different signal modulation techniques. Our results provide closed-form analytical solutions to the wave equation describing superluminal, luminal, and subluminal ideal nondiffracting pulses with frequency spectra commonly used in the field of optical communications. PMID:26974644

  8. High-Resolution Sea Ice Topography Mapping using UAS-based Sensors and Structure-from-Motion Techniques

    NASA Astrophysics Data System (ADS)

    Saiet, E., II

    2015-12-01

    Digital Elevation Models (DEMs) of sea ice are key descriptors of the surface boundary between ice and atmosphere. High-resolution (meter-scale) and accurate (cm-scale) DEM data are required to correctly understand surface-atmosphere interactions in arctic environments. Beyond that, high-quality DEM data are also needed to understand sea ice stability and quantify the suitability of sea ice as a means of navigation both on and through the ice. Due to the stringent accuracy requirements of sea ice topography mapping, Lidar data are often used to generate sea ice DEMs. Despite their proven performance, Lidar sensors are expensive and difficult to operate, especially in harsh and remote Arctic environments. Hence, alternative, more efficient solutions need to be found. To address this issue, this study investigates the applicability of two recent technical innovations to sea ice DEM production: (1) We analyze the performance of Structure from Motion (SfM) techniques for sea ice topography mapping. SfM is an image processing technique that has recently gained momentum in the geosciences and enables high-quality DEM production from images of uncalibrated off-the-shelf cameras; (2) we investigate the applicability of Unmanned Aerial Systems (UAS) as a platform for our camera systems. UAS have significant advantages for Arctic applications due to their high flexibility, low cost, and ability to fly in environments deemed risky for manned operations. Combined, SfM and UAS may result in an intriguing alternative to traditional sensors. Using data from a 2015 field campaign near Barrow, Alaska, we showcase the DEM measurement performance that can be achieved with UAS-based sensors and SfM processing. In addition to showing examples of DEM products, we will provide results of an extensive performance analysis where DEM measurements were compared to ground observations and DEMs from alternative sources. 
To analyze the impact of flight-track information on DEM quality, we first
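    The DEM-versus-ground-observation comparison mentioned in this record reduces to simple error bookkeeping; a minimal sketch with invented elevations (not campaign data), reporting the cm-scale bias and RMSE such validations target:

```python
import numpy as np

# Invented sample: SfM DEM elevations vs surveyed check points (metres)
dem_z   = np.array([1.52, 1.48, 1.61, 1.55, 1.43])
check_z = np.array([1.50, 1.47, 1.58, 1.56, 1.45])

err = dem_z - check_z
bias = err.mean()                    # systematic vertical offset
rmse = np.sqrt((err ** 2).mean())   # overall vertical accuracy
print(f"bias = {bias * 100:.1f} cm, RMSE = {rmse * 100:.1f} cm")
```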

  9. Combination of techniques for mapping structural and functional connectivity of soil erosion processes: a case study in a small watershed

    NASA Astrophysics Data System (ADS)

    Seeger, Manuel; Taguas, Encarnación; Brings, Christine; Wirtz, Stefan; Rodrigo Comino, Jesus; Albert, Enrique; Ries, Johabbes B.

    2016-04-01

    experiment). The measurements were conducted at representative areas identified in advance by precedent mapping. Preliminary results show that the rills are highly effective in producing sediments, but also in rapidly connecting the different sources with the catchment's outlet. However, they also act as a disconnecting feature for the areas of observation, as they may lead the runoff (and the transported sediments) outside the catchment. On the other hand, the experiments showed that the evidently degraded areas produce only strongly delayed runoff, and thus sediment, whereas the areas with stable deep soils show evidence of fast runoff and erosive responses. The preliminary results of the combination of mapping and experimental techniques demonstrate the different levels at which functional and structural connectivity have to be evaluated. The latter may be, as a geomorphological feature, the result of former process distributions, whereas the directly observable (functional) connectivity may shift in time due to internal feedbacks, such as the result of soil degradation.

  10. Optimal Electromagnetic (EM) Geophysical Techniques to Map the Concentration of Subsurface Ice and Adsorbed Water on Mars and the Moon

    NASA Astrophysics Data System (ADS)

    Stillman, D. E.; Grimm, R. E.

    2013-12-01

    Water ice is ubiquitous in our Solar System and is a probable target for planetary exploration. Mapping the lateral and vertical concentration of subsurface ice from or near the surface could determine the origin of lunar and martian ice and quantify a much-needed resource for human exploration. Determining subsurface ice concentration on Earth is not trivial and has been attempted previously with electrical resistivity tomography (ERT), ground penetrating radar (GPR), airborne EM (AEM), and nuclear magnetic resonance (NMR). These EM geophysical techniques do not actually detect ice, but rather the absence of unfrozen water. This causes a non-unique interpretation of frozen and dry subsurface sediments. This works well in the arctic because most locations are not dry. However, for planetary exploration, liquid water is exceedingly rare and subsurface mapping must discriminate between an ice-rich and a dry subsurface. Luckily, nature has provided a unique electrical signature of ice: its dielectric relaxation. The dielectric relaxation of ice creates a temperature and frequency dependence of the electrical properties and varies the relative dielectric permittivity from ~3.1 at radar frequencies to >100 at low frequencies. On Mars, sediments smaller than silt size can hold enough adsorbed unfrozen water to complicate the measurement. This is because the presence of adsorbed water also creates frequency-dependent electrical properties. The dielectric relaxation of adsorbed water and ice can be separated as they have different shapes and frequency ranges as long as a spectrum spanning the two relaxations is measured. The volume concentration of ice and adsorbed water is a function of the strength of their relaxations. Therefore, we suggest that capacitively-coupled dielectric spectroscopy (a.k.a. spectral induced polarization or complex resistivity) can detect the concentration of both ice and adsorbed water in the subsurface. To prove this concept we have collected
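    The superposed-relaxation picture described in this record can be sketched by modeling the complex permittivity as two Debye terms (ice plus adsorbed water) whose different relaxation times allow their strengths to be separated from a broadband spectrum. The relaxation strengths and time constants below are illustrative values, not measured ones; they are chosen so the real permittivity runs from >100 at low frequency to ~3.1 at radar frequency, as the abstract states.

```python
import numpy as np

EPS_INF = 3.1   # high-frequency relative permittivity of ice

def eps_complex(f, d_ice=97.0, tau_ice=1e-4, d_ads=20.0, tau_ads=1e-7):
    """Two-Debye model: ice relaxation + adsorbed-water relaxation.
    f in Hz; d_* are relaxation strengths, tau_* relaxation times (s)."""
    w = 2 * np.pi * np.asarray(f, float)
    return (EPS_INF
            + d_ice / (1 + 1j * w * tau_ice)    # ice relaxation
            + d_ads / (1 + 1j * w * tau_ads))   # adsorbed-water relaxation

f = np.logspace(0, 9, 10)   # 1 Hz ... 1 GHz
eps = eps_complex(f)
print(f"eps'(1 Hz) = {eps.real[0]:.1f},  eps'(1 GHz) = {eps.real[-1]:.2f}")
```

    Fitting d_ice and d_ads to a measured spectrum spanning both relaxations is what yields the separate ice and adsorbed-water concentrations.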

  11. Low luminance/eyes closed and monochromatic stimulations reduce variability of flash visual evoked potential latency

    PubMed Central

    Subramanian, Senthil Kumar; Gaur, Giriwar Singh; Narayan, Sunil K.

    2013-01-01

    Context: Visual evoked potentials are useful in investigating the physiology and pathophysiology of the human visual system. Flash visual evoked potential (FVEP), though technically easier, has less clinical utility because it shows great variations in both latency and amplitude for normal subjects. Aim: To study the effect of eye closure, low luminance, and monochromatic stimulation on the variability of FVEPs. Subjects and Methods: Subjects in self-reported good health in the age group of 18-30 years were divided into three groups. All participants underwent FVEP recording with eyes open and with white light at 0.6 J luminance (standard technique). Next recording was done in group 1 with closed eyes, group 2 with 1.2 and 20 J luminance, and group 3 with red and blue lights, while keeping all the other parameters constant. Two trials were given for each eye, for each technique. The same procedure was repeated at the same clock time on the following day. Statistical Analysis: Variation in FVEP latencies between the individuals (interindividual variability) and the variations within the same individual for four trials (intraindividual variability) were assessed using coefficient of variance (COV). The technique with lower COV was considered the better method. Results: Recording done with closed eyes, 0.6 J luminance, and monochromatic light (blue > red) showed lower interindividual and intraindividual variability in P2 and N2 as compared to standard techniques. Conclusions: Low luminance flash stimulations and monochromatic light will reduce FVEP latency variability and may be clinically useful modifications of FVEP recording technique. PMID:24339591
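    The variability comparison in this record uses the coefficient of variance (COV = standard deviation / mean) of FVEP latencies across trials, with the lower-COV technique judged more repeatable. A minimal sketch with invented latency values (not the study's data):

```python
import numpy as np

def cov(x):
    """Coefficient of variance: sample SD divided by mean."""
    x = np.asarray(x, dtype=float)
    return x.std(ddof=1) / x.mean()

standard_ms = [95, 110, 102, 121]   # hypothetical P2 latencies, standard flash
low_lum_ms  = [101, 104, 99, 103]   # hypothetical low-luminance trials
print(f"COV standard = {cov(standard_ms):.3f}, low-luminance = {cov(low_lum_ms):.3f}")
```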

  12. Deep Resistivity Sounding (DRS) technique for mapping of sub-trappean sediments - A case study from central India

    NASA Astrophysics Data System (ADS)

    Veeraiah, B.; Ashok Babu, G.

    2014-06-01

    Mapping sub-trappean Mesozoic sediments is a complex geophysical problem of considerable interest to the oil industry worldwide. Sediments underlying the traps are of varying thickness, causing considerable uncertainty in the interpretation of geophysical data. The Deep Resistivity Sounding (DRS) technique is an effective, non-invasive electrical geophysical tool for delineating sub-trappean sediments to large depths (~3-4 km), owing to the marked resistivity contrast that exists between the trap, the sediments, and the basement. As part of an integrated geophysical study, eighty-one deep resistivity soundings were conducted in the eastern part of the Deccan flood basalt region, central India. The region is situated in the western part of the Indian subcontinent and is covered by the Deccan traps, mostly exposed, except for a few places where thin Tertiary sediments are present. DRS data were acquired with the Schlumberger electrode configuration up to a maximum current electrode separation of 10 km using high-power deep resistivity equipment. The transmitter is capable of sending a maximum current on the order of 20 A, and the receiver can measure signals in microvolts. Layer parameters, i.e., the resistivities and thicknesses of the various layers, were determined from 1D modeling of all eighty-one DRS stations. To explain the layered earth structure, geo-electric sections were prepared along a few profiles, and the results from the two profiles showing the greatest sediment thickness are discussed in the present paper.
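    An illustrative 1D forward model behind the kind of interpretation this record describes: the apparent resistivity of an ideal Schlumberger array over a two-layer earth, via the classical image-series solution rho_a/rho1 = 1 + 2*sum_n k^n * (1 + (2nh/L)^2)^(-3/2), with k = (rho2-rho1)/(rho2+rho1), h the layer thickness, and L = AB/2. A real trap/sediment/basement section needs three or more layers and a filter-based solver; this two-layer sketch only shows the asymptotic behavior, and the resistivity values are assumptions.

```python
import numpy as np

def rho_app(L, rho1, rho2, h, nterms=2000):
    """Schlumberger apparent resistivity over a two-layer earth (image series).
    L = AB/2 (m), rho1/rho2 = layer resistivities (ohm-m), h = thickness (m)."""
    k = (rho2 - rho1) / (rho2 + rho1)        # reflection coefficient
    n = np.arange(1, nterms + 1)
    s = np.sum(k ** n * (1 + (2 * n * h / L) ** 2) ** -1.5)
    return rho1 * (1 + 2 * s)

# Resistive basalt flow (500 ohm-m, 200 m thick) over conductive sediments
# (20 ohm-m): short spreads see the trap, long spreads see the sediments.
for L in [10.0, 5000.0]:
    print(f"AB/2 = {L:7.0f} m -> rho_a = {rho_app(L, 500.0, 20.0, 200.0):.1f} ohm-m")
```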

  13. Evaluation of SLAR and simulated thematic mapper MSS data for forest cover mapping using computer-aided analysis techniques

    NASA Technical Reports Server (NTRS)

    Hoffer, R. M.; Dean, M. E.; Knowlton, D. J.; Latty, R. S.

    1982-01-01

    Kershaw County, South Carolina was selected as the study site for analyzing simulated thematic mapper MSS data and dual-polarized X-band synthetic aperture radar (SAR) data. The impact of the improved spatial and spectral characteristics of the LANDSAT D thematic mapper data on computer-aided analysis for forest cover type mapping was examined, as well as the value of synthetic aperture radar data for differentiating forest and other cover types. The utility of pattern recognition techniques for analyzing SAR data was assessed. Topics covered include: (1) collection of TMS and reference data; (2) reformatting, geometric and radiometric rectification, and spatial resolution degradation of TMS data; (3) development of training statistics and test data sets; (4) evaluation of different numbers and combinations of wavelength bands on classification performance; (5) comparison among three classification algorithms; and (6) the effectiveness of the principal component transformation in data analysis. The collection, digitization, reformatting, and geometric adjustment of SAR data are also discussed. Image interpretation results and classification results are presented.

  14. Orbital masses of nearby luminous galaxies

    SciTech Connect

    Karachentsev, Igor D.; Kudrya, Yuri N. E-mail: yukudrya@gmail.com

    2014-09-01

    We use observational properties of galaxies accumulated in the Updated Nearby Galaxy Catalog to derive a dark matter mass of luminous galaxies via motions of their companions. The data on orbital-to-stellar mass ratio are presented for 15 luminous galaxies situated within 11 Mpc from us: the Milky Way, M31, M81, NGC 5128, IC342, NGC 253, NGC 4736, NGC 5236, NGC 6946, M101, NGC 4258, NGC 4594, NGC 3115, NGC 3627, and NGC 3368, as well as for a composite suite around other nearby galaxies of moderate and low luminosity. The typical ratio for these galaxies is M {sub orb}/M {sub *} = 31, corresponding to the mean local density of matter Ω {sub m} = 0.09, i.e., one-third of the global cosmic density. This quantity seems to be rather an upper limit of dark matter density, since the peripheral population of the suites may suffer from the presence of fictitious unbound members. We note that the Milky Way and M31 halos have lower dimensions and lower stellar masses than those of the other 13 nearby luminous galaxies. However, the dark-to-stellar mass ratio for both the Milky Way and M31 is typical for other neighboring luminous galaxies. The distortion in the Hubble flow, observed around the Local Group and five other neighboring groups, yields their total masses within the radius of a zero velocity surface, R {sub 0}; these masses are slightly lower than the orbital and virial values. This difference may be due to the effect of dark energy producing a kind of 'mass defect' within R {sub 0}.

  15. Luminal Conversion and Immunoregulation by Probiotics

    PubMed Central

    Ganesh, Bhanu Priya; Versalovic, James

    2015-01-01

    Beneficial microbes are responsible for the synthesis of nutrients and metabolites that are likely important for the maintenance of mammalian health. Many nutrients and metabolites derived from the gut microbiota by luminal conversion have been implicated in the development, homeostasis and function of innate and adaptive immunity. These factors clearly suggest that intestinal microbiota may influence host immunity via microbial metabolite-dependent mechanisms. We describe how intestinal microbes including probiotics generate microbial metabolites that modulate mucosal and systemic immunity. PMID:26617521

  16. Relation between acid back-diffusion and luminal surface hydrophobicity in canine gastric mucosa: Effects of salicylate and prostaglandin

    SciTech Connect

    Goddard, P.J.

    1989-01-01

    The stomach is thought to be protected from luminal acid by a gastric mucosal barrier that restricts the diffusion of acid into tissue. This study tested the hypothesis that the hydrophobic luminal surface of canine gastric mucosa incubated in Ussing chambers impedes the back-diffusion of luminal acid into the tissue. Isolated sheets of mucosa were treated with cimetidine to inhibit spontaneous acid secretion, and incubated under conditions that prevented significant secretion of luminal bicarbonate. By measuring acid loss from the luminal compartment using the pH-stat technique, acid back-diffusion was continuously monitored; potential difference (PD) was measured as an index of tissue viability. Tissue luminal surface hydrophobicity was estimated by contact angle analysis at the end of each experiment. Addition of 16,16-dimethyl prostaglandin E{sub 2} to the nutrient compartment enhanced luminal surface hydrophobicity, but did not reduce acid back-diffusion in tissues that maintained a constant PD. 10 mM salicylate at pH 4.00 in the luminal compartment reduced surface hydrophobicity, but this decrease did not occur if 1 µg/ml prostaglandin was present in the nutrient solution. Despite possessing relatively hydrophilic and relatively hydrophobic surface properties, respectively, acid back-diffusion in the absence of salicylate was not significantly different between these two groups. Neither group maintained a PD after incubation with salicylate. Lastly, radiolabeled salicylate was used to calculate the free (non-salicylate associated) acid loss in tissues incubated with salicylate and/or prostaglandin. No significant correlation was found between free acid back-diffusion and luminal surface hydrophobicity. These data do not support the hypothesis that acid back-diffusion is impeded by the hydrophobic surface presented by isolated canine gastric mucosa.

  17. Nonlinearities in the binocular combination of luminance and contrast.

    PubMed

    Baker, Daniel H; Wallis, Stuart A; Georgeson, Mark A; Meese, Tim S

    2012-03-01

    We studied the rules by which visual responses to luminous targets are combined across the two eyes. Previous work has found very different forms of binocular combination for targets defined by increments and by decrements of luminance, with decrement data implying a severe nonlinearity before binocular combination. We ask whether this difference is due to the luminance of the target, the luminance of the background, or the sign of the luminance excursion. We estimated the pre-binocular nonlinearity (power exponent) by fitting a computational model to ocular equibrightness matches. The severity of the nonlinearity had a monotonic dependence on the signed difference between target and background luminance. For dual targets, in which there was both a luminance increment and a luminance decrement (e.g. contrast), perception was governed largely by the decrement. The asymmetry in the nonlinearities derived from the subjective matching data made a clear prediction for visual performance: there should be more binocular summation for detecting luminance increments than for detecting luminance decrements. This prediction was confirmed by the results of a subsequent experiment. We discuss the relation between these results and luminance nonlinearities such as a logarithmic transform, as well as the involvement of contemporary model architectures of binocular vision.

  18. Evolution Of Map Display Optical Systems

    NASA Astrophysics Data System (ADS)

    Boot, Alan

    1983-06-01

    It is now over 20 years since Ferranti plc introduced optically projected map displays into operational aircraft navigation systems. Then, as now, it was the function of the display to present an image of a topographical map to a pilot or navigator with his present position clearly identified. Then, as now, the map image was projected from a reduced image stored on colour micro film. Then, as now, the fundamental design problems are the same. In the exposed environment of an aircraft cockpit where brightness levels may vary from those associated with direct sunlight on the one hand, to starlight on the other, how does one design an optical system with sufficient luminance, contrast and resolution where in the daytime sunlight may fall on the display or in the pilot's eyes, and at night time the display luminance must not detract from the pilot's ability to pick up external cues? This paper traces the development of Ferranti plc optically projected map displays from the early V Bomber and the ill-fated TSR2 displays to the Harrier and Concorde displays. It then goes on to the development of combined map and electronic displays (COMED), showing how an earlier design, as fitted to Tornado, has been developed into the current COMED design which is fitted to the F-18 and Jaguar aircraft. In each of the above display systems particular features of optical design interest are identified and their impact on the design as a whole are discussed. The use of prisms both for optical rotation and translation, techniques for the maximisation of luminance, the problems associated with contrast enhancement, particularly with polarising filters in the presence of optically active materials, the use of aerial image combining systems and the impact of the pilot interface on the system parameters are all included. Perhaps the most interesting result in considering the evolution of map displays has not been so much the designer's solutions in overcoming the various design problems but

  19. Facilitating the exploitation of ERTS imagery using snow enhancement techniques. [geological mapping of New England test area

    NASA Technical Reports Server (NTRS)

    Wobber, F. J.; Martin, K. R. (Principal Investigator); Amato, R. V.; Leshendok, T.

    1974-01-01

    The author has identified the following significant results. The procedure for conducting a regional geological mapping program utilizing snow-enhanced ERTS-1 imagery has been summarized. While it is recognized that mapping procedures in geological programs will vary from area to area and from geologist to geologist, it is believed that the procedure tested in this project is applicable over a wide range of mapping programs. The procedure is designed to maximize the utility and value of ERTS-1 imagery and aerial photography within the initial phase of geological mapping programs. Sample products which represent interim steps in the mapping formula (e.g. the ERTS Fracture-Lineament Map) have been prepared. A full account of these procedures and products will be included within the Snow Enhancement Users Manual.

  20. The Effect of Learner Constructed, Fill in the Map Concept Map Technique, and Summarizing Strategy on Iranian Pre-University Students' Reading Comprehension

    ERIC Educational Resources Information Center

    Soleimani, Hassan; Nabizadeh, Fatemeh

    2012-01-01

    Concept maps (CM) are powerful tools which have different uses in educational contexts; however, this study limited its extension and explored its impact on the reading comprehension skill of Iranian EFL students. To this purpose, a proficiency test was employed and 90 intermediate pre-university students were chosen and divided into three groups:…

  1. Analysis of the nonlinear behavior of shear-Alfvén modes in tokamaks based on Hamiltonian mapping techniques

    NASA Astrophysics Data System (ADS)

    Briguglio, S.; Wang, X.; Zonca, F.; Vlad, G.; Fogaccia, G.; Di Troia, C.; Fusco, V.

    2014-11-01

    We present a series of numerical simulation experiments set up to illustrate the fundamental physics processes underlying the nonlinear dynamics of Alfvénic modes resonantly excited by energetic particles in tokamak plasmas and of the ensuing energetic particle transports. These phenomena are investigated by following the evolution of a test particle population in the electromagnetic fields computed in self-consistent MHD-particle simulation performed by the HMGC code. Hamiltonian mapping techniques are used to extract and illustrate several features of wave-particle dynamics. The universal structure of resonant particle phase space near an isolated resonance is recovered and analyzed, showing that bounded orbits and untrapped trajectories, divided by the instantaneous separatrix, form phase space zonal structures, whose characteristic non-adiabatic evolution time is the same as the nonlinear time of the underlying fluctuations. Bounded orbits correspond to a net outward resonant particle flux, which produces a flattening and/or gradient inversion of the fast ion density profile around the peak of the linear wave-particle resonance. The connection of this phenomenon to the mode saturation is analyzed with reference to two different cases: a Toroidal Alfvén eigenmode in a low shear magnetic equilibrium and a weakly unstable energetic particle mode for stronger magnetic shear. It is shown that, in the former case, saturation is reached because of radial decoupling (resonant particle redistribution matching the mode radial width) and is characterized by a weak dependence of the mode amplitude on the growth rate. In the latter case, saturation is due to resonance detuning (resonant particle redistribution matching the resonance width) with a stronger dependence of the mode amplitude on the growth rate.
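    The "universal structure of resonant particle phase space near an isolated resonance" mentioned above is pendulum-like: trapped (bounded) and untrapped (passing) trajectories are divided by a separatrix at the pendulum's separatrix energy. A minimal sketch of that classification, not the HMGC code; the normalized resonance strength and test points are illustrative:

    ```python
    import math

    K = 1.0  # normalized resonance (pendulum) strength, illustrative

    def pendulum_energy(theta, p, k=K):
        """H = p**2/2 - k*cos(theta); the separatrix lies at H = k."""
        return 0.5 * p * p - k * math.cos(theta)

    def is_trapped(theta, p, k=K):
        """Bounded (trapped) orbits have energy below the separatrix value."""
        return pendulum_energy(theta, p, k) < k

    # Deeply trapped particle near the O-point vs. a fast passing particle:
    trapped = is_trapped(0.1, 0.0)   # True  (H ~ -1 < 1)
    passing = is_trapped(0.0, 2.5)   # False (H = 2.125 > 1)
    ```

    In the simulations described above, the same classification is applied along test-particle orbits in the self-consistent fields, so the separatrix itself evolves with the mode amplitude.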

  2. Application of advanced seismic reflection imaging techniques to mapping permeable zones at Dixie Valley, Nevada. Final technical report

    SciTech Connect

    1998-02-18

    Multifold seismic reflection data from the Dixie Valley geothermal field in Nevada were reprocessed using a nonlinear optimization scheme called simulated annealing to model subsurface acoustic velocities, followed by a pre-stack Kirchhoff migration to produce accurate and detailed depth-migrated images of subsurface structure. In contrast to conventional processing techniques, these methods account for significant lateral variations in velocity and thus have the potential ability to image steeply-dipping faults and fractures that may affect permeability within geothermal fields. The optimization scheme develops two-dimensional velocity models to within 6% of velocities obtained from well and surface geologic data. Only the seismic data (i.e., first arrival times of P waves) are used to construct the velocity models and pre-stack migration images, and no other a priori assumptions are invoked. Velocities obtained by processing individual seismic tracks were integrated to develop a block diagram of velocities to 2.3 km depth within the Dixie Valley geothermal field. Details of the tectonic and stratigraphic structure allowed three dimensional extension of the interpretations of two dimensional data. Interpretations of the processed seismic data are compared with well data, surface mapping, and other geophysical data. The Dixie Valley fault along the southeastern Stillwater Range Piedmont is associated with a pronounced lateral velocity gradient that is interpreted to represent the juxtaposition of relatively low velocity basin-fill strata in the hanging wall against higher velocity crystalline rocks in the footwall. The down-dip geometry of the fault was evaluated by inverting arrival times from a negative move-out event, which we associate with the dipping fault plane, on individual shot gathers for seismic line SRC-3 for the location and depth of the associated reflection points on the fault.
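    The paper's velocity optimization is not reproduced here, but the accept/reject core of simulated annealing is generic: random perturbations are accepted with the Metropolis criterion at a geometrically cooling temperature. A minimal sketch with a toy two-layer velocity misfit; all parameter values are illustrative:

    ```python
    import math
    import random

    def simulated_annealing(cost, x0, step=0.1, t0=1.0, cooling=0.995,
                            iters=2000, seed=1):
        """Minimize cost(x) by randomly perturbing x and accepting worse
        moves with probability exp(-delta/T) while T cools geometrically."""
        rng = random.Random(seed)
        x, fx = list(x0), cost(x0)
        best, fbest = list(x), fx
        t = t0
        for _ in range(iters):
            cand = [xi + rng.uniform(-step, step) for xi in x]
            fc = cost(cand)
            if fc < fx or rng.random() < math.exp((fx - fc) / t):
                x, fx = cand, fc
                if fx < fbest:
                    best, fbest = list(x), fx
            t *= cooling
        return best, fbest

    # Toy misfit: recover two layer velocities (2.0, 4.5 km/s) from "observed" values
    target = [2.0, 4.5]
    cost = lambda v: sum((vi - ti) ** 2 for vi, ti in zip(v, target))
    model, misfit = simulated_annealing(cost, [3.0, 3.0])
    ```

    In the real problem the cost is the misfit between observed and predicted first-arrival times through a 2D velocity model, rather than this direct velocity comparison.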

  3. Analysis of the nonlinear behavior of shear-Alfvén modes in tokamaks based on Hamiltonian mapping techniques

    SciTech Connect

    Briguglio, S.; Vlad, G.; Fogaccia, G.; Di Troia, C.; Fusco, V.; Wang, X.; Zonca, F.

    2014-11-15

    We present a series of numerical simulation experiments set up to illustrate the fundamental physics processes underlying the nonlinear dynamics of Alfvénic modes resonantly excited by energetic particles in tokamak plasmas and of the ensuing energetic particle transports. These phenomena are investigated by following the evolution of a test particle population in the electromagnetic fields computed in self-consistent MHD-particle simulation performed by the HMGC code. Hamiltonian mapping techniques are used to extract and illustrate several features of wave-particle dynamics. The universal structure of resonant particle phase space near an isolated resonance is recovered and analyzed, showing that bounded orbits and untrapped trajectories, divided by the instantaneous separatrix, form phase space zonal structures, whose characteristic non-adiabatic evolution time is the same as the nonlinear time of the underlying fluctuations. Bounded orbits correspond to a net outward resonant particle flux, which produces a flattening and/or gradient inversion of the fast ion density profile around the peak of the linear wave-particle resonance. The connection of this phenomenon to the mode saturation is analyzed with reference to two different cases: a Toroidal Alfvén eigenmode in a low shear magnetic equilibrium and a weakly unstable energetic particle mode for stronger magnetic shear. It is shown that, in the former case, saturation is reached because of radial decoupling (resonant particle redistribution matching the mode radial width) and is characterized by a weak dependence of the mode amplitude on the growth rate. In the latter case, saturation is due to resonance detuning (resonant particle redistribution matching the resonance width) with a stronger dependence of the mode amplitude on the growth rate.

  4. The Formation of Primordial Luminous Objects

    SciTech Connect

    Ripamonti, Emanuele; Abel, Tom; /KIPAC, Menlo Park

    2005-08-04

    structure formation. In these notes we will leave the discussion of feedback to lecture notes by Ferrara & Salvaterra and by Madau & Haardt in this same book and focus only on the aspects of the formation of the first objects. The advent of cosmological numerical hydrodynamics in particular allow a fresh new look at these questions. Hence, these notes will touch on aspects of theoretical cosmology to chemistry, computer science, hydrodynamics and atomic physics. For further reading and more references on the subject we refer the reader to other relevant reviews such as Barkana & Loeb 2001, and more recently Ciardi & Ferrara 2004, Glover 2004 and Bromm & Larson 2004. In these notes, we try to give a brief introduction to only the most relevant aspects. We will start with a brief overview of the relevant cosmological concepts in section 2, followed by a discussion of the properties of primordial material (with particular emphasis to its cooling and its chemistry) in section 3. We will then review the technique and the results of numerical simulations in sections 4 and 5: the former will deal with detailed 3D simulations of the formation of gaseous clouds which are likely to transform into luminous objects, while the latter will examine results (mostly from 1D codes) about the modalities of such transformation. Finally, in section 6 we will critically discuss the results of the previous sections, examining their consequences and comparing them to our present knowledge of the universe.

  5. Topical report on subsurface fracture mapping from geothermal wellbores. Phase I. Pulsed radar techniques. Phase II. Conventional logging methods. Phase III. Magnetic borehole ranging

    SciTech Connect

    Hartenbaum, B.A.; Rawson, G.

    1980-09-01

    To advance the state-of-the-art in Hot Dry Rock technology, an evaluation is made of (i) the use of radar to map far-field fractures, (ii) the use of more than twenty different conventional well logging tools to map borehole-fracture intercepts, and (iii) the use of magnetic dipole ranging to determine the relative positions of the injection well and the production well within the fractured zone. It is found that according to calculations, VHF backscatter radar has the potential for mapping fractures within a distance of 50 ± 20 meters from the wellbore. A new technique for improving fracture identification is presented. Analyses of extant data indicate that when used synergistically the (1) caliper, (2) resistivity dipmeter, (3) televiewer, (4) television, (5) impression packer, and (6) acoustic transmission are useful for mapping borehole-fracture intercepts. Improvements in both data interpretation techniques and high temperature operation are required. The surveying of one borehole from another appears feasible at ranges of up to 200 to 500 meters by using a low frequency magnetic field generated by a moderately strong dipole source (a solenoid) located in one borehole, a sensitive B field detector that traverses part of the second borehole, narrow band filtering, and special data inversion techniques.

  6. Luminal B breast cancer subtype displays a dicotomic epigenetic pattern.

    PubMed

    Bediaga, Naiara G; Beristain, Elena; Calvo, Borja; Viguri, María A; Gutierrez-Corres, Borja; Rezola, Ricardo; Ruiz-Diaz, Irune; Guerra, Isabel; de Pancorbo, Marian M

    2016-01-01

    Luminal B breast tumors have aggressive clinical and biological features, and constitute the most heterogeneous molecular subtype, both clinically and molecularly. Unfortunately, the immunohistochemistry correlate of the luminal B subtype remains still imprecise, and it has now become of paramount importance to define a classification scheme capable of segregating luminal tumors into clinically meaningful subgroups that may be used clinically to guide patient management. With the aim of unraveling the DNA methylation profiles of the luminal subtypes currently being most used in the clinical setting, we have quantified the DNA methylation level of 27,578 CpG sites in 17 luminal B (ER+, Ki67 ≥ 20 % or PgR < 20 % and HER2-), 8 luminal A (ER+ and Ki67 < 20 %) and 4 luminal B-HER2+ (ER+ and HER2+) breast cancer samples by using the Illumina Infinium methylation microarray approach. Unsupervised hierarchical clustering revealed that DNA methylation stratifies luminal B samples in two categories with differing epigenetic and clinical features. One subgroup of luminal B samples showed a methylator phenotype and clustered with the lumB-HER tumors, while the other showed fewer methylated events and clustered with the luminal A. A 3 CpG marker panel capable of discriminating methylator versus non-methylator luminal B samples was identified and further validated in an independent cohort of patients. Our results provide evidence that DNA methylation and, more specifically, a panel of 3 CpG markers, enables the stratification of luminal B samples in two categories with differing epigenetic and clinical features and support the utilization of this panel for therapeutic stratification of patients with luminal breast cancer. PMID:27330889

  7. Constraining Emission Models of Luminous Blazar Sources

    SciTech Connect

    Sikora, Marek; Stawarz, Lukasz; Moderski, Rafal; Nalewajko, Krzysztof; Madejski, Greg; /KIPAC, Menlo Park /SLAC

    2009-10-30

    Many luminous blazars which are associated with quasar-type active galactic nuclei display broad-band spectra characterized by a large luminosity ratio of their high-energy ({gamma}-ray) and low-energy (synchrotron) spectral components. This large ratio, reaching values up to 100, challenges the standard synchrotron self-Compton models by means of substantial departures from the minimum power condition. Luminous blazars also have typically very hard X-ray spectra, and those in turn seem to challenge hadronic scenarios for the high energy blazar emission. As shown in this paper, no such problems are faced by the models which involve Comptonization of radiation provided by a broad-line region or dusty molecular torus. The lack or weakness of bulk Compton and Klein-Nishina features indicated by the presently available data favors production of {gamma}-rays via up-scattering of infrared photons from hot dust. This implies that the blazar emission zone is located at parsec-scale distances from the nucleus, and as such is possibly associated with the extended, quasi-stationary reconfinement shocks formed in relativistic outflows. This scenario predicts characteristic timescales for flux changes in luminous blazars to be days/weeks, consistent with the variability patterns observed in such systems at infrared, optical and {gamma}-ray frequencies. We also propose that the parsec-scale blazar activity can be occasionally accompanied by dissipative events taking place at sub-parsec distances and powered by internal shocks and/or reconnection of magnetic fields. These could account for the multiwavelength intra-day flares occasionally observed in powerful blazar sources.

  8. Evolving Our Evaluation of Luminous Environments

    NASA Technical Reports Server (NTRS)

    Clark, Toni

    2016-01-01

    The advance in solid state light emitting technologies and optics for lighting and visual communication necessitates the evaluation of how NASA envisions spacecraft lighting architectures and how NASA uses industry standards for the design and evaluation of lighting systems. Current NASA lighting standards and requirements for existing architectures focus on the separate ability of a lighting system to throw light against a surface or the ability of a display system to provide the appropriate visual contrast. This project investigated large luminous surface lamps as an alternative or supplement to overhead lighting. The efficiency of the technology was evaluated for uniformity and power consumption.

  9. Luminous variables in the Quintuplet cluster

    NASA Astrophysics Data System (ADS)

    Glass, I. S.; Matsumoto, S.; Carter, B. S.; Sekiguchi, K.

    1999-03-01

    We report observations of variability amongst the stars of the 'Quintuplet' cluster located about 30 pc in projection from the centre of the Galaxy. Two of the five cocoon star members, which may be protostars or peculiar Wolf-Rayet stars, are seen to vary slowly with moderate amplitude (0.4-0.5 mag). The bright star within the 'Pistol' H II region, suspected of being a luminous blue variable (LBV), has in fact been found to show variability, confirming its tentative classification. A second nearby presumed LBV also varies. One of the apparent Quintuplet members is likely to be a Mira variable along the same line of sight.

  10. Making large class basic histology lectures more interactive: The use of draw-along mapping techniques and associated educational activities.

    PubMed

    Kotzé, Sanet Henriët; Mole, Calvin Gerald

    2015-01-01

    At Stellenbosch University, South Africa, basic histology is taught to a combination class of almost 400 first-year medical, physiotherapy, and dietetic students. Many students often find the amount of work in basic histology lectures overwhelming and consequently lose interest. The aim was to determine if a draw-along mapping activity would focus students during large class lectures. After each lecture on three basic histology tissues, a guided draw-along mapping session covering the work from the lecture was introduced in the form of a click-advance PowerPoint presentation which was used to demonstrate the unfolding of an "ideal" map. The lecturer simultaneously drew a similar map using an overhead projector, allowing the students to draw their own maps on blank sheets of paper along with the lecturer. Students remained attentive during the activity and many participated in answering informal questions posed by the lecturer as the map-making session progressed. After the last session, students completed an anonymous, voluntary questionnaire (response rate of 78%). The majority of students found the draw-along maps useful (94%) and believed that their use should be continued in the future (93%). A significant increase (P < 0.001) was found in the test results of student cohorts who were given the current intervention compared to cohorts from previous years who were given mind maps as handouts only or had no intervention. The use of the draw-along mapping sessions was successful in focusing students during large class lectures while also providing them with a useful tool for their studies.

  11. Use of remote sensing techniques for geological hazard surveys in vegetated urban regions. [multispectral imagery for lithological mapping

    NASA Technical Reports Server (NTRS)

    Stow, S. H.; Price, R. C.; Hoehner, F.; Wielchowsky, C.

    1976-01-01

    The feasibility of using aerial photography for lithologic differentiation in a heavily vegetated region is investigated using multispectral imagery obtained from LANDSAT satellite and aircraft-borne photography. Delineating and mapping of localized vegetal zones can be accomplished by the use of remote sensing because a difference in morphology and physiology results in different natural reflectances or signatures. An investigation was made to show that these local plant zones are affected by altitude, topography, weathering, and gullying; but are controlled by lithology. Therefore, maps outlining local plant zones were used as a basis for lithologic map construction.

  12. A methodology for small scale rural land use mapping in semi-arid developing countries using orbital imagery. Part 5: Experimental and operational techniques of mapping land use

    NASA Technical Reports Server (NTRS)

    Vangenderen, J. L. (Principal Investigator); Lock, B. F.

    1976-01-01

    The author has identified the following significant results. The scope of the preprocessing techniques was restricted to standard material from the EROS Data Center, accompanied by some enlarging procedures and the use of the diazo process. Investigation has shown that the most appropriate sampling strategy for this study is the stratified random technique. A viable sampling procedure, together with a method for determining the minimum number of sample points needed to test the results of any interpretation, is presented.

  13. Standard deviation of luminance distribution affects lightness and pupillary response.

    PubMed

    Kanari, Kei; Kaneko, Hirohiko

    2014-12-01

    We examined whether the standard deviation (SD) of luminance distribution serves as information of illumination. We measured the lightness of a patch presented in the center of a scrambled-dot pattern while manipulating the SD of the luminance distribution. Results showed that lightness decreased as the SD of the surround stimulus increased. We also measured pupil diameter while viewing a similar stimulus. The pupil diameter decreased as the SD of luminance distribution of the stimuli increased. We confirmed that these results were not obtained because of the increase of the highest luminance in the stimulus. Furthermore, results of field measurements revealed a correlation between the SD of luminance distribution and illuminance in natural scenes. These results indicated that the visual system refers to the SD of the luminance distribution in the visual stimulus to estimate the scene illumination. PMID:25606770
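    The manipulation described, varying the SD of the surround's luminance distribution while holding its mean fixed, can be sketched with synthetic Gaussian luminance samples; the values below are illustrative, not the study's actual stimuli:

    ```python
    import random
    import statistics

    def luminance_sample(mean_lum, sd, n=1000, seed=0):
        """Synthetic luminance values (cd/m^2) for a scrambled-dot surround,
        drawn from a Gaussian and clipped at zero (negative luminance is unphysical)."""
        rng = random.Random(seed)
        return [max(0.0, rng.gauss(mean_lum, sd)) for _ in range(n)]

    # Two surrounds with (approximately) the same mean but different spread:
    surround_low = luminance_sample(50.0, 5.0, seed=1)
    surround_high = luminance_sample(50.0, 20.0, seed=2)

    low_mean, high_mean = statistics.mean(surround_low), statistics.mean(surround_high)
    low_spread, high_spread = statistics.stdev(surround_low), statistics.stdev(surround_high)
    ```

    The study's finding is that patch lightness (and pupil diameter) decreases as this spread increases, with the mean held constant, implicating the SD itself as an illumination cue.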

  14. Luminance cues constrain chromatic blur discrimination in natural scene stimuli.

    PubMed

    Sharman, Rebecca J; McGraw, Paul V; Peirce, Jonathan W

    2013-01-01

    Introducing blur into the color components of a natural scene has very little effect on its percept, whereas blur introduced into the luminance component is very noticeable. Here we quantify the dominance of luminance information in blur detection and examine a number of potential causes. We show that the interaction between chromatic and luminance information is not explained by reduced acuity or spatial resolution limitations for chromatic cues, the effective contrast of the luminance cue, or chromatic and achromatic statistical regularities in the images. Regardless of the quality of chromatic information, the visual system gives primacy to luminance signals when determining edge location. In natural viewing, luminance information appears to be specialized for detecting object boundaries while chromatic information may be used to determine surface properties.

  16. Study of USGS/NASA land use classification system. [compatibility of land use classification system with computer processing techniques employed for land use mapping from ERTS data

    NASA Technical Reports Server (NTRS)

    Spann, G. W.; Faust, N. L.

    1974-01-01

    It is known from several previous investigations that many categories of land-use can be mapped via computer processing of Earth Resources Technology Satellite data. The results are presented of one such experiment using the USGS/NASA land-use classification system. Douglas County, Georgia, was chosen as the test site for this project. It was chosen primarily because of its recent rapid growth and future growth potential. Results of the investigation indicate an overall land-use mapping accuracy of 67% with higher accuracies in rural areas and lower accuracies in urban areas. It is estimated, however, that 95% of the State of Georgia could be mapped by these techniques with an accuracy of 80% to 90%.

  17. Attaining Meaningful Learning of Concepts in Genetics and Ecology: An Examination of the Potency of the Concept-Mapping Technique.

    ERIC Educational Resources Information Center

    Okebukola, Peter Akinsola

    1990-01-01

    Discussed is the relationship of concept-mapping by students to the meaningful learning of genetics and ecological concepts. The implications of these results for teacher education in biology are addressed. (KR)

  18. Peripheral visual response time and retinal luminance-area relations

    NASA Technical Reports Server (NTRS)

    Haines, R. F.

    1975-01-01

    Experiments were undertaken to elucidate the stimulus luminance-retinal area relationship that underlies response time (RT) behavior. Mean RT was significantly faster to stimuli imaged beyond about 70 deg of arc from the fovea when their luminance was increased by an amount equal to the foveal stimulus luminance multiplied by the cosine of the angle between the peripheral stimuli and the line of sight. This and additional data are discussed in relation to previous psychophysical data and to possible response mechanisms.
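
The compensation rule reported above (peripheral luminance equal to the foveal luminance plus the foveal luminance times the cosine of the eccentricity angle) can be written directly. This is an illustrative reading of the abstract's statement, applicable to the far-peripheral stimuli (beyond about 70 deg) it describes.

```python
import math

def compensated_luminance(foveal_luminance, eccentricity_deg):
    """Peripheral stimulus luminance that equated mean response times:
    the foveal luminance increased by foveal_luminance * cos(theta),
    where theta is the angle between the peripheral stimulus and the
    line of sight."""
    theta = math.radians(eccentricity_deg)
    return foveal_luminance * (1.0 + math.cos(theta))
```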

  19. Contribution of a luminance-dependent S-cone mechanism to non-assimilative color spreading in the watercolor configuration

    PubMed Central

    Kimura, Eiji; Kuroki, Mikako

    2014-01-01

    In the watercolor configuration composed of wavy double contours, both assimilative and non-assimilative color spreading have been demonstrated depending on the luminance conditions of the inner and outer contours (IC and OC, respectively). This study investigated how the induced color in the watercolor configuration was modulated by combinations of the IC and the OC color, particularly addressing non-assimilative color spreading. In two experiments, the IC color was fixed to a certain color and combined with various colors selected from a hue circle centered at the background white color. Color spreading was quantified with a chromatic cancelation technique. Results showed that both the magnitude and the apparent hue of the color spreading were largely changed with the luminance condition. When the IC contrast (Weber contrast of the IC to the background luminance) was smaller in size than the OC contrast (higher IC luminance condition), the color spreading was assimilative. When the luminance condition was reversed and the IC contrast was greater than the OC contrast (lower IC luminance condition), the color spreading was non-assimilative and yellowish. When the color spreading was analyzed in terms of cone-opponent excitations, the results were consistent with the interpretation that the color spreading is explainable by a combination of chromatic diffusion from the IC and chromatically opponent induction from the OC. The color spreading in the higher IC luminance condition mainly reflected the chromatic diffusion by both (L–M) and S cone-opponent mechanisms. The non-assimilative color spreading in the lower IC luminance condition mostly reflected S-cone mediated opponent induction and the contribution of −S inducing mechanisms was differentially large. These findings provided several constraints on possible visual mechanisms underlying the watercolor effect. PMID:25538602
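
The luminance conditions above turn on the sign and size of Weber contrast. A toy sketch follows; the classification is a simplified reading of the abstract, not the authors' model.

```python
def weber_contrast(l_contour, l_background):
    """Weber contrast of a contour relative to the background luminance."""
    return (l_contour - l_background) / l_background

def spreading_type(l_inner, l_outer, l_background):
    """Simplified reading of the abstract: spreading is assimilative when
    the inner contour (IC) contrast is smaller in size than the outer
    contour (OC) contrast, and non-assimilative when it is greater."""
    ic = abs(weber_contrast(l_inner, l_background))
    oc = abs(weber_contrast(l_outer, l_background))
    return "assimilative" if ic < oc else "non-assimilative"
```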

  20. Computed tomography landmark-based semi-automated mesh morphing and mapping techniques: generation of patient specific models of the human pelvis without segmentation.

    PubMed

    Salo, Zoryana; Beek, Maarten; Wright, David; Whyne, Cari Marisa

    2015-04-13

    Current methods for the development of pelvic finite element (FE) models generally are based upon specimen specific computed tomography (CT) data. This approach has traditionally required segmentation of CT data sets, which is time consuming and necessitates high levels of user intervention due to the complex pelvic anatomy. The purpose of this research was to develop and assess CT landmark-based semi-automated mesh morphing and mapping techniques to aid the generation and mechanical analysis of specimen-specific FE models of the pelvis without the need for segmentation. A specimen-specific pelvic FE model (source) was created using traditional segmentation methods and morphed onto a CT scan of a different (target) pelvis using a landmark-based method. The morphed model was then refined through mesh mapping by moving the nodes to the bone boundary. A second target model was created using traditional segmentation techniques. CT intensity based material properties were assigned to the morphed/mapped model and to the traditionally segmented target models. Models were analyzed to evaluate their geometric concurrency and strain patterns. Strains generated in a double-leg stance configuration were compared to experimental strain gauge data generated from the same target cadaver pelvis. CT landmark-based morphing and mapping techniques were efficiently applied to create a geometrically multifaceted specimen-specific pelvic FE model, which was similar to the traditionally segmented target model and better replicated the experimental strain results (R^2 = 0.873). This study has shown that mesh morphing and mapping represents an efficient validated approach for pelvic FE model generation without the need for segmentation.
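
The agreement with strain-gauge data is quantified with a coefficient of determination. For reference, a minimal implementation of R^2 between predicted and measured strains (the standard definition; the paper may compute it from a regression fit):

```python
import numpy as np

def r_squared(predicted, observed):
    """Coefficient of determination between predicted (FE model) and
    observed (strain gauge) values."""
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    ss_res = np.sum((observed - predicted) ** 2)   # residual sum of squares
    ss_tot = np.sum((observed - observed.mean()) ** 2)  # total variance
    return 1.0 - ss_res / ss_tot
```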

  1. A novel flood detection mapping using multi-temporal RADARSAT-2 and TerraSAR-X images through an optimized Taguchi based classification technique

    NASA Astrophysics Data System (ADS)

    Pradhan, Biswajeet

    2016-07-01

    Floods are among the most common natural disasters in Malaysia, and preparation of an accurate flood inventory map is the basic step in flood risk management. Flood detection remains a significantly complex process due to cloud coverage in tropical areas, especially in Malaysia. Moreover, most available techniques are expensive and time-consuming. Therefore, the present study presents an efficient approach to identify flooded areas by means of multi-temporal RADARSAT-2 and single TerraSAR-X images. The proposed framework was tested at two tropical areas in Malaysia, Kelantan (2014 flood) and Kuala Terengganu (2009 flood), to map the flooded areas. Multi-temporal RADARSAT-2, single TerraSAR-X, and Landsat images were classified using a rule-based object-oriented technique. Then, different levels of image segmentation were performed to distinguish the boundaries of objects of various dimensions and scales. Finally, a novel Taguchi-based method was employed to optimize the segmentation parameters. After segmentation, the rules were defined and the images were classified to produce an accurate flood inventory map for both the 2014 Kelantan flood event and the 2009 flood event in Kuala Terengganu. The classification results were then assessed through the confusion matrix. In this research, the combination of techniques and the optimization approach was applied as a pioneering approach to flood detection. The flood inventory map obtained using the proposed approach demonstrates the efficiency of the methodology, which can be applied by other researchers and decision makers to construct flood inventory maps. Keywords: RADARSAT-2; multispectral imagery; flood detection; Taguchi; rule-based classification
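
The final accuracy-assessment step above, measuring classification results through the confusion matrix, can be sketched as follows (a generic implementation; the class labels and values are illustrative, not the study's data):

```python
import numpy as np

def confusion_matrix(truth, predicted, n_classes):
    """Confusion matrix of a classified map against reference labels."""
    m = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(truth, predicted):
        m[t, p] += 1
    return m

def overall_accuracy(m):
    """Fraction of correctly classified samples (trace over total)."""
    return m.trace() / m.sum()
```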

  2. Luminance range compression for video film digitizers.

    PubMed

    Bednarek, D R; Rudin, S; Wong, R

    1991-01-01

    Video cameras are used in many film digitization and teleradiology systems. However, the density range of medical radiographs often exceeds the dynamic range of the camera, and all diagnostic information in the original image may not be captured. Information in both the high and low density areas of the film can be captured in a single video frame if the transmitted luminance range of the radiograph is reduced. This can be accomplished by spatially modulating the back illumination of the film such that areas of lesser density receive less illumination while areas of greater density receive greater illumination. In this work, the use of a video monitor is shown to be an effective means to provide spatially modulated light for compressing the transmitted luminance range and thereby expanding the apparent dynamic range of the video camera. A simple computer-interfaced video feedback system that determines the appropriate compression mask and a scheme for linearization of system response are described. This system provides an interactive means for control of the degree of range compression. PMID:2046605
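
The masking idea can be sketched numerically. Assuming film transmission T = 10^-D and a mask whose brightness grows with local density, a fraction of the density range is cancelled before the camera sees it. The exponent `alpha` below is an assumed free parameter standing in for the interactive control of the degree of compression described in the abstract.

```python
import numpy as np

def transmitted_luminance(density, illumination):
    """Light transmitted through film of optical density D under a given
    back illumination: illumination * 10**(-D)."""
    return illumination * 10.0 ** (-density)

def compression_mask(density, alpha=0.5):
    """Relative back-illumination pattern: brighter behind dense regions,
    dimmer behind light ones. alpha is the (assumed) fraction of the
    density range cancelled by the mask."""
    mask = 10.0 ** (alpha * density)
    return mask / mask.max()  # normalize to the display's maximum output
```

With the mask applied, the transmitted luminance range shrinks from 10^(D_max - D_min) to 10^((1 - alpha)(D_max - D_min)), which is what lets a single video frame capture both high- and low-density regions.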

  3. Characterization of a subwavelength-scale 3D void structure using the FDTD-based confocal laser scanning microscopic image mapping technique.

    PubMed

    Choi, Kyongsik; Chon, James W; Gu, Min; Lee, Byoungho

    2007-08-20

    In this paper, a simple confocal laser scanning microscopic (CLSM) image mapping technique based on finite-difference time-domain (FDTD) calculation is proposed and evaluated for characterization of a subwavelength-scale three-dimensional (3D) void structure fabricated inside a polymer matrix. The FDTD simulation adopts a focused Gaussian beam incident wave, Berenger's perfectly matched layer absorbing boundary condition, and the angular spectrum analysis method. Through well-matched simulation and experimental results for the xz-scanned 3D void structure, we first characterize the exact position and the topological shape factor of the subwavelength-scale void structure, which was fabricated by a tightly focused ultrashort pulse laser. The proposed FDTD-based CLSM image mapping technique can be widely applied, from 3D near-field microscopic imaging, optical trapping, and evanescent-wave phenomena to state-of-the-art bio- and nanophotonics.

  4. Adaptive display luminance for viewing smartphones under low illuminance.

    PubMed

    Na, Nooree; Suk, Hyeon-Jeong

    2015-06-29

    The study investigates the optimal display luminance for viewing smartphones under low illuminance and proposes an adaptive display model in which display luminance changes gradually with viewing time. The display starts at a fairly low luminance of 10 cd/m2 and, after 10 seconds, the luminance increases slowly over 20 seconds until it reaches 40 cd/m2, which is then maintained. To develop the model, an experiment was conducted to identify the optimal luminance for initial viewing and for continuous viewing, as well as the speed of the luminance change. To validate the model, users' subjective judgments and activation of the alpha rhythm were observed; the results confirmed the superiority of the adaptive display luminance over current display luminance in terms of physiological comfort and psychological satisfaction. By applying the model to diverse types of display devices, this study is expected to contribute to comfortable use of displays at night under low illuminance. PMID:26191702
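
The luminance schedule described above can be written as a piecewise function of viewing time. A linear ramp is assumed here; the abstract only says the luminance increases slowly.

```python
def adaptive_luminance(t_seconds, start=10.0, end=40.0, hold=10.0, ramp=20.0):
    """Display luminance (cd/m^2) as a function of viewing time: hold at
    10 cd/m^2 for the first 10 s, ramp to 40 cd/m^2 over the next 20 s,
    then maintain that level."""
    if t_seconds <= hold:
        return start
    if t_seconds >= hold + ramp:
        return end
    # linear interpolation across the ramp interval (assumed shape)
    return start + (t_seconds - hold) / ramp * (end - start)
```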

  6. MEASUREMENT OF THE BROAD-LINE REGION SIZE IN A LUMINOUS MACHO QUASAR

    SciTech Connect

    Chelouche, Doron; Daniel, Eliran; Kaspi, Shai E-mail: shai@wise.tau.ac.il

    2012-05-10

    We measure the broad emission line region (BLR) size of a luminous, L ≈ 10^47 erg s^-1, high-z quasar using broadband photometric reverberation mapping. To this end, we analyze ≈7.5 years of photometric data for MACHO 13.6805.324 (z ≈ 1.72) in the B and R MACHO bands and find a time delay of 180 ± 40 days in the rest frame of the object. Given the spectral-variability properties of high-z quasars, we associate this lag with the rest-UV iron emission blends. Our findings are consistent with a simple extrapolation of the BLR size-luminosity relation in local active galactic nuclei to the more luminous, high-z quasar population. Long-term spectroscopic monitoring of MACHO 13.6805.324 may be able to directly measure the line-to-continuum time delay and test our findings.
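
Converting between observed and rest-frame lags is a one-line calculation: cosmological time dilation stretches the observed delay by a factor of (1 + z). For z ≈ 1.72, the quoted 180-day rest-frame lag corresponds to an observed lag of roughly 490 days.

```python
def rest_frame_lag(observed_lag_days, z):
    """Rest-frame time delay: divide the observed lag by (1 + z)."""
    return observed_lag_days / (1.0 + z)
```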

  7. Near surface geophysics techniques and geomorphological approach to reconstruct the hazard cave map in historical and urban areas

    NASA Astrophysics Data System (ADS)

    Lazzari, M.; Loperte, A.; Perrone, A.

    2009-04-01

    This work, carried out with an integrated methodological approach, focuses on the use of near-surface geophysical techniques, such as ground penetrating radar (GPR) and electrical resistivity tomography (ERT), together with geomorphological analysis, to reconstruct cave distribution and geometry in an urban context and, in particular, in historical centres. Many historical Mediterranean sites grew from an original rupestrian settlement whose memory later generations have often lost, as new urban areas were built over them, burying any trace. The interaction during recent centuries between human activity (cave excavation, the birth and growth of an urban area) and the character of the natural environment has progressively increased the hazard and vulnerability of several sites. Reconstructing a detailed map of cave distribution is the first step in defining anthropic and geomorphological hazard in urban areas, and a fundamental basis for planning and risk assessment. The integrated near-surface geophysical and geomorphological techniques were applied to the case study of the hilltop town of Tursi and its older nucleus, Rabatana, located in the south-western sector of Basilicata (southern Italy), an interesting example of the deep bond between natural and man-made environments and a precious cultural heritage. The history of this settlement has always been deeply connected with the characteristics of the neighbouring environment, and the first settlement appears to have been built by excavating the slopes of the sandy relief. It was a typical rupestrian settlement, in which meteoric water was stored in cisterns excavated on the slopes. During recent centuries, increasing development of the territory produced an increase in cave excavation in the Tursi-Rabatana urban area. 
To reconstruct the extremely complex near-surface hypogeal environment excavated in the sandy layers, a geophysical

  8. Visual Comfort Analysis of Innovative Interior and Exterior Shading Systems for Commercial Buildings using High Resolution Luminance Images

    SciTech Connect

    Konis, Kyle; Lee, Eleanor; Clear, Robert

    2011-01-11

    The objective of this study was to explore how calibrated high dynamic range (HDR) images (luminance maps) acquired in real world daylit environments can be used to characterize, evaluate, and compare visual comfort conditions of innovative facade shading and light-redirecting systems. Detailed (1536 x 1536 pixel) luminance maps were time-lapse acquired from two view positions in an unoccupied full scale testbed facility. These maps were analyzed using existing visual comfort metrics to quantify how innovative interior and exterior shading systems compare to conventional systems under real sun and sky conditions over a solstice-to-solstice test interval. The results provide a case study in the challenges and potential of methods of visualizing, evaluating and summarizing daily and seasonal variation of visual comfort conditions computed from large sets of image data.
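
Once a calibrated luminance map is in hand, simple summary statistics are the starting point for comfort analysis. A minimal sketch (the 2000 cd/m^2 threshold below is a common rule of thumb for discomfort sources, not the specific visual comfort metrics used in the study):

```python
import numpy as np

def luminance_summary(lum_map, glare_threshold=2000.0):
    """Basic statistics of a luminance map in cd/m^2: mean, maximum, and
    the fraction of pixels exceeding a discomfort threshold."""
    lum = np.asarray(lum_map, dtype=float)
    return {
        "mean": float(lum.mean()),
        "max": float(lum.max()),
        "frac_above_threshold": float((lum > glare_threshold).mean()),
    }
```

Applied to each frame of a time-lapse sequence, statistics like these give the daily and seasonal variation the study set out to visualize.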

  9. Three-Dimensional Mapping of Ozone-Induced Injury in the Nasal Airways of Monkeys Using Magnetic Resonance Imaging and Morphometric Techniques

    SciTech Connect

    Carey, Stephen A.; Minard, Kevin R.; Trease, Lynn L.; Wagner, James G.; Garcia, Guilherme M.; Ballinger, Carol A.; Kimbell, Julia; Plopper, Charles G.; Corley, Rick A.; Postlewait, Ed; Harkema, Jack R.

    2007-03-01

    Age-related changes in gross and microscopic structure of the nasal cavity can alter local tissue susceptibility as well as the dose of inhaled toxicant delivered to susceptible sites. This article describes a novel method for the use of magnetic resonance imaging, 3-dimensional airway modeling, and morphometric techniques to characterize the distribution and magnitude of ozone-induced nasal injury in infant monkeys. Using this method, we are able to generate age-specific, 3-dimensional, epithelial maps of the nasal airways of infant Rhesus macaques. The principal nasal lesions observed in this primate model of ozone-induced nasal toxicology were neutrophilic rhinitis, along with necrosis and exfoliation of the epithelium lining the anterior maxilloturbinate. These lesions, induced by acute or cyclic (episodic) exposures, were examined by light microscopy, quantified by morphometric techniques, and mapped on 3-dimensional models of the nasal airways. Here, we describe the histopathologic, imaging, and computational biology methods developed to efficiently characterize, localize, quantify, and map these nasal lesions. By combining these techniques, the location and severity of the nasal epithelial injury were correlated with epithelial type, nasal airway geometry, and local biochemical and molecular changes on an individual animal basis. These correlations are critical for accurate predictive modeling of exposure-dose-response relationships in the nasal airways, and subsequent extrapolation of nasal findings in animals to humans for developing risk assessment.

  10. OBSCURATION BY GAS AND DUST IN LUMINOUS QUASARS

    SciTech Connect

    Usman, S. M.; Murray, S. S.; Hickox, R. C.; Brodwin, M.

    2014-06-10

    We explore the connection between absorption by neutral gas and extinction by dust in mid-infrared (IR) selected luminous quasars. We use a sample of 33 quasars at redshifts 0.7 < z ≲ 3 in the 9 deg^2 Boötes multiwavelength survey field that are selected using Spitzer Space Telescope Infrared Array Camera colors and are well-detected as luminous X-ray sources (with >150 counts) in Chandra observations. We divide the quasars into dust-obscured and unobscured samples based on their optical to mid-IR color, and measure the neutral hydrogen column density N_H through fitting of the X-ray spectra. We find that all subsets of quasars have consistent power law photon indices Γ ≈ 1.9 that are uncorrelated with N_H. We classify the quasars as gas-absorbed or gas-unabsorbed if N_H > 10^22 cm^-2 or N_H < 10^22 cm^-2, respectively. Of 24 dust-unobscured quasars in the sample, only one shows clear evidence for significant intrinsic N_H, while 22 have column densities consistent with N_H < 10^22 cm^-2. In contrast, of the nine dust-obscured quasars, six show evidence for intrinsic gas absorption, and three are consistent with N_H < 10^22 cm^-2. We conclude that dust extinction in IR-selected quasars is strongly correlated with significant gas absorption as determined through X-ray spectral fitting. These results suggest that obscuring gas and dust in quasars are generally co-spatial, and confirm the reliability of simple mid-IR and optical photometric techniques for separating quasars based on obscuration.
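
The two-way classification used above is a pair of simple cuts. A sketch (the boolean dust flag stands in for the optical to mid-IR color criterion, which requires photometry not reproduced here):

```python
N_H_THRESHOLD = 1e22  # cm^-2, the column-density cut used in the abstract

def classify_quasar(n_h, dust_obscured):
    """Place a quasar in the gas/dust obscuration grid: gas-absorbed vs.
    gas-unabsorbed from the fitted column density, dust-obscured vs.
    dust-unobscured from the optical to mid-IR color."""
    gas = "gas-absorbed" if n_h > N_H_THRESHOLD else "gas-unabsorbed"
    dust = "dust-obscured" if dust_obscured else "dust-unobscured"
    return dust, gas
```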

  11. Application of Geophysical Techniques to Identify and Map the Benthic Habitat and Sub- bottom Sediments of Delaware Bay

    NASA Astrophysics Data System (ADS)

    Madsen, J.; Wilson, B.; Carter, D.

    2006-12-01

    The Coastal Program of the Delaware Department of Natural Resources and Environmental Control is engaged in a bottom and sub-bottom imaging project to identify and map the benthic habitat and sub-bottom sediments of the Delaware Bay. The project, now in the third year of its five-year implementation, integrates the use of three types of acoustical systems: RoxAnn seabed classification, chirp sub-bottom profiling, and multi-beam bathymetric mapping. Verification of the acoustic data with bottom and sub-bottom sediments is performed through the collection of grab and core samples and underwater video images. All this information is being compiled into a GIS database to enable coastal decision makers to effectively manage this estuarine resource. The RoxAnn system measures bottom hardness and roughness along a ship's track, and when correlated with analyses of ground-truth grab samples and video, can be used to classify bottom sediments. Regional maps of sediment type based on the RoxAnn classification are constructed by interpolating between track lines. This project has identified the spatial extent and relative density of oyster shell on the bay bottom, while also determining the composition of regional surrounding sediments. Sub-bottom profiles, collected using an Edgetech X-STAR chirp sonar system, are being used to map potential, and past, offshore sand borrow sites that can be/have been used for beach replenishment and to study the deposition and/or erosion of sediments and the evolution of the Delaware Estuary over the past 10,000 years. In locations along Delaware Bay where beach replenishment is necessary, the chirp data has been integrated with additional information from sediment cores to develop maps of suitable sand deposits based upon location, thickness, overburden, and grain size. 
The sub-bottom data has been used to map the Holocene/Pre-Holocene boundary and to constrain the paleo-environmental setting of the recent transgressive sea-level sequence in the

  12. Efficient visual system processing of spatial and luminance statistics in representational and non-representational art

    NASA Astrophysics Data System (ADS)

    Graham, Daniel J.; Friedenberg, Jay D.; Rockmore, Daniel N.

    2009-02-01

    An emerging body of research suggests that artists consistently seek modes of representation that are efficiently processed by the human visual system, and that these shared properties could leave statistical signatures. In earlier work, we showed evidence that perceived similarity of representational art could be predicted using intensity statistics to which the early visual system is attuned, though semantic content was also found to be an important factor. Here we report two studies that examine the visual perception of similarity. We test a collection of non-representational art, which we argue possesses useful statistical and semantic properties, in terms of the relationship between image statistics and basic perceptual responses. We find two simple statistics, both expressed as single values, that predict nearly a third of the overall variance in similarity judgments of abstract art. An efficient visual system could make a quick and reasonable guess as to the relationship of a given image to others (i.e., its context) by extracting these basic statistics early in the visual stream, and this may hold for natural scenes as well as art. But a major component of many types of art is representational content. In a second study, we present findings related to efficient representation of natural scene luminances in landscapes by a well-known painter. We show empirically that elements of contemporary approaches to high-dynamic-range tone mapping, which are themselves deeply rooted in an understanding of early visual system coding, are present in the way Vincent Van Gogh transforms scene luminances into painting luminances. We argue that global tone mapping functions are a useful descriptor of an artist's perceptual goals with respect to global illumination and we present evidence that mapping the scene to a painting with different implied lighting properties produces a less efficient mapping. 
Together, these studies suggest that statistical regularities in art can shed
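
A global tone-mapping function of the kind discussed above is spatially uniform: every pixel passes through the same compressive curve from scene luminance to output luminance. A generic logarithmic example (illustrative only; not the mapping recovered from the paintings):

```python
import numpy as np

def global_tone_map(scene_luminance, display_max=1.0):
    """Spatially uniform (global) tone mapping: logarithmic compression
    of scene luminance into the display/painting range."""
    lum = np.asarray(scene_luminance, dtype=float)
    compressed = np.log1p(lum)              # compressive, monotonic curve
    return display_max * compressed / compressed.max()
```

The key property is that the curve is monotonic (ordering of luminances is preserved) while the output dynamic range is far smaller than the scene's, which is exactly what a painter working with limited pigment reflectances must achieve.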

  14. Topographic mapping

    USGS Publications Warehouse

    ,

    2008-01-01

    The U.S. Geological Survey (USGS) produced its first topographic map in 1879, the same year it was established. Today, more than 100 years and millions of map copies later, topographic mapping is still a central activity for the USGS. The topographic map remains an indispensable tool for government, science, industry, and leisure. Much has changed since early topographers traveled the unsettled West and carefully plotted the first USGS maps by hand. Advances in survey techniques, instrumentation, and design and printing technologies, as well as the use of aerial photography and satellite data, have dramatically improved mapping coverage, accuracy, and efficiency. Yet cartography, the art and science of mapping, may never before have undergone change more profound than today.

  15. Shell nebulae around luminous evolved stars

    NASA Technical Reports Server (NTRS)

    Dufour, Reginald J.

    1989-01-01

    Shell nebulae around luminous Population I Wolf-Rayet, Of, and P-Cygni stars are astrophysically interesting since they are indicators of pre-supernova mass loss and of how such massive stars prepare their surrounding interstellar medium prior to explosion. Some twenty-odd such nebulae are known, for which detailed study of their morphological and spectroscopic characteristics has only begun in this decade. In this paper, some of these characteristics are reviewed in general, and new observations are reported. Emphasis is placed on several 'prototype' objects (NGC 7635, NGC 2359, NGC 6888, and the Eta Carinae condensations) to illustrate the varied nature of massive-star mass loss, the physics of their winds and shell ejecta, and related nucleosynthesis effects in the compositions of the winds and shells.

  16. Identifying the occurrence of lightning and transient luminous events by nadir spectrophotometric observation

    NASA Astrophysics Data System (ADS)

    Adachi, Toru; Sato, Mitsuteru; Ushio, Tomoo; Yamazaki, Atsushi; Suzuki, Makoto; Kikuchi, Masayuki; Takahashi, Yukihiro; Inan, Umran S.; Linscott, Ivan; Hobara, Yasuhide; Frey, Harald U.; Mende, Stephen B.; Chen, Alfred B.; Hsu, Rue-Ron; Kusunoki, Kenichi

    2016-07-01

    We propose a new technique to identify the occurrence of lightning and transient luminous events (TLEs) using multicolor photometric data obtained by spaceborne nadir measurements. We estimate the spectral characteristics of lightning and TLEs by converting the optical data obtained by the ISUAL limb experiment to the GLIMS nadir geometry. We find that the estimated spectral shapes of TLE-accompanied lightning are clearly different from those of pure lightning. The obtained results show that (1) the intensity of FUV signals and (2) the ratio of 337 nm to red (609-753 nm) spectral irradiance are useful to identify the occurrence of TLEs. The occurrence probabilities of TLEs are 10%, 40%, and 80% for lightning events having 337/red spectral irradiance ratios of 0.95, 2.95, and 14.79, respectively. Using the 60% criterion of the 337/red ratio and the existence of FUV emissions, we classify the 1039 GLIMS-observed lightning events into 828 pure lightning and 211 TLE-accompanied lightning events. Since the GLIMS trigger level is adjusted to observe extremely bright events, the occurrence probability of TLEs obtained here most probably reflects the characteristics of energetic lightning. The estimated global map is consistent with previously determined distributions: the highest activities of lightning and TLEs are found over the North/South American continents, the African continent, and Asian maritime regions. While the absolute occurrence numbers of pure lightning and TLE-accompanied lightning maximize in the equatorial region, the occurrence probability of TLEs may increase somewhat in the mid-latitude region. Since the occurrence probabilities of TLEs are higher over the ocean than over land, it is likely that the GLIMS-observed TLEs are due primarily to elves, which tend to occur more frequently over the ocean.
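
The two-condition classification described above can be sketched directly. The ratio value corresponding to the 60% occurrence-probability criterion is not stated in the abstract, so it is left as a parameter here.

```python
def classify_lightning(fuv_detected, ratio_337_red, ratio_threshold):
    """Flag an event as TLE-accompanied when far-UV emission is present
    and the 337 nm / red (609-753 nm) spectral irradiance ratio exceeds
    the chosen occurrence-probability criterion; otherwise treat it as
    pure lightning."""
    return bool(fuv_detected and ratio_337_red >= ratio_threshold)
```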

  17. THE MOST LUMINOUS GALAXIES DISCOVERED BY WISE

    SciTech Connect

    Tsai, Chao-Wei; Eisenhardt, Peter R. M.; Stern, Daniel; Moustakas, Leonidas A.; Wu, Jingwen; Wright, Edward L.; Assef, Roberto J.; Blain, Andrew W.; Bridge, Carrie R.; Sayers, Jack; Benford, Dominic J.; Leisawitz, David T.; Cutri, Roc M.; Masci, Frank J.; Yan, Lin; Griffith, Roger L.; Jarrett, Thomas H.; Lonsdale, Carol J.; Petty, Sara M.; Stanford, S. Adam; and others

    2015-06-01

    We present 20 Wide-field Infrared Survey Explorer (WISE)-selected galaxies with bolometric luminosities L_bol > 10^14 L_☉, including five with infrared luminosities L_IR ≡ L_(rest 8–1000 μm) > 10^14 L_☉. These “extremely luminous infrared galaxies,” or ELIRGs, were discovered using the “W1W2-dropout” selection criteria, which require marginal or non-detections at 3.4 and 4.6 μm (W1 and W2, respectively) but strong detections at 12 and 22 μm in the WISE survey. Their spectral energy distributions are dominated by emission at rest-frame 4–10 μm, suggesting that hot dust with T_d ∼ 450 K is responsible for the high luminosities. These galaxies are likely powered by highly obscured active galactic nuclei (AGNs), and there is no evidence suggesting that these systems are beamed or lensed. We compare this WISE-selected sample with 116 optically selected quasars that reach the same L_bol level, corresponding to the most luminous unobscured quasars in the literature. We find that the rest-frame 5.8 and 7.8 μm luminosities of the WISE-selected ELIRGs can be 30%–80% higher than those of the unobscured quasars. The existence of AGNs with L_bol > 10^14 L_☉ at z > 3 suggests that these supermassive black holes were born with large masses, or have undergone very rapid mass assembly. For black hole seed masses of ∼10^3 M_☉, either sustained super-Eddington accretion is needed, or the radiative efficiency must be <15%, implying a black hole with slow spin, possibly due to chaotic accretion.

  18. ISM Properties of Local Luminous Infrared Galaxies

    NASA Astrophysics Data System (ADS)

    Diaz-Santos, Tanio; Armus, Lee; Stierwalt, Sabrina; Elbaz, David; Malhotra, Sangeeta

    2015-08-01

    Luminous and Ultra-luminous Infrared Galaxies ((U)LIRGs) represent the most important galaxy population at redshifts z > 1, as they account for more than 50% of all star formation produced in the Universe at those epochs and encompass what is called the main sequence (MS) of star-forming galaxies. Investigating their local counterparts, low-luminosity LIRGs, is therefore key to understanding the physical properties and phases of their interstellar medium (ISM), a task that is rather challenging in the distant Universe. On the other hand, high-z starbursting systems (out of the MS), although small in number, account for a modest yet still significant fraction of the total energy production. Here I present far-IR line emission observations ([CII]158μm, [OI]63μm, [OIII]88μm and [NII]122μm) obtained with Herschel for two large samples of nearby LIRGs: the Great Observatories All-sky LIRG Survey (GOALS), a sample of more than 240 relatively cold LIRGs, and a survey of 30 LIRGs selected to have very warm mid- to far-IR colors, suggestive of an ongoing intense nuclear starburst and/or an AGN. Using photo-dissociation region (PDR) models we derive the basic characteristics of the ISM (ionization intensity and density) for both samples and study differences among systems as a function of AGN activity, merger stage, dust temperature, and compactness of the starburst, parameters that are thought to control the life cycle of galaxies moving in and out of the MS, locally and at high-z.

  19. Covariation of Color and Luminance Facilitate Object Individuation in Infancy

    ERIC Educational Resources Information Center

    Woods, Rebecca J.; Wilcox, Teresa

    2010-01-01

    The ability to individuate objects is one of our most fundamental cognitive capacities. Recent research has revealed that when objects vary in color or luminance alone, infants fail to individuate those objects until 11.5 months. However, color and luminance frequently covary in the natural environment, thus providing a more salient and reliable…

  20. Effects of luminance and spatial noise on interferometric contrast sensitivity

    NASA Astrophysics Data System (ADS)

    Coletta, Nancy J.; Sharma, Vineeta

    1995-10-01

    Optical properties of the eye contribute to the reduced visibility of spatial patterns at low luminance. To study the limits of spatial vision when optical factors are minimized, we measured contrast-sensitivity functions (CSF's) for 543.5-nm laser interference fringes imaged directly on the retina. Measurements were made in the fovea at four luminance levels, ranging from 0.3 to 300 photopic trolands (Td). At each luminance the fraction of coherent light in the stimulus pattern was varied to assess the masking effects of laser speckle, which is visible as spatial noise in fields of coherent light. Compared with published CSF's obtained under natural viewing conditions, interferometric CSF's were similar in height but broader, with the range of visibility being extended to higher spatial frequencies. The masking effects of speckle were greatest at the highest luminance and were negligible at the lowest luminance. For low coherent fractions, contrast sensitivity improved over the entire luminance range at a rate consistent with a square-root law; with purely coherent light, sensitivity tended to level off at approximately 30 Td because of speckle masking. The results indicate that the optical quality of the eye reduces the spatial bandwidth of vision even at luminances near the foveal threshold. The change in interference fringe visibility with luminance is consistent with noise-limited behavior, and the masking
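
The square-root law mentioned above (often called the DeVries-Rose law) predicts that contrast sensitivity grows with the square root of luminance; a minimal numeric sketch, with an arbitrary scaling constant:

```python
import math

def devries_rose_sensitivity(luminance_td, k=1.0):
    """Square-root (DeVries-Rose) law: contrast sensitivity grows as
    sqrt(luminance). k is an arbitrary scaling constant for this sketch."""
    return k * math.sqrt(luminance_td)

# A 100-fold luminance increase (e.g. 3 -> 300 Td) predicts a 10-fold
# sensitivity gain under the square-root law.
gain = devries_rose_sensitivity(300.0) / devries_rose_sensitivity(3.0)
```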

  1. Making Large Class Basic Histology Lectures More Interactive: The Use of Draw-Along Mapping Techniques and Associated Educational Activities

    ERIC Educational Resources Information Center

    Kotzé, Sanet Henriët; Mole, Calvin Gerald

    2015-01-01

    At Stellenbosch University, South Africa, basic histology is taught to a combined class of almost 400 first-year medical, physiotherapy, and dietetic students. Many students often find the amount of work in basic histology lectures overwhelming and consequently lose interest. The aim was to determine if a draw-along mapping activity would…

  2. Referential Mapping: A Technique for Attaching Functional Significance to the Innovative Utterances of an African Grey Parrot (Psittacus erithacus).

    ERIC Educational Resources Information Center

    Pepperberg, Irene M.

    1990-01-01

    Spontaneous combinations and phonological variations of the vocalizations of an African Grey parrot were treated as if they were intentional requests or comments. The success of these "referential mapping" procedures in attaching functional significance to the parrot's vocalizations may have implications for intervention programs for humans with…

  3. EVALUATION OF TEMPORAL VARIATIONS IN HYDRAULIC CAPTURE DUE TO CHANGING FLOW PATTERNS USING MAPPING AND MODELING TECHNIQUES

    SciTech Connect

    SPILIOTOPOULOS AA; SWANSON LC; SHANNON R; TONKIN MJ

    2011-04-07

    Robust performance evaluation represents one of the most challenging aspects of groundwater pump-and-treat (P&T) remedy implementation. In most cases, the primary goal of the P&T system is hydraulic containment, and ultimately recovery, of contaminants to protect downgradient receptors. Estimating the extent of hydraulic containment is particularly challenging under changing flow patterns due to variable pumping, boundaries, and/or other conditions. We present a systematic approach to estimating hydraulic containment using multiple lines of evidence based on (a) water-level mapping and (b) groundwater modeling. Capture Frequency Maps (CFMs) are developed by particle tracking on water-level maps constructed for each available water-level data set using universal kriging. In a similar manner, Capture Efficiency Maps (CEMs) are developed by particle tracking on water levels calculated using a transient groundwater flow model: tracking is undertaken independently for each stress period using a very low effective porosity, depicting the 'instantaneous' fate of each particle in each stress period. Although conceptually similar, the two methods differ in their underlying assumptions and limitations: used together, they identify areas where containment may be reliable (i.e., where the methods agree) and where containment is uncertain (typically, where the methods disagree). A field-scale example is presented to illustrate these concepts.
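
The CFM idea can be caricatured in a few lines: trace a steepest-descent path on each water-level grid and count how often a starting location drains to the well. This is a schematic stand-in; the study uses kriged water-level surfaces and formal particle-tracking codes, not the toy grid-descent below.

```python
def track_particle(heads, start, well):
    """Follow a steepest-descent path on a water-level (head) grid.
    Returns True if the particle reaches the pumping-well cell."""
    rows, cols = len(heads), len(heads[0])
    r, c = start
    visited = set()
    while (r, c) not in visited:
        visited.add((r, c))
        if (r, c) == well:
            return True
        # Move to the lowest-head neighbouring cell (8-connected).
        best = (heads[r][c], r, c)
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols:
                    best = min(best, (heads[rr][cc], rr, cc))
        if (best[1], best[2]) == (r, c):  # local minimum that is not the well
            return False
        r, c = best[1], best[2]
    return False

def capture_frequency(head_maps, start, well):
    """Fraction of water-level data sets whose flow field carries a
    particle from `start` into the well -- one cell of a Capture
    Frequency Map (CFM)."""
    hits = sum(track_particle(h, start, well) for h in head_maps)
    return hits / len(head_maps)
```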

  4. Perceptions of Pre-Service Social Sciences Teachers Regarding the Concept of "Geography" by Mind Mapping Technique

    ERIC Educational Resources Information Center

    Ozturk Demirbas, Cagri

    2013-01-01

    The objective of this study is to present the perceptions of preservice social sciences teachers regarding the concept of geography. In the study, the study group consists of 46 preservice social sciences teachers, who receive education at Ahi Evran University. The data were collected in December, 2010. Mind maps were used as data collection tools…

  5. Mapping the intracellular distribution of carbon nanotubes after targeted delivery to carcinoma cells using confocal Raman imaging as a label-free technique

    NASA Astrophysics Data System (ADS)

    Lamprecht, C.; Gierlinger, N.; Heister, E.; Unterauer, B.; Plochberger, B.; Brameshuber, M.; Hinterdorfer, P.; Hild, S.; Ebner, A.

    2012-04-01

    The uptake of carbon nanotubes (CNTs) by mammalian cells and their distribution within cells has been widely studied in recent years due to their increasing use for biomedical purposes. The two main imaging techniques used are confocal fluorescence microscopy and transmission electron microscopy (TEM). The former, however, requires labeling of the CNTs with fluorescent dyes, while the latter is a work-intensive technique that is unsuitable for in situ bio-imaging. Raman spectroscopy, on the other hand, presents a direct, straightforward and label-free alternative. Confocal Raman microscopy can be used to image CNTs inside cells, exploiting the strong Raman signal associated with different vibrational modes of the nanotubes. In addition, cellular components, such as the endoplasmic reticulum and the nucleus, can be mapped. We first validate our method by showing that accurate results can be obtained only when the CNTs’ G band is used for intracellular mapping, as mapping of the radial breathing mode (RBM) reveals only a small fraction of CNTs. We then take a closer look at the exact localization of the nanotubes inside cells after folate receptor-mediated endocytosis and show that, after 8-10 h of incubation, the majority of CNTs are localized around the nucleus. In summary, Raman imaging has enormous potential for imaging CNTs inside cells, which is yet to be fully realized.

  6. An evaluation of machine processing techniques of ERTS-1 data for user applications. [urban land use and soil association mapping in Indiana

    NASA Technical Reports Server (NTRS)

    Landgrebe, D.

    1974-01-01

    A broad study is described that evaluates a set of machine analysis and processing techniques applied to ERTS-1 data. Based on the analysis results in urban land use analysis and soil association mapping, together with previously reported results in general earth surface feature identification and crop species classification, a profile of the general applicability of this procedure is beginning to emerge. Put in the hands of a user who knows well the information needed from the data and is also familiar with the region to be analyzed, these methods appear capable of generating significantly useful information. When supported by preprocessing techniques such as geometric correction and temporal registration, final products readily usable by user agencies appear possible. In parallel with application, there is much potential for further development of these techniques through research, both with regard to providing higher performance and in new situations not yet studied.

  7. Can Selforganizing Maps Accurately Predict Photometric Redshifts?

    NASA Technical Reports Server (NTRS)

    Way, Michael J.; Klose, Christian

    2012-01-01

    We present an unsupervised machine-learning approach that can be employed for estimating photometric redshifts. The proposed method is based on a vector quantization technique called the self-organizing map (SOM). A variety of photometrically derived input values were utilized from the Sloan Digital Sky Survey's main galaxy sample, luminous red galaxy sample, and quasar sample, along with the PHAT0 data set from the Photo-z Accuracy Testing project. Regression results obtained with this new approach were evaluated in terms of root-mean-square error (RMSE) to estimate the accuracy of the photometric redshift estimates. The results demonstrate competitive RMSE and outlier percentages when compared with several other popular approaches, such as artificial neural networks and Gaussian process regression. SOM RMSE results (using Δz = z_phot − z_spec) are 0.023 for the main galaxy sample, 0.027 for the luminous red galaxy sample, 0.418 for quasars, and 0.022 for the PHAT0 synthetic data. The results demonstrate that there are nonunique solutions for estimating SOM RMSEs. Further research is needed in order to find more robust estimation techniques using SOMs, but the results herein are a positive indication of their capabilities when compared with other well-known methods.
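
A SOM for photometric redshifts reduces to two ingredients: a codebook trained by neighbourhood-weighted updates, and a redshift read off the best-matching unit. The sketch below is a minimal 1-D SOM in plain Python, not the authors' implementation; hyperparameters and the synthetic two-band features are arbitrary.

```python
import math
import random

def best_matching_unit(nodes, x):
    """Index of the codebook vector closest to x (squared Euclidean)."""
    return min(range(len(nodes)),
               key=lambda i: sum((w - v) ** 2 for w, v in zip(nodes[i], x)))

def train_som(samples, n_nodes=8, epochs=200, lr0=0.5, sigma0=2.0, seed=1):
    """Train a 1-D self-organizing map; returns the codebook vectors."""
    rng = random.Random(seed)
    dim = len(samples[0])
    nodes = [[rng.random() for _ in range(dim)] for _ in range(n_nodes)]
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)                  # decaying learning rate
        sigma = max(sigma0 * (1 - t / epochs), 0.5)  # shrinking neighbourhood
        x = rng.choice(samples)
        bmu = best_matching_unit(nodes, x)
        for i, w in enumerate(nodes):                # neighbourhood update
            h = math.exp(-((i - bmu) ** 2) / (2 * sigma ** 2))
            for d in range(dim):
                w[d] += lr * h * (x[d] - w[d])
    return nodes

def predict_redshift(nodes, train, z_train, x):
    """Assign each training object to its BMU, then return the mean
    spectroscopic redshift of the BMU that the query object maps to."""
    buckets = {}
    for feat, z in zip(train, z_train):
        buckets.setdefault(best_matching_unit(nodes, feat), []).append(z)
    zs = buckets.get(best_matching_unit(nodes, x))
    return sum(zs) / len(zs) if zs else None
```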

  8. The integrated analyses of digital field mapping techniques and traditional field methods: implications from the Burdur-Fethiye Shear Zone, SW Turkey as a case-study

    NASA Astrophysics Data System (ADS)

    Elitez, İrem; Yaltırak, Cenk; Zabcı, Cengiz; Şahin, Murat

    2015-04-01

    Precise geological mapping is one of the most important issues in geological studies. Documenting the spatial distribution of geological bodies and their contacts plays a crucial role in interpreting the tectonic evolution of any region. Although traditional field techniques are still accepted as the most fundamental tools in the construction of geological maps, we suggest that integrating digital technologies with classical methods significantly increases the resolution and quality of such products. We follow these steps in integrating digital data with traditional field observations. First, we create a digital elevation model (DEM) of the region of interest by interpolating the digital contours of 1:25000-scale topographic maps to 10 m ground pixel resolution. Non-commercial Google Earth satellite imagery and geological maps from previous studies are draped over the interpolated DEMs in the second stage. The integration of all spatial data is done using the GIS software ESRI ArcGIS. We make a preliminary interpretation of major structures as tectonic lineaments and stratigraphic contacts. These preliminary maps are checked and precisely coordinated during field studies using mobile tablets and/or phablets with GPS receivers. The same devices are also used to measure and record the geologic structures of the study region. Finally, all digitally collected measurements and observations are added to the GIS database and we finalize our geological map with all available information. We applied this integrated method to map the Burdur-Fethiye Shear Zone (BFSZ) in southwest Turkey. The BFSZ is an active sinistral shear zone, 60 to 90 km wide, which extends about 300 km between Suhut-Cay in the northeast and Köyceğiz Lake-Kalkan in the southwest on land. Numerous studies suggest contradictory models not only about the evolution but also about the fault geometry of this

  9. The Review of Nuclear Microscopy Techniques: An Approach for Nondestructive Trace Elemental Analysis and Mapping of Biological Materials

    PubMed Central

    Mulware, Stephen Juma

    2015-01-01

    The properties of many biological materials often depend on the spatial distribution and concentration of the trace elements present in a matrix. Scientists have over the years tried various techniques, including classical physical and chemical analysis techniques, each with a relative level of accuracy. However, with the development of spatially sensitive submicron beams, nuclear microprobe techniques using focused proton beams for the elemental analysis of biological materials have yielded significant success. In this paper, the basic principles of the commonly used microprobe techniques of STIM, RBS, and PIXE for trace elemental analysis are discussed, along with sample preparation, detection, and data collection and analysis. Finally, an application of the techniques to the analysis of corn roots for elemental distribution and concentration is presented. PMID:26664356

  10. Meter-scale Urban Land Cover Mapping for EPA EnviroAtlas Using Machine Learning and OBIA Remote Sensing Techniques

    NASA Astrophysics Data System (ADS)

    Pilant, A. N.; Baynes, J.; Dannenberg, M.; Riegel, J.; Rudder, C.; Endres, K.

    2013-12-01

    US EPA EnviroAtlas is an online collection of tools and resources that provides geospatial data, maps, research, and analysis on the relationships between nature, people, health, and the economy (http://www.epa.gov/research/enviroatlas/index.htm). Using EnviroAtlas, you can see and explore information related to the benefits (e.g., ecosystem services) that humans receive from nature, including clean air, clean and plentiful water, natural hazard mitigation, biodiversity conservation, food, fuel, and materials, recreational opportunities, and cultural and aesthetic value. EPA developed several urban land cover maps at very high spatial resolution (one-meter pixel size) for a portion of EnviroAtlas devoted to urban studies. This urban mapping effort supported analysis of relations among land cover, human health, and demographics at the US Census Block Group level. Supervised classification of 2010 USDA NAIP (National Agricultural Imagery Program) digital aerial photos produced eight-class land cover maps for several cities, including Durham, NC, Portland, ME, Tampa, FL, New Bedford, MA, Pittsburgh, PA, Portland, OR, and Milwaukee, WI. Semi-automated feature extraction methods were used to classify the NAIP imagery: genetic algorithms/machine learning, random forest, and object-based image analysis (OBIA). In this presentation we describe the image processing and fuzzy accuracy assessment methods used, and report on some sustainability and ecosystem service metrics computed using this land cover as input (e.g., carbon sequestration from the USFS iTree model; health and demographics in relation to road-buffer forest width). We also discuss the land cover classification schema (a modified Anderson Level 1, after the National Land Cover Data (NLCD)), and offer some observations on lessons learned. (Figure: meter-scale urban land cover in Portland, OR, overlaid on a NAIP aerial photo; streets, buildings, and individual trees are identifiable.)
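
The EnviroAtlas maps were produced with random forests, genetic algorithms, and OBIA; as a much simpler stand-in that illustrates the core idea of supervised per-pixel classification of multi-band imagery, here is a nearest-centroid sketch. The class names and 4-band pixel values are invented, and this is not the EPA workflow.

```python
def train_centroids(labelled_pixels):
    """Mean spectral vector per land-cover class.
    labelled_pixels: list of (class_name, [band values])."""
    sums, counts = {}, {}
    for cls, px in labelled_pixels:
        acc = sums.setdefault(cls, [0.0] * len(px))
        for i, v in enumerate(px):
            acc[i] += v
        counts[cls] = counts.get(cls, 0) + 1
    return {cls: [v / counts[cls] for v in acc] for cls, acc in sums.items()}

def classify_pixel(centroids, px):
    """Assign the class whose centroid is nearest in spectral space."""
    return min(centroids,
               key=lambda c: sum((a - b) ** 2 for a, b in zip(centroids[c], px)))
```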

  11. Riverbed-Sediment Mapping in the Edwards Dam Impoundment on the Kennebec River, Maine By Use of Geophysical Techniques

    USGS Publications Warehouse

    Dudley, Robert W.

    1999-01-01

    In July 1997, the Federal Energy Regulatory Commission (FERC) issued a Final Environmental Impact Statement recommending that the 162-year-old Edwards Dam on the Kennebec River in Augusta, Maine, be removed. The impoundment formed by Edwards Dam extends about 15 mi to the city of Waterville, near the confluence of the Sebasticook River with the Kennebec River. The impoundment has a surface area of 1,143 acres, a gross storage of approximately 740 million ft3, and a usable storage of about 184 million ft3 (Stone and Webster, 1995a). According to FERC, removal of the 917-ft-long, 24-ft-high timber crib and concrete structure would restore 15 mi of riverine habitat, improve passage of ocean-migrating fish species native to the Kennebec River, and result in substantial recreational enhancements (Federal Energy Regulatory Commission, 1997). Because the removal of Edwards Dam would change the hydraulic characteristics of the river in the present-day impoundment, the potential transport of erodible, fine-grained sediment currently in the impoundment is a concern. Of particular concern is the erosion and transport of this sediment to areas downstream from the dam, a process that could introduce possible bacterial and chemical contamination and could impede river navigation as a result of sediment deposition. In an effort to build upon available information on the composition of the riverbed, the U.S. Geological Survey (USGS), in cooperation with the Maine State Planning Office, classified riverbed sediment types and mapped their areal extents in the lower (southern) half of the Edwards Dam impoundment. This report describes the methods used to collect and analyze the data used to create a map of sediment types in the Edwards Dam impoundment. The map is included with this report. Data used to map riverbed sediment types were also used to estimate the volume of observed mud and mud-containing sediment in the study area.

  12. The Luminous Convolution Model-The light side of dark matter

    NASA Astrophysics Data System (ADS)

    Cisneros, Sophia; Oblath, Noah; Formaggio, Joe; Goedecke, George; Chester, David; Ott, Richard; Ashley, Aaron; Rodriguez, Adrianna

    2014-03-01

    We present a heuristic model for predicting the rotation curves of spiral galaxies. The Luminous Convolution Model (LCM) utilizes Lorentz-type transformations of very small changes in photon frequencies from curved space-times to construct a dynamic mass model of galaxies. These frequency changes are derived using the exact solution to the exterior Kerr wave equation, as opposed to a linearized treatment. The LCM Lorentz-type transformations map between the emitter and receiver rotating galactic frames, and then to the associated flat frames in each galaxy where the photons are emitted and received. This treatment necessarily rests upon estimates of the luminous matter in both the emitter and receiver galaxies. The LCM is tested on a sample of 22 randomly chosen galaxies, represented in 33 different data sets. LCM fits are compared to the Navarro, Frenk & White (NFW) dark matter model and to the Modified Newtonian Dynamics (MOND) model when possible. The high degree of sensitivity of the LCM to the initially assumed luminous mass-to-light ratio (M/L) of a given galaxy is demonstrated. We demonstrate that the LCM is successful across a wide range of spiral galaxies in predicting the observed rotation curves. This work was conducted through the generous support of the MIT Dr. Martin Luther King Jr. Fellowship program.

  13. oriGNAI3: a narrow zone of preferential replication initiation in mammalian cells identified by 2D gel and competitive PCR replicon mapping techniques.

    PubMed Central

    Toledo, F; Baron, B; Fernandez, M A; Lachagès, A M; Mayau, V; Buttin, G; Debatisse, M

    1998-01-01

    The nature of mammalian origins of DNA replication remains controversial, primarily because two-dimensional gel replicon mapping techniques have identified broad zones of replication initiation whereas several other techniques, such as quantitative PCR, have disclosed more discrete sites of initiation at the same chromosomal loci. In this report we analyze the replication of an amplified genomic region encompassing the 3'-end of the GNAI3 gene, the entire GNAT2 gene and the intergenic region between them in exponentially growing Chinese hamster fibroblasts. These cells express GNAI3 but not GNAT2. The replication pattern was first analyzed by two-dimensional neutral-alkaline gel electrophoresis. Surprisingly, the results revealed a small preferential zone of replication initiation, of at most 1.7 kb, located in a limited part of the GNAI3-GNAT2 intergenic region. Mapping of this initiation zone was then confirmed by quantitative PCR. The agreement between the two techniques exploited here strengthens the hypothesis that preferred sites of replication initiation do exist in mammalian genomes. PMID:9580680

  14. Independence of color and luminance edges in natural scenes.

    PubMed

    Hansen, Thorsten; Gegenfurtner, Karl R

    2009-01-01

    Form vision is traditionally regarded as processing primarily achromatic information. Previous investigations into the statistics of color and luminance in natural scenes have claimed that luminance and chromatic edges are not independent of each other and that any chromatic edge most likely occurs together with a luminance edge of similar strength. Here we computed the joint statistics of luminance and chromatic edges in over 700 calibrated color images from natural scenes. We found that isoluminant edges exist in natural scenes and were not rarer than pure luminance edges. Most edges combined luminance and chromatic information but to varying degrees such that luminance and chromatic edges were statistically independent of each other. Independence increased along successive stages of visual processing from cones via postreceptoral color-opponent channels to edges. The results show that chromatic edge contrast is an independent source of information that can be linearly combined with other cues for the proper segmentation of objects in natural and artificial vision systems. Color vision may have evolved in response to the natural scene statistics to gain access to this independent information. PMID:19152717

  15. The effect of luminance on simulated driving speed.

    PubMed

    Pritchard, Sarah J; Hammett, Stephen T

    2012-01-01

    Perceived speed is modulated by a range of stimulus attributes such as contrast, luminance and adaptation duration. It has been suggested that such changes in perceived speed may influence driving behaviour. In order to evaluate the effect of luminance on driving speed, we measured subjects' driving speed in a driving simulator across a range of luminances and speeds over time. The results indicate that reducing luminance results in a decrease in driving speed at all speeds measured. This reduction in driving speed at low luminance is consistent with previous findings that perceived speed increases at low luminance. However, the results also indicated that driving speed remained stable over a 30 s period. The stability of driving speed over time is inconsistent with previous findings that perceived speed reduces exponentially as a function of adaptation duration. The results are suggestive of a scheme whereby driving speed is consistent with the known effects of luminance upon perceived speed but may also be modulated by higher-order processes that serve to maintain a constant speed over time.

  16. A functional role for trans-saccadic luminance differences.

    PubMed

    Ludwig, Casimir J H; Davies, J Rhys; Gegenfurtner, Karl R

    2012-01-01

    In typical natural environments, the visual system receives different inputs in quick succession as gaze moves around. We examined whether local trans-saccadic differences in luminance, contrast, and orientation influenced perception and target selection in the eye movement system. Observers initially fixated a peripheral position in a preview display that consisted of four patterns. They subsequently made a saccade to the center of the configuration. During the movement, two of the preview patterns were eliminated, and a small change in the luminance contrast of the remaining patterns was introduced. Observers had to make a second saccade to the test patch with the greater luminance contrast relative to the background. During the second fixation, test patterns could be in the same retinotopic location as one of the preview patterns during the initial fixation (a retinotopic match) or at a retinotopic location that was empty during the preview epoch (a retinotopic onset). We consistently found a preference to fixate retinotopic onsets over retinotopically matched patterns, but only when the patterns were defined by a luminance difference. Direct measurement of perceived luminance showed that the visual response to retinotopically matched inputs was attenuated, possibly because of retinotopic adaptation. As a consequence, the visual system responds more strongly to trans-saccadic differences in local luminance. We argue that a trans-saccadic comparison of the local luminance at the same retinotopic location is a simple way of finding high spatial frequency edge information in the visual scene. This information is important for image segmentation and interpretation.

  17. Luminance distribution modifies the perceived freshness of strawberries

    PubMed Central

    Arce-Lopera, Carlos; Masuda, Tomohiro; Kimura, Atsushi; Wada, Yuji; Okajima, Katsunori

    2012-01-01

    Material perception studies focus on the analysis of visual cues that may underlie the ability to distinguish between the different properties of an object. Herein, we investigated the effects of luminance distribution on the perceived freshness of a strawberry independently of its colour information. We took photographs of the degradation of a strawberry over 169 hours in a controlled environment, then presented cropped square patches of the original images to subjects, who rated the perceived freshness using a visual analogue scale. Freshness ratings were significantly and highly correlated with statistical measures of the luminance and colour channels. To clarify which of these visual cues affects freshness perception, we created artificial images by modifying only the luminance distribution and keeping the colour information unchanged. The modification of luminance resulted in a highly correlated change in freshness perception, suggesting that luminance cues are essential to the freshness perception of strawberries. Finally, by eliminating the colour information from the images, we examined the contribution of the luminance distribution independent of colour information and found that luminance information suffices for accurate estimation of strawberry freshness. PMID:23145288
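
Statistics of this kind are straightforward to compute: the sketch below derives the mean and standard deviation of a luminance patch and a Pearson correlation against ratings. The data in the usage test are hypothetical; the study's actual images and measures are not reproduced here.

```python
import math

def luminance_stats(patch):
    """Mean and (population) standard deviation of a 2-D luminance patch."""
    vals = [v for row in patch for v in row]
    mean = sum(vals) / len(vals)
    var = sum((v - mean) ** 2 for v in vals) / len(vals)
    return mean, math.sqrt(var)

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length sequences,
    e.g. a luminance statistic vs. freshness ratings."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```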

  18. The effect of chromatic and luminance information on reaction times.

    PubMed

    O'Donell, Beatriz M; Barraza, Jose F; Colombo, Elisa M

    2010-07-01

    We present a series of experiments exploring the effect of chromaticity on reaction time (RT) for a variety of stimulus conditions, including chromatic and luminance contrast, luminance, and size. The chromaticity of these stimuli was varied along a series of vectors in color space that included the two chromatic-opponent-cone axes, a red-green (L-M) axis and a blue-yellow [S - (L + M)] axis, and intermediate noncardinal orientations, as well as the luminance axis (L + M). For Weber luminance contrasts above 10-20%, RTs tend to the same asymptote, irrespective of chromatic direction. At lower luminance contrast, the addition of chromatic information shortens the RT. RTs are strongly influenced by stimulus size when the chromatic stimulus is modulated along the [S - (L + M)] pathway, and by stimulus size and adaptation luminance for the (L-M) pathway. RTs are independent of stimulus size for stimuli larger than 0.5 deg. Data are modeled with a modified version of Piéron's formula with an exponent close to 2, in which the stimulus intensity term is replaced by a factor that considers the relative effects of chromatic and achromatic information, as indexed by the RMS (root-mean-square) cone contrast at isoluminance and the Weber luminance contrast, respectively. The parameters of the model reveal how RT is linked to stimulus size, chromatic channels, and adaptation luminance, and how they can be interpreted in terms of two chromatic mechanisms. This equation predicts that, at isoluminance, RTs for a stimulus lying on the S-cone pathway are higher than those for a stimulus lying on the L-M-cone pathway, for a given RMS cone contrast. The equation also predicts an asymptotic trend in the RT for an achromatic stimulus when the luminance contrast is sufficiently large.
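
The modified Piéron fit can be sketched as a power-law decay of RT toward an asymptote, with the intensity term replaced by a combined chromatic/achromatic drive. The functional form below follows the classical Piéron equation with the exponent near 2 quoted above; the linear weighting in effective_intensity and all parameter values are assumptions for illustration, not the fitted model.

```python
def pieron_rt(intensity, rt0=0.18, k=0.05, beta=2.0):
    """Classical Pieron function: RT falls with stimulus intensity as a
    power law toward an asymptote rt0 (seconds). Parameter values here
    are illustrative, not fitted."""
    return rt0 + k / intensity ** beta

def effective_intensity(rms_cone_contrast, weber_contrast,
                        w_chroma=1.0, w_lum=1.0):
    """Stand-in for the paper's combined chromatic/achromatic drive:
    a weighted sum of RMS cone contrast and Weber luminance contrast.
    The weighting scheme is an assumption for this sketch."""
    return w_chroma * rms_cone_contrast + w_lum * weber_contrast
```

Under this form, adding luminance contrast to a fixed chromatic contrast raises the effective intensity and shortens the predicted RT, while very large contrasts drive RT toward the rt0 asymptote, mirroring the asymptotic trend described above.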

  19. Mapping erosion-sensitive areas after wildfires using fieldwork, remote sensing, and geographic information systems techniques on a regional scale

    NASA Astrophysics Data System (ADS)

    PéRez-Cabello, F.; de La Riva FernáNdez, J.; Montorio LloveríA, R.; GarcíA-MartíN, A.

    2006-12-01

    Alterations in the hydrological cycle following wildfire due to the loss of ground cover vegetation and changes in soil properties have been documented in many studies. Nevertheless, the rapid process of vegetation recovery reduces such negative effects. Vegetation cover before fire, fire severity, and geophysical properties are important factors that control spatial discontinuities involved in the vegetation-covering process. The objective of this study was to estimate the probability of high erosion in order to map erosion-sensitive areas after fire. The analysis was carried out in different plant communities burnt by summer wildfires in the pre-Pyrenean area (Spain). Three-year Landsat Thematic Mapper (TM) images have been used for mapping wildfire areas and severity levels. Conversion to spectral reflectance has been applied for radiometric correction by normalizing topographic and atmospheric effects. Likewise, other physical variables have also been incorporated into the geographic information system (GIS): vegetation types, parent material, illumination, slope, aspect, and precipitation. The dependent variable has been characterized by means of fieldwork and a photointerpretation process based on high-resolution digital aerial orthophotographs taken 11-12 years after the fire. Different logistic regression models have been used for mapping the probability of erosion. Results indicate that prefire normalized difference vegetation index values and aspect are the most important variables for estimating erosion-sensitive areas after fire (Nagelkerke r2 = 0.66; Kappa values = 0.65). Finally, the use of nonparametric models with environmental digital information based on GIS can facilitate the management of burnt areas.
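
    The logistic-regression mapping step above can be sketched as follows. The predictors follow the abstract (pre-fire NDVI and aspect, here encoded as "northness"); the intercept and coefficients are made-up illustrative values, not the fitted Nagelkerke r2 = 0.66 model.

```python
import math

def erosion_probability(ndvi_prefire, northness, coef=(-1.0, -4.0, -1.5)):
    """Logistic model of post-fire erosion sensitivity:
    p = 1 / (1 + exp(-(b0 + b1*NDVI + b2*northness))).
    `northness` is cos(aspect), a common GIS encoding of aspect;
    all coefficients are hypothetical."""
    b0, b_ndvi, b_north = coef
    z = b0 + b_ndvi * ndvi_prefire + b_north * northness
    return 1.0 / (1.0 + math.exp(-z))
```

    Applied per pixel over GIS raster layers, this yields the erosion-sensitivity probability map the study describes; sparse pre-fire vegetation (low NDVI) raises the predicted probability.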

  20. Evaluation of expanded uncertainties in luminous intensity and illuminance calibrations

    SciTech Connect

    Sametoglu, Ferhat

    2008-11-01

    Detector-based calibrating methods and expressions for calculation of photometric uncertainties related to uncertainties in the calibrations of luminous intensity of a light source, illuminance responsivity of a photometer head, and calibration factors of an illuminance meter are discussed. These methods permit luminous intensity calibrations of incandescent light sources, luminous responsivity calibrations of photometer heads, and calibration factors of illuminance meters to be carried out with relative expanded uncertainties (with a level of confidence of 95.45%) of 0.4%, 0.4%, and 0.6%, respectively.
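
    The expanded uncertainties quoted above follow the standard GUM procedure: combine independent relative standard uncertainties in quadrature and multiply by a coverage factor k, where k = 2 corresponds to the ~95.45% level of confidence. The component budget below is illustrative, not the paper's.

```python
import math

def expanded_relative_uncertainty(components, k=2.0):
    """Root-sum-of-squares combination of independent relative standard
    uncertainties, expanded with coverage factor k (k = 2 ~ 95.45%)."""
    u_c = math.sqrt(sum(u * u for u in components))
    return k * u_c

# e.g. four independent 0.1% components combine to u_c = 0.2%, U = 0.4%
U = expanded_relative_uncertainty([0.001, 0.001, 0.001, 0.001])
```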

  1. Evaluation of expanded uncertainties in luminous intensity and illuminance calibrations.

    PubMed

    Sametoglu, Ferhat

    2008-11-01

    Detector-based calibrating methods and expressions for calculation of photometric uncertainties related to uncertainties in the calibrations of luminous intensity of a light source, illuminance responsivity of a photometer head, and calibration factors of an illuminance meter are discussed. These methods permit luminous intensity calibrations of incandescent light sources, luminous responsivity calibrations of photometer heads, and calibration factors of illuminance meters to be carried out with relative expanded uncertainties (with a level of confidence of 95.45%) of 0.4%, 0.4%, and 0.6%, respectively.

  2. A Second Luminous Blue Variable in the Quintuplet Cluster

    NASA Astrophysics Data System (ADS)

    Geballe, T. R.; Najarro, F.; Figer, D. F.

    2000-02-01

    H- and K-band moderate-resolution and 4 μm high-resolution spectra have been obtained for FMM 362, a bright star in the Quintuplet Cluster near the Galactic center. The spectral features in these bands closely match those of the Pistol Star, a luminous blue variable and one of the most luminous stars known. The new spectra and previously obtained photometry imply a very high luminosity for FMM 362, L ≥ 10^6 L_⊙, and a temperature of 10,000-13,000 K. Based on its luminosity, temperature, photometric variability, and similarities to the Pistol Star, we conclude that FMM 362 is a luminous blue variable.

  3. A Second Luminous Blue Variable in the Quintuplet Cluster.

    PubMed

    Geballe; Najarro; Figer

    2000-02-20

    H- and K-band moderate-resolution and 4 μm high-resolution spectra have been obtained for FMM 362, a bright star in the Quintuplet Cluster near the Galactic center. The spectral features in these bands closely match those of the Pistol Star, a luminous blue variable and one of the most luminous stars known. The new spectra and previously obtained photometry imply a very high luminosity for FMM 362, L ≥ 10^6 L_⊙, and a temperature of 10,000-13,000 K. Based on its luminosity, temperature, photometric variability, and similarities to the Pistol Star, we conclude that FMM 362 is a luminous blue variable.

  4. Ultra Luminous X-ray Sources

    NASA Astrophysics Data System (ADS)

    Webb, N. A.; Godet, O.

    2015-12-01

    Ultra Luminous X-ray sources (ULXs) are X-ray bright objects that are not coincident with the central nucleus of the host galaxy and which have luminosities that exceed the Eddington limit for a stellar mass black hole, typically L > 3 × 10^{39} erg s^{-1} for a black hole of 20 M_⊙. The nature of these objects is still unclear. However, it is possible that these sources do not form a single class of objects. Many ULXs may house stellar mass black holes accreting at super-Eddington rates, even if the physical mechanism for such high accretion rates is still not understood. Some ULXs may contain intermediate mass black holes (~1 × 10^{2} to ~1 × 10^{5} M_⊙). These elusive black holes are thought to be the building blocks of the more massive supermassive black holes, observed at the centre of many galaxies. Other ULXs may not be accreting black holes at all. Recent evidence for the different types of ULXs is presented in this paper.
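
    The quoted ULX threshold can be checked against the standard Eddington-luminosity formula for hydrogen accretion, L_Edd ≈ 1.26 × 10^38 (M/M_⊙) erg/s:

```python
def eddington_luminosity(mass_msun):
    """Eddington luminosity for spherical hydrogen accretion,
    L_Edd ~ 1.26e38 erg/s per solar mass."""
    return 1.26e38 * mass_msun

# For a 20 M_sun black hole this gives ~2.5e39 erg/s, consistent with the
# L > 3e39 erg/s super-Eddington threshold quoted in the abstract.
l_edd_20 = eddington_luminosity(20.0)
```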

  5. Extremely Luminous Far-infrared Sources (ELFS)

    NASA Technical Reports Server (NTRS)

    Harwit, Martin; Houck, James R.; Soifer, B. Thomas; Palumbo, Giorgio G. C.

    1987-01-01

    The Infrared Astronomical Satellite (IRAS) survey uncovered a class of Extremely Luminous Far Infrared Sources (ELFS), exhibiting luminosities up to and occasionally exceeding 10^12 L_⊙. Arguments are presented to show that sources with luminosities L ≥ 3 × 10^10 L_⊙ may represent gas-rich galaxies in collision. The more conventional explanation of these sources as sites of extremely active star formation fails to explain the observed low optical luminosities of ELFS as well as their high infrared excess. In contrast, a collisional model heats gas to a temperature of approximately 10^6 K, where cooling takes place in the extreme ultraviolet. The UV is absorbed by dust and converted into far infrared radiation (FIR) without generation of appreciable optical luminosity. As the gas cools and recombines, it generates a Lyman alpha photon only once for every two extreme ultraviolet (~50 eV) photons emitted by the 10^6 K gas. That accounts for the high infrared excess. Finally, the model is also able to explain the observed luminosity distribution of ELFS as well as many other traits.
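
    The infrared-excess argument above is a one-line energy budget: a Lyman-alpha photon carries 10.2 eV, while the two accompanying EUV photons carry ~50 eV each, so only a small fraction of the emitted energy appears in the optical/UV line rather than as dust-reprocessed far infrared. The 50 eV figure is the representative value quoted in the abstract.

```python
# Energy budget of the collisional model: one Ly-alpha photon per two
# ~50 eV EUV photons from the ~1e6 K gas.
e_lya = 10.2          # eV, Lyman-alpha photon energy
e_euv = 50.0          # eV, representative EUV photon energy
optical_fraction = e_lya / (2.0 * e_euv)   # ~10% of the radiated energy
```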

  6. Galaxy Clustering Around Nearby Luminous Quasars

    NASA Technical Reports Server (NTRS)

    Fisher, Karl B.; Bahcall, John N.; Kirhakos, Sofia; Schneider, Donald P.

    1996-01-01

    We examine the clustering of galaxies around a sample of 20 luminous low-redshift (z ≲ 0.30) quasars observed with the Wide Field Camera-2 on the Hubble Space Telescope (HST). The HST resolution makes possible galaxy identification brighter than V = 24.5 and as close as 1'' or 2'' to the quasar. We find a significant enhancement of galaxies within a projected separation of ≲ 100 h^-1 kpc of the quasars. If we model the QSO/galaxy correlation function as a power law with a slope given by the galaxy/galaxy correlation function, we find that the ratio of the QSO/galaxy to galaxy/galaxy correlation functions is 3.8 +/- 0.8. The galaxy counts within r < 15 h^-1 kpc of the quasars are too high for the density profile to have an appreciable core radius (≳ 100 h^-1 kpc). Our results reinforce the idea that low-redshift quasars are located preferentially in groups of 10-20 galaxies rather than in rich clusters. We see no significant difference in the clustering amplitudes derived from radio-loud and radio-quiet subsamples.

  7. Fluorescence lifetime imaging of optically levitated aerosol: a technique to quantitatively map the viscosity of suspended aerosol particles.

    PubMed

    Fitzgerald, C; Hosny, N A; Tong, H; Seville, P C; Gallimore, P J; Davidson, N M; Athanasiadis, A; Botchway, S W; Ward, A D; Kalberer, M; Kuimova, M K; Pope, F D

    2016-08-21

    We describe a technique to measure the viscosity of stably levitated single micron-sized aerosol particles. Particle levitation allows the aerosol phase to be probed in the absence of potentially artefact-causing surfaces. To achieve this feat, we combined two laser based techniques: optical trapping for aerosol particle levitation, using a counter-propagating laser beam configuration, and fluorescence lifetime imaging microscopy (FLIM) of molecular rotors for the measurement of viscosity within the particle. Unlike other techniques used to measure aerosol particle viscosity, this allows for the non-destructive probing of viscosity of aerosol particles without interference from surfaces. The well-described viscosity of sucrose aerosol, under a range of relative humidity conditions, is used to validate the technique. Furthermore, we investigate a pharmaceutically-relevant mixture of sodium chloride and salbutamol sulphate under humidities representative of in vivo drug inhalation. Finally, we provide a methodology for incorporating molecular rotors into already levitated particles, thereby making the FLIM/optical trapping technique applicable to real world aerosol systems, such as atmospheric aerosols and those generated by pharmaceutical inhalers. PMID:27430158
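
    Molecular-rotor viscometry of this kind typically rests on a Förster-Hoffmann-type power-law calibration between fluorescence lifetime and viscosity, tau = z * eta^alpha. The inversion below is a sketch under that assumption; z and alpha are hypothetical constants that would in practice be fitted against fluids of known viscosity (e.g. the sucrose calibration described above).

```python
def viscosity_from_lifetime(tau_ns, z=0.5, alpha=0.55):
    """Invert a Förster-Hoffmann-type calibration tau = z * eta**alpha to
    estimate viscosity eta from a molecular-rotor fluorescence lifetime.
    z (ns) and alpha are hypothetical calibration constants."""
    return (tau_ns / z) ** (1.0 / alpha)
```

    The key property is monotonicity: a longer measured lifetime maps to a higher inferred particle viscosity, which is what allows a FLIM image to become a viscosity map.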

  8. Alterations in rat intestinal mucin patterns following luminal infusion of acetylsalicylic acid and prostaglandin derivatives.

    PubMed

    Satchithanandam, S; Cassidy, M M; Kharroubi, A T; Calvert, R J; Leeds, A R; Vahouny, G V

    1990-12-01

    The secretion of gastrointestinal mucin and/or the formation of mucoid caps have been implicated in cytoprotective or repair mechanisms related to mucosal injury models. In this study, rats were treated with acetylsalicylic acid (ASA) or prostaglandins (PG), and their effects on the synthesis and secretion of small intestinal mucin were examined. A newly developed polyclonal antibody to rat intestinal mucin was used for immunoassay of rat intestinal luminal and tissue mucin. The mucin antigen source was obtained by vacuum aspiration of luminal mucus. A high-molecular-weight glycoprotein (2 x 10(6) Da) fraction injected into rabbits produced a primary mucin antibody. A sensitive and quantitative enzyme-linked immunosorbent assay (ELISA) was developed that yielded a highly reproducible and linear response with mucin aliquots containing 0-20 ng of protein/ml. Incorporation of the plasma tracers ([3H]glucose and [35S]sodium sulfate) into mucin derived from hexadecyltrimethylammonium bromide precipitation after treatment with ASA (100 mg/kg body wt) decreased, although administration of dimethylprostaglandin E2 (100 micrograms/kg body wt) significantly increased the specific tracer incorporation values for the sialomucin and sulfomucin indices in luminal mucin fractions. The immunoassay data pattern for the ELISA technique was virtually identical to the results of the radiolabeled tracer method obtained for the same pharmacologic treatments. These experiments demonstrate that the estimation of synthesized mucin (tissue source) or secreted mucin (luminal source) as determined by the ELISA technique is similar to that obtained by the time-consuming and labor-intensive tracer incorporation methodology. PMID:1701376

  9. Mapping of Moho and Moho Transition Zone (MTZ) in Samail ophiolites of Sultanate of Oman using remote sensing technique

    NASA Astrophysics Data System (ADS)

    Rajendran, Sankaran; Nasir, Sobhi

    2015-08-01

    The Moho and Moho Transition Zone (MTZ) of the Samail ophiolite of the Sultanate of Oman are characteristic of potential occurrences of chromite deposits, hydrothermal mineralization and serpentinization. Mapping the Moho and MTZ, and discriminating them from the mafic and ultramafic rocks of the ophiolite sequence, is therefore significant and important. The present study describes the remote sensing spectral characters of minerals and rocks of the Moho and MTZ and discriminates the Moho of Wadi Al Abyad of the Nakhl massif, and the Wadi Nidab and Wadi Abda regions of the Sumail massif, in the visible and near infrared (VNIR) and short wavelength infrared (SWIR) spectral regions using low-cost multispectral satellite data of the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER). Results of this study show that the red-green-blue (RGB) color composite images of ASTER spectral bands 8, 3 and 1, and 8, 7 and 4 are able to delineate the Moho and MTZ of the regions. The RGB images of ASTER band ratios (4/8, 4/1, 3/2 * 4/3 and (1 + 3)/2, (4 + 6)/5, (7 + 9)/8) are capable of discriminating the mantle material (ultramafic harzburgites) and crustal rocks (mafic gabbros). The occurrence of such rock types is demonstrated by detection of their minerals using the Spectral Angle Mapper (SAM) image processing method. The presence of the Moho and MTZ, and associated lithologies, is verified in the field at Wadi Al Abyad, Wadi Nidab, Wadi Abda, Wadi Tayin, Wadi Fizh and several locations of the Nakhl regions of the Samail ophiolites. The laboratory study shows the occurrence of typical minerals, namely olivine, orthopyroxene and clinopyroxene, in the harzburgite, and minerals such as plagioclase, clinopyroxene, hornblende, orthopyroxene and olivine in the layered gabbro. The spectral properties of the rocks are studied using a Portable Infrared Mineral Analyzer (PIMA) spectrometer and the occurrences of minerals are confirmed by X-ray diffraction (XRD) analyses. This study demonstrates the sensor
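
    The first band-ratio composite quoted above (4/8, 4/1, 3/2 * 4/3 mapped to R, G, B) can be sketched as simple array arithmetic; the synthetic 2x2 reflectance arrays are placeholders for real calibrated ASTER bands, and display stretching is omitted.

```python
import numpy as np

def ratio_composite(bands):
    """Build the RGB band-ratio composite (R = 4/8, G = 4/1, B = 3/2 * 4/3)
    used to separate mantle harzburgites from crustal gabbros.
    `bands` maps ASTER band number -> 2-D reflectance array."""
    r = bands[4] / bands[8]
    g = bands[4] / bands[1]
    b = (bands[3] / bands[2]) * (bands[4] / bands[3])  # reduces to 4/2
    return np.dstack([r, g, b])

# Tiny synthetic scene standing in for calibrated reflectance rasters.
rng = np.random.default_rng(1)
bands = {n: rng.uniform(0.1, 1.0, (2, 2)) for n in (1, 2, 3, 4, 8)}
composite = ratio_composite(bands)
```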

  10. Development of techniques for producing static strata maps and development of photointerpretive methods based on multitemporal LANDSAT data

    NASA Technical Reports Server (NTRS)

    Colwell, R. N. (Principal Investigator); Hay, C. M.; Thomas, R. W.; Benson, A. S.

    1977-01-01

    Progress in the evaluation of the static stratification procedure and the development of alternative photointerpretive techniques to the present LACIE procedure for the identification of training fields is reported. Statistically significant signature controlling variables were defined for use in refining the stratification procedure. A subset of the 1973-74 Kansas LACIE segments for wheat was analyzed.

  11. Development of techniques for producing static strata maps and development of photointerpretation methods based on multitemporal LANDSAT data

    NASA Technical Reports Server (NTRS)

    Colwell, R. N. (Principal Investigator); Hay, C. M.; Thomas, R. W.; Benson, A. S.

    1976-01-01

    The progress of research conducted in support of the Large Area Crop Inventory Experiment (LACIE) is documented. Specific tasks include (1) evaluation of the static stratification procedure and modification of that procedure if warranted, and (2) the development of alternative photointerpretative techniques to the present LACIE procedures for the identification and selection of training fields (areas).

  12. Evaluation of SLAR and thematic mapper MSS data for forest cover mapping using computer-aided analysis techniques. [south carolina

    NASA Technical Reports Server (NTRS)

    Hoffer, R. M. (Principal Investigator)

    1979-01-01

    A literature review on radar and spectral band information was conducted and a NC-130 mission was flown carrying the NS001 scanner system which basically corresponds to the channel configuration of the proposed thematic mapper. Aerial photography and other reference data were obtained for the study site, an area approximately 290 sq miles in north central South Carolina. A cover type map was prepared and methods were devised for reformatting and geometrically correcting MSS CRT data. Arrangements were made to obtain LANDSAT data for dates approximating the NC-130 mission. Because of the waveband employed to obtain SEASAT radar data, it was decided to determine if X-band (2.40 cm to 3.75 cm wavelength) imagery is available.

  13. [Working hypothesis of simplified techniques for the first mapping of occupational hazards in handicraft. First part: ergonomics hazards].

    PubMed

    Colombini, D; Di Leone, G; Occhipinti, E; Montomoli, L; Ruschioni, A; Giambartolomei, M; Ardissone, S; Fanti, M; Pressiani, S; Placci, M; Cerbai, M; Preite, S

    2009-01-01

    During the last Congress of the International Ergonomics Association (IEA), Beijing, China, August 2009, an international group for developing a "toolkit for MSD prevention" was founded in collaboration with the World Health Organization. Possible users of the toolkits are: members of a health and safety committee; health and safety representatives; line supervisors; foremen; workers; government representatives; health workers implementing basic occupational health services; occupational health and safety specialists. In accordance with the ISO 11228 standard series and its ISO application document for the key enters and quick assessment (green/red conditions), our group developed a first-mapping methodology for occupational hazards in handicraft, working with the support of information technology (Excel). This methodology, using specific key enters and quick evaluation, allows a simple risk estimation, making it possible to decide which occupational hazards will require an exhaustive assessment and which professional consultant (occupational physician, engineer, chemist, etc.) it is best to direct them to.

  14. Techniques of low technology sampling of air pollution by metals: a comparison of concentrations and map patterns.

    PubMed

    Lloyd, O L; Gailey, F A

    1987-07-01

    During a 17 month survey of air pollution in the town of Armadale, central Scotland, the concentrations of some metals (iron, manganese, zinc, lead, copper, chromium, nickel, cadmium, and cobalt) were measured in seven types of low technology sampler--four indigenous and three transplanted--at 47 sites. The geographical patterns of the concentrations in the samplers were compared on two types of map. For most metals, sites with high concentrations were present close to the foundry and also in the north of the town. The differences between the patterns of pollution shown by the various types of sampler probably reflected differing mechanisms for collection and different affinities for various sizes and types of metal particle.

  15. [Working hypothesis of simplified techniques for the first mapping of occupational hazards in handicraft. First part: ergonomics hazards].

    PubMed

    Colombini, D; Di Leone, G; Occhipinti, E; Montomoli, L; Ruschioni, A; Giambartolomei, M; Ardissone, S; Fanti, M; Pressiani, S; Placci, M; Cerbai, M; Preite, S

    2009-01-01

    During the last Congress of the International Ergonomics Association (IEA), Beijing, China, August 2009, an international group for developing a "toolkit for MSD prevention" was founded in collaboration with the World Health Organization. Possible users of the toolkits are: members of a health and safety committee; health and safety representatives; line supervisors; foremen; workers; government representatives; health workers implementing basic occupational health services; occupational health and safety specialists. In accordance with the ISO 11228 standard series and its ISO application document for the key enters and quick assessment (green/red conditions), our group developed a first-mapping methodology for occupational hazards in handicraft, working with the support of information technology (Excel). This methodology, using specific key enters and quick evaluation, allows a simple risk estimation, making it possible to decide which occupational hazards will require an exhaustive assessment and which professional consultant (occupational physician, engineer, chemist, etc.) it is best to direct them to. PMID:19943445

  16. Evaluation of SLAR and thematic mapper MSS data for forest cover mapping using computer-aided analysis techniques

    NASA Technical Reports Server (NTRS)

    Hoffer, R. M. (Principal Investigator)

    1979-01-01

    The spatial characteristics of the data were evaluated. A program was developed to reduce the spatial distortions resulting from variable viewing distance, and geometrically adjusted data sets were generated. The potential need for some level of radiometric adjustment was evidenced by an along track band of high reflectance across different cover types in the Varian imagery. A multiple regression analysis was employed to explore the viewing angle effect on measured reflectance. Areas in the data set which appeared to have no across track stratification of cover type were identified. A program was developed which computed the average reflectance by column for each channel, over all of the scan lines in the designated areas. A regression analysis was then run using the first, second, and third degree polynomials, for each channel. An atmospheric effect as a component of the viewing angle source of variance is discussed. Cover type maps were completed and training and test field selection was initiated.

  17. Luminance uniformity compensation for OLED panels based on FPGA

    NASA Astrophysics Data System (ADS)

    Ou, Peng; Yang, Gang; Jiang, Quan; Yu, Jun-Sheng; Wu, Qi-Peng; Shang, Fu-Hai; Yin, Wei; Wang, Jun; Zhong, Jian; Luo, Kai-Jun

    2009-09-01

    Aiming at the problem of luminance uniformity in organic light-emitting diode (OLED) panels, a new brightness calculation method based on bilinear interpolation is proposed. The irradiance time needed for each pixel to reach the same luminance is computed in Matlab. Using a 64×32-pixel, single-color, passive-matrix OLED panel as the test panel for luminance uniformity adjustment, a new compensation circuit scheme based on an FPGA is designed. VHDL is used to program each pixel's irradiance time within one frame period. Each pixel's brightness is controlled by changing its irradiance time, and, finally, luminance compensation of the panel is realized. The simulation result indicates that the design is reasonable.
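
    The two steps above (bilinear interpolation of a luminance map, then per-pixel irradiance-time compensation) can be sketched as follows. The four corner luminances and the frame time are made-up values; the paper's actual measurement grid is not specified here.

```python
import numpy as np

def bilinear_map(corners, h, w):
    """Expand four measured corner luminances ((top-left, top-right),
    (bottom-left, bottom-right)) into a full h x w luminance map by
    bilinear interpolation."""
    (tl, tr), (bl, br) = corners
    y = np.linspace(0.0, 1.0, h)[:, None]
    x = np.linspace(0.0, 1.0, w)[None, :]
    top = tl + (tr - tl) * x
    bot = bl + (br - bl) * x
    return top + (bot - top) * y

def compensation_time(lum_map, frame_time=1.0):
    """Per-pixel on-time so every pixel integrates to the luminance of the
    dimmest pixel: brighter pixels get a shorter duty cycle, the scheme
    the abstract realises in FPGA/VHDL."""
    return frame_time * lum_map.min() / lum_map

# Hypothetical corner measurements for the 64x32 panel (h=32 rows, w=64).
lum = bilinear_map(((100.0, 120.0), (90.0, 110.0)), 32, 64)
t_on = compensation_time(lum)
```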

  18. Cue Combination of Conflicting Color and Luminance Edges.

    PubMed

    Sharman, Rebecca J; McGraw, Paul V; Peirce, Jonathan W

    2015-12-01

    Abrupt changes in the color or luminance of a visual image potentially indicate object boundaries. Here, we consider how these cues to the visual "edge" location are combined when they conflict. We measured the extent to which localization of a compound edge can be predicted from a simple maximum likelihood estimation model using the reliability of chromatic (L-M) and luminance signals alone. Maximum likelihood estimation accurately predicted the pattern of results across a range of contrasts. Predictions consistently overestimated the relative influence of the luminance cue; although L-M is often considered a poor cue for localization, it was used more than expected. This need not indicate that the visual system is suboptimal but that its priors about which cue is more useful are not flat. This may be because, although strong changes in chromaticity typically represent object boundaries, changes in luminance can be caused by either a boundary or a shadow. PMID:27551364
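
    The maximum likelihood estimation model referred to above reduces to inverse-variance weighting of the two single-cue location estimates. The numeric values in the usage comment are illustrative, not data from the paper.

```python
def mle_edge_location(x_lum, sigma_lum, x_chrom, sigma_chrom):
    """Reliability-weighted (maximum-likelihood) combination of a luminance
    and a chromatic edge-location estimate: each cue is weighted by its
    inverse variance. Returns the combined location and its standard
    deviation, which is never larger than either single cue's."""
    w_lum = 1.0 / sigma_lum ** 2
    w_chrom = 1.0 / sigma_chrom ** 2
    x = (w_lum * x_lum + w_chrom * x_chrom) / (w_lum + w_chrom)
    sigma = (w_lum + w_chrom) ** -0.5
    return x, sigma

# With conflicting cues at 0 (luminance, sigma=1) and 1 (chroma, sigma=2),
# the combined estimate lands nearer the more reliable luminance cue.
```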

  19. Cue Combination of Conflicting Color and Luminance Edges

    PubMed Central

    Sharman, Rebecca J; McGraw, Paul V

    2015-01-01

    Abrupt changes in the color or luminance of a visual image potentially indicate object boundaries. Here, we consider how these cues to the visual “edge” location are combined when they conflict. We measured the extent to which localization of a compound edge can be predicted from a simple maximum likelihood estimation model using the reliability of chromatic (L−M) and luminance signals alone. Maximum likelihood estimation accurately predicted the pattern of results across a range of contrasts. Predictions consistently overestimated the relative influence of the luminance cue; although L−M is often considered a poor cue for localization, it was used more than expected. This need not indicate that the visual system is suboptimal but that its priors about which cue is more useful are not flat. This may be because, although strong changes in chromaticity typically represent object boundaries, changes in luminance can be caused by either a boundary or a shadow. PMID:27551364

  20. Single luminal epithelial progenitors can generate prostate organoids in culture

    PubMed Central

    Chua, Chee Wai; Shibata, Maho; Lei, Ming; Toivanen, Roxanne; Barlow, LaMont J.; Bergren, Sarah K.; Badani, Ketan K.; McKiernan, James M.; Benson, Mitchell C.; Hibshoosh, Hanina; Shen, Michael M.

    2014-01-01

    The intrinsic ability to display self-organizing morphogenetic properties in ex vivo culture may represent a general property of tissue stem cells. Here we show that single luminal stem/progenitor cells can generate prostate organoids in a three-dimensional culture system in the absence of stroma. Organoids generated from CARNs (castration-resistant Nkx3.1-expressing cells) or normal prostate epithelium exhibit tissue architecture containing luminal and basal cells, undergo long-term expansion in culture, and display functional androgen receptor signaling. Lineage-tracing demonstrates that luminal cells are favored for organoid formation, and generate basal cells in culture. Furthermore, tumor organoids can initiate from CARNs after oncogenic transformation, and from mouse models of prostate cancer, and can facilitate analyses of drug response. Finally, we provide evidence supporting the feasibility of organoid studies of human prostate tissue. Our studies underscore the progenitor properties of luminal cells, and identify in vitro approaches for studying prostate biology. PMID:25241035

  1. High-luminance LEDs replace incandescent lamps in new applications

    NASA Astrophysics Data System (ADS)

    Evans, David L.

    1997-04-01

    The advent of high-luminance AlInGaP and InGaN LED technologies has prompted the use of LED devices in new applications formerly illuminated by incandescent lamps. The luminous efficiencies of these new LED technologies equal or exceed those attainable with incandescent sources, with reliability factors that far exceed those of incandescent sources. The need for a highly efficient, dependable, and cost-effective replacement for incandescent lamps is being fulfilled with high-luminance LED lamps. This paper briefly describes some of the new applications incorporating high-luminance LED lamps: traffic signals and roadway signs for traffic management, automotive exterior lighting, active-matrix and full-color displays for commercial advertising, commercial aircraft panel lighting, and NVG-compatible lighting for military aircraft.

  2. Air Heating Associated with Transient Luminous Events

    NASA Astrophysics Data System (ADS)

    Riousset, J. A.; Pasko, V. P.; Bourdon, A.

    2009-12-01

    The understanding of ambient gas heating processes initiated by needle-shaped filaments of ionization, called streamers, embedded in originally cold air (near room temperature) represents a long standing problem, which is of interest for studies of long laboratory sparks and natural lightning discharges [e.g., Gallimberti et al., C. R. Physique, 3, 1335, 2002]. The observed phenomenology of a subset of the recently observed transient luminous events in the middle atmosphere, which originate from thundercloud tops [e.g., Wescott et al., JGR, 106, 21549, 2001; Pasko et al., Nature, 416, 152, 2002; Su et al., Nature, 423, 974, 2003; Krehbiel et al., Nature Geoscience, 1, 233, 2008; Cummer et al., Nature Geoscience, 2, 617, 2009; Riousset et al., JGR, 10.1029/2009JA014286, 2009, in press], indicate that these events may be related to conventional lightning leader processes and therefore are associated with significant heating of the air in the regions of atmosphere through which they propagate [Pasko and George, JGR, 107, 1458, 2002]. Many of the small scale features observed in sprites at higher altitudes [e.g., Stenbaek-Nielsen et al., GRL, 104, L11105, 2007, and references therein] can be interpreted in terms of corona streamers, which, after appropriate scaling with air density, are fully analogous to those which initiate spark discharges in relatively short (several cm) gaps at near ground pressure [Liu et al., JGR, 114, A00E03, 2009, and references therein] and which constitute building blocks of streamer zones of conventional lightning leaders in long gaps [Gallimberti et al., 2002]. The recent reports of infrasound bursts originating from 60-80 km altitudes in sprites, with durations consistent with the optical widths of the sprites [e.g., Farges, in Lightning: Principles, Instruments and Applications, p. 417, Betz et al., (eds.), Springer, 2009], provide an additional motivation for studies of the heating of the ambient air and associated chemical effects

  3. Luminance controlled pupil size affects Landolt C task performance

    SciTech Connect

    Berman, S.M.; Fein, G.; Jewett, D.L.; Ashford, F.

    1993-02-01

    Subjects judged the orientation of a 2 min. gap Landolt C located at a distance of 2.4 m. The stimuli were presented in central vision on a CRT, at low to medium contrast. The effects of varying the spectrum and luminance of surround lighting were assessed on both pupil size (measured using infrared pupillometry during task performance) and task accuracy. The task display was protected from the surround lighting, so that its luminance and contrast could be varied independently of the changes in the surround lighting. Indirect surround illumination was provided by either two illuminants of very different scotopic spectral content but with the same photopic luminance (Experiments 1 and 3), or by using the same illuminant at two different luminance levels (Experiment 2). In Experiment 3, the effect of changing surround spectrum was compared to the effect of varying task background luminance between 12 cd/m^2 and 73 cd/m^2. In all experiments, scotopically enhanced surround lighting produced pupil areas which were reduced by almost 50% in comparison with surround lighting with relatively less scotopic luminance. Concomitantly there was improvement in Landolt C task performance with the scotopically enhanced surround lighting at all contrast and luminance levels. In these experiments, smaller pupil sizes were associated with significantly better visual-task performance in spite of lower task retinal illuminance when compared to the condition with larger pupils. These results suggest that changes in surround spectrum can compensate for the effect on task performance of a reduction in task luminance and supports the hypothesis that lighting energy savings could accrue in the workplace by shifting lamp spectra to obtain greater scotopic efficacy.

  4. Luminance controlled pupil size affects Landolt C task performance. Revision

    SciTech Connect

    Berman, S.M.; Fein, G.; Jewett, D.L.; Ashford, F.

    1993-02-01

    Subjects judged the orientation of a 2 min. gap Landolt C located at a distance of 2.4 m. The stimuli were presented in central vision on a CRT, at low to medium contrast. The effects of varying the spectrum and luminance of surround lighting were assessed on both pupil size (measured using infrared pupillometry during task performance) and task accuracy. The task display was protected from the surround lighting, so that its luminance and contrast could be varied independently of the changes in the surround lighting. Indirect surround illumination was provided by either two illuminants of very different scotopic spectral content but with the same photopic luminance (Experiments 1 and 3), or by using the same illuminant at two different luminance levels (Experiment 2). In Experiment 3, the effect of changing surround spectrum was compared to the effect of varying task background luminance between 12 cd/m² and 73 cd/m². In all experiments, scotopically enhanced surround lighting produced pupil areas which were reduced by almost 50% in comparison with surround lighting with relatively less scotopic luminance. Concomitantly there was improvement in Landolt C task performance with the scotopically enhanced surround lighting at all contrast and luminance levels. In these experiments, smaller pupil sizes were associated with significantly better visual-task performance in spite of lower task retinal illuminance when compared to the condition with larger pupils. These results suggest that changes in surround spectrum can compensate for the effect on task performance of a reduction in task luminance and support the hypothesis that lighting energy savings could accrue in the workplace by shifting lamp spectra to obtain greater scotopic efficacy.

  5. Luminal flow alteration in presence of the stent

    NASA Astrophysics Data System (ADS)

    Bernad, E. S.; Hudrea, C. I.; Bernad, S. I.; Totorean, A. F.; Bosioc, A. I.

    2015-12-01

    Luminally protruding struts alter blood flow, creating areas of recirculation, separation, and stagnation. The impact of flow alterations around struts varies as the strut geometrical parameters change. We quantified the influence of the luminal flow alterations due to the presence of the stent struts by performing two-dimensional numerical simulation. Idealized computer models can facilitate an understanding of in-stent restenosis that is impossible to achieve in vivo.

  6. Luminous Binary Supersoft X-Ray Sources

    NASA Technical Reports Server (NTRS)

    Oliversen, Ronald J. (Technical Monitor); DiStefano, Roseanne

    2005-01-01

    One of the key accomplishments of the two preceding years was our development of an algorithm to select SSSs in external galaxies which have been observed by Chandra or XMM-Newton. By applying this algorithm to data from a number of galaxies, we discovered an extension of the class of SSSs to sources that are somewhat harder (100 - 300 eV, instead of tens of eV), but which are nevertheless much softer than canonical X-ray sources. We call these new sources quasisoft sources (QSSs). During this past year, we have built on and extended this work. We have (1) continued to identify SSSs and QSSs in external galaxies, (2) worked on models for the sources and find that black hole models seem promising for a subset of them, and (3) studied individual systems, especially M101-ULX1. This special system has been observed as an SSS in its high state, with a luminosity in excess of 10(exp 41) erg/s. It has also been observed as a QSS when it is less luminous, and as a hard source in its low state. It is one of the best candidates to be an accreting intermediate-mass black hole. We have several papers in preparation. Below we list papers which are complete, including only new work and papers whose status has changed (e.g., been accepted for publication) since our last report. In addition, our work on QSSs has received some publicity. It was the subject of a Chandra press release and was picked up by several media outlets.

  7. FUSE Observations of Luminous Cool Stars

    NASA Astrophysics Data System (ADS)

    Dupree, A. K.; Young, P. R.; Ake, T. B.

    2000-12-01

    Luminous cool stars can address the evolution of magnetic activity and the dynamics of stellar winds and mass loss. The region of yellow supergiants in the HR diagram contains stars of intermediate mass both with coronas and those possessing a hot outer atmosphere in the presence of a strong wind (the ``hybrid'' stars). These hybrid objects hold particular significance for evolution studies because they represent the physically important connection between solar-like stars (with coronas and fast winds of low-mass loss rate) and the cool supergiant stars (Alpha Ori-like) with cool outer atmospheres and massive winds. The Far Ultraviolet Spectroscopic Explorer (FUSE) measured the chromospheric and transition region emissions of the bright G2 Ib supergiant Beta Draconis (HD 159181) on 9 May 2000. Two exposures through the large aperture totaled 7695 s and were obtained in all channels covering the region λλ 912-1180. Emission from chromospheric and transition region ions (C III, O VI, Si III, S IV, S VI) is detected along with a number of low ion stages. Profiles of strong lines are asymmetric suggesting the presence of a wind. A short exposure (3260 s) of Alpha Aquarii (HD 209750), a hybrid supergiant also of spectral type G2 Ib, was obtained June 29, 2000. Dynamics of the atmospheres can be inferred from line profiles. The atmospheric temperature distribution, densities, and scale sizes can be evaluated from line fluxes to characterize the differences between a coronal star and a hybrid supergiant. FUSE is a NASA Origins mission operated by The Johns Hopkins University. Funding for this research is provided through NASA Contract NAS-532985.

  8. Development of techniques for producing static strata maps and development of photointerpretive methods based on multitemporal LANDSAT data

    NASA Technical Reports Server (NTRS)

    Colwell, R. N. (Principal Investigator)

    1977-01-01

    The results and progress of work conducted in support of the Large Area Crop Inventory Experiment (LACIE) are documented. Research was conducted for two tasks. These tasks include: (1) evaluation of the UCB static stratification procedure and modification of that procedure if warranted; and (2) the development of alternative photointerpretive techniques to the present LACIE procedure for the identification and selection of training areas for machine-processing of LACIE segments.

  9. Ultrasonic attenuation maps of liver based on a conventional B-scan and an amplitude loss technique.

    PubMed

    Shmulewitz, A; Heyman, Z; Walach, E; Ramot, B; Itzchak, Y

    1990-10-01

    The authors present a novel ultrasonic amplitude-loss technique, based on image processing and designed for computation of local attenuation estimates. Three different estimation approaches were evaluated: the extended Prony, the maximum likelihood, and the least squares approaches. The latter two approaches were found to result in a much higher estimation error than that observed for the Prony method. The attenuation values in the normal population (49 subjects) were 0.44 +/- 0.03 dB/MHz/cm. Three hundred sixty-seven liver scans from 266 patients were evaluated. Hodgkin's lymphoma patients with liver involvement had attenuation values of 0.22 +/- 0.07 dB/MHz/cm. Low attenuation values also were observed for four patients with viral hepatitis (0.31 +/- 0.08 dB/MHz/cm). The detectability of other disease states was not increased by these global attenuation estimates; however, the results demonstrate possible potential uses for the proposed technique for the diagnosis of diffuse liver disease.
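    As a rough illustration of amplitude-loss attenuation estimation (a plain least-squares sketch, not the paper's extended Prony method): under a linear-with-frequency attenuation model, round-trip echo amplitude falls as A(z) = A0·10^(−α·f·2z/20), so α in dB/MHz/cm is recoverable from the slope of log-amplitude versus depth. The transducer frequency and amplitudes below are made up for the demonstration:

```python
import numpy as np

# Synthetic noiseless echo amplitudes from a homogeneous medium with
# alpha = 0.44 dB/MHz/cm (the paper's normal-liver mean) at f = 3.5 MHz.
alpha_true, f_mhz = 0.44, 3.5
depth_cm = np.linspace(1, 8, 50)
amp = 100.0 * 10 ** (-alpha_true * f_mhz * 2 * depth_cm / 20)

# Least-squares fit of 20*log10(amplitude) against depth:
# slope (dB/cm) = -2 * alpha * f  =>  alpha = -slope / (2 * f).
slope, _ = np.polyfit(depth_cm, 20 * np.log10(amp), 1)
alpha_est = -slope / (2 * f_mhz)
print(round(alpha_est, 2))  # -> 0.44
```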

  10. Photo-irradiation paradigm: Mapping a remarkable facile technique used for advanced drug, gene and cell delivery.

    PubMed

    Shaker, Mohamed A; Younes, Husam M

    2015-11-10

    Undoubtedly, the progression of photo-irradiation technique has provided a smart engineering tool for the state-of-the-art biomaterials that guide the biomedical and therapeutic domains for promoting the modern pharmaceutical industry. Many investigators had exploited such a potential technique to create/ameliorate numerous pharmaceutical carriers. These carriers show promising applications that vary from small drug to therapeutic protein delivery and from gene to living cell encapsulation design. Harmony between the properties of precisely engineered precursors and the formed network structure broadens the investigator's intellect for both brilliant creations and effective applications. As well, controlling photo-curing at the formulation level, through manipulating the absorption of light stimuli, photoinitiator system and photo-responsive precursor, facilitates the exploration of novel distinctive biomaterials. Discussion of utilizing different photo-curing procedures in designing/formulation of different pharmaceutical carriers is the main emphasis of this review. In addition, recent applications of these intelligent techniques in targeted, controlled, and sustained drug delivery with understanding of photo-irradiation concept and mechanism are illustrated.

  11. Dynamics of backlight luminance for using smartphone in dark environment

    NASA Astrophysics Data System (ADS)

    Na, Nooree; Jang, Jiho; Suk, Hyeon-Jeong

    2014-02-01

    This study developed dynamic backlight luminance, which gradually changes as time passes, for comfortable use of a smartphone display in a dark environment. The study was carried out in two stages. In the first stage, a user test was conducted to identify the optimal luminance by assessing the facial squint level, subjective glare evaluation, eye blink frequency and users' subjective preferences. Based on the results of the user test, the dynamics of backlight luminance was designed. It has two levels of luminance: the optimal level for initial viewing, to avoid sudden glare or fatigue to users' eyes, and the optimal level for constant viewing, which is comfortable but also bright enough for constant reading of the displayed material. The luminance for initial viewing starts from 10 cd/m², and it gradually increases to 40 cd/m² over 20 seconds for users' visual comfort at constant viewing. In the second stage, a validation test on the dynamics of backlight luminance was conducted to verify the effectiveness of the developed dynamics. The validation involved users' subjective preferences, eye blink frequency, and brainwave analysis using the electroencephalogram (EEG), and confirmed that the proposed dynamic backlighting enhances users' visual comfort and visual cognition, particularly when using smartphones in a dark environment.
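    The two-level dynamics described, starting at 10 cd/m² and reaching 40 cd/m² after 20 seconds, can be sketched as a simple ramp function; the linear interpolation is an assumption, since the abstract does not specify the transition curve:

```python
def backlight_luminance(t_s, initial=10.0, final=40.0, ramp_s=20.0):
    """Backlight luminance (cd/m^2) t_s seconds after the display turns on.

    Starts at the glare-safe initial level and rises to the
    constant-viewing level over ramp_s seconds (linear ramp assumed).
    """
    if t_s >= ramp_s:
        return final
    return initial + (final - initial) * (t_s / ramp_s)

print(backlight_luminance(0))   # -> 10.0
print(backlight_luminance(10))  # -> 25.0
print(backlight_luminance(30))  # -> 40.0
```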

  12. Evidence for color and luminance invariance of global form mechanisms.

    PubMed

    Rentzeperis, Ilias; Kiper, Daniel C

    2010-01-01

    Human visual cortex contains mechanisms that pool local orientation information over large areas of visual space to support percepts of global form. Initial studies concluded that some of these mechanisms are cue invariant, in that they yield form percepts irrespective of whether the visual signals contain luminance or chromatic information. Later studies reported that these mechanisms are chromatically selective, albeit with a broad tuning in color space. We used Glass patterns and the phenomenon of adaptation to determine whether Glass pattern perception is mediated by mechanisms that are color and/or luminance selective, or not. Subjects were adapted to either a radial or concentric Glass pattern of a given color or luminance polarity. We measured the effect of adaptation on subsequent detection of Glass patterns with the same or different visual attributes. Our results show that adapting to a concentric or radial pattern significantly elevates threshold for the subsequent detection of patterns of the same form, irrespective of their color or luminance polarity, but that adaptation to luminance leads to higher threshold elevations than adaptation to color. We conclude that Glass pattern perception is mediated by perceptual mechanisms that are color invariant but not totally insensitive to the difference between color and luminance information.

  13. Monitoring land-use change by combining participatory land-use maps with standard remote sensing techniques: Showcase from a remote forest catchment on Mindanao, Philippines

    NASA Astrophysics Data System (ADS)

    Mialhe, François; Gunnell, Yanni; Ignacio, J. Andres F.; Delbart, Nicolas; Ogania, Jenifer L.; Henry, Sabine

    2015-04-01

    This paper combines participatory activities (PA) with remote sensing analysis into an integrated methodology to describe and explain land-cover changes. A remote watershed on Mindanao (Philippines) is used to showcase the approach, which hypothesizes that the accuracy of expert knowledge gained from remote sensing techniques can be further enhanced by inputs from vernacular knowledge when attempting to understand complex land mosaics and past land-use changes. Six participatory sessions based on focus-group discussions were conducted. These were enhanced by community-based land-use mapping, resulting in a final total of 21 participatory land-use maps (PLUMs) co-produced by a sample of stakeholders with different sociocultural and ecological perspectives. In parallel, seven satellite images (Landsat MSS, Landsat TM, Landsat ETM+, and SPOT4) were classified following standard techniques and provided snapshots for the years 1976, 1996, and 2010. Local knowledge and collective memory contributed to define and qualify relevant land-use classes. This also provided information about what had caused the land-use changes in the past. Results show that combining PA with remote-sensing analysis provides a unique understanding of land-cover change because the two methods complement and validate one another. Substantive qualitative information regarding the chronology of land-cover change was obtained in a short amount of time across an area poorly covered by scientific literature. The remote sensing techniques contributed to test and to quantify verbal reports of land-use and land-cover change by stakeholders. We conclude that the method is particularly relevant to data-poor areas or conflict zones where rapid reconnaissance work is the only available option. It provides a preliminary but accurate baseline for capturing land changes and for reporting their causes and consequences. A discussion of the main challenges encountered (i.e. how to combine different systems of

  14. STEAM - Statistical Template Estimation for Abnormality Mapping: A personalized DTI analysis technique with applications to the screening of preterm infants.

    PubMed

    Booth, Brian G; Miller, Steven P; Brown, Colin J; Poskitt, Kenneth J; Chau, Vann; Grunau, Ruth E; Synnes, Anne R; Hamarneh, Ghassan

    2016-01-15

    We introduce the STEAM DTI analysis engine: a whole brain voxel-based analysis technique for the examination of diffusion tensor images (DTIs). Our STEAM analysis technique consists of two parts. First, we introduce a collection of statistical templates that represent the distribution of DTIs for a normative population. These templates include various diffusion measures from the full tensor, to fractional anisotropy, to 12 other tensor features. Second, we propose a voxel-based analysis (VBA) pipeline that is reliable enough to identify areas in individual DTI scans that differ significantly from the normative group represented in the STEAM statistical templates. We identify and justify choices in the VBA pipeline relating to multiple comparison correction, image smoothing, and dealing with non-normally distributed data. Finally, we provide a proof of concept for the utility of STEAM on a cohort of 134 very preterm infants. We generated templates from scans of 55 very preterm infants whose T1 MRI scans show no abnormalities and who have normal neurodevelopmental outcome. The remaining 79 infants were then compared to the templates using our VBA technique. We show: (a) that our statistical templates display the white matter development expected over the modeled time period, and (b) that our VBA results detect abnormalities in the diffusion measurements that relate significantly with both the presence of white matter lesions and with neurodevelopmental outcomes at 18 months. Most notably, we show that STEAM produces personalized results while also being able to highlight abnormalities across the whole brain and at the scale of individual voxels. While we show the value of STEAM on DTI scans from a preterm infant cohort, STEAM can be equally applied to other cohorts as well. To facilitate this whole-brain personalized DTI analysis, we made STEAM publicly available at http://www.sfu.ca/bgb2/steam. PMID:26515903
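    The core idea of comparing one scan against a normative template can be sketched generically: z-score each voxel against the template mean and standard deviation, then flag extreme voxels. This is an illustrative stand-in, not the STEAM pipeline itself; the fixed |z| > 4 cutoff is a crude placeholder for the multiple-comparison correction the paper actually discusses:

```python
import numpy as np

def voxelwise_abnormality(subject, template_mean, template_std, z_thresh=4.0):
    """Generic voxel-based analysis sketch: z-score one scan against a
    normative template and flag voxels beyond a conservative threshold."""
    z = (subject - template_mean) / template_std
    return z, np.abs(z) > z_thresh

rng = np.random.default_rng(0)
template_mean = np.zeros((8, 8, 8))   # e.g. per-voxel mean of a measure like FA
template_std = np.ones((8, 8, 8))     # e.g. per-voxel standard deviation
scan = rng.normal(size=(8, 8, 8))     # a "subject" drawn from the norm...
scan[0, 0, 0] = 6.0                   # ...with one injected abnormal voxel
z, flagged = voxelwise_abnormality(scan, template_mean, template_std)
print(bool(flagged[0, 0, 0]), int(flagged.sum()))
```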

  16. Energy source and pre-explosion progenitor evolution of Super-Luminous SNe

    NASA Astrophysics Data System (ADS)

    Margutti, Raffaella

    2013-10-01

    With bolometric peak luminosities L > 7x10^43 erg/s, the new class of Super-Luminous Supernovae (SLSNe) outshine ordinary SNe by more than a factor of 10 and represent the death of the most massive stars in our Universe. Their exceptional luminosity requires exotic explosion mechanisms and/or sources of energy whose nature is unclear. Here we propose a focused XMM program to map their X-ray emission down to unprecedented limits as part of our multi-wavelength effort through our programs on the EVLA (radio), Swift (UV) and optical/NIR facilities. The final aim is to: (i) Pin down the energy source of SLSNe; (ii) Map the diversity of the pre-explosion evolution of their progenitors.
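    For scale, the quoted peak luminosity converts to an absolute bolometric magnitude via the standard relation M = M_bol,sun − 2.5 log10(L/L_sun); the solar values below are the usual reference constants, not figures from the abstract:

```python
import math

L_SUN_ERG_S = 3.828e33   # nominal solar luminosity
M_BOL_SUN = 4.74         # solar absolute bolometric magnitude

def bolometric_magnitude(L_erg_s):
    """Absolute bolometric magnitude for a given luminosity in erg/s."""
    return M_BOL_SUN - 2.5 * math.log10(L_erg_s / L_SUN_ERG_S)

# The SLSN threshold quoted in the abstract, ~7e43 erg/s:
print(round(bolometric_magnitude(7e43), 1))  # -> -20.9
# A factor-of-10 fainter "ordinary" SN is exactly 2.5 magnitudes fainter:
print(round(bolometric_magnitude(7e42), 1))  # -> -18.4
```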

  17. Method for estimating potential wetland extent by utilizing streamflow statistics and flood-inundation mapping techniques: Pilot study for land along the Wabash River near Terre Haute, Indiana

    USGS Publications Warehouse

    Kim, Moon H.; Ritz, Christian T.; Arvin, Donald V.

    2012-01-01

    Potential wetland extents were estimated for a 14-mile reach of the Wabash River near Terre Haute, Indiana. This pilot study was completed by the U.S. Geological Survey in cooperation with the U.S. Department of Agriculture, Natural Resources Conservation Service (NRCS). The study showed that potential wetland extents can be estimated by analyzing streamflow statistics with the available streamgage data, calculating the approximate water-surface elevation along the river, and generating maps by use of flood-inundation mapping techniques. Planning successful restorations for Wetland Reserve Program (WRP) easements requires a determination of areas that show evidence of being in a zone prone to sustained or frequent flooding. Zone determinations of this type are used by WRP planners to define the actively inundated area and make decisions on restoration-practice installation. According to WRP planning guidelines, a site needs to show evidence of being in an "inundation zone" that is prone to sustained or frequent flooding for a period of 7 consecutive days at least once every 2 years on average in order to meet the planning criteria for determining a wetland for a restoration in agricultural land. By calculating the annual highest 7-consecutive-day mean discharge with a 2-year recurrence interval (7MQ2) at a streamgage on the basis of available streamflow data, one can determine the water-surface elevation corresponding to the calculated flow that defines the estimated inundation zone along the river. By using the estimated water-surface elevation ("inundation elevation") along the river, an approximate extent of potential wetland for a restoration in agricultural land can be mapped. 
As part of the pilot study, a set of maps representing the estimated potential wetland extents was generated in a geographic information system (GIS) application by combining (1) a digital water-surface plane representing the surface of inundation elevation that sloped in the downstream
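    The 7MQ2 statistic described above, the annual highest 7-consecutive-day mean discharge with a 2-year recurrence interval, can be sketched from a daily discharge record; under a simple plotting-position view, a 2-year recurrence corresponds to the median of the annual series. The synthetic flow record below is invented purely for the demonstration:

```python
import numpy as np

def annual_max_7day_mean(daily_q, days_per_year=365):
    """Highest 7-consecutive-day mean discharge for each year of record."""
    maxima = []
    for start in range(0, len(daily_q) - days_per_year + 1, days_per_year):
        year = np.asarray(daily_q[start:start + days_per_year], dtype=float)
        rolling7 = np.convolve(year, np.ones(7) / 7, mode="valid")
        maxima.append(rolling7.max())
    return np.array(maxima)

def seven_m_q2(daily_q):
    """7MQ2: the annual-max 7-day mean flow exceeded, on average, once
    every 2 years -- i.e. the median of the annual series."""
    return float(np.median(annual_max_7day_mean(daily_q)))

# Synthetic 10-year record: constant baseflow plus one spring flood pulse.
rng = np.random.default_rng(1)
record = []
for year in range(10):
    q = np.full(365, 100.0)
    q[90:104] += rng.uniform(400, 900)   # two-week flood each spring
    record.extend(q)
q7_2 = seven_m_q2(record)
print(round(q7_2, 1))
```

The resulting discharge would then be converted to a water-surface elevation (e.g., from a stage-discharge rating) to map the inundation zone.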

  18. Luminal flow modulates H+-ATPase activity in the cortical collecting duct (CCD).

    PubMed

    Liu, Wen; Pastor-Soler, Núria M; Schreck, Carlos; Zavilowitz, Beth; Kleyman, Thomas R; Satlin, Lisa M

    2012-01-01

    Epithelial Na(+) channel (ENaC)-mediated Na(+) absorption and BK channel-mediated K(+) secretion in the cortical collecting duct (CCD) are modulated by flow, the latter requiring an increase in intracellular Ca(2+) concentration ([Ca(2+)](i)), microtubule integrity, and exocytic insertion of preformed channels into the apical membrane. As axial flow modulates HCO(3)(-) reabsorption in the proximal tubule due to changes in both luminal Na(+)/H(+) exchanger 3 and H(+)-ATPase activity (Du Z, Yan Q, Duan Y, Weinbaum S, Weinstein AM, Wang T. Am J Physiol Renal Physiol 290: F289-F296, 2006), we sought to test the hypothesis that flow also regulates H(+)-ATPase activity in the CCD. H(+)-ATPase activity was assayed in individually identified cells in microperfused CCDs isolated from New Zealand White rabbits, loaded with the pH-sensitive dye BCECF, and then subjected to an acute intracellular acid load (NH(4)Cl prepulse technique). H(+)-ATPase activity was defined as the initial rate of bafilomycin-inhibitable cell pH (pH(i)) recovery in the absence of luminal K(+), bilateral Na(+), and CO(2)/HCO(3)(-), from a nadir pH of ∼6.2. We found that 1) an increase in luminal flow rate from ∼1 to 5 nl·min(-1)·mm(-1) stimulated H(+)-ATPase activity, 2) flow-stimulated H(+) pumping was Ca(2+) dependent and required microtubule integrity, and 3) basal and flow-stimulated pH(i) recovery was detected in cells that labeled with the apical principal cell marker rhodamine Dolichos biflorus agglutinin as well as cells that did not. We conclude that luminal flow modulates H(+)-ATPase activity in the rabbit CCD and that H(+)-ATPases therein are present in both principal and intercalated cells. PMID:21957178
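    The activity measure used above, the initial rate of bafilomycin-inhibitable pH(i) recovery, can be sketched as the difference of two initial slopes fit to recovery traces; the exponential traces, time constants, and 30 s fitting window below are invented for illustration, not the study's data:

```python
import numpy as np

def initial_recovery_rate(t_s, ph_i, window_s=30.0):
    """Initial dpH_i/dt (pH units/s): slope of a least-squares line
    fit over the first window_s seconds of the recovery trace."""
    sel = t_s <= t_s[0] + window_s
    slope, _ = np.polyfit(t_s[sel], ph_i[sel], 1)
    return slope

# Synthetic BCECF traces recovering from the ~6.2 nadir toward pH 7.2;
# H+-ATPase activity = total rate minus the bafilomycin-insensitive rate.
t = np.linspace(0, 300, 301)
control = 7.2 - 1.0 * np.exp(-t / 120)       # untreated recovery
bafilomycin = 7.2 - 1.0 * np.exp(-t / 600)   # pump-inhibited recovery
pump_rate = initial_recovery_rate(t, control) - initial_recovery_rate(t, bafilomycin)
print(round(pump_rate, 4))
```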

  19. Water Through Life, A New Technique for Mapping Shallow Water Tables in Arid and Semi-Arid Climates using Color Infrared Aerial Photographs

    NASA Astrophysics Data System (ADS)

    Fielding, G.

    2003-04-01

    Two of the fundamental issues in water resources in arid regions are (1) the ability to accurately predict the presence of groundwater shallow enough to support riparian ecosystems and (2) the ability to assess the vulnerability of those ecosystems to withdrawals by an ever-increasing human population. A new technique for finding areas of shallow groundwater in arid and semi-arid environments has been developed that addresses both of these concerns by using the relative health of natural vegetation as an indicator of perennial shallow groundwater in environments where water is the main biolimiting factor to growth. The technique revolves around the differences in the spectral response between: actively transpiring vegetation; dormant vegetation; and dry, bare soil in the 400-900 nm range as recorded by color infrared film in the dry pre-monsoon months. Distilling out only the active vegetation from aerial photographs was achieved through the creation of an index-based filter using readily available, inexpensive photo processing software. The output of the filter was carefully designed to maximize the qualitative interpretability by an analyst through the careful selection of display colors that are tuned to the maximum sensitivity range of the human vision system. When the analyst combines the qualitative interpretation of the spatial distribution of active vegetation with an understanding of the rooting depth of the native species it becomes possible to extrapolate a quantitative, basin-scale reconnaissance level map which defines the lateral extent of areas of shallow (<20 m, ±5 m) groundwater with a spatial accuracy of just a few meters. The research plan for the development of this technique also explored the potential for conducting the entire analysis procedure in three dimensions by projecting the filtered aerial photographs onto 10m resolution Digital Elevation Models (DEMs). When this is done and the geomorphology of the region is carefully considered the
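    The index-based filter described, separating actively transpiring vegetation from dormant vegetation and bare soil in color-infrared imagery, resembles a standard NDVI-style band ratio (bright in near-infrared, dark in red). The sketch below uses that standard ratio and a made-up threshold as stand-ins, since the text does not give the author's exact index:

```python
import numpy as np

def active_vegetation_mask(nir, red, threshold=0.3):
    """NDVI-style filter: in CIR photos, healthy transpiring vegetation
    reflects strongly in NIR and absorbs red; soil and dormant plants
    do not, so their index values stay low."""
    nir = nir.astype(float)
    red = red.astype(float)
    ndvi = (nir - red) / (nir + red + 1e-9)  # guard against divide-by-zero
    return ndvi > threshold

# Toy 2x2 scene: [active veg, dry soil; dormant veg, active veg]
nir = np.array([[200.0, 120.0], [110.0, 180.0]])
red = np.array([[40.0, 100.0], [90.0, 50.0]])
print(active_vegetation_mask(nir, red))
```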

  20. Comparative study of software techniques for 3D mapping of perforators in deep inferior epigastric artery perforator flap planning

    PubMed Central

    Hunter-Smith, David J.; Rozen, Warren Matthew

    2016-01-01

    Background Computed tomographic (CT) angiography (CTA) is widely considered the gold standard imaging modality for preoperative planning autologous breast reconstruction with deep inferior epigastric artery (DIEA) perforator (DIEP) flap. Improved anatomical understanding from CTA has translated to enhanced clinical outcomes. To achieve this, the use of appropriate CT hardware and software is vital. Various CT scanners and contrast materials have been demonstrated to consistently produce adequate scan data. However, the availability of affordable and easily accessible imaging software capable of generating 3D volume-rendered perforator images to clinically useful quality has been lacking. Osirix (Pixmeo, Geneva, Switzerland) is a free, readily available medical image processing software that shows promise. We have previously demonstrated in a case report the usefulness of Osirix in localizing perforators and their course. Methods In the current case series of 50 consecutive CTA scans, we compare the accuracy of Osirix to a commonly used proprietary 3D imaging software, Siemens Syngo InSpace 4D (Siemens, Erlangen, Germany), in identifying perforator number and location. Moreover, we compared both programs to intraoperative findings. Results We report a high rate of concordance with Osirix and Siemens Syngo InSpace 4D (99.6%). Both programs correlated closely with operative findings (92.2%). Most of the discrepancies were found in the lateral row perforators (90%). Conclusions In the current study, we report the accuracy of Osirix that is comparable to Siemens Syngo InSpace 4D, a proprietary software, in mapping perforators. However, it provides an added advantage of being free, easy-to-use, portable, and potentially a superior quality of 3D reconstructed image. PMID:27047778

  1. Beckwith–Wiedemann syndrome and uniparental disomy 11p: fine mapping of the recombination breakpoints and evaluation of several techniques

    PubMed Central

    Romanelli, Valeria; Meneses, Heloisa N M; Fernández, Luis; Martínez-Glez, Victor; Gracia-Bouthelier, Ricardo; F Fraga, Mario; Guillén, Encarna; Nevado, Julián; Gean, Esther; Martorell, Loreto; Marfil, Victoria Esteban; García-Miñaur, Sixto; Lapunzina, Pablo

    2011-01-01

    Beckwith–Wiedemann syndrome (BWS) is a phenotypically and genotypically heterogeneous overgrowth syndrome characterized by somatic overgrowth, macroglossia and abdominal wall defects. Other usual findings are hemihyperplasia, embryonal tumours, adrenocortical cytomegaly, ear anomalies, visceromegaly, renal abnormalities, neonatal hypoglycaemia, cleft palate, polydactyly and a positive family history. BWS is a complex, multigenic disorder associated, in up to 90% of patients, with alteration in the expression or function of one or more genes in the 11p15.5 imprinted gene cluster. There are several molecular anomalies associated with BWS, and a large proportion of cases, about 85%, is sporadic and karyotypically normal. One of the major categories of BWS molecular alteration (10–20% of cases) is represented by mosaic paternal uniparental disomy (pUPD), namely patients with two paternally derived copies of chromosome 11p15 and no maternal contribution. In these patients, in addition to the effects of IGF2 overexpression, a decreased level of the maternally expressed gene CDKN1C may contribute to the BWS phenotype. In this paper, we reviewed a series of nine patients with BWS because of pUPD using several methods with the aim to evaluate the percentage of mosaicism, the methylation status at both loci, the extension of the pUPD at the short arm and the breakpoints of recombination. Fine mapping of mitotic recombination breakpoints by single-nucleotide polymorphism-array in individuals with UPD and fine estimation of epigenetic defects will provide a basis for understanding the aetiology of BWS, allowing more accurate prognostic predictions and facilitating management and surveillance of individuals with this disorder. PMID:21248736

  2. MAP3K1

    PubMed Central

    Pham, Trang T.; Angus, Steven P.

    2013-01-01

    MAP3K1 is a member of the mitogen-activated protein kinase kinase kinase (MAP3K) family of serine/threonine kinases. MAP3K1 regulates JNK activation and is unique among human kinases in that it also encodes an E3 ligase domain that ubiquitylates c-Jun and ERK1/2. Full length MAP3K1 regulates cell migration and contributes to pro-survival signaling while its caspase 3-mediated cleavage generates a C-terminal kinase domain that promotes apoptosis. The critical function of MAP3K1 in cell fate decisions suggests that it may be a target for deregulation in cancer. Recent large-scale genomic studies have revealed that MAP3K1 copy number loss and somatic missense or nonsense mutations are observed in a significant number of different cancers, being most prominent in luminal breast cancer. The alteration of MAP3K1 in diverse cancer types demonstrates the importance of defining phenotypes for possible therapeutic targeting of tumor cell vulnerabilities created when MAP3K1 function is lost or gained. PMID:24386504

  3. Contour interaction for foveal acuity targets at different luminances.

    PubMed

    Bedell, Harold E; Siderov, John; Waugh, Sarah J; Zemanová, Romana; Pluháček, František; Musilová, Lenka

    2013-08-30

    Single-letter visual acuity is impaired by nearby flanking stimuli, a phenomenon known as contour interaction. We showed previously that when foveal acuity is degraded by a reduction of letter contrast, both the magnitude and angular spatial extent of foveal contour interaction remain unchanged. In this study, we asked whether contour interaction also remains unchanged when foveal visual acuity is degraded by a reduction of the target's background luminance. Percent correct letter identification was measured for isolated, near-threshold black Sloan letters and for letters surrounded by 4 flanking bars in 10 normal observers, 5 at Anglia Ruskin University, UK (ARU) and 5 at Palacky University, Czech Republic (PU). A stepwise reduction in the background luminance over 3 log units resulted in an approximately threefold increase in the near-threshold letter size. At each background luminance, black flanking bars with a width equal to 1 letter stroke were presented at separations between approximately 0.45 and 4.5 min arc (ARU) or 0.32 and 3.2 min arc (PU). The results indicate that the angular extent of contour interaction remains unchanged at approximately 4 min arc at all background luminances. On the other hand, the magnitude of contour interaction decreases systematically as luminance is reduced, from approximately a 50% reduction to a 30% reduction in percent correct. The constant angular extent and decreasing magnitude of contour interaction with a reduction of background luminance suggest foveal contour interaction is mediated by luminance-dependent lateral inhibition within a fixed angular region.

  4. Preference for luminance histogram regularities in natural scenes.

    PubMed

    Graham, Daniel; Schwarz, Bianca; Chatterjee, Anjan; Leder, Helmut

    2016-03-01

    Natural scene luminance distributions typically have positive skew, and for single objects, there is evidence that higher skew is a correlate (but not a guarantee) of glossiness. Skewness is also relevant to aesthetics: preference for glossy single objects (with high skew) has been shown even in infants, and skewness is a good predictor of fruit freshness. Given that primate vision appears to efficiently encode natural scene luminance variation, and given evidence that natural scene regularities may be a prerequisite for aesthetic perception in the spatial domain, here we ask whether humans in general prefer natural scenes with more positively skewed luminance distributions. If humans generally prefer images with the higher-order regularities typical of natural scenes and/or shiny objects, we would expect this to be the case. By manipulating luminance distribution skewness (holding mean and variance constant) for individual natural images, we show that in fact preference varies inversely with increasing positive skewness. This finding holds for: artistic landscape images and calibrated natural scenes; scenes with and without glossy surfaces; landscape scenes and close-up objects; and noise images with natural luminance histograms. Across conditions, humans prefer images with skew near zero over higher skew images, and they prefer skew lower than that of the unmodified scenes. These results suggest that humans prefer images with luminances that are distributed relatively evenly about the mean luminance, i.e., images with similar amounts of light and dark. We propose that our results reflect an efficient processing advantage of low-skew images over high-skew images, following evidence from prior brain imaging results.
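    As a rough illustration of manipulating skewness while holding the first and second moments fixed (this is not the authors' actual procedure), one can remap pixel values with a power law and then restore the original mean and standard deviation; `reskew` and its `gamma` parameter are hypothetical names:

```python
import math

def moments(pixels):
    """Mean, standard deviation and skewness of a luminance sample."""
    n = len(pixels)
    mean = sum(pixels) / n
    var = sum((p - mean) ** 2 for p in pixels) / n
    sd = math.sqrt(var)
    skew = sum(((p - mean) / sd) ** 3 for p in pixels) / n
    return mean, sd, skew

def reskew(pixels, gamma):
    """Change skewness via a power-law remap, then restore the original
    mean and standard deviation (hypothetical approach for illustration)."""
    mean, sd, _ = moments(pixels)
    lo, hi = min(pixels), max(pixels)
    unit = [(p - lo) / (hi - lo) for p in pixels]      # rescale to [0, 1]
    bent = [u ** gamma for u in unit]                  # gamma > 1 raises skew
    m2, s2, _ = moments(bent)
    return [(b - m2) / s2 * sd + mean for b in bent]   # match mean/variance
```

    With `gamma > 1` the remap pushes most values toward the dark end and leaves a sparse bright tail, increasing positive skew; `gamma < 1` works in the opposite direction. The final linear rescaling leaves skewness unchanged while restoring the mean and variance.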

  5. Integrating pan-sharpening and classifier ensemble techniques to map an invasive plant (Spartina alterniflora) in an estuarine wetland using Landsat 8 imagery

    NASA Astrophysics Data System (ADS)

    Ai, Jinquan; Gao, Wei; Gao, Zhiqiang; Shi, Runhe; Zhang, Chao; Liu, Chaoshun

    2016-04-01

    Accurate mapping of invasive species in a cost-effective way is the first step toward understanding and predicting the impact of their invasions. However, it is challenging in coastal wetlands due to confounding effects of biodiversity and tidal effects on spectral reflectance. The aim of this work is to describe a method to improve the accuracy of mapping an invasive plant (Spartina alterniflora), based on the integration of pan-sharpening and classifier ensemble techniques. A framework was designed to achieve this goal. Five candidate image fusion algorithms, including the principal component analysis fusion algorithm, modified intensity-hue-saturation fusion algorithm, wavelet-transform fusion algorithm, Ehlers fusion algorithm, and Gram-Schmidt fusion algorithm, were applied to pan-sharpen Landsat 8 Operational Land Imager (OLI) imagery. We assessed the five fusion algorithms with respect to spectral and spatial fidelity using visual inspection and quantitative quality indicators. The optimal fused image was selected for subsequent analysis. Then, three classifiers, namely, maximum likelihood, artificial neural network, and support vector machine, were employed to preclassify the fused and raw OLI 30-m band images. Final object-based S. alterniflora maps were generated through classifier ensemble analysis of the outcomes from the three classifiers. The results showed that the introduced method obtained high classification accuracy, with an overall accuracy of 90.96%, and balanced misclassification errors between S. alterniflora and its coexistent species. We recommend that future research adopt the proposed method for monitoring long-term or multiseasonal changes in land coverage of invasive wetland plants.
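    The classifier-ensemble step can be sketched as a per-pixel plurality vote over the three classifiers' label maps. The vote rule and the labels below are illustrative assumptions; the abstract does not specify the actual ensemble rule used:

```python
from collections import Counter

def ensemble_vote(label_maps):
    """Per-pixel plurality vote across classifier label maps.
    Ties go to the earliest classifier whose label is among the
    tied winners (a simple convention, assumed here)."""
    fused = []
    for labels in zip(*label_maps):            # one label per classifier
        counts = Counter(labels)
        top = counts.most_common(1)[0][1]
        winners = [l for l in labels if counts[l] == top]
        fused.append(winners[0])               # first-seen tied winner
    return fused

# Hypothetical per-pixel labels from the MLC, ANN and SVM classifiers
mlc = ["spartina", "water", "mudflat", "spartina"]
ann = ["spartina", "water", "spartina", "reed"]
svm = ["reed",     "water", "spartina", "spartina"]
print(ensemble_vote([mlc, ann, svm]))
# → ['spartina', 'water', 'spartina', 'spartina']
```

    Majority voting suppresses the idiosyncratic errors of any single classifier, which is one plausible reason the ensemble balances misclassification between S. alterniflora and coexistent species.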

  6. Spatiotemporal Mapping of Motility in Ex Vivo Preparations of the Intestines

    PubMed Central

    Kendig, Derek M.; Hurst, Norm R.; Grider, John R.

    2016-01-01

    Multiple approaches have been used to record and evaluate gastrointestinal motility, including recording changes in muscle tension, intraluminal pressure, and membrane potential. All of these approaches depend on measurement of activity at one or multiple locations along the gut simultaneously, which is then interpreted to provide a sense of overall motility patterns. Recently, the development of video recording and spatiotemporal mapping (STmap) techniques has made it possible to observe and analyze complex patterns in ex vivo whole segments of colon and intestine. Once recorded and digitized, video records can be converted to STmaps in which the luminal diameter is converted to grayscale or color [called diameter maps (Dmaps)]. STmaps can provide data on motility direction (i.e., stationary, peristaltic, antiperistaltic), velocity, duration, frequency and strength of contractile motility patterns. Advantages of this approach include: analysis of the interaction or simultaneous development of different motility patterns in different regions of the same segment, visualization of motility pattern changes over time, and analysis of how activity in one region influences activity in another region. Video recordings can be replayed with different timescales and analysis parameters so that separate STmaps and motility patterns can be analyzed in more detail. This protocol specifically details the effects of intraluminal fluid distension and intraluminal stimuli that affect motility generation. The use of luminal receptor agonists and antagonists provides mechanistic information on how specific patterns are initiated and how one pattern can be converted into another. The technique is limited in that it only measures motility that causes changes in luminal diameter, providing no data on intraluminal pressure changes or muscle tension, and in that the experimental setup can generate artifacts; analysis methods, however, can account for these issues.
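    The core of the Dmap conversion — luminal diameter to grayscale — can be sketched as follows. The function name, the 0–255 scaling convention, and the toy frames are all hypothetical, not taken from the protocol:

```python
def diameter_map(frames):
    """Convert per-frame diameter profiles (one value per position along
    the gut) into a grayscale spatiotemporal map: rows = time, columns =
    position, 0 = narrowest (most contracted), 255 = widest."""
    flat = [d for profile in frames for d in profile]
    lo, hi = min(flat), max(flat)
    span = hi - lo or 1.0                       # guard: flat recording
    return [[round(255 * (d - lo) / span) for d in profile]
            for profile in frames]

# Three video frames of a hypothetical segment, 4 positions each (mm)
frames = [[3.0, 3.0, 3.0, 3.0],
          [3.0, 1.5, 3.0, 3.0],    # contraction appears at position 1
          [3.0, 3.0, 1.5, 3.0]]    # contraction propagates to position 2
dmap = diameter_map(frames)
```

    In a map built this way, a propagating (peristaltic) contraction shows up as a dark diagonal band, while a stationary contraction is a dark vertical stripe.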

  7. [Reduction of accompanied deployment failure artifacts using a thick slice sensitivity map to assist with array spatial sensitivity encoding technique imaging].

    PubMed

    Fujita, Takushi; Watanabe, Takayuki; Kihara, Shou

    2014-08-01

    The array spatial sensitivity encoding technique (ASSET) is a form of parallel imaging (PI). Another scan, called a reference scan (Ref), is required before acquiring the ASSET image. However, artifacts tend to interfere with recognition of the object due to differences between the ASSET scan and the Ref scan. This is because the liver is in contact with the diaphragm, making it susceptible to respiratory motion. Further, since the upper edge of the liver is surrounded by the lung field, there is a risk of susceptibility artifacts. The purpose of this study was to reduce accompanied deployment failure artifacts using a thick slice sensitivity map. Our data showed it was possible to reduce accompanied deployment failure artifacts, as well as to suppress noise, by increasing the slice thickness of the Ref scan.

  8. Transient liquid-crystal technique used to produce high-resolution convective heat-transfer-coefficient maps

    NASA Technical Reports Server (NTRS)

    Hippensteele, Steven A.; Poinsatte, Philip E.

    1993-01-01

    In this transient technique the preheated isothermal model wall simulates the classic one-dimensional, semi-infinite wall heat transfer conduction problem. By knowing the temperature of the air flowing through the model, the initial temperature of the model wall, and the surface cooling rate measured at any location with time (using the fast-response liquid-crystal patterns recorded on video tape), the heat transfer coefficient can be calculated for the color isothermal pattern produced. Although the test was run transiently, the heat transfer coefficients are for the steady-state case. The upstream thermal boundary condition was considered to be isothermal. This transient liquid-crystal heat-transfer technique was used in a transient air tunnel in which a square-inlet, 3-to-1 exit transition duct was placed. The duct was preheated prior to allowing room temperature air to be suddenly drawn through it. The resulting isothermal contours on the duct surfaces were revealed using a surface coating of thermochromic liquid crystals that display distinctive colors at particular temperatures. A video record was made of the temperature and time data for all points on the duct surfaces during each test. The duct surfaces were uniformly heated using two heating systems: the first was an automatic temperature-controlled heater blanket completely surrounding the test duct like an oven, and the second was an internal hot-air loop through the inside of the test duct. The hot-air loop path was confined inside the test duct by insulated heat dams located at the inlet and exit ends of the test duct. A recirculating fan moved hot air into the duct inlet, through the duct, out of the duct exit, through the oven, and back to the duct inlet. The temperature nonuniformity of the test duct model wall was held very small. Test results are reported for two inlet Reynolds numbers of 200,000 and 1,150,000 (based on the square-inlet hydraulic diameter) and two free-stream turbulence intensities.
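    The semi-infinite-wall relation behind this technique can be inverted numerically for the heat transfer coefficient. The sketch below uses the classic surface-temperature solution θ = 1 − exp(β²)·erfc(β) with β = h√(αt)/k; the wall properties and color-change timing are hypothetical values, not from the experiment:

```python
import math

def h_from_transient(theta, t, k, alpha):
    """Solve the semi-infinite-wall surface response
        theta = 1 - exp(beta**2) * erfc(beta),  beta = h*sqrt(alpha*t)/k,
    for the heat transfer coefficient h (W/m^2/K) by bisection.
    theta is the normalized surface temperature change at time t (s);
    k (W/m/K) and alpha (m^2/s) are the wall's conductivity/diffusivity."""
    f = lambda beta: 1.0 - math.exp(beta * beta) * math.erfc(beta) - theta
    lo, hi = 1e-9, 20.0                      # f is monotonic in beta
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    beta = 0.5 * (lo + hi)
    return beta * k / math.sqrt(alpha * t)

# Hypothetical acrylic wall (k = 0.19 W/m/K, alpha = 1.1e-7 m^2/s) whose
# liquid crystals change color at t = 12 s with theta = 0.4
h = h_from_transient(0.4, 12.0, 0.19, 1.1e-7)
```

    This is why a single timed color-change event at each surface point suffices: with the wall properties known, the observed (θ, t) pair pins down h for that point.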

  9. Assessment of Water Quality in a Subtropical Alpine Lake Using Multivariate Statistical Techniques and Geostatistical Mapping: A Case Study

    PubMed Central

    Liu, Wen-Cheng; Yu, Hwa-Lung; Chung, Chung-En

    2011-01-01

    Concerns about the water quality in Yuan-Yang Lake (YYL), a shallow, subtropical alpine lake located in north-central Taiwan, has been rapidly increasing recently due to the natural and anthropogenic pollution. In order to understand the underlying physical and chemical processes as well as their associated spatial distribution in YYL, this study analyzes fourteen physico-chemical water quality parameters recorded at the eight sampling stations during 2008–2010 by using multivariate statistical techniques and a geostatistical method. Hierarchical clustering analysis (CA) is first applied to distinguish the three general water quality patterns among the stations, followed by the use of principle component analysis (PCA) and factor analysis (FA) to extract and recognize the major underlying factors contributing to the variations among the water quality measures. The spatial distribution of the identified major contributing factors is obtained by using a kriging method. Results show that four principal components i.e., nitrogen nutrients, meteorological factor, turbidity and nitrate factors, account for 65.52% of the total variance among the water quality parameters. The spatial distribution of principal components further confirms that nitrogen sources constitute an important pollutant contribution in the YYL. PMID:21695032
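    The PCA step can be illustrated by extracting the first principal component of standardized measurements via power iteration on their covariance matrix. The station data below are invented for illustration, not the YYL measurements, and the clustering, factor analysis and kriging steps are not reproduced:

```python
import math

def standardize(cols):
    """Z-score each variable (population standard deviation)."""
    out = []
    for col in cols:
        m = sum(col) / len(col)
        s = math.sqrt(sum((x - m) ** 2 for x in col) / len(col))
        out.append([(x - m) / s for x in col])
    return out

def first_pc(cols, iters=500):
    """Loadings of the first principal component, found by power
    iteration on the covariance (= correlation) matrix."""
    z = standardize(cols)
    p, n = len(z), len(z[0])
    cov = [[sum(z[i][k] * z[j][k] for k in range(n)) / n
            for j in range(p)] for i in range(p)]
    v = [1.0] * p
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(p)) for i in range(p)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return v

# Hypothetical stations: nitrate and ammonium co-vary, turbidity does not
nitrate   = [1.0, 2.0, 3.0, 4.0, 5.0]
ammonium  = [1.1, 2.1, 2.9, 4.2, 4.8]
turbidity = [3.0, 1.0, 4.0, 1.5, 3.5]
loadings = first_pc([nitrate, ammonium, turbidity])
```

    Variables that co-vary (here the two nitrogen species) load heavily on the same component, which is how PCA groups fourteen raw parameters into a few interpretable factors such as "nitrogen nutrients".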

  10. Luminal Ca2+ dynamics during IP3R mediated signals

    NASA Astrophysics Data System (ADS)

    Lopez, Lucia F.; Ponce Dawson, Silvina

    2016-06-01

    The role of cytosolic Ca2+ in the kinetics of inositol 1,4,5-trisphosphate receptors (IP3Rs) and in the dynamics of IP3R-mediated Ca2+ signals has been studied extensively, both experimentally and by modeling. The role of luminal Ca2+ has not been investigated in as much detail, although it has been found to be relevant for signal termination in the case of Ca2+ release through ryanodine receptors. In this work we present the results of observing the dynamics of luminal and cytosolic Ca2+ simultaneously in Xenopus laevis oocytes. Combining observations and modeling, we conclude that there is a rapid mechanism that guarantees the availability of free Ca2+ in the lumen even when a relatively large Ca2+ release is evoked. Comparing the dynamics of cytosolic and luminal Ca2+ during a release, we estimate that they are consistent with 80% of luminal Ca2+ being buffered. The rapid availability of free luminal Ca2+ correlates with the observation that the lumen occupies a considerable volume in several regions across the images.

  11. Compensation of crosstalk of chrominance coding errors into the luminance

    NASA Astrophysics Data System (ADS)

    Breeuwer, Marcel M.; Beuker, Rob A.

    1995-02-01

    Component-based color video signals usually consist of one luminance (Y) and two chrominance or color-difference (U and V) components, which are obtained by multiplying the R, G and B components produced by the video camera by a 3 X 3 matrix. In order to compensate for the nonlinearity of the TV monitor on which the video signal will be displayed, the R, G, and B signals are usually first gamma-corrected before the matrix operation is applied. Due to this gamma correction, the Y component does not represent exactly the real luminance L of the recorded scene, and part of the real luminance information is carried by the U and V components. The introduction of errors into these chrominance components by, for example, video coding will therefore lead to perceivable errors in the luminance produced by the TV monitor on which the coded signal is eventually displayed. In this paper, we present a simple but effective method for avoiding this crosstalk of chrominance errors into the luminance. This method can be incorporated in most compression systems.
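    The crosstalk itself is easy to demonstrate numerically: keep Y′ fixed, perturb a chrominance component, and the displayed linear-light luminance changes. The sketch below assumes Rec. 601 YCbCr-style scale factors and a pure 2.2 power-law gamma; it illustrates the problem, not the paper's compensation method:

```python
def linear_luminance(r, g, b, gamma=2.2):
    """Approximate displayed (linear-light) luminance of gamma-encoded
    R'G'B', using the Rec. 601 weights as an approximation."""
    return 0.299 * r ** gamma + 0.587 * g ** gamma + 0.114 * b ** gamma

def yuv_to_rgb(y, u, v):
    """Rec. 601 YCbCr-style inverse matrix (gamma-corrected components,
    scale factors 1.402 and 1.772)."""
    r = y + 1.402 * v
    b = y + 1.772 * u
    g = (y - 0.299 * r - 0.114 * b) / 0.587
    return r, g, b

y, u, v = 0.5, 0.1, -0.05                    # some encoded pixel
L_ref = linear_luminance(*yuv_to_rgb(y, u, v))
L_err = linear_luminance(*yuv_to_rgb(y, u + 0.05, v))  # chroma coding error
delta = L_err - L_ref     # nonzero: the chroma error leaked into luminance
```

    The Y′ component of the perturbed pixel is unchanged by construction, yet the luminance reaching the eye shifts — exactly the crosstalk the paper's compensation method targets.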

  12. An Exponential Luminous Efficiency Model for Hypervelocity Impact into Regolith

    NASA Technical Reports Server (NTRS)

    Swift, W. R.; Moser, D. E.; Suggs, R. M.; Cooke, W. J.

    2011-01-01

    The flash of thermal radiation produced as part of the impact-crater forming process can be used to determine the energy of the impact if the luminous efficiency is known. From this energy the mass and, ultimately, the mass flux of similar impactors can be deduced. The luminous efficiency, eta, is a unique function of velocity with an extremely large variation in the laboratory range of under 6 km/s but a necessarily small variation with velocity in the meteoric range of 20 to 70 km/s. Impacts into granular or powdery regolith, such as that on the moon, differ from impacts into solid materials in that the energy is deposited via a serial impact process which affects the rate of deposition of internal (thermal) energy. An exponential model of the process is developed which differs from the usual polynomial models of crater formation. The model is valid for the early time portion of the process and focuses on the deposition of internal energy into the regolith. The model is successfully compared with experimental luminous efficiency data from both laboratory impacts and from lunar impact observations. Further work is proposed to clarify the effects of mass and density upon the luminous efficiency scaling factors. Keywords: hypervelocity impact, impact flash, luminous efficiency, lunar impact, meteoroid.
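    The luminous efficiency is the fraction of impact kinetic energy radiated as light, so a measured flash energy yields the impactor mass via m = 2E/(ηv²). A sketch with hypothetical numbers (the abstract gives no specific values):

```python
def impactor_mass(e_lum, eta, v):
    """Impactor mass (kg) from observed luminous energy e_lum (J),
    luminous efficiency eta (dimensionless) and impact speed v (m/s):
        e_lum = eta * (1/2) * m * v**2   =>   m = 2 * e_lum / (eta * v**2)
    """
    return 2.0 * e_lum / (eta * v * v)

# Hypothetical lunar flash: 1e5 J of radiated light, eta = 1.5e-3,
# impact speed 24 km/s
m = impactor_mass(1e5, 1.5e-3, 24e3)   # ≈ 0.23 kg
```

    Because η in the meteoric range varies only weakly with velocity, the mass estimate is dominated by the assumed value of η itself — hence the paper's focus on pinning down its scaling.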

  13. Closing the mind's eye: incoming luminance signals disrupt visual imagery.

    PubMed

    Sherwood, Rachel; Pearson, Joel

    2010-01-01

    Mental imagery has been associated with many cognitive functions, both high- and low-level. Despite recent scientific advances, the contextual and environmental conditions that most affect the mechanisms of visual imagery remain unclear. It has been previously shown that the greater the level of background luminance, the weaker the effect of imagery on subsequent perception. However, in these experiments it was unclear whether the luminance was affecting imagery generation or the storage of a memory trace. Here, we report that background luminance can attenuate both mental imagery generation and imagery storage during an unrelated cognitive task. However, imagery generation was more sensitive to the degree of luminance. In addition, we show that these findings were not due to differential dark adaptation. These results suggest that afferent visual signals can interfere with both the formation and priming-memory effects associated with visual imagery. It follows that background luminance may be a valuable tool for investigating imagery and its role in various cognitive and sensory processes.

  14. Estimating temporal and spatial variation of ocean surface pCO2 in the North Pacific using a self-organizing map neural network technique

    NASA Astrophysics Data System (ADS)

    Nakaoka, S.; Telszewski, M.; Nojiri, Y.; Yasunaka, S.; Miyazaki, C.; Mukai, H.; Usui, N.

    2013-09-01

    This study uses a neural network technique to produce maps of the partial pressure of oceanic carbon dioxide (pCO2sea) in the North Pacific on a 0.25° latitude × 0.25° longitude grid from 2002 to 2008. The pCO2sea distribution was computed using a self-organizing map (SOM) originally utilized to map the pCO2sea in the North Atlantic. Four proxy parameters - sea surface temperature (SST), mixed layer depth, chlorophyll a concentration, and sea surface salinity (SSS) - are used during the training phase to enable the network to resolve the nonlinear relationships between the pCO2sea distribution and the biogeochemistry of the basin. The observed pCO2sea data were obtained from an extensive dataset generated by the volunteer observation ship program operated by the National Institute for Environmental Studies (NIES). The reconstructed pCO2sea values agreed well with the pCO2sea measurements, with the root-mean-square error ranging from 17.6 μatm (for the NIES dataset used in the SOM) to 20.2 μatm (for the independent dataset). We confirmed that the pCO2sea estimates could be improved by including SSS as one of the training parameters and by taking into account secular increases of pCO2sea that have tracked increases in atmospheric CO2. Estimated pCO2sea values accurately reproduced pCO2sea data at several time series locations in the North Pacific. The distributions of pCO2sea revealed by 7-yr-averaged monthly pCO2sea maps were similar to the Lamont-Doherty Earth Observatory pCO2sea climatology, while allowing for a more detailed analysis of biogeochemical conditions. The distributions of pCO2sea anomalies over the North Pacific during the winter clearly showed regional contrasts between El Niño and La Niña years related to changes of SST and vertical mixing.
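    The SOM itself is a standard competitive-learning algorithm. A minimal one-dimensional sketch is shown below; the toy scalar data and all parameter values are hypothetical and far simpler than the four-proxy network used in the study:

```python
import math, random

def train_som(data, n_nodes=4, epochs=400, lr0=0.5, sigma0=2.0):
    """Minimal 1-D self-organizing map: for each sample, the best-matching
    unit (BMU) and its grid neighbours move toward the sample, with a
    learning rate and neighbourhood width that decay over training."""
    random.seed(0)
    nodes = [random.uniform(min(data), max(data)) for _ in range(n_nodes)]
    for t in range(epochs):
        frac = t / epochs
        lr = lr0 * (1.0 - frac)                  # decaying learning rate
        sigma = sigma0 * (1.0 - frac) + 0.5      # shrinking neighbourhood
        x = data[t % len(data)]                  # cycle through samples
        bmu = min(range(n_nodes), key=lambda i: abs(nodes[i] - x))
        for i in range(n_nodes):
            h = math.exp(-((i - bmu) ** 2) / (2.0 * sigma ** 2))
            nodes[i] += lr * h * (x - nodes[i])
    return sorted(nodes)

# Hypothetical scalar proxy values drawn from two distinct oceanic regimes
data = [1.0, 1.1, 0.9, 1.05, 5.0, 5.2, 4.8, 5.1]
nodes = train_som(data)
```

    After training, the node weights settle near the regimes present in the data; in the study each trained neuron is then labelled with a pCO2sea value, so that any new combination of proxy values maps to an estimate.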

  15. Cosmological baryonic and matter densities from 600000 SDSS luminous red galaxies with photometric redshifts

    NASA Astrophysics Data System (ADS)

    Blake, Chris; Collister, Adrian; Bridle, Sarah; Lahav, Ofer

    2007-02-01

    We analyse MegaZ-LRG, a photometric-redshift catalogue of luminous red galaxies (LRGs) based on the imaging data of the Sloan Digital Sky Survey (SDSS) 4th Data Release. MegaZ-LRG, presented in a companion paper, contains >10⁶ photometric redshifts derived with ANNZ, an artificial neural network method, constrained by a spectroscopic subsample of ~13000 galaxies obtained by the 2dF-SDSS LRG and Quasar (2SLAQ) survey. The catalogue spans the redshift range 0.4 < z < 0.7 with an rms redshift error σz ~ 0.03(1 + z), covering 5914 deg² to map out a total cosmic volume of 2.5 h⁻³ Gpc³. In this study we use the most reliable 600000 photometric redshifts to measure the large-scale structure using two methods: (1) a spherical harmonic analysis in redshift slices, and (2) a direct reconstruction of the spatial clustering pattern using Fourier techniques. We present the first cosmological parameter fits to galaxy angular power spectra from a photometric-redshift survey. Combining the redshift slices with appropriate covariances, we determine best-fitting values for the matter density Ωm and baryon density Ωb of Ωmh = 0.195 ± 0.023 and Ωb/Ωm = 0.16 ± 0.036 (with the Hubble parameter h = 0.75 and scalar index of primordial fluctuations nscalar = 1 held fixed). These results are in agreement with and independent of the latest studies of the cosmic microwave background radiation, and their precision is comparable to analyses of contemporary spectroscopic-redshift surveys. We perform an extensive series of tests which conclude that our power spectrum measurements are robust against potential systematic photometric errors in the catalogue. We conclude that photometric-redshift surveys are competitive with spectroscopic surveys for measuring cosmological parameters in the simplest `vanilla' models. Future deep imaging surveys have great potential for further improvement, provided that systematic errors can be controlled.

  16. Submillimetre observations of WISE-selected high-redshift, luminous, dusty galaxies

    NASA Astrophysics Data System (ADS)

    Jones, Suzy F.; Blain, Andrew W.; Stern, Daniel; Assef, Roberto J.; Bridge, Carrie R.; Eisenhardt, Peter; Petty, Sara; Wu, Jingwen; Tsai, Chao-Wei; Cutri, Roc; Wright, Edward L.; Yan, Lin

    2014-09-01

    We present SCUBA-2 (Submillimetre Common-User Bolometer Array) 850 μm submillimetre (submm) observations of the fields of 10 dusty, luminous galaxies at z ˜ 1.7-4.6, detected at 12 and/or 22 μm by the Wide-field Infrared Survey Explorer (WISE) all-sky survey, but faint or undetected at 3.4 and 4.6 μm; dubbed hot, dust-obscured galaxies (Hot DOGs). The six detected targets all have total infrared luminosities greater than 10¹³ L⊙, with one greater than 10¹⁴ L⊙. Their spectral energy distributions (SEDs) are very blue from mid-infrared to submm wavelengths and not well fitted by standard active galactic nuclei (AGN) SED templates, without adding extra dust extinction to fit the WISE 3.4 and 4.6 μm data. The SCUBA-2 850 μm observations confirm that the Hot DOGs have less cold and/or more warm dust emission than standard AGN templates, and limit an underlying extended spiral or ULIRG-type galaxy to contribute less than about 2 or 55 per cent of the typical total Hot DOG IR luminosity, respectively. The two most distant and luminous targets have similar observed submm to mid-infrared ratios to the rest, and thus appear to have even hotter SEDs. The number of serendipitous submm galaxies detected in the 1.5-arcmin-radius SCUBA-2 850 μm maps indicates there is a significant overdensity of serendipitous sources around Hot DOGs. These submm observations confirm that the WISE-selected ultraluminous galaxies have very blue mid-infrared to submm SEDs, suggesting that they contain very powerful AGN, and are apparently located in unusual arcmin-scale overdensities of very luminous dusty galaxies.

  17. Distribution and identification of luminous bacteria from the sargasso sea.

    PubMed

    Orndorff, S A; Colwell, R R

    1980-05-01

    Vibrio fischeri and Lucibacterium harveyi constituted 75 of the 83 luminous bacteria isolated from Sargasso Sea surface waters. Photobacterium leiognathi and Photobacterium phosphoreum constituted the remainder of the isolates. Luminescent bacteria were recovered at concentrations of 1 to 63 cells per 100 ml from water samples collected at depths of 160 to 320 m. Two water samples collected at the thermocline yielded larger numbers of viable, aerobic heterotrophic and luminous bacteria. Luminescent bacteria were not recovered from surface microlayer samples. The species distribution of the luminous bacteria reflected previously recognized growth patterns; i.e., L. harveyi and V. fischeri were predominant in the upper, warm waters (only one isolate of P. phosphoreum was obtained from surface tropical waters).

  18. Lunar eclipse photometry: absolute luminance measurements and modeling.

    PubMed

    Hernitschek, Nina; Schmidt, Elmar; Vollmer, Michael

    2008-12-01

    The Moon's time-dependent luminance was determined during the 9 February 1990 and 3 March 2007 total lunar eclipses by using calibrated, industry standard photometers. After the results were corrected to unit air mass and to standard distances for both Moon and Sun, an absolute calibration was accomplished by using the Sun's known luminance and a pre-eclipse lunar albedo of approximately 13.5%. The measured minimum level of brightness in the total phase of both eclipses was relatively high, namely -3.32 m(vis) and -1.7 m(vis), which hints at the absence of pronounced stratospheric aerosol. The light curves were modeled in such a way as to let the Moon move through an artificial Earth shadow composed of a multitude of disk and ring zones, containing a relative luminance data set from an atmospheric radiative transfer calculation. PMID:19037352
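    Visual magnitudes convert to brightness ratios via the usual 10^(−0.4·Δm) relation. Taking m ≈ −12.7 for the uneclipsed mean full Moon (an assumed textbook value, not from the abstract), the reported minima translate into attenuation factors:

```python
def attenuation(m_eclipsed, m_full=-12.7):
    """Brightness ratio eclipsed/uneclipsed from visual magnitudes.
    m_full = -12.7 for the mean full Moon is an assumed value."""
    return 10.0 ** (-0.4 * (m_eclipsed - m_full))

r1990 = attenuation(-3.32)   # ≈ 1.8e-4, i.e. roughly 5600x dimmer
r2007 = attenuation(-1.7)
```

    Even these "relatively high" totality minima correspond to the Moon dimming by four to five stellar magnitudes per decade of flux, which is why calibrated photometers rather than ordinary exposure meters are needed.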

  19. A Second Luminous Blue Variable in the Quintuplet Cluster.

    PubMed

    Geballe; Najarro; Figer

    2000-02-20

    H- and K-band moderate-resolution and 4 µm high-resolution spectra have been obtained for FMM 362, a bright star in the Quintuplet Cluster near the Galactic center. The spectral features in these bands closely match those of the Pistol Star, a luminous blue variable and one of the most luminous stars known. The new spectra and previously obtained photometry imply a very high luminosity for FMM 362, L ≥ 10⁶ L⊙, and a temperature of 10,000-13,000 K. Based on its luminosity, temperature, photometric variability, and similarities to the Pistol Star, we conclude that FMM 362 is a luminous blue variable. PMID:10655174

  1. Mesopic luminance assessed with minimally distinct border perception.

    PubMed

    Raphael, Sabine; MacLeod, Donald I A

    2015-01-01

    In photopic vision, the border between two fields is minimally distinct when the two fields are isoluminant; that is, when the achromatic luminance of the two fields is equal. The distinctness of a border between extrafoveal reference and comparison fields was used here as an isoluminance criterion under a variety of adaptation conditions ranging from photopic to scotopic. The adjustment was done by trading off the amount of blue against the amount of red in the comparison field. Results show that isoluminant border settings are linear under all constant adaptation conditions, though varying with state of adaptation. The relative contribution of rods and cones to luminance was modeled such that the linear sum of the suitably weighted scotopic and photopic luminance is constant for the mesopic isoluminant conditions. The relative weights change with adapting intensity in a sigmoid fashion and also depend strongly on the position of the border in the visual field. PMID:26223024
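    The model described — a linear sum of suitably weighted scotopic and photopic luminance, with weights that change sigmoidally with adapting intensity — can be sketched as follows. The midpoint and slope values are hypothetical placeholders, not the fitted parameters:

```python
import math

def mesopic_luminance(l_phot, l_scot, adapt_log_cd, midpoint=-1.0, slope=1.5):
    """Mesopic luminance as a weighted sum of photopic and scotopic
    luminance; the photopic weight w rises sigmoidally with the log of
    the adapting intensity. midpoint and slope are hypothetical values."""
    w = 1.0 / (1.0 + math.exp(-slope * (adapt_log_cd - midpoint)))
    return w * l_phot + (1.0 - w) * l_scot

# Photopic adaptation: weight ~1, result approaches the photopic luminance;
# scotopic adaptation: weight ~0, result approaches the scotopic luminance
hi = mesopic_luminance(10.0, 2.0, adapt_log_cd=3.0)
lo = mesopic_luminance(10.0, 2.0, adapt_log_cd=-5.0)
```

    At the sigmoid midpoint the two systems contribute equally; in the study the weights additionally depend on the stimulus position in the visual field, reflecting the changing rod/cone ratio across the retina.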

  3. Selected luminal mucosal complications of adult celiac disease.

    PubMed

    Freeman, Hugh J

    2009-01-01

    Celiac disease is a gluten-dependent intestinal disorder that appears to be associated with several clinical conditions. Some involve the luminal mucosa of the stomach and intestinal tract and may, occasionally, complicate the course of celiac disease. Collagenous colitis has been associated with celiac disease and may lead to chronic diarrhea. Conversely, some of these clinical disorders that involve the luminal mucosa of the stomach and intestine may represent the initial clinical presentation of celiac disease. These disorders should be considered in patients with celiac disease who develop recurrent or refractory symptoms despite adherence to a strict gluten-free diet. Detection of collagenous disorders that affect the luminal mucosa of the stomach or intestinal tract may result in recognition of underlying celiac disease. PMID:21694821

  4. Luminal matrices: an inside view on organ morphogenesis.

    PubMed

    Luschnig, Stefan; Uv, Anne

    2014-02-01

    Tubular epithelia come in various shapes and sizes to accommodate the specific needs for transport, excretion and absorption in multicellular organisms. The intestinal tract, glandular organs and conduits for liquids and gases are all lined by a continuous layer of epithelial cells, which form the boundary of the luminal space. Defects in epithelial architecture and lumen dimensions will impair transport and can lead to serious organ malfunctions. Not surprisingly, multiple cellular and molecular mechanisms contribute to the shape of tubular epithelial structures. One intriguing aspect of epithelial organ formation is the highly coordinate behavior of individual cells as they mold the mature lumen. Here, we focus on recent findings, primarily from Drosophila, demonstrating that informative cues can emanate from the developing organ lumen in the form of solid luminal material. The luminal material is produced by the surrounding epithelium and helps to coordinate changes in shape and arrangement of the very same cells, resulting in correct lumen dimensions. PMID:24075963

  5. Brightness perception of unrelated self-luminous colors.

    PubMed

    Withouck, Martijn; Smet, Kevin A G; Ryckaert, Wouter R; Pointer, Michael R; Deconinck, Geert; Koenderink, Jan; Hanselaer, Peter

    2013-06-01

    The perception of brightness of unrelated self-luminous colored stimuli of the same luminance has been investigated. The Helmholtz-Kohlrausch (H-K) effect, i.e., an increase in brightness perception due to an increase in saturation, is clearly observed. This brightness perception is compared with the calculated brightness according to six existing vision models, color appearance models, and models based on the concept of equivalent luminance. Although these models included the H-K effect and half of them were developed to work with unrelated colors, none of the models seemed to be able to fully predict the perceived brightness. A tentative solution to increase the prediction accuracy of the color appearance model CAM97u, developed by Hunt, is presented.

  6. Finite-element analyses of blade and slot coating flows using an implicit pseudo-solid domain mapping technique coupled with unstructured grids

    SciTech Connect

    Chen, K.S.; Schunk, P.R.; Sackinger, P.A.

    1994-12-04

    In coating processes (e.g., in blade coating) the flow domain inherently contains free surfaces and three-phase contact lines, and characteristic length scales of flow features in the dimension transverse to the web movement vary by an order of magnitude or more (from a fraction of a millimeter or more to tens of microns or less). The presence of free surfaces and three-phase contact lines, and the sudden changes of flow geometry and direction, create difficulties in theoretical analyses of such flows. Though simulations of coating flows via finite-element methods using structured grids have been reported in the literature, achieving high efficiency in such numerical experiments remains a grand challenge -- mainly due to difficulties in local mesh refinement and in avoiding unacceptably distorted grids. High efficiency in computing steady flow fields under various process conditions is crucial for shortening turn-around time in the design and optimization of coating-flow processes. In this paper we employ a fully implicit, pseudo-solid, domain mapping technique coupled with unstructured meshes to analyze blade and slot coating flows using Galerkin's method with finite element basis functions. We demonstrate the robustness and efficiency of our technique in circumventing shortcomings of the mesh-motion schemes currently used in the coating-flow research community. Our goal is to develop an efficient numerical tool, together with a suitable optimization toolkit, that can be used routinely in the design and optimization of coating-flow processes.

  7. Effect of estrogen on calcium and sodium transport by the nephron luminal membranes.

    PubMed

    Brunette, M G; Leclerc, M

    2001-08-01

    Estrogens are widely used for contraception and osteoporosis prevention. The aim of the present study was to investigate the effect of 17 beta-estradiol on calcium (Ca(2+)) transport by the nephron luminal membranes, independently of any other Ca(2+)-regulating hormones. Proximal and distal tubules of rabbit kidneys were incubated with 17 beta-estradiol or the carrier for various periods of time, and the luminal membranes of these tubules were purified and vesiculated. Ca(2+) uptake by membrane vesicles was measured using the Millipore filtration technique. Incubation of proximal tubules with the hormone did not influence Ca(2+) uptake by the luminal membranes. In contrast, incubation of distal tubules with 10(-8) M 17 beta-estradiol for 30 min decreased the initial uptake of 0.5 mM Ca(2+) from 0.34+/-0.04 (s.e.m.) to 0.17+/-0.04 pmol/microg per 5 s (P<0.05). In the presence of 100 mM Na(+), 0.5 mM Ca(2+) uptake was strongly diminished and the effect of 17 beta-estradiol disappeared (0.17+/-0.01 and 0.21+/-0.07 pmol/microg per 5 s in vesicles from the control and treated tubules). Direct incubation of the membranes with 17 beta-estradiol, however, failed to show any influence of the hormone on Ca(2+) transport. The action of 17 beta-estradiol was dose-dependent, with a half-maximal effect at approximately 10(-9) M. Ca(2+) uptake by the distal tubule membranes presents dual kinetics. 17 beta-Estradiol decreased the V(max) value of the high-affinity component from 0.42+/-0.02 to 0.31+/-0.03 pmol/microg per 10 s (P<0.02). In contrast with the effect of the hormone on Ca(2+) transport, estradiol increased Na(+) uptake by both the proximal and distal tubule luminal membranes. In conclusion, incubation of proximal and distal tubules with estrogen decreases Ca(2+) reabsorption by the high-affinity Ca(2+) channels of the distal luminal membranes, and enhances Na(+) transport by the membranes from proximal and distal nephrons. PMID:11479140
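
    The reported dose dependence, with a half-maximal effect near 10(-9) M, follows the shape of a standard saturating dose-response curve. A minimal sketch, assuming a normalized E_max and a simple hyperbolic form (illustrative assumptions, not the paper's fitted model):

```python
# Saturating dose-response sketch for the half-maximal effect quoted above:
# effect(C) = E_max * C / (C + EC50), with EC50 = 1e-9 M as reported and
# E_max normalized to 1 (an illustrative assumption).

def effect(conc_M, e_max=1.0, ec50=1e-9):
    """Fractional effect of the hormone at concentration conc_M (mol/L)."""
    return e_max * conc_M / (conc_M + ec50)

assert abs(effect(1e-9) - 0.5) < 1e-9  # half-maximal exactly at EC50
assert effect(1e-8) > 0.9              # near saturation by 1e-8 M
```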

  8. V1 neurons respond to luminance changes faster than contrast changes.

    PubMed

    Wang, Wen-Liang; Li, Ran; Ding, Jian; Tao, Louis; Li, Da-Peng; Wang, Yi

    2015-12-04

    Luminance and contrast are two major attributes of objects in the visual scene. Luminance and contrast information received by visual neurons are often updated simultaneously. We examined the temporal response properties of neurons in the primary visual cortex (V1) to stimuli whose luminance and contrast were simultaneously changed at 50 Hz. We found that response tuning to luminance changes precedes tuning to contrast changes in V1. For most V1 neurons, the onset time of response tuning to luminance changes was shorter than that to contrast changes. Most neurons carried luminance information in the early response stage, while all neurons carried both contrast and luminance information in the late response stage. The early luminance response suggests that cortical processing for luminance is not as slow as previously thought.

  9. V1 neurons respond to luminance changes faster than contrast changes

    PubMed Central

    Wang, Wen-Liang; Li, Ran; Ding, Jian; Tao, Louis; Li, Da-Peng; Wang, Yi

    2015-01-01

    Luminance and contrast are two major attributes of objects in the visual scene. Luminance and contrast information received by visual neurons are often updated simultaneously. We examined the temporal response properties of neurons in the primary visual cortex (V1) to stimuli whose luminance and contrast were simultaneously changed at 50 Hz. We found that response tuning to luminance changes precedes tuning to contrast changes in V1. For most V1 neurons, the onset time of response tuning to luminance changes was shorter than that to contrast changes. Most neurons carried luminance information in the early response stage, while all neurons carried both contrast and luminance information in the late response stage. The early luminance response suggests that cortical processing for luminance is not as slow as previously thought. PMID:26634691

  10. Retinal Ganglion Cell Adaptation to Small Luminance Fluctuations

    PubMed Central

    Freeman, Daniel K.; Graña, Gilberto

    2010-01-01

    To accommodate the wide input range over which the visual system operates within the narrow output range of spiking neurons, the retina adjusts its sensitivity to the mean light level so that retinal ganglion cells can faithfully signal contrast, or relative deviations from the mean luminance. Given the large operating range of the visual system, the majority of work on luminance adaptation has involved logarithmic changes in light level. We report that luminance gain controls are recruited for remarkably small fluctuations in luminance as well. Using spike recordings from the rat optic tract, we show that ganglion cell responses to a brief flash of light are modulated in amplitude by local background fluctuations of as little as 15% contrast. The time scale of the gain control is rapid (<125 ms), at least for on cells. The retinal locus of adaptation precedes the ganglion cell spike generator because response gain changes of on cells were uncorrelated with firing rate. The mechanism seems to reside within the inner retinal network and not in the photoreceptors, because the adaptation profiles of on and off cells differed markedly. The response gain changes follow Weber's law, suggesting that the network mechanisms of luminance adaptation described in previous work modulate retinal ganglion cell sensitivity, not just when we move between different lighting environments, but also as our eyes scan a visual scene. Finally, we show that response amplitude is uniformly reduced for flashes on a modulated background that has spatial contrast, indicating that another gain control that integrates luminance signals nonlinearly over space operates within the receptive field center of rat ganglion cells. PMID:20538771

  11. Retinal ganglion cell adaptation to small luminance fluctuations.

    PubMed

    Freeman, Daniel K; Graña, Gilberto; Passaglia, Christopher L

    2010-08-01

    To accommodate the wide input range over which the visual system operates within the narrow output range of spiking neurons, the retina adjusts its sensitivity to the mean light level so that retinal ganglion cells can faithfully signal contrast, or relative deviations from the mean luminance. Given the large operating range of the visual system, the majority of work on luminance adaptation has involved logarithmic changes in light level. We report that luminance gain controls are recruited for remarkably small fluctuations in luminance as well. Using spike recordings from the rat optic tract, we show that ganglion cell responses to a brief flash of light are modulated in amplitude by local background fluctuations of as little as 15% contrast. The time scale of the gain control is rapid (<125 ms), at least for on cells. The retinal locus of adaptation precedes the ganglion cell spike generator because response gain changes of on cells were uncorrelated with firing rate. The mechanism seems to reside within the inner retinal network and not in the photoreceptors, because the adaptation profiles of on and off cells differed markedly. The response gain changes follow Weber's law, suggesting that the network mechanisms of luminance adaptation described in previous work modulate retinal ganglion cell sensitivity, not just when we move between different lighting environments, but also as our eyes scan a visual scene. Finally, we show that response amplitude is uniformly reduced for flashes on a modulated background that has spatial contrast, indicating that another gain control that integrates luminance signals nonlinearly over space operates within the receptive field center of rat ganglion cells.
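
    The Weber-law gain control described above can be illustrated with a toy linear response model in which gain is set inversely proportional to the local background luminance; the model and the numbers are hypothetical, not fitted to the recordings:

```python
# Weber-law gain control sketched as a linear response model: gain scales
# inversely with the local background luminance, so equal contrasts
# (dL / L) evoke equal responses regardless of absolute light level.

def response(flash_increment, background):
    """Response of a model cell whose gain follows Weber's law."""
    gain = 1.0 / background          # Weber scaling: gain ~ 1 / L_background
    return gain * flash_increment    # response ~ dL / L  (= contrast)

# Two flashes at the same 15% contrast on very different backgrounds
# produce identical responses under Weber scaling.
r_dim    = response(flash_increment=0.15 * 10.0,  background=10.0)
r_bright = response(flash_increment=0.15 * 200.0, background=200.0)
assert abs(r_dim - r_bright) < 1e-12  # both equal 0.15
```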

  12. Human Mammary Luminal Epithelial Cells Contain Progenitors to Myoepithelial Cells

    SciTech Connect

    Pechoux, Christine; Gudjonsson, Thorarinn; Ronnov-Jessen, Lone; Bissell, Mina J; Petersen, Ole

    1999-02-01

    The origin of the epithelial and myoepithelial cells in the human breast has not been delineated. In this study we have addressed whether luminal epithelial cells and myoepithelial cells are vertically connected, i.e., whether one is the precursor for the other. We used a primary culture assay allowing preservation of basic phenotypic traits of luminal epithelial and myoepithelial cells in culture. The two cell types were then separated immunomagnetically, using antibodies directed against lineage-specific cell surface antigens, to purities of up to 100%. The cellular identity was ascertained by cytochemistry, immunoblotting, and 2-D gel electrophoresis. Luminal epithelial cells were identified by strong expression of cytokeratins 18 and 19, while myoepithelial cells were recognized by expression of vimentin and α-smooth muscle actin. We used a previously devised culture medium (CDM4) that allows vigorous expansion of proliferative myoepithelial cells and also devised a medium (CDM6) that allowed sufficient expansion of differentiated luminal epithelial cells based on addition of hepatocyte growth factor/scatter factor. The two different culture media supported each lineage for at least five passages without signs of interconversion. We used parallel cultures where we switched culture media, thus testing the ability of each lineage to convert to the other. Whereas the myoepithelial lineage showed no signs of interconversion, a subset of luminal epithelial cells gradually, but distinctly, converted to myoepithelial cells. We propose that in the mature human breast, it is the luminal epithelial cell compartment that gives rise to myoepithelial cells rather than the other way around.

  13. Probing the luminal microenvironment of reconstituted epithelial microtissues.

    PubMed

    Cerchiari, Alec E; Samy, Karen E; Todhunter, Michael E; Schlesinger, Erica; Henise, Jeff; Rieken, Christopher; Gartner, Zev J; Desai, Tejal A

    2016-01-01

    Polymeric microparticles can serve as carriers or sensors to instruct or characterize tissue biology. However, incorporating microparticles into tissues for in vitro assays remains a challenge. We exploit three-dimensional cell-patterning technologies and directed epithelial self-organization to deliver microparticles to the lumen of reconstituted human intestinal microtissues. We also develop a novel pH-sensitive microsensor that can measure the luminal pH of reconstituted epithelial microtissues. These studies offer a novel approach for investigating luminal microenvironments and drug delivery across epithelial barriers. PMID:27619235

  14. Probing the luminal microenvironment of reconstituted epithelial microtissues

    PubMed Central

    Cerchiari, Alec E.; Samy, Karen E.; Todhunter, Michael E.; Schlesinger, Erica; Henise, Jeff; Rieken, Christopher; Gartner, Zev J.; Desai, Tejal A.

    2016-01-01

    Polymeric microparticles can serve as carriers or sensors to instruct or characterize tissue biology. However, incorporating microparticles into tissues for in vitro assays remains a challenge. We exploit three-dimensional cell-patterning technologies and directed epithelial self-organization to deliver microparticles to the lumen of reconstituted human intestinal microtissues. We also develop a novel pH-sensitive microsensor that can measure the luminal pH of reconstituted epithelial microtissues. These studies offer a novel approach for investigating luminal microenvironments and drug delivery across epithelial barriers. PMID:27619235

  15. Mapping the optimal forest road network based on the multicriteria evaluation technique: the case study of Mediterranean Island of Thassos in Greece.

    PubMed

    Tampekis, Stergios; Sakellariou, Stavros; Samara, Fani; Sfougaris, Athanassios; Jaeger, Dirk; Christopoulou, Olga

    2015-11-01

    The sustainable management of forest resources can only be achieved through a well-organized road network designed with optimal spatial planning and minimal environmental impacts. This paper describes the spatial layout mapping of the optimal forest road network and the evaluation of the environmental impacts caused to the natural environment, based on the multicriteria evaluation (MCE) technique, for the Mediterranean island of Thassos in Greece. Data analysis and presentation are achieved through a spatial decision support system using the MCE method with the contribution of geographic information systems (GIS). With the MCE technique, we evaluated the intensity of human impact on the forest ecosystem as well as the ecosystem's absorption of the impacts caused by forest road construction. For the human impact intensity evaluation, the criteria used were as follows: the forest's protection percentage, the forest road density, the applied skidding means (either tractors or cable logging systems for timber skidding), the timber skidding direction, the visitors' number and truck load, the distance between forest roads and streams, the distance between forest roads and the forest boundaries, and the probability that forest roads are located on sites with unstable soils. In addition, for the ecosystem's absorption evaluation, we used forestry, topographical, and social criteria. The recommended MCE technique described in this study provides a powerful, useful, and easy-to-use tool for combining the sustainable utilization of natural resources with environmental protection in Mediterranean ecosystems. PMID:26463090
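
    The criterion-combination step of an MCE analysis can be sketched as a weighted linear overlay of normalized criterion rasters. A minimal sketch; the criterion names, cell values, and weights below are hypothetical, and the study's actual criteria and weighting scheme are not reproduced:

```python
# Multicriteria evaluation (MCE) as a weighted linear overlay: each
# criterion raster is rescaled to [0, 1] and combined cell-wise with
# weights that sum to 1.  All rasters and weights here are invented.

def normalize(raster):
    """Rescale a criterion raster (list of rows) to [0, 1]."""
    flat = [v for row in raster for v in row]
    lo, hi = min(flat), max(flat)
    return [[(v - lo) / (hi - lo) for v in row] for row in raster]

def mce_overlay(criteria, weights):
    """Cell-wise weighted sum of normalized criterion rasters."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    norm = [normalize(c) for c in criteria]
    rows, cols = len(criteria[0]), len(criteria[0][0])
    return [[sum(w * n[i][j] for w, n in zip(weights, norm))
             for j in range(cols)] for i in range(rows)]

# Three toy 2x2 rasters standing in for, e.g., road density, proximity
# to streams, and soil instability (all values invented).
road_density  = [[1.0, 4.0], [2.0, 3.0]]
stream_proxim = [[10.0, 0.0], [5.0, 5.0]]
soil_instab   = [[0.2, 0.8], [0.4, 0.6]]

impact = mce_overlay([road_density, stream_proxim, soil_instab],
                     weights=[0.5, 0.3, 0.2])
# Every cell of the composite impact map lies in [0, 1].
assert all(0.0 <= v <= 1.0 for row in impact for v in row)
```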

  17. Luminal surface fabrication for cardiovascular prostheses

    NASA Technical Reports Server (NTRS)

    Deininger, William D.; Gabriel, Stephen B.

    1987-01-01

    Methodology used for the production of highly ordered surfaces on polytetrafluoroethylene (PTFE) Teflon, for use of this material as a surface layer in small-diameter vascular prostheses, is described. Directed argon-ion beam sputter etching is used to produce highly ordered specific surface morphologies, and photolithographic techniques are used for pattern definition. The methodology also includes surface polishing of the samples, sputter deposition of the intermediary layer of gold to enhance photoresist adhesion and improve pattern resolution, and rotating of the sample to minimize the effects of ion-beam nonuniformity and asymmetry. Use of these techniques resulted in the formation of regular arrays of sharply defined hexagonal pillars with smooth vertical walls and large relief.

  18. Three-dimensional segmentation of luminal and adventitial borders in serial intravascular ultrasound images

    NASA Technical Reports Server (NTRS)

    Shekhar, R.; Cothren, R. M.; Vince, D. G.; Chandra, S.; Thomas, J. D.; Cornhill, J. F.

    1999-01-01

    Intravascular ultrasound (IVUS) provides exact anatomy of arteries, allowing accurate quantitative analysis. Automated segmentation of IVUS images is a prerequisite for routine quantitative analyses. We present a new three-dimensional (3D) segmentation technique, called active surface segmentation, which detects luminal and adventitial borders in IVUS pullback examinations of coronary arteries. The technique was validated against expert tracings by computing correlation coefficients (range 0.83-0.97) and Williams index values (range 0.37-0.66). The technique was statistically accurate, robust to image artifacts, and capable of segmenting a large number of images rapidly. Active surface segmentation enabled geometrically accurate 3D reconstruction and visualization of coronary arteries and volumetric measurements.

  19. Can Self-Organizing Maps Accurately Predict Photometric Redshifts?

    NASA Astrophysics Data System (ADS)

    Way, M. J.; Klose, C. D.

    2012-03-01

    We present an unsupervised machine-learning approach that can be employed for estimating photometric redshifts. The proposed method is based on a vector quantization method called the self-organizing map (SOM). A variety of photometrically derived input values were utilized from the Sloan Digital Sky Survey's main galaxy sample, luminous red galaxy, and quasar samples, along with the PHAT0 data set from the Photo-z Accuracy Testing project. Regression results obtained with this new approach were evaluated in terms of root-mean-square error (RMSE) to estimate the accuracy of the photometric redshift estimates. The results demonstrate competitive RMSE and outlier percentages when compared with several other popular approaches, such as artificial neural networks and Gaussian process regression. SOM RMSE results (using Δz = zphot - zspec) are 0.023 for the main galaxy sample, 0.027 for the luminous red galaxy sample, 0.418 for quasars, and 0.022 for PHAT0 synthetic data. The results demonstrate that there are nonunique solutions for estimating SOM RMSEs. Further research is needed in order to find more robust estimation techniques using SOMs, but the results herein are a positive indication of their capabilities when compared with other well-known methods.
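
    The SOM regression idea can be sketched end-to-end on synthetic data: train a small one-dimensional map on photometric inputs, calibrate each node with the mean spectroscopic redshift of its training members, and evaluate with the same RMSE statistic (Δz = zphot - zspec). The node count, learning schedule, and data below are illustrative assumptions, not the authors' configuration:

```python
import math
import random

# Minimal 1-D self-organizing map (SOM) for photometric-redshift
# regression, in the spirit of the abstract.  All data are synthetic.

random.seed(1)

def dist2(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((u - v) ** 2 for u, v in zip(a, b))

def train_som(xs, n_nodes=10, epochs=100, lr0=0.5, r0=2.0):
    """Classic SOM training with shrinking learning rate and neighborhood."""
    dim = len(xs[0])
    nodes = [[random.random() for _ in range(dim)] for _ in range(n_nodes)]
    for t in range(epochs):
        frac = t / epochs
        lr, radius = lr0 * (1 - frac), max(r0 * (1 - frac), 0.5)
        for x in xs:
            bmu = min(range(n_nodes), key=lambda k: dist2(nodes[k], x))
            for k in range(n_nodes):
                h = math.exp(-((k - bmu) ** 2) / (2 * radius ** 2))
                nodes[k] = [w + lr * h * (xi - w)
                            for w, xi in zip(nodes[k], x)]
    return nodes

def calibrate(nodes, xs, zs):
    """Mean spectroscopic redshift of training points mapped to each node."""
    totals = [[0.0, 0] for _ in nodes]
    for x, z in zip(xs, zs):
        bmu = min(range(len(nodes)), key=lambda k: dist2(nodes[k], x))
        totals[bmu][0] += z
        totals[bmu][1] += 1
    fallback = sum(zs) / len(zs)
    return [s / n if n else fallback for s, n in totals]

def predict(nodes, node_z, x):
    """Photometric redshift estimate: redshift of the best-matching node."""
    bmu = min(range(len(nodes)), key=lambda k: dist2(nodes[k], x))
    return node_z[bmu]

# Synthetic "photometry": one color index rising monotonically with z.
zs = [i / 50.0 for i in range(50)]
xs = [[z + 0.02 * (random.random() - 0.5)] for z in zs]

nodes = train_som(xs)
node_z = calibrate(nodes, xs, zs)
dz = [predict(nodes, node_z, x) - z for x, z in zip(xs, zs)]
rmse = math.sqrt(sum(d * d for d in dz) / len(dz))
assert rmse < 0.15  # the SOM coarsely recovers the color-redshift relation
```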

  20. Astronomy. ASASSN-15lh: A highly super-luminous supernova.

    PubMed

    Dong, Subo; Shappee, B J; Prieto, J L; Jha, S W; Stanek, K Z; Holoien, T W-S; Kochanek, C S; Thompson, T A; Morrell, N; Thompson, I B; Basu, U; Beacom, J F; Bersier, D; Brimacombe, J; Brown, J S; Bufano, F; Chen, Ping; Conseil, E; Danilet, A B; Falco, E; Grupe, D; Kiyota, S; Masi, G; Nicholls, B; Olivares E, F; Pignata, G; Pojmanski, G; Simonian, G V; Szczygiel, D M; Woźniak, P R

    2016-01-15

    We report the discovery of ASASSN-15lh (SN 2015L), which we interpret as the most luminous supernova yet found. At redshift z = 0.2326, ASASSN-15lh reached an absolute magnitude of Mu,AB = -23.5 ± 0.1 and bolometric luminosity Lbol = (2.2 ± 0.2) × 10(45) ergs s(-1), which is more than twice as luminous as any previously known supernova. It has several major features characteristic of the hydrogen-poor super-luminous supernovae (SLSNe-I), whose energy sources and progenitors are currently poorly understood. In contrast to most previously known SLSNe-I that reside in star-forming dwarf galaxies, ASASSN-15lh appears to be hosted by a luminous galaxy (MK ≈ -25.5) with little star formation. In the 4 months since first detection, ASASSN-15lh radiated (1.1 ± 0.2) × 10(52) ergs, challenging the magnetar model for its engine. PMID:26816375

  1. Hot DOGs: The Most Luminous Galaxies Found by WISE

    NASA Astrophysics Data System (ADS)

    Eisenhardt, Peter; Tsai, Chao-Wei; Wu, Jingwen; Griffith, Roger; Yan, Lin; Stern, Daniel; Stanford, Adam; Blain, Andrew; Benford, Dominic; Bridge, Carrie; Assef, Roberto; Petty, Sara

    2013-02-01

    NASA's Wide-field Infrared Survey Explorer (WISE) has achieved its fundamental goal by delivering its all-sky survey at 3.4, 4.6, 12, and 22 μm (W1, W2, W3, and W4), reaching sensitivities hundreds of times deeper than IRAS. One of the two primary science objectives for WISE is to identify the most luminous galaxies in the Universe (Ultra-Luminous IR Galaxies, or ULIRGs). We have used WISE photometry to select an all-sky sample of objects which are extremely luminous, and for which Herschel far-IR follow-up observations are underway. The objects are prominent in W3 and W4, but faint or undetected in W1 and W2. Available spectroscopy and far IR photometry for these objects show they typically have redshifts z > 2 and luminosities over 10^13 L⊙, with about 10% exceeding 10^14 L⊙ and rivaling the brightest known QSOs. Their dust is more than twice as hot as other IR luminous objects: they are hot dust obscured galaxies, or "hot DOGs," and may represent a new phase in galaxy evolution. We request NOAO time to obtain redshifts and optical and near IR photometry for the all-sky sample of the brightest hot DOGs, all of which are in our Herschel program. With existing and allocated observations, this request should complete the acquisition of these crucial data for this primary WISE science objective.

  2. Profile of a Growing Urban School: The Lumin Experience

    ERIC Educational Resources Information Center

    Ford, Terry

    2015-01-01

    This fairytale-come-true began with an idealistic public school teacher just out of college who lived in the neighborhood of her students. In stages, working with a community organizing group consisting mainly of concerned parents, Terry Ford founded what is now called Lumin Education, a network of campuses serving more than six hundred children…

  3. ASASSN-15lh: A highly super-luminous supernova

    NASA Astrophysics Data System (ADS)

    Dong, Subo; Shappee, B. J.; Prieto, J. L.; Jha, S. W.; Stanek, K. Z.; Holoien, T. W.-S.; Kochanek, C. S.; Thompson, T. A.; Morrell, N.; Thompson, I. B.; Basu, U.; Beacom, J. F.; Bersier, D.; Brimacombe, J.; Brown, J. S.; Bufano, F.; Chen, Ping; Conseil, E.; Danilet, A. B.; Falco, E.; Grupe, D.; Kiyota, S.; Masi, G.; Nicholls, B.; Olivares E., F.; Pignata, G.; Pojmanski, G.; Simonian, G. V.; Szczygiel, D. M.; Woźniak, P. R.

    2016-01-01

    We report the discovery of ASASSN-15lh (SN 2015L), which we interpret as the most luminous supernova yet found. At redshift z = 0.2326, ASASSN-15lh reached an absolute magnitude of Mu,AB = -23.5 ± 0.1 and bolometric luminosity Lbol = (2.2 ± 0.2) × 10^45 ergs s^-1, which is more than twice as luminous as any previously known supernova. It has several major features characteristic of the hydrogen-poor super-luminous supernovae (SLSNe-I), whose energy sources and progenitors are currently poorly understood. In contrast to most previously known SLSNe-I that reside in star-forming dwarf galaxies, ASASSN-15lh appears to be hosted by a luminous galaxy (MK ≈ -25.5) with little star formation. In the 4 months since first detection, ASASSN-15lh radiated (1.1 ± 0.2) × 10^52 ergs, challenging the magnetar model for its engine.

  4. SN 2010U: A LUMINOUS NOVA IN NGC 4214

    SciTech Connect

    Humphreys, Roberta M.; Helton, L. Andrew; Prieto, Jose L.; Rosenfield, Philip; Williams, Benjamin; Murphy, Jeremiah; Dalcanton, Julianne; Gilbert, Karoline; Kochanek, Christopher S.; Stanek, K. Z.; Khan, Rubab; Szczygiel, Dorota; Mogren, Karen; Fesen, Robert A.; Milisavljevic, Dan

    2010-07-20

    The luminosity, light curve, post-maximum spectrum, and lack of a progenitor on deep pre-outburst images suggest that SN 2010U was a luminous, fast nova. Its outburst magnitude is consistent with that for a fast nova using the maximum magnitude-rate of decline relationship for classical novae.

  5. The dependence of luminous efficiency on chromatic adaptation.

    PubMed

    Stockman, Andrew; Jägle, Herbert; Pirzer, Markus; Sharpe, Lindsay T

    2008-12-15

    We investigated the dependence of luminous efficiency on background chromaticity by measuring 25-Hz heterochromatic flicker photometry (HFP) matches in six genotyped male observers on 21 different 1000-photopic-troland adapting fields: 14 spectral ones ranging from 430 to 670 nm and 7 bichromatic mixtures of 478 and 577 nm that varied in luminance ratio. Each function was analyzed in terms of the best-fitting linear combination of the long- (L) and middle- (M) wavelength sensitive cone fundamentals of A. Stockman and L. T. Sharpe (2000). Taking into account the adapting effects of both the backgrounds and the targets, we found that luminous efficiency between 535 and 603 nm could be predicted by a simple model in which the relative L- and M-cone weights are inversely proportional to the mean cone excitations produced in each cone type multiplied by a single factor, which was roughly independent of background wavelength (and may reflect relative L:M cone numerosity). On backgrounds shorter than 535 nm and longer than 603 nm, the M-cone contribution to luminous efficiency falls short of the proportionality prediction but most likely for different reasons in the two spectral regions.
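
    The weighting model can be sketched directly: under von-Kries-style adaptation, each cone class's weight is inversely proportional to its mean excitation on the adapting field, multiplied by a single factor f. The value of f and the excitation values below are hypothetical, not the study's fitted numbers:

```python
# Sketch of the cone-weighting model described above: luminous efficiency
# is a linear combination of L- and M-cone fundamentals whose relative
# L:M weight is f * (1/E_L) / (1/E_M) = f * E_M / E_L, where E_L and E_M
# are the mean cone excitations on the adapting background.

def lm_weight_ratio(mean_L_excitation, mean_M_excitation, f=1.5):
    """Relative L:M weight under von-Kries-like adaptation scaling."""
    # Each cone class's gain is inversely proportional to its own mean
    # excitation on the adapting field; f is a fixed scale factor
    # (possibly reflecting relative L:M cone numerosity).
    return f * (mean_M_excitation / mean_L_excitation)

# A long-wavelength (reddish) field excites L cones more than M cones,
# so the relative L weight drops compared with a neutral field.
neutral = lm_weight_ratio(1.0, 1.0)   # -> f = 1.5
reddish = lm_weight_ratio(2.0, 0.8)   # -> 1.5 * 0.4 = 0.6
assert reddish < neutral
```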

  6. On the Social Traits of Luminous Blue Variables

    NASA Astrophysics Data System (ADS)

    Humphreys, Roberta M.; Weis, Kerstin; Davidson, Kris; Gordon, Michael S.

    2016-07-01

    In a recent paper, Smith & Tombleson state that the luminous blue variables (LBVs) in the Milky Way and the Magellanic Clouds are isolated; they are not spatially associated with young O-type stars. They propose a novel explanation that would overturn the standard view of LBVs. In this paper we test their hypothesis for the LBVs in M31 and M33, as well as the LMC and SMC. We show that in M31 and M33 the LBVs are associated with luminous young stars and supergiants that are appropriate to their luminosities and positions on the H-R diagram. Moreover, in the Smith and Tombleson scenario most of the LBVs should be runaway stars, but the stars’ velocities are consistent with their positions in the respective galaxies. In the Magellanic Clouds, those authors’ sample was a mixed population. We reassess their analysis, removing seven stars that have no clear relation to LBVs. When we separate the more massive classical and the less luminous LBVs, the classical LBVs have a distribution similar to the late O-type stars, while the less luminous LBVs have a distribution like the red supergiants. None of the confirmed LBVs have high velocities or are candidate runaway stars. These results support the accepted description of LBVs as evolved massive stars that have shed a lot of mass and are now close to their Eddington limit.

  7. An Exponential Luminous Efficiency Model for Hypervelocity Impact into Regolith

    NASA Technical Reports Server (NTRS)

    Swift, Wesley R.; Moser, D.E.; Suggs, Robb M.; Cooke, W.J.

    2010-01-01

    The flash of thermal radiation produced as part of the impact-crater forming process can be used to determine the energy of the impact if the luminous efficiency is known. From this energy the mass and, ultimately, the mass flux of similar impactors can be deduced. The luminous efficiency, η, is a unique function of velocity, with an extremely large variation in the laboratory range of under 8 km/s but a necessarily small variation with velocity in the meteoric range of 20 to 70 km/s. Impacts into granular or powdery regolith, such as that on the moon, differ from impacts into solid materials in that the energy is deposited via a serial impact process, which affects the rate of deposition of internal (thermal) energy. An exponential model of the process is developed, which differs from the usual polynomial models of crater formation. The model is valid for the early-time portion of the process and focuses on the deposition of internal energy into the regolith. The model is successfully compared with experimental luminous efficiency data from laboratory impacts and from astronomical determinations, and scaling factors are estimated. Further work is proposed to clarify the effects of mass and density upon the luminous efficiency scaling factors.
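
    The use of luminous efficiency to recover impactor mass can be sketched with the basic energy balance E_rad = η·(1/2)mv². The η value and flash parameters below are hypothetical, and the paper's exponential model for η(v) itself is not reproduced:

```python
# Energy bookkeeping behind the luminous efficiency eta: the flash
# radiates E_rad = eta * E_kinetic = eta * (1/2) m v^2, so an observed
# flash energy plus an assumed eta yields the impactor mass.

def impactor_mass(flash_energy_J, velocity_m_s, eta):
    """Invert E_rad = eta * (1/2) * m * v^2 for the impactor mass m (kg)."""
    return 2.0 * flash_energy_J / (eta * velocity_m_s ** 2)

# A lunar flash radiating 1e6 J from a 30 km/s impact with eta = 1e-3
# (all values invented for illustration):
m = impactor_mass(flash_energy_J=1.0e6, velocity_m_s=3.0e4, eta=1.0e-3)
assert abs(m - 20.0 / 9.0) < 1e-6  # about 2.2 kg
```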

  8. Aging and luminance-adaptation effects on spatial contrast sensitivity.

    PubMed

    Sloane, M E; Owsley, C; Jackson, C A

    1988-12-01

    Contrast sensitivity as a function of target luminance for four spatial frequencies (0.5, 2, 4, and 8 cycles/deg) was measured in younger (n = 12; age range, 19-35 years) and older (n = 11; age range, 68-79 years) adults in order to examine the feasibility of optical and neural explanations for the impairment of contrast sensitivity in older adults. All subjects were free from identifiable ocular disease and had good acuity. Sensitivity for each spatial frequency was measured at eight luminance levels spanning 3.5 log units in the photopic-mesopic range. When gratings were flickered at 0.5 Hz, functions for older adults were displaced downward on the sensitivity axis across all luminance levels, and the slopes of these functions were steeper than those for younger adults, suggesting that optical mechanisms alone cannot account for the vision loss in older adults. Further measurements, in which spatial targets were flickered at 7.5 Hz, indicated that this faster temporal modulation affected sensitivity as a function of luminance differentially in younger and older adults. These data imply that the neural mechanisms subserving human spatial vision undergo significant changes during adulthood.

  9. Astronomy. ASASSN-15lh: A highly super-luminous supernova.

    PubMed

    Dong, Subo; Shappee, B J; Prieto, J L; Jha, S W; Stanek, K Z; Holoien, T W-S; Kochanek, C S; Thompson, T A; Morrell, N; Thompson, I B; Basu, U; Beacom, J F; Bersier, D; Brimacombe, J; Brown, J S; Bufano, F; Chen, Ping; Conseil, E; Danilet, A B; Falco, E; Grupe, D; Kiyota, S; Masi, G; Nicholls, B; Olivares E, F; Pignata, G; Pojmanski, G; Simonian, G V; Szczygiel, D M; Woźniak, P R

    2016-01-15

We report the discovery of ASASSN-15lh (SN 2015L), which we interpret as the most luminous supernova yet found. At redshift z = 0.2326, ASASSN-15lh reached an absolute magnitude of M_u,AB = −23.5 ± 0.1 and bolometric luminosity L_bol = (2.2 ± 0.2) × 10⁴⁵ ergs s⁻¹, which is more than twice as luminous as any previously known supernova. It has several major features characteristic of the hydrogen-poor super-luminous supernovae (SLSNe-I), whose energy sources and progenitors are currently poorly understood. In contrast to most previously known SLSNe-I that reside in star-forming dwarf galaxies, ASASSN-15lh appears to be hosted by a luminous galaxy (M_K ≈ −25.5) with little star formation. In the 4 months since first detection, ASASSN-15lh radiated (1.1 ± 0.2) × 10⁵² ergs, challenging the magnetar model for its engine. PMID:26816375

  10. Modulation of intestinal L-glutamate transport by luminal leptin.

    PubMed

    Fanjul, Carmen; Barrenetxe, Jaione; Lostao, María Pilar; Ducroc, Robert

    2015-06-01

Leptin is secreted into the digestive tract and contributes to the absorption of dietary molecules by regulating transporter activity. Here, we studied the effect of luminal leptin on the intestinal transport of L-glutamate, an important component of the human diet. We examined the effect of leptin on L-glutamate uptake in rat intestine in vitro, measuring glutamate-induced short-circuit current (Isc) in Ussing chambers and L-[³H(U)]-glutamate uptake in jejunal everted rings. Glutamate-induced Isc was only observed in Na⁺-free conditions. This Isc was concentration (1-60 mmol L⁻¹) and pH dependent. Luminal leptin increased glutamate Isc (∼100%). The dose-response curve showed a biphasic pattern, with maximal stimulations observed at 10⁻¹³ and 10⁻¹⁰ mmol L⁻¹, both sensitive to a leptin receptor antagonist. In everted rings, two glutamate transport mechanisms were distinguished: one Na⁺-dependent and H⁺-independent, which was inhibited by leptin (∼20%), and one Na⁺-independent but H⁺-dependent, which was enhanced by leptin (∼20%), in line with the data obtained in Ussing chambers. Altogether, these data reveal an original non-monotonic effect of luminal leptin in the intestine and demonstrate a new role for this hormone in the modulation of L-glutamate transport, showing that luminally active gut peptides can influence the absorption of amino acids.

  11. Vocal Fold Epithelial Response to Luminal Osmotic Perturbation

    ERIC Educational Resources Information Center

    Sivasankar, Mahalakshmi; Fisher, Kimberly V.

    2007-01-01

    Purpose: Dry-air challenges increase the osmolarity of fluid lining the luminal surface of the proximal airway. The homeostasis of surface fluid is thought to be essential for voice production and laryngeal defense. Therefore, the authors hypothesized that viable vocal fold epithelium would generate a water flux to reduce an osmotic challenge (150…

  12. Chromatic and luminance contrast sensitivity in fullterm and preterm infants

    PubMed Central

    Bosworth, Rain G.; Dobkins, Karen R.

    2010-01-01

    In order to investigate the contributions of visual experience vs. preprogrammed mechanisms on visual development, the current study compared contrast sensitivity in preterm vs. fullterm infants. If development is tied to time since conception, preterm infants should match the developmental trajectories of fullterm infants when plotted in postterm age. By contrast, if development is influenced by visual experience, preterm and fullterm infants should match when plotted in postnatal age. Luminance (light/dark) and chromatic (red/green) contrast sensitivities (CS) were measured in 25 preterm (born, on average, 6.6 weeks early) and 77 fullterm infants, between 1 and 6 months postterm. In the first few months, luminance CS was found to be predicted by postterm age, suggesting that preprogrammed development is sufficient to account for luminance CS. By contrast, chromatic CS exceeded that predicted by postterm age, which suggests that time since birth confers a benefit on chromatic CS. The preterms’ 6.6 weeks of additional time since birth is roughly equivalent to 3.7 weeks of development in chromatic CS. In sum, these results suggest that chromatic CS is more influenced by early postnatal visual experience than luminance CS, which may have implications for development of parvocellular and magnocellular pathways. PMID:20055548

  13. Analysis of abdominal wounds made by surgical trocars using functional luminal imaging probe (FLIP) technology.

    PubMed

    McMahon, Barry P; O'Donovan, Deidre; Liao, Donghua; Zhao, Jingbo; Schiretz, Rich; Heninrich, Russell; Gregersen, Hans

    2008-09-01

The aim was to use a novel functional luminal imaging probe for evaluation of wound defects and tissue damage resulting from the use of trocars. Following general anesthesia of 4 adult pigs, 6 different trocars were randomly inserted at preselected locations in the porcine abdominal wall. The functional luminal imaging probe was used to profile the trocar holes during bag distension from 8 axial cross-sectional area measurements. The cross-sectional areas and pressure in the bag were recorded and exported to Matlab for analysis and data display. Geometric profiles were generated, and the minimum cross-sectional area and hole length (abdominal wall thickness) were used as endpoints. Successful distensions were made in all cases. The slope of the contours increased away from the narrowest point of the hole. The slope increased more rapidly toward the inner abdominal wall than toward the outer wall. The slope of the linear trend lines for the cross-sectional area-pressure relation represents the compliance at the narrowest point in the wall. The hole length (abdominal wall thickness) could be obtained at different cross-sectional area cutoff points. A cutoff point of 300 mm² gave good results when compared to the length of the hole measured after the tissue was excised. This technique represents a new and straightforward way to evaluate the effects of trocars on the abdominal wall. It may also prove useful in comparing techniques and technology from different manufacturers. PMID:18757380
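
    Two of the endpoints above lend themselves to a short sketch: compliance as the least-squares slope of the cross-sectional area vs. pressure relation, and hole length as the axial extent over which the profile stays below a CSA cutoff. The electrode positions and profile values below are hypothetical; only the 300 mm² cutoff comes from the abstract.

    ```python
    def compliance_slope(pressures, csas):
        """Least-squares slope of CSA vs. pressure (compliance at the
        narrowest point), computed without external libraries."""
        n = len(pressures)
        mp = sum(pressures) / n
        mc = sum(csas) / n
        num = sum((p - mp) * (c - mc) for p, c in zip(pressures, csas))
        den = sum((p - mp) ** 2 for p in pressures)
        return num / den

    def hole_length_mm(positions_mm, csas_mm2, cutoff_mm2=300.0):
        """Axial extent over which the profile stays at or below the CSA
        cutoff (300 mm^2 is the cutoff reported in the abstract)."""
        below = [z for z, a in zip(positions_mm, csas_mm2) if a <= cutoff_mm2]
        return (max(below) - min(below)) if below else 0.0

    # Hypothetical 8-electrode profile (positions in mm, CSAs in mm^2)
    z = [0, 4, 8, 12, 16, 20, 24, 28]
    csa = [500, 400, 250, 200, 220, 280, 450, 600]
    length = hole_length_mm(z, csa)  # extent below the 300 mm^2 cutoff
    ```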

  14. Spectral and spatial selectivity of luminance vision in reef fish.

    PubMed

    Siebeck, Ulrike E; Wallis, Guy Michael; Litherland, Lenore; Ganeshina, Olga; Vorobyev, Misha

    2014-01-01

Luminance vision has high spatial resolution and is used for form vision and texture discrimination. In humans, birds and bees the luminance channel is spectrally selective: it depends on the signals of the long-wavelength sensitive photoreceptors (bees) or on the sum of the long- and middle-wavelength sensitive cones (humans), but not on the signal of the short-wavelength sensitive (blue) photoreceptors. The reasons for such selectivity are not fully understood. The aim of this study is to reveal the inputs of cone signals to high-resolution luminance vision in reef fish. Sixteen freshly caught damselfish, Pomacentrus amboinensis, were trained to discriminate stimuli differing either in their color or in their fine patterns (stripes vs. cheques). Three colors ("bright green", "dark green" and "blue") were used to create two sets of color and two sets of pattern stimuli. The "bright green" and "dark green" were similar in their chromatic properties for fish, but differed in their lightness; the "dark green" differed from "blue" in the signal for the blue cone, but yielded similar signals in the long-wavelength and middle-wavelength cones. Fish easily learned to discriminate "bright green" from "dark green" and "dark green" from "blue" stimuli. Fish could also discriminate the fine patterns created from "dark green" and "bright green". However, fish failed to discriminate fine patterns created from "blue" and "dark green" colors, i.e., the colors that provided contrast for the blue-sensitive photoreceptor, but not for the long-wavelength sensitive one. High-resolution luminance vision in damselfish, Pomacentrus amboinensis, does not have input from the blue-sensitive cone, which may indicate that the spectral selectivity of the luminance channel is a general feature of visual processing in both aquatic and terrestrial animals.

  15. Characterization of FEL Lamps as Secondary Standard of Luminous Intensity

    NASA Astrophysics Data System (ADS)

    Junior, Antonio F. G. Ferreira; Machado, Ilomar E. C.

    2008-04-01

This work presents a study comparing the drift during seasoning of four 1000 W FEL-type lamps with regard to their use as secondary luminous intensity standards. Three of the lamps are manufactured by Philips and the fourth by ORIEL. Lamp seasoning normally takes 30 hours, and during the seasoning period the relative drift of the lamp luminous intensity, lamp current and voltage are measured every 5 minutes. The correlated color temperature of the lamps is measured at the end of the seasoning period. The luminous intensity is measured using a 4 1/2 digit photometer with a thermally stabilized detector head, the lamp voltage is measured using a 6 1/2 digit voltmeter, and the current is measured via a calibrated shunt and controlled by the current power source. The lamp sockets are adapted to a kinematic positioning device which is placed on an adjustable mount. A cross target is used as a reference for alignment with a He-Ne laser. For the group of three Philips lamps, the minimum relative drift in luminous intensity per hour at the end of the seasoning period was 0.0075 percent and the maximum relative drift was 0.02 percent. The voltage relative drift of the lamps was very similar in shape over the last few hours of the seasoning period, but differed for one lamp at the beginning. The lamp current remained practically constant at 8 A, which was the current set on the power source. One lamp had its luminous intensity calibrated by the National Institute of Metrology of Argentina and is used as a transfer standard for the other lamps.

  17. A review of ocean color remote sensing methods and statistical techniques for the detection, mapping and analysis of phytoplankton blooms in coastal and open oceans

    NASA Astrophysics Data System (ADS)

    Blondeau-Patissier, David; Gower, James F. R.; Dekker, Arnold G.; Phinn, Stuart R.; Brando, Vittorio E.

    2014-04-01

    The need for more effective environmental monitoring of the open and coastal ocean has recently led to notable advances in satellite ocean color technology and algorithm research. Satellite ocean color sensors' data are widely used for the detection, mapping and monitoring of phytoplankton blooms because earth observation provides a synoptic view of the ocean, both spatially and temporally. Algal blooms are indicators of marine ecosystem health; thus, their monitoring is a key component of effective management of coastal and oceanic resources. Since the late 1970s, a wide variety of operational ocean color satellite sensors and algorithms have been developed. The comprehensive review presented in this article captures the details of the progress and discusses the advantages and limitations of the algorithms used with the multi-spectral ocean color sensors CZCS, SeaWiFS, MODIS and MERIS. Present challenges include overcoming the severe limitation of these algorithms in coastal waters and refining detection limits in various oceanic and coastal environments. To understand the spatio-temporal patterns of algal blooms and their triggering factors, it is essential to consider the possible effects of environmental parameters, such as water temperature, turbidity, solar radiation and bathymetry. Hence, this review will also discuss the use of statistical techniques and additional datasets derived from ecosystem models or other satellite sensors to characterize further the factors triggering or limiting the development of algal blooms in coastal and open ocean waters.

  18. Quantitative DNA fiber mapping

    DOEpatents

    Gray, Joe W.; Weier, Heinz-Ulrich G.

    1998-01-01

    The present invention relates generally to the DNA mapping and sequencing technologies. In particular, the present invention provides enhanced methods and compositions for the physical mapping and positional cloning of genomic DNA. The present invention also provides a useful analytical technique to directly map cloned DNA sequences onto individual stretched DNA molecules.

  19. Mapping with Young Children.

    ERIC Educational Resources Information Center

    Sunal, Cynthia Szymanski; Warash, Bobbi Gibson

    Techniques for encouraging young children to discover the purpose and use of maps are discussed. Motor activity and topological studies form a base from which the teacher and children can build a mapping program of progressive sophistication. Concepts important to mapping include boundaries, regions, exteriors, interiors, holes, order, point of…

  20. Using maps in genealogy

    USGS Publications Warehouse

    ,

    1999-01-01

    Maps are one of many sources you may need to complete a family tree. In genealogical research, maps can provide clues to where our ancestors may have lived and where to look for written records about them. Beginners should master basic genealogical research techniques before starting to use topographic maps.

  1. Image colorization based on texture map

    NASA Astrophysics Data System (ADS)

    Liu, Shiguang; Zhang, Xiang

    2013-01-01

Colorizing grayscale images so that the resulting image appears natural is a hard problem. Previous colorization algorithms generally use just the luminance information and ignore the rich texture information, which means that regions with the same luminance but different textures may mistakenly be assigned the same color. A novel automatic texture-map-based grayscale image colorization method is proposed. The texture map is generated with bilateral decomposition and a Gaussian high-pass filter, and is further optimized using a statistical adaptive gamma correction method. The segmentation of the spatial map is performed using locally weighted linear regression on its histogram in order to match the grayscale image and the source image. Within each spatial segment, a weighted color-luminance correspondence is derived from the results of the locally weighted linear regression. The luminance-color correspondence between the grayscale image and the source image can thus be used to colorize the grayscale image directly. By considering the consistency of both color information and texture information between the two images, various plausible colorization results are generated using this new method.
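
    A minimal sketch of the texture-map step, assuming a plain separable Gaussian blur in place of the paper's bilateral decomposition: the high-pass response |image − blur| is large in textured regions and near zero in flat ones, which is what lets the method distinguish regions of equal luminance but different texture.

    ```python
    import numpy as np

    def gaussian_kernel1d(sigma, radius=None):
        """Normalized 1-D Gaussian kernel."""
        radius = radius or int(3 * sigma)
        x = np.arange(-radius, radius + 1)
        k = np.exp(-x ** 2 / (2 * sigma ** 2))
        return k / k.sum()

    def texture_map(gray, sigma=2.0):
        """Simplified texture map: Gaussian high-pass magnitude.
        (The paper combines bilateral decomposition with the high-pass
        filter; a separable Gaussian blur stands in here for brevity.)"""
        k = gaussian_kernel1d(sigma)
        # separable blur: convolve each row, then each column
        blur = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, gray)
        blur = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, blur)
        return np.abs(gray - blur)
    ```

    On a flat patch the interior response is essentially zero, while a fine checkerboard pattern of the same mean luminance yields a strong response.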

  2. A stent for co-delivering paclitaxel and nitric oxide from abluminal and luminal surfaces: Preparation, surface characterization, and in vitro drug release studies

    NASA Astrophysics Data System (ADS)

    Gallo, Annemarie; Mani, Gopinath

    2013-08-01

Most drug-eluting stents currently available are coated with anti-proliferative drugs on both the abluminal (toward the blood vessel wall) and luminal (toward the lumen) surfaces to prevent neointimal hyperplasia. While the abluminal delivery of anti-proliferative drugs is useful for controlling neointimal hyperplasia, the luminal delivery of such drugs impairs or prevents endothelialization, which causes late stent thrombosis. This research is focused on developing a bidirectional dual drug-eluting stent to co-deliver an anti-proliferative agent (paclitaxel - PAT) and an endothelial cell promoting agent (nitric oxide - NO) from the abluminal and luminal surfaces of the stent, respectively. Phosphonoacetic acid, a polymer-free drug delivery platform, was initially coated on the stents. Then, the PAT and NO donor drugs were co-coated on the abluminal and luminal stent surfaces, respectively. The co-coating of drugs was collectively confirmed by surface characterization techniques such as Fourier transform infrared spectroscopy, scanning electron microscopy (SEM), 3D optical surface profilometry, and contact angle goniometry. SEM showed that the integrity of the co-coating was maintained, with no delamination or crack formation occurring during the stent expansion experiments. In vitro drug release studies showed that the PAT was released from the abluminal stent surfaces in a biphasic manner: an initial burst followed by a slow and sustained release. The NO was burst-released from the luminal stent surfaces. Thus, this study demonstrated the co-delivery of PAT and NO from the abluminal and luminal stent surfaces, respectively. The stent developed in this study has potential applications in inhibiting neointimal hyperplasia as well as encouraging luminal endothelialization to prevent late stent thrombosis.

  3. Calcium transport in bovine rumen epithelium as affected by luminal Ca concentrations and Ca sources

    PubMed Central

    Schröder, Bernd; Wilkens, Mirja R; Ricken, Gundula E; Leonhard-Marek, Sabine; Fraser, David R; Breves, Gerhard

    2015-01-01

The quantitative role of different segments of the gastrointestinal tract for Ca absorption, the respective mechanisms, and their regulation are not fully identified for ruminants, that is, cattle. In different in vitro experiments the forestomach wall has been demonstrated to be a major site for active Ca absorption in sheep and goats. In order to further clarify the role of the bovine rumen in Ca transport, with special attention to luminal Ca concentrations, its ionic form, and pH, electrophysiological and unidirectional flux rate measurements were performed with isolated bovine rumen epithelial tissues. For Ca flux studies (Jms, Jsm), the in vitro Ussing chamber technique was applied. A standard RT-PCR method was used to characterize TRPV6 and PMCA1 as potential contributors to transepithelial active Ca transport. At Ca concentrations of 1.2 mmol L−1 on both sides of the tissues, Jms were higher than Jsm, resulting under some conditions in significant Ca net flux rates (Jnet) and indicating the presence of active Ca transport. In the absence of an electrical gradient, Jnet could be significantly stimulated in the presence of luminal short-chain fatty acids (SCFAs). Increasing the luminal Ca concentration up to 11.2 mmol L−1 resulted in significant increases in Jms without influencing Jsm. Providing Ca as its chloride, formate, or propionate salt had no significant effect on Jms. No transcripts specific for the Ca channel TRPV6 could be demonstrated. Our results indicate different mechanisms for Ca absorption in the bovine rumen as compared with those usually described for the small intestine. PMID:26564067

  4. Keck Long Wavelength Spectrometer Images of Luminous IR Galaxies

    NASA Astrophysics Data System (ADS)

    Jones, Barbara; Puetter, Richard C.; Smith, Harding E.; Stein, Wayne A.; Wang, Michael C.; Campbell, Randy

    1998-05-01

We have used the UCSD/Keck Long Wavelength Spectrometer (LWS; Jones & Puetter 1993, Proc. S.P.I.E., 1946, 610) in its initial (72 x 64) imaging mode to observe the luminous IR galaxies Mrk 231, Arp 220, and NGC 7469, as well as NGC 1068, at mid-infrared wavelengths from 8-18 μm. Pixon-based image reconstruction techniques (Puetter 1995, Int. J. Image Sys. & Tech., 6, 314) have been employed to achieve resolution as high as 50 mas. The mid-infrared emission in Arp 220 is resolved into the two nuclei plus a faint knot of emission 0.5 arcsec SE of the western nucleus. The SEDs show that the W nucleus dominates at the longest wavelengths and probably in the far-infrared. Silicate absorption at 10 μm is present in all three components, but is strongest in the E nucleus, suggesting that the emission comes from an optically thick shell around a very compact mid-IR source. The E nucleus is unresolved at 0.2 arcsec resolution. The nucleus of NGC 7469 is marginally resolved at 50 mas resolution. On average the nuclear emission is redder than the surrounding starburst ring; the active nucleus dominates at all mid-infrared wavelengths and the nucleus-to-starburst ratio increases toward the FIR. Mrk 231 shows a compact, unresolved nucleus with a faint, resolved star-formation ring. These observations will be discussed in terms of the Sanders et al. (1988, ApJ, 325, 74) model in which LIGs evolve from starbursts to AGN. The LWS is being upgraded with a Boeing 128 x 128 BIB array which is expected to be delivered in early summer. A 128 x 128 element multiplexer has been installed and optical performance reverified; further temperature-stability tests and signal-to-noise optimization are being performed with an engineering array. The upgraded spectrometer, with an 11" FOV for imaging and spectroscopic resolutions of R = 100 and 1000, is expected to be recommissioned this summer and to be available for scheduling in the second semester of 1998.

  5. A New 350 GHz Heterodyne Array Receiver (HARP) and Observations of Luminous Infrared Galaxies

    NASA Astrophysics Data System (ADS)

    Leech, Jamie

    2002-08-01

This thesis describes the design, theoretical modelling and experimental verification of a 16-element heterodyne array receiver for operation between 325 and 375 GHz. The development of array receivers is crucial for submillimetre astronomy with single-dish telescopes, since they promise order-of-magnitude increases in spectroscopic mapping speed compared to single-element receivers. The HARP (Heterodyne Array Receiver Programme) receiver, when attached to the James Clerk Maxwell Telescope (JCMT), will enable fast mapping of several astrophysically important constituents of the interstellar medium, both in the Milky Way and in external galaxies. The imaging module for the HARP receiver consists of 16 SIS (Superconductor-Insulator-Superconductor) fixed-tuned mixers with corrugated horn-reflector feeds. The design, electromagnetic modelling and experimental testing of a new type of corrugated horn, combining good performance with ease of machining, is described. The incoming astronomical and LO signals are coupled from a rectangular waveguide to the SIS tunnel junction by means of a one-sided microstrip probe. The input impedance of such one-sided probes is analysed theoretically and confirmed by measurements on a 5 GHz scale model. The potential of using such probes in combination with an extended backshort to achieve sideband suppression is also examined. The design of the SIS devices to be used in each HARP mixer in the array is described, and electromagnetic modelling is used to achieve improved designs. Noise temperatures for a range of HARP SIS devices are measured, and the consequences of these measurements for receiver performance are assessed. Embedding impedance recovery techniques are successfully used to assess the tuning of the SIS devices. The design for the injection of the local oscillator signal into each mixer in the array is developed, using theoretical modelling based on single- and multi-mode Gaussian beams. The performance of the prototype LO

  6. Surgery for luminal Crohn’s disease

    PubMed Central

    Yamamoto, Takayuki; Watanabe, Toshiaki

    2014-01-01

    Many patients with Crohn’s disease (CD) require surgery. Indications for surgery include failure of medical treatment, bowel obstruction, fistula or abscess formation. The most common surgical procedure is resection. In jejunoileal CD, strictureplasty is an accepted surgical technique that relieves the obstructive symptoms, while preserving intestinal length and avoiding the development of short bowel syndrome. However, the role of strictureplasty in duodenal and colonic diseases remains controversial. In extensive colitis, after total colectomy with ileorectal anastomosis (IRA), the recurrence rates and functional outcomes are reasonable. For patients with extensive colitis and rectal involvement, total colectomy and end-ileostomy is safe and effective; however, a few patients can have subsequent IRA, and half of the patients will require proctectomy later. Proctocolectomy is associated with a high incidence of delayed perineal wound healing, but it carries a low recurrence rate. Patients undergoing proctocolectomy with ileal pouch-anal anastomosis had poor functional outcomes and high failure rates. Laparoscopic surgery has been introduced as a minimal invasive procedure. Patients who undergo laparoscopic surgery have a more rapid recovery of bowel function and a shorter hospital stay. The morbidity also is lower, and the rate of disease recurrence is similar compared with open procedures. PMID:24415860

  7. Assessment of planetary geologic mapping techniques for Mars using terrestrial analogs: The SP Mountain area of the San Francisco Volcanic Field, Arizona

    USGS Publications Warehouse

    Tanaka, K.L.; Skinner, J.A.; Crumpler, L.S.; Dohm, J.M.

    2009-01-01

We photogeologically mapped the SP Mountain region of the San Francisco Volcanic Field in northern Arizona, USA, to evaluate and improve the fidelity of approaches used in geologic mapping of Mars. This test site, which was previously mapped in the field, is chiefly composed of Late Cenozoic cinder cones, lava flows, and alluvium perched on Permian limestone of the Kaibab Formation. Faulting and folding have deformed the older rocks and some of the volcanic materials, and fluvial erosion has carved drainage systems and deposited alluvium. These geologic materials and their formational and modificational histories are similar to those of regions of the Martian surface. We independently prepared four geologic maps using topographic and image data at resolutions that mimic those commonly used to map the geology of Mars (accounting for the fact that Martian features such as lava flows are commonly much larger than their terrestrial counterparts). We primarily based our map units and stratigraphic relations on geomorphology, color contrasts, and cross-cutting relationships. Afterward, we compared our results with previously published field-based mapping results, including detailed analyses of the stratigraphy and of the spatial overlap and proximity of the field-based vs. remote-based (photogeologic) map units, contacts, and structures. Results of these analyses provide insights into how to optimize the photogeologic mapping of Mars (and, by extension, other remotely observed planetary surfaces). We recommend the following: (1) photogeologic mapping as an excellent approach to recovering the general geology of a region, along with examination of local, high-resolution datasets to gain insights into the complexity of the geology at outcrop scales; (2) delineating volcanic vents and lava-flow sequences conservatively and understanding that flow abutment and flow overlap are difficult to distinguish in remote data sets; (3) taking care to

  8. Mapping and analyzing Conservation Reserve Program (CRP) enrollment patterns from 1991 to 2011 in Nelson County, North Dakota, using remote sensing and GIS techniques

    NASA Astrophysics Data System (ADS)

    Roehrdanz, Nicholas L.

Since its inception in 1985, the federally managed Conservation Reserve Program (CRP) has contributed to land-use/land-cover change (LUCC) in areas throughout North Dakota. Concurrently, the Devils Lake Basin and the surrounding Prairie Pothole Region (PPR) in North Dakota have experienced pervasive lake and wetland flooding. Unsurprisingly, a clustering of CRP enrollment has occurred in certain counties within the basin, seemingly coinciding with the flooding. Analysis of historical county-level CRP enrollment data for North Dakota revealed that Nelson County, which is partially within the basin, has developed into a CRP hotspot in the state and has had the greatest increase in the density of CRP acreage among the counties in the region. We hypothesize that this high enrollment is the response of farmers losing arable land and/or field access to the rising waters in the region, making CRP enrollment an economically viable option. This study uses Landsat data and GIS analysis to document the LUCC associated with CRP grassland and with pervasive lake and wetland flooding in Nelson County, and the forces driving it. Because CRP field locations are not available from the federal government, we used multi-temporal classification techniques (three scenes per year) to derive land-cover maps from Landsat Thematic Mapper data for five growing seasons (1984, 1991, 1998, 2005, and 2011). We mapped CRP grassland at more than 90% accuracy with validation data derived from interpretation of historical aerial photography and, in the case of 2011, data gathered in the field. LUCC analysis was done using raster GIS. We found an increase in the amount of CRP grassland in the study area between 1991 (19,688 ha) and 2005 (35,612 ha) and then a decline to 2011 (27,856 ha). Spatial analysis revealed a clustering of CRP in 1991 in the Sheyenne and Goose river valleys, likely attributable to those lands being considered of greater conservation importance. By 1998, a

  9. Mapping gas-phase organic reactivity and concomitant secondary organic aerosol formation: chemometric dimension reduction techniques for the deconvolution of complex atmospheric data sets

    NASA Astrophysics Data System (ADS)

    Wyche, K. P.; Monks, P. S.; Smallbone, K. L.; Hamilton, J. F.; Alfarra, M. R.; Rickard, A. R.; McFiggans, G. B.; Jenkin, M. E.; Bloss, W. J.; Ryan, A. C.; Hewitt, C. N.; MacKenzie, A. R.

    2015-07-01

    Highly non-linear dynamical systems, such as those found in atmospheric chemistry, necessitate hierarchical approaches to both experiment and modelling in order to ultimately identify and achieve fundamental process-understanding in the full open system. Atmospheric simulation chambers comprise an intermediate in complexity, between a classical laboratory experiment and the full, ambient system. As such, they can generate large volumes of difficult-to-interpret data. Here we describe and implement a chemometric dimension reduction methodology for the deconvolution and interpretation of complex gas- and particle-phase composition spectra. The methodology comprises principal component analysis (PCA), hierarchical cluster analysis (HCA) and positive least-squares discriminant analysis (PLS-DA). These methods are, for the first time, applied to simultaneous gas- and particle-phase composition data obtained from a comprehensive series of environmental simulation chamber experiments focused on biogenic volatile organic compound (BVOC) photooxidation and associated secondary organic aerosol (SOA) formation. We primarily investigated the biogenic SOA precursors isoprene, α-pinene, limonene, myrcene, linalool and β-caryophyllene. The chemometric analysis is used to classify the oxidation systems and resultant SOA according to the controlling chemistry and the products formed. Results show that "model" biogenic oxidative systems can be successfully separated and classified according to their oxidation products. Furthermore, a holistic view of results obtained across both the gas- and particle-phases shows the different SOA formation chemistry, initiating in the gas-phase, proceeding to govern the differences between the various BVOC SOA compositions. The results obtained are used to describe the particle composition in the context of the oxidised gas-phase matrix. An extension of the technique, which incorporates into the statistical models data from anthropogenic (i
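The PCA and HCA stages of the chemometric workflow described above can be sketched as follows. This is a minimal illustration on synthetic spectra using scikit-learn and SciPy, not the authors' pipeline; the sample counts, group means, and component count are invented for the example.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# 12 synthetic "chamber experiments" x 50 spectral channels, forming two
# groups with different mean composition (stand-ins for two BVOC systems)
spectra = np.vstack([rng.normal(0.0, 1.0, (6, 50)),
                     rng.normal(3.0, 1.0, (6, 50))])

# Dimension reduction: standardize channels, keep two principal components
scores = PCA(n_components=2).fit_transform(
    StandardScaler().fit_transform(spectra))

# Hierarchical cluster analysis (Ward linkage) on the reduced scores
labels = fcluster(linkage(scores, method="ward"), t=2, criterion="maxclust")
print(labels)
```

With well-separated synthetic groups, the cluster labels recover the two "oxidation systems" exactly; on real chamber spectra the same reduced scores would also feed the supervised PLS-DA step.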

  10. Effects of Phosphor Persistence on High-Speed Imaging of Transient Luminous Events

    NASA Astrophysics Data System (ADS)

    Qin, J.; Pasko, V. P.; Celestin, S. J.; Cummer, S. A.; McHarg, M. G.; Stenbaek-Nielsen, H. C.

    2014-12-01

    High-speed intensified cameras are commonly used to observe and study the transient luminous events known as sprite halos and sprite streamers occurring in the Earth's upper atmosphere in association with thunderstorm activity. In such observations the phosphor persistence in the image intensifier, depending on its characteristic decay time, might lead to a significant distortion of the optical signals recorded by those cameras. In the present work, we analyze observational data obtained using different camera systems to discuss the effects of phosphor persistence on high-speed video observations of sprites, and introduce a deconvolution technique to effectively reduce such effects. The discussed technique could also be used to enhance the high-speed images of other transient optical phenomena in the case when the phosphor persistence has a characteristic decay time that is comparable to the temporal resolution of the cameras required to resolve the phenomena.
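The persistence-removal idea can be illustrated with a toy deconvolution, assuming (as a simplification of the technique described above) that the phosphor decay acts as a single-pole exponential with a known per-frame factor; the flash profile and decay constant below are hypothetical.

```python
import numpy as np

def remove_persistence(observed, a):
    """Invert O[n] = I[n] + a * O[n-1] to recover the true frames I[n],
    where `a` is the per-frame phosphor decay factor."""
    observed = np.asarray(observed, dtype=float)
    true = observed.copy()
    true[1:] -= a * observed[:-1]
    return true

# Forward-simulate a single-frame flash smeared over later frames
true_signal = np.array([0.0, 10.0, 0.0, 0.0, 0.0])
a = 0.6
obs = np.zeros_like(true_signal)
for n in range(len(true_signal)):
    obs[n] = true_signal[n] + (a * obs[n - 1] if n > 0 else 0.0)

recovered = remove_persistence(obs, a)
print(recovered)  # recovers the original single-frame flash
```

The smeared sequence [0, 10, 6, 3.6, 2.16] deconvolves back to the impulsive flash; for a real intensifier the decay kernel would have to be measured rather than assumed.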

  11. [Natural orifice trans-luminal endoscopic surgery (notes)--a new era in general surgery].

    PubMed

    Elazary, Ram; Horgan, Santiago; Talamini, Mark A; Rivkind, Avraham I; Mintz, Yoav

    2008-10-01

    Four years ago, a new surgical technique was presented: natural orifice trans-luminal endoscopic surgery (NOTES). This technique provides an incisionless operation: the surgical devices are inserted into the peritoneal cavity through the gastrointestinal or urogenital tracts. Today, a cholecystectomy can be performed using an advanced endoscope inserted through the stomach or the vagina. The advantages of NOTES are reduced postoperative pain, no hernias, no surgical wound infections and better cosmetic results. The disadvantages are the difficulty of achieving safe enterotomy closure or a leak-proof anastomosis, the larger number of operations required (compared to open or laparoscopic surgery) to acquire the necessary skills, and the difficulty of obtaining satisfactory endoscopic vision due to the lack of advanced technology. Several NOTES operations have already been performed in humans, and many other surgical procedures have been tested in laboratory animals. Development and improvement of surgical devices may promote this surgical modality in the future.

  12. An evaluation of organic light emitting diode monitors for medical applications: Great timing, but luminance artifacts

    PubMed Central

    Elze, Tobias; Taylor, Christopher; Bex, Peter J.

    2013-01-01

    Purpose: In contrast to the dominant medical liquid crystal display (LCD) technology, organic light-emitting diode (OLED) monitors control the display luminance via separate light-emitting diodes for each pixel and are therefore supposed to overcome many previously documented temporal artifacts of medical LCDs. We assessed the temporal and luminance characteristics of the only currently available OLED monitor designed for use in the medical treatment field (SONY PVM2551MD) and checked the authors’ main findings with another SONY OLED device (PVM2541). Methods: Temporal properties of the photometric output were measured with an optical transient recorder. Luminances of the three color primaries and white for all 256 digital driving levels (DDLs) were measured with a spectroradiometer. Between the luminances of neighboring DDLs, just noticeable differences were calculated according to a perceptual model developed for medical displays. Luminances of full screen (FS) stimuli were compared to luminances of smaller stimuli with identical DDLs. Results: All measured luminance transition times were below 300 μs. Luminances were independent of the luminance in the preceding frame. However, for the single color primaries, up to 50.5% of the luminances of neighboring DDLs were not perceptually distinguishable. If two color primaries were active simultaneously, between 36.7% and 55.1% of neighboring luminances for increasing DDLs of the third primary were even decreasing. Moreover, luminance saturation effects were observed when too many pixels were active simultaneously. This effect was strongest for white; a small white patch was close to 400 cd/m2, but in FS the luminance of white saturated at 162 cd/m2. Due to different saturation levels, the luminance of FS green and FS yellow could exceed the luminance of FS white for identical DDLs. Conclusions: The OLED temporal characteristics are excellent and superior to those of LCDs. However, the OLEDs revealed severe
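The neighboring-DDL comparison reported above can be sketched as follows; for illustration, a simple 1% Weber-fraction criterion stands in for the perceptual (DICOM GSDF-style) model used in the study, and the saturating display response is hypothetical.

```python
import numpy as np

def indistinguishable_fraction(luminances, weber=0.01):
    """Fraction of neighboring DDL pairs whose luminance step falls below
    one just-noticeable difference under a simple Weber criterion."""
    lum = np.asarray(luminances, dtype=float)
    steps = np.diff(lum)
    jnd = weber * lum[:-1]
    return float(np.mean(steps < jnd))

# Hypothetical display response that saturates at high driving levels,
# so the top DDL steps become perceptually invisible
ddl = np.arange(256)
lum = 400.0 * (1.0 - np.exp(-ddl / 80.0))
print(indistinguishable_fraction(lum))
```

For this saturating curve a large share of neighboring DDL pairs is invisible, which is qualitatively the kind of artifact the measurements above describe for the OLED primaries.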

  13. Infrared spectroscopy of radio-luminous OH/IR stars

    NASA Technical Reports Server (NTRS)

    Jones, Terry Jay; Hyland, A. R.; Fix, John D.; Cobb, Michael L.

    1988-01-01

    Low-resolution 1.5-2.5-micron spectra for 21 radio-luminous OH/IR stars are presented. These spectra divide into two broad classes. Those with very strong water-vapor absorption closely resemble the spectra of classical Mira variables and are classified Type VM. Those with weaker water-vapor absorption, but still showing strong CO absorption, resemble the spectra of true core-burning supergiants and are classified Type SG. Comparison of the classification of 30 radio-luminous OH/IR stars with their Delta(V)s and luminosities suggests this classification is a good indicator of the intrinsic nature of the underlying star. There is some evidence, however, that some true supergiants (massive main-sequence progenitors) develop the pulsation properties and photospheric characteristics of the Mira-like OH/IR stars when they become optically obscured OH/IR stars.

  14. Luminance-dependent long-term chromatic adaptation.

    PubMed

    Vincent, Joris; Kale, Alex M; Buck, Steven L

    2016-03-01

    There is theoretical and empirical support for long-term adaptation of human vision to chromatic regularities in the environment. The current study investigates whether relationships between luminance and chromaticity in the natural environment could drive chromatic adaptation independently and differently for bright and dark colors. This is motivated by psychophysical evidence of systematic shifts in red-green chromatic sensitivity between contextually bright- versus dark-colored stimuli. For some broad classes of scene content, consistent shifts in chromaticity are found between high and low light levels within images. Especially in those images in which sky and terrain are juxtaposed, this shift has a direction and magnitude consistent with the observed psychophysical shifts in the red-green balance between bright and dark colors. Taken together, these findings suggest that the relative weighting of M- and L-cone signals could be adapted, in a luminance-dependent fashion, to regularities in the natural environment.

  15. WRA 751, a candidate for a new luminous blue variable

    NASA Technical Reports Server (NTRS)

    Hu, J. Y.; De Winter, D.; The, P. S.; Perez, M. R.

    1990-01-01

    Results are presented on spectroscopic observations (carried out with the ESO 1.52-m telescope at La Silla, Chile) and measurements made by the Infrared Astronomical Satellite of the star WRA 751. It is estimated that the T(eff) of this star is 30,000 K and its E(B-V) is 1.8 mag. Comparison with the evolutionary tracks of Maeder and Meynet (1980) indicates that WRA 751 is located in the region of luminous blue variables, with a lower limit on the initial mass of about 50 solar masses. Its other physical characteristics, such as the light variability, strong Fe II lines, and the extinction-free spectral energy distribution, suggest that WRA 751 is a candidate for a new luminous blue variable.

  16. Luminous fabric devices for wearable low-level light therapy

    PubMed Central

    Shen, Jing; Chui, Chunghin; Tao, Xiaoming

    2013-01-01

    In this paper, a flexible luminous fabric device was developed and investigated for wearable, three-dimensionally fitted low-level light therapy. The fabric device exhibited excellent optical and thermal properties. Its optical power density and operating temperature were stable during usage for 10 hours. In vitro experiments demonstrated a significant increase in collagen production in human fibroblasts irradiated by the fabric device, compared with fibroblasts without light irradiation. A series of tests was conducted for the safety of the fabric for human skin contact according to ISO 10993-1:2003. The results showed that there was no potential hazard when the luminous fabrics were in direct contact with human skin. PMID:24409391

  17. Effect of room illuminance on monitor black level luminance and monitor calibration.

    PubMed

    Chakrabarti, K; Kaczmarek, R V; Thomas, J A; Romanyukha, A

    2003-12-01

    In this article we demonstrate the effect of room illuminance and surrounding monitor black level luminance on image quality for soft copy interpretation. Luminance values of a 10% central target and image quality evaluations and observer performance using a contrast-detail mammography (CDMAM) phantom demonstrate these effects. Our results indicate that high room illuminance has a more damaging effect on image quality when the surrounding monitor luminance is 0% to 5% of the maximum monitor luminance. The effect of room illuminance is less obvious when the surrounding monitor luminance is 20% of the maximum.

  18. Infrared atomic hydrogen line formation in luminous stars

    NASA Technical Reports Server (NTRS)

    Krolik, J. H.; Smith, H. A.

    1981-01-01

    Infrared atomic hydrogen lines observed in luminous stars, generally attributed to compact circumstellar H II regions, can also be formed in the winds likely to emanate from these stars. Implications are discussed for the class of obscured infrared point sources showing these lines, and an illustrative model is derived for the BN object in Orion. Such stellar winds should also produce weak, but detectable, radio emission.

  19. The Beatrice experiment and the luminance concept in Dante

    NASA Astrophysics Data System (ADS)

    Cialdea, R.

    1984-05-01

    An experiment suggested by Beatrice in Dante's Divine Comedy is analyzed to show that Dante was well aware of the knowledge of his time, to the point of exploring concepts not understood in the 13th century. The experiment shows that the luminance of a spherical light source, as perceived by the retina, is independent of the source distance. Three mirrors are used in the experimental set-up.

  20. Race-associated biological differences among Luminal A breast tumors.

    PubMed

    D'Arcy, Monica; Fleming, Jodie; Robinson, Whitney R; Kirk, Erin L; Perou, Charles M; Troester, Melissa A

    2015-07-01

    African-American (AA) women have higher breast cancer-specific mortality rates. A higher prevalence of the worse-outcome Basal-like breast cancer subtype contributes to this, but AA women also have higher mortality even within the more favorable-outcome Luminal A breast cancers. These differences may reflect treatment or health care access issues, inherent biological differences, or both. To identify potential biological differences by race among Luminal A breast cancers, gene expression data from 108 Caucasian (CAU) and 57 AA breast tumors were analyzed. Race-associated genes were evaluated for associations with survival. Finally, expression of race- and survival-associated genes was evaluated in normal tissue of AA and CAU women. Six genes (ACOX2, MUC1, CRYBB2, PSPH, SQLE, TYMS) were differentially expressed by race among Luminal A breast cancers and were associated with survival (HR <0.8 or HR >1.25). For all six genes, tumors in AA women had higher expression of poor-prognosis genes (CRYBB2, PSPH, SQLE, TYMS) and lower expression of good-prognosis genes (ACOX2, MUC1). A score based on all six genes predicted survival in a large independent dataset (HR = 1.9 top vs. bottom quartile, 95% CI: 1.4-2.5). For four genes, normal tissue of AA and CAU women showed similar expression (ACOX2, MUC1, SQLE, TYMS); however, the poor outcome-associated genes CRYBB2 and PSPH were more highly expressed in the normal tissue of AA versus CAU women. This analysis identified gene expression differences that may contribute to mortality disparities and suggests that among Luminal A breast tumors there are biological differences between AA and CAU patients. Some of these differences (CRYBB2 and PSPH) may exist from the earliest stages of tumor development, or may even precede malignancy. PMID:26109344

  2. Methods for cultivation of luminal parasitic protists of clinical importance.

    PubMed

    Clark, C Graham; Diamond, Louis S

    2002-07-01

    Cultivation of luminal protistan parasites has a long history. In this review we discuss the methods and media that are most widely used for the establishment and maintenance of the following organisms in culture: Entamoeba histolytica, Giardia intestinalis, Trichomonas vaginalis, Dientamoeba fragilis, Blastocystis hominis, and Balantidium coli. While cultivation is of limited importance in the diagnostic laboratory, it is essential to most research laboratories, and it is toward the latter that this review is primarily aimed. PMID:12097242

  3. Night sky luminance under clear sky conditions: Theory vs. experiment

    NASA Astrophysics Data System (ADS)

    Kocifaj, Miroslav

    2014-05-01

    Sky glow is caused both by natural phenomena and by factors of anthropogenic origin; of the latter, ground-based light sources are the most important contributors, for they emit a spatially linked spectral radiant intensity distribution of artificial light that is further modulated by local atmospheric optics and perceived as the diffuse light of a night sky. In other words, sky glow is closely related to a city's shape and its pattern of luminaire distribution, in practical effect an almost arbitrary deployment, with random orientation, of heterogeneous electrical light sources. Thus the luminance gradation function measured in a suburban zone or near the edges of a city is linked to the city pattern, and vice versa. It is shown that clear-sky luminance/radiance data recorded in an urban area can be used to retrieve the bulk luminous/radiant intensity distribution if some a priori information on atmospheric aerosols is available. For instance, the single scattering albedo of aerosol particles is required under low-turbidity conditions, as demonstrated in a targeted experiment in the city of Frýdek-Místek. One of the main advantages of the retrieval method presented in this paper is that the single-scattering approximation is satisfactorily accurate in characterizing the light field near the ground, because the dominant contribution to the sky glow originates from beams propagated along short optical paths.

  4. Magnitude of luminance modulation specifies amplitude of perceived movement.

    PubMed

    Allik, J; Pulver, A

    1995-01-01

    A compelling impression of movement, which is perceptually indistinguishable from a real displacement, can be elicited by patterns containing no spatially displaced elements. An apparent oscillation, w-movement, was generated by a stationary pattern containing a large number of horizontal pairs of spatially adjacent dots modulated in brightness. The observer's task was to adjust the perceived amplitude of the w-motion to match the amplitude of a real oscillation. All of the data can be accounted for by a simple rule: If the relative change in the luminance, W = delta L/L, between two adjacent stationary dots is kept constant, the distance over which these dots appeared to travel in space comprises a fixed fraction of the total distance by which they are separated. The apparent amplitude of the w-motion increases strictly in proportion with luminance contrast, provided that the contrast is represented in the motion-encoding system by a rapidly saturating compressive Weibull transformation. These findings can be explained in terms of bilocal motion encoders comparing two luminance modulations occurring at two different locations.
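The rapidly saturating compressive transformation invoked above can be illustrated with a one-parameter Weibull function; the scale and shape values are hypothetical, chosen only to show the compression.

```python
import math

def weibull_response(w, scale=0.2, shape=1.0):
    """Compressive, rapidly saturating transform of the relative
    luminance change W = dL / L (hypothetical parameter values)."""
    return 1.0 - math.exp(-((w / scale) ** shape))

# Small contrasts are encoded nearly linearly; large ones saturate
for w in (0.05, 0.10, 0.50, 1.00):
    print(f"W = {w:.2f} -> response = {weibull_response(w):.3f}")
```

Under such a transform, equal relative luminance changes W map to equal encoded responses at low contrast, consistent with the fixed-fraction rule for the apparent amplitude of w-motion.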

  5. Hot DOGs: The Most Luminous Galaxies Found by WISE

    NASA Astrophysics Data System (ADS)

    Eisenhardt, Peter; Tsai, Chao-Wei; Wu, Jingwen; Assef, Roberto; Stern, Daniel; Wright, Edward

    2013-08-01

    NASA's Wide-field Infrared Survey Explorer (WISE) has achieved its fundamental goal by delivering an all-sky survey at 3.4, 4.6, 12, and 22 μm (W1, W2, W3, and W4), reaching sensitivities hundreds of times deeper than IRAS. One of the two primary science objectives for WISE is to identify the most luminous galaxies in the Universe. We have used WISE photometry to select an all-sky sample of objects which are extremely luminous, and for which Herschel far-IR follow-up observations are 99% complete. The objects are prominent in W3 and W4, but faint or undetected in W1 and W2. The spectroscopy and far-IR photometry for these objects show they typically have redshifts z > 2 and luminosities over 10^13 L_⊙, with about 5-10% exceeding 10^14 L_⊙ and rivaling the brightest known QSOs. Their dust is more than twice as hot as that of other IR-luminous objects: they are hot dust-obscured galaxies, or "hot DOGs," and may represent a new phase in galaxy evolution. Because our 2012B allocation had mixed weather, we request 2013B NOAO time to complete the collection of redshifts and optical and near-IR photometry for this all-sky sample of the brightest hot DOGs. With existing and allocated observations, this request should complete the acquisition of these crucial data for this primary WISE science objective.

  6. Linking luminance and lightness by global contrast normalization.

    PubMed

    Zeiner, Katharina; Maertens, Marianne

    2014-06-03

    In the present experiment we addressed the question of how the visual system determines surface lightness from luminances in the retinal image. We measured the perceived lightness of target surfaces that were embedded in custom-made checkerboards. The checkerboards consisted of 10 by 10 checks of 10 different reflectance values that were arranged randomly across the board. They were rendered under six viewing conditions including plain view, with a shadow-casting cylinder, or with one of four different transparent media covering part of the board. For each reflectance we measured its corresponding luminance in the different viewing conditions. We then assessed the lightness matches of four observers for each of the reflectances in the different viewing conditions. We derived predictions of perceived lightness based on local luminance, Michelson contrast, edge integration, anchoring theory, and a normalized Michelson contrast measure. The normalized contrast measure was the best predictor of surface lightness and was almost as good as the actual reflectance values. The normalized contrast measure combines a local computation of Michelson contrast with a region-based normalization of contrast ranges with respect to the contrast range in plain view. How the segregation of image regions is accomplished remains to be elucidated.
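A minimal sketch of the normalized-contrast computation described above, assuming a simple region-mean surround and illustrative check luminances (not the study's stimuli):

```python
import numpy as np

def michelson(a, b):
    return (a - b) / (a + b)

def normalized_contrast(target, region_lums, plain_lums):
    """Local Michelson contrast against the region's mean surround,
    rescaled so the region's contrast range matches plain view."""
    surround = float(np.mean(region_lums))
    local = michelson(target, surround)
    scale = (michelson(max(plain_lums), min(plain_lums))
             / michelson(max(region_lums), min(region_lums)))
    return local * scale

plain = [5.0, 20.0, 45.0, 90.0]           # check luminances in plain view
trans = [0.3 * l + 10.0 for l in plain]   # same checks under a transparency

print(normalized_contrast(trans[-1], trans, plain))
print(normalized_contrast(plain[-1], plain, plain))
```

Under this rough normalization the covered target's contrast moves back toward its plain-view value, in the spirit of the abstract's claim that a region-normalized contrast predicts lightness better than local luminance or raw Michelson contrast.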

  7. Infrequent Loss of Luminal Differentiation in Ductal Breast Cancer Metastasis

    PubMed Central

    Calvo, Julia; Sánchez-Cid, Lourdes; Muñoz, Montserrat; Lozano, Juan José; Thomson, Timothy M.; Fernández, Pedro L.

    2013-01-01

    Lymph node involvement is a major prognostic variable in breast cancer. Whether the molecular mechanisms that drive breast cancer cells to colonize lymph nodes are shared with their capacity to form distant metastases is yet to be established. In a transcriptomic survey aimed at identifying molecular factors associated with lymph node involvement of ductal breast cancer, we found that luminal differentiation, assessed by the expression of estrogen receptor (ER) and/or progesterone receptor (PR) and GATA3, was only infrequently lost in node-positive primary tumors and in matched lymph node metastases. The transcription factor GATA3 critically determines luminal lineage specification of the mammary epithelium and is widely considered a tumor and metastasis suppressor in breast cancer. Strong expression of GATA3 and ER in a majority of primary node-positive ductal breast cancers was corroborated by quantitative RT-PCR and immunohistochemistry in the initial sample set, and by immunohistochemistry in an additional set from 167 patients diagnosed with node-negative and node-positive primary infiltrating ductal breast cancer, including 102 samples from loco-regional lymph node metastases matched to their primary tumors, as well as 37 distant metastases. These observations suggest that loss of luminal differentiation is not a major factor driving the ability of breast cancer cells to colonize regional lymph nodes. PMID:24205108

  8. Detecting Exomoons around Self-luminous Giant Exoplanets through Polarization

    NASA Astrophysics Data System (ADS)

    Sengupta, Sujan; Marley, Mark S.

    2016-06-01

    Many of the directly imaged self-luminous gas-giant exoplanets have been found to have cloudy atmospheres. Scattering of the emergent thermal radiation from these planets by the dust grains in their atmospheres should locally give rise to significant linear polarization of the emitted radiation. However, the observable disk-averaged polarization should be zero if the planet is spherically symmetric. Rotation-induced oblateness may yield a net non-zero disk-averaged polarization if the planets have sufficiently high spin rotation velocity. On the other hand, when a large natural satellite or exomoon transits a planet with a cloudy atmosphere along the line of sight, the asymmetry induced during the transit should give rise to a net non-zero, time-resolved linear polarization signal. The peak amplitude of such time-dependent polarization may be detectable even for slowly rotating exoplanets. Therefore, we suggest that large exomoons around directly imaged self-luminous exoplanets may be detectable through time-resolved imaging polarimetry. Adopting detailed atmospheric models for several values of effective temperature and surface gravity that are appropriate for self-luminous exoplanets, we present the polarization profiles of these objects in the infrared during the transit phase and estimate the peak amplitude of polarization that occurs during the inner contacts of the transit ingress/egress phase. The peak polarization is predicted to range between 0.1% and 0.3% in the infrared.

  9. Perceived temporal asynchrony between sinusoidally modulated luminance and depth

    PubMed Central

    Žarić, Gojko; Yazdanbakhsh, Arash; Nishina, Shigeaki; De Weerd, Peter; Watanabe, Takeo

    2015-01-01

    Simultaneously presented visual events lead to temporally asynchronous percepts. This has led some researchers to conclude that the asynchronous experience is a manifestation of differences in neural processing time for different visual attributes. Others, however, have suggested that the asynchronous experience is due to differences in temporal markers for changes of different visual attributes. Here, two sets of bars were presented, one to each eye. Either the bars were moving or their luminance was gradually changing. Bars moved horizontally in counterphase at low frequencies along short trajectories and were presented stereoscopically, such that the horizontal movements were perceived as back-and-forth motion on a sagittal plane, or monocularly to a dominant eye, preserving a perception of the horizontal movements on a frontal plane. In a control condition, bars were stationary and their luminance was modulated. The changes in stimulus speed or luminance occurred sinusoidally. When asked to adjust the phase of one stimulus to the other to achieve synchronous perception, participants showed a constant phase offset at the lowest frequencies used. Given the absence of abrupt transitions and the presence of similar gradual turning points in our stimuli to control for attentional effects, it can be concluded that asynchronous percepts in multimodal stimuli may at least in part be a manifestation of difference in neural processing time of visual attributes rather than solely a difference in the temporal markers (transitions versus turning points). PMID:26605842

  10. Cartographic mapping study

    NASA Technical Reports Server (NTRS)

    Wilson, C.; Dye, R.; Reed, L.

    1982-01-01

    The errors associated with planimetric mapping of the United States using satellite remote sensing techniques are analyzed. Assumptions concerning the state of the art achievable for satellite mapping systems and platforms in the 1995 time frame are made. An analysis of these performance parameters is made using an interactive cartographic satellite computer model, after first validating the model using LANDSAT 1 through 3 performance parameters. An investigation of current large-scale (1:24,000) US national mapping techniques is made. Using the results of this investigation, and current national mapping accuracy standards, the 1995 satellite mapping system is evaluated for its ability to meet US mapping standards for planimetric and topographic mapping at scales of 1:24,000 and smaller.

  11. Lunar impact flashes from Geminids: analysis of luminous efficiencies and the flux of large meteoroids on Earth

    NASA Astrophysics Data System (ADS)

    Ortiz, J. L.; Madiedo, J. M.; Morales, N.; Santos-Sanz, P.; Aceituno, F. J.

    2015-11-01

    We analyse lunar impact flashes recorded by our team during runs in December 2007, 2011, 2013 and 2014. In total, 12 impact flashes with magnitudes ranging between 7.1 and 9.3 in V band were identified. From these, nine events could be linked to the Geminid stream. Using these observations, the ratio of luminous energy emitted in the flashes with respect to the kinetic energy of the impactors for meteoroids of the Geminid stream is estimated. By making use of the known Geminids meteoroid flux on Earth we found this ratio to be 2.1 × 10⁻³ on average. We compare this luminous efficiency with other estimations derived in the past for other meteoroid streams and also compare it with other estimations that we present here for the first time by making use of crater diameter measurements. We think that the luminous efficiency has to be revised downwards, not upwards, at least for sporadic impacts. This implies an increase in the influx of kilogram-sized and larger bodies on Earth that has been derived thus far through the lunar impact flash monitoring technique.
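The luminous-efficiency ratio estimated above can be illustrated with a back-of-the-envelope computation; the impactor mass and radiated energy below are hypothetical round numbers, and ~35 km/s is only an approximate Geminid impact speed.

```python
V_GEMINID = 35_000.0  # approximate Geminid impact speed on the Moon, m/s

def luminous_efficiency(radiated_energy_j, mass_kg, speed_m_s=V_GEMINID):
    """Ratio of radiated luminous energy to impactor kinetic energy."""
    kinetic_energy = 0.5 * mass_kg * speed_m_s ** 2
    return radiated_energy_j / kinetic_energy

# A hypothetical 50 g Geminid meteoroid radiating ~64 kJ in the flash
eta = luminous_efficiency(6.4e4, 0.05)
print(f"luminous efficiency ~ {eta:.1e}")
```

For these invented inputs the ratio comes out near the 2 × 10⁻³ scale quoted in the abstract, showing how small a fraction of the impact energy is radiated as visible light.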

  12. Mapping gas-phase organic reactivity and concomitant secondary organic aerosol formation: chemometric dimension reduction techniques for the deconvolution of complex atmospheric datasets

    NASA Astrophysics Data System (ADS)

    Wyche, K. P.; Monks, P. S.; Smallbone, K. L.; Hamilton, J. F.; Alfarra, M. R.; Rickard, A. R.; McFiggans, G. B.; Jenkin, M. E.; Bloss, W. J.; Ryan, A. C.; Hewitt, C. N.; MacKenzie, A. R.

    2015-01-01

    Highly non-linear dynamical systems, such as those found in atmospheric chemistry, necessitate hierarchical approaches to both experiment and modeling in order, ultimately, to identify and achieve fundamental process-understanding in the full open system. Atmospheric simulation chambers comprise an intermediate in complexity, between a classical laboratory experiment and the full, ambient system. As such, they can generate large volumes of difficult-to-interpret data. Here we describe and implement a chemometric dimension reduction methodology for the deconvolution and interpretation of complex gas- and particle-phase composition spectra. The methodology comprises principal component analysis (PCA), hierarchical cluster analysis (HCA) and positive least squares-discriminant analysis (PLS-DA). These methods are, for the first time, applied to simultaneous gas- and particle-phase composition data obtained from a comprehensive series of environmental simulation chamber experiments focused on biogenic volatile organic compound (BVOC) photooxidation and associated secondary organic aerosol (SOA) formation. We primarily investigated the biogenic SOA precursors isoprene, α-pinene, limonene, myrcene, linalool and β-caryophyllene. The chemometric analysis is used to classify the oxidation systems and resultant SOA according to the controlling chemistry and the products formed. Furthermore, a holistic view of results across both the gas- and particle-phases shows the different SOA formation chemistry, initiating in the gas-phase, proceeding to govern the differences between the various BVOC SOA compositions. The results obtained are used to describe the particle composition in the context of the oxidized gas-phase matrix. An extension of the technique, which incorporates into the statistical models data from anthropogenic (i.e. toluene) oxidation and "more realistic" plant mesocosm systems, demonstrates that such an ensemble of chemometric mapping has the potential to be

  13. Luminance measurement to evaluate the damage of notched FRP plates in static load

    SciTech Connect

    Hyakutake, H.; Yamamoto, T.

    1995-11-01

    The validity of the damage criterion for notched FRP plates based on the concept of severity near the notch root is subjected to further experimental scrutiny. An experimental program is presented which examines the effect of notch geometry on the damage near the notch root of FRP plates. This is accomplished by obtaining experimental data on notched specimens of a glass cloth/epoxy laminate for a wide range of notch geometries in tension and bending. The process of initiation and growth of damage near the notch root was measured by means of the luminance measurement technique with a CCD camera. The experiment shows that the growth of the damage zone near the notch root was governed predominantly by both the notch-root radius and the maximum elastic stress at the notch root, while it was independent of notch depth and type of loading. The experimental results can be clearly explained on the basis of the concept of severity.
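
    The abstract does not specify how CCD luminance values were converted into a damage measure. One plausible minimal sketch, assuming damage shows up as a relative luminance change near the notch root exceeding some threshold (`drop_threshold` is an invented parameter, not taken from the paper), is:

```python
import numpy as np

def damage_zone_area(reference, loaded, drop_threshold=0.2):
    """Estimate damage-zone area (in pixels) from two luminance maps.

    Damage such as matrix cracking or delamination scatters light, so
    damaged regions show a relative luminance change beyond `drop_threshold`.
    Both inputs are 2-D arrays of CCD luminance values of the same shape.
    """
    rel_change = np.abs(loaded - reference) / np.maximum(reference, 1e-9)
    return int(np.count_nonzero(rel_change > drop_threshold))

# Toy frames: a flat field, then a brightened "whitened" patch at the notch root
ref = np.full((100, 100), 100.0)
loaded = ref.copy()
loaded[45:55, 0:10] *= 1.5   # 10x10-pixel damage zone
print(damage_zone_area(ref, loaded))  # 100
```

    Tracking this pixel count frame by frame under increasing load would give the damage-growth curves the study describes.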

  14. Luminous bacteria cultured from fish guts in the Gulf of Oman.

    PubMed

    Makemson, J C; Hermosa, G V

    1999-01-01

    The incidence of culturable luminous bacteria in Omani market fish guts was correlated to habitat type amongst 109 species of fish. Isolated representative luminous bacteria were compared to known species using the Biolog system (95 traits/isolate) and cluster analysis, which showed that the main taxa present in fish guts were clades related to Vibrio harveyi and Photobacterium species with sporadic incidence of P. phosphoreum. The luminous isolates from gut of the slip-mouth (barred pony fish), Leiognathus fasciatus, were mainly a type related to Photobacterium but phenotypically different from known species. These luminous gut bacteria were identical with the bacteria in the light organ, indicating that the light organ supplies a significant quantity of luminous bacteria to the gut. In many of the fish that lack light organs, luminous bacteria were also the dominant bacterial type in the gut, while in some others luminous bacteria were encountered sporadically and at low densities, reflecting the incidence of culturable luminous bacteria in seawater. Pelagic fish contained the highest incidence of culturable luminous bacteria and reef-associated fish the lowest. No correlation was found between the incidence of culturable luminous bacteria and the degree to which fish produce a melanin-covered gut.
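
    The Biolog comparison rests on clustering 95-trait profiles of the isolates. A minimal sketch of that kind of analysis, using an invented presence/absence trait matrix with Jaccard distances and average-linkage hierarchical clustering (the specific metric and linkage are assumptions, not taken from the paper):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

# Toy binary trait matrix: 6 isolates x 12 traits, two phenotypic groups
traits = np.array([
    [1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 0],
    [1, 1, 1, 0, 0, 0, 0, 0, 1, 0, 1, 0],
    [1, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 0],
    [0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 0, 1],
    [0, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 1, 1, 1, 0, 1, 0, 1],
])

# Jaccard distance suits presence/absence traits; average linkage builds the tree
Z = linkage(pdist(traits.astype(bool), metric="jaccard"), method="average")
clusters = fcluster(Z, t=2, criterion="maxclust")
print(clusters)  # first three isolates fall in one clade, last three in the other
```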

  15. Extensive Transcriptomic and Genomic Analysis Provides New Insights about Luminal Breast Cancers

    PubMed Central

    Tishchenko, Inna; Milioli, Heloisa Helena; Riveros, Carlos; Moscato, Pablo

    2016-01-01

    Despite constituting approximately two thirds of all breast cancers, the luminal A and B tumours are poorly classified at both clinical and molecular levels. There are contradictory reports on the nature of these subtypes: some define them as intrinsic entities, others as a continuum. With the aim of addressing these uncertainties and identifying molecular signatures of patients at risk, we conducted a comprehensive transcriptomic and genomic analysis of 2,425 luminal breast cancer samples. Our results indicate that the separation between the molecular luminal A and B subtypes, by definition, is not associated with intrinsic characteristics evident in the differentiation between other subtypes. Moreover, t-SNE and MST-kNN clustering approaches based on 10,000 probes, associated with luminal tumour initiation and/or development, revealed the close connections between luminal A and B tumours, with no evidence of a clear boundary between them. Thus, we considered all luminal tumours as a single heterogeneous group for analysis purposes. We first stratified luminal tumours into two distinct groups by their HER2 gene cluster co-expression: HER2-amplified luminal and ordinary-luminal. The former group is associated with distinct transcriptomic and genomic profiles, and poor prognosis; it comprises approximately 8% of all luminal cases. For the remaining ordinary-luminal tumours we further identified the molecular signature correlated with disease outcomes, exhibiting an approximately continuous gene expression range from low to high risk. Thus, we employed four virtual quantiles to segregate the groups of patients. The clinico-pathological characteristics and ratios of genomic aberrations are concordant with the variations in gene expression profiles, hinting at a progressive staging. The comparison with the current separation into luminal A and B subtypes revealed a substantially improved survival stratification. In conclusion, we suggest a review of the definition of
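
    The "four virtual quantiles" used to segregate patients amount to quartile binning of a continuous risk score. A minimal sketch of such stratification, with a synthetic score standing in for the paper's gene-expression signature (the score and group count are illustrative):

```python
import numpy as np

def stratify_by_quantiles(risk_scores, n_groups=4):
    """Assign each tumour to one of `n_groups` risk strata by score quantiles."""
    edges = np.quantile(risk_scores, np.linspace(0, 1, n_groups + 1)[1:-1])
    return np.searchsorted(edges, risk_scores, side="right")

rng = np.random.default_rng(1)
scores = rng.normal(size=200)   # stand-in for a continuous expression signature
groups = stratify_by_quantiles(scores)
print(np.bincount(groups))  # four strata of 50 patients each
```

    Survival curves per stratum (e.g. Kaplan-Meier) would then test whether the binning improves outcome stratification, as the abstract reports.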

  17. Extended [C II] Emission in Local Luminous Infrared Galaxies

    NASA Astrophysics Data System (ADS)

    Díaz-Santos, T.; Armus, L.; Charmandaris, V.; Stacey, G.; Murphy, E. J.; Haan, S.; Stierwalt, S.; Malhotra, S.; Appleton, P.; Inami, H.; Magdis, G. E.; Elbaz, D.; Evans, A. S.; Mazzarella, J. M.; Surace, J. A.; van der Werf, P. P.; Xu, C. K.; Lu, N.; Meijerink, R.; Howell, J. H.; Petric, A. O.; Veilleux, S.; Sanders, D. B.

    2014-06-01

    We present Herschel/PACS observations of extended [C II] 157.7 μm line emission detected on ~1-10 kpc scales in 60 local luminous infrared galaxies (LIRGs) from the Great Observatories All-sky LIRG Survey. We find that most of the extra-nuclear emission shows [C II]/FIR ratios ≥4 × 10⁻³, larger than the mean ratio seen in the nuclei, and similar to those found in the extended disks of normal star-forming galaxies and the diffuse interstellar medium of our Galaxy. The [C II] "deficits" found in the most luminous local LIRGs are therefore restricted to their nuclei. There is a trend for LIRGs with warmer nuclei to show larger differences between their nuclear and extra-nuclear [C II]/FIR ratios. We find an anti-correlation between [C II]/FIR and the luminosity surface density, Σ_IR, for the extended emission in the spatially resolved galaxies. However, there is an offset between this trend and that found for the LIRG nuclei. We use this offset to derive a beam filling-factor for the star-forming regions within the LIRG disks of ~6% relative to their nuclei. We compare the observed trend with photo-dissociation region models and find that the slope of the correlation is much shallower than the model predictions. Finally, we compare the correlation found between [C II]/FIR and Σ_IR with measurements of high-redshift starbursting IR-luminous galaxies.
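
    The reported anti-correlation between [C II]/FIR and Σ_IR behaves as a power law, so its slope can be measured with a straight-line fit in log-log space. A minimal sketch on synthetic apertures (the slope of -0.2, the scatter, and the Σ_IR range are invented for illustration, not the paper's values):

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic extended-emission apertures: log10(Sigma_IR) in L_sun/kpc^2,
# with [C II]/FIR declining as a power law plus Gaussian scatter
log_sigma = rng.uniform(8, 12, 80)
log_ratio = -2.0 - 0.2 * (log_sigma - 10) + rng.normal(0, 0.05, 80)

# Slope of the anti-correlation in log-log space
slope, intercept = np.polyfit(log_sigma, log_ratio, 1)
print(round(slope, 1))  # recovers the assumed slope of -0.2
```

    Comparing such a fitted slope against the steeper slope predicted by PDR model grids is the kind of test the abstract describes.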

  18. LUMINOUS SATELLITES VERSUS DARK SUBHALOS: CLUSTERING IN THE MILKY WAY

    SciTech Connect

    Bozek, Brandon; Wyse, Rosemary F. G.; Gilmore, Gerard

    2013-08-01

    The observed population of the Milky Way satellite galaxies offers a unique testing ground for galaxy formation theory on small scales. Our novel approach was to investigate the clustering of the known Milky Way satellite galaxies and to quantify the amount of substructure within their distribution using a two-point correlation function statistic in each of three spaces: configuration space, line-of-sight velocity space, and four-dimensional (4D) phase space. These results were compared to those for three sets of subhalos in the Via Lactea II cold dark matter (CDM) simulation defined to represent the luminous dwarfs. We found no evidence at a significance level above 2σ of substructure within the distribution of the Milky Way satellite galaxies in any of the three spaces. The 'luminous' subhalo sets are more strongly clustered than are the Milky Way satellites in all three spaces and over a broader range of scales in 4D phase space. Each of the 'luminous' subhalo sets is clustered as a result of substructure within its line-of-sight velocity space distribution at greater than 3σ significance, whereas the Milky Way satellite galaxies are randomly distributed in line-of-sight velocity space. While our comparison is with only one CDM simulation, the inconsistencies between the Milky Way satellite galaxies and the Via Lactea II subhalo sets for all clustering methods suggest a potential new 'small-scale' tension between CDM theory and the observed Milky Way satellites. Future work will obtain a more robust comparison between the observed Milky Way satellites and CDM theory by studying additional simulations.
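
    A two-point correlation function statistic of the kind described can be sketched with the natural estimator ξ(r) = DD/RR − 1 against a uniform random catalogue. The paper's exact estimator and its 4D phase-space version are not given in the abstract, so this configuration-space toy is an assumption:

```python
import numpy as np

def pair_counts(points, bins):
    """Histogram of pairwise separations for an (N, 3) array of positions."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    iu = np.triu_indices(len(points), k=1)
    return np.histogram(d[iu], bins=bins)[0]

def two_point_correlation(data, n_random=1000, bins=None, seed=0):
    """Natural estimator xi(r) = DD/RR - 1 against a uniform random catalogue."""
    rng = np.random.default_rng(seed)
    lo, hi = data.min(axis=0), data.max(axis=0)
    randoms = rng.uniform(lo, hi, size=(n_random, 3))
    if bins is None:
        bins = np.linspace(0.0, np.linalg.norm(hi - lo) / 2, 11)
    dd = pair_counts(data, bins).astype(float)
    rr = pair_counts(randoms, bins).astype(float)
    n_d = len(data)
    norm = (n_random * (n_random - 1)) / (n_d * (n_d - 1))  # pair-count normalisation
    with np.errstate(divide="ignore", invalid="ignore"):
        xi = norm * dd / rr - 1.0
    return bins, xi

# Clustered toy "satellites": two tight groups -> excess power on small scales
rng = np.random.default_rng(3)
data = np.vstack([rng.normal(0, 5, (15, 3)), rng.normal(100, 5, (15, 3))])
bins, xi = two_point_correlation(data)
print(xi[0] > 0)  # smallest-scale bin shows clustering (xi > 0)
```

    A randomly distributed sample, by contrast, would give ξ consistent with zero in every bin, which is the null result the study reports for the observed satellites.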

  19. Wide-field lensing mass maps from Dark Energy Survey science verification data: Methodology and detailed analysis

    NASA Astrophysics Data System (ADS)

    Vikram, V.; Chang, C.; Jain, B.; Bacon, D.; Amara, A.; Becker, M. R.; Bernstein, G.; Bonnett, C.; Bridle, S.; Brout, D.; Busha, M.; Frieman, J.; Gaztanaga, E.; Hartley, W.; Jarvis, M.; Kacprzak, T.; Kovács, A.; Lahav, O.; Leistedt, B.; Lin, H.; Melchior, P.; Peiris, H.; Rozo, E.; Rykoff, E.; Sánchez, C.; Sheldon, E.; Troxel, M. A.; Wechsler, R.; Zuntz, J.; Abbott, T.; Abdalla, F. B.; Armstrong, R.; Banerji, M.; Bauer, A. H.; Benoit-Lévy, A.; Bertin, E.; Brooks, D.; Buckley-Geer, E.; Burke, D. L.; Capozzi, D.; Carnero Rosell, A.; Carrasco Kind, M.; Castander, F. J.; Crocce, M.; Cunha, C. E.; D'Andrea, C. B.; da Costa, L. N.; DePoy, D. L.; Desai, S.; Diehl, H. T.; Dietrich, J. P.; Estrada, J.; Evrard, A. E.; Fausti Neto, A.; Fernandez, E.; Flaugher, B.; Fosalba, P.; Gerdes, D.; Gruen, D.; Gruendl, R. A.; Honscheid, K.; James, D.; Kent, S.; Kuehn, K.; Kuropatkin, N.; Li, T. S.; Maia, M. A. G.; Makler, M.; March, M.; Marshall, J.; Martini, P.; Merritt, K. W.; Miller, C. J.; Miquel, R.; Neilsen, E.; Nichol, R. C.; Nord, B.; Ogando, R.; Plazas, A. A.; Romer, A. K.; Roodman, A.; Sanchez, E.; Scarpine, V.; Sevilla, I.; Smith, R. C.; Soares-Santos, M.; Sobreira, F.; Suchyta, E.; Swanson, M. E. C.; Tarle, G.; Thaler, J.; Thomas, D.; Walker, A. R.; Weller, J.

    2015-07-01

    Weak gravitational lensing allows one to reconstruct the spatial distribution of the projected mass density across the sky. These "mass maps" provide a powerful tool for studying cosmology as they probe both luminous and dark matter. In this paper, we present a weak lensing mass map reconstructed from shear measurements in a 139 deg² area from the Dark Energy Survey (DES) science verification data. We compare the distribution of mass with that of the foreground distribution of galaxies and clusters. The overdensities in the reconstructed map correlate well with the distribution of optically detected clusters. We demonstrate that candidate superclusters and voids along the line of sight can be identified, exploiting the tight scatter of the cluster photometric redshifts. We cross-correlate the mass map with a foreground magnitude-limited galaxy sample from the same data. Our measurement gives results consistent with mock catalogs from N-body simulations that include the primary sources of statistical uncertainties in the galaxy, lensing, and photo-z catalogs. The statistical significance of the cross-correlation is at the 6.8σ level with 20 arcminute smoothing. We find that the contribution of systematics to the lensing mass maps is generally within measurement uncertainties. In this work, we analyze less than 3% of the final area that will be mapped by the DES; the tools and analysis techniques developed in this paper can be applied to forthcoming larger data sets from the survey.

  20. A conformal mapping technique to correlate the rotating flow around a wing section of vertical axis wind turbine and an equivalent linear flow around a static wing

    NASA Astrophysics Data System (ADS)

    Akimoto, Hiromichi; Hara, Yutaka; Kawamura, Takafumi; Nakamura, Takuju; Lee, Yeon-Seung

    2013-12-01

    In a vertical axis wind turbine (VAWT), turbine blades are subjected to the curved flow field caused by the revolution of the turbine. However, performance prediction of a VAWT is usually based on the fluid dynamic coefficients obtained in wind tunnel measurements of a two-dimensional static wing. The difference between the fluid dynamic coefficients in curved and straight flow deteriorates the accuracy of performance prediction. To find the correlation between the two conditions of curved and straight flow, the authors propose a conformal mapping method on the complex plane. It provides bidirectional mapping between the two flow fields. For example, the flow around a symmetric wing in the curved flow is mapped to that around a curved (cambered) wing in the straight flow. Although the shape of the mapped wing section differs from the original one, its aerodynamic coefficients show a good correlation with those of the original in the rotating condition. With the proposed method, we can reproduce the local flow field around a rotating blade from the flow data around the mapped static wing in the straight flow condition.
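
    The paper's specific transformation is not reproduced in the abstract, but the general idea, a log-type conformal map that straightens circular streamlines, can be sketched as follows. The map and the radius R here are illustrative assumptions, not the authors' exact formulation:

```python
import numpy as np

def curved_to_straight(w, R):
    """Map the rotating-frame plane to an equivalent straight-flow plane.

    A point w on the turbine's circular path (|w| = R) maps onto the real
    axis; circles around the rotation axis become horizontal lines, so
    curved streamlines straighten. Conformality preserves local angles,
    which is why aerodynamic behaviour carries over between the planes.
    """
    return -1j * R * np.log(w / R)

R = 3.0   # assumed turbine radius
theta = np.linspace(-0.4, 0.4, 9)
arc = R * np.exp(1j * theta)            # points along the blade's circular path
z = curved_to_straight(arc, R)
print(np.allclose(z.imag, 0.0), np.allclose(z.real, R * theta))
```

    Applying such a map to a symmetric wing section on the circular path yields a cambered section in the straight-flow plane, mirroring the symmetric-to-cambered correspondence described in the abstract.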