Bergmann, Helmar; Minear, Gregory; Raith, Maria; Schaffarich, Peter M
2008-12-09
The accuracy of multiple window spatial registration characterises the performance of a gamma camera for dual isotope imaging. In the present study we investigate an alternative method to the standard NEMA procedure for measuring this performance parameter. A long-lived 133Ba point source with gamma energies close to those of 67Ga and a single-bore lead collimator were used to measure the multiple window spatial registration error. The positions of the point source in the images were calculated with the NEMA algorithm. The results were validated against the values obtained by the standard NEMA procedure, which uses a collimated liquid 67Ga source. Of the source-collimator configurations under investigation, an optimum collimator geometry was selected, consisting of a 5 mm thick lead disk with a diameter of 46 mm and a 5 mm central bore. The multiple window spatial registration errors obtained by the 133Ba method showed excellent reproducibility (standard deviation < 0.07 mm). The values were compared with the results from the NEMA procedure obtained at the same locations and showed small differences, with a correlation coefficient of 0.51 (p < 0.05). In addition, the 133Ba point source method proved to be much easier to use. A Bland-Altman analysis showed that the 133Ba and 67Ga methods can be used interchangeably. The 133Ba point source method measures the multiple window spatial registration error with essentially the same accuracy as the NEMA-recommended procedure, but is easier and safer to use and has the potential to replace the current standard procedure.
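For reference, a minimal Python sketch of the Bland-Altman agreement analysis cited in this record; the function and the sample values are illustrative, not the study's data.

```python
import numpy as np

def bland_altman(a, b):
    """Bland-Altman statistics for paired measurements of the same quantity.
    Returns the mean difference (bias) and the 95% limits of agreement."""
    diff = np.asarray(a, float) - np.asarray(b, float)
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return bias, (bias - half_width, bias + half_width)

# Hypothetical MWSR errors (mm) measured at the same detector locations:
ba133 = [0.21, 0.35, 0.18, 0.42, 0.30]
ga67 = [0.25, 0.31, 0.22, 0.40, 0.27]
print(bland_altman(ba133, ga67))  # methods agree if differences sit inside the limits
```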
In order to protect estuarine resources, managers must be able to discern the effects of natural conditions and non-point source effects, and separate them from multiple anthropogenic point source effects. Our approach was to evaluate benthic community assemblages, riverine nitro...
NASA Astrophysics Data System (ADS)
Nagasaka, Yosuke; Nozu, Atsushi
2017-02-01
The pseudo point-source model approximates the rupture process on faults with multiple point sources for simulating strong ground motions. A simulation with this point-source model is conducted by combining a simple source spectrum following the omega-square model with a path spectrum, an empirical site amplification factor, and phase characteristics. Realistic waveforms can be synthesized using the empirical site amplification factor and phase models even though the source model is simple. The Kumamoto earthquake occurred on April 16, 2016, with MJMA 7.3. Many strong motions were recorded at stations around the source region. Some records were considered to be affected by the rupture directivity effect. This earthquake was therefore suitable for investigating the applicability of the pseudo point-source model, the current version of which does not consider the rupture directivity effect. Three subevents (point sources) were located on the fault plane, and the parameters of the simulation were determined. The simulated results were compared with the observed records at K-NET and KiK-net stations. The synthetic Fourier spectra and velocity waveforms generally reproduced the characteristics of the observed records, except for an underestimation in the low frequency range. Troughs in the observed Fourier spectra were also well reproduced by placing multiple subevents near the hypocenter. The underestimation is presumably due to two reasons. The first is that the pseudo point-source model targets subevents that generate strong ground motions and does not consider the shallow large slip. The second is that the current version of the pseudo point-source model does not consider the rupture directivity effect. Consequently, strong pulses were not sufficiently reproduced at stations northeast of Subevent 3, such as KMM004, where the effect of rupture directivity was significant, while the amplitude was well reproduced at most of the other stations. This result indicates the need to improve the pseudo point-source model, for example by introducing an azimuth-dependent corner frequency, so that it can incorporate the effect of rupture directivity.
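As a rough illustration of the source term this record describes, here is a hedged Python sketch of an omega-square subevent spectrum; the moments and corner frequencies are invented, and the path, site-amplification, and phase factors the model multiplies in are omitted.

```python
import numpy as np

def omega_square(f, m0, fc):
    """Omega-square moment-rate spectrum |S(f)| = M0 / (1 + (f/fc)^2) for one
    subevent; the pseudo point-source model combines this with a path spectrum
    and an empirical site amplification factor (omitted here)."""
    return m0 / (1.0 + (f / fc) ** 2)

f = np.logspace(-1, 1.5, 200)   # 0.1-30 Hz
# Three hypothetical subevents (moment in N*m, corner frequency in Hz):
total = sum(omega_square(f, m0, fc) for m0, fc in [(2e18, 0.3), (1e18, 0.5), (5e17, 0.8)])
```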
VizieR Online Data Catalog: GUViCS. Ultraviolet Source Catalogs (Voyer+, 2014)
NASA Astrophysics Data System (ADS)
Voyer, E. N.; Boselli, A.; Boissier, S.; Heinis, S.; Cortese, L.; Ferrarese, L.; Cote, P.; Cuillandre, J.-C.; Gwyn, S. D. J.; Peng, E. W.; Zhang, H.; Liu, C.
2014-07-01
These catalogs are based on GALEX NUV and FUV source detections in and behind the Virgo Cluster. The detections are split into catalogs of extended sources and point-like sources. The UV Virgo Cluster Extended Source catalog (UV_VES.fit) provides the deepest and most extensive UV photometric data of extended galaxies in Virgo to date. If certain data are not available for a given source, a null value is entered (e.g. -999, -99). UV point-like sources are matched with SDSS, NGVS, and NED, and the relevant photometry and further data from these databases/catalogs are provided in this compilation of catalogs. The primary GUViCS UV Virgo Cluster Point-Like Source catalog is UV_VPS.fit. This catalog provides the most useful GALEX pipeline NUV and FUV photometric parameters, and categorizes sources as stars, Virgo members, and background sources, when possible. It also provides identifiers for optical matches in the SDSS and NED, and indicates if a match exists in the NGVS, only if GUViCS-optical matches are one-to-one. NED spectroscopic redshifts are also listed for GUViCS-NED one-to-one matches. If certain data are not available for a given source a null value is entered. Additionally, the catalog is useful for quick access to optical data on one-to-one GUViCS-SDSS matches. The only parameter available in the catalog for UV sources that have multiple SDSS matches is the total number of multiple matches, i.e. SDSSNUMMTCHS. Multiple GUViCS sources matched to the same SDSS source are also flagged, with a total number of matches, SDSSNUMMTCHS, of one. All other fields for multiple matches are set to a null value of -99. In order to obtain full optical SDSS data for multiply matched UV sources in both scenarios, the user can cross-correlate the GUViCS ID of the sources of interest with the full GUViCS-SDSS matched catalog in GUV_SDSS.fit. The GUViCS-SDSS matched catalog, GUV_SDSS.fit, provides the most relevant SDSS data on all GUViCS-SDSS matches, including one-to-one matches and multiply matched sources. The catalog gives full SDSS identification information, complete SDSS photometric measurements in multiple aperture types, and complete redshift information (photometric and spectroscopic). It is ideal for large statistical studies of galaxy populations at multiple wavelengths in the background of the Virgo Cluster. The catalog can also be used as a starting point to study and search for previously unknown UV-bright point-like objects within the Virgo Cluster. If certain data are not available for a given source, that field is given a null value. (6 data files).
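A hypothetical usage sketch in Python, assuming the catalog files named in this record are on disk: read the point-like source catalog, treat the documented null sentinels (-999, -99) as missing data, and set aside sources with multiple SDSS matches (for which only SDSSNUMMTCHS is populated).

```python
import numpy as np
from astropy.table import Table

cat = Table.read("UV_VPS.fit")
multi = cat["SDSSNUMMTCHS"] > 1     # cross-correlate these with GUV_SDSS.fit for full data
single = cat[~multi]
for name in single.colnames:
    col = single[name]
    if col.dtype.kind == "f":       # float columns: replace null sentinels with NaN
        col[np.isin(col, (-999.0, -99.0))] = np.nan
```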
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tweardy, Matthew C.; McConchie, Seth; Hayward, Jason P.
2017-06-13
An extension of the point kinetics model is developed in this paper to describe the neutron multiplicity response of a bare uranium object under interrogation by an associated particle imaging deuterium-tritium (D-T) measurement system. This extended model is used to estimate the total neutron multiplication of the uranium. Both MCNPX-PoliMi simulations and data from active interrogation measurements of highly enriched and depleted uranium geometries are used to evaluate the potential of this method and to identify the sources of systematic error. The detection efficiency correction for measured coincidence response is identified as a large source of systematic error. If the detection process is not considered, results suggest that the method can estimate total multiplication to within 13% of the simulated value. Values for multiplicity constants in the point kinetics equations are sensitive to enrichment due to (n, xn) interactions by D-T neutrons and can introduce another significant source of systematic bias. This can theoretically be corrected if isotopic composition is known a priori. Finally, the spatial dependence of multiplication is also suspected of introducing further systematic bias for high multiplication uranium objects.
General theory of remote gaze estimation using the pupil center and corneal reflections.
Guestrin, Elias Daniel; Eizenman, Moshe
2006-06-01
This paper presents a general theory for the remote estimation of the point-of-gaze (POG) from the coordinates of the centers of the pupil and corneal reflections. Corneal reflections are produced by light sources that illuminate the eye and the centers of the pupil and corneal reflections are estimated in video images from one or more cameras. The general theory covers the full range of possible system configurations. Using one camera and one light source, the POG can be estimated only if the head is completely stationary. Using one camera and multiple light sources, the POG can be estimated with free head movements, following the completion of a multiple-point calibration procedure. When multiple cameras and multiple light sources are used, the POG can be estimated following a simple one-point calibration procedure. Experimental and simulation results suggest that the main sources of gaze estimation errors are the discrepancy between the shape of real corneas and the spherical corneal shape assumed in the general theory, and the noise in the estimation of the centers of the pupil and corneal reflections. A detailed example of a system that uses the general theory to estimate the POG on a computer screen is presented.
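To make the calibration idea concrete, here is a minimal Python sketch of an interpolation-style gaze estimator that maps the pupil-center-minus-corneal-reflection vector to screen coordinates through a multiple-point calibration (at least six points for this polynomial). This is a common simplified stand-in of our own construction, not the paper's full 3-D geometric model.

```python
import numpy as np

def design(v):
    """Second-order polynomial design matrix in the pupil-glint vector (vx, vy)."""
    v = np.atleast_2d(v)
    vx, vy = v[:, 0], v[:, 1]
    return np.column_stack([np.ones_like(vx), vx, vy, vx * vy, vx**2, vy**2])

def calibrate(v_cal, screen_xy):
    """v_cal: (n, 2) pupil-glint vectors; screen_xy: (n, 2) known target points."""
    coeff, *_ = np.linalg.lstsq(design(v_cal), np.asarray(screen_xy, float), rcond=None)
    return coeff

def estimate_pog(coeff, v):
    """Point-of-gaze estimate for new pupil-glint vectors."""
    return design(v) @ coeff
```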
Distributed optimization system and method
Hurtado, John E.; Dohrmann, Clark R.; Robinett, III, Rush D.
2003-06-10
A search system and method for controlling multiple agents to optimize an objective using distributed sensing and cooperative control. The search agent can be one or more physical agents, such as a robot, and can be software agents for searching cyberspace. The objective can be: chemical sources, temperature sources, radiation sources, light sources, evaders, trespassers, explosive sources, time dependent sources, time independent sources, function surfaces, maximization points, minimization points, and optimal control of a system such as a communication system, an economy, a crane, and a multi-processor computer.
Distributed Optimization System
Hurtado, John E.; Dohrmann, Clark R.; Robinett, III, Rush D.
2004-11-30
A search system and method for controlling multiple agents to optimize an objective using distributed sensing and cooperative control. The search agent can be one or more physical agents, such as a robot, and can be software agents for searching cyberspace. The objective can be: chemical sources, temperature sources, radiation sources, light sources, evaders, trespassers, explosive sources, time dependent sources, time independent sources, function surfaces, maximization points, minimization points, and optimal control of a system such as a communication system, an economy, a crane, and a multi-processor computer.
Innovations in the Analysis of Chandra-ACIS Observations
NASA Astrophysics Data System (ADS)
Broos, Patrick S.; Townsley, Leisa K.; Feigelson, Eric D.; Getman, Konstantin V.; Bauer, Franz E.; Garmire, Gordon P.
2010-05-01
As members of the instrument team for the Advanced CCD Imaging Spectrometer (ACIS) on NASA's Chandra X-ray Observatory and as Chandra General Observers, we have developed a wide variety of data analysis methods that we believe are useful to the Chandra community, and have constructed a significant body of publicly available software (the ACIS Extract package) addressing important ACIS data and science analysis tasks. This paper seeks to describe these data analysis methods for two purposes: to document the data analysis work performed in our own science projects and to help other ACIS observers judge whether these methods may be useful in their own projects (regardless of what tools and procedures they choose to implement those methods). The ACIS data analysis recommendations we offer here address much of the workflow in a typical ACIS project, including data preparation, point source detection via both wavelet decomposition and image reconstruction, masking point sources, identification of diffuse structures, event extraction for both point and diffuse sources, merging extractions from multiple observations, nonparametric broadband photometry, analysis of low-count spectra, and automation of these tasks. Many of the innovations presented here arise from several, often interwoven, complications that are found in many Chandra projects: large numbers of point sources (hundreds to several thousand), faint point sources, misaligned multiple observations of an astronomical field, point source crowding, and scientifically relevant diffuse emission.
Liu, Mei-bing; Chen, Xing-wei; Chen, Ying
2015-07-01
Identification of the critical source areas of non-point source pollution is an important means to control non-point source pollution within a watershed. In order to further reveal the impact of multiple time scales on the spatial differentiation characteristics of non-point source nitrogen loss, a SWAT model of the Shanmei Reservoir watershed was developed. Based on the simulated total nitrogen (TN) loss intensity of all 38 subbasins, the spatial distribution characteristics of nitrogen loss and the critical source areas were analyzed at three time scales: yearly average, monthly average and rainstorm flood process. Furthermore, multiple linear correlation analysis was conducted to analyze the contributions of the natural environment and anthropogenic disturbance to nitrogen loss. The results showed significant spatial differences in TN loss in the Shanmei Reservoir watershed at different time scales, and the degree of spatial differentiation of nitrogen loss was in the order of monthly average > yearly average > rainstorm flood process. TN loss load mainly came from the upland Taoxi subbasin, which was identified as the critical source area. At all time scales, land use type (such as farmland and forest) was the dominant factor affecting the spatial distribution of nitrogen loss, whereas precipitation and runoff affected nitrogen loss only in months without fertilization and in several storm-flood events occurring on dates without fertilization. This was mainly due to the significant spatial variation of land use and fertilization, as well as the low spatial variability of precipitation and runoff.
NASA Astrophysics Data System (ADS)
Singh, Sarvesh Kumar; Rani, Raj
2015-10-01
The study addresses the identification of multiple point sources, emitting the same tracer, from a limited set of merged concentration measurements. The identification, here, refers to the estimation of the locations and strengths of a known number of simultaneous point releases. The source-receptor relationship is described in the framework of adjoint modelling by using an analytical Gaussian dispersion model. A least-squares minimization framework, free from any initialization of the release parameters (locations and strengths), is presented to estimate the release parameters. This utilizes the distributed source information observable from the given monitoring design and number of measurements. The technique leads to an exact retrieval of the true release parameters when measurements are noise-free and exactly described by the dispersion model. The inversion algorithm is evaluated using real data from multiple (two, three and four) releases conducted during the Fusion Field Trials in September 2007 at Dugway Proving Ground, Utah. The release locations are retrieved, on average, within 25-45 m of the true sources, with the distance from retrieved to true source ranging from 0 to 130 m. The release strengths are also estimated within a factor of three of the true release rates. The average deviations in the retrieval of source locations are relatively large in the two-release trials in comparison to the three- and four-release trials.
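A hedged Python sketch of the forward-model-plus-least-squares idea behind this record, using a crude Gaussian plume as a stand-in for the study's analytical dispersion model. Note the study's framework avoids initialization, whereas scipy's solver needs a starting guess p0; all model constants here are invented.

```python
import numpy as np
from scipy.optimize import least_squares

def gaussian_plume(q, xs, ys, xr, yr, u=2.0):
    """Ground-level concentration at receptors (xr, yr) from a point release of
    rate q at (xs, ys), wind along +x, power-law dispersion coefficients."""
    dx, dy = xr - xs, yr - ys
    down = np.clip(dx, 1.0, None)
    sig_y, sig_z = 0.08 * down**0.9, 0.06 * down**0.9
    c = q / (2 * np.pi * u * sig_y * sig_z) * np.exp(-0.5 * (dy / sig_y) ** 2)
    return np.where(dx > 0, c, 0.0)

def residual(p, xr, yr, obs):
    """p packs [x1, y1, q1, x2, y2, q2, ...] for the known number of releases."""
    model = sum(gaussian_plume(p[3*i + 2], p[3*i], p[3*i + 1], xr, yr)
                for i in range(len(p) // 3))
    return model - obs

# fit = least_squares(residual, p0, args=(xr, yr, obs))  # retrieves locations and rates
```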
North Fork Clear Creek (NFCC) receives acid-mine drainage (AMD) from multiple abandoned mines in the Clear Creek Watershed. Point sources of AMD originate in the Black Hawk/Central City region of the stream. Water chemistry also is influenced by several non-point sources of AMD,...
Acoustic field in unsteady moving media
NASA Technical Reports Server (NTRS)
Bauer, F.; Maestrello, L.; Ting, L.
1995-01-01
In the interaction of an acoustic field with a moving airframe the authors encounter a canonical initial value problem for an acoustic field induced by an unsteady source distribution q(t, x), with q ≡ 0 for t ≤ 0, in a medium moving with a uniform unsteady velocity U(t)i in the coordinate system x fixed on the airframe. Signals issued from a source point S in the domain of dependence D of an observation point P at time t will arrive at point P more than once, corresponding to different retarded times τ in the interval (0, t). The number of arrivals is called the multiplicity of the point S. The multiplicity equals 1 if the velocity U remains subsonic and can be greater when U becomes supersonic. For an unsteady uniform flow U(t)i, rules are formulated for defining the smallest number I of subdomains V_i of D with the union of the V_i equal to D. Each subdomain has multiplicity 1 and a formula for the corresponding retarded time. The number of subdomains V_i with nonempty intersection is the multiplicity m of the intersection. The multiplicity is at most I. Examples demonstrating these rules are presented for media at accelerating and/or decelerating supersonic speed.
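A hedged reconstruction of the arrival condition implied by this abstract, with a_0 denoting an assumed constant sound speed; the notation is ours, not necessarily the paper's.

```latex
% Signal emitted at retarded time \tau from source point S, observed at P at
% time t; in the airframe-fixed frame the medium convects the wavefront:
\[
  \Big|\,\mathbf{x}_P-\mathbf{x}_S-\mathbf{i}\int_{\tau}^{t}U(t')\,dt'\Big| = a_0\,(t-\tau),
  \qquad 0<\tau<t .
\]
% The multiplicity of S is the number of roots \tau of this equation: one root
% while U stays subsonic, possibly several once U becomes supersonic.
```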
A study on the evaporation process with multiple point-sources
NASA Astrophysics Data System (ADS)
Jun, Sunghoon; Kim, Minseok; Kim, Suk Han; Lee, Moon Yong; Lee, Eung Ki
2013-10-01
In Organic Light Emitting Display (OLED) manufacturing processes, there is a need to enlarge the mother glass substrate to raise productivity and enable OLED TV. The larger the glass substrate, the more difficult it is to establish a uniform thickness profile of the organic thin-film layer in the vacuum evaporation process. In this paper, a multiple point-source evaporation process is proposed to deposit the organic layer uniformly. Using this method, a thickness uniformity of 3.75% was achieved along a 1,300 mm length of a Gen. 5.5 glass substrate (1300 × 1500 mm²).
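For intuition, a minimal Python sketch of how thickness uniformity from multiple point sources can be estimated; it assumes an idealized small-source (Knudsen) cosine law, and the source layout and throw distance are invented rather than taken from the paper.

```python
import numpy as np

def thickness(x, sources, h):
    """Relative film thickness along substrate coordinate x (mm) for small
    planar (Knudsen) sources at positions `sources`, a throw distance h below
    the substrate: t ~ h^2 / (h^2 + r^2)^2 per source. Real evaporation plumes
    need measured cos^n exponents."""
    return sum((h**2 / (h**2 + (x - s) ** 2)) ** 2 for s in sources)

x = np.linspace(-650, 650, 1301)                       # 1,300 mm of substrate
t = thickness(x, sources=np.linspace(-600, 600, 7), h=400.0)
print((t.max() - t.min()) / (t.max() + t.min()) * 100)  # non-uniformity, % (cf. 3.75%)
```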
Modal Analysis Using the Singular Value Decomposition and Rational Fraction Polynomials
2017-04-06
The programs are designed for experimental datasets with multiple drive and response points and have proven effective even for systems with numerous closely-spaced modes.
Environmental monitoring of Galway Bay: fusing data from remote and in-situ sources
NASA Astrophysics Data System (ADS)
O'Connor, Edel; Hayes, Jer; Smeaton, Alan F.; O'Connor, Noel E.; Diamond, Dermot
2009-09-01
Changes in sea surface temperature can be used as an indicator of water quality. In-situ sensors are being used for continuous autonomous monitoring. However these sensors have limited spatial resolution as they are in effect single point sensors. Satellite remote sensing can be used to provide better spatial coverage at good temporal scales. However in-situ sensors have a richer temporal scale for a particular point of interest. Work carried out in Galway Bay has combined data from multiple satellite sources and in-situ sensors and investigated the benefits and drawbacks of using multiple sensing modalities for monitoring a marine location.
NASA Astrophysics Data System (ADS)
Tryka, Stanislaw
2007-04-01
A general formula and some special integral formulas were presented for calculating radiative fluxes incident on a circular plane from a planar multiple point source within a coaxial cylindrical enclosure perpendicular to the source. These formulas were obtained for radiation propagating in a homogeneous isotropic medium, assuming that the lateral surface of the enclosure completely absorbs the incident radiation. Exemplary results were computed numerically and illustrated with three-dimensional surface plots. The formulas presented are suitable for determining fluxes of radiation reaching planar circular detectors, collectors or other planar circular elements from systems of laser diodes, light emitting diodes and fiber lamps within cylindrical enclosures, as well as from small biological emitters (bacteria, fungi, yeast, etc.) distributed on the planar bases of open nontransparent cylindrical containers.
NASA Astrophysics Data System (ADS)
Roten, D.; Hogue, S.; Spell, P.; Marland, E.; Marland, G.
2017-12-01
There is an increasing role for high resolution, CO2 emissions inventories across multiple arenas. The breadth of the applicability of high-resolution data is apparent from their use in atmospheric CO2 modeling, their potential for validation of space-based atmospheric CO2 remote-sensing, and the development of climate change policy. This work focuses on increasing our understanding of the uncertainty in these inventories and the implications on their downstream use. The industrial point sources of emissions (power generating stations, cement manufacturing plants, paper mills, etc.) used in the creation of these inventories often have robust emissions characteristics, beyond just their geographic location. Physical parameters of the emission sources such as number of exhaust stacks, stack heights, stack diameters, exhaust temperatures, and exhaust velocities, as well as temporal variability and climatic influences can be important in characterizing emissions. Emissions from large point sources can behave much differently than emissions from areal sources such as automobiles. For many applications geographic location is not an adequate characterization of emissions. This work demonstrates the sensitivities of atmospheric models to the physical parameters of large point sources and provides a methodology for quantifying parameter impacts at multiple locations across the United States. The sensitivities highlight the importance of location and timing and help to highlight potential aspects that can guide efforts to reduce uncertainty in emissions inventories and increase the utility of the models.
Advanced Optimal Extraction for the Spitzer/IRS
NASA Astrophysics Data System (ADS)
Lebouteiller, V.; Bernard-Salas, J.; Sloan, G. C.; Barry, D. J.
2010-02-01
We present new advances in the spectral extraction of pointlike sources adapted to the Infrared Spectrograph (IRS) on board the Spitzer Space Telescope. For the first time, we created a supersampled point-spread function of the low-resolution modules. We describe how to use the point-spread function to perform optimal extraction of a single source and of multiple sources within the slit. We also examine the case of the optimal extraction of one or several sources with a complex background. The new algorithms are gathered in a plug-in called AdOpt which is part of the SMART data analysis software.
AN INTEGRATED FRAMEWORK FOR WATERSHED ASSESSMENT AND MANAGEMENT
Watershed approaches to water quality management have become popular, because they can address multiple point and non-point sources and the influences of land use. Developing technically-sound watershed management strategies can be challenging due to the need to 1) account for mu...
1SXPS: A Deep Swift X-Ray Telescope Point Source Catalog with Light Curves and Spectra
NASA Technical Reports Server (NTRS)
Evans, P. A.; Osborne, J. P.; Beardmore, A. P.; Page, K. L.; Willingale, R.; Mountford, C. J.; Pagani, C.; Burrows, D. N.; Kennea, J. A.; Perri, M.;
2013-01-01
We present the 1SXPS (Swift-XRT point source) catalog of 151,524 X-ray point sources detected by the Swift-XRT in 8 yr of operation. The catalog covers 1905 sq deg distributed approximately uniformly on the sky. We analyze the data in two ways. First we consider all observations individually, for which we have a typical sensitivity of approximately 3 × 10(exp -13) erg cm(exp -2) s(exp -1) (0.3-10 keV). Then we co-add all data covering the same location on the sky: these images have a typical sensitivity of approximately 9 × 10(exp -14) erg cm(exp -2) s(exp -1) (0.3-10 keV). Our sky coverage is nearly 2.5 times that of 3XMM-DR4, although the catalog is a factor of approximately 1.5 less sensitive. The median position error is 5.5 arcsec (90% confidence), including systematics. Our source detection method improves on that used in previous X-ray Telescope (XRT) catalogs, and we report more than 68,000 new X-ray sources. The goals and observing strategy of the Swift satellite allow us to probe source variability on multiple timescales, and we find approximately 30,000 variable objects in our catalog. For every source we give positions, fluxes, time series (in four energy bands and two hardness ratios), estimates of the spectral properties, spectra and spectral fits for the brightest sources, and variability probabilities in multiple energy bands and timescales.
A NEW METHOD FOR FINDING POINT SOURCES IN HIGH-ENERGY NEUTRINO DATA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fang, Ke; Miller, M. Coleman
The IceCube collaboration has reported the first detection of high-energy astrophysical neutrinos, including ∼50 high-energy starting events, but no individual sources have been identified. It is therefore important to develop the most sensitive and efficient possible algorithms to identify the point sources of these neutrinos. The most popular current method works by exploring a dense grid of possible directions to individual sources, and identifying the single direction with the maximum probability of having produced multiple detected neutrinos. This method has numerous strengths, but it is computationally intensive and, because it focuses on the single best location for a point source, additional point sources are not included in the evidence. We propose a new maximum likelihood method that uses the angular separations between all pairs of neutrinos in the data. Unlike existing autocorrelation methods for this type of analysis, which also use angular separations between neutrino pairs, our method incorporates information about the point-spread function and can identify individual point sources. We find that if the angular resolution is a few degrees or better, then this approach reduces both false positive and false negative errors compared to the current method, and is also more computationally efficient up to, potentially, hundreds of thousands of detected neutrinos.
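A minimal Python sketch of the pairwise-separation statistic the proposed likelihood is built on; the clustering comment states a standard property of Gaussian position errors, not the paper's full likelihood.

```python
import numpy as np

def pairwise_separations(ra_deg, dec_deg):
    """Angular separations (radians) between all pairs of detected neutrinos."""
    ra, dec = np.radians(ra_deg), np.radians(dec_deg)
    xyz = np.stack([np.cos(dec) * np.cos(ra), np.cos(dec) * np.sin(ra), np.sin(dec)])
    cosang = np.clip(xyz.T @ xyz, -1.0, 1.0)
    i, j = np.triu_indices(len(ra), k=1)
    return np.arccos(cosang[i, j])

# For a Gaussian point-spread function of width sigma, pairs of events from the
# same source cluster at separations ~sigma*sqrt(2), while background pairs do
# not; a likelihood over all pair separations can exploit this contrast.
```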
Boluda-Ruiz, Rubén; García-Zambrana, Antonio; Castillo-Vázquez, Carmen; Castillo-Vázquez, Beatriz
2014-06-30
In this paper, a novel adaptive cooperative protocol with multiple relays using detect-and-forward (DF) over atmospheric turbulence channels with pointing errors is proposed. The adaptive DF cooperative protocol here analyzed is based on the selection of the optical path, source-destination or different source-relay links, with a greater value of fading gain or irradiance, maintaining a high diversity order. Closed-form asymptotic bit error-rate (BER) expressions are obtained for a cooperative free-space optical (FSO) communication system with Nr relays, when the irradiance of the transmitted optical beam is susceptible to either a wide range of turbulence conditions, following a gamma-gamma distribution of parameters α and β, or pointing errors, following a misalignment fading model where the effect of beam width, detector size and jitter variance is considered. A greater robustness for different link distances and pointing errors is corroborated by the obtained results if compared with similar cooperative schemes or equivalent multiple-input multiple-output (MIMO) systems. Simulation results are further demonstrated to confirm the accuracy and usefulness of the derived results.
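A hedged Python sketch of the path-selection idea in this record: sample gamma-gamma turbulence fading and pick the direct link unless a source-relay link sees a larger irradiance. The alpha/beta values and relay count are invented, and pointing errors (misalignment fading) are omitted.

```python
import numpy as np

rng = np.random.default_rng(1)

def gamma_gamma(alpha, beta, n):
    """Normalized irradiance samples I = X * Y with X ~ Gamma(alpha, 1/alpha)
    and Y ~ Gamma(beta, 1/beta): the gamma-gamma turbulence model."""
    return rng.gamma(alpha, 1.0 / alpha, n) * rng.gamma(beta, 1.0 / beta, n)

# Adaptive DF selection: use the source-destination link unless some
# source-relay link currently has a larger fading gain.
i_sd = gamma_gamma(4.2, 1.4, 100000)
i_sr = np.stack([gamma_gamma(4.2, 1.4, 100000) for _ in range(3)])  # Nr = 3 relays
print((i_sr.max(axis=0) > i_sd).mean())  # fraction of time a relay path is chosen
```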
SIFT optimization and automation for matching images from multiple temporal sources
NASA Astrophysics Data System (ADS)
Castillo-Carrión, Sebastián; Guerrero-Ginel, José-Emilio
2017-05-01
The Scale Invariant Feature Transform (SIFT) was applied to extract tie-points from multiple source images. Although SIFT is reported to perform reliably under widely different radiometric and geometric conditions, using the default input parameters resulted in too few points being found. We found that the best solution was to focus on large features, as these are more robust and not prone to scene changes over time, which constitutes a first approach to the automation of mapping applications such as geometric correction, creation of orthophotos and 3D model generation. The optimization of five key SIFT parameters is proposed as a way of increasing the number of correct matches; the performance of SIFT is explored for different images and parameter values, finding optimized values which are corroborated using different validation imagery. The results show that the optimization model improves the performance of SIFT in correlating multitemporal images captured from different sources.
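For concreteness, a Python/OpenCV sketch of tuning the five SIFT parameters the record refers to; the values below are illustrative settings biased toward large, stable features, not the optima from the study, and the file names are hypothetical.

```python
import cv2

sift = cv2.SIFT_create(nfeatures=0, nOctaveLayers=3,
                       contrastThreshold=0.02, edgeThreshold=20, sigma=2.0)

img1 = cv2.imread("epoch1.tif", cv2.IMREAD_GRAYSCALE)   # hypothetical file names
img2 = cv2.imread("epoch2.tif", cv2.IMREAD_GRAYSCALE)
k1, d1 = sift.detectAndCompute(img1, None)
k2, d2 = sift.detectAndCompute(img2, None)

# Lowe's ratio test keeps only distinctive tie-point candidates.
matches = cv2.BFMatcher().knnMatch(d1, d2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]
```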
Multiwavelength counterparts of the point sources in the Chandra Source Catalog
NASA Astrophysics Data System (ADS)
Reynolds, Michael; Civano, Francesca Maria; Fabbiano, Giuseppina; D'Abrusco, Raffaele
2018-01-01
The most recent release of the Chandra Source Catalog (CSC) version 2.0 comprises more than ~350,000 point sources, down to fluxes of ~10^-16 erg/cm^2/s, covering ~500 deg^2 of the sky, making it one of the best available X-ray catalogs to date. There are many reasons to seek multiwavelength counterparts for these sources; in particular, X-ray information alone is not enough to identify the sources and separate them into galactic and extragalactic origin, so multiwavelength data associated with each X-ray source are crucial for classification and scientific analysis of the sample. To perform this multiwavelength association, we employ the recently released versatile tool NWAY (Salvato et al. 2017), based on a Bayesian algorithm for cross-matching multiple catalogs. NWAY allows the combination of multiple catalogs at the same time, provides a probability for the matches, even in the case of non-detection due to the different depths of the matching catalogs, and can be used with priors on the nature of the sources (e.g. colors, magnitudes, etc.). In this poster, we present a preliminary analysis using the CSC sources above the galactic plane matched to the WISE All-Sky catalog, SDSS, Pan-STARRS and GALEX.
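A toy Python sketch of the kind of per-match probability a Bayesian cross-match produces; this is our illustration only, not NWAY's interface or its full multi-catalog model.

```python
import numpy as np

def match_posterior(sep, sigma, bkg_density, prior=0.5):
    """Posterior that a pair at separation `sep` (arcsec) is a true match,
    combining a Gaussian positional likelihood (error sigma, arcsec) with a
    uniform chance-alignment density (sources per arcsec^2)."""
    l_match = np.exp(-0.5 * (sep / sigma) ** 2) / (2 * np.pi * sigma**2)
    l_chance = bkg_density
    return prior * l_match / (prior * l_match + (1 - prior) * l_chance)

print(match_posterior(sep=1.2, sigma=0.8, bkg_density=1e-3))  # invented values
```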
Processing Uav and LIDAR Point Clouds in Grass GIS
NASA Astrophysics Data System (ADS)
Petras, V.; Petrasova, A.; Jeziorska, J.; Mitasova, H.
2016-06-01
Today's methods of acquiring Earth surface data, namely lidar and unmanned aerial vehicle (UAV) imagery, non-selectively collect or generate large amounts of points. Point clouds from different sources vary in their properties such as number of returns, density, or quality. We present a set of tools with applications for different types of point clouds obtained by a lidar scanner, the structure from motion technique (SfM), and a low-cost 3D scanner. To take advantage of the vertical structure of multiple-return lidar point clouds, we demonstrate tools to process them using 3D raster techniques which allow, for example, the development of custom vegetation classification methods. Dense point clouds obtained from UAV imagery, often containing redundant points, can be decimated using various techniques before further processing. We implemented and compared several decimation techniques in regard to their performance and the final digital surface model (DSM). Finally, we describe the processing of a point cloud from a low-cost 3D scanner, namely the Microsoft Kinect, and its application for interaction with physical models. All the presented tools are open source and integrated in GRASS GIS, a multi-purpose open source GIS with remote sensing capabilities. The tools integrate with other open source projects, specifically the Point Data Abstraction Library (PDAL), the Point Cloud Library (PCL), and the OpenKinect libfreenect2 library, to benefit from the open source point cloud ecosystem. The implementation in GRASS GIS ensures long-term maintenance and reproducibility by the scientific community as well as by the original authors.
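A minimal Python sketch of one simple decimation strategy of the kind the record compares (keep one point per grid cell); GRASS GIS itself provides modules such as v.decimate for this, and the data here are random.

```python
import numpy as np

def grid_decimate(xyz, cell=1.0):
    """Thin a dense point cloud by keeping the first point in each 2-D grid
    cell of side `cell` (map units)."""
    key = np.floor(xyz[:, :2] / cell).astype(np.int64)
    _, keep = np.unique(key, axis=0, return_index=True)
    return xyz[np.sort(keep)]

pts = np.random.default_rng(0).random((100000, 3)) * [100.0, 100.0, 5.0]
print(len(grid_decimate(pts)))   # roughly one point per occupied cell
```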
Steering and positioning targets for HWIL IR testing at cryogenic conditions
NASA Astrophysics Data System (ADS)
Perkes, D. W.; Jensen, G. L.; Higham, D. L.; Lowry, H. S.; Simpson, W. R.
2006-05-01
In order to increase the fidelity of hardware-in-the-loop ground-truth testing, it is desirable to create a dynamic scene of multiple, independently controlled IR point sources. ATK-Mission Research has developed and supplied the steering mirror systems for the 7V and 10V Space Simulation Test Chambers at the Arnold Engineering Development Center (AEDC), Air Force Materiel Command (AFMC). A portion of the 10V system incorporates multiple target sources beam-combined at the focal point of a 20K cryogenic collimator. Each IR source consists of a precision blackbody with cryogenic aperture and filter wheels mounted on a cryogenic two-axis translation stage. This point source target scene is steered by a high-speed steering mirror to produce further complex motion. The scene changes dynamically in order to simulate an actual operational scene as viewed by the System Under Test (SUT) as it executes various dynamic look-direction changes during its flight to a target. Synchronization and real-time hardware-in-the-loop control is accomplished using reflective memory for each subsystem control and feedback loop. This paper focuses on the steering mirror system and the required tradeoffs of optical performance, precision, repeatability and high-speed motion as well as the complications of encoder feedback calibration and operation at 20K.
Geometric Characterization of Multi-Axis Multi-Pinhole SPECT
DiFilippo, Frank P.
2008-01-01
A geometric model and calibration process are developed for SPECT imaging with multiple pinholes and multiple mechanical axes. Unlike the typical situation where pinhole collimators are mounted directly to rotating gamma ray detectors, this geometric model allows for independent rotation of the detectors and pinholes, for the case where the pinhole collimator is physically detached from the detectors. This geometric model is applied to a prototype small animal SPECT device with a total of 22 pinholes and which uses dual clinical SPECT detectors. All free parameters in the model are estimated from a calibration scan of point sources and without the need for a precision point source phantom. For a full calibration of this device, a scan of four point sources with 360° rotation is suitable for estimating all 95 free parameters of the geometric model. After a full calibration, a rapid calibration scan of two point sources with 180° rotation is suitable for estimating the subset of 22 parameters associated with repositioning the collimation device relative to the detectors. The high accuracy of the calibration process is validated experimentally. Residual differences between predicted and measured coordinates are normally distributed with 0.8 mm full width at half maximum and are estimated to contribute 0.12 mm root mean square to the reconstructed spatial resolution. Since this error is small compared to other contributions arising from the pinhole diameter and the detector, the accuracy of the calibration is sufficient for high resolution small animal SPECT imaging. PMID:18293574
Reconstructed Image Spatial Resolution of Multiple Coincidences Compton Imager
NASA Astrophysics Data System (ADS)
Andreyev, Andriy; Sitek, Arkadiusz; Celler, Anna
2010-02-01
We study the multiple coincidences Compton imager (MCCI) which is based on a simultaneous acquisition of several photons emitted in cascade from a single nuclear decay. Theoretically, this technique should provide a major improvement in localization of a single radioactive source as compared to a standard Compton camera. In this work, we investigated the performance and limitations of MCCI using Monte Carlo computer simulations. Spatial resolutions of the reconstructed point source have been studied as a function of the MCCI parameters, including geometrical dimensions and detector characteristics such as materials, energy and spatial resolutions.
NASA Astrophysics Data System (ADS)
Stark, Dominic; Launet, Barthelemy; Schawinski, Kevin; Zhang, Ce; Koss, Michael; Turp, M. Dennis; Sartori, Lia F.; Zhang, Hantian; Chen, Yiru; Weigel, Anna K.
2018-06-01
The study of unobscured active galactic nuclei (AGN) and quasars depends on the reliable decomposition of the light from the AGN point source and the extended host galaxy light. The problem is typically approached using parametric fitting routines using separate models for the host galaxy and the point spread function (PSF). We present a new approach using a Generative Adversarial Network (GAN) trained on galaxy images. We test the method using Sloan Digital Sky Survey r-band images with artificial AGN point sources added that are then removed using the GAN and with parametric methods using GALFIT. When the AGN point source is more than twice as bright as the host galaxy, we find that our method, PSFGAN, can recover point source and host galaxy magnitudes with smaller systematic error and a lower average scatter (49 per cent). PSFGAN is more tolerant to poor knowledge of the PSF than parametric methods. Our tests show that PSFGAN is robust against a broadening in the PSF width of ± 50 per cent if it is trained on multiple PSFs. We demonstrate that while a matched training set does improve performance, we can still subtract point sources using a PSFGAN trained on non-astronomical images. While initial training is computationally expensive, evaluating PSFGAN on data is more than 40 times faster than GALFIT fitting two components. Finally, PSFGAN is more robust and easy to use than parametric methods as it requires no input parameters.
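A short Python sketch of the artificial-AGN test setup described in this record: scale a normalized PSF to a chosen fraction of the host-galaxy flux and add it at the image center. This is our illustration of the injection step only, not PSFGAN itself.

```python
import numpy as np

def add_point_source(galaxy, psf, flux_ratio):
    """Inject an artificial AGN point source at the image center; assumes the
    PSF stamp fits inside the galaxy image."""
    psf = psf / psf.sum()
    out = galaxy.astype(float).copy()
    gy, gx = galaxy.shape[0] // 2, galaxy.shape[1] // 2
    py, px = psf.shape[0] // 2, psf.shape[1] // 2
    out[gy - py:gy - py + psf.shape[0], gx - px:gx - px + psf.shape[1]] += \
        flux_ratio * galaxy.sum() * psf
    return out
```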
An Exact Algebraic Evaluation of Path-Length Difference for Two-Source Interference
ERIC Educational Resources Information Center
Hopper, Seth; Howell, John
2006-01-01
When studying wave interference, one often wants to know the difference in path length for two waves arriving at a common point P but coming from adjacent sources. For example, in many contexts interference maxima occur where this path-length difference is an integer multiple of the wavelength. The standard approximation for the path-length…
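The record is truncated above; for context, a worked form of the exact path-length difference in the standard two-source, parallel-screen geometry (our notation, which may differ from the article's algebra):

```latex
% Sources S_1, S_2 separated by d; observation point P on a screen at distance
% L, lateral offset y. Exact path lengths and their difference:
\[
  r_\pm = \sqrt{L^2 + \left(y \pm \tfrac{d}{2}\right)^2}, \qquad
  \Delta r = r_+ - r_- ,
\]
% interference maxima where the difference is an integer multiple of the
% wavelength:
\[
  \Delta r = m\lambda, \qquad m = 0, 1, 2, \ldots
\]
% and the usual far-field approximation, recovered when L \gg d:
\[
  \Delta r \approx d \sin\theta, \qquad \tan\theta = y/L .
\]
```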
A review of radiative detachment studies in tokamak advanced magnetic divertor configurations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Soukhanovskii, V. A.
2017-04-28
The present vision for a plasma–material interface in the tokamak is an axisymmetric poloidal magnetic X-point divertor. Four tasks are accomplished by the standard poloidal X-point divertor: plasma power exhaust; particle control (D/T and He pumping); reduction of impurity production (source); and impurity screening by the divertor scrape-off layer. A low-temperature, low heat flux divertor operating regime called radiative detachment is viewed as the main option that addresses these tasks for present and future tokamaks. Advanced magnetic divertor configurations have the capability to modify divertor parallel and cross-field transport, radiative and dissipative losses, and detachment front stability. Advanced magnetic divertor configurations are divided into four categories based on their salient qualitative features: (1) multiple standard X-point divertors; (2) divertors with higher order nulls; (3) divertors with multiple X-points; and (4) long poloidal leg divertors (also with multiple X-points). This paper reviews experiments and modeling in the area of radiative detachment in advanced magnetic divertor configurations.
The Chandra Source Catalog : Automated Source Correlation
NASA Astrophysics Data System (ADS)
Hain, Roger; Evans, I. N.; Evans, J. D.; Glotfelty, K. J.; Anderson, C. S.; Bonaventura, N. R.; Chen, J. C.; Davis, J. E.; Doe, S. M.; Fabbiano, G.; Galle, E.; Gibbs, D. G.; Grier, J. D.; Hall, D. M.; Harbo, P. N.; He, X.; Houck, J. C.; Karovska, M.; Lauer, J.; McCollough, M. L.; McDowell, J. C.; Miller, J. B.; Mitschang, A. W.; Morgan, D. L.; Nichols, J. S.; Nowak, M. A.; Plummer, D. A.; Primini, F. A.; Refsdal, B. L.; Rots, A. H.; Siemiginowska, A. L.; Sundheim, B. A.; Tibbetts, M. S.; Van Stone, D. W.; Winkelman, S. L.; Zografou, P.
2009-01-01
Chandra Source Catalog (CSC) master source pipeline processing seeks to automatically detect sources and compute their properties. Since Chandra is a pointed mission and not a sky survey, different sky regions are observed a different number of times, at varying orientations, resolutions, and other heterogeneous conditions. While this provides an opportunity to collect data from a potentially large number of observing passes, it also creates challenges in determining the best way to combine different detection results for the most accurate characterization of the detected sources. The CSC master source pipeline correlates data from multiple observations by updating existing cataloged source information with new data from the same sky region as they become available. This process sometimes leads to relatively straightforward conclusions, such as when single sources from two observations are similar in size and position. Other observation results require more logic to combine, such as one observation finding a single, large source and another identifying multiple, smaller sources at the same position. We present examples of different overlapping source detections processed in the current version of the CSC master source pipeline. We explain how they are resolved into entries in the master source database, and examine the challenges of computing source properties for the same source detected multiple times. Future enhancements are also discussed. This work is supported by NASA contract NAS8-03060 (CXC).
Neutron crosstalk between liquid scintillators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Verbeke, J. M.; Prasad, M. K.; Snyderman, N. J.
2015-05-01
We propose a method to quantify the fractions of neutrons scattering between liquid scintillators. Using a spontaneous fission source, this method can be utilized to quickly characterize an array of liquid scintillators in terms of crosstalk. The point model theory due to Feynman is corrected to account for these multiple scatterings. Using spectral information measured by the liquid scintillators, fractions of multiple scattering can be estimated, and the mass reconstruction of fissile materials under investigation can be improved. Monte Carlo simulations of mono-energetic neutron sources were performed to estimate neutron crosstalk. A californium source in an array of liquid scintillators was modeled to illustrate the improvement of the mass reconstruction.
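For orientation, a minimal Python sketch of the Feynman point-model statistic that inter-detector scattering biases, motivating the correction above; the gate counts are invented.

```python
import numpy as np

def feynman_y(gate_counts):
    """Feynman excess variance-to-mean ratio Y = var/mean - 1 for neutron
    counts collected in equal time gates."""
    c = np.asarray(gate_counts, float)
    return c.var(ddof=1) / c.mean() - 1.0

print(feynman_y([3, 5, 2, 7, 4, 6, 3, 5]))   # Y > 0 indicates correlated counts
```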
Evaluation of multiple emission point facilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miltenberger, R.P.; Hull, A.P.; Strachan, S.
In 1970, the New York State Department of Environmental Conservation (NYSDEC) assumed responsibility for the environmental aspect of the state's regulatory program for by-product, source, and special nuclear material. The major objective of this study was to provide consultation to NYSDEC and the US NRC to assist NYSDEC in determining if broad-based licensed facilities with multiple emission points were in compliance with NYCRR Part 380. Under this contract, BNL would evaluate a multiple emission point facility, identified by NYSDEC, as a case study. The review would be a nonbinding evaluation of the facility to determine likely dispersion characteristics, compliance with specified release limits, and implementation of the ALARA philosophy regarding effluent release practices. From the data collected, guidance as to areas of future investigation and the impact of new federal regulations were to be developed. Reported here is the case study for the University of Rochester, Strong Memorial Medical Center and Riverside Campus.
Code of Federal Regulations, 2012 CFR
2012-07-01
... § 437.42(d). (e) Combined waste receipts from subparts B and C of this part: Limitations for BOD5, O&G... CENTRALIZED WASTE TREATMENT POINT SOURCE CATEGORY Multiple Wastestreams § 437.43 Effluent limitations... combines treated or untreated wastes from subparts A, B, or C of this part may be subject to Multiple...
Greenwood, Daniel; Davids, Keith; Renshaw, Ian
2014-01-01
Coordination of dynamic interceptive movements is predicated on cyclical relations between an individual's actions and information sources from the performance environment. To identify dynamic informational constraints, which are interwoven with individual and task constraints, coaches' experiential knowledge provides a complementary source to support empirical understanding of performance in sport. In this study, 15 expert coaches from 3 sports (track and field, gymnastics and cricket) participated in a semi-structured interview process to identify potential informational constraints which they perceived to regulate action during run-up performance. Expert coaches' experiential knowledge revealed multiple information sources which may constrain performance adaptations in such locomotor pointing tasks. In addition to the locomotor pointing target, coaches' knowledge highlighted two other key informational constraints: vertical reference points located near the locomotor pointing target and a check mark located prior to the locomotor pointing target. This study highlights opportunities for broadening the understanding of perception and action coupling processes, and the identified information sources warrant further empirical investigation as potential constraints on athletic performance. Integration of experiential knowledge of expert coaches with theoretically driven empirical knowledge represents a promising avenue to drive future applied science research and pedagogical practice.
Illusion induced overlapped optics.
Zang, XiaoFei; Shi, Cheng; Li, Zhou; Chen, Lin; Cai, Bin; Zhu, YiMing; Zhu, HaiBin
2014-01-13
The traditional transformation-based cloak seems able only to hide objects, by bending the incident electromagnetic waves around the hidden region. In this paper, we prove that invisibility cloaks can be applied to realize overlapped optics. No matter how many in-phase point sources are located in the hidden region, all of them can overlap each other (this can be considered an illusion effect), leading to a perfect optical interference effect. In addition, a singular-parameter-independent cloak is also designed to obtain quasi-overlapped optics. Even more strikingly, if N identical separated in-phase point sources are covered with the illusion media, the total power outside the transformation region is N²I₀ (not NI₀), where I₀ is the power of a single point source and N is the number of point sources, which seems to violate the law of conservation of energy. A theoretical model based on the interference effect is proposed to interpret the total power of these two kinds of overlapped optics effects. Our investigation may have wide applications in high-power coherent laser beams, multiple laser diodes, and so on.
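A one-line interference argument for the N² scaling claimed above, sketched in our notation:

```latex
% N identical in-phase point sources overlapped onto one virtual location add
% coherently. With single-source field E_0 and power I_0 \propto |E_0|^2,
\[
  I_{\mathrm{tot}} \;\propto\; \Big|\sum_{k=1}^{N} E_0\Big|^2 \;=\; N^2\,|E_0|^2
  \quad\Longrightarrow\quad I_{\mathrm{tot}} = N^2 I_0 ,
\]
% rather than the incoherent sum N I_0; the paper's interference model
% accounts for the apparent conflict with energy conservation.
```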
An integral equation formulation for the diffraction from convex plates and polyhedra.
Asheim, Andreas; Svensson, U Peter
2013-06-01
A formulation of the problem of scattering from obstacles with edges is presented. The formulation is based on decomposing the field into geometrical acoustics, first-order, and multiple-order edge diffraction components. An existing secondary-source model for edge diffraction from finite edges is extended to handle multiple diffraction of all orders. It is shown that the multiple-order diffraction component can be found via the solution to an integral equation formulated on pairs of edge points. This gives what can be called an edge source signal. In a subsequent step, this edge source signal is propagated to yield a multiple-order diffracted field, taking all diffraction orders into account. Numerical experiments demonstrate accurate response for frequencies down to 0 for thin plates and a cube. No problems with irregular frequencies, as happen with the Kirchhoff-Helmholtz integral equation, are observed for this formulation. For the axisymmetric scattering from a circular disc, a highly effective symmetric formulation results, and results agree with reference solutions across the entire frequency range.
Fast underdetermined BSS architecture design methodology for real time applications.
Mopuri, Suresh; Reddy, P Sreenivasa; Acharyya, Amit; Naik, Ganesh R
2015-01-01
In this paper, we propose a high-speed architecture design methodology for the Under-determined Blind Source Separation (UBSS) algorithm using our recently proposed high-speed Discrete Hilbert Transform (DHT), targeting real-time applications. In the UBSS algorithm, unlike typical BSS, the number of sensors is less than the number of sources, which is of more interest in real-time applications. The DHT architecture has been implemented based on a sub-matrix multiplication method to compute an M-point DHT, which uses the N-point architecture recursively, where M is an integer multiple of N. The DHT architecture and the state-of-the-art architecture are coded in VHDL for a 16-bit word length, and ASIC implementation is carried out using UMC 90-nm technology at VDD = 1 V and a 1 MHz clock frequency. The implementation and experimental comparison results show that the proposed DHT design is two times faster than the state-of-the-art architecture.
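As a floating-point reference for what such hardware computes, a short Python check of the discrete Hilbert transform via the analytic signal; this is not the recursive sub-matrix architecture described in the record.

```python
import numpy as np
from scipy.signal import hilbert

n = 256
x = np.cos(2 * np.pi * 5 * np.arange(n) / n)
xh = np.imag(hilbert(x))                       # DHT of x; ideally sin(2*pi*5*t)
print(np.max(np.abs(xh - np.sin(2 * np.pi * 5 * np.arange(n) / n))))
```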
Nearby Dwarf Stars: Duplicity, Binarity, and Masses
NASA Astrophysics Data System (ADS)
Mason, Brian D.; Hartkopf, William I.; Raghavan, Deepak
2008-02-01
Double stars have proven to be both a blessing and a curse for astronomers since their discovery over two centuries ago. They remain the only reliable source of masses, the most fundamental parameter defining stars. On the other hand, their sobriquet ``vermin of the sky'' is well-earned, due to the complications they present to both observers and theoreticians. These range from non-linear proper motions to stray light in detectors, to confusion in pointing of instruments due to non-symmetric point spread functions, to angular momentum conservation in multiple stars which results in binaries closer than allowed by evolution of two single stars. This proposal is an effort to address both their positive and negative aspects, through speckle interferometric observations, targeting ~1200 systems where useful information can be obtained with only a single additional observation. The proposed work will refine current statistics regarding duplicity (chance alignments of nearby point sources) and binarity (actual physical relationships), and improve the precisions and accuracies of stellar masses. Several targets support Raghavan's Ph.D. thesis, which is a comprehensive survey aimed at determining the multiplicity fraction among solar-type stars.
Nearby Dwarf Stars: Duplicity, Binarity, and Masses
NASA Astrophysics Data System (ADS)
Mason, Brian D.; Hartkopf, William I.; Raghavan, Deepak
2007-08-01
Double stars have proven to be both a blessing and a curse for astronomers since their discovery over two centuries ago. They remain the only reliable source of masses, the most fundamental parameter defining stars. On the other hand, their sobriquet ``vermin of the sky'' is well-earned, due to the complications they present to both observers and theoreticians. These range from non-linear proper motions to stray light in detectors, to confusion in pointing of instruments due to non-symmetric point spread functions, to angular momentum conservation in multiple stars which results in binaries closer than allowed by evolution of two single stars. This proposal is an effort to address both their positive and negative aspects, through speckle interferometric observations, targeting ~1200 systems where useful information can be obtained with only a single additional observation. The proposed work will refine current statistics regarding duplicity (chance alignments of nearby point sources) and binarity (actual physical relationships), and improve the precisions and accuracies of stellar masses. Several targets support Raghavan's Ph.D. thesis, which is a comprehensive survey aimed at determining the multiplicity fraction among solar-type stars.
Evaluation of selective vs. point-source perforating for hydraulic fracturing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Underwood, P.J.; Kerley, L.
1996-12-31
This paper is a case history comparing and evaluating the effects of fracturing the Reef Ridge Diatomite formation in the Midway-Sunset Field, Kern County, California, using "select-fire" and "point-source" perforating completions. A description of the reservoir, production history, and fracturing techniques used leading up to this study is presented. Fracturing treatment analysis and production history matching were used to evaluate the reservoir and fracturing parameters for both completion types. The work showed that single fractures were created with the point-source (PS) completions, and multiple fractures resulted from many of the select-fire (SF) completions. A good correlation was developed between productivity and the product of formation permeability, net fracture height, bottomhole pressure, and propped fracture length. Results supported the continued development of 10 wells using the PS concept with a more efficient treatment design, resulting in substantial cost savings.
Photogrammetric Method and Software for Stream Planform Identification
NASA Astrophysics Data System (ADS)
Stonedahl, S. H.; Stonedahl, F.; Lohberg, M. M.; Lusk, K.; Miller, D.
2013-12-01
Accurately characterizing the planform of a stream is important for many purposes, including recording measurement and sampling locations, monitoring change due to erosion or volumetric discharge, and spatial modeling of stream processes. While expensive surveying equipment or high resolution aerial photography can be used to obtain planform data, our research focused on developing a close-range photogrammetric method (and accompanying free/open-source software) to serve as a cost-effective alternative. This method involves securing and floating a wooden square frame on the stream surface at several locations, taking photographs from numerous angles at each location, and then post-processing and merging data from these photos using the corners of the square for reference points, unit scale, and perspective correction. For our test field site we chose a ~35m reach along Black Hawk Creek in Sunderbruch Park (Davenport, IA), a small, slow-moving stream with overhanging trees. To quantify error we measured 88 distances between 30 marked control points along the reach. We calculated error by comparing these 'ground truth' distances to the corresponding distances extracted from our photogrammetric method. We placed the square at three locations along our reach and photographed it from multiple angles. The square corners, visible control points, and visible stream outline were hand-marked in these photos using the GIMP (open-source image editor). We wrote an open-source GUI in Java (hosted on GitHub), which allows the user to load marked-up photos, designate square corners and label control points. The GUI also extracts the marked pixel coordinates from the images. We also wrote several scripts (currently in MATLAB) that correct the pixel coordinates for radial distortion using Brown's lens distortion model, correct for perspective by forcing the four square corner pixels to form a parallelogram in 3-space, and rotate the points in order to correctly orient all photos of the same square location. Planform data from multiple photos (and multiple square locations) are combined using weighting functions that mitigate the error stemming from the markup-process, imperfect camera calibration, etc. We have used our (beta) software to mark and process over 100 photos, yielding an average error of only 1.5% relative to our 88 measured lengths. Next we plan to translate the MATLAB scripts into Python and release their source code, at which point only free software, consumer-grade digital cameras, and inexpensive building materials will be needed for others to replicate this method at new field sites. Three sample photographs of the square with the created planform and control points
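A minimal sketch of the perspective-correction idea, for readers who want to experiment: the four marked corners of the floating square define a planar homography from image pixels to metric coordinates, estimated here with a direct linear transform in numpy. This is not the authors' MATLAB/Java code; the corner ordering, the square side length, and the sample coordinates are assumptions for the example.

import numpy as np

def homography_from_square(corners_px, side_m):
    # Estimate the 3x3 homography mapping image pixels to metric
    # coordinates, given the four marked square corners (ordered
    # TL, TR, BR, BL) and the known side length in metres.
    dst = np.array([[0, 0], [side_m, 0], [side_m, side_m], [0, side_m]], float)
    A = []
    for (x, y), (X, Y) in zip(corners_px, dst):
        A.append([-x, -y, -1, 0, 0, 0, X * x, X * y, X])
        A.append([0, 0, 0, -x, -y, -1, Y * x, Y * y, Y])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    return Vt[-1].reshape(3, 3)

def to_metric(H, pts_px):
    # Apply the homography to marked pixel coordinates (N x 2 array).
    pts = np.hstack([pts_px, np.ones((len(pts_px), 1))])
    m = pts @ H.T
    return m[:, :2] / m[:, 2:3]

# Example with assumed pixel coordinates and a 1.0 m square.
corners = np.array([[412, 103], [981, 121], [1003, 690], [395, 668]], float)
H = homography_from_square(corners, side_m=1.0)
outline_m = to_metric(H, np.array([[500, 300], [750, 420]], float))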
Multi-point laser ignition device
McIntyre, Dustin L.; Woodruff, Steven D.
2017-01-17
A multi-point laser device comprising a plurality of optical pumping sources. Each optical pumping source is configured to create pumping excitation energy along a corresponding optical path directed through a high-reflectivity mirror and into substantially different locations within the laser media thereby producing atomic optical emissions at substantially different locations within the laser media and directed along a corresponding optical path of the optical pumping source. An output coupler and one or more output lenses are configured to produce a plurality of lasing events at substantially different times, locations or a combination thereof from the multiple atomic optical emissions produced at substantially different locations within the laser media. The laser media is a single continuous media, preferably grown on a single substrate.
Parameter estimation for slit-type scanning sensors
NASA Technical Reports Server (NTRS)
Fowler, J. W.; Rolfe, E. G.
1981-01-01
The Infrared Astronomical Satellite, scheduled for launch into a 900 km near-polar orbit in August 1982, will perform an infrared point source survey by scanning the sky with slit-type sensors. The description of position information is shown to require the use of a non-Gaussian random variable. Methods are described for deciding whether separate detections stem from a single common source, and a formalism is developed for the scan-to-scan problems of identifying multiple sightings of inertially fixed point sources and combining their individual measurements into a refined estimate. Several cases are given where the general theory yields results quite different from the corresponding Gaussian applications, showing that argument by Gaussian analogy would lead to error.
Procedure for Separating Noise Sources in Measurements of Turbofan Engine Core Noise
NASA Technical Reports Server (NTRS)
Miles, Jeffrey Hilton
2006-01-01
The study of core noise from turbofan engines has become more important as noise from other sources like the fan and jet have been reduced. A multiple microphone and acoustic source modeling method to separate correlated and uncorrelated sources has been developed. The auto and cross spectrum in the frequency range below 1000 Hz is fitted with a noise propagation model based on a source couplet consisting of a single incoherent source with a single coherent source or a source triplet consisting of a single incoherent source with two coherent point sources. Examples are presented using data from a Pratt & Whitney PW4098 turbofan engine. The method works well.
Symmetrical group theory for mathematical complexity reduction of digital holograms
NASA Astrophysics Data System (ADS)
Perez-Ramirez, A.; Guerrero-Juk, J.; Sanchez-Lara, R.; Perez-Ramirez, M.; Rodriguez-Blanco, M. A.; May-Alarcon, M.
2017-10-01
This work presents the use of mathematical group theory, through an algorithm, to reduce the multiplicative computational complexity of creating digital holograms. An object is considered as a set of point sources, using mathematical symmetry properties of both the kernel of the Fresnel integral and the image, where the image is modeled using group theory. The algorithm has multiplicative complexity equal to zero and additive complexity (k − 1) × N for the case of sparse matrices and binary images, where k is the number of nonzero pixels and N is the total number of points in the image.
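A minimal sketch of the additive-only idea: because the Fresnel kernel is shift-invariant, it can be computed once on a doubled grid and then accumulated, by index shifts and additions alone, for each of the k nonzero pixels of a binary image, matching the (k − 1) × N additive-complexity regime. The parameters and the real-valued cosine kernel below are illustrative assumptions, not the paper's exact formulation.

import numpy as np

N = 256                                   # hologram is N x N samples
wavelength, z, pitch = 633e-9, 0.2, 8e-6  # illustrative parameters

# Precompute one Fresnel zone kernel on a grid twice the hologram size,
# so any shifted copy of it covers the full hologram; afterwards no
# multiplications are needed per point source, only index shifts and adds.
ax = np.arange(-N, N) * pitch
X, Y = np.meshgrid(ax, ax)
kernel = np.cos(np.pi * (X**2 + Y**2) / (wavelength * z))

img = np.zeros((N, N))
img[64, 64] = img[190, 40] = img[128, 200] = 1   # binary image, k = 3

holo = np.zeros((N, N))
for r, c in zip(*np.nonzero(img)):        # k nonzero pixels -> k shifted adds
    holo += kernel[N - r:2 * N - r, N - c:2 * N - c]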
NASA Astrophysics Data System (ADS)
Ma, W.; Jafarpour, B.
2017-12-01
We develop a new pilot points method for conditioning discrete multiple-point statistical (MPS) facies simulation on dynamic flow data. While conditioning MPS simulation on static hard data is straightforward, calibration against nonlinear flow data is nontrivial. The proposed method generates conditional models from a conceptual model of geologic connectivity, known as a training image (TI), by strategically placing and estimating pilot points. To place the pilot points, a score map is generated based on three sources of information: (i) the uncertainty in the facies distribution, (ii) the model response sensitivity information, and (iii) the observed flow data. Once the pilot points are placed, the facies values at these points are inferred from production data and are used, along with available hard data at well locations, to simulate a new set of conditional facies realizations. While facies estimation at the pilot points can be performed using different inversion algorithms, in this study the ensemble smoother (ES) and its multiple data assimilation variant (ES-MDA) are adopted to update permeability maps from production data, which are then used to statistically infer facies types at the pilot point locations. The developed method combines the information in the flow data and the TI by using the former to infer facies values at select locations away from the wells and the latter to ensure consistent facies structure and connectivity away from measurement locations. Several numerical experiments are used to evaluate the performance of the developed method and to discuss its important properties.
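A minimal sketch of the pilot-point placement step, assuming the three information sources are available as 2-D maps on the simulation grid; the weighting and normalization choices here are illustrative, not the authors' exact scoring rule.

import numpy as np

def place_pilot_points(facies_prob, sensitivity, data_mismatch, k=20,
                       weights=(1.0, 1.0, 1.0)):
    # Rank grid cells by a weighted sum of (i) facies uncertainty,
    # (ii) model-response sensitivity and (iii) local data mismatch,
    # then return the (row, col) indices of the k highest-scoring cells.
    def norm(m):
        m = m - m.min()
        return m / m.max() if m.max() > 0 else m
    # Uncertainty is largest where the facies probability is near 0.5.
    uncertainty = 1.0 - np.abs(facies_prob - 0.5) * 2.0
    score = (weights[0] * norm(uncertainty)
             + weights[1] * norm(sensitivity)
             + weights[2] * norm(data_mismatch))
    flat = np.argsort(score.ravel())[::-1][:k]
    return np.column_stack(np.unravel_index(flat, score.shape))

# Example on random stand-in maps.
rng = np.random.default_rng(0)
pts = place_pilot_points(rng.random((50, 50)), rng.random((50, 50)),
                         rng.random((50, 50)), k=10)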
40 CFR 437.41 - Special definitions.
Code of Federal Regulations, 2013 CFR
2013-07-01
... procedures it has adopted will ensure its treatment systems are well-operated and maintained. ... STANDARDS (CONTINUED) THE CENTRALIZED WASTE TREATMENT POINT SOURCE CATEGORY Multiple Wastestreams § 437.41 Special definitions. (a) Initial Certification Statement for this subpart means a written submission to...
40 CFR 437.41 - Special definitions.
Code of Federal Regulations, 2012 CFR
2012-07-01
... procedures it has adopted will ensure its treatment systems are well-operated and maintained. ... STANDARDS (CONTINUED) THE CENTRALIZED WASTE TREATMENT POINT SOURCE CATEGORY Multiple Wastestreams § 437.41 Special definitions. (a) Initial Certification Statement for this subpart means a written submission to...
40 CFR 437.41 - Special definitions.
Code of Federal Regulations, 2014 CFR
2014-07-01
... procedures it has adopted will ensure its treatment systems are well-operated and maintained. ... STANDARDS (CONTINUED) THE CENTRALIZED WASTE TREATMENT POINT SOURCE CATEGORY Multiple Wastestreams § 437.41 Special definitions. (a) Initial Certification Statement for this subpart means a written submission to...
A Survey of Insider Attack Detection Research
2008-08-25
modeling of statistical features, such as the frequency of events, the duration of events, the co-occurrence of multiple events combined through...forms of attack that have been reported. For example: • Unauthorized extraction, duplication, or exfiltration...network level. Schultz pointed out that no one approach will work; solutions need to be based on multiple sensors to be able to find any combination
NASA Astrophysics Data System (ADS)
Tam, Kai-Chung; Lau, Siu-Kit; Tang, Shiu-Keung
2016-07-01
A microphone array signal processing method for locating a stationary point source over a locally reactive ground and for estimating the ground impedance is examined in detail in the present study. A non-linear least squares approach using the Levenberg-Marquardt method is proposed to overcome the problem of unknown ground impedance. The multiple signal classification method (MUSIC) is used to give an initial estimate of the source location, while the technique of forward-backward spatial smoothing is adopted as a pre-processor for the source localization to minimize the effects of source coherence. The accuracy and robustness of the proposed signal processing method are examined. Results show that source localization in the horizontal direction by MUSIC is satisfactory. However, source coherence drastically reduces the accuracy of source-height estimation. Further application of the Levenberg-Marquardt method, with the results from MUSIC as initial inputs, significantly improves the accuracy of source-height estimation. The proposed method provides effective and robust estimation of the ground surface impedance.
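For illustration, a simplified far-field MUSIC sketch with forward-backward averaging for a uniform line array; the paper's near-field, ground-impedance formulation is more involved, so the array geometry and steering model here are assumptions.

import numpy as np

def fb_covariance(X):
    # Forward-backward averaged covariance of array snapshots X (M x T),
    # which helps decorrelate coherent arrivals such as a ground reflection.
    M = X.shape[0]
    R = X @ X.conj().T / X.shape[1]
    J = np.fliplr(np.eye(M))
    return 0.5 * (R + J @ R.conj() @ J)

def music_spectrum(X, n_src, d, wavelength, angles_deg):
    # MUSIC pseudospectrum for an M-element uniform line array with
    # spacing d (m); angles are scanned in degrees from broadside.
    M = X.shape[0]
    _, V = np.linalg.eigh(fb_covariance(X))
    En = V[:, :M - n_src]                 # noise subspace (small eigenvalues)
    k = 2 * np.pi / wavelength
    m = np.arange(M)[:, None]
    A = np.exp(1j * k * d * m * np.sin(np.deg2rad(angles_deg)))  # steering
    num = np.sum(np.abs(A) ** 2, axis=0)
    den = np.sum(np.abs(En.conj().T @ A) ** 2, axis=0)
    return num / den

# Usage: spec = music_spectrum(X, 1, d=0.05, wavelength=0.34,
#                              angles_deg=np.linspace(-90, 90, 361))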
Social inequalities in health information seeking among young adults in Montreal.
Gagné, Thierry; Ghenadenik, Adrian E; Abel, Thomas; Frohlich, Katherine L
2018-06-01
Over their lifecourse, young adults develop different skills and preferences in relation to the information sources they seek when they have questions about health. Health information seeking behaviour (HISB) includes multiple, unequally accessed sources; yet most studies have focused on single sources and have not examined HISB's association with social inequalities. This study explores 'multiple-source' profiles and their association with socioeconomic characteristics. We analyzed cross-sectional data from the Interdisciplinary Study of Inequalities in Smoking involving 2093 young adults recruited in Montreal, Canada, in 2011-2012. We used latent class analysis to create profiles based on responses to questions regarding whether participants sought health professionals, family, friends or the Internet when having questions about health. Using multinomial logistic regression, we examined the associations between profiles and economic, social and cultural capital indicators: financial difficulties and transportation means, friend satisfaction and network size, and individual, mother's, and father's education. Five profiles were found: 'all sources' (42%), 'health professional centred' (29%), 'family only' (14%), 'Internet centred' (14%) and 'no sources' (2%). Participants with a larger social network and higher friend satisfaction were more likely to be in the 'all sources' group. Participants who experienced financial difficulties and completed college/university were less likely to be in the 'family only' group; those whose mother had completed college/university were more likely to be in this group. Our findings point to the importance of considering multiple sources when studying HISB, especially when the capacity to seek multiple sources is unequally distributed. Scholars should acknowledge HISB's implications for health inequalities.
Hemispherical breathing mode speaker using a dielectric elastomer actuator.
Hosoya, Naoki; Baba, Shun; Maeda, Shingo
2015-10-01
Although indoor acoustic characteristics should ideally be assessed by measuring the reverberation time using a point sound source, a regular polyhedron loudspeaker, which has multiple loudspeakers on a chassis, is typically used. However, such a configuration is not a point sound source if the size of the loudspeaker is large relative to the target sound field. This study investigates a small lightweight loudspeaker using a dielectric elastomer actuator vibrating in the breathing mode (the pulsating mode such as the expansion and contraction of a balloon). Acoustic testing with regard to repeatability, sound pressure, vibration mode profiles, and acoustic radiation patterns indicate that dielectric elastomer loudspeakers may be feasible.
40 CFR 437.41 - Special definitions.
Code of Federal Regulations, 2010 CFR
2010-07-01
... are well-operated and maintained; and (5) Explain why the procedures it has adopted will ensure its... STANDARDS THE CENTRALIZED WASTE TREATMENT POINT SOURCE CATEGORY Multiple Wastestreams § 437.41 Special definitions. (a) Initial Certification Statement for this subpart means a written submission to the...
40 CFR 437.41 - Special definitions.
Code of Federal Regulations, 2011 CFR
2011-07-01
... are well-operated and maintained; and (5) Explain why the procedures it has adopted will ensure its... STANDARDS THE CENTRALIZED WASTE TREATMENT POINT SOURCE CATEGORY Multiple Wastestreams § 437.41 Special definitions. (a) Initial Certification Statement for this subpart means a written submission to the...
NASA Technical Reports Server (NTRS)
Seo, Byoung-Joon; Nissly, Carl; Troy, Mitchell; Angeli, George
2010-01-01
The Normalized Point Source Sensitivity (PSSN) has previously been defined and analyzed as an On-Axis seeing-limited telescope performance metric. In this paper, we expand the scope of the PSSN definition to include Off-Axis field of view (FoV) points and apply this generalized metric for performance evaluation of the Thirty Meter Telescope (TMT). We first propose various possible choices for the PSSN definition and select one as our baseline. We show that our baseline metric has useful properties including the multiplicative feature even when considering Off-Axis FoV points, which has proven to be useful for optimizing the telescope error budget. Various TMT optical errors are considered for the performance evaluation including segment alignment and phasing, segment surface figures, temperature, and gravity, whose On-Axis PSSN values have previously been published by our group.
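A toy numerical reading of the metric, under the commonly published definition of PSSN as the ratio of the integrals of the squared PSF with and without the error source; the unit-flux normalization below is an assumption made for this sketch.

import numpy as np

def pssn(psf_with_error, psf_reference):
    # Normalized Point Source Sensitivity: ratio of the integrals of the
    # squared PSF with and without the error source, each PSF first
    # normalized to unit total flux (a simplified sketch of the metric).
    p1 = psf_with_error / psf_with_error.sum()
    p0 = psf_reference / psf_reference.sum()
    return np.sum(p1 ** 2) / np.sum(p0 ** 2)

# The multiplicative property discussed in the paper means the PSSN of a
# combined error budget is approximately the product of individual PSSNs.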
Bueno, I; Williams-Nguyen, J; Hwang, H; Sargeant, J M; Nault, A J; Singer, R S
2018-02-01
Point sources such as wastewater treatment plants and agricultural facilities may play a role in the dissemination of antibiotic-resistant bacteria (ARB) and antibiotic resistance genes (ARG). To analyse the evidence for increases in ARB in the natural environment associated with these point sources of ARB and ARG, we conducted a systematic review. We evaluated 5,247 records retrieved through database searches, including studies that ascertained both ARG and ARB outcomes. All studies were screened for relevance to the review question and for the adequacy of their methodology to address it. A risk-of-bias assessment was conducted on the final pool of studies included in the review. This article summarizes the evidence only for those studies with ARB outcomes (n = 47). Thirty-five studies were at high (n = 11) or unclear (n = 24) risk of bias in the estimation of source effects due to lack of information and/or failure to control for confounders. Statistical analysis was used in ten studies, of which one assessed the effect of multiple sources using modelling approaches; none reported effect measures. Most studies reported higher ARB prevalence or concentration downstream of or near the source. However, this evidence was primarily descriptive, and it could not be concluded that there is a clear impact of point sources on increases in ARB in the environment. To quantify increases in ARB in the environment due to specific point sources, there is a need for studies that stress study design, control of biases, and analytical tools that provide effect measure estimates. © 2017 Blackwell Verlag GmbH.
Method for measuring multiple scattering corrections between liquid scintillators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Verbeke, J. M.; Glenn, A. M.; Keefer, G. J.
2016-04-11
In this study, a time-of-flight method is proposed to experimentally quantify the fractions of neutrons scattering between scintillators. An array of scintillators is characterized in terms of crosstalk with this method by measuring a californium source at different neutron energy thresholds. The spectral information recorded by the scintillators can be used to estimate the fractions of neutrons that scatter multiple times. With the help of a correction to Feynman's point model theory to account for multiple scattering, these fractions can in turn improve the mass reconstruction of fissile materials under investigation.
NASA Astrophysics Data System (ADS)
Hansen, Scott K.; Vesselinov, Velimir V.
2016-10-01
We develop empirically-grounded error envelopes for localization of a point contamination release event in the saturated zone of a previously uncharacterized heterogeneous aquifer into which a number of plume-intercepting wells have been drilled. We assume that flow direction in the aquifer is known exactly and velocity is known to within a factor of two of our best guess from well observations prior to source identification. Other aquifer and source parameters must be estimated by interpretation of well breakthrough data via the advection-dispersion equation. We employ high performance computing to generate numerous random realizations of aquifer parameters and well locations, simulate well breakthrough data, and then employ unsupervised machine optimization techniques to estimate the most likely spatial (or space-time) location of the source. Tabulating the accuracy of these estimates from the multiple realizations, we relate the size of 90% and 95% confidence envelopes to the data quantity (number of wells) and model quality (fidelity of ADE interpretation model to actual concentrations in a heterogeneous aquifer with channelized flow). We find that for purely spatial localization of the contaminant source, increased data quantities can make up for reduced model quality. For space-time localization, we find similar qualitative behavior, but significantly degraded spatial localization reliability and less improvement from extra data collection. Since the space-time source localization problem is much more challenging, we also tried a multiple-initial-guess optimization strategy. This greatly enhanced performance, but gains from additional data collection remained limited.
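A minimal sketch of the multiple-initial-guess strategy, with a deliberately crude one-dimensional advective arrival-time model standing in for the full ADE interpretation; the well coordinates, parameter ranges, and choice of optimizer are assumptions.

import numpy as np
from scipy.optimize import minimize

def misfit(theta, well_x, t_obs, v):
    # Sum-of-squares misfit between observed breakthrough times and a
    # purely advective arrival-time model from a hypothetical source
    # (x0, t0); an illustrative stand-in for full ADE interpretation.
    x0, t0 = theta
    t_pred = t0 + (well_x - x0) / v
    return np.sum((t_pred - t_obs) ** 2)

def locate_source(well_x, t_obs, v, n_starts=50, seed=1):
    # Run a local optimizer from many random initial guesses and keep
    # the best result, as in the multiple-initial-guess strategy.
    rng = np.random.default_rng(seed)
    best = None
    for _ in range(n_starts):
        guess = [rng.uniform(0.0, 1000.0), rng.uniform(0.0, 100.0)]
        res = minimize(misfit, guess, args=(well_x, t_obs, v),
                       method='Nelder-Mead')
        if best is None or res.fun < best.fun:
            best = res
    return best.x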
Ghannam, K; El-Fadel, M
2013-02-01
This paper examines the relative source contributions to ground-level concentrations of carbon monoxide (CO), nitrogen dioxide (NO2), and PM10 (particulate matter with an aerodynamic diameter < 10 μm) in a coastal urban area due to emissions from an industrial complex with multiple stacks, quarrying activities, and a nearby highway. For this purpose, an inventory of CO, oxides of nitrogen (NOx), and PM10 emissions was coupled with the non-steady-state Mesoscale Model 5/California Puff (CALPUFF) dispersion modeling system to simulate individual source contributions at several spatial and temporal scales. Because the contribution of a particular source to ground-level concentrations can be evaluated by simulating either that single source's emissions or total emissions except that source, a set of emission sensitivity simulations was designed to examine whether CALPUFF maintains a linear relationship between emission rates and predicted concentrations in cases where emitted plumes overlap and chemical transformations are simulated. Source apportionment revealed that ground-level releases (i.e., the highway and quarries) extended over large areas dominated the contribution to exposure levels over elevated point sources, despite the fact that cumulative emissions from point sources are higher. Sensitivity analysis indicated that chemical transformations of NOx are insignificant, possibly due to short-range plume transport, with CALPUFF exhibiting a linear response to changes in emission rate. The current paper points to the significance of ground-level emissions in contributing to urban air pollution exposure and questions the viability of the prevailing paradigm of point-source emission reduction, especially since the incremental improvement in air quality associated with this common abatement strategy may not deliver the desired benefit in terms of lower exposure, despite costly emissions capping. The application of atmospheric dispersion models for source apportionment helps identify major contributors to regional air pollution. In industrial urban areas where multiple sources with different geometries contribute to emissions, ground-level releases extended over large areas, such as roads and quarries, often dominate the contribution to ground-level air pollution. Industrial emissions released from elevated stacks may experience significant dilution, resulting in a minor contribution to exposure at ground level. In such contexts, emission reduction, which is invariably the abatement strategy targeting industries at significant investment in control equipment or process change, may yield minimal return on investment in terms of air quality improvement at sensitive receptors.
Distinguishing one from many using super-resolution compressive sensing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anthony, Stephen Michael; Mulcahy-Stanislawczyk, Johnathan; Shields, Eric A.
2018-05-14
Distinguishing whether a signal corresponds to a single source or a limited number of highly overlapping point spread functions (PSFs) is a ubiquitous problem across all imaging scales, whether detecting receptor-ligand interactions in cells or detecting binary stars. Super-resolution imaging based upon compressed sensing exploits the relative sparseness of the point sources to successfully resolve sources which may be separated by much less than the Rayleigh criterion. However, as a solution to an underdetermined system of linear equations, compressive sensing requires the imposition of constraints which may not always be valid. One typical constraint is that the PSF is known. However, the PSF of the actual optical system may reflect aberrations not present in the theoretical ideal optical system. Even when the optics are well characterized, the actual PSF may reflect factors such as non-uniform emission of the point source (e.g., fluorophore dipole emission). As such, the actual PSF may differ from the PSF used as a constraint. Similarly, multiple different regularization constraints have been suggested, including the l1-norm, l0-norm, and generalized Gaussian Markov random fields (GGMRFs), each of which imposes a different constraint. Other important factors include the signal-to-noise ratio of the point sources and whether the point sources vary in intensity. In this work, we explore how these factors influence super-resolution image recovery robustness, determining the sensitivity and specificity. In conclusion, we determine an approach that is more robust to the types of PSF errors present in actual optical systems.
VizieR Online Data Catalog: Polarized point sources in LOTSS-HETDEX (Van Eck+, 2018)
NASA Astrophysics Data System (ADS)
van Eck, C. L.; Haverkorn, M.; Alves, M. I. R.; Beck, R.; Best, P.; Carretti, E.; Chyzy, K. T.; Farnes, J. S.; Ferriere, K.; Hardcastle, M. J.; Heald, G.; Horellou, C.; Iacobelli, M.; Jelic, V.; Mulcahy, D. D.; O'Sullivan, S. P.; Polderman, I. M.; Reich, W.; Riseley, C. J.; Rottgering, H.; Schnitzeler, D. H. F. M.; Shimwell, T. W.; Vacca, V.; Vink, J.; White, G. J.
2018-06-01
Visibility data were taken from LOTSS, imaged in polarization, and RM synthesis was applied. The resulting RM spectra were searched for polarization peaks. Detected peaks that were determined not to be foreground or instrumental effects were collected in this catalog. Source locations (for peak searches) were selected from TGSS-ADR1 (J/A+A/598/A78). Due to overlap between fields, some sources were detected multiple times, as recorded in the Ndet column. Polarized sources were cross-matched with the high-resolution LOTSS images (Shimwell+, in prep), and with WISE and PanSTARRS images, which were used to determine the source classification and morphology. (1 data file).
Adjustable long duration high-intensity point light source
NASA Astrophysics Data System (ADS)
Krehl, P.; Hagelweide, J. B.
1981-06-01
A new long duration high-intensity point light source with adjustable light duration and a small light spot locally stable in time has been developed. The principle involved is a stationary high-temperature plasma flow inside a partly constrained capillary of a coaxial spark gap which is viewed end-on through a terminating Plexiglas window. The point light spark gap is operated via a resistor by an artificial transmission line. Using two exchangeable inductance sets in the line, two ranges of photoduration, 10-130 μs and 100-600 μs, can be covered. For a light spot size of 1.5 mm diameter the corresponding peak light outputs amount to 5×10⁶ and 1.6×10⁶ candelas, respectively. Within these ranges the duration is controlled by an ignitron crowbar that extinguishes the plasma. The adjustable photoduration is very useful for the application of continuous-writing rotating mirror cameras, thus preventing multiple exposures. The essentially uniform exposure within the visible spectral range makes the new light source suitable for color cinematography.
Hybrid Optimization Parallel Search PACKage
DOE Office of Scientific and Technical Information (OSTI.GOV)
2009-11-10
HOPSPACK is open source software for solving optimization problems without derivatives. Application problems may have a fully nonlinear objective function, bound constraints, and linear and nonlinear constraints. Problem variables may be continuous, integer-valued, or a mixture of both. The software provides a framework that supports any derivative-free type of solver algorithm. Through the framework, solvers request parallel function evaluation, which may use MPI (multiple machines) or multithreading (multiple processors/cores on one machine). The framework provides a Cache and Pending Cache of saved evaluations that reduces execution time and facilitates restarts. Solvers can dynamically create other algorithms to solve subproblems, a useful technique for handling multiple start points and integer-valued variables. HOPSPACK ships with the Generating Set Search (GSS) algorithm, developed at Sandia as part of the APPSPACK open source software project.
Mapping ecosystem service indicators in a Great Lakes estuarine Area of Concern
Estuaries provide multiple ecosystem services from which humans benefit. Currently, thirty-six Great Lakes estuaries in the United States and Canada are designated as Areas of Concern (AOCs) due to a legacy of chemical contamination, degraded habitat, and non-point-source polluti...
Nitrate-Nitrogen, Landuse/Landcover, and Soil Drainage Associations at Multiple Spatial Scales
Managing non–point-source pollution of water requires knowledge of land use/land cover (LULC) influences at altering watershed scales. To gain improved understanding of relationships among LULC, soil drainage, and dissolved nitrate-N dynamics within the Calapooia River Basin in w...
Global Situational Awareness with Free Tools
2015-01-15
Client Technical Solutions • Software Engineering Measurement and Analysis • Architecture Practices • Product Line Practice...multiple data sources • Snort (Snorby on Security Onion) • Nagios • SharePoint RSS • Flow • Others • Leverage standard data formats • Keyhole Markup Language
NASA Astrophysics Data System (ADS)
Gatto, A.; Parolari, P.; Boffi, P.
2018-05-01
Frequency division multiplexing (FDM) is attractive for achieving high capacities in multiple access networks characterized by direct modulation and direct detection. In this paper we consider point-to-point intra- and inter-datacenter connections to understand the performance of FDM operation compared with that achievable with a standard multiple-carrier modulation approach based on discrete multitone (DMT). DMT and FDM make it possible to match the non-uniform, bandwidth-limited response of the system under test, associated with the employment of low-cost directly modulated sources, such as VCSELs with high-frequency chirp, and with fibre propagation in the presence of chromatic dispersion. While for the very short distances typical of intra-datacentre communications the large number of DMT subcarriers increases the transported capacity with respect to FDM, for the few tens of km typical of inter-datacentre connections the capabilities of FDM are more evident, providing system performance similar to that of DMT.
AN ACCURACY ASSESSMENT OF MULTIPLE MID-ATLANTIC SUB-PIXEL IMPERVIOUS SURFACE MAPS
Anthropogenic impervious surfaces have an important relationship with non-point source pollution (NPS) in urban watersheds. The amount of impervious surface area in a watershed is a key indicator of landscape change. As a single variable, it serves to integrate a number of conc...
Multiple magma emplacement and its effect on the superficial deformation: hints from analogue models
NASA Astrophysics Data System (ADS)
Montanari, Domenico; Bonini, Marco; Corti, Giacomo; del Ventisette, Chiara
2017-04-01
To test the effect exerted by multiple magma emplacement on the deformation pattern, we have run analogue models with synchronous as well as diachronous magma injection from different, aligned inlets. The distance between injection points, as well as the activation in time of the injection points, was varied for each model. Our model results show how the position and activation in time of injection points (which reproduce multiple magma batches in nature) strongly influence model evolution. In the case of synchronous injection at different inlets, the intrusions and the associated surface deformation were elongated. Forced folds and annular bounding reverse faults were quite elliptical, with the main axis of the elongated dome trending sub-parallel to the direction of the magma input points. Model results also indicate that injection from multiple aligned sources can reproduce the same features as systems associated with planar feeder dikes, suggesting that caution should be taken when trying to infer the feeding areas on the basis of the deformation features observed at the surface or in seismic profiles. Diachronous injection from different injection points showed that the deformation observed at the surface does not necessarily reflect the location and/or geometry of the feeders. Most notably, these experiments suggest that coeval magma injection from different sources favors the lateral migration of magma rather than vertical growth, promoting the development of laterally interconnected intrusions. Recently, some authors (Magee et al., 2014, 2016; Schofield et al., 2015) have suggested, based on seismic reflection data analysis, that interconnected sills and inclined sheets can facilitate the transport of magma over great vertical distances and laterally for large distances. Intrusions and volcanoes fed by sill complexes may thus be laterally offset significantly from the melt source. Our model results strongly support these findings, by reproducing a strong lateral magma migration in the laboratory and suggesting a possible mechanism. The models also confirmed that lateral magma migration can take place with little or no accompanying surface deformation. The research leading to these results has received funding from the European Community's Seventh Framework Programme under grant agreement No. 608553 (Project IMAGE). References: Magee et al., 2014. Basin Research, v. 26, p. 85-105, doi:10.1111/bre.12044. Magee et al., 2016. Geosphere, v. 12, p. 809-841, ISSN: 1553-040X. Schofield et al., 2015. Basin Research, v. 29, p. 41-63, doi:10.1111/bre.12164.
A double-correlation tremor-location method
NASA Astrophysics Data System (ADS)
Li, Ka Lok; Sgattoni, Giulia; Sadeghisorkhani, Hamzeh; Roberts, Roland; Gudmundsson, Olafur
2017-02-01
A double-correlation method is introduced to locate tremor sources based on stacks of complex, doubly-correlated tremor records from multiple triplets of seismographs, back-projected to hypothetical source locations in a geographic grid. Peaks in the resulting stack of moduli are inferred source locations. The stack of moduli is a robust measure of energy radiated from a point source or point sources, even when the velocity information is imprecise. Application to real data shows how double correlation focuses the source mapping compared to the common single-correlation approach. Synthetic tests demonstrate the robustness of the method and its resolution limitations, which are controlled by the station geometry, the finite frequency of the signal, the quality of the velocity information used, and the noise level. Both random noise and signal or noise correlated at time shifts inconsistent with the assumed velocity structure can be effectively suppressed. Assuming a surface wave velocity, we can constrain the source location even if the surface wave component does not dominate. The method can in principle also be used with body waves in 3-D, although this requires more data and seismographs placed near the source for depth resolution.
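A minimal sketch of the triplet back-projection, assuming complex (analytic) records, a 2-D constant-velocity medium, and integer-sample lags; the real method's stacking and normalization details may differ.

import numpy as np

def correlate_at(a, b, lag_samples):
    # Normalized cross-correlation of two records at one integer lag
    # (positive lag: record a leads record b).
    s = int(round(lag_samples))
    n = len(a)
    if s >= 0:
        return np.sum(a[s:] * np.conj(b[:n - s])) / n
    return np.sum(a[:n + s] * np.conj(b[-s:])) / n

def double_correlation_stack(waves, stations, grid, v, fs):
    # Back-project doubly-correlated station triplets onto a grid of trial
    # source positions (constant velocity v, sampling rate fs); the
    # modulus of the complex stack peaks near the true source.
    n = len(stations)
    power = np.zeros(len(grid))
    for gi, g in enumerate(grid):
        t = np.linalg.norm(stations - g, axis=1) / v  # predicted travel times
        acc = 0.0 + 0.0j
        for i in range(n):
            for j in range(i + 1, n):
                for k in range(j + 1, n):
                    cij = correlate_at(waves[i], waves[j], (t[i] - t[j]) * fs)
                    cjk = correlate_at(waves[j], waves[k], (t[j] - t[k]) * fs)
                    acc += cij * cjk                  # double correlation
        power[gi] = abs(acc)
    return power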
Nguyen, Hai M.; Matsumoto, Jumpei; Tran, Anh H.; Ono, Taketoshi; Nishijo, Hisao
2014-01-01
Previous studies have reported that multiple brain regions are activated during spatial navigation. However, it is unclear whether these activated brain regions are specifically associated with spatial updating or whether some regions are recruited for parallel cognitive processes. The present study aimed to localize current sources of event related potentials (ERPs) associated with spatial updating specifically. In the control phase of the experiment, electroencephalograms (EEGs) were recorded while subjects sequentially traced 10 blue checkpoints on the streets of a virtual town, which were sequentially connected by a green line, by manipulating a joystick. In the test phase of the experiment, the checkpoints and green line were not indicated. Instead, a tone was presented when the subjects entered the reference points where they were then required to trace the 10 invisible spatial reference points corresponding to the checkpoints. The vertex-positive ERPs with latencies of approximately 340 ms from the moment when the subjects entered the unmarked reference points were significantly larger in the test than in the control phases. Current source density analysis of the ERPs by standardized low-resolution brain electromagnetic tomography (sLORETA) indicated activation of brain regions in the test phase that are associated with place and landmark recognition (entorhinal cortex/hippocampus, parahippocampal and retrosplenial cortices, fusiform, and lingual gyri), detecting self-motion (posterior cingulate and posterior insular cortices), motor planning (superior frontal gyrus, including the medial frontal cortex), and regions that process spatial attention (inferior parietal lobule). The present results provide the first identification of the current sources of ERPs associated with spatial updating, and suggest that multiple systems are active in parallel during spatial updating. PMID:24624067
Computing Fault Displacements from Surface Deformations
NASA Technical Reports Server (NTRS)
Lyzenga, Gregory; Parker, Jay; Donnellan, Andrea; Panero, Wendy
2006-01-01
Simplex is a computer program that calculates locations and displacements of subterranean faults from data on Earth-surface deformations. The calculation involves inversion of a forward model (given a point source representing a fault, the forward model calculates the surface deformations) for the displacements and strains caused by a fault located in an isotropic, elastic half-space. The inversion involves the use of nonlinear, multiparameter estimation techniques. The input surface-deformation data can be in multiple formats, with absolute or differential positioning, and can be derived from multiple sources, including interferometric synthetic-aperture radar, the Global Positioning System, and strain meters. Parameters can be constrained or free. Estimates can be calculated for single or multiple faults, and estimates of parameters are accompanied by reports of their covariances and uncertainties. Simplex has been tested extensively against forward models and against other means of inverting geodetic data and seismic observations.
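To illustrate the inversion idea (not Simplex's actual dislocation model), the sketch below fits a Mogi point pressure source to synthetic vertical displacements with a Nelder-Mead (downhill simplex) search; the forward model, starting values, and noise level are all assumptions.

import numpy as np
from scipy.optimize import minimize

def mogi_uz(params, x, y, nu=0.25):
    # Vertical surface displacement of a point pressure source in an
    # elastic half-space (Mogi model), used here as a simple stand-in
    # for the fault dislocation forward model that Simplex inverts.
    x0, y0, d, dV = params
    R = np.sqrt((x - x0) ** 2 + (y - y0) ** 2 + d ** 2)
    return (1 - nu) / np.pi * dV * d / R ** 3

def invert(x, y, uz_obs):
    # Nonlinear multiparameter estimation: minimize the residual between
    # observed and modeled displacements over location, depth, and volume.
    cost = lambda p: np.sum((mogi_uz(p, x, y) - uz_obs) ** 2)
    res = minimize(cost, x0=[0.0, 0.0, 2000.0, 1e6], method='Nelder-Mead')
    return res.x

# Synthetic test: recover an assumed source from noisy surface data.
rng = np.random.default_rng(0)
x, y = rng.uniform(-5e3, 5e3, (2, 100))
uz = mogi_uz([500.0, -300.0, 1500.0, 2e6], x, y) + rng.normal(0, 1e-4, 100)
print(invert(x, y, uz))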
Power-output regularization in global sound equalization.
Stefanakis, Nick; Sarris, John; Cambourakis, George; Jacobsen, Finn
2008-01-01
The purpose of equalization in room acoustics is to compensate for the undesired modification that an enclosure introduces to signals such as audio or speech. In this work, equalization in a large part of the volume of a room is addressed. The multiple point method is employed with an acoustic power-output penalty term instead of the traditional quadratic source effort penalty term. Simulation results demonstrate that this technique gives a smoother decline of the reproduction performance away from the control points.
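The multiple point method with a penalty term reduces to a regularized least-squares solve. In the sketch below, G, d, W, and beta are assumed inputs; with W equal to the identity one recovers the classical source-effort penalty, while the power-output variant would substitute a radiation-resistance-like matrix for W.

import numpy as np

def equalizer_strengths(G, d, W, beta):
    # Regularized multiple-point solution q minimizing
    #   ||G q - d||^2 + beta * q^H W q,
    # where G maps source strengths to control-point pressures, d is the
    # desired field at the control points, and W is either the identity
    # (source-effort penalty) or a radiation-resistance matrix
    # (power-output penalty).
    A = G.conj().T @ G + beta * W
    return np.linalg.solve(A, G.conj().T @ d)

# Example with assumed dimensions: 200 control points, 8 sources.
rng = np.random.default_rng(0)
G = rng.normal(size=(200, 8)) + 1j * rng.normal(size=(200, 8))
d = rng.normal(size=200) + 1j * rng.normal(size=200)
q = equalizer_strengths(G, d, W=np.eye(8), beta=0.1)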
D-D neutron generator development at LBNL.
Reijonen, J; Gicquel, F; Hahto, S K; King, M; Lou, T-P; Leung, K-N
2005-01-01
The plasma and ion source technology group at Lawrence Berkeley National Laboratory is developing advanced, next-generation D-D neutron generators. Three distinct developments are discussed in this presentation: a multi-stage, accelerator-based axial neutron generator, a high-output co-axial neutron generator, and a point source neutron generator. These generators employ RF-induction discharge to produce deuterium ions. The distinctive features of the RF discharge are its capability to generate a high fraction of atomic hydrogen species, high current densities, and stable, long-life operation. The axial neutron generator is designed for applications that require fast pulsing together with medium to high D-D neutron output. The co-axial neutron generator is aimed at high neutron output with cw or pulsed operation, using either the D-D or D-T fusion reaction. The point source neutron generator is a new concept utilizing a toroidal-shaped plasma generator. The beam is extracted from multiple apertures and focused onto the target tube, which is located at the middle of the generator. This generates a point source of D-D, T-T or D-T neutrons with high output flux. The latest developments, together with measured data, are discussed in this article.
NASA Astrophysics Data System (ADS)
Kraemer, Kathleen E.; Sloan, G. C.
2015-01-01
We compare infrared observations of the Small Magellanic Cloud (SMC) by the Midcourse Space Experiment (MSX) and the Spitzer Space Telescope to better understand what components of a metal-poor galaxy dominate radiative processes in the infrared. The SMC, at a distance of ~60 kpc and with a metallicity of ~0.1-0.2 solar, can serve as a nearby proxy for metal-poor galaxies at high redshift. The MSX Point Source Catalog contains 243 objects in the SMC that were detected at 8.3 microns, the most sensitive MSX band. Multi-epoch, multi-band mapping with Spitzer, supplemented with observations from the Two-Micron All-Sky Survey (2MASS) and the Wide-field Infrared Survey Explorer (WISE), provides variability information, and, together with spectra from Spitzer for ~15% of the sample, enables us to determine what these luminous sources are. How many remain simple point sources? What fraction break up into multiple stars? Which are star forming regions, with both bright diffuse emission and point sources? How do evolved stars and stellar remnants contribute at these wavelengths? What role do young stellar objects and HII regions play? Answering these questions sets the stage for understanding what we will see with the James Webb Space Telescope (JWST).
Nearby Dwarf Stars: Duplicity, Binarity, and Masses
NASA Astrophysics Data System (ADS)
Mason, Brian D.; Hartkopf, William I.; Henry, Todd J.; Jao, Wei-Chun; Subasavage, John; Riedel, Adric; Winters, Jennifer
2010-02-01
Double stars have proven to be both a blessing and a curse for astronomers since their discovery over two centuries ago. They remain the only reliable source of masses, the most fundamental parameter defining stars. On the other hand, their sobriquet ``vermin of the sky'' is well-earned, due to the complications they present to both observers and theoreticians. These range from non-linear proper motions to stray light in detectors, to confusion in pointing of instruments due to non-symmetric point spread functions, to angular momentum conservation in multiple stars which results in binaries closer than allowed by evolution of two single stars. This proposal is primarily focused on targets where precise astrophysical information is sorely lacking: white dwarfs, red dwarfs, and subdwarfs. The proposed work will refine current statistics regarding duplicity (chance alignments of nearby point sources) and binarity (actual physical relationships), and improve the precisions and accuracies of stellar masses. Several targets support Riedel's and Winters' theses.
Nearby Dwarf Stars: Duplicity, Binarity, and Masses
NASA Astrophysics Data System (ADS)
Mason, Brian D.; Hartkopf, William I.; Henry, Todd J.; Jao, Wei-Chun; Subasavage, John; Riedel, Adric; Winters, Jennifer
2009-08-01
Double stars have proven to be both a blessing and a curse for astronomers since their discovery over two centuries ago. They remain the only reliable source of masses, the most fundamental parameter defining stars. On the other hand, their sobriquet ``vermin of the sky'' is well-earned, due to the complications they present to both observers and theoreticians. These range from non-linear proper motions to stray light in detectors, to confusion in pointing of instruments due to non-symmetric point spread functions, to angular momentum conservation in multiple stars which results in binaries closer than allowed by evolution of two single stars. This proposal is primarily focused on targets where precise astrophysical information is sorely lacking: white dwarfs, red dwarfs, and subdwarfs. The proposed work will refine current statistics regarding duplicity (chance alignments of nearby point sources) and binarity (actual physical relationships), and improve the precisions and accuracies of stellar masses. Several targets support Riedel's and Winters' theses.
Multiple-component Decomposition from Millimeter Single-channel Data
NASA Astrophysics Data System (ADS)
Rodríguez-Montoya, Iván; Sánchez-Argüelles, David; Aretxaga, Itziar; Bertone, Emanuele; Chávez-Dagostino, Miguel; Hughes, David H.; Montaña, Alfredo; Wilson, Grant W.; Zeballos, Milagros
2018-03-01
We present an implementation of a blind source separation algorithm to remove foregrounds from millimeter surveys made with single-channel instruments. To make such a decomposition possible on single-wavelength data, we generate levels of artificial redundancy, then perform a blind decomposition, calibrate the resulting maps, and lastly measure physical information. We simulate the reduction pipeline using mock data: atmospheric fluctuations, extended astrophysical foregrounds, and point-like sources; we then apply the same methodology to the Aztronomical Thermal Emission Camera (AzTEC)/ASTE survey of the Great Observatories Origins Deep Survey-South (GOODS-S). In both applications, our technique robustly decomposes redundant maps into their underlying components, reducing flux bias, improving the signal-to-noise ratio, and minimizing information loss. In particular, GOODS-S is decomposed into four independent physical components: one is the already-known map of point sources, two are atmospheric and systematic foregrounds, and the fourth is an extended emission that can be interpreted as the confusion background of faint sources.
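As an illustration of the blind decomposition step, the sketch below runs FastICA from scikit-learn on a stack of artificially redundant maps; the file name, the component count, and the choice of FastICA itself are assumptions, not necessarily the algorithm the authors implemented.

import numpy as np
from sklearn.decomposition import FastICA

# Stack the redundant maps as rows: each row is one flattened map
# realization, each column one sky pixel.  File name is hypothetical.
maps = np.load('redundant_maps.npy')          # shape (n_maps, n_pixels), assumed
X = maps - maps.mean(axis=1, keepdims=True)   # remove per-map mean

# Four components, e.g. two atmospheric/systematic foregrounds, one
# extended emission, one point-source map (requires n_maps >= 4).
ica = FastICA(n_components=4, random_state=0)
components = ica.fit_transform(X.T).T         # (4, n_pixels) independent maps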
Oil spill contamination probability in the southeastern Levantine basin.
Goldman, Ron; Biton, Eli; Brokovich, Eran; Kark, Salit; Levin, Noam
2015-02-15
Recent gas discoveries in the eastern Mediterranean Sea have led to multiple operations with substantial economic interest, and with them comes a risk of oil spills and their potential environmental impacts. To examine the potential spatial distribution of this threat, we created seasonal maps of the probability of oil spill pollution reaching an area in the Israeli coastal and exclusive economic zones, given knowledge of its initial sources. We performed simulations of virtual oil spills using realistic atmospheric and oceanic conditions. The resulting maps show dominance of the alongshore northerly current, which causes the high probability areas to be stretched parallel to the coast, increasing contamination probability downstream of source points. The seasonal westerly wind forcing determines how wide the high probability areas are, and may also restrict these to a small coastal region near source points. Seasonal variability in probability distribution, oil state, and pollution time is also discussed. Copyright © 2014 Elsevier Ltd. All rights reserved.
Ivahnenko, Tamara; Ortiz, Roderick F.; Stogner, Sr., Robert W.
2013-01-01
As a result of continued water-quality concerns in the Arkansas River, including metal contamination from historical mining practices, potential effects associated with storage and movement of water, point- and nonpoint-source contamination, population growth, storm-water flows, and future changes in land and water use, the Arkansas River Basin Regional Resource Planning Group (RRPG) developed a strategy to address these issues. As such, a cooperative strategic approach to address the multiple water-quality concerns within selected reaches of the Arkansas River was developed to (1) identify stream reaches where stream-aquifer interactions have a pronounced effect on water quality and (or) where reactive transport, and physical and (or) chemical alteration of flow during conveyance, is occurring, (2) quantify loading from point sources, and (3) determine source areas and mass loading for selected constituents.
Chen, Yanxi; Niu, Zhiguang; Zhang, Hongwei
2013-06-01
Landscape lakes in cities face a high risk of eutrophication because of their special characteristics and functions in the urban water circulation system. Using HMLA, a landscape lake located in Tianjin City, North China, that receives a mixture of point source (PS) and non-point source (NPS) pollution, we explored a methodology coupling Fluent and AQUATOX to simulate and predict the state of HMLA, and a trophic index was used to assess the eutrophication state. We then used water compensation optimization and three scenarios to determine the optimal management methodology. The three scenarios comprised an ecological restoration scenario, a best management practices (BMPs) scenario, and a scenario combining both. Our results suggest that maintaining a healthy ecosystem through ecoremediation is necessary and that BMPs have a far-reaching effect on water reuse and NPS pollution control. This study has implications for eutrophication control and management under continuing urbanization in China.
Single and Multiple Scattered Solar Radiation
1982-08-30
so that factor can be expected to vary considerably from one scattering point to the next. The monochromatic intensity at the observer due to all of...the single scattering sources within the line-of-sight is obtained by summing over the optical path the product of the source function and the...the observer. Using a dot product between position vectors on the unit sphere, it can be shown that the cosine of the scattering angle satisfies cosΘ = cosθs cosθ0 + sinθs sinθ0 cos(φs − φ0)
Finite-Length Line Source Superposition Model (FLLSSM)
NASA Astrophysics Data System (ADS)
1980-03-01
A linearized thermal conduction model was developed to economically determine media temperatures in geologic repositories for nuclear wastes. Individual canisters containing either high level waste or spent fuel assemblies were represented as finite length line sources in a continuous media. The combined effects of multiple canisters in a representative storage pattern were established at selected points of interest by superposition of the temperature rises calculated for each canister. The methodology is outlined and the computer code FLLSSM which performs required numerical integrations and superposition operations is described.
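A minimal sketch of the superposition idea, assuming constant canister power (no decay), an infinite homogeneous medium, and the continuous point-source solution integrated along each canister axis; all numerical values are illustrative, not repository design data.

import numpy as np
from scipy.integrate import quad
from scipy.special import erfc

def line_source_dT(point, canister, L, q_lin, k, alpha, t):
    # Temperature rise at `point` from one finite-length line source of
    # length L (m) and linear power q_lin (W/m), in a medium with
    # conductivity k (W/m/K) and diffusivity alpha (m^2/s), at time t (s):
    # superposition of continuous point sources along the canister axis.
    px, py, pz = point
    cx, cy, cz = canister          # coordinates of the lower end of the line
    rho2 = (px - cx) ** 2 + (py - cy) ** 2
    def integrand(zp):
        R = np.sqrt(rho2 + (pz - (cz + zp)) ** 2)
        return erfc(R / np.sqrt(4 * alpha * t)) / R
    val, _ = quad(integrand, 0.0, L)
    return q_lin / (4 * np.pi * k) * val

def repository_dT(point, canisters, **kw):
    # Superpose the temperature rises of all canisters at one point.
    return sum(line_source_dT(point, c, **kw) for c in canisters)

# A 5 x 5 storage pattern on a 10 m grid, evaluated at one point of interest.
canisters = [(x, y, 0.0) for x in range(0, 50, 10) for y in range(0, 50, 10)]
dT = repository_dT((25.0, 25.0, 2.0), canisters,
                   L=4.0, q_lin=200.0, k=2.5, alpha=1.1e-6, t=3.15e8)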
NASA Astrophysics Data System (ADS)
Gong, K.; Fritsch, D.
2018-05-01
Nowadays, multiple-view stereo satellite imagery has become a valuable data source for digital surface model (DSM) generation and 3D reconstruction. In 2016, a well-organized, public multiple-view stereo benchmark for commercial satellite imagery was released by the Johns Hopkins University Applied Physics Laboratory, USA. This benchmark motivates us to explore methods that can generate accurate digital surface models from a large number of high resolution satellite images. In this paper, we propose a pipeline for processing the benchmark data into digital surface models. As a pre-processing step, we filter all possible image pairs according to incidence angle and capture date. With the selected image pairs, the relative bias-compensated model is applied for relative orientation. After epipolar image pair generation, dense image matching and triangulation, the 3D point clouds and DSMs are acquired. The DSMs are aligned to a quasi-ground plane by the relative bias-compensated model. We apply a median filter to generate the fused point cloud and DSM. By comparison with the reference LiDAR DSM, the accuracy, completeness and robustness are evaluated. The results show that the point cloud reconstructs surfaces with small structures and that the fused DSM generated by our pipeline is accurate and robust.
Feedback power control strategies in wireless sensor networks with joint channel decoding.
Abrardo, Andrea; Ferrari, Gianluigi; Martalò, Marco; Perna, Fabio
2009-01-01
In this paper, we derive feedback power control strategies for block-faded multiple access schemes with correlated sources and joint channel decoding (JCD). In particular, upon derivation of the feasible signal-to-noise ratio (SNR) region for the considered multiple access schemes, i.e., the multidimensional SNR region where error-free communications are, in principle, possible, two feedback power control strategies are proposed: (i) a classical feedback power control strategy, which aims at equalizing all link SNRs at the access point (AP), and (ii) an innovative optimized feedback power control strategy, which tries to make the network operating point fall within the feasible SNR region at the lowest overall transmit energy consumption. These strategies are referred to as "balanced SNR" and "unbalanced SNR," respectively. While they require, in principle, an unlimited power control range at the sources, we also propose practical versions with a limited power control range. We first consider a scenario with orthogonal links and ideal feedback. Then, we analyze the robustness of the proposed power control strategies to possible non-idealities, in terms of residual multiple access interference and noisy feedback channels. Finally, we successfully apply the proposed feedback power control strategies to a limiting case of the considered class of multiple access schemes, namely a central estimating officer (CEO) scenario, where the sensors observe noisy versions of a common binary information sequence and the AP's goal is to estimate this sequence by properly fusing the soft-output information produced by the JCD algorithm.
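A minimal sketch of the balanced-SNR strategy for the orthogonal-link, ideal-feedback scenario: each source iteratively scales its transmit power toward a common target SNR at the AP, with a clipped (limited) power control range. The gains, noise level, and target below are assumed values, not the paper's parameters.

import numpy as np

def balance_snr(gains, noise, snr_target, p_max, iters=50):
    # Iterative feedback power control driving every link SNR at the
    # access point toward a common target ("balanced SNR"), with a
    # limited power control range enforced by the clip to [1e-6, p_max].
    # Orthogonal, interference-free links are assumed.
    p = np.ones_like(gains)                 # initial transmit powers (W)
    for _ in range(iters):
        snr = gains * p / noise
        p = np.clip(p * snr_target / snr, 1e-6, p_max)
    return p

p = balance_snr(np.array([1e-6, 5e-7, 2e-6]), noise=1e-9,
                snr_target=10.0, p_max=1.0)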
Naser, Mohamed A.; Patterson, Michael S.
2011-01-01
Reconstruction algorithms are presented for two-step solutions of the bioluminescence tomography (BLT) and the fluorescence tomography (FT) problems. In the first step, a continuous wave (cw) diffuse optical tomography (DOT) algorithm is used to reconstruct the tissue optical properties assuming known anatomical information provided by x-ray computed tomography or other methods. Minimization problems are formed based on L1 norm objective functions, where normalized values for the light fluence rates and the corresponding Green's functions are used. Then an iterative minimization solution shrinks the permissible regions where the sources are allowed by selecting points with higher probability to contribute to the source distribution. Throughout this process the permissible region shrinks from the entire object to just a few points. The optimum reconstructed bioluminescence and fluorescence distributions are chosen to be the results of the iteration corresponding to the permissible region where the objective function has its global minimum. This provides efficient BLT and FT reconstruction algorithms without the need for a priori information about the bioluminescence sources or the fluorophore concentration. Multiple small sources and large distributed sources can be reconstructed with good accuracy for location and for total source power (BLT) or total number of fluorophore molecules (FT). For non-uniform distributed sources, the size and magnitude become degenerate due to the degrees of freedom available for possible solutions. However, increasing the number of data points by increasing the number of excitation sources can improve the accuracy of reconstruction for non-uniform fluorophore distributions. PMID:21326647
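A schematic of the shrinking-region idea, not the authors' exact algorithm: given a matrix G of normalized Green's functions and normalized measurements b (hypothetical names), the permissible set starts as the whole object and is repeatedly restricted to the points carrying the most reconstructed power, keeping the iteration whose objective value is smallest.

```python
import numpy as np

def shrink_reconstruct(G, b, keep_frac=0.7):
    """Schematic permissible-region shrinking: fit the source distribution
    on the current permissible set, then keep only the points with the
    largest reconstructed power, and return the best (global-minimum)
    iteration. G: (n_detectors, n_points); b: (n_detectors,)."""
    region = np.arange(G.shape[1])          # permissible region: whole object
    best = (np.inf, None, None)
    while True:
        x, *_ = np.linalg.lstsq(G[:, region], b, rcond=None)
        x = np.clip(x, 0.0, None)           # source power must be nonnegative
        obj = np.abs(G[:, region] @ x - b).sum()   # L1-norm objective
        if obj < best[0]:
            best = (obj, region.copy(), x.copy())
        if len(region) <= 3:                # down to just a few points
            break
        keep = max(3, int(keep_frac * len(region)))
        region = region[np.argsort(x)[-keep:]]     # shrink permissible region
    return best
```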
TiConverter: A training image converting tool for multiple-point geostatistics
NASA Astrophysics Data System (ADS)
Fadlelmula F., Mohamed M.; Killough, John; Fraim, Michael
2016-11-01
TiConverter is a tool developed to ease the application of multiple-point geostatistics, whether with the open-source Stanford Geostatistical Modeling Software (SGeMS) or with other available commercial software. TiConverter has a user-friendly interface and allows the conversion of 2D training images into numerical representations in four different file formats without the need for additional code writing. These are the ASCII (.txt), the geostatistical software library (GSLIB) (.txt), the Isatis (.dat), and the VTK formats. It performs the conversion based on the RGB color system. In addition, TiConverter offers several useful tools, including image resizing, smoothing, and segmenting tools. The purpose of this study is to introduce TiConverter and to demonstrate its application and advantages with several examples from the literature.
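A minimal sketch of the kind of RGB-to-numeric conversion described above, here for the GSLIB format only; mapping each distinct colour to an integer facies code follows the RGB-based approach, but the function name and header layout are illustrative assumptions, not TiConverter's actual code.

```python
import numpy as np
from PIL import Image

def ti_to_gslib(png_path, out_path):
    """Map each distinct RGB colour in a 2D training image to an integer
    facies code and write the grid in GSLIB format (one value per line)."""
    rgb = np.asarray(Image.open(png_path).convert("RGB"))
    flat = rgb.reshape(-1, 3)
    colours, codes = np.unique(flat, axis=0, return_inverse=True)
    ny, nx = rgb.shape[:2]
    with open(out_path, "w") as f:
        f.write(f"training image ({nx} x {ny})\n1\nfacies\n")
        for v in codes:
            f.write(f"{v}\n")
    return len(colours)                      # number of facies found
```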
Minet, E P; Goodhue, R; Meier-Augenstein, W; Kalin, R M; Fenton, O; Richards, K G; Coxon, C E
2017-11-01
Excessive nitrate (NO3-) concentration in groundwater raises health and environmental issues that must be addressed by all European Union (EU) member states under the Nitrates Directive and the Water Framework Directive. The identification of NO3- sources is critical to efficiently control or reverse NO3- contamination, which affects many aquifers. In that respect, the use of the stable isotope ratios 15N/14N and 18O/16O in NO3- (expressed as δ15N-NO3- and δ18O-NO3-, respectively) has long shown its value. However, limitations exist in complex environments where multiple nitrogen (N) sources coexist. This two-year study explores a method for improved NO3- source investigation in a shallow unconfined aquifer with mixed N inputs and a long-established NO3- problem. In this tillage-dominated area of free-draining soil and subsoil, the suspected NO3- sources were diffuse applications of artificial fertiliser and organic point sources (septic tanks and farmyards). Bearing in mind that artificial diffuse sources were ubiquitous, groundwater samples were first classified according to a combination of two indicators of point-source contamination: presence/absence of organic point sources (i.e. septic tank and/or farmyard) near sampling wells, and exceedance/non-exceedance of a contamination threshold value for sodium (Na+) in groundwater. This classification identified three contamination groups: agricultural diffuse source but no point source (D+P-), agricultural diffuse and point source (D+P+), and agricultural diffuse with ambiguous point-source occurrence (D+P±). Thereafter, δ15N-NO3- and δ18O-NO3- data were superimposed on the classification. As δ15N-NO3- was plotted against δ18O-NO3-, comparisons were made between the different contamination groups. Overall, both δ variables were significantly and positively correlated (p < 0.0001, rs = 0.599, slope of 0.5), which was indicative of denitrification. An inspection of the contamination groups revealed that denitrification did not occur in the absence of point-source contamination (group D+P-). In fact, strong significant denitrification lines occurred only in the D+P+ and D+P± groups (p < 0.0001, rs > 0.6, 0.53 ≤ slope ≤ 0.76), i.e. where point-source contamination was characterised or suspected. These lines originated from the 2-6‰ range for δ15N-NO3-, which suggests that (i) NO3- contamination was dominated by an agricultural diffuse N source (most likely the large organic matter pool that has incorporated 15N-depleted nitrogen from artificial fertiliser in agricultural soils and whose nitrification is stimulated by ploughing and fertilisation) rather than by point sources, and (ii) denitrification was possibly favoured by high dissolved organic carbon (DOC) from point sources. Combining contamination indicators with a large stable isotope dataset collected over a large study area could therefore improve our understanding of NO3- contamination processes in groundwater for better land use management. We hypothesise that in future research, additional contamination indicators (e.g. pharmaceutical molecules) could be combined to disentangle NO3- contamination from animal and human wastes.
Methane source identification in Boston, Massachusetts using isotopic and ethane measurements
NASA Astrophysics Data System (ADS)
Down, A.; Jackson, R. B.; Plata, D.; McKain, K.; Wofsy, S. C.; Rella, C.; Crosson, E.; Phillips, N. G.
2012-12-01
Methane has substantial greenhouse warming potential and is the principal component of natural gas. Fugitive natural gas emissions could be a significant source of methane to the atmosphere. However, the cumulative magnitude of natural gas leaks is not yet well constrained. We used a combination of point source measurements and ambient monitoring to characterize the methane sources in the Boston urban area. We developed distinct fingerprints for natural gas and multiple biogenic methane sources based on hydrocarbon concentration and isotopic composition. We combine these data with periodic measurements of atmospheric methane and ethane concentration to estimate the fractional contribution of natural gas and biogenic methane sources to the cumulative urban methane flux in Boston. These results are used to inform an inverse model of urban methane concentration and emissions.
Song, Hajun; Hwang, Sejin; Song, Jong-In
2017-05-15
This study presents an optical frequency switching scheme for a high-speed broadband terahertz (THz) measurement system based on the photomixing technique. The proposed system can achieve high-speed broadband THz measurements using narrow optical frequency scanning of a tunable laser source combined with a wavelength-switchable laser source. In addition, this scheme can provide a larger output power of an individual THz signal compared with that of a multi-mode THz signal generated by multiple CW laser sources. A swept-source THz tomography system implemented with a two-channel wavelength-switchable laser source achieves a reduced time for acquisition of a point spread function and a higher depth resolution in the same amount of measurement time compared with a system with a single optical source.
NASA Astrophysics Data System (ADS)
Lakshmi, V.; Sen, I. S.; Mishra, G.
2017-12-01
There has been much discussion among biologists, ecologists, chemists, geologists, environmental firms, and science policy makers about the impact of human activities on river health. As a result, multiple river restoration projects are ongoing in many large river basins around the world. In the Indian subcontinent, the Ganges River is the focal point of all restoration actions, as it provides food and water security to half a billion people. Serious concerns have been raised about the quality of Ganga water, as toxic chemicals and other pollutants enter the river system through point sources, such as direct wastewater discharge to rivers, or through non-point sources. Point-source pollution can be easily identified and remedial actions taken; however, non-point pollution sources are harder to quantify and mitigate. A large non-point pollution source in the Indo-Gangetic floodplain is the network of small floodplain rivers. However, these rivers are rarely studied, since they are small in catchment area (~1000-10,000 km2) and discharge (<100 m3/s). As a result, the impact of these small floodplain rivers on the dissolved chemical load of large river systems is not constrained. To fill this knowledge gap we monitored the Pandu River for one year, between February 2015 and April 2016. The Pandu River is 242 km long and is a right-bank tributary of the Ganges, with a total catchment area of 1495 km2. Water samples were collected every month for dissolved major and trace elements. Here we show that the concentrations of heavy metals in the Pandu River are higher than the world river average, and that all the dissolved elements show large spatio-temporal variation. We show that the Pandu River exports 192170, 168517, 57802, 32769, 29663, 1043, 279, 241, 225, 162, 97, 28, 25, 22, 20, 8, 4 kg/yr of Ca, Na, Mg, K, Si, Sr, Zn, B, Ba, Mn, Al, Li, Rb, Mo, U, Cu, and Sb, respectively, to the Ganga River, and that the exported chemical flux affects the water chemistry of the Ganga River downstream of the confluence point. We further suggest that small floodplain rivers are an important source contributing to the dissolved chemical budget of large river systems, and that they must be better monitored to address future challenges in river basin management.
Collaborative Action Research on Technology Integration for Science Learning
ERIC Educational Resources Information Center
Wang, Chien-hsing; Ke, Yi-Ting; Wu, Jin-Tong; Hsu, Wen-Hua
2012-01-01
This paper briefly reports the outcomes of an action research inquiry on the use of blogs, MS PowerPoint [PPT], and the Internet as learning tools with a science class of sixth graders for project-based learning. Multiple sources of data were essential to triangulate the key findings articulated in this paper. Corresponding to previous studies,…
Anthropogenic impervious surfaces have an important relationship with non-point source pollution (NPS) in urban watersheds. The amount of impervious surface area in a watershed is a key indicator of landscape change. As a single variable, it serves to integrate a number of concur...
Low Fruit/Vegetable Consumption in the Home: Cumulative Risk Factors in Early Childhood
ERIC Educational Resources Information Center
Ward, Wendy L.; Swindle, Taren M.; Kyzer, Angela L.; Whiteside-Mansell, Leanne
2015-01-01
Cumulative risk theory suggests that a variety of social risk factors would have an additive effect on obesity risk. Multiple studies have suggested that obesity is related to basic resources such as transportation and financial resources. Additional research points to parental engagement and parental monitoring as additional sources of risk. This…
Outdoor air pollution in close proximity to a continuous point source
NASA Astrophysics Data System (ADS)
Klepeis, Neil E.; Gabel, Etienne B.; Ott, Wayne R.; Switzer, Paul
Data are lacking on human exposure to air pollutants occurring in ground-level outdoor environments within a few meters of point sources. To better understand outdoor exposure to tobacco smoke from cigarettes or cigars, and exposure to other types of outdoor point sources, we performed more than 100 controlled outdoor monitoring experiments on a backyard residential patio in which we released pure carbon monoxide (CO) as a tracer gas for continuous time periods lasting 0.5-2 h. The CO was emitted from a single outlet at a fixed per-experiment rate of 120-400 cc min^-1 (~140-450 mg min^-1). We measured CO concentrations every 15 s at up to 36 points around the source along orthogonal axes. The CO sensors were positioned at standing or sitting breathing heights of 2-5 ft (up to 1.5 ft above and below the source) and at horizontal distances of 0.25-2 m. We simultaneously measured real-time air speed, wind direction, relative humidity, and temperature at single points on the patio. The ground-level air speeds on the patio were similar to those we measured during a survey of 26 outdoor patio locations in 5 nearby towns. The CO data exhibited a well-defined proximity effect similar to the indoor proximity effect reported in the literature. Average concentrations were approximately inversely proportional to distance. Average CO levels were approximately proportional to source strength, supporting generalization of our results to different source strengths. For example, we predict a cigarette smoker would cause average fine particle levels of approximately 70-110 μg m^-3 at horizontal distances of 0.25-0.5 m. We also found that average CO concentrations rose significantly as average air speed decreased. We fit a multiplicative regression model to the empirical data that predicts outdoor concentrations as a function of source emission rate, source-receptor distance, air speed and wind direction. The model described the data reasonably well, accounting for ~50% of the log-CO variability in 5-min CO concentrations.
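A multiplicative regression model of this form is equivalent to ordinary least squares on log-transformed variables. A minimal sketch under that assumption (wind direction, which the authors also include, is omitted here for brevity; variable names are illustrative):

```python
import numpy as np

def fit_multiplicative(conc, emission, distance, airspeed):
    """Fit log(C) = b0 + b1*log(E) + b2*log(d) + b3*log(u) by ordinary
    least squares; equivalent to C = exp(b0) * E^b1 * d^b2 * u^b3."""
    X = np.column_stack([np.ones_like(conc),
                         np.log(emission), np.log(distance), np.log(airspeed)])
    beta, *_ = np.linalg.lstsq(X, np.log(conc), rcond=None)
    return beta   # (b0, b1, b2, b3); b2 near -1 matches the 1/distance effect
```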
Pilot points method for conditioning multiple-point statistical facies simulation on flow data
NASA Astrophysics Data System (ADS)
Ma, Wei; Jafarpour, Behnam
2018-05-01
We propose a new pilot points method for conditioning discrete multiple-point statistical (MPS) facies simulation on dynamic flow data. While conditioning MPS simulation on static hard data is straightforward, their calibration against nonlinear flow data is nontrivial. The proposed method generates conditional models from a conceptual model of geologic connectivity, known as a training image (TI), by strategically placing and estimating pilot points. To place pilot points, a score map is generated based on three sources of information: (i) the uncertainty in facies distribution, (ii) the model response sensitivity information, and (iii) the observed flow data. Once the pilot points are placed, the facies values at these points are inferred from production data and then are used, along with available hard data at well locations, to simulate a new set of conditional facies realizations. While facies estimation at the pilot points can be performed using different inversion algorithms, in this study the ensemble smoother (ES) is adopted to update permeability maps from production data, which are then used to statistically infer facies types at the pilot point locations. The developed method combines the information in the flow data and the TI by using the former to infer facies values at selected locations away from the wells and the latter to ensure consistent facies structure and connectivity where away from measurement locations. Several numerical experiments are used to evaluate the performance of the developed method and to discuss its important properties.
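A schematic of how the three information sources named above could be combined into a score map and used to place pilot points; the entropy-based uncertainty measure, the min-max normalization, and the equal default weights are illustrative assumptions, not the paper's exact recipe.

```python
import numpy as np

def pilot_point_scores(facies_prob, sensitivity, data_misfit, w=(1.0, 1.0, 1.0)):
    """Per-cell score from: (i) facies uncertainty, measured here as the
    binary entropy of the ensemble facies probability, (ii) normalized
    response sensitivity, and (iii) normalized observed-data misfit."""
    p = np.clip(facies_prob, 1e-6, 1 - 1e-6)
    entropy = -(p * np.log(p) + (1 - p) * np.log(1 - p))
    def norm(a):
        return (a - a.min()) / (np.ptp(a) + 1e-12)
    return w[0] * norm(entropy) + w[1] * norm(sensitivity) + w[2] * norm(data_misfit)

def place_pilot_points(score, n_points):
    """Pick the n highest-scoring grid cells as pilot point locations."""
    idx = np.argsort(score, axis=None)[-n_points:]
    return np.column_stack(np.unravel_index(idx, score.shape))
```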
NASA Astrophysics Data System (ADS)
Van Kha, Tran; Van Vuong, Hoang; Thanh, Do Duc; Hung, Duong Quoc; Anh, Le Duc
2018-05-01
The maximum horizontal gradient method was first proposed by Blakely and Simpson (1986) for determining the boundaries between geological bodies with different densities. The method involves the comparison of a center point with its eight nearest neighbors in four directions within each 3 × 3 calculation grid. The horizontal location and magnitude of the maximum values are found by interpolating a second-order polynomial through the trio of points provided that the magnitude of the middle point is greater than its two nearest neighbors in one direction. In theoretical models of multiple sources, however, the above condition does not allow the maximum horizontal locations to be fully located, and it could be difficult to correlate the edges of complicated sources. In this paper, the authors propose an additional condition to identify more maximum horizontal locations within the calculation grid. This additional condition will improve the method algorithm for interpreting the boundaries of magnetic and/or gravity sources. The improved algorithm was tested on gravity models and applied to gravity data for the Phu Khanh basin on the continental shelf of the East Vietnam Sea. The results show that the additional locations of the maximum horizontal gradient could be helpful for connecting the edges of complicated source bodies.
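For reference, a compact sketch of the original Blakely-Simpson scan that the paper extends (the authors' additional condition is not reproduced here): within each 3 × 3 window the centre of the horizontal-gradient grid is compared with its two neighbours along each of the four directions, and where it exceeds both, the maximum is located by a second-order polynomial through the trio of points.

```python
import numpy as np

def max_horizontal_gradient_points(hgrad, dx=1.0, dy=1.0):
    """Blakely & Simpson (1986)-style scan over a horizontal-gradient
    grid; returns (row, col, magnitude) of interpolated maxima."""
    dirs = [(0, 1, dx), (1, 0, dy),
            (1, 1, np.hypot(dx, dy)), (1, -1, np.hypot(dx, dy))]
    points = []
    for i in range(1, hgrad.shape[0] - 1):
        for j in range(1, hgrad.shape[1] - 1):
            g0 = hgrad[i, j]
            for di, dj, d in dirs:
                gm, gp = hgrad[i - di, j - dj], hgrad[i + di, j + dj]
                if g0 > gm and g0 > gp:      # centre exceeds both neighbours
                    a = (gp + gm - 2.0 * g0) / (2.0 * d * d)  # curvature (< 0)
                    b = (gp - gm) / (2.0 * d)                 # slope at centre
                    u = -b / (2.0 * a)                        # vertex offset
                    gmax = g0 - b * b / (4.0 * a)             # vertex magnitude
                    points.append((i + u / d * di, j + u / d * dj, gmax))
    return points
```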
Time-dependent clustering analysis of the second BATSE gamma-ray burst catalog
NASA Technical Reports Server (NTRS)
Brainerd, J. J.; Meegan, C. A.; Briggs, Michael S.; Pendleton, G. N.; Brock, M. N.
1995-01-01
A time-dependent two-point correlation-function analysis of the Burst and Transient Source Experiment (BATSE) 2B catalog finds no evidence of burst repetition. As part of this analysis, we discuss the effects of sky exposure on the observability of burst repetition and present the equation describing the signature of burst repetition in the data. For a model of all burst repetition from a source occurring in less than five days we derive upper limits on the number of bursts in the catalog from repeaters and model-dependent upper limits on the fraction of burst sources that produce multiple outbursts.
Atmospheric scattering of middle uv radiation from an internal source.
Meier, R R; Lee, J S; Anderson, D E
1978-10-15
A Monte Carlo model has been developed which simulates the multiple-scattering of middle-uv radiation in the lower atmosphere. The source of radiation is assumed to be monochromatic and located at a point. The physical effects taken into account in the model are Rayleigh and Mie scattering, pure absorption by particulates and trace atmospheric gases, and ground albedo. The model output consists of the multiply scattered radiance as a function of look-angle of a detector located within the atmosphere. Several examples are discussed, and comparisons are made with direct-source and single-scattered contributions to the signal received by the detector.
PSD Applicability Determination for Multiple Owner/Operator Point Sources Within a Single Facility
This document may be of assistance in applying the New Source Review (NSR) air permitting regulations including the Prevention of Significant Deterioration (PSD) requirements. This document is part of the NSR Policy and Guidance Database. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.
Wise, Daniel R.; Rinella, Frank A.; Rinella, Joseph F.; Fuhrer, Greg J.; Embrey, Sandra S.; Clark, Gregory M.; Schwarz, Gregory E.; Sobieszczyk, Steven
2007-01-01
This study focused on three areas that might be of interest to water-quality managers in the Pacific Northwest: (1) annual loads of total nitrogen (TN), total phosphorus (TP), and suspended sediment (SS) transported through the Columbia River and Puget Sound Basins, (2) annual yields of TN, TP, and SS relative to differences in landscape and climatic conditions between subbasin catchments (drainage basins), and (3) trends in TN, TP, and SS concentrations and loads in comparison to changes in landscape and climatic conditions in the catchments. During water year 2000, an average streamflow year in the Pacific Northwest, the Columbia River discharged about 570,000 pounds per day of TN, about 55,000 pounds per day of TP, and about 14,000 tons per day of SS to the Pacific Ocean. The Snake, Yakima, Deschutes, and Willamette Rivers contributed most of the load discharged to the Columbia River. Point-source nutrient loads to the catchments (almost exclusively from municipal wastewater treatment plants) generally were a small percentage of the total in-stream nutrient loads; however, in some reaches of the Spokane, Boise, Walla Walla, and Willamette River Basins, point sources were responsible for much of the annual in-stream nutrient load. Point-source nutrient loads generally were a small percentage of the total catchment nutrient loads compared to nonpoint sources, except for a few catchments where point-source loads comprised as much as 30 percent of the TN load and as much as 80 percent of the TP load. The annual TN and TP loads from point sources discharging directly to the Puget Sound were about equal to the annual loads from eight major tributaries. Yields of TN, TP, and SS generally were greater in catchments west of the Cascade Range. A multiple linear regression analysis showed that TN yields were significantly (p < 0.05) and positively related to precipitation, atmospheric nitrogen load, fertilizer and manure load, and point-source load, and negatively related to average slope. TP yields were significantly and positively related to precipitation and point-source load, and SS yields were significantly and positively related to precipitation. Forty-eight percent of the available monitoring sites for TN had significant trends in concentration (2 increasing, 19 decreasing), 32 percent of the available sites for TP had significant trends in concentration (7 increasing, 9 decreasing), and 40 percent of the available sites for SS had significant trends in concentration (4 increasing, 15 decreasing). The trends in load followed a similar pattern, but with fewer sites showing significant trends. The results from this study indicate that inputs from nonpoint sources of nutrients probably have decreased over time in many of the catchments. Despite the generally small contribution of point-source nutrient loads, they still may have been partially responsible for the significant decreasing trends for nutrients at sites where the total point-source nutrient loads to the catchments equaled a substantial proportion of the in-stream load.
Multiple-target tracking implementation in the ebCMOS camera system: the LUSIPHER prototype
NASA Astrophysics Data System (ADS)
Doan, Quang Tuyen; Barbier, Remi; Dominjon, Agnes; Cajgfinger, Thomas; Guerin, Cyrille
2012-06-01
The domain of low-light imaging systems is progressing very fast, thanks to the evolution of detection and electronic multiplication technologies, such as the emCCD (electron-multiplying CCD) or the ebCMOS (electron-bombarded CMOS). We present an ebCMOS camera system that is able to track, every 2 ms, more than 2000 targets with a mean number of photons per target lower than two. The point light sources (targets) are spots generated by a microlens array (Shack-Hartmann) used in adaptive optics. The multiple-target tracking system designed and implemented on a rugged workstation is described. The results and performance of the system for identification and tracking are presented and discussed.
NASA Astrophysics Data System (ADS)
Maćkowiak-Pawłowska, Maja; Przybyła, Piotr
2018-05-01
Incomplete particle identification limits the experimentally available phase-space region for identified-particle analysis. This problem affects ongoing fluctuation and correlation studies, including the search for the critical point of strongly interacting matter performed at the SPS and RHIC accelerators. In this paper we provide a procedure to obtain nth-order moments of the multiplicity distribution using the identity method, generalising previously published solutions for n = 2 and n = 3. Moreover, we present an open-source software implementation of this computation, called Idhim, that allows one to obtain the true moments of identified-particle multiplicity distributions from the measured ones, provided the response function of the detector is known.
Nguyen, Hoa T; Meir, Patrick; Sack, Lawren; Evans, John R; Oliveira, Rafael S; Ball, Marilyn C
2017-08-01
Leaf structure and water relations were studied in a temperate population of Avicennia marina subsp. australasica along a natural salinity gradient [28 to 49 parts per thousand (ppt)] and compared with two subspecies grown naturally in similar soil salinities to those of subsp. australasica but under different climates: subsp. eucalyptifolia (salinity 30 ppt, wet tropics) and subsp. marina (salinity 46 ppt, arid tropics). Leaf thickness, leaf dry mass per area and water content increased with salinity and aridity. Turgor loss point declined with increase in soil salinity, driven mainly by differences in osmotic potential at full turgor. Nevertheless, a high modulus of elasticity (ε) contributed to maintenance of high cell hydration at turgor loss point. Despite similarity among leaves in leaf water storage capacitance, total leaf water storage increased with increasing salinity and aridity. The time that stored water alone could sustain an evaporation rate of 1 mmol m^-2 s^-1 ranged from 77 to 126 min from subsp. eucalyptifolia to subsp. marina, respectively. Achieving full leaf hydration or turgor would require water from sources other than the roots, emphasizing the importance of multiple water sources to growth and survival of Avicennia marina across gradients in salinity and aridity.
Multiple point statistical simulation using uncertain (soft) conditional data
NASA Astrophysics Data System (ADS)
Hansen, Thomas Mejer; Vu, Le Thanh; Mosegaard, Klaus; Cordua, Knud Skou
2018-05-01
Geostatistical simulation methods have been used to quantify spatial variability of reservoir models since the 1980s. In the last two decades, state-of-the-art simulation methods have changed from being based on covariance-based two-point statistics to multiple-point statistics (MPS), which allow simulation of more realistic Earth structures. In addition, increasing amounts of geo-information (geophysical, geological, etc.) from multiple sources are being collected. This poses the problem of integrating these different sources of information, such that decisions related to reservoir models can be taken on as informed a basis as possible. In principle, though difficult in practice, this can be achieved using computationally expensive Monte Carlo methods. Here we investigate the use of sequential-simulation-based MPS methods conditioned to uncertain (soft) data as a computationally efficient alternative. First, it is demonstrated that current implementations of sequential simulation based on MPS (e.g. SNESIM, ENESIM and Direct Sampling) do not account properly for uncertain conditional information, due to a combination of using only co-located information and a random simulation path. Then, we suggest two approaches that better account for the available uncertain information. The first makes use of a preferential simulation path, where more informed model parameters are visited before less informed ones (see the sketch below). The second approach involves using non-co-located uncertain information. For different types of available data, these approaches are demonstrated to produce simulation results similar to those obtained by the general Monte Carlo based approach. These methods allow MPS simulation to condition properly to uncertain (soft) data, and hence provide a computationally attractive approach for integrating information about a reservoir model.
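A minimal sketch of one way such a preferential path could be built, assuming each grid node carries a scalar "informedness" value derived from the soft data (e.g. one minus the normalized entropy of its local probability distribution); the tie-breaking jitter simply randomizes the order among equally informed nodes.

```python
import numpy as np

def preferential_path(soft_information, seed=None):
    """Visit grid nodes in decreasing order of how informed they are.
    soft_information: 2D array of per-node scalars in [0, 1].
    Returns an array of (row, col) indices defining the simulation path."""
    rng = np.random.default_rng(seed)
    jitter = rng.uniform(0.0, 1e-9, size=soft_information.size)
    order = np.argsort(-(soft_information.ravel() + jitter))
    return np.column_stack(np.unravel_index(order, soft_information.shape))
```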
Probabilistic Analysis of Earthquake-Led Water Contamination: A Case of Sichuan, China
NASA Astrophysics Data System (ADS)
Yang, Yan; Li, Lin; Benjamin Zhan, F.; Zhuang, Yanhua
2016-06-01
The objective of this paper is to evaluate earthquake-induced point-source and non-point-source water pollution, under a seismic hazard with 10% probability of exceedance in 50 years and with the minimum value of the water quality standard, in Sichuan, China. The Soil Conservation Service curve-number method for calculating the runoff depth in a single rainfall event, combined with a seismic damage index, was applied to estimate the potential degree of non-point-source water pollution. To estimate the potential impact of point-source water pollution, a comprehensive water pollution evaluation framework is constructed using a combination of the Water Quality Index and Seismic Damage Index methods. The four key findings of this paper are: (1) The water catchment with the highest factory concentration does not have the highest risk of non-point-source water contamination induced by a potential earthquake. (2) The water catchments with the highest numbers of cumulative water pollutant types are typically located in the south-western parts of Sichuan, where the main river basins in the region flow through. (3) The most common pollutants in the sample factories studied are COD and NH3-N, which are found in all catchments. The least common pollutant is pathogens, found only in the W1 catchment, which has the best rating in the water quality index. (4) Using the water quality index as a standardization parameter, parallel comparisons are made among the 16 water catchments. Only catchment W1 reaches level II water quality status, rated as moderately polluted in the event of earthquake-induced water contamination. All other areas suffer severe water contamination from multiple pollution sources. The results from the data model are significant for urban planning commissions and businesses in strategically choosing factory locations to minimize potential hazardous impact during an earthquake.
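The runoff component rests on the standard SCS curve-number relation, which is simple enough to state directly. A minimal sketch, assuming the conventional initial abstraction Ia = 0.2S and metric units:

```python
def scs_runoff_depth(p_mm, cn):
    """SCS curve-number runoff depth (mm) for a single rainfall event.
    S is the potential maximum retention; runoff starts once rainfall
    exceeds the initial abstraction Ia = 0.2 * S."""
    s = 25400.0 / cn - 254.0
    ia = 0.2 * s
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

# Example: an 80 mm storm on a catchment with CN = 85
print(scs_runoff_depth(80.0, 85))   # ~ 35 mm of runoff
```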
Modeling the Volcanic Source at Long Valley, CA, Using a Genetic Algorithm Technique
NASA Technical Reports Server (NTRS)
Tiampo, Kristy F.
1999-01-01
In this project, we attempted to model the deformation pattern due to the magmatic source at Long Valley caldera using a real-value coded genetic algorithm (GA) inversion similar to that found in Michalewicz, 1992. The project has been both successful and rewarding. The genetic algorithm, coded in the C programming language, performs stable inversions over repeated trials, with varying initial and boundary conditions. The original model used a GA in which the geophysical information was coded into the fitness function through the computation of surface displacements for a Mogi point source in an elastic half-space. The program was designed to invert for a spherical magmatic source - its depth, horizontal location and volume - using the known surface deformations. It also included the capability of inverting for multiple sources.
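The forward model inside the fitness function is the classical Mogi point source, whose surface displacements have a closed form. A minimal sketch, assuming a volume-change parameterization and a default Poisson ratio of 0.25; the GA machinery itself (selection, crossover, mutation) is omitted and only a misfit-based fitness is shown.

```python
import numpy as np

def mogi_displacements(x, y, xs, ys, depth, dV, nu=0.25):
    """Surface displacements of a Mogi point source in an elastic
    half-space: source at (xs, ys, -depth) with volume change dV.
    uz = (1 - nu) * dV * depth / (pi * R^3), R^2 = r^2 + depth^2."""
    dx, dy = x - xs, y - ys
    r2 = dx ** 2 + dy ** 2
    R3 = (r2 + depth ** 2) ** 1.5
    c = (1.0 - nu) / np.pi * dV
    uz = c * depth / R3                 # uplift
    ur = c * np.sqrt(r2) / R3           # radial (horizontal) displacement
    return ur, uz

def fitness(params, obs_xy, obs_uz):
    """GA fitness: negative misfit between modelled and observed uplift.
    params = (xs, ys, depth, dV); higher fitness is better."""
    _, uz = mogi_displacements(obs_xy[:, 0], obs_xy[:, 1], *params)
    return -np.sum((uz - obs_uz) ** 2)
```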
Polarized γ source based on Compton backscattering in a laser cavity
NASA Astrophysics Data System (ADS)
Yakimenko, V.; Pogorelsky, I. V.
2006-09-01
We propose a novel gamma source suitable for generating a polarized positron beam for the next generation of electron-positron colliders, such as the International Linear Collider (ILC) and the Compact Linear Collider (CLIC). This 30-MeV polarized gamma source is based on Compton scattering of electron bunches produced by a 4-GeV linac inside a picosecond CO2 laser cavity. We identified and experimentally verified the optimum conditions for obtaining at least one gamma photon per electron. After multiplication at several consecutive interaction points, the circularly polarized gamma rays are stopped on a target, thereby creating copious numbers of polarized positrons. We address the practicality of having an intracavity Compton-polarized positron source as the injector for these new colliders.
The foodscape: classification and field validation of secondary data sources.
Lake, Amelia A; Burgoine, Thomas; Greenhalgh, Fiona; Stamp, Elaine; Tyrrell, Rachel
2010-07-01
The aims were to develop a food environment classification tool and to test the acceptability and validity of three secondary sources of food environment data within a defined urban area of Newcastle-Upon-Tyne, using a field validation method. A 21-point classification tool (with 77 sub-categories) was developed. The fieldwork recorded 617 establishments selling food and/or food products. In the sensitivity analysis of the secondary sources against fieldwork, the Newcastle City Council data performed well (83.6%), while Yell.com and the Yellow Pages performed poorly (51.2% and 50.9%, respectively). To improve the quality of secondary data, multiple sources should be used in order to achieve a realistic picture of the foodscape.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gwyn, Stephen D. J., E-mail: Stephen.Gwyn@nrc-cnrc.gc.ca
This paper describes the image stacks and catalogs of the Canada-France-Hawaii Telescope Legacy Survey produced using the MegaPipe data pipeline at the Canadian Astronomy Data Centre. The Legacy Survey is divided into two parts. The Deep Survey consists of four fields each of 1 deg^2, with magnitude limits (50% completeness for point sources) of u = 27.5, g = 27.9, r = 27.7, i = 27.4, and z = 26.2. It contains 1.6 × 10^6 sources. The Wide Survey consists of 150 deg^2 split over four fields, with magnitude limits of u = 26.0, g = 26.5, r = 25.9, i = 25.7, and z = 24.6. It contains 3 × 10^7 sources. This paper describes the calibration, image stacking, and catalog generation process. The images and catalogs are available on the web through several interfaces: normal image and text file catalog downloads, a 'Google Sky' interface, an image cutout service, and a catalog database query service.
ERIC Educational Resources Information Center
Cho, In-Young; Park, Hyun-Ju; Choi, Byung-Soon
This study was conducted to describe in detail Korean students' conceptual change learning processes in the study of kinetic theory of gases. The study was interpretive, using multiple data sources to achieve a triangulation of data. Three students from a public high school for boys served as representative cases. How epistemological aspect and…
Study on the Spatial Resolution of Single and Multiple Coincidences Compton Camera
NASA Astrophysics Data System (ADS)
Andreyev, Andriy; Sitek, Arkadiusz; Celler, Anna
2012-10-01
In this paper we study the image resolution that can be obtained with the Multiple Coincidences Compton Camera (MCCC). The principle of MCCC is based on simultaneous acquisition of several gamma-rays emitted in cascade from a single nucleus. Contrary to a standard Compton camera, MCCC can theoretically provide the exact location of a radioactive source (based only on the identification of the intersection point of three cones created by a single decay), without complicated tomographic reconstruction. However, practical implementation of the MCCC approach encounters several problems: for example, low detection sensitivities result in a very low probability of the coincident triple gamma-ray detection that is necessary for source localization. It is also important to evaluate how detection uncertainties (finite energy and spatial resolution) influence the identification of the intersection of the three cones, and thus the resulting image quality. In this study we investigate how the spatial resolution of images reconstructed using the triple-cone reconstruction (TCR) approach compares to that of images reconstructed from the same data using a standard iterative single-cone method. Results show that the FWHM for a point source reconstructed with TCR was 20-30% higher than that obtained from standard iterative reconstruction based on the expectation maximization (EM) algorithm and conventional single-cone Compton imaging. Finite energy and spatial resolutions of the MCCC detectors lead to errors in the definition of the conical surfaces ("thick" conical surfaces), which are only amplified in image reconstruction when the intersection of three cones is sought. Our investigations show that, in spite of being conceptually appealing, the identification of the triple-cone intersection constitutes yet another restriction of the multiple-coincidence approach, which limits the image resolution that can be obtained with MCCC and the TCR algorithm.
Wearable Sensor Localization Considering Mixed Distributed Sources in Health Monitoring Systems
Wan, Liangtian; Han, Guangjie; Wang, Hao; Shu, Lei; Feng, Nanxing; Peng, Bao
2016-01-01
In health monitoring systems, the base station (BS) and the wearable sensors communicate with each other to construct a virtual multiple input and multiple output (VMIMO) system. In real applications, the signal that the BS received is a distributed source because of the scattering, reflection, diffraction and refraction in the propagation path. In this paper, a 2D direction-of-arrival (DOA) estimation algorithm for incoherently-distributed (ID) and coherently-distributed (CD) sources is proposed based on multiple VMIMO systems. ID and CD sources are separated through the second-order blind identification (SOBI) algorithm. The traditional estimating signal parameters via the rotational invariance technique (ESPRIT)-based algorithm is valid only for one-dimensional (1D) DOA estimation for the ID source. By constructing the signal subspace, two rotational invariant relationships are constructed. Then, we extend the ESPRIT to estimate 2D DOAs for ID sources. For DOA estimation of CD sources, two rational invariance relationships are constructed based on the application of generalized steering vectors (GSVs). Then, the ESPRIT-based algorithm is used for estimating the eigenvalues of two rational invariance matrices, which contain the angular parameters. The expressions of azimuth and elevation for ID and CD sources have closed forms, which means that the spectrum peak searching is avoided. Therefore, compared to the traditional 2D DOA estimation algorithms, the proposed algorithm imposes significantly low computational complexity. The intersecting point of two rays, which come from two different directions measured by two uniform rectangle arrays (URA), can be regarded as the location of the biosensor (wearable sensor). Three BSs adopting the smart antenna (SA) technique cooperate with each other to locate the wearable sensors using the angulation positioning method. Simulation results demonstrate the effectiveness of the proposed algorithm. PMID:26985896
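The final angulation step reduces, in the horizontal plane, to intersecting two bearing rays. A minimal 2D sketch under that simplification (the paper works with 2D DOAs from URAs and three cooperating BSs; the names and geometry here are illustrative):

```python
import numpy as np

def angulation_fix(p1, theta1, p2, theta2):
    """Locate a sensor at the intersection of two bearing rays measured
    by two base stations (2D projection of the angulation method).
    p1, p2: station coordinates; theta1, theta2: azimuths in radians."""
    u1 = np.array([np.cos(theta1), np.sin(theta1)])
    u2 = np.array([np.cos(theta2), np.sin(theta2)])
    A = np.column_stack([u1, -u2])           # p1 + t1*u1 = p2 + t2*u2
    t = np.linalg.solve(A, np.asarray(p2, float) - np.asarray(p1, float))
    return np.asarray(p1, float) + t[0] * u1

print(angulation_fix((0, 0), np.pi / 4, (10, 0), 3 * np.pi / 4))
# -> [5. 5.]
```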
The effect of baryons in the cosmological lensing PDFs
NASA Astrophysics Data System (ADS)
Castro, Tiago; Quartin, Miguel; Giocoli, Carlo; Borgani, Stefano; Dolag, Klaus
2018-07-01
Observational cosmology is passing through a unique moment of grandeur with the amount of quality data growing fast. However, in order to better take advantage of this moment, data analysis tools have to keep up the pace. Understanding the effect of baryonic matter on the large-scale structure is one of the challenges to be faced in cosmology. In this work, we have thoroughly studied the effect of baryonic physics on different lensing statistics. Making use of the Magneticum Pathfinder suite of simulations, we show that the influence of luminous matter on the 1-point lensing statistics of point sources is significant, enhancing the probability of magnified objects with μ > 3 by a factor of 2 and the occurrence of multiple images by a factor of 5-500, depending on the source redshift and size. We also discuss the dependence of the lensing statistics on the angular resolution of sources. Our results and methodology were carefully tested to guarantee that our uncertainties are much smaller than the effects here presented.
A point-source outbreak of Coccidioidomycosis among a highway construction crew.
Nicas, Mark
2018-01-01
Coccidioidomycosis is an infection caused by inhaling spores of the soil fungus Coccidioides immitis (hereafter termed Cocci). Cocci is endemic in certain areas of California. When soil containing the fungus is disturbed, as during earth-moving activities, respirable Cocci spores can become airborne and be inhaled by persons in the vicinity. This article describes a cluster of seven Coccidioidomycosis cases among a highway construction crew that occurred in June/July 2008 in Kern County, CA, which is among the most highly endemic regions for Cocci in California. The exposures spanned no more than seven work days, and illness developed within two to three weeks of the exposures. Given the common source of exposure (soil dust generated at the work site) and the multiple cases occurring close in time, the cluster can also be termed a "point-source outbreak." The contractor was not informed of the infection risk and did not take adequate precautions against dust exposure. Appropriate engineering/administrative controls and respiratory protection are discussed.
SEARCHES FOR HIGH-ENERGY NEUTRINO EMISSION IN THE GALAXY WITH THE COMBINED ICECUBE-AMANDA DETECTOR
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abbasi, R.; Ahlers, M.; Andeen, K.
2013-01-20
We report on searches for neutrino sources at energies above 200 GeV in the Northern sky of the Galactic plane, using the data collected by the South Pole neutrino telescope, IceCube, and AMANDA. The Galactic region considered in this work includes the local arm toward the Cygnus region and our closest approach to the Perseus Arm. The searches are based on the data collected between 2007 and 2009. During this time AMANDA was an integrated part of IceCube, which was still under construction and operated with 22 strings (2007-2008) and 40 strings (2008-2009) of optical modules deployed in the ice. By combining the advantages of the larger IceCube detector with the lower energy threshold of the more compact AMANDA detector, we obtain an improved sensitivity at energies below ~10 TeV with respect to previous searches. The analyses presented here are a scan for point sources within the Galactic plane, a search optimized for multiple and extended sources in the Cygnus region, which might be below the sensitivity of the point source scan, and studies of seven pre-selected neutrino source candidates. For one of them, Cygnus X-3, a time-dependent search for neutrino emission in coincidence with observed radio and X-ray flares has been performed. No evidence of a signal is found, and upper limits are reported for each of the searches. We investigate neutrino spectra proportional to E^-2 and E^-3 in order to cover the entire range of possible neutrino spectra. The steeply falling E^-3 neutrino spectrum can also be used to approximate neutrino energy spectra with energy cutoffs below 50 TeV, since these result in a similar energy distribution of events in the detector. For the region of the Galactic plane visible in the Northern sky, the 90% confidence level muon neutrino flux upper limits are in the range E^3 dN/dE ≈ 5.4-19.5 × 10^-11 TeV^2 cm^-2 s^-1 for point-like neutrino sources in the energy region [180.0 GeV-20.5 TeV]. These represent the most stringent upper limits for soft-spectra neutrino sources within the Galaxy reported to date.
NASA Astrophysics Data System (ADS)
Hain, Roger; Allen, Christopher E.; Anderson, Craig S.; Budynkiewicz, Jamie A.; Burke, Douglas; Chen, Judy C.; Civano, Francesca Maria; D'Abrusco, Raffaele; Doe, Stephen M.; Evans, Ian N.; Evans, Janet D.; Fabbiano, Giuseppina; Gibbs, Danny G., II; Glotfelty, Kenny J.; Graessle, Dale E.; Grier, John D.; Hall, Diane M.; Harbo, Peter N.; Houck, John C.; Lauer, Jennifer L.; Laurino, Omar; Lee, Nicholas P.; Martínez-Galarza, Juan Rafael; McCollough, Michael L.; McDowell, Jonathan C.; Miller, Joseph; McLaughlin, Warren; Morgan, Douglas L.; Mossman, Amy E.; Nguyen, Dan T.; Nichols, Joy S.; Nowak, Michael A.; Paxson, Charles; Plummer, David A.; Primini, Francis Anthony; Rots, Arnold H.; Siemiginowska, Aneta; Sundheim, Beth A.; Tibbetts, Michael; Van Stone, David W.; Zografou, Panagoula
2018-01-01
The Second Chandra Source Catalog (CSC2.0) combines data at multiple stages to improve detection efficiency, enhance source region identification, and match observations of the same celestial source taken with significantly different point spread functions on Chandra's detectors. The need to group data for different reasons at different times in processing results in a hierarchy of groups to which individual sources belong. Source data are initially identified as belonging to each Chandra observation ID and number (an "obsid"). Data from each obsid whose pointings are within sixty arcseconds of each other are reprojected to the same aspect reference coordinates and grouped into stacks. Detection is performed on all data in the same stack, and individual sources are identified. Finer source position and region data are determined by further processing sources whose photons may be commingled together, grouping such sources into bundles. Individual stacks which overlap to any extent are grouped into ensembles, and all stacks in the same ensemble are later processed together to identify master sources and determine their properties. We discuss the basis for the various methods of combining data for processing and precisely define how the groups are determined. We also investigate some of the issues related to grouping data and discuss what options exist and how groups have evolved from prior releases. This work has been supported by NASA under contract NAS 8-03060 to the Smithsonian Astrophysical Observatory for operation of the Chandra X-ray Center.
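The ensemble rule, stacks that overlap to any extent end up grouped together transitively, is exactly the connected-components problem, for which union-find is the textbook tool. A minimal sketch, assuming overlap pairs have already been computed; this illustrates the grouping logic only and is not CSC2.0 pipeline code.

```python
def group_into_ensembles(stacks, overlaps):
    """Group stacks into ensembles with union-find: any two stacks that
    overlap end up in the same ensemble, transitively.
    stacks: list of stack ids; overlaps: iterable of (id_a, id_b) pairs."""
    parent = {s: s for s in stacks}
    def find(s):
        while parent[s] != s:
            parent[s] = parent[parent[s]]   # path halving
            s = parent[s]
        return s
    for a, b in overlaps:
        parent[find(a)] = find(b)           # union the two components
    ensembles = {}
    for s in stacks:
        ensembles.setdefault(find(s), []).append(s)
    return list(ensembles.values())

print(group_into_ensembles(["s1", "s2", "s3", "s4"],
                           [("s1", "s2"), ("s2", "s3")]))
# -> [['s1', 's2', 's3'], ['s4']]
```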
Drivers of microbiological quality of household drinking water - a case study in rural Ethiopia.
Usman, Muhammed A; Gerber, Nicolas; Pangaribowo, Evita H
2018-04-01
This study aims at assessing the determinants of microbiological contamination of household drinking water under multiple-use water systems in rural areas of Ethiopia. For this analysis, a random sample of 454 households was surveyed between February and March 2014, and water samples from community sources and household storage containers were collected and tested for fecal contamination. The number of Escherichia coli (E. coli) colony-forming units per 100 mL water was used as an indicator of fecal contamination. The microbiological tests demonstrated that 58% of household stored water samples and 38% of protected community water sources were contaminated with E. coli. Moreover, most improved water sources often considered to provide safe water showed the presence of E. coli. The result shows that households' stored water collected from unprotected wells/springs had higher levels of E. coli than stored water from alternative sources. Distance to water sources and water collection containers are also strongly associated with stored water quality. To ensure the quality of stored water, the study suggests that there is a need to promote water safety from the point-of-source to point-of-use, with due considerations for the linkages between water and agriculture to advance the Sustainable Development Goal 6 of ensuring access to clean water for everyone.
NASA Astrophysics Data System (ADS)
Unterberg, Ea; Donovan, D.; Barton, J.; Wampler, Wr; Abrams, T.; Thomas, Dm; Petrie, T.; Guo, Hy; Stangeby, Pg; Elder, Jd; Rudakov, D.; Grierson, B.; Victor, B.
2017-10-01
Experiments using metal inserts with novel isotopically-enriched tungsten coatings at the outer divertor strike point (OSP) have provided unique insight into the ELM-induced sourcing, main-SOL transport, and core accumulation control mechanisms of W for a range of operating conditions. This experimental approach has used a multi-head, dual-facing collector probe (CP) at the outboard midplane, as well as W-I and core W spectroscopy. Using the CP system, the total amount of W deposited relative to source measurements shows a clear dependence on ELM size, ELM frequency, and strike point location, with large ELMs depositing significantly more W on the CP from the far-SOL source. Additionally, high-spatial-resolution (~1 mm), ELM-resolved spectroscopic measurements of W sourcing indicate shifts in the peak erosion rate. Furthermore, high performance discharges with rapid ELMs show core W concentrations of a few 10^-5, and the CP deposition profile indicates W is predominantly transported to the midplane from the OSP rather than from the far-SOL region. The low central W concentration is shown to be due to flattening of the main plasma density profile, presumably by on-axis electron cyclotron heating. Work supported under USDOE Cooperative Agreement DE-FC02-04ER54698.
On the VHF Source Retrieval Errors Associated with Lightning Mapping Arrays (LMAs)
NASA Technical Reports Server (NTRS)
Koshak, W.
2016-01-01
This presentation examines in detail the standard retrieval method: that of retrieving the (x, y, z, t) parameters of a lightning VHF point source from multiple ground-based Lightning Mapping Array (LMA) time-of-arrival (TOA) observations. The solution is found by minimizing a chi-squared function via the Levenberg-Marquardt algorithm. The associated forward problem is examined to illustrate the importance of signal-to-noise ratio (SNR). Monte Carlo simulated retrievals are used to assess the benefits of changing various LMA network properties. A generalized retrieval method is also introduced that, in addition to TOA data, uses LMA electric field amplitude measurements to retrieve a transient VHF dipole moment source.
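A minimal sketch of the standard retrieval described above, assuming straight-line propagation at the speed of light, and using SciPy's least_squares with method='lm' (the Levenberg-Marquardt algorithm named in the text) to minimize the chi-squared sum of TOA residuals; station coordinates and the initial guess are caller-supplied, and at least four stations are required.

```python
import numpy as np
from scipy.optimize import least_squares

C = 2.998e8  # speed of light, m/s

def toa_residuals(params, stations, t_obs):
    """Residuals between observed arrival times and those predicted for
    a candidate VHF point source (x, y, z, t)."""
    x, y, z, t = params
    d = np.linalg.norm(stations - np.array([x, y, z]), axis=1)
    return (t + d / C) - t_obs

def retrieve_source(stations, t_obs, guess):
    """Minimize the chi-squared sum of squared TOA residuals; needs
    len(stations) >= 4 for the 'lm' method."""
    sol = least_squares(toa_residuals, guess, args=(stations, t_obs),
                        method="lm")
    return sol.x   # retrieved (x, y, z, t)
```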
NASA Astrophysics Data System (ADS)
Zheng, Sifa; Liu, Haitao; Dan, Jiabi; Lian, Xiaomin
2015-05-01
The linear time-invariant assumption for determining acoustic source characteristics, the source strength and the source impedance, in the frequency domain has been proved reasonable in the design of an exhaust system. Different methods have been proposed for this identification, and the multi-load method is widely used for its convenience: it varies the load number and impedance. Theoretical error analysis has rarely been addressed, although previous results have shown that an overdetermined set of open pipes can reduce the identification error. This paper contributes a theoretical error analysis for load selection. The relationships between the error in the identification of source characteristics and the load selection were analysed. A general linear time-invariant model was built based on the four-load method. To analyse the error of the source impedance, an error estimation function was proposed. The dispersion of the source pressure, obtained by an inverse calculation, was used as an indicator of the accuracy of the results. It was found that, for a given load length, the load resistance peaks at the frequencies where the load is an odd multiple of a quarter wavelength long, and the source impedance identification error is largest there. Therefore, the load impedance in frequency ranges around odd multiples of the quarter-wavelength resonance should not be used for source impedance identification. If the selected loads have more similar resistance values (i.e., of the same order of magnitude), the identification error of the source impedance can be effectively reduced.
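The multi-load identification itself is linear at each frequency once the source model p_i = p_s Z_L,i / (Z_s + Z_L,i) is rearranged. A minimal sketch of the (over)determined least-squares solve, with hypothetical variable names; selecting loads away from the quarter-wavelength frequencies, per the guidance above, is left to the caller.

```python
import numpy as np

def identify_source(Z_loads, p_meas):
    """Least-squares multi-load identification at one frequency.
    Model: p_i = p_s * Z_L,i / (Z_s + Z_L,i), rewritten linearly as
    p_s * Z_L,i - p_i * Z_s = p_i * Z_L,i. With N >= 2 loads this is an
    (over)determined linear system for the unknowns (p_s, Z_s)."""
    Z_loads = np.asarray(Z_loads, dtype=complex)
    p_meas = np.asarray(p_meas, dtype=complex)
    A = np.column_stack([Z_loads, -p_meas])
    b = p_meas * Z_loads
    (p_s, Z_s), *_ = np.linalg.lstsq(A, b, rcond=None)
    return p_s, Z_s
```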
Investigation of Finite Sources through Time Reversal
NASA Astrophysics Data System (ADS)
Kremers, Simon; Brietzke, Gilbert; Igel, Heiner; Larmat, Carene; Fichtner, Andreas; Johnson, Paul A.; Huang, Lianjie
2010-05-01
Under certain conditions time reversal is a promising method to determine earthquake source characteristics without any a priori information (except the earth model and the data). It consists of injecting time-flipped records from seismic stations back into the model to create an approximate reverse movie of wave propagation, from which the location of the hypocenter and other information may be inferred. In this study, the backward propagation is performed numerically using a parallel Cartesian spectral-element code. Initial tests using point-source moment tensors serve as a control of the suitability of the wave propagation algorithm. We then investigated the potential of time reversal to recover finite-source characteristics (e.g., size of the ruptured area, rupture velocity, etc.). We used synthetic data from the SPICE kinematic source inversion blind test, initiated to investigate the performance of current kinematic source inversion approaches (http://www.spice-rtn.org/library/valid). The synthetic data set attempts to reproduce the 2000 Tottori earthquake with 33 records close to the fault. We discuss the influence on the results of the time reversal process of various assumptions made about the source (e.g., origin time, hypocenter, fault location), the adjoint source weighting (e.g., correcting for epicentral distance) and the structure (uncertainty in the velocity model). We give an overview of the quality of focusing of the different wavefield properties (i.e., displacements, strains, rotations, energies). Additionally, the potential to recover source properties of multiple point sources at the same time is discussed.
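As a toy illustration of the basic mechanism (a 1-D constant-velocity stand-in for the spectral-element simulations described above), the sketch below propagates a point source forward, flips the station records in time, re-injects them, and looks for the refocusing point. Grid sizes, station positions and the source wavelet are all invented.

```python
# 1-D time reversal demo: forward modeling, record, flip, back-propagate.
import numpy as np

nx, nt, dx, dt, c = 600, 1500, 10.0, 1e-3, 3000.0    # CFL number c*dt/dx = 0.3
lap = lambda u: np.concatenate(([0.0], u[2:] - 2*u[1:-1] + u[:-2], [0.0])) / dx**2

def propagate(sources):
    """Second-order FD wave propagation; sources = {grid index: time series}."""
    u_prev, u, movie = np.zeros(nx), np.zeros(nx), []
    for it in range(nt):
        u_next = 2*u - u_prev + (c*dt)**2 * lap(u)
        for ix, s in sources.items():
            u_next[ix] += s[it] * dt**2
        u_prev, u = u, u_next
        movie.append(u.copy())
    return np.array(movie)

# forward: Ricker-like pulse radiated from a point source, recorded at stations
t = np.arange(nt) * dt
f0, t0 = 10.0, 0.1
rick = (1 - 2*(np.pi*f0*(t - t0))**2) * np.exp(-(np.pi*f0*(t - t0))**2)
ix_src, stations = 200, [50, 350, 550]
wave = propagate({ix_src: rick})

# reverse: inject the flipped-in-time records at the station positions
back = propagate({ix: wave[:, ix][::-1] for ix in stations})
peak = np.max(np.abs(back), axis=0)
mask = np.ones(nx, bool)
for ix in stations:                         # ignore the injection points themselves
    mask[max(ix - 5, 0):ix + 6] = False
print("true source index:", ix_src,
      " focus index:", int(np.argmax(np.where(mask, peak, 0.0))))
```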
Gaussian process surrogates for failure detection: A Bayesian experimental design approach
NASA Astrophysics Data System (ADS)
Wang, Hongqiao; Lin, Guang; Li, Jinglai
2016-05-01
An important task of uncertainty quantification is to identify the probability of undesired events, in particular system failures, caused by various sources of uncertainty. In this work we consider the construction of Gaussian process surrogates for failure detection and failure probability estimation. In particular, we consider the situation where the underlying computer models are extremely expensive; in this setting, determining the sampling points in the state space is of essential importance. We formulate the problem as an optimal experimental design for Bayesian inference of the limit state (i.e., the failure boundary) and propose an efficient numerical scheme to solve the resulting optimization problem. In particular, the proposed limit-state inference method is capable of determining multiple sampling points at a time, and thus it is well suited for problems where multiple computer simulations can be performed in parallel. The accuracy and performance of the proposed method are demonstrated by both academic and practical examples.
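A minimal sketch of the underlying surrogate idea (not the paper's design algorithm) is given below: a GP is fitted to a few evaluations of a toy limit-state function, and the failure probability is then estimated by cheap Monte Carlo on the surrogate. The predictive standard deviation is the ingredient an experimental-design criterion would use to pick new points.

```python
# GP surrogate for failure probability estimation, P[g(x) < 0].
# The limit-state function g below is a stand-in for an expensive model.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

g = lambda x: 3.0 - x[:, 0]**2 - 0.5 * x[:, 1]           # toy limit state
rng = np.random.default_rng(0)

X_train = rng.uniform(-3, 3, size=(30, 2))               # design points
gp = GaussianProcessRegressor(kernel=RBF(1.0), alpha=1e-6).fit(X_train, g(X_train))

X_mc = rng.standard_normal((100_000, 2))                 # uncertain inputs
mean, std = gp.predict(X_mc, return_std=True)
print("P_fail (surrogate mean):", np.mean(mean < 0.0))
# 'std' flags inputs near the inferred limit state, i.e. where a new
# expensive simulation would be most informative.
```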
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Dong; Heidelberger, Philip; Sugawara, Yutaka
An apparatus and method for extending the scalability and improving the partitionability of networks that contain all-to-all links for transporting packet traffic from a source endpoint to a destination endpoint with low per-endpoint (per-server) cost and a small number of hops. An all-to-all wiring in the baseline topology is decomposed into smaller all-to-all components in which each smaller all-to-all connection is replaced with a star topology by using global switches. Stacking multiple copies of the star-topology baseline network creates a multi-planed switching topology for transporting packet traffic. The point-to-point unified stacking method uses global switch wiring methods to connect multiple planes of a baseline topology, creating a large network size with a low number of hops, i.e., low network latency. The grouped unified stacking method increases the scalability (network size) of a stacked topology.
Acoustic 3D modeling by the method of integral equations
NASA Astrophysics Data System (ADS)
Malovichko, M.; Khokhlov, N.; Yavich, N.; Zhdanov, M.
2018-02-01
This paper presents a parallel algorithm for frequency-domain acoustic modeling by the method of integral equations (IE). The algorithm is applied to seismic simulation. The IE method reduces the size of the problem but leads to a dense system matrix. Tolerable memory consumption and numerical complexity were achieved by applying an iterative solver, accompanied by an effective matrix-vector multiplication operation based on the fast Fourier transform (FFT). We demonstrate that the IE system matrix is better conditioned than that of the finite-difference (FD) method, and discuss its relation to a specially preconditioned FD matrix. We considered several methods of matrix-vector multiplication for the free-space and layered host models. The developed algorithm and computer code were benchmarked against an FD time-domain solution. It was demonstrated that the method can accurately calculate the seismic field for models with sharp material boundaries and a point source and receiver located close to the free surface. We used OpenMP to speed up the matrix-vector multiplication, while MPI was used to speed up the solution of the system equations and to parallelize across multiple sources. Practical examples and efficiency tests are presented as well.
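The computational core, an iterative Krylov solver whose matrix-vector product is applied with the FFT rather than with a stored dense matrix, can be sketched in a few lines. The 1-D toy below (invented kernel and contrast, not the paper's Green's functions) illustrates the pattern with SciPy's GMRES and a LinearOperator.

```python
# Matrix-free IE solve: (I - G*chi) u = u_inc, with the convolution by G
# applied in O(N log N) via the FFT inside GMRES.
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

n = 4096
x = np.arange(n)
kernel = 0.02 * np.exp(-0.05 * np.minimum(x, n - x))   # toy circular kernel G
chi = np.zeros(n); chi[1500:2500] = 0.8                # scattering contrast
K_hat = np.fft.fft(kernel)

def matvec(u):
    """(I - G*chi) u via FFT-based circular convolution."""
    return u - np.fft.ifft(K_hat * np.fft.fft(chi * u)).real

A = LinearOperator((n, n), matvec=matvec, dtype=float)
u_inc = np.sin(2 * np.pi * 5 * x / n)                  # incident field
u, info = gmres(A, u_inc)
print("GMRES info:", info, " residual:", np.linalg.norm(matvec(u) - u_inc))
```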
NASA Technical Reports Server (NTRS)
McGill, Matthew J. (Inventor); Scott, Vibart S. (Inventor); Marzouk, Marzouk (Inventor)
2001-01-01
A holographic optical element transforms a spectral distribution of light to image points. The element comprises areas, each of which acts as a separate lens to image the light incident in its area to an image point. Each area contains the recorded hologram of a point source object. The image points can be made to lie in a line in the same focal plane so as to align with a linear array detector. A version of the element has been developed that has concentric equal areas to match the circular fringe pattern of a Fabry-Perot interferometer. The element has high transmission efficiency, and when coupled with high quantum efficiency solid state detectors, provides an efficient photon-collecting detection system. The element may be used as part of the detection system in a direct detection Doppler lidar system or multiple field of view lidar system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mosher, J. C.; Baillet, S.; Jerbi, K.
2001-01-01
We describe the use of truncated multipolar expansions for producing dynamic images of cortical neural activation from measurements of the magnetoencephalogram. We use a signal-subspace method to find the locations of a set of multipolar sources, each of which represents a region of activity in the cerebral cortex. Our method builds up an estimate of the sources in a recursive manner, i.e. we first search for point current dipoles, then magnetic dipoles, and finally first-order multipoles. The dynamic behavior of these sources is then computed using a linear fit to the spatiotemporal data. The final step in the procedure is to map each of the multipolar sources into an equivalent distributed source on the cortical surface. The method is illustrated through an application to epileptic interictal MEG data.
Method of Making Large Area Nanostructures
NASA Technical Reports Server (NTRS)
Marks, Alvin M.
1995-01-01
A method which enables the high-speed formation of nanostructures on large-area surfaces is described. The method uses a super sub-micron beam writer (Supersebter). The Supersebter uses a large-area multi-electrode (Spindt-type emitter) source to produce multiple electron beams that are simultaneously scanned to form a pattern on a surface in an electron beam writer. A 100,000 × 100,000 array of electron point sources, demagnified in a long electron beam writer to simultaneously produce 10 billion nano-patterns on a 1 m² surface by multi-electron-beam impact on a 1 cm² surface of an insulating material, is proposed.
Tissue engineering and regenerative medicine as applied to the gastrointestinal tract.
Bitar, Khalil N; Zakhem, Elie
2013-10-01
The gastrointestinal (GI) tract is a complex system characterized by multiple cell types with a determined architectural arrangement. Tissue engineering of the GI tract aims to reinstate the architecture and function of all structural layers. A key point for successful tissue regeneration is the use of cells/biomaterials that elicit a minimal immune response after implantation. Different biomaterial choices and cell sources have been proposed to engineer the GI tract. This review summarizes the recent advances in bioengineering the GI tract with emphasis on cell sources and scaffolding biomaterials. Copyright © 2013 Elsevier Ltd. All rights reserved.
MICA: Multiple interval-based curve alignment
NASA Astrophysics Data System (ADS)
Mann, Martin; Kahle, Hans-Peter; Beck, Matthias; Bender, Bela Johannes; Spiecker, Heinrich; Backofen, Rolf
2018-01-01
MICA enables the automatic synchronization of discrete data curves. To this end, characteristic points of the curves' shapes are identified. These landmarks are used within a heuristic curve-registration approach to align profile pairs by mapping similar characteristics onto each other. In combination with a progressive alignment scheme, this enables the computation of multiple curve alignments. Multiple curve alignments are needed to derive meaningful representative consensus data of measured time or data series. MICA has already been successfully applied to generate representative profiles of tree growth data based on intra-annual wood density profiles or cell formation data. The MICA package provides a command-line and graphical user interface. The R interface enables the direct embedding of multiple curve alignment computation into larger analysis pipelines. Source code, binaries and documentation are freely available at https://github.com/BackofenLab/MICA
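A rough sketch of the landmark-warping idea (not the MICA package itself, which adds heuristic landmark detection and progressive multiple alignment) is shown below: matched characteristic points of two curves define a piecewise-linear rewarping of one onto the other.

```python
# Landmark-based pairwise curve alignment via piecewise-linear rewarping.
import numpy as np

def warp_to_reference(y, landmarks_y, landmarks_ref):
    """Rewarp curve y so its landmarks land on the reference's landmarks."""
    old_pos = np.arange(len(y), dtype=float)
    # map each output index to a source index through the landmark pairs
    src = np.interp(old_pos, landmarks_ref, landmarks_y)
    return np.interp(src, old_pos, y)

t = np.linspace(0, 1, 200)
ref = np.sin(2 * np.pi * t)
shifted = np.sin(2 * np.pi * t**1.3)          # nonlinearly distorted copy
# landmark indices (extrema plus endpoints), located directly for the demo
lm_ref = np.array([0, np.argmax(ref), np.argmin(ref), 199], float)
lm_shift = np.array([0, np.argmax(shifted), np.argmin(shifted), 199], float)

aligned = warp_to_reference(shifted, lm_shift, lm_ref)
print("RMS before:", np.sqrt(np.mean((shifted - ref)**2)),
      " after:", np.sqrt(np.mean((aligned - ref)**2)))
```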
NASA Astrophysics Data System (ADS)
Lachat, E.; Landes, T.; Grussenmeyer, P.
2018-05-01
Terrestrial and airborne laser scanning, photogrammetry and, more generally, 3D recording techniques are used in a wide range of applications. After recording several individual 3D datasets known in local systems, one of the first crucial processing steps is the registration of these data into a common reference frame. To perform such a 3D transformation, commercial and open-source software as well as programs from the academic community are available. Because these solutions lack transparency in their computations and in their quality assessment, it was decided to develop the open-source algorithm presented in this paper. It is dedicated to the simultaneous registration of multiple point clouds as well as their georeferencing. The idea is to use this algorithm as a starting point for further implementations, including the possibility of combining 3D data from different sources. In parallel with the presentation of the global registration methodology employed, the aim of this paper is to compare the results achieved this way with the above-mentioned existing solutions. For this purpose, first results obtained with the proposed algorithm for the global registration of ten laser scanning point clouds are presented. An analysis of the quality criteria delivered by two selected software packages used in this study, and a reflection on these criteria, is also performed to complete the comparison of the obtained results. The final aim of this paper is to validate the efficiency of the proposed method through these comparisons.
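One building block of any such registration algorithm is the least-squares rigid transform between corresponding point sets. A minimal sketch of this step follows (the Kabsch/Horn SVD solution, with invented data); the paper's simultaneous multi-cloud adjustment and georeferencing are not reproduced here.

```python
# Least-squares rigid transform (R, t) between corresponding 3D point sets.
import numpy as np

def rigid_transform(P, Q):
    """R, t minimizing sum ||R @ P_i + t - Q_i||^2 over correspondences."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
    R = Vt.T @ D @ U.T
    return R, cQ - R @ cP

rng = np.random.default_rng(1)
P = rng.uniform(-5, 5, (100, 3))
angle = 0.4
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0], [0, 0, 1]])
Q = P @ R_true.T + np.array([1.0, -2.0, 0.5]) + 0.01 * rng.standard_normal(P.shape)

R, t = rigid_transform(P, Q)
print("rotation error:", np.linalg.norm(R - R_true), " translation:", t)
```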
THE CHANDRA COSMOS SURVEY. III. OPTICAL AND INFRARED IDENTIFICATION OF X-RAY POINT SOURCES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Civano, F.; Elvis, M.; Aldcroft, T.
2012-08-01
The Chandra COSMOS Survey (C-COSMOS) is a large, 1.8 Ms Chandra program that has imaged the central 0.9 deg² of the COSMOS field down to limiting depths of 1.9 × 10^-16 erg cm^-2 s^-1 in the soft (0.5-2 keV) band, 7.3 × 10^-16 erg cm^-2 s^-1 in the hard (2-10 keV) band, and 5.7 × 10^-16 erg cm^-2 s^-1 in the full (0.5-10 keV) band. In this paper we report the i, K, and 3.6 μm identifications of the 1761 X-ray point sources. We use the likelihood ratio technique to derive the association of optical/infrared counterparts for 97% of the X-ray sources. For most of the remaining 3%, the presence of multiple counterparts or the faintness of the possible counterpart prevented a unique association. For only 10 X-ray sources we were not able to associate a counterpart, mostly due to the presence of a very bright field source close by. Only two sources are truly empty fields. The full catalog, including spectroscopic and photometric redshifts and the classification described here in detail, is available online. Making use of the large number of X-ray sources, we update the 'classic locus' of active galactic nuclei (AGNs) defined 20 years ago in soft X-ray surveys and define a new locus containing 90% of the AGNs in the survey with full-band luminosity >10^42 erg s^-1. We present the linear fit between the total i-band magnitude and the X-ray flux in the soft and hard bands, drawn over two orders of magnitude in X-ray flux, obtained using the combined C-COSMOS and XMM-COSMOS samples. We focus on the X-ray to optical flux ratio (X/O) and we test its known correlation with redshift and luminosity, and a recently introduced anti-correlation with the concentration index (C). We find a strong anti-correlation (though the dispersion is of the order of 0.5 dex) between X/O computed in the hard band and C, and that 90% of the obscured AGNs in the sample with morphological information live in galaxies with regular morphology (bulgy and disky/spiral), suggesting that secular processes govern a significant fraction of black hole growth at X-ray luminosities of 10^43-10^44.5 erg s^-1. We also investigate the degree of obscuration of the sample using the hardness ratio, and we compare the X-ray color with the near-infrared to optical color.
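For readers unfamiliar with the likelihood ratio technique, the sketch below shows it in its usual form, LR(m, r) = q(m) f(r) / n(m): the probability of a true counterpart with magnitude m at offset r relative to that of a chance interloper. The magnitude distributions and positional uncertainty used here are placeholders, not C-COSMOS values.

```python
# Likelihood-ratio counterpart matching in its standard form.
import numpy as np

def likelihood_ratio(m, r, sigma_pos, q_m, n_m):
    """q(m): magnitude prior of true counterparts; n(m): background surface
    density per magnitude (arcsec^-2); r: offset in arcsec."""
    f_r = np.exp(-r**2 / (2 * sigma_pos**2)) / (2 * np.pi * sigma_pos**2)
    return q_m(m) * f_r / n_m(m)

q_m = lambda m: np.exp(-0.5 * ((m - 21.0) / 1.5)**2)   # toy counterpart prior
n_m = lambda m: 10.0**(0.3 * (m - 26.0))               # toy background counts

# two candidate counterparts for one X-ray source: (i-mag, offset in arcsec)
for mag, off in [(20.8, 0.6), (24.5, 1.8)]:
    print(mag, off, "LR =",
          likelihood_ratio(mag, off, sigma_pos=0.8, q_m=q_m, n_m=n_m))
# A reliability value then normalizes each LR by the sum over all
# candidates plus a term for "no counterpart".
```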
Statistical signatures of a targeted search by bacteria
NASA Astrophysics Data System (ADS)
Jashnsaz, Hossein; Anderson, Gregory G.; Pressé, Steve
2017-12-01
Chemoattractant gradients are rarely well-controlled in nature and recent attention has turned to bacterial chemotaxis toward typical bacterial food sources such as food patches or even bacterial prey. In environments with localized food sources reminiscent of a bacterium’s natural habitat, striking phenomena—such as the volcano effect or banding—have been predicted or expected to emerge from chemotactic models. However, in practice, from limited bacterial trajectory data it is difficult to distinguish targeted searches from an untargeted search strategy for food sources. Here we use a theoretical model to identify statistical signatures of a targeted search toward point food sources, such as prey. Our model is constructed on the basis that bacteria use temporal comparisons to bias their random walk, exhibit finite memory and are subject to random (Brownian) motion as well as signaling noise. The advantage with using a stochastic model-based approach is that a stochastic model may be parametrized from individual stochastic bacterial trajectories but may then be used to generate a very large number of simulated trajectories to explore average behaviors obtained from stochastic search strategies. For example, our model predicts that a bacterium’s diffusion coefficient increases as it approaches the point source and that, in the presence of multiple sources, bacteria may take substantially longer to locate their first source giving the impression of an untargeted search strategy.
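A toy version of such a model is easy to simulate. The sketch below implements a 2-D run-and-tumble walker whose tumble rate is biased by a temporal comparison of concentration over a finite memory, with Brownian and signaling noise included; all parameter values are invented for illustration.

```python
# Biased run-and-tumble walker approaching a point food source at the origin.
import numpy as np

rng = np.random.default_rng(2)
c = lambda p: np.exp(-np.linalg.norm(p) / 50.0)   # concentration field
dt, v, D_tr, D_rot, memory = 0.05, 20.0, 1.0, 0.1, 20

pos = np.array([150.0, 80.0])                     # start ~170 um from the source
theta = rng.uniform(0, 2 * np.pi)
history = [c(pos)]
start = np.linalg.norm(pos)
for _ in range(4000):
    pos = pos + v * dt * np.array([np.cos(theta), np.sin(theta)])
    pos += np.sqrt(2 * D_tr * dt) * rng.standard_normal(2)    # Brownian motion
    history.append(c(pos) + 1e-4 * rng.standard_normal())     # sensed signal + noise
    history = history[-memory:]
    dc = history[-1] - history[0]                 # temporal comparison over memory
    p_tumble = dt * np.exp(-500.0 * dc)           # tumbling suppressed when climbing
    if rng.random() < min(p_tumble, 1.0):
        theta = rng.uniform(0, 2 * np.pi)         # tumble: pick a new direction
    theta += np.sqrt(2 * D_rot * dt) * rng.standard_normal()  # rotational diffusion
print("distance to source: start", start, " end", np.linalg.norm(pos))
```

Running many such trajectories is how a model like this exposes the statistical signatures discussed above, for example an apparent diffusion coefficient that grows near the source.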
Duffy, Sandra; Sykes, Melissa L; Jones, Amy J; Shelper, Todd B; Simpson, Moana; Lang, Rebecca; Poulsen, Sally-Ann; Sleebs, Brad E; Avery, Vicky M
2017-09-01
Open-access drug discovery provides a substantial resource for diseases primarily affecting the poor and disadvantaged. The open-access Pathogen Box collection is comprised of compounds with demonstrated biological activity against specific pathogenic organisms. The supply of this resource by the Medicines for Malaria Venture has the potential to provide new chemical starting points for a number of tropical and neglected diseases, through repurposing of these compounds for use in drug discovery campaigns for these additional pathogens. We tested the Pathogen Box against kinetoplastid parasites and malaria life cycle stages in vitro. Consequently, chemical starting points for malaria, human African trypanosomiasis, Chagas disease, and leishmaniasis drug discovery efforts have been identified. Inclusive of this in vitro biological evaluation, outcomes from extensive literature reviews and database searches are provided. This information encompasses commercial availability, literature reference citations, other aliases and ChEMBL number with associated biological activity, where available. The release of this new data for the Pathogen Box collection into the public domain will aid the open-source model of drug discovery. Importantly, this will provide novel chemical starting points for drug discovery and target identification in tropical disease research. Copyright © 2017 Duffy et al.
Levine, Zachary H.; Pintar, Adam L.; Dobler, Jeremy T.; ...
2016-04-13
Laser absorption spectroscopy (LAS) has been used over the last several decades for the measurement of trace gases in the atmosphere. For over a decade, LAS measurements from multiple sources and tens of retroreflectors have been combined with sparse-sample tomography methods to estimate the 2-D distribution of trace gas concentrations and the underlying fluxes from point-like sources. In this work, we consider the ability of such a system to detect and estimate the position and rate of a single point leak, which may arise as a failure mode for carbon dioxide storage. The leak is assumed to be at a constant rate, giving rise to a plume with a concentration and distribution that depend on the wind velocity. We demonstrate the ability of our approach to detect a leak using numerical simulation and also present a preliminary measurement.
Overlapped optics induced perfect coherent effects.
Li, Jian Jie; Zang, Xiao Fei; Mao, Jun Fa; Tang, Min; Zhu, Yi Ming; Zhuang, Song Lin
2013-12-20
For traditional coherent effects, two separated identical point sources can interfere with each other only when the optical path difference is an integer number of wavelengths, leading to alternate dark and bright fringes as the optical path difference varies. For hundreds of years, such a perfect coherent condition has seemed insurmountable. However, in this paper, based on transformation optics, two separated in-phase identical point sources are shown to interfere perfectly with each other without satisfying the traditional coherent condition. The shifting illusion medium is realized by an inductor-capacitor transmission-line network. Theoretical analysis, numerical simulations and experimental results confirm this kind of perfect coherent effect, and it is found that the total radiation power of a multiple-element system can be greatly enhanced. Our investigation may be applicable to the National Ignition Facility (NIF), the Inertial Confinement Fusion (ICF) program of China, LED lighting technology, terahertz communication, and so on.
NASA Astrophysics Data System (ADS)
Mulia, Iyan E.; Gusman, Aditya Riadi; Satake, Kenji
2017-12-01
Recently, numerous tsunami observation networks have been deployed in several major tsunamigenic regions. However, guidance on where to optimally place the measurement devices is limited. This study presents a methodological approach to select strategic observation locations for the purpose of tsunami source characterization, particularly in terms of the fault slip distribution. Initially, we identify favorable locations and determine the initial number of observations. These locations are selected based on the extrema of empirical orthogonal function (EOF) spatial modes. To further improve the accuracy, we apply an optimization algorithm called mesh adaptive direct search to remove redundant measurement locations from the EOF-generated points. We test the proposed approach using multiple hypothetical tsunami sources around the Nankai Trough, Japan. The results suggest that the optimized observation points can produce more accurate fault slip estimates with considerably fewer observations than the existing tsunami observation networks.
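The first stage can be condensed to a few lines: compute EOFs of an ensemble of simulated wavefields via the SVD and take the extrema of the leading spatial modes as candidate gauge locations. The sketch below uses a synthetic ensemble; the subsequent mesh-adaptive-direct-search pruning is only indicated.

```python
# EOF-based selection of initial observation points from a scenario ensemble.
import numpy as np

rng = np.random.default_rng(3)
n_scenarios, n_grid = 40, 500
grid = np.linspace(0, 1, n_grid)
# fake ensemble: each scenario excites two smooth spatial patterns plus noise
modes_true = np.vstack([np.sin(2 * np.pi * grid), np.sin(5 * np.pi * grid)])
ensemble = rng.standard_normal((n_scenarios, 2)) @ modes_true
ensemble += 0.05 * rng.standard_normal((n_scenarios, n_grid))

anomalies = ensemble - ensemble.mean(axis=0)
U, s, Vt = np.linalg.svd(anomalies, full_matrices=False)
eofs = Vt[:3]                                   # leading spatial modes

candidates = set()
for eof in eofs:
    candidates.add(int(np.argmax(eof)))         # extrema of each mode
    candidates.add(int(np.argmin(eof)))
print("initial observation points (grid indices):", sorted(candidates))
# A direct-search optimizer (e.g. mesh adaptive direct search) would then
# prune redundant points from this initial set against an inversion metric.
```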
Titanium oxidation state and coordination in the lunar high-titanium glass source mantle
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krawczynski, M.J.; Sutton, S.R.; Grove, T.L.
2009-03-20
XANES and EXAFS spectra from synthetic HiTi lunar glasses determine the coordination of Ti in the HiTi source region. The amount of Ti³⁺ present affects the olivine-opx equilibrium, and the total amount of Ti³⁺ present requires a pyroxene-bearing source. Lunar high-titanium (HiTi) ultramafic glasses provide us with evidence of the mantle processes that led to the melting of the lunar magma ocean cumulates nearly one billion years after the magma ocean solidified. Constraints on the depth, temperature and melting processes that formed the HiTi glasses are crucial for understanding the melting history of LMO products. The Apollo 17 orange glass (A17O) and Apollo 15 red glass (A15R) represent two of the HiTi compositions in the spectrum of pristine ultramafic glasses returned from the Moon. The difference between these two compositions is that the A15R contains ~40% more TiO₂ than the A17O. The low fO₂ of the ultramafic glass source regions allows for a certain amount of Ti³⁺ in the source mineralogy; however, the amount of Ti³⁺ in the source and the host mineral for this element remain relatively unconstrained. In addition to the unknown mineralogy of the source region, the high amounts of TiO₂* and FeO* in the HiTi magmas make the phase relations extremely sensitive to changes in the oxidation state of the source region. We have previously investigated the effect of oxidation state on the olivine-orthopyroxene multiple saturation points of the A15R and A17O and shown that the magnitude of the effect is proportional to the amount of Ti in the system. X-ray absorption near-edge spectroscopy (XANES) and extended X-ray absorption fine-structure (EXAFS) measurements have been made on minerals and glasses in experiments on synthetic analogues of the A17O and A15R. Our results show that the Ti³⁺ concentration does indeed affect the multiple saturation points, and that Ti³⁺ is an important constituent of the lunar interior.
HerMES: ALMA Imaging of Herschel-selected Dusty Star-forming Galaxies
NASA Astrophysics Data System (ADS)
Bussmann, R. S.; Riechers, D.; Fialkov, A.; Scudder, J.; Hayward, C. C.; Cowley, W. I.; Bock, J.; Calanog, J.; Chapman, S. C.; Cooray, A.; De Bernardis, F.; Farrah, D.; Fu, Hai; Gavazzi, R.; Hopwood, R.; Ivison, R. J.; Jarvis, M.; Lacey, C.; Loeb, A.; Oliver, S. J.; Pérez-Fournon, I.; Rigopoulou, D.; Roseboom, I. G.; Scott, Douglas; Smith, A. J.; Vieira, J. D.; Wang, L.; Wardlow, J.
2015-10-01
The Herschel Multi-tiered Extragalactic Survey (HerMES) has identified large numbers of dusty star-forming galaxies (DSFGs) over a wide range in redshift. A detailed understanding of these DSFGs is hampered by the limited spatial resolution of Herschel. We present 870 μm 0.″45 resolution imaging obtained with the Atacama Large Millimeter/submillimeter Array (ALMA) of a sample of 29 HerMES DSFGs that have far-infrared (FIR) flux densities that lie between the brightest of sources found by Herschel and fainter DSFGs found via ground-based surveys in the submillimeter region. The ALMA imaging reveals that these DSFGs comprise a total of 62 sources (down to the 5σ point-source sensitivity limit in our ALMA sample; σ ≈ 0.2 mJy). Optical or near-infrared imaging indicates that 36 of the ALMA sources experience a significant flux boost from gravitational lensing (μ > 1.1), but only six are strongly lensed and show multiple images. We introduce and make use of uvmcmcfit, a general-purpose and publicly available Markov chain Monte Carlo visibility-plane analysis tool, to analyze the source properties. Combined with our previous work on brighter Herschel sources, the lens models presented here tentatively favor intrinsic number counts for DSFGs with a break near 8 mJy at 880 μm and a steep fall-off at higher flux densities. Nearly 70% of the Herschel sources break down into multiple ALMA counterparts, consistent with previous research indicating that the multiplicity rate is high in bright sources discovered in single-dish submillimeter or FIR surveys. The ALMA counterparts to our Herschel targets are located significantly closer to each other than ALMA counterparts to sources found in the LABOCA ECDFS Submillimeter Survey. Theoretical models underpredict the excess number of sources with small separations seen in our ALMA sample. The high multiplicity rate and small projected separations between sources seen in our sample argue in favor of interactions and mergers plausibly driving both the prodigious emission from the brightest DSFGs as well as the sharp downturn above S_880 = 8 mJy. Herschel is an ESA space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA.
The virtual library: Coming of age
NASA Technical Reports Server (NTRS)
Hunter, Judy F.; Cotter, Gladys A.
1994-01-01
With the high-speed networking capabilities, multiple media options, and massive amounts of information that exist in electronic format today, the concept of a 'virtual' library or 'library without walls' is becoming viable. In a virtual library environment, the information processed goes beyond the traditional definition of documents to include the results of scientific and technical research and development (reports, software, data) recorded in any format or medium: electronic, audio, video, or scanned images. Network access to information must include tools to help locate information sources and navigate the networks to connect to the sources, as well as methods to extract the relevant information. Graphical user interfaces (GUIs) that are intuitive, and navigational tools such as Intelligent Gateway Processors (IGPs), will provide users with seamless and transparent use of high-speed networks to access, organize, and manage information. Traditional libraries will become points of electronic access to information on multiple media. The emphasis will shift towards unique collections of information at each library rather than entire collections at every library. It is no longer a question of whether there is enough information available; it is more a question of how to manage the vast volumes of information. The future equation will involve being able to organize knowledge, manage information, and provide access at the point of origin.
3D Seismic Imaging using Marchenko Methods
NASA Astrophysics Data System (ADS)
Lomas, A.; Curtis, A.
2017-12-01
Marchenko methods are novel, data-driven techniques that allow seismic wavefields from sources and receivers on the Earth's surface to be redatumed to construct wavefields with sources in the subsurface, including complex multiply reflected waves, and without the need for a complex reference model. In turn, this allows subsurface images to be constructed at any such subsurface redatuming point (image or virtual receiver point). Such images are then free of artefacts from multiply scattered waves that usually contaminate migrated seismic images. Marchenko algorithms require as input the same information as standard migration methods: the full reflection response from sources and receivers at the Earth's surface, and an estimate of the first arriving wave between the chosen image point and the surface. The latter can be calculated using a smooth velocity model estimated using standard methods. The algorithm iteratively calculates a signal that focuses at the image point to create a virtual source at that point, and this can be used to retrieve the signal between the virtual source and the surface. A feature of these methods is that the retrieved signals are naturally decomposed into up- and down-going components. That is, we obtain the signal that initially propagated upwards from the virtual source and arrived at the surface, separated from the signal that initially propagated downwards. Figure (a) shows a 3D subsurface model with variable density but constant velocity (3000 m/s). Along the surface of this model (z = 0), in both the x and y directions, are co-located sources and receivers at 20-meter intervals. The redatumed signal in figure (b) has been calculated using Marchenko methods from a virtual source at (1200 m, 500 m, 400 m) to the surface. For comparison, the true solution is given in figure (c) and shows a good match to figure (b). While these redatuming and imaging methods are still in their infancy, having first been developed in 2012 for 2D media, we have extended them to 3D media and wavefields. We show that while the wavefield effects may be more complex in 3D, Marchenko methods remain valid, and 3D images free of multiple-related artefacts are a realistic possibility.
Gravitational lensing by ring-like structures
NASA Astrophysics Data System (ADS)
Lake, Ethan; Zheng, Zheng
2017-02-01
We study a class of gravitational lensing systems consisting of an inclined ring/belt, with and without an added point mass at the centre. We show that a common feature of such systems are so-called pseudo-caustics, across which the magnification of a point source changes discontinuously and yet remains finite. Such a magnification change can be associated with either a change in image multiplicity or a sudden change in the size of a lensed image. The existence of pseudo-caustics and the complex interplay between them and the formal caustics (which correspond to points of infinite magnification) can lead to interesting consequences, such as truncated or open caustics and a non-conservation of total image parity. The origin of the pseudo-caustics is found to be the non-differentiability of the solutions to the lens equation across the ring/belt boundaries, with the pseudo-caustics corresponding to ring/belt boundaries mapped into the source plane. We provide a few illustrative examples to understand the pseudo-caustic features, and in a separate paper consider a specific astronomical application of our results to study microlensing by extrasolar asteroid belts.
Zheng, Xiaoming; Liu, Xin
2017-01-01
Our research draws upon social cognitive theory and incorporates a regulatory approach to investigate why and when abusive supervision influences employee creative performance. The analyses of data from multiple time points and multiple sources reveal that abusive supervision hampers employee self-efficacy at work, which in turn impairs employee creative performance. Further, employee mindfulness buffers the negative effects of abusive supervision on employee self-efficacy at work as well as the indirect effects of abusive supervision on employee creative performance. Our findings have implications for both theory and practice. Limitations and directions for future research are also discussed.
NASA Astrophysics Data System (ADS)
Zhang, Hao; Yuan, Yan; Su, Lijuan; Huang, Fengzhen; Bai, Qing
2016-09-01
The Risley-prism-based light beam steering apparatus delivers superior pointing accuracy and is used in imaging LIDAR and imaging microscopes. A general model for pointing-error analysis of Risley prisms, based on ray direction deviation in light refraction, is proposed in this paper. This model captures incident-beam deviation, assembly deflections, and prism rotational error. We first derive the transmission matrices of the model. Then, the independent and cumulative effects of the different errors are analyzed using this model. An accuracy study of the model shows that the predicted deviation of the pointing error is less than 4.1 × 10⁻⁵° for each error type when the error amplitude is 0.1°. Detailed analyses indicate that the different error sources affect the pointing accuracy to varying degrees, the major error source being the incident-beam deviation. Prism tilting has a relatively large effect on the pointing accuracy when the prism tilts in the principal section. The cumulative-effect analyses of multiple errors show that the pointing error can be reduced by tuning the bearing tilt in the same direction. The cumulative effect of rotational error is relatively large when the difference between the two prism rotational angles equals 0 or π, and relatively small when the difference equals π/2. These results can help to uncover the error distribution and aid in the measurement calibration of Risley-prism systems.
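The geometric core of such a model is ray refraction at each prism face via the vector form of Snell's law, with error sources entering as small perturbations of the ray or surface-normal vectors. A hedged sketch follows (illustrative wedge angle, index, and tilt; not the paper's transmission matrices).

```python
# Ray deviation through a single wedge prism, plus a small incident-beam error.
import numpy as np

def refract(d, n_hat, n1, n2):
    """Refract unit ray d at a surface with unit normal n_hat (n1 -> n2)."""
    cos_i = -np.dot(n_hat, d)
    sin2_t = (n1 / n2)**2 * (1.0 - cos_i**2)
    return (n1 / n2) * d + (n1 / n2 * cos_i - np.sqrt(1.0 - sin2_t)) * n_hat

n_glass, wedge = 1.517, np.deg2rad(10.0)                # index and wedge angle
d0 = np.array([0.0, 0.0, 1.0])                          # nominal incident beam
front = np.array([0.0, 0.0, -1.0])                      # flat entry face
back = np.array([np.sin(wedge), 0.0, -np.cos(wedge)])   # tilted exit face

d2 = refract(refract(d0, front, 1.0, n_glass), back, n_glass, 1.0)
dev = np.degrees(np.arccos(np.clip(np.dot(d0, d2), -1, 1)))
print("nominal deviation:", dev, "deg")                 # ~ (n - 1)*wedge for small angles

tilt = 1e-3                                             # incident-beam error (rad)
d0e = np.array([np.sin(tilt), 0.0, np.cos(tilt)])
d2e = refract(refract(d0e, front, 1.0, n_glass), back, n_glass, 1.0)
print("pointing change:",
      np.degrees(np.arccos(np.clip(np.dot(d2, d2e), -1, 1))), "deg")
```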
NASA Astrophysics Data System (ADS)
Jacobson, Abram R.; Shao, Xuan-Min
2001-07-01
The Earth's ionosphere is magnetized by the geomagnetic field and imposes birefringent modulation on VHF radio signals propagating through the ionosphere. Satellites viewing VHF emissions from terrestrial sources receive ordinary and extraordinary modes successively from each broadband pulse emitted by the source. The birefringent intermode frequency separation can be used to determine the value of ƒce cos β, where ƒce is the electron gyrofrequency and β is the angle between the wave vector k and the geomagnetic field B at the point where the VHF ray path intersects the ionosphere. Successive receptions of multiple signals (from the same source) cause variation in ƒce cos β, and from the resulting variation in the signal intermode frequency separation the source location on Earth can be inferred. We test the method with signals emitted by the Los Alamos Portable Pulser and received by the FORTE satellite.
NASA Astrophysics Data System (ADS)
Wapenaar, C. P. A.; Van der Neut, J.; Thorbecke, J.; Broggini, F.; Slob, E. C.; Snieder, R.
2015-12-01
Imagine one could place seismic sources and receivers at any desired position inside the earth. Since the receivers would record the full wave field (direct waves, up- and downward reflections, multiples, etc.), this would give a wealth of information about the local structures, material properties and processes in the earth's interior. Although in reality one cannot place sources and receivers anywhere inside the earth, it appears to be possible to create virtual sources and receivers at any desired position that accurately mimic the desired situation. The underlying method involves some major steps beyond standard seismic interferometry. With seismic interferometry, virtual sources can be created at the positions of physical receivers, assuming these receivers are illuminated isotropically. Our proposed method does not need physical receivers at the positions of the virtual sources; moreover, it does not require isotropic illumination. To create virtual sources and receivers anywhere inside the earth, it suffices to record the reflection response with physical sources and receivers at the earth's surface. We do not need detailed information about the medium parameters; it suffices to have an estimate of the direct waves between the virtual-source positions and the acquisition surface. With these prerequisites, our method can create virtual sources and receivers, anywhere inside the earth, which record the full wave field. The up- and downward reflections, multiples, etc. in the virtual responses are extracted directly from the reflection response at the surface. The retrieved virtual responses form an ideal starting point for accurate seismic imaging, characterization and monitoring.
NASA Astrophysics Data System (ADS)
Käufl, Paul; Valentine, Andrew P.; O'Toole, Thomas B.; Trampert, Jeannot
2014-03-01
The determination of earthquake source parameters is an important task in seismology. For many applications, it is also valuable to understand the uncertainties associated with these determinations, and this is particularly true in the context of earthquake early warning (EEW) and hazard mitigation. In this paper, we develop a framework for probabilistic moment tensor point-source inversions in near real time. Our methodology allows us to find an approximation to p(m|d), the conditional probability of source models (m) given observations (d). This is obtained by smoothly interpolating a set of random prior samples, using Mixture Density Networks (MDNs), a class of neural networks which output the parameters of a Gaussian mixture model. By combining multiple networks as 'committees', we obtain a significant improvement in performance over that of a single MDN. Once a committee has been constructed, new observations can be inverted within milliseconds on a standard desktop computer. The method is therefore well suited for use in situations such as EEW, where inversions must be performed routinely and rapidly for a fixed station geometry. To demonstrate the method, we invert regional static GPS displacement data for the 2010 MW 7.2 El Mayor-Cucapah earthquake in Baja California to obtain estimates of magnitude, centroid location, depth and focal mechanism. We investigate the extent to which moment tensor point sources can be constrained with static displacement observations under realistic conditions. Our inversion results agree well with published point-source solutions for this event, once the uncertainty bounds of each are taken into account.
Grell, Kathrine; Diggle, Peter J; Frederiksen, Kirsten; Schüz, Joachim; Cardis, Elisabeth; Andersen, Per K
2015-10-15
We study methods for how to include the spatial distribution of tumours when investigating the relation between brain tumours and the exposure from radio frequency electromagnetic fields caused by mobile phone use. Our suggested point process model is adapted from studies investigating spatial aggregation of a disease around a source of potential hazard in environmental epidemiology, where now the source is the preferred ear of each phone user. In this context, the spatial distribution is a distribution over a sample of patients rather than over multiple disease cases within one geographical area. We show how the distance relation between tumour and phone can be modelled nonparametrically and, with various parametric functions, how covariates can be included in the model and how to test for the effect of distance. To illustrate the models, we apply them to a subset of the data from the Interphone Study, a large multinational case-control study on the association between brain tumours and mobile phone use. Copyright © 2015 John Wiley & Sons, Ltd.
EPA Office of Water (OW): 2002 SPARROW Total NP (Catchments)
SPARROW (SPAtially Referenced Regressions On Watershed attributes) is a watershed modeling tool with output that allows the user to interpret water quality monitoring data at the regional and sub-regional scale. The model relates in-stream water-quality measurements to spatially referenced characteristics of watersheds, including pollutant sources and environmental factors that affect rates of pollutant delivery to streams from the land, and aquatic in-stream processing. The core of the model consists of a nonlinear regression equation describing the non-conservative transport of contaminants from point and non-point (or "diffuse") sources on land to rivers and through the stream and river network. SPARROW estimates contaminant concentrations, loads (or "mass," which is the product of concentration and streamflow), and yields in streams (mass of nitrogen and of phosphorus entering a stream per acre of land). It empirically estimates the origin and fate of contaminants in streams and receiving bodies, and quantifies uncertainties in model predictions. The model predictions are illustrated through detailed maps that provide information about contaminant loadings and source contributions at multiple scales for specific stream reaches, basins, or other geographic areas.
Localization from near-source quasi-static electromagnetic fields
NASA Astrophysics Data System (ADS)
Mosher, J. C.
1993-09-01
A wide range of research has been published on the problem of estimating the parameters of electromagnetic and acoustical sources from signals measured at an array of sensors. In the quasi-static electromagnetic cases examined here, the signal variation from a point source is relatively slow with respect to the signal propagation and the spacing of the array of sensors. As such, the location of the point sources can only be determined from the spatial diversity of the received signal across the array. The inverse source localization problem is complicated by unknown model order and strong local minima. The nonlinear optimization problem is posed for solving for the parameters of the quasi-static source model. The transient nature of the sources can be exploited to allow subspace approaches to separate out the signal portion of the spatial correlation matrix. Decomposition techniques are examined for improved processing, and an adaptation of MUltiple SIgnal Classification (MUSIC) is presented for solving the source localization problem. Recent results on calculating the Cramer-Rao error lower bounds are extended to the multidimensional problem considered here. This thesis focuses on the problem of source localization in magnetoencephalography (MEG), with a secondary application to thunderstorm source localization. Comparisons are also made between MEG and its electrical equivalent, electroencephalography (EEG). The error lower bounds are examined in detail for several MEG and EEG configurations, as well as for localizing thunderstorm cells over Cape Canaveral and Kennedy Space Center. Time-eigenspectrum is introduced as a parsing technique for improving the performance of the optimization problem.
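The subspace idea adapts directly from array processing. The compact narrowband MUSIC sketch below uses a uniform linear array and plane-wave steering vectors; in the MEG/EEG setting these are replaced by quasi-static lead-field vectors, but the eigendecomposition and noise-subspace scan are the same.

```python
# Narrowband MUSIC on a uniform linear array with two simulated sources.
import numpy as np
from scipy.signal import find_peaks

rng = np.random.default_rng(4)
n_sensors, n_snap, d = 10, 400, 0.5                  # spacing in wavelengths
angles_true = np.deg2rad([-20.0, 35.0])

def steering(theta):
    return np.exp(2j * np.pi * d * np.arange(n_sensors) * np.sin(theta))

A = np.column_stack([steering(t) for t in angles_true])
S = rng.standard_normal((2, n_snap)) + 1j * rng.standard_normal((2, n_snap))
noise = 0.1 * (rng.standard_normal((n_sensors, n_snap))
               + 1j * rng.standard_normal((n_sensors, n_snap)))
X = A @ S + noise

R = X @ X.conj().T / n_snap                          # spatial correlation matrix
w, V = np.linalg.eigh(R)                             # eigenvalues ascending
En = V[:, :-2]                                       # noise subspace (order 2 assumed)

scan = np.deg2rad(np.linspace(-90, 90, 721))
p_music = np.array([1.0 / np.linalg.norm(En.conj().T @ steering(t))**2
                    for t in scan])
peaks, _ = find_peaks(p_music)
top = peaks[np.argsort(p_music[peaks])[-2:]]
print("MUSIC peaks (deg):", np.sort(np.rad2deg(scan[top])))
```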
Open Source Platform Application to Groundwater Characterization and Monitoring
NASA Astrophysics Data System (ADS)
Ntarlagiannis, D.; Day-Lewis, F. D.; Falzone, S.; Lane, J. W., Jr.; Slater, L. D.; Robinson, J.; Hammett, S.
2017-12-01
Groundwater characterization and monitoring commonly rely on the use of multiple point sensors and human labor. Due to the number of sensors and the labor and other resources needed, establishing and maintaining an adequate groundwater monitoring network can be both labor-intensive and expensive. To improve and optimize monitoring network design, open-source software and hardware components could provide a platform to control robust and efficient sensors, thereby reducing costs and labor. This work presents early attempts to create a groundwater monitoring system incorporating open-source software and hardware that controls the remote operation of multiple sensors along with data management and file transfer functions. The system is built around a Raspberry Pi 3 that controls multiple sensors in order to perform on-demand, continuous or 'smart decision' measurements while providing the flexibility to incorporate additional sensors to meet the demands of different projects. The current objective of our technology is to monitor the exchange of ionic tracers between mobile and immobile porosity using a combination of fluid and bulk electrical-conductivity measurements. To meet this objective, our configuration uses four sensors (pH, specific conductance, pressure, temperature) that monitor the fluid electrical properties of interest and guide the bulk electrical measurement. This system highlights the potential of using open-source software and hardware components for earth science applications. The versatility of the system makes it ideal for use in a large number of applications, and the low cost allows for high-resolution (spatial and temporal) monitoring.
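A hedged sketch of the kind of control loop described, explicitly not the authors' code, is shown below; the sensor read functions are hypothetical placeholders for the actual probe drivers, and the "smart decision" is reduced to triggering a bulk electrical measurement when the fluid conductivity changes appreciably.

```python
# Illustrative polling loop for a Raspberry Pi-based monitoring node.
import csv, time

def read_ph(): return 7.1            # placeholder driver calls (hypothetical)
def read_cond(): return 520.0        # uS/cm
def read_pressure(): return 101.3    # kPa
def read_temperature(): return 14.2  # degC
def trigger_bulk_ec_measurement(): print("bulk EC sweep triggered")

THRESHOLD, last_cond = 25.0, None    # uS/cm change that triggers a sweep
with open("groundwater_log.csv", "a", newline="") as f:
    log = csv.writer(f)
    for _ in range(3):               # in deployment: while True
        row = [time.time(), read_ph(), read_cond(),
               read_pressure(), read_temperature()]
        log.writerow(row)            # continuous logging for later transfer
        if last_cond is not None and abs(row[2] - last_cond) > THRESHOLD:
            trigger_bulk_ec_measurement()   # the "smart decision" step
        last_cond = row[2]
        time.sleep(1)                # sampling interval, s
```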
Multiple-reflection optical gas cell
Matthews, Thomas G.
1983-01-01
A multiple-reflection optical cell for Raman or fluorescence gas analysis consists of two spherical mirrors positioned transverse to a multiple-pass laser cell in a confronting plane-parallel alignment. The two mirrors are of equal diameter but possess different radii of curvature. The spacing between the mirrors is uniform and less than half of the radius of curvature of either mirror. The mirror of greater curvature possesses a small circular portal in its center which is the effective point source for conventional F1 double lens collection optics of a monochromator-detection system. Gas to be analyzed is flowed into the cell and irradiated by a multiply-reflected composite laser beam centered between the mirrors of the cell. Raman or fluorescence radiation originating from a large volume within the cell is (1) collected via multiple reflections with the cell mirrors, (2) partially collimated and (3) directed through the cell portal in a geometric array compatible with F1 collection optics.
Computer-assisted 3D kinematic analysis of all leg joints in walking insects.
Bender, John A; Simpson, Elaine M; Ritzmann, Roy E
2010-10-26
High-speed video can provide fine-scaled analysis of animal behavior. However, extracting behavioral data from video sequences is a time-consuming, tedious, subjective task. These issues are exacerbated where accurate behavioral descriptions require analysis of multiple points in three dimensions. We describe a new computer program written to assist a user in simultaneously extracting three-dimensional kinematics of multiple points on each of an insect's six legs. Digital video of a walking cockroach was collected in grayscale at 500 fps from two synchronized, calibrated cameras. We improved the legs' visibility by painting white dots on the joints, similar to techniques used for digitizing human motion. Compared to manual digitization of 26 points on the legs over a single, 8-second bout of walking (or 106,496 individual 3D points), our software achieved approximately 90% of the accuracy with 10% of the labor. Our experimental design reduced the complexity of the tracking problem by tethering the insect and allowing it to walk in place on a lightly oiled glass surface, but in principle, the algorithms implemented are extensible to free walking. Our software is free and open-source, written in the free language Python and including a graphical user interface for configuration and control. We encourage collaborative enhancements to make this tool both better and widely utilized.
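The geometric core of such a tracker is the triangulation of each painted joint from its pixel coordinates in the two calibrated cameras. The sketch below shows the standard linear (DLT) triangulation with invented camera matrices; it is a generic illustration, not the authors' implementation.

```python
# Two-view linear (DLT) triangulation of a single 3D joint position.
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Least-squares 3D point from two pixel observations and 3x4 cameras."""
    A = np.vstack([uv1[0] * P1[2] - P1[0],
                   uv1[1] * P1[2] - P1[1],
                   uv2[0] * P2[2] - P2[0],
                   uv2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])      # intrinsics
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])                # camera 1
R = np.array([[0.940, 0, 0.342], [0, 1, 0], [-0.342, 0, 0.940]]) # ~20 deg yaw
P2 = K @ np.hstack([R, np.array([[-100.0], [0], [0]])])          # camera 2

X_true = np.array([30.0, -20.0, 400.0, 1.0])                     # joint, mm
uv1 = (P1 @ X_true); uv1 = uv1[:2] / uv1[2]
uv2 = (P2 @ X_true); uv2 = uv2[:2] / uv2[2]
print("recovered:", triangulate(P1, P2, uv1, uv2))               # ~ [30, -20, 400]
```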
NASA Astrophysics Data System (ADS)
Laborda, Francisco; Medrano, Jesús; Castillo, Juan R.
2004-06-01
The quality of the quantitative results obtained from transient signals in high-performance liquid chromatography-inductively coupled plasma mass spectrometry (HPLC-ICPMS) and flow injection-inductively coupled plasma mass spectrometry (FI-ICPMS) was investigated under multielement conditions. Quantification methods were based on multiple-point calibration by simple and weighted linear regression, and on double-point calibration (measurement of the baseline and one standard). An uncertainty model, which includes the main sources of uncertainty in FI-ICPMS and HPLC-ICPMS (signal measurement, sample flow rate and injection volume), was developed to estimate peak-area uncertainties and the statistical weights used in weighted linear regression. The behaviour of the ICPMS instrument was characterized so that it could be incorporated in the model, with the conclusion that the instrument works as a concentration detector when used to monitor transient signals from flow injection or chromatographic separations. Proper quantification by the three calibration methods was achieved when compared to reference materials, and the double-point calibration gave results of the same quality as the multiple-point calibration while shortening the calibration time. Relative expanded uncertainties ranged from 10-20% for concentrations around the LOQ to 5% for concentrations higher than 100 times the LOQ.
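The two calibration routes compared above can be sketched in a few lines, assuming a linear concentration-response and taking statistical weights as the inverse squared peak-area uncertainties; the numbers below are invented, not the paper's data.

```python
# Weighted multiple-point calibration vs. double-point calibration.
import numpy as np

conc = np.array([0.0, 0.5, 1.0, 5.0, 10.0, 50.0])       # standard concentrations
area = np.array([0.8, 5.3, 10.6, 50.9, 101.0, 504.0])   # measured peak areas
u = 0.02 * area + 0.3                                   # assumed area uncertainties

# weighted multiple-point calibration: fit area = a + b*conc
# (np.polyfit weights multiply the unsquared residuals, so pass 1/u)
b, a = np.polyfit(conc, area, 1, w=1.0 / u)
print("weighted fit: slope", b, "intercept", a)

# double-point calibration: baseline plus one standard
base, std_c, std_a = area[0], conc[3], area[3]
slope2 = (std_a - base) / std_c

unknown_area = 75.0
print("multiple-point:", (unknown_area - a) / b,
      " double-point:", (unknown_area - base) / slope2)
```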
Perumal, Madhumathy; Dhandapani, Sivakumar
2015-01-01
Data gathering and optimal path selection in wireless sensor networks (WSNs) using existing protocols result in collisions. An increase in collisions further increases the probability of packet drops. Thus, collisions must be eliminated during data aggregation, and efficiency must be increased without compromising security. This paper presents a reliable, energy-efficient and secure WSN routing protocol with minimal delay, named the relay node based secure routing protocol for multiple mobile sinks (RSRPMS). This protocol finds the rendezvous point for optimal transmission of data using a 'splitting tree' technique in a tree-shaped network topology, and the 'biased random walk' model is then used to determine all subsequent positions of a sink. In case of an event, the sink gathers the data from all sources when they are in the sensing range of the rendezvous point; otherwise, a relay node is selected from its neighbors to transfer packets from the rendezvous point to the sink. Symmetric key cryptography is used for secure transmission. The proposed RSRPMS protocol is experimented with, and simulation results are compared with the Intelligent Agent-Based Routing (IAR) protocol to show an increase in network lifetime compared with other routing protocols.
Hepburn, Emily; Northway, Anne; Bekele, Dawit; Liu, Gang-Jun; Currell, Matthew
2018-06-11
Determining the sources of heavy metals in soils, sediments and groundwater is important for understanding their fate and transport and for mitigating human and environmental exposure. Artificially imported fill, natural sediments and groundwater from 240 ha of reclaimed land at Fishermans Bend in Australia were analysed for heavy metals and other parameters to determine the relative contributions from different possible sources. Fishermans Bend is Australia's largest urban redevelopment project; however, a complicated land-use history, the geology, and multiple contamination sources pose challenges to successful redevelopment. We developed a method for heavy-metal source separation in groundwater using statistical categorisation of the data, analysis of soil leaching values and fill/sediment XRF profiling. The method identified two major sources of heavy metals in groundwater: (1) point sources from local or up-gradient groundwater contaminated by industrial activities and/or legacy landfills; and (2) contaminated fill, where leaching of Cu, Mn, Pb and Zn was observed. Across the precinct, metals were most commonly sourced from a combination of these sources; however, eight locations indicated at least one metal sourced solely from fill leaching, and 23 locations indicated at least one metal sourced solely from impacted groundwater. Concentrations of heavy metals in groundwater ranged from 0.0001 to 0.003 mg/L (Cd), 0.001-0.1 mg/L (Cr), 0.001-0.2 mg/L (Cu), 0.001-0.5 mg/L (Ni), 0.001-0.01 mg/L (Pb), and 0.005-1.2 mg/L (Zn). Our method can determine the likely contribution of different metal sources to groundwater, helping inform more detailed contamination assessments and precinct-wide management and remediation strategies. Copyright © 2018 Elsevier Ltd. All rights reserved.
The Chandra Source Catalog: Algorithms
NASA Astrophysics Data System (ADS)
McDowell, Jonathan; Evans, I. N.; Primini, F. A.; Glotfelty, K. J.; McCollough, M. L.; Houck, J. C.; Nowak, M. A.; Karovska, M.; Davis, J. E.; Rots, A. H.; Siemiginowska, A. L.; Hain, R.; Evans, J. D.; Anderson, C. S.; Bonaventura, N. R.; Chen, J. C.; Doe, S. M.; Fabbiano, G.; Galle, E. C.; Gibbs, D. G., II; Grier, J. D.; Hall, D. M.; Harbo, P. N.; He, X.; Lauer, J.; Miller, J. B.; Mitschang, A. W.; Morgan, D. L.; Nichols, J. S.; Plummer, D. A.; Refsdal, B. L.; Sundheim, B. A.; Tibbetts, M. S.; van Stone, D. W.; Winkelman, S. L.; Zografou, P.
2009-09-01
Creation of the Chandra Source Catalog (CSC) required adjustment of existing pipeline processing, adaptation of existing interactive analysis software for automated use, and development of entirely new algorithms. Data calibration was based on the existing pipeline, but more rigorous data cleaning was applied and the latest calibration data products were used. For source detection, a local background map was created, including the effects of ACIS source readout streaks. The existing wavelet source detection algorithm was modified, and a set of post-processing scripts was used to correct the results. To analyse the source properties we ran the SAOTrace ray-trace code for each source to generate a model point spread function, allowing us to find encircled-energy correction factors and to estimate source extent. Further algorithms were developed to characterize the spectral, spatial and temporal properties of the sources and to estimate the confidence intervals on count rates and fluxes. Finally, sources detected in multiple observations were matched, and best estimates of their merged properties were derived. In this paper we present an overview of the algorithms used, with more detailed treatment of some of the newly developed algorithms presented in companion papers.
Separating Turbofan Engine Noise Sources Using Auto and Cross Spectra from Four Microphones
NASA Technical Reports Server (NTRS)
Miles, Jeffrey Hilton
2008-01-01
The study of core noise from turbofan engines has become more important as noise from other sources, such as the fan and jet, has been reduced. A multiple-microphone and acoustic-source modeling method to separate correlated and uncorrelated sources is discussed. The auto- and cross-spectra in the frequency range below 1000 Hz are fitted with a noise propagation model based on either a source couplet, consisting of a single incoherent monopole source with a single coherent monopole source, or a source triplet, consisting of a single incoherent monopole source with two coherent monopole point sources. Examples are presented using data from a Pratt & Whitney PW4098 turbofan engine. The method separates the low-frequency jet noise from the core noise at the nozzle exit. It is shown that at low power settings the core noise is a major contributor to the noise; even at higher power settings, it can be more important than jet noise. However, at low frequencies, uncorrelated broadband noise and jet noise become the important factors as the engine power setting is increased.
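A much-simplified illustration of the separation principle: with two microphones seeing a common coherent source plus independent noise, the ordinary coherence estimated from auto- and cross-spectra splits each autospectrum into correlated and uncorrelated parts. The full method replaces this with fitted monopole couplet/triplet propagation models; the signals below are synthetic.

```python
# Coherence-based split of an autospectrum into correlated/uncorrelated parts.
import numpy as np
from scipy.signal import csd, welch

fs, n = 8192, 2**18
rng = np.random.default_rng(5)
core = rng.standard_normal(n)                         # common (coherent) source
mic1 = core + 0.8 * rng.standard_normal(n)            # + independent "jet" noise
mic2 = np.roll(core, 3) + 0.8 * rng.standard_normal(n)  # delayed common part

f, S11 = welch(mic1, fs=fs, nperseg=4096)
_, S22 = welch(mic2, fs=fs, nperseg=4096)
_, S12 = csd(mic1, mic2, fs=fs, nperseg=4096)

gamma2 = np.abs(S12)**2 / (S11 * S22)                 # ordinary coherence
coherent_part = gamma2 * S11                          # correlated source at mic 1
incoherent_part = (1.0 - gamma2) * S11
print("mean coherence below 1 kHz:", gamma2[f < 1000].mean())
```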
Long-Term Stability of the NIST Standard Ultrasonic Source.
Fick, Steven E
2008-01-01
The National Institute of Standards and Technology (NIST) Standard Ultrasonic Source (SUS) is a system comprising a transducer capable of output power levels up to 1 W at multiple frequencies between 1 MHz and 30 MHz, and an electrical impedance-matching network that allows the system to be driven by a conventional 50 Ω rf (radio-frequency) source. It is designed to allow interlaboratory replication of ultrasonic power levels with high accuracy using inexpensive, readily available ancillary equipment. The SUS was offered for sale for 14 years (1985 to 1999). Each system was furnished with data for the set of calibration points (combinations of power level and frequency) specified by the customer. Of the systems that had been ordered with some calibration points in common, three were returned more than once to NIST for recalibration. Another system retained at NIST has been recalibrated periodically since 1984. The collective data for these systems comprise 9 calibration points and 102 measurements spanning a 17-year interval ending in 2001, the last year NIST ultrasonic power measurement services were available to the public. These data have been analyzed to compare variations in output power with frequency, power level, and time elapsed since the first calibration. The results verify the claim, made in the instruction sheet furnished with every SUS, that "long-term drift, if any, in the calibration of NIST Standard Sources is insignificant compared to the uncertainties associated with a single measurement of ultrasonic power by any method available at NIST."
The Chandra Xbootes Survey - IV: Mid-Infrared and Submillimeter Counterparts
NASA Astrophysics Data System (ADS)
Brown, Arianna; Mitchell-Wynne, Ketron; Cooray, Asantha R.; Nayyeri, Hooshang
2016-06-01
In this work, we use a Bayesian technique to identify mid-IR and submillimeter counterparts for 3,213 X-ray point sources detected in the Chandra XBoötes Survey so as to characterize the relationship between black hole activity and star formation in the XBoötes region. The Chandra XBoötes Survey is a 5-ks X-ray survey of the 9.3 square degree Boötes Field of the NOAO Deep Wide-Field Survey (NDWFS), a survey imaged from the optical to the near-IR. We use a likelihood ratio analysis on Spitzer-IRAC data taken from the Spitzer Deep, Wide-Field Survey (SDWFS) to determine mid-IR counterparts, and a similar method on Herschel-SPIRE sources detected at 250 µm from the Herschel Multi-tiered Extragalactic Survey to determine the submillimeter counterparts. The likelihood ratio analysis (LRA) provides the probability that an IRAC or SPIRE point source is the true counterpart to a Chandra source. The analysis comprises three parts: the normalized magnitude distributions of counterparts and of background sources, and the radial probability distribution of the separation distance between the IRAC or SPIRE source and the Chandra source. Many Chandra sources have multiple prospective counterparts in each band, so additional analysis is performed to determine the identification reliability of the candidates. Identification reliability values lie between 0 and 1, and sources with identification reliability values ≥0.8 are chosen to be the true counterparts. With these results, we will consider the statistical implications of the sample's redshifts, mid-IR and submillimeter luminosities, and star formation rates.
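The three ingredients named above combine into the standard likelihood ratio LR = q(m) f(r) / n(m). Here is a minimal Python sketch under common simplifying assumptions (a Gaussian radial offset distribution and a fixed prior identification fraction q0; both are illustrative choices, not the paper's calibrated values):

```python
import numpy as np

def likelihood_ratio(r, m, q_of_m, n_of_m, sigma):
    """LR = q(m) f(r) / n(m): q(m) and n(m) are the normalized magnitude
    distributions of true counterparts and of background sources; f(r) is
    the radial probability density of the positional offset r (arcsec)."""
    f_r = np.exp(-r**2 / (2 * sigma**2)) / (2 * np.pi * sigma**2)
    return q_of_m(m) * f_r / n_of_m(m)

def reliabilities(lrs, q0=0.9):
    """Identification reliability of each candidate counterpart of one
    X-ray source; candidates with values >= 0.8 would be kept."""
    lrs = np.asarray(lrs, dtype=float)
    return lrs / (lrs.sum() + (1.0 - q0))
```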
Lee, Dae-Young; Lee, Hung; Trevors, Jack T; Weir, Susan C; Thomas, Janis L; Habash, Marc
2014-04-15
Sources of fecal water pollution were assessed in the Grand River and two of its tributaries (Ontario, Canada) using total and host-specific (human and bovine) Bacteroidales genetic markers in conjunction with reference information, such as land use and weather. In-stream levels of the markers and culturable Escherichia coli were also monitored during multiple rain events to gain information on fecal loadings to the catchment from diffuse sources. Elevated human-specific marker levels were accurately identified in river water impacted by a municipal wastewater treatment plant (WWTP) effluent and at a downstream site in the Grand River. In contrast, the bovine-specific marker showed high levels of cattle fecal pollution in two tributaries, both of which are characterized as intensely farmed areas. The bovine-specific Bacteroidales marker increased with rainfall in the agricultural tributaries, indicating enhanced loading of cattle-derived fecal pollutants to the river from non-point sources following rain events. However, rain-triggered fecal loading was not substantiated in urban settings, indicating continuous inputs of human-originated fecal pollutants from point sources, such as WWTP effluent. This study demonstrated that the Bacteroidales source tracking assays, in combination with land use information and hydrological data, may provide additional insight into the spatial and temporal distribution of source-specific fecal contamination in streams impacted by varying land uses. Using the approach described in this study may help to characterize impacted water sources and to design targeted land use management plans in other watersheds in the future. Copyright © 2014 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Conner, David A.; Page, Juliet A.
2002-01-01
To improve aircraft noise impact modeling capabilities and to provide a tool to aid in the development of low noise terminal area operations for rotorcraft and tiltrotors, the Rotorcraft Noise Model (RNM) was developed by the NASA Langley Research Center and Wyle Laboratories. RNM is a simulation program that predicts how sound will propagate through the atmosphere and accumulate at receiver locations on flat ground or varying terrain, for single and multiple vehicle flight operations. At the core of RNM are the vehicle noise sources, input as sound hemispheres. As the vehicle "flies" along its prescribed flight trajectory, the source sound propagation is simulated and accumulated at the receiver locations (single points of interest or multiple grid points) in a systematic time-based manner. These sound signals at the receiver locations may then be analyzed to obtain single event footprints, integrated noise contours, time histories, or numerous other features. RNM may also be used to generate spectral time history data over a ground mesh for the creation of single event sound animation videos. Acoustic properties of the noise source(s) are defined in terms of sound hemispheres that may be obtained from theoretical predictions, wind tunnel experimental results, flight test measurements, or a combination of the three. The sound hemispheres may contain broadband data (source levels as a function of one-third octave band) and pure-tone data (in the form of specific frequency sound pressure levels and phase). A PC executable version of RNM is publicly available and has been adopted by a number of organizations for Environmental Impact Assessment studies of rotorcraft noise. This paper provides a review of the required input data, the theoretical framework of RNM's propagation model, and the output results. Code validation results are provided from a NATO helicopter noise flight test as well as a tiltrotor flight test program that used RNM as a tool to aid in the development of low noise approach profiles.
On Road Study of Colorado Front Range Greenhouse Gases Distribution and Sources
NASA Astrophysics Data System (ADS)
Petron, G.; Hirsch, A.; Trainer, M. K.; Karion, A.; Kofler, J.; Sweeney, C.; Andrews, A.; Kolodzey, W.; Miller, B. R.; Miller, L.; Montzka, S. A.; Kitzis, D. R.; Patrick, L.; Frost, G. J.; Ryerson, T. B.; Robers, J. M.; Tans, P.
2008-12-01
The Global Monitoring Division and Chemical Sciences Division of the NOAA Earth System Research Laboratory teamed up over the summer of 2008 to experiment with a new measurement strategy to characterize the distribution and sources of greenhouse gases in the Colorado Front Range. Combining expertise in greenhouse gas measurements and in intensive campaigns for local- to regional-scale air quality studies, we built the 'Hybrid Lab'. A continuous CO2 and CH4 cavity ring-down spectroscopic analyzer (Picarro, Inc.), a CO gas-filter correlation instrument (Thermo Environmental, Inc.) and a continuous UV absorption ozone monitor (2B Technologies, Inc., model 202SC) were installed securely onboard a 2006 Toyota Prius hybrid vehicle, with an inlet bringing in outside air from a few meters above the ground. To better characterize point and distributed sources, air samples were taken with a Portable Flask Package (PFP) for later multiple-species analysis in the lab. A GPS unit hooked up to the ozone analyzer and another installed on the PFP kept track of our location, allowing us to map measured concentrations onto the driving route using Google Earth. The Hybrid Lab went out for several drives in the vicinity of the NOAA Boulder Atmospheric Observatory (BAO) tall tower located in Erie, CO, covering areas from Boulder, Denver, Longmont, Fort Collins and Greeley. Enhancements in CO2 and CO and destruction of ozone mainly reflect emissions from traffic. Methane enhancements, however, are clearly correlated with nearby point sources (landfill, feedlot, natural gas compressor ...) or with larger-scale air masses advected from NE Colorado, where oil and gas drilling operations are widespread. The multiple-species analysis (hydrocarbons, CFCs, HFCs) of the air samples collected along the way brings insightful information about the methane sources at play. We will present results of the analysis and interpretation of the Hybrid Lab Front Range Study and conclude with perspectives on how we will adapt the measurement strategy to study anthropogenic CO2 emissions in the Denver Basin.
Pitching Flexible Propulsors: Experimental Assessment of Performance Characteristics
2014-05-09
...velocities pointing in this direction contribute to an overall momentum deficit in the wake, which may be quantitatively related to the drag force on... ...and explained the source of some of the additional vorticity in the wake of the foil that may have otherwise been ignored or treated as noise in the... ...is conducted through reduction of the measured force and torque data and multiple wake flow analysis techniques, including particle image...
Knee point search using cascading top-k sorting with minimized time complexity.
Wang, Zheng; Tseng, Shian-Shyong
2013-01-01
Anomaly detection systems and many other applications are frequently confronted with the problem of finding the largest knee point in the sorted curve for a set of unsorted points. This paper proposes an efficient knee point search algorithm with minimized time complexity, using cascading top-k sorting when an a priori probability distribution of the knee point is known. First, a top-k sort algorithm is proposed based on a quicksort variation. We divide the knee point search problem into multiple steps, and in each step an optimization problem for the selection number k is solved, where the objective function is defined as the expected time cost. Because the expected time cost of one step depends on that of the subsequent steps, we simplify the optimization problem by minimizing the maximum expected time cost. The posterior probability of the largest knee point distribution and the other parameters are updated before solving the optimization problem in each step. An example of source detection for DNS DoS flooding attacks is provided to illustrate the applications of the proposed algorithm.
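To make the two building blocks concrete, the Python sketch below shows a partial top-k sort and a simple distance-to-chord knee criterion. It uses a fixed k rather than the paper's per-step optimization of k from the knee-point prior, so treat it as an illustration of the idea only.

```python
import numpy as np

def top_k_desc(values, k):
    """Partial sort: return the k largest values in descending order
    without fully sorting the array (quickselect via np.partition)."""
    values = np.asarray(values)
    idx = np.argpartition(values, -k)[-k:]
    return np.sort(values[idx])[::-1]

def knee_index(sorted_desc):
    """Locate the knee of a descending curve as the point farthest
    from the chord joining its endpoints."""
    y = np.asarray(sorted_desc, dtype=float)
    x = np.arange(len(y))
    x0, y0, x1, y1 = x[0], y[0], x[-1], y[-1]
    # perpendicular distance from each point to the endpoint chord
    d = np.abs((y1 - y0) * x - (x1 - x0) * y + x1 * y0 - y1 * x0)
    d /= np.hypot(y1 - y0, x1 - x0)
    return int(np.argmax(d))

# usage: sort only the top k candidates, then look for the knee there
curve = top_k_desc(np.random.pareto(2.0, 100000), k=500)
print(knee_index(curve))
```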
Gaussian random bridges and a geometric model for information equilibrium
NASA Astrophysics Data System (ADS)
Mengütürk, Levent Ali
2018-03-01
The paper introduces a class of conditioned stochastic processes that we call Gaussian random bridges (GRBs) and proves some of their properties. Due to the anticipative representation of any GRB as the sum of a random variable and a Gaussian (T, 0)-bridge, GRBs can model noisy information processes in partially observed systems. In this spirit, we propose an asset pricing model with respect to what we call information equilibrium in a market with multiple sources of information. The idea is to work on a topological manifold endowed with a metric that enables us to systematically determine an equilibrium point of a stochastic system that can be represented by multiple points on that manifold at each fixed time. In doing so, we formulate GRB-based information diversity over a Riemannian manifold and show that it is pinned to zero over the boundary determined by Dirac measures. We then define an influence factor that controls the dominance of an information source in determining the best estimate of a signal in the L2-sense. When there are two sources, this allows us to construct information equilibrium as a functional of a geodesic-valued stochastic process, which is driven by an equilibrium convergence rate representing the signal-to-noise ratio. This leads us to derive price dynamics under what can be considered as an equilibrium probability measure. We also provide a semimartingale representation of Markovian GRBs associated with Gaussian martingales and a non-anticipative representation of fractional Brownian random bridges that can incorporate degrees of information coupling in a given system via the Hurst exponent.
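Schematically, the anticipative representation mentioned above can be written as follows. This is a sketch in illustrative notation, assuming the bridge term is pinned to zero at the terminal time T; the paper's exact scaling conventions may differ.

```latex
% A GRB as a random variable plus a Gaussian (T,0)-bridge:
% the bridge vanishes at time T, so the terminal value G_T = Z
% is "anticipated" by the representation from the outset.
G_t \;=\; Z \;+\; \beta_{tT}, \qquad 0 \le t \le T, \qquad \beta_{TT} = 0 \ \text{a.s.}
```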
Quadrupole ion traps and trap arrays: geometry, material, scale, performance.
Ouyang, Z; Gao, L; Fico, M; Chappell, W J; Noll, R J; Cooks, R G
2007-01-01
Quadrupole ion traps are reviewed, emphasizing recent developments, especially the investigation of new geometries, guided by multiple particle simulations such as the ITSIM program. These geometries include linear ion traps (LITs) and the simplified rectilinear ion trap (RIT). Various methods of fabrication are described, including the use of rapid prototyping apparatus (RPA), in which 3D objects are generated through point-by-point laser polymerization. Fabrication in silicon using multilayer semiconductor fabrication techniques has been used to construct arrays of micro-traps. The performance of instruments containing individual traps as well as arrays of traps of various sizes and geometries is reviewed. Two types of array are differentiated. In the first type, trap arrays constitute fully multiplexed mass spectrometers in which multiple samples are examined using multiple sources, analyzers and detectors, to achieve high throughput analysis. In the second, an array of individual traps acts collectively as a composite trap to increase trapping capacity and performance for a single sample. Much progress has been made in building miniaturized mass spectrometers; a specific example is a 10 kg hand-held tandem mass spectrometer based on the RIT mass analyzer. The performance of this instrument in air and water analysis, using membrane sampling, is described.
NASA Technical Reports Server (NTRS)
Maskew, B.
1976-01-01
A discrete singularity method has been developed for calculating the potential flow around two-dimensional airfoils. The objective was to calculate velocities at any arbitrary point in the flow field, including points that approach the airfoil surface. That objective was achieved and is demonstrated here on a Joukowski airfoil. The method used combined vortices and sources "submerged" a small distance below the airfoil surface and incorporated a near-field subvortex technique developed earlier. When a velocity calculation point approached the airfoil surface, the number of discrete singularities effectively increased (but only locally) to keep the point just outside the error region of the submerged singularity discretization. The method could be extended to three dimensions, and should improve nonlinear methods, which calculate interference effects between multiple wings, and which include the effects of force-free trailing vortex sheets. The capability demonstrated here would extend the scope of such calculations to allow the close approach of wings and vortex sheets (or vortices).
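For readers unfamiliar with discrete singularity superposition, the sketch below evaluates the velocity induced at arbitrary field points by 2D point vortices and point sources plus a free stream. It illustrates only the basic superposition; the paper's submerged placement and near-field subvortex refinement are not reproduced, and all names are illustrative.

```python
import numpy as np

def velocity_at(points, vortex_pos, gamma, source_pos, q, u_inf=(1.0, 0.0)):
    """Velocity (u, v) at `points` (N x 2) induced by 2D point vortices
    (positions `vortex_pos`, strengths `gamma`) and point sources
    (positions `source_pos`, strengths `q`), plus a uniform free stream.
    Uses the complex velocity w(z) = u - i*v."""
    z = points[:, 0] + 1j * points[:, 1]
    w = complex(u_inf[0], -u_inf[1]) * np.ones_like(z)
    for g, (x0, y0) in zip(gamma, vortex_pos):
        # point vortex: w = -i * Gamma / (2*pi*(z - z0))
        w += -1j * g / (2 * np.pi * (z - (x0 + 1j * y0)))
    for s, (x0, y0) in zip(q, source_pos):
        # point source: w = Q / (2*pi*(z - z0))
        w += s / (2 * np.pi * (z - (x0 + 1j * y0)))
    return np.column_stack([w.real, -w.imag])
```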
XRT and SphinX Joint Flare Study: AR 11024
NASA Astrophysics Data System (ADS)
Engell, Alexander; Sylwester, J.; Siarkowski, M.
2010-05-01
From 12:00 UT on July 3 through July 7, 2009, SphinX (Solar Photometer IN X-rays) observes 130 flares with active region (AR) 11024 being the only AR on disk. XRT (X-Ray Telescope) is able to observe 64 of these flare events. The combination of both instruments results in a flare study revealing (1) a relationship between flux emergence and flare rate, (2) that the presence of active region loops typically results in different flare morphologies (single and multiple loop flares) than when there is a lack of an active region loop environment, where more cusp and point-like flares are observed, (3) that cusp and point-like flares often originate from the same location, and (4) a distribution of flare temperatures corresponding to the different flare morphologies. The differences between the observed flare morphologies may arise because plasma heated during the flaring process is confined by nearby loop structures, as for the single and multiple loop flares, whereas cusp and point-like flares occur in an early-phase environment that lacks loops. The continuing flux emergence of AR 11024 likely provides different magnetic interactions and may be the source responsible for all of the flares.
PUBLIC EXPOSURE TO MULTIPLE RF SOURCES IN GHANA.
Deatanyah, P; Abavare, E K K; Menyeh, A; Amoako, J K
2018-03-16
This paper describes an effort to respond to the suggestion in the World Health Organization (WHO) research agenda to better quantify potential exposure levels from a range of radiofrequency (RF) sources at 200 public access locations in Ghana. Wide-band measurements were performed with a spectrum analyser and a log-periodic antenna using a three-point spatial averaging method. The overall results represented a maximum of 0.19% of the ICNIRP reference levels for public exposure. These results were generally lower than those found in some previous studies but were 58% (2.0 dB) greater than those found in similar work conducted in the USA. The major contributing sources of RF fields were identified to be FM broadcast and mobile base station sites. Three locations with the greatest measured RF fields could represent potential areas for epidemiological studies.
Resolving a z ~ 2 galaxy using adaptive coadded source-plane reconstruction
NASA Astrophysics Data System (ADS)
Sharma, Soniya; Richard, Johan; Kewley, Lisa; Yuan, Tiantian
2018-06-01
Natural magnification provided by gravitational lensing, coupled with integral field spectrographic (IFS) observations and adaptive optics (AO) imaging techniques, has become the frontier of spatially resolved studies of high-redshift galaxies (z > 1). Mass models of gravitational lenses hold the key to understanding the spatially resolved source-plane (unlensed) physical properties of the background lensed galaxies. Lensing mass models very sensitively control the accuracy and precision of source-plane reconstructions of the observed lensed arcs. Effective source-plane resolution, defined by the image-plane (observed) point spread function (PSF), makes it challenging to recover the unlensed source-plane surface brightness distribution. We conduct a detailed study to recover the source-plane physical properties of a z = 2 lensed galaxy using spatially resolved observations from two different multiple images of the lensed target. To deal with PSFs from two data sets on different multiple images of the galaxy, we employ a forward (source-to-image) approach to merge these independent observations. Using our novel technique, we are able to present a detailed analysis of the source-plane dynamics at scales much better than previously attainable through traditional image inversion methods. Moreover, our technique adapts to magnification, allowing us to achieve higher resolution in highly magnified regions of the source. We find strong evidence that this lensed system is a minor merger. In my talk, I present this case study of a z = 2 lensed galaxy and discuss the application of our algorithm to the plethora of lensed systems that will become available through future telescopes like JWST and GMT.
Multimode entanglement assisted QKD through a free-space maritime channel
NASA Astrophysics Data System (ADS)
Gariano, John; Djordjevic, Ivan B.
2017-10-01
When using quantum key distribution (QKD), one of the trade-offs for security is that the generation rate of a secret key is typically very low. Recent works have shown that using a weak coherent source allows for higher secret key generation rates compared to an entangled photon source when a channel with low loss is considered. In most cases, the system being studied operates over a fiber-optic communication channel. Here a theoretical QKD system using the BB92 protocol and entangled photons over a free-space maritime channel with multiple spatial modes is presented. The entangled photons are generated from a type-II spontaneous parametric down-conversion (SPDC) source. To employ multiple spatial modes, the transmit apparatus contains multiple SPDC sources, all driven by pump lasers assumed to have the same intensity. The receive apparatuses contain avalanche photodiodes (APDs), modeled on the NuCrypt CPDS-1000 detector and located at the focal point of the receive aperture lens. The transmitter is assumed to be located at Alice, and Bob is located 30 km away, implying that no channel crosstalk is introduced in the measurements at Alice's side due to turbulence. To help mitigate the effects of atmospheric turbulence, adaptive optics are considered at the transmitter and the receiver. An eavesdropper, Eve, is located 15 km from Alice and has no control over the devices at Alice or Bob. Eve performs the intercept-resend attack and listens to the communication over the public channel. Additionally, it is assumed that Eve can correct any aberrations caused by the atmospheric turbulence to determine which source the photon was transmitted from. One, four and nine spatial modes are considered, with and without applying adaptive optics, and compared to one another.
NASA Astrophysics Data System (ADS)
Vetrivel, Anand; Gerke, Markus; Kerle, Norman; Nex, Francesco; Vosselman, George
2018-06-01
Oblique aerial images offer views of both building roofs and façades, and thus have been recognized as a potential source for detecting severe building damage caused by destructive disaster events such as earthquakes. They therefore represent an important source of information for first responders and other stakeholders involved in the post-disaster response process. Several automated methods based on supervised learning have already been demonstrated for damage detection using oblique airborne images. However, they often do not generalize well when data from new, unseen sites need to be processed, hampering their practical use. Reasons for this limitation include image and scene characteristics, though the most prominent one relates to the image features used for training the classifier. Recently, features based on deep learning approaches, such as convolutional neural networks (CNNs), have been shown to be more effective than conventional hand-crafted features, and have become the state-of-the-art in many domains, including remote sensing. Moreover, oblique images are often captured with high block overlap, facilitating the generation of dense 3D point clouds - an ideal source from which to derive geometric characteristics. We hypothesized that the use of CNN features, either independently or in combination with 3D point cloud features, would yield improved performance in damage detection. To this end we used CNN and 3D features, both independently and in combination, using images from manned and unmanned aerial platforms over several geographic locations that vary significantly in terms of image and scene characteristics. A multiple-kernel-learning framework, an effective way of integrating features from different modalities, was used to combine the two sets of features for classification. The results are encouraging: while CNN features produced an average classification accuracy of about 91%, the integration of 3D point cloud features led to an additional improvement of about 3% (i.e. an average classification accuracy of 94%). The significance of the 3D point cloud features becomes more evident in the model transferability scenario (i.e., training and testing samples from different sites that vary slightly in the aforementioned characteristics), where the integration of CNN and 3D point cloud features improved the model transferability accuracy by up to a maximum of 7% compared with the accuracy achieved by CNN features alone. Overall, an average accuracy of 85% was achieved for the model transferability scenario across all experiments. Our main conclusion is that such an approach qualifies for practical use.
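The multiple-kernel idea, combining a kernel computed on CNN features with one computed on 3D point cloud features, can be sketched as follows. For simplicity the kernel weight is fixed rather than learned as in a true MKL solver, and all names and parameter values are illustrative.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel

def combined_kernel(X_cnn, X_3d, Y_cnn, Y_3d, w=0.5, g_cnn=1e-3, g_3d=1e-1):
    """Weighted sum of an RBF kernel on CNN features and an RBF kernel
    on 3D point cloud features; w stands in for learned MKL weights."""
    return (w * rbf_kernel(X_cnn, Y_cnn, gamma=g_cnn)
            + (1 - w) * rbf_kernel(X_3d, Y_3d, gamma=g_3d))

# training with a precomputed kernel matrix:
#   clf = SVC(kernel="precomputed")
#   clf.fit(combined_kernel(Xc_tr, X3_tr, Xc_tr, X3_tr), y_tr)
# prediction on test samples (kernel against the training set):
#   clf.predict(combined_kernel(Xc_te, X3_te, Xc_tr, X3_tr))
```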
Radiative transfer in multilayered random medium with laminar structure - Green's function approach
NASA Technical Reports Server (NTRS)
Karam, M. A.; Fung, A. K.
1986-01-01
For a multilayered random medium with a laminar structure, a Green's function approach is introduced to obtain the emitted intensity due to an arbitrary point source. It is then shown that the approach is applicable to both active and passive remote sensing. In active remote sensing, the computed radar backscattering cross section for the multilayered medium includes the effects of both volume multiple scattering and surface multiple scattering at the layer boundaries. In passive remote sensing, the brightness temperature is obtained for arbitrary temperature profiles in the layers. As an illustration, the brightness temperature and reflectivity are calculated for a bounded layer and compared with results in the literature.
NASA Astrophysics Data System (ADS)
Lee, Roh Pin
2016-04-01
Misconceptions and biases in energy perception could influence people's support for developments integral to the success of restructuring a nation's energy system. Science education, in equipping young adults with the cognitive skills and knowledge necessary to navigate in the confusing energy environment, could play a key role in paving the way for informed decision-making. This study examined German students' knowledge of the contribution of diverse energy sources to their nation's energy mix as well as their affective energy responses so as to identify implications for science education. Specifically, the study investigated whether and to what extent students hold mistaken beliefs about the role of multiple energy sources in their nation's energy mix, and assessed how misconceptions could act as self-generated reference points to underpin support/resistance of proposed developments. An in-depth analysis of spontaneous affective associations with five key energy sources also enabled the identification of underlying concerns driving people's energy responses and facilitated an examination of how affective perception, in acting as a heuristic, could lead to biases in energy judgment and decision-making. Finally, subgroup analysis differentiated by education and gender supported insights into a 'two culture' effect on energy perception and the challenge it poses to science education.
Breaking the acoustic diffraction barrier with localization optoacoustic tomography
NASA Astrophysics Data System (ADS)
Deán-Ben, X. Luís.; Razansky, Daniel
2018-02-01
Diffraction causes blurring of high-resolution features in images and has traditionally been associated with the resolution limit in light microscopy and other imaging modalities. The resolution of an imaging system can generally be assessed via its point spread function, corresponding to the image acquired from a point source. However, the precision in determining the position of an isolated source can greatly exceed the diffraction limit. By combining the estimated positions of multiple sources, localization-based imaging has resulted in groundbreaking methods such as super-resolution fluorescence optical microscopy and has also enabled ultrasound imaging of microvascular structures with unprecedented spatial resolution in deep tissues. Herein, we introduce localization optoacoustic tomography (LOT) and discuss the prospects of using localization imaging principles in optoacoustic imaging. LOT was experimentally implemented by real-time imaging of flowing particles in 3D with a recently developed volumetric optoacoustic tomography system. Provided the particles were separated by a distance larger than the diffraction-limited resolution, their individual locations could be accurately determined in each frame of the acquired image sequence, and the localization image was formed by superimposing a set of points corresponding to the localized positions of the absorbers. The presented results demonstrate that LOT can significantly enhance the well-established advantages of optoacoustic imaging by breaking the acoustic diffraction barrier in deep tissues and mitigating artifacts due to limited-view tomographic acquisitions.
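The localization principle described above, finding isolated-source positions per frame and superimposing them on a finer grid, can be sketched in a few lines of Python. This is a generic illustration, not the authors' reconstruction pipeline; the threshold, upsampling factor and function names are assumptions.

```python
import numpy as np
from scipy import ndimage

def localize_frame(frame, threshold):
    """Sub-resolution centroids of isolated absorbers in one frame,
    assuming sources are separated by more than the diffraction limit."""
    labels, n = ndimage.label(frame > threshold)
    return np.array(ndimage.center_of_mass(frame, labels, range(1, n + 1)))

def localization_image(frames, threshold, shape, upsample=8):
    """Superimpose localized positions from all frames on a grid finer
    than the diffraction-limited image grid."""
    img = np.zeros((shape[0] * upsample, shape[1] * upsample))
    for frame in frames:
        for r, c in localize_frame(frame, threshold):
            img[int(r * upsample), int(c * upsample)] += 1
    return img
```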
An iterative method for obtaining the optimum lightning location on a spherical surface
NASA Technical Reports Server (NTRS)
Chao, Gao; Qiming, MA
1991-01-01
A brief introduction to the basic principles of an eigen method used to obtain the optimum source location of lightning is presented. The location of the optimum source is obtained by using multiple direction finders (DFs) on a spherical surface. An improvement of this method, which takes the source-DF distance as a constant, is presented. It is pointed out that using signal strength as a weight factor is not ideal, because the inverse relation between signal strength and distance is inexact and the signal amplitude is inaccurate. An iterative calculation method is presented that uses the distance from the source to each DF as a weight factor. This improved method has higher accuracy and needs only a little more calculation time. Computer simulations for a four-DF system are presented to show the improvement in location accuracy achieved by the iterative method.
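The iterative distance-weighting idea can be illustrated with a planar, small-area approximation of the spherical problem: solve a weighted least-squares intersection of the bearing lines, then recompute the weights as one over the current source-DF distances and repeat. This Python sketch is illustrative only and omits the spherical geometry of the paper.

```python
import numpy as np

def locate(df_pos, bearings, iters=10):
    """Bearings-only location with 1/distance weighting.
    df_pos: (n, 2) DF positions; bearings: (n,) bearing angles in radians.
    Each DF defines the line sin(t)*x - cos(t)*y = sin(t)*xi - cos(t)*yi."""
    a = np.column_stack([np.sin(bearings), -np.cos(bearings)])
    b = a[:, 0] * df_pos[:, 0] + a[:, 1] * df_pos[:, 1]
    w = np.ones(len(b))                      # start unweighted
    for _ in range(iters):
        aw = a * w[:, None]
        est, *_ = np.linalg.lstsq(aw, b * w, rcond=None)
        # reweight by inverse of current source-DF distances
        w = 1.0 / np.maximum(np.hypot(*(est - df_pos).T), 1e-6)
    return est
```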
Guralnick, M J; Hammond, M A; Neville, B; Connor, R T
2008-12-01
In this longitudinal study, we examined the relationship between the sources and functions of social support and dimensions of child- and parent-related stress for mothers of young children with mild developmental delays. Sixty-three mothers completed assessments of stress and support at two time points. Multiple regression analyses revealed that parenting support during the early childhood period (i.e. advice on problems specific to their child and assistance with child care responsibilities), irrespective of source, consistently predicted most dimensions of parent stress assessed during the early elementary years and contributed unique variance. General support (i.e. primarily emotional support and validation) from various sources had other, less widespread effects on parental stress. The multidimensional perspective of the construct of social support that emerged suggested mechanisms mediating the relationship between support and stress and provided a framework for intervention.
Yang, Xiaoying; Tan, Lit; He, Ruimin; Fu, Guangtao; Ye, Jinyin; Liu, Qun; Wang, Guoqing
2017-12-01
It is increasingly recognized that climate change could impose both direct and indirect impacts on the quality of the water environment. Previous studies have mostly concentrated on evaluating the impacts of climate change on non-point source pollution in agricultural watersheds. Few studies have assessed the impacts of climate change on the water quality of river basins with complex point and non-point pollution sources. In view of this gap, this paper aims to establish a framework for stochastic assessment of the sensitivity of water quality to future climate change in a river basin with complex pollution sources. A sub-daily soil and water assessment tool (SWAT) model was developed to simulate the discharge, transport, and transformation of nitrogen from multiple point and non-point pollution sources in the upper Huai River basin of China. A weather generator was used to produce 50 years of synthetic daily weather data series for all 25 combinations of precipitation change (−10, 0, 10, 20, and 30%) and temperature change (increases of 0, 1, 2, 3, and 4 °C) scenarios. The generated daily rainfall series was disaggregated to the hourly scale and then used to drive the sub-daily SWAT model to simulate the nitrogen cycle under different climate change scenarios. Our results in the study region indicate that (1) both total nitrogen (TN) loads and concentrations are insensitive to temperature change; (2) TN loads are highly sensitive to precipitation change, while TN concentrations are moderately sensitive; (3) the impacts of climate change on TN concentrations are more spatiotemporally variable than the impacts on TN loads; and (4) the wide distributions of TN loads and TN concentrations under each individual climate change scenario illustrate the important role of climatic variability in affecting water quality conditions. In summary, the large variability in SWAT simulation results within and between climate change scenarios highlights the uncertainty of the impacts of climate change and the need to incorporate extreme conditions in managing the water environment and developing climate change adaptation and mitigation strategies.
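The 25-member scenario design is a simple delta-change grid, which the following Python sketch makes explicit. The perturbation function is an illustration of the delta-change step only; it is not the weather generator or the disaggregation scheme used in the study.

```python
from itertools import product

# 5 precipitation changes x 5 temperature changes = 25 scenarios
precip_changes = [-0.10, 0.0, 0.10, 0.20, 0.30]   # fractional change
temp_changes = [0, 1, 2, 3, 4]                     # degrees C added

def perturb(daily_precip, daily_temp, dp, dt):
    """Apply one delta-change scenario to synthetic daily series before
    disaggregation to the hourly scale for the sub-daily model."""
    return ([p * (1 + dp) for p in daily_precip],
            [t + dt for t in daily_temp])

scenarios = list(product(precip_changes, temp_changes))
assert len(scenarios) == 25
```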
Zhang, Yuan-zhu; He, Qiu-fang; Jiang, Yong-jun; Li, Yong
2016-04-15
A karst groundwater system develops complex multiple flows because of its special geological structure and the unique physical patterns of its aquifers. To investigate the characteristics and transport patterns of ammonia, nitrite and nitrate in epikarst water and a subterranean stream, water samples were collected monthly in a fast-urbanizing karst region. The results showed distinctive characteristics for the three forms of inorganic nitrogen. The concentration of inorganic nitrogen was stable in the epikarst water but fluctuated in the subterranean stream; epikarst water was less affected by rainfall and sewage than the subterranean stream. In epikarst water, the nitrate concentration was much higher than the ammonia concentration. Dissolved inorganic nitrogen, mainly from non-point source pollution related to agricultural activities, passed in and out of the epikarst water through a series of physical, chemical and biological processes in the epikarst zone, such as ammonification, adsorption and nitrification. In contrast, the subterranean stream showed NH₄⁺-N > NO₃⁻-N in dry seasons and NO₃⁻-N > NH₄⁺-N in rainy seasons. This can be attributed to sanitary and industrial sewage flowing into the subterranean river through sinkholes, fissures and grikes in the dry season, whereas dissolved inorganic nitrogen in the subterranean river came mainly from non-point source pollution in the wet season. Non-point source pollutants entered the subterranean water by two transport pathways: penetration with vadose flow through fissures and grikes, and conduit flow through sinkholes fed by surface runoff, soil water flow and epikarst flow. The export flux of DIN was 56.05 kg · (hm² · a)⁻¹, with NH₄⁺-N and NO₃⁻-N accounting for 46.03% and 52.51%, respectively. Based on the run-off division method, the contributions of point-source and non-point-source pollution to the export flux of DIN were 25.08% and 74.92%, respectively.
Birationality and Landau-Ginzburg Models
NASA Astrophysics Data System (ADS)
Clarke, Patrick
2017-08-01
We introduce a new technique for approaching birationality questions that arise in the mirror symmetry of complete intersections in toric varieties. As an application we answer affirmatively and conclusively the question of Batyrev-Nill (Integer points in polyhedra—geometry, number theory, representation theory, algebra, optimization, statistics, volume 452 of Contemporary Mathematics, American Mathematical Society, Providence, pp 35-66, 2008).
Military Role in Countering Terrorist Use of Weapons of Mass Destruction
1999-04-01
chemical and biological mobile point detection. "The M21 Remote Sensing Chemical Agent Alarm (RSCAAL) is an automatic scanning, passive infrared sensor... The M21 detects nerve and blister agent clouds based on changes in the background infrared spectra caused by the presence of the agent vapor."15 ... required if greater than 3 years since last vaccine. VEE: Yes (multiple vaccines required). VHF: No. Botulism: Yes. SEB: No. Ricin: No. Mycotoxins: No. Source...
NASA Technical Reports Server (NTRS)
Elkins-Tanton, Linda T.; Chatterjee, Nilanjan; Grove, Timothy L.
2003-01-01
Phase equilibrium experiments on the most magnesian Apollo 15C green picritic glass composition indicate a multiple saturation point with olivine and orthopyroxene at 1520 °C and 1.3 GPa (about 260 km depth in the moon). This composition has the highest Mg# of any lunar picritic glass and the shallowest multiple saturation point. Experiments on an Apollo 15A composition indicate a multiple saturation point with olivine and orthopyroxene at 1520 °C and 2.2 GPa (about 440 km depth in the moon). The importance of the distinctive compositional trends of the Apollo 15 groups A, B, and C picritic glasses merits the reanalysis of NASA slide 15426,72 with modern electron microprobe techniques. We confirm the compositional trends reported by Delano (1979, 1986) in the major element oxides SiO2, TiO2, Al2O3, Cr2O3, FeO, MnO, MgO, and CaO, and we also obtained data for the trace elements P2O5, K2O, Na2O, NiO, S, Cu, Cl, Zn, and F. Petrogenetic modeling demonstrates that the Apollo 15 A-B-C glass trends could not have been formed by fractional crystallization or any continuous assimilation/fractional crystallization (AFC) process. The B and C glass compositional trends could not have been formed by batch or incremental melting of an olivine + orthopyroxene source or any other homogeneous source, though the A glasses may have been formed by congruent melting over a small pressure range at depth. The B compositional trend is well modeled by starting with an intermediate A composition and assimilating a shallower, melted cumulate, and the C compositional trend is well modeled by a second assimilation event. The assimilation process envisioned is one in which heat and mass transfer were separated in space and time. In an initial intrusive event, a picritic magma crystallized and provided heat to melt magma ocean cumulates. In a later replenishment event, the picritic magma incrementally mixed with the melted cumulate (creating the compositional trends in the green glass data set), ascended to the lunar surface, and erupted as a fire fountain. A barometer created from multiple saturation points provides a depth estimate of other glasses in the A-B-C trend and of the depths of assimilation. This barometer demonstrates that the Apollo 15 A-B-C trend originated over a depth range of approximately 460 km to approximately 260 km within the moon.
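As a rough consistency check on the pressure-to-depth conversion quoted above, a constant-density lithostatic approximation, h = P / (ρ g), gets within about 10% of the stated depths. The mean mantle density here is my assumption, not the paper's calibrated barometer, which is built from the experimental multiple saturation points themselves.

```python
# Lithostatic depth from multiple-saturation pressure, assuming a single
# mean lunar mantle density (assumption; the paper's barometer is
# calibrated experimentally and uses a more realistic interior model).
RHO = 3300.0    # kg/m^3, assumed mean density
G_MOON = 1.62   # m/s^2, lunar surface gravity

def depth_km(p_gpa):
    return p_gpa * 1e9 / (RHO * G_MOON) / 1000.0

print(round(depth_km(1.3)))  # ~243 km, vs ~260 km quoted for 1.3 GPa
print(round(depth_km(2.2)))  # ~412 km, vs ~440 km quoted for 2.2 GPa
```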
Multiple Auto-Adapting Color Balancing for Large Number of Images
NASA Astrophysics Data System (ADS)
Zhou, X.
2015-04-01
This paper presents a powerful technology for color balance between images. It works not only for small numbers of images but also for unlimited large numbers of images. Multiple adaptive methods are used. To obtain a color-seamless mosaic dataset, local color is adjusted adaptively towards the target color. Local statistics of the source images are computed based on the so-called adaptive dodging window. The adaptive target colors are statistically computed according to multiple target models. The gamma function is derived from the adaptive target and the adaptive source local statistics, and is applied to the source images to obtain the color-balanced output images. Five target color surface models are proposed: color point (or single color), color grid, and 1st, 2nd and 3rd order 2D polynomials. Least squares fitting is used to obtain the polynomial target color surfaces. Target color surfaces are computed automatically based on all source images or on an external target image. Some special objects such as water and snow are filtered by a percentage cut or a given mask. The performance is extremely fast and supports on-the-fly color balancing for large numbers of images (possibly hundreds of thousands of images). The detailed algorithm and formulae are described, and rich examples, including big mosaic datasets (e.g., one containing 36,006 images), are given. Excellent results and performance are presented. The results show that this technology can be used successfully on various imagery to obtain color-seamless mosaics. This algorithm has been used successfully in ESRI ArcGIS.
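The central step, deriving a per-pixel gamma from local source statistics and a target color, can be sketched as follows. This is a minimal illustration, assuming a uniform-filter approximation of the adaptive dodging window and per-channel processing; it is not the paper's full multi-model pipeline.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def gamma_balance(src, target_mean, window=256):
    """Per-pixel gamma that maps the local mean of one channel of the
    source image onto the target color surface.
    src: float array scaled to (0, 1]; target_mean: scalar or array
    broadcastable to src (the target color surface)."""
    eps = 1e-4
    local_mean = np.clip(uniform_filter(src, size=window), eps, 1.0)
    tgt = np.clip(target_mean, eps, 1.0)
    # choose gamma so that local_mean ** gamma == tgt at every pixel
    gamma = np.log(tgt) / np.log(local_mean)
    return np.clip(src, eps, 1.0) ** gamma
```

Because the gamma is computed from local statistics, smooth brightness and color gradients across an image are pulled toward the target surface without hard seams at image borders.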
Eta Carinae: Viewed from Multiple Vantage Points
NASA Technical Reports Server (NTRS)
Gull, Theodore
2007-01-01
The central source of Eta Carinae and its ejecta is a massive binary system buried within a massive interacting wind structure that envelops the two stars. However, the hot, less massive companion blows a small cavity in the very massive primary wind and ionizes a portion of the massive wind just beyond the wind-wind boundary. We gain insight into this complex structure by examining the spatially resolved Space Telescope Imaging Spectrograph (STIS) spectra of the central source (0.1") together with the wind structure, which extends out to nearly an arcsecond (2300 AU), and the wind-blown boundaries, plus the ejecta of the Little Homunculus. Moreover, the spatially resolved Very Large Telescope/UltraViolet Echelle Spectrograph (VLT/UVES) stellar spectrum (one arcsecond) and spatially sampled spectra across the foreground lobe of the Homunculus provide us vantage points from different angles relative to the line of sight. Examples of wind line profiles of Fe II, the highly excited [Fe III], [Ne III], [Ar III] and [S III], and other lines will be presented.
Ford Motor Company NDE facility shielding design.
Metzger, Robert L; Van Riper, Kenneth A; Jones, Martin H
2005-01-01
Ford Motor Company proposed the construction of a large non-destructive evaluation laboratory for radiography of automotive power train components. The authors were commissioned to design the shielding and to survey the completed facility for compliance with radiation doses for occupationally and non-occupationally exposed personnel. The two X-ray sources are Varian Linatron 3000 accelerators operating at 9-11 MV. One performs computed tomography of automotive transmissions, while the other does real-time radiography of operating engines and transmissions. The shield thicknesses for the primary barrier and all secondary barriers were determined by point-kernel techniques. Point-kernel techniques did not work well for skyshine calculations or for locations where multiple sources (e.g. tube head leakage and various scatter fields) affected doses. Shielding for these areas was determined using transport calculations. A number of MCNP [Briesmeister, J. F. MCNP: a general Monte Carlo N-particle transport code, version 4B. Los Alamos National Laboratory Manual (1997)] calculations focused on skyshine estimates and the office areas. Measurements in the operational facility confirmed the shielding calculations.
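A point-kernel barrier estimate reduces to an attenuated inverse-square law with a buildup factor, and the required thickness follows by root finding. The sketch below is a generic illustration with a placeholder linear buildup factor; real shielding designs use tabulated buildup data and source terms, so all values here are assumptions.

```python
import math

def dose_rate(S, mu, t, r, buildup=lambda mut: 1.0 + mut):
    """Point-kernel dose-rate estimate behind a slab: source strength S,
    attenuation coefficient mu (1/cm), slab thickness t (cm), distance
    r (cm). The Taylor-like linear buildup is a placeholder only."""
    mut = mu * t
    return S * buildup(mut) * math.exp(-mut) / (4.0 * math.pi * r * r)

def required_thickness(S, mu, r, limit, t_hi=1000.0):
    """Bisect for the smallest thickness meeting a dose limit
    (dose_rate decreases monotonically with t for this buildup)."""
    lo, hi = 0.0, t_hi
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if dose_rate(S, mu, mid, r) > limit else (lo, mid)
    return hi
```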
NASA Astrophysics Data System (ADS)
Ostrovski, Fernanda; McMahon, Richard G.; Connolly, Andrew J.; Lemon, Cameron A.; Auger, Matthew W.; Banerji, Manda; Hung, Johnathan M.; Koposov, Sergey E.; Lidman, Christopher E.; Reed, Sophie L.; Allam, Sahar; Benoit-Lévy, Aurélien; Bertin, Emmanuel; Brooks, David; Buckley-Geer, Elizabeth; Carnero Rosell, Aurelio; Carrasco Kind, Matias; Carretero, Jorge; Cunha, Carlos E.; da Costa, Luiz N.; Desai, Shantanu; Diehl, H. Thomas; Dietrich, Jörg P.; Evrard, August E.; Finley, David A.; Flaugher, Brenna; Fosalba, Pablo; Frieman, Josh; Gerdes, David W.; Goldstein, Daniel A.; Gruen, Daniel; Gruendl, Robert A.; Gutierrez, Gaston; Honscheid, Klaus; James, David J.; Kuehn, Kyler; Kuropatkin, Nikolay; Lima, Marcos; Lin, Huan; Maia, Marcio A. G.; Marshall, Jennifer L.; Martini, Paul; Melchior, Peter; Miquel, Ramon; Ogando, Ricardo; Plazas Malagón, Andrés; Reil, Kevin; Romer, Kathy; Sanchez, Eusebio; Santiago, Basilio; Scarpine, Vic; Sevilla-Noarbe, Ignacio; Soares-Santos, Marcelle; Sobreira, Flavia; Suchyta, Eric; Tarle, Gregory; Thomas, Daniel; Tucker, Douglas L.; Walker, Alistair R.
2017-03-01
We present the discovery and preliminary characterization of a gravitationally lensed quasar with a source redshift z_s = 2.74 and image separation of 2.9 arcsec lensed by a foreground z_l = 0.40 elliptical galaxy. Since optical observations of gravitationally lensed quasars show the lens system as a superposition of multiple point sources and a foreground lensing galaxy, we have developed a morphology-independent multi-wavelength approach to the photometric selection of lensed quasar candidates based on Gaussian Mixture Models (GMM) supervised machine learning. Using this technique and gi multicolour photometric observations from the Dark Energy Survey (DES), near-IR JK photometry from the VISTA Hemisphere Survey (VHS) and WISE mid-IR photometry, we have identified a candidate system with two catalogue components with I_AB = 18.61 and I_AB = 20.44 comprising an elliptical galaxy and two blue point sources. Spectroscopic follow-up with NTT and the use of an archival AAT spectrum show that the point sources can be identified as a lensed quasar with an emission line redshift of z = 2.739 ± 0.003 and a foreground early-type galaxy with z = 0.400 ± 0.002. We model the system as a singular isothermal ellipsoid and find the Einstein radius θ_E ~ 1.47 arcsec, enclosed mass M_enc ~ 4 × 10^11 M⊙ and a time delay of ~52 d. The relatively wide separation, month-scale time delay and high redshift make this an ideal system for constraining the expansion rate beyond a redshift of 1.
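The GMM-based selection amounts to fitting one mixture model per training class in multicolour space and ranking objects by likelihood. The Python sketch below illustrates that idea; the class names, colour choices and component counts are assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_models(colors_by_class, n_components=3, seed=0):
    """Fit one GMM per training class (e.g. 'quasar', 'star', 'galaxy')
    on arrays of colours such as g-i, i-J, J-K, W1-W2."""
    return {name: GaussianMixture(n_components, random_state=seed).fit(X)
            for name, X in colors_by_class.items()}

def classify(models, X):
    """Assign each object the class with the highest GMM log-likelihood;
    candidates classified as quasars then go to visual inspection and
    spectroscopic follow-up."""
    names = list(models)
    scores = np.column_stack([models[n].score_samples(X) for n in names])
    return np.array(names)[scores.argmax(axis=1)]
```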
NASA Astrophysics Data System (ADS)
Currie, Thayne; Cloutier, Ryan; Brittain, Sean; Grady, Carol; Burrows, Adam; Muto, Takayuki; Kenyon, Scott J.; Kuchner, Marc J.
2015-12-01
We report Gemini Planet Imager H-band high-contrast imaging/integral field spectroscopy and polarimetry of the HD 100546, a 10 Myr old early-type star recently confirmed to host a thermal infrared (IR) bright (super-)Jovian protoplanet at wide separation, HD 100546 b. We resolve the inner disk cavity in polarized light, recover the thermal IR-bright arm, and identify one additional spiral arm. We easily recover HD 100546 b and show that much of its emission plausibly originates from an unresolved point source. The point-source component of HD 100546 b has extremely red IR colors compared to field brown dwarfs, qualitatively similar to young cloudy super-Jovian planets; however, these colors may instead indicate that HD 100546 b is still accreting material from a circumplanetary disk. Additionally, we identify a second point-source-like peak at r_proj ~ 14 AU, located just interior to or at the inner disk wall consistent with being a <10-20 M_J candidate second protoplanet—“HD 100546 c”—and lying within a weakly polarized region of the disk but along an extension of the thermal IR-bright spiral arm. Alternatively, it is equally plausible that this feature is a weakly polarized but locally bright region of the inner disk wall. Astrometric monitoring of this feature over the next 2 years and emission line measurements could confirm its status as a protoplanet, rotating disk hot spot that is possibly a signpost of a protoplanet, or a stationary emission source from within the disk.
NASA Astrophysics Data System (ADS)
Zarnetske, J. P.; Abbott, B. W.; Bowden, W. B.; Iannucci, F.; Griffin, N.; Parker, S.; Pinay, G.; Aanderud, Z.
2017-12-01
Dissolved organic carbon (DOC), nutrients, and other solute concentrations are increasing in rivers across the Arctic. Two hypotheses have been proposed to explain these trends: (1) distributed, top-down permafrost degradation, and (2) discrete, point-source delivery of DOC and nutrients from permafrost collapse features (thermokarst). While long-term monitoring at a single station cannot discriminate between these mechanisms, synoptic sampling of multiple points in the stream network could reveal the spatial structure of solute sources. In this context, we sampled carbon and nutrient chemistry three times over two years in 119 subcatchments of three distinct Arctic catchments (North Slope, Alaska). Subcatchments ranged from 0.1 to 80 km², and included three distinct types of Arctic landscapes: mountainous, tundra, and glacial-lake catchments. We quantified the stability of spatial patterns in synoptic water chemistry and analyzed high-frequency time series from the catchment outlets across the thaw season to identify source areas for DOC, nutrients, and major ions. We found that variance in solute concentrations between subcatchments collapsed at spatial scales of 1 to 20 km², indicating a continuum of diffuse- and point-source dynamics, depending on solute and catchment characteristics (e.g. reactivity, topography, vegetation, surficial geology). Spatially distributed mass balance revealed conservative transport of DOC and nitrogen, and indicated possibly strong in-stream retention of phosphorus, providing a network-scale confirmation of previous reach-scale studies in these Arctic catchments. Overall, we present new approaches to analyzing synoptic data for change detection and quantification of ecohydrological mechanisms in ecosystems in the Arctic and beyond.
Koch, Jeffrey A [Livermore, CA
2003-07-08
An x-ray interferometer for analyzing high density plasmas and optically opaque materials includes a point-like x-ray source for providing a broadband x-ray source. The x-rays are directed through a target material and then are reflected by a high-quality ellipsoidally-bent imaging crystal to a diffraction grating disposed at 1× magnification. A spherically-bent imaging crystal is employed when the x-rays that are incident on the crystal surface are normal to that surface. The diffraction grating produces multiple beams which interfere with one another to produce an interference pattern which contains information about the target. A detector is disposed at the position of the image of the target produced by the interfering beams.
Strategies for satellite-based monitoring of CO2 from distributed area and point sources
NASA Astrophysics Data System (ADS)
Schwandner, Florian M.; Miller, Charles E.; Duren, Riley M.; Natraj, Vijay; Eldering, Annmarie; Gunson, Michael R.; Crisp, David
2014-05-01
Atmospheric CO2 budgets are controlled by the strengths, as well as the spatial and temporal variabilities, of CO2 sources and sinks. Natural CO2 sources and sinks are dominated by the vast areas of the oceans and the terrestrial biosphere. In contrast, anthropogenic and geogenic CO2 sources are dominated by distributed area and point sources, which may constitute as much as 70% of anthropogenic (e.g., Duren & Miller, 2012) and over 80% of geogenic emissions (Burton et al., 2013). Comprehensive assessments of CO2 budgets necessitate robust and highly accurate satellite remote sensing strategies that address the competing and often conflicting requirements for sampling over disparate space and time scales.

Spatial variability: The spatial distribution of anthropogenic sources is dominated by patterns of production, storage, transport and use. In contrast, geogenic variability is almost entirely controlled by endogenic geological processes, except where surface gas permeability is modulated by soil moisture. Satellite remote sensing solutions will thus have to vary greatly in spatial coverage and resolution to address distributed area sources and point sources alike.

Temporal variability: While biogenic sources are dominated by diurnal and seasonal patterns, anthropogenic sources fluctuate over a greater variety of time scales, from diurnal and weekly to seasonal cycles, driven by both economic and climatic factors. Geogenic sources typically vary on time scales of days to months (geogenic sources sensu stricto are not fossil fuels but volcanoes and hydrothermal and metamorphic sources). Current ground-based monitoring networks for anthropogenic and geogenic sources record data on minute to weekly temporal scales. Satellite remote sensing solutions would have to capture temporal variability through revisit frequency or point-and-stare strategies.

Space-based remote sensing offers the potential of global coverage by a single sensor. However, no single combination of orbit and sensor provides the full range of temporal sampling needed to characterize distributed area and point source emissions. For instance, point source emission patterns will vary with source strength, wind speed and direction. Because wind speed, direction and other environmental factors change rapidly, short-term variabilities should be sampled. For detailed target selection and pointing verification, important lessons have already been learned and strategies devised during JAXA's GOSAT mission (Schwandner et al., 2013). The fact that competing spatial and temporal requirements drive satellite remote sensing sampling strategies dictates a systematic, multi-factor consideration of potential solutions. Factors to consider include vista, revisit frequency, integration times, spatial resolution, and spatial coverage. No single satellite-based remote sensing solution can address this problem for all scales. It is therefore of paramount importance for the international community to develop and maintain a constellation of atmospheric CO2 monitoring satellites that complement each other in their temporal and spatial observation capabilities:

Polar sun-synchronous orbits (fixed local solar time, no diurnal information) with agile pointing allow global sampling of known distributed area and point sources like megacities, power plants and volcanoes, with daily to weekly revisits and moderate to high spatial resolution. Extensive targeting of distributed area and point sources comes at the expense of reduced mapping or spatial coverage, and of the important contextual information that comes with large-scale contiguous spatial sampling.

Polar sun-synchronous orbits with push-broom swath-mapping but limited pointing agility may allow mapping of individual source plumes and their spatial variability, but will depend on fortuitous environmental conditions during the observing period. These solutions typically have longer times between revisits, limiting their ability to resolve temporal variations.

Geostationary and non-sun-synchronous low-Earth orbits (precessing local solar time, diurnal information possible) with agile pointing have the potential to provide comprehensive mapping of distributed area sources such as megacities, with longer stare times and multiple revisits per day, at the expense of global access and spatial coverage.

An ad hoc CO2 remote sensing constellation is emerging. NASA's OCO-2 satellite (launch July 2014) joins JAXA's GOSAT satellite in orbit. These will be followed by GOSAT-2 and NASA's OCO-3 on the International Space Station as early as 2017. Additional polar orbiting satellites (e.g., CarbonSat, under consideration at ESA) and geostationary platforms may also become available. However, the individual assets have been designed with independent science goals and requirements, and with limited consideration of coordinated observing strategies. Every effort must be made to maximize the science return from this constellation. We discuss the opportunities to exploit the complementary spatial and temporal coverage provided by these assets, as well as the crucial gaps in the capabilities of this constellation.

References
Burton, M.R., Sawyer, G.M., and Granieri, D. (2013). Deep carbon emissions from volcanoes. Rev. Mineral. Geochem. 75: 323-354.
Duren, R.M., and Miller, C.E. (2012). Measuring the carbon emissions of megacities. Nature Climate Change 2, 560-562.
Schwandner, F.M., Oda, T., Duren, R., Carn, S.A., Maksyutov, S., Crisp, D., and Miller, C.E. (2013). Scientific Opportunities from Target-Mode Capabilities of GOSAT-2. White Paper, NASA Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA, 6 pp., March 2013.
NASA Astrophysics Data System (ADS)
Strasburger, David; Gorjian, Varoujan; Burke, Todd; Childs, Linda; Odden, Caroline; Tambara, Kevin; Abate, Antoinette; Akhtar, Nadir; Beach, Skyler; Bhojwani, Ishaan; Brown, Caden; Dear, AnnaMaria; Dumont, Theodore; Harden, Olivia; Joli-Coeur, Laurent; Nahirny, Rachel; Nakahira, Andie; Nix, Sabine; Orgul, Sarp; Parry, Johnny; Picken, John; Taylor, Isabel; Toner, Emre; Turner, Aspen; Xu, Jessica; Zhu, Emily
2015-01-01
The Spitzer Space Telescope's original cryogenic mission imaged roughly 42 million sources, most of which were incidental and never specifically targeted for research. These have now been compiled in the publicly accessible Spitzer Enhanced Imaging Products (SEIP) catalog. The SEIP stores millions of never-before-examined sources that happened to be in the same field of view as objects specifically selected for study. This project examined the catalog to isolate previously unknown infrared excess (IRXS) candidates. The culling process utilized four steps. First, we considered only those objects with signal-to-noise ratios of at least 10 to 1 in the following five wavelengths: 3.6, 4.5, 5.8, 8 and 24 microns, which narrowed the source list to about one million. Second, objects were removed from highly studied regions, such as the galactic plane and previously conducted infrared surveys. This further reduced the population of sources to 283,758. Third, the remaining sources were plotted using a [3.6]-[4.5] vs. [8]-[24] color-color diagram to isolate IRXS candidates. Fourth, multiple images of sixty-three outlier points from the extrema of the color-color diagram were examined to verify that the sources had been cross-matched correctly and to exclude any candidate sources that may have been compromised due to image artifacts or field crowding. The team will ultimately provide statistics for the prevalence of IRXS sources in the SEIP catalog and provide analysis of those extreme outliers from the main locus of points. This research was made possible through the NASA/IPAC Teacher Archive Research Program (NITARP) and was funded by the NASA Astrophysics Data Program.
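Steps 1 and 3 of this culling lend themselves to a compact sketch. The following Python fragment, with hypothetical column names (the actual SEIP catalog labels differ), applies the 10:1 signal-to-noise cut in all five bands and computes the two color axes:

```python
import numpy as np
import pandas as pd

# Hypothetical band labels and column names, for illustration only.
BANDS = ["3.6", "4.5", "5.8", "8", "24"]

def cull_irxs_candidates(catalog: pd.DataFrame) -> pd.DataFrame:
    """Apply the SNR >= 10 cut in all five bands and compute the color axes."""
    good = np.ones(len(catalog), dtype=bool)
    for band in BANDS:
        snr = catalog[f"flux_{band}"] / catalog[f"err_{band}"]
        good &= (snr >= 10.0).to_numpy()
    sel = catalog[good].copy()
    # IR-excess candidates are outliers in [3.6]-[4.5] vs. [8]-[24].
    sel["c1"] = sel["mag_3.6"] - sel["mag_4.5"]
    sel["c2"] = sel["mag_8"] - sel["mag_24"]
    return sel
```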
NASA Astrophysics Data System (ADS)
Kim, Jeong-Gyu; Kim, Woong-Tae; Ostriker, Eve C.; Skinner, M. Aaron
2017-12-01
We present an implementation of an adaptive ray-tracing (ART) module in the Athena hydrodynamics code that accurately and efficiently handles the radiative transfer involving multiple point sources on a three-dimensional Cartesian grid. We adopt a recently proposed parallel algorithm that uses nonblocking, asynchronous MPI communications to accelerate transport of rays across the computational domain. We validate our implementation through several standard test problems, including the propagation of radiation in vacuum and the expansions of various types of H II regions. Additionally, scaling tests show that the cost of a full ray trace per source remains comparable to that of the hydrodynamics update on up to ~10^3 processors. To demonstrate application of our ART implementation, we perform a simulation of star cluster formation in a marginally bound, turbulent cloud, finding that its star formation efficiency is 12% when both radiation pressure forces and photoionization by UV radiation are treated. We directly compare the radiation forces computed from the ART scheme with those from the M1 closure relation. Although the ART and M1 schemes yield similar results on large scales, the latter is unable to resolve the radiation field accurately near individual point sources.
Field demonstration of foam injection to confine a chlorinated solvent source zone.
Portois, Clément; Essouayed, Elyess; Annable, Michael D; Guiserix, Nathalie; Joubert, Antoine; Atteia, Olivier
2018-05-01
A novel approach using foam to manage hazardous waste was successfully demonstrated under active site conditions. The purpose of the foam was to divert groundwater flow, that would normally enter the source zone area, to reduce dissolved contaminant release to the aquifer. During the demonstration, foam was pre generated and directly injected surrounding the chlorinated solvent source zone. Despite the constraints related to the industrial activities and non-optimal position of the injection points, the applicability and effectiveness of the approach have been highlighted using multiple metrics. A combination of measurements and modelling allowed definition of the foam extent surrounding each injection point, and this appears to be the critical metric to define the success of the foam injection approach. Information on the transport of chlorinated solvents in groundwater showed a decrease of contaminant flux by a factor of 4.4 downstream of the confined area. The effective permeability reduction was maintained over a period of three months. The successful containment provides evidence for consideration of the use of foam to improve traditional flushing techniques, by increasing the targeting of contaminants by remedial agents. Copyright © 2018 Elsevier B.V. All rights reserved.
High dose rate brachytherapy source measurement intercomparison.
Poder, Joel; Smith, Ryan L; Shelton, Nikki; Whitaker, May; Butler, Duncan; Haworth, Annette
2017-06-01
This work presents a comparison of air kerma rate (AKR) measurements performed by multiple radiotherapy centres for a single HDR 192Ir source. Two separate groups (consisting of 15 centres) performed AKR measurements at one of two host centres in Australia. Each group travelled to one of the host centres and measured the AKR of a single 192Ir source using their own equipment and local protocols. Results were compared to the 192Ir source calibration certificate provided by the manufacturer by means of a ratio of measured to certified AKR. The comparisons showed remarkably consistent results with the maximum deviation in measurement from the decay-corrected source certificate value being 1.1%. The maximum percentage difference between any two measurements was less than 2%. The comparisons demonstrated the consistency of well-chambers used for 192Ir AKR measurements in Australia, despite the lack of a local calibration service, and served as a valuable focal point for the exchange of ideas and dosimetry methods.
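The certificate comparison above amounts to a decay correction followed by a ratio; a minimal sketch, assuming the physical half-life of 192Ir (73.83 days) and illustrative names:

```python
IR192_HALF_LIFE_DAYS = 73.83  # physical half-life of 192Ir

def akr_ratio(measured_akr, certificate_akr, days_since_calibration):
    """Ratio of measured AKR to the decay-corrected certificate value."""
    decayed = certificate_akr * 2.0 ** (-days_since_calibration / IR192_HALF_LIFE_DAYS)
    return measured_akr / decayed

# A ratio of 1.011 would correspond to the 1.1% maximum deviation reported above.
```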
WASS: an open-source stereo processing pipeline for sea waves 3D reconstruction
NASA Astrophysics Data System (ADS)
Bergamasco, Filippo; Benetazzo, Alvise; Torsello, Andrea; Barbariol, Francesco; Carniel, Sandro; Sclavo, Mauro
2017-04-01
Stereo 3D reconstruction of ocean waves is gaining more and more popularity in the oceanographic community. In fact, recent advances in both computer vision algorithms and CPU processing power now allow the study of spatio-temporal wave fields with unprecedented accuracy, especially at small scales. Even if simple in theory, multiple details are difficult for a practitioner to master, so that the implementation of a 3D reconstruction pipeline is in general considered a complex task. For instance, camera calibration, reliable stereo feature matching and mean sea-plane estimation are all factors for which a well-designed implementation can make the difference in obtaining valuable results. For this reason, we believe that the open availability of a well-tested software package that automates the steps from stereo images to a 3D point cloud would be a valuable addition for future research in this area. We present WASS, a completely open-source stereo processing pipeline for sea waves 3D reconstruction, available at http://www.dais.unive.it/wass/. Our tool completely automates the recovery of dense point clouds from stereo images by providing three main functionalities. First, WASS can automatically recover the extrinsic parameters of the stereo rig (up to scale) so that no delicate calibration has to be performed in the field. Second, WASS implements a fast 3D dense stereo reconstruction procedure so that an accurate 3D point cloud can be computed from each stereo pair. We rely on the well-consolidated OpenCV library both for the image stereo rectification and disparity map recovery. Lastly, a set of 2D and 3D filtering techniques, applied both to the disparity map and the produced point cloud, removes the vast majority of erroneous points that naturally arise when analyzing the optically complex nature of the water surface (examples are sun glares, large white-capped areas, fog and water aerosol, etc.). Developed to be as fast as possible, WASS can process roughly four 5-MPixel stereo frames per minute (on a consumer i7 CPU) to produce a sequence of outlier-free point clouds with more than 3 million points each. Finally, it comes with an easy-to-use interface and is designed to be scalable across multiple parallel CPUs.
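The OpenCV-based dense reconstruction stage can be sketched roughly as below; the block-matcher parameters are illustrative, and WASS's actual settings, rectification and filtering are considerably more elaborate:

```python
import cv2
import numpy as np

def dense_point_cloud(rect_left, rect_right, Q):
    """Disparity and 3D reprojection for one rectified stereo pair.
    Q is the 4x4 reprojection matrix produced by stereo rectification."""
    sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128,
                                 blockSize=5, uniquenessRatio=10)
    # OpenCV returns fixed-point disparities scaled by 16.
    disp = sgbm.compute(rect_left, rect_right).astype(np.float32) / 16.0
    points = cv2.reprojectImageTo3D(disp, Q)  # H x W x 3 point cloud
    mask = disp > disp.min()                  # crude invalid-pixel filter
    return points[mask]
```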
Source counting in MEG neuroimaging
NASA Astrophysics Data System (ADS)
Lei, Tianhu; Dell, John; Magee, Ralphy; Roberts, Timothy P. L.
2009-02-01
Magnetoencephalography (MEG) is a multi-channel, functional imaging technique. It measures the magnetic field produced by the primary electric currents inside the brain via a sensor array composed of a large number of superconducting quantum interference devices. The measurements are then used to estimate the locations, strengths, and orientations of these electric currents. This magnetic source imaging technique encompasses a great variety of signal processing and modeling techniques, including inverse-problem methods, MUltiple SIgnal Classification (MUSIC), beamforming (BF), and Independent Component Analysis (ICA). A key problem with the inverse-problem, MUSIC and ICA methods is that the number of sources must be known a priori. Although the BF method scans the source space on a point-to-point basis, the selection of peaks as sources is finally made by subjective thresholding; in practice, expert data analysts often select results based on physiological plausibility. This paper presents an eigenstructure approach for source number detection in MEG neuroimaging. By sorting eigenvalues of the estimated covariance matrix of the acquired MEG data, the measured data space is partitioned into the signal and noise subspaces. The partition is implemented using information-theoretic criteria. The order of the signal subspace gives an estimate of the number of sources. The approach does not refer to any model or hypothesis and hence is an entirely data-led operation. It possesses a clear physical interpretation and an efficient computation procedure. The theoretical derivation of this method and the results obtained using real MEG data are included to demonstrate their agreement and the promise of the proposed approach.
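As a sketch of the eigenstructure idea, the following fragment applies a Wax-Kailath-style minimum description length (MDL) criterion to the sorted eigenvalues of the sensor covariance matrix; whether this is the exact information-theoretic criterion used by the authors is an assumption:

```python
import numpy as np

def estimate_num_sources(X):
    """MDL estimate of the signal-subspace order.
    X: channels x samples data matrix."""
    p, N = X.shape
    lam = np.sort(np.linalg.eigvalsh(np.cov(X)))[::-1]  # descending eigenvalues
    mdl = np.empty(p - 1)
    for k in range(p - 1):
        tail = lam[k:]                      # the p-k smallest eigenvalues
        g = np.exp(np.mean(np.log(tail)))   # geometric mean
        a = np.mean(tail)                   # arithmetic mean
        mdl[k] = N * (p - k) * np.log(a / g) + 0.5 * k * (2 * p - k) * np.log(N)
    return int(np.argmin(mdl))              # estimated number of sources
```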
Positron Emission Mammography with Multiple Angle Acquisition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mark F. Smith; Stan Majewski; Raymond R. Raylman
2002-11-01
Positron emission mammography (PEM) of F-18 fluorodeoxyglucose (FDG) uptake in breast tumors with dedicated detectors typically has been accomplished with two planar detectors in a fixed position with the breast under compression. The potential use of PEM imaging at two detector positions to guide stereotactic breast biopsy has motivated us to use PEM coincidence data acquired at two or more detector positions together in a single image reconstruction. Multiple angle PEM acquisition and iterative image reconstruction were investigated using point source and compressed breast phantom acquisitions with 5, 9, 12 and 15 mm diameter spheres and a simulated tumor:background activity concentration ratio of 6:1. Image reconstruction was performed with an iterative MLEM algorithm that used coincidence events between any two detector pixels on opposed detector heads at each detector position. This study compared two acquisition protocols: 2-angle acquisition with detector angular positions of -15 and +15 degrees, and 11-angle acquisition with detector positions spaced at 3 degree increments over the range -15 to +15 degrees. Three-dimensional image resolution was assessed for the point source acquisitions, and contrast and signal-to-noise metrics were evaluated for the compressed breast phantom with different simulated tumor sizes. Radial and tangential resolutions were similar for the two protocols, while normal resolution was better for the 2-angle acquisition. Analysis is complicated by the asymmetric point spread functions. Signal-to-noise vs. contrast tradeoffs were better for 11-angle acquisition for the smallest visible 9 mm sphere, while tradeoff results were mixed for the larger and more easily visible 12 mm and 15 mm diameter spheres. Additional study is needed to better understand the performance of limited angle tomography for PEM. PEM tomography experiments with complete angular sampling are planned.
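The MLEM update referenced above is standard; a minimal sketch with a dense system matrix follows (the actual PEM code works on coincidence events between opposed detector pixels at each angle, so A would be assembled accordingly):

```python
import numpy as np

def mlem(A, y, n_iter=50):
    """Basic MLEM: A is the (n_LOR x n_voxel) system matrix,
    y the measured coincidence counts per line of response."""
    x = np.ones(A.shape[1])                   # uniform initial image
    sens = np.maximum(A.sum(axis=0), 1e-12)   # per-voxel sensitivity
    for _ in range(n_iter):
        proj = np.maximum(A @ x, 1e-12)       # forward projection
        x *= (A.T @ (y / proj)) / sens        # multiplicative update
    return x
```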
LEAP: Looking beyond pixels with continuous-space EstimAtion of Point sources
NASA Astrophysics Data System (ADS)
Pan, Hanjie; Simeoni, Matthieu; Hurley, Paul; Blu, Thierry; Vetterli, Martin
2017-12-01
Context. Two main classes of imaging algorithms have emerged in radio interferometry: the CLEAN algorithm and its multiple variants, and compressed-sensing inspired methods. They are both discrete in nature, and estimate source locations and intensities on a regular grid. For the traditional CLEAN-based imaging pipeline, the resolution power of the tool is limited by the width of the synthesized beam, which is inversely proportional to the largest baseline. The finite rate of innovation (FRI) framework is a robust method to find the locations of point-sources in a continuum without grid imposition. The continuous formulation makes the FRI recovery performance only dependent on the number of measurements and the number of sources in the sky. FRI can theoretically find sources below the perceived tool resolution. To date, FRI had never been tested in the extreme conditions inherent to radio astronomy: weak signal / high noise, huge data sets, large numbers of sources. Aims: The aims were (i) to adapt FRI to radio astronomy, (ii) verify it can recover sources in radio astronomy conditions with more accurate positioning than CLEAN, and possibly resolve some sources that would otherwise be missed, (iii) show that sources can be found using less data than would otherwise be required to find them, and (iv) show that FRI does not lead to an augmented rate of false positives. Methods: We implemented a continuous domain sparse reconstruction algorithm in Python. The angular resolution performance of the new algorithm was assessed under simulation, and with visibility measurements from the LOFAR telescope. Existing catalogs were used to confirm the existence of sources. Results: We adapted the FRI framework to radio interferometry, and showed that it is possible to determine accurate off-grid point-source locations and their corresponding intensities. In addition, FRI-based sparse reconstruction required less integration time and smaller baselines to reach a comparable reconstruction quality compared to a conventional method. The achieved angular resolution is higher than the perceived instrument resolution, and very close sources can be reliably distinguished. The proposed approach has cubic complexity in the total number (typically around a few thousand) of uniform Fourier data of the sky image estimated from the reconstruction. It is also demonstrated that the method is robust to the presence of extended-sources, and that false-positives can be addressed by choosing an adequate model order to match the noise level.
NASA Astrophysics Data System (ADS)
Keisman, J.; Sekellick, A.; Blomquist, J.; Devereux, O. H.; Hively, W. D.; Johnston, M.; Moyer, D.; Sweeney, J.
2014-12-01
Chesapeake Bay is a eutrophic ecosystem with periodic hypoxia and anoxia, algal blooms, diminished submerged aquatic vegetation, and degraded stocks of marine life. Knowledge of the effectiveness of actions taken across the watershed to reduce nitrogen (N) and phosphorus (P) loads to the bay (i.e. "best management practices" or BMPs) is essential to its restoration. While nutrient inputs from point sources (e.g. wastewater treatment plants and other industrial and municipal operations) are tracked, inputs from nonpoint sources, including atmospheric deposition, farms, lawns, septic systems, and stormwater, are difficult to measure. Estimating reductions in nonpoint source inputs attributable to BMPs requires compilation and comparison of data on water quality, climate, land use, point source discharges, and BMP implementation. To explore the relation of changes in nonpoint source inputs and BMP implementation to changes in water quality, a subset of small watersheds (those containing at least 10 years of water quality monitoring data) within the Chesapeake Watershed were selected for study. For these watersheds, data were compiled on geomorphology, demographics, land use, point source discharges, atmospheric deposition, and agricultural practices such as livestock populations, crop acres, and manure and fertilizer application. In addition, data on BMP implementation for 1985-2012 were provided by the Environmental Protection Agency Chesapeake Bay Program Office (CBPO) and the U.S. Department of Agriculture. A spatially referenced nonlinear regression model (SPARROW) provided estimates attributing N and P loads associated with receiving waters to different nutrient sources. A recently developed multiple regression technique ("Weighted Regressions on Time, Discharge and Season" or WRTDS) provided an enhanced understanding of long-term trends in N and P loads and concentrations. A suite of deterministic models developed by the CBPO was used to estimate expected nutrient load reductions attributable to BMPs. Further quantification of the relation of land-based nutrient sources and BMPs to water quality in the bay and its tributaries must account for inconsistency in BMP data over time and uncertainty regarding BMP locations and effectiveness.
Financing Renewable Energy Projects in Developing Countries: A Critical Review
NASA Astrophysics Data System (ADS)
Donastorg, A.; Renukappa, S.; Suresh, S.
2017-08-01
Access to clean and stable energy, meeting sustainable development goals, and fossil fuel dependency and depletion are some of the reasons that have prompted developing countries to transform the business-as-usual economy into a more sustainable one. However, access to and availability of finance is a major challenge for many developing countries. Financing renewable energy projects requires access to significant resources, by multiple parties, at varying points in the project life cycle. This research aims to investigate sources of, and new trends in, financing RE projects in developing countries. For this purpose, a detailed and in-depth literature review has been conducted to explore the sources and trends of current RE financial investment and projects, and to understand the gaps and limitations. This paper concludes that there are various internal and external sources of finance available for RE projects in developing countries.
2003-06-01
bed. This clay layer restricts the downward migration of pollutants and restricts saline water from Choctawhatchee Bay and the Gulf of Mexico from... Because it is saline, the Lower Limestone unit is not used as a water source (U.S. Air Force, 1995). Groundwater storage and movement in the Upper... purslane, among others. Inland from the produne zone is the “scrub” zone. Vegetation found in this zone is usually stunted and wind/salt sprayed
A space-frequency multiplicative regularization for force reconstruction problems
NASA Astrophysics Data System (ADS)
Aucejo, M.; De Smet, O.
2018-05-01
Dynamic force reconstruction from vibration data is an ill-posed inverse problem. A standard approach to stabilizing the reconstruction consists in using some prior information on the quantities to identify. This is generally done by including in the formulation of the inverse problem a regularization term as an additive or a multiplicative constraint. In the present article, a space-frequency multiplicative regularization is developed to identify mechanical forces acting on a structure. The proposed regularization strategy takes advantage of one's prior knowledge of the nature and the location of excitation sources, as well as of their spectral contents. Furthermore, it has the merit of being free from the preliminary definition of any regularization parameter. The validity of the proposed regularization procedure is assessed numerically and experimentally. In particular, it is pointed out that properly exploiting the space-frequency characteristics of the excitation field to identify can improve the quality of the force reconstruction.
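For contrast, the additive-constraint baseline mentioned above is ordinary Tikhonov regularization; a minimal sketch, assuming a frequency response function (FRF) matrix H and a measured vibration vector x (the paper's space-frequency multiplicative functional is not reproduced here):

```python
import numpy as np

def tikhonov_force_reconstruction(H, x, lam):
    """Solve min_f ||H f - x||^2 + lam ||f||^2 for the force vector f."""
    A = H.conj().T @ H + lam * np.eye(H.shape[1])
    b = H.conj().T @ x
    return np.linalg.solve(A, b)
```

Note that this baseline still requires choosing the regularization parameter lam, which is exactly the step the multiplicative formulation is designed to avoid.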
Calculating the n-point correlation function with general and efficient python code
NASA Astrophysics Data System (ADS)
Genier, Fred; Bellis, Matthew
2018-01-01
There are multiple approaches to understanding the evolution of large-scale structure in our universe, and with it the role of baryonic matter, dark matter, and dark energy at different points in history. One approach is to calculate n-point correlation function estimators for galaxy distributions, sometimes choosing a particular type of galaxy, such as luminous red galaxies. The standard way to calculate these estimators is with pair counts (for the 2-point correlation function) and with triplet counts (for the 3-point correlation function). These are O(n²) and O(n³) problems, respectively, and with the number of galaxies that will be characterized in future surveys, having efficient and general code will be of increasing importance. Here we show a proof-of-principle approach to the 2-point correlation function that relies on pre-calculating galaxy locations in coarse “voxels”, thereby reducing the total number of necessary calculations. The code is written in python, making it easily accessible and extensible, and is open-sourced to the community. Basic results and performance tests using SDSS/BOSS data will be shown, and we discuss the application of this approach to the 3-point correlation function.
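A minimal sketch of the voxel idea is given below, under the simplifying assumption that pair separations are approximated by voxel-center distances weighted by occupancy; the published code will differ in detail:

```python
import numpy as np
from collections import Counter
from itertools import combinations

def voxel_pair_counts(coords, voxel_size, bins):
    """Approximate DD pair counts from coarse voxel occupancy.
    coords: (n, 3) galaxy positions; bins: separation bin edges."""
    occ = Counter(map(tuple, np.floor(coords / voxel_size).astype(int)))
    centers = {k: (np.array(k) + 0.5) * voxel_size for k in occ}
    hist = np.zeros(len(bins) - 1)
    items = list(occ.items())
    for (k1, n1), (k2, n2) in combinations(items, 2):
        r = np.linalg.norm(centers[k1] - centers[k2])
        i = np.searchsorted(bins, r) - 1
        if 0 <= i < len(hist):
            hist[i] += n1 * n2            # weight by joint occupancy
    for _, n in items:                    # same-voxel pairs: smallest bin
        hist[0] += n * (n - 1) / 2
    return hist
```

The cost is quadratic in the number of occupied voxels rather than in the number of galaxies, which is the source of the speed-up described above.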
Multi-diversity combining and selection for relay-assisted mixed RF/FSO system
NASA Astrophysics Data System (ADS)
Chen, Li; Wang, Weidong
2017-12-01
We propose and analyze multi-diversity combining and selection to enhance the performance of relay-assisted mixed radio frequency/free-space optics (RF/FSO) system. We focus on a practical scenario for cellular network where a single-antenna source is communicating to a multi-apertures destination through a relay equipped with multiple receive antennas and multiple transmit apertures. The RF single input multiple output (SIMO) links employ either maximal-ratio combining (MRC) or receive antenna selection (RAS), and the FSO multiple input multiple output (MIMO) links adopt either repetition coding (RC) or transmit laser selection (TLS). The performance is evaluated via an outage probability analysis over Rayleigh fading RF links and Gamma-Gamma atmospheric turbulence FSO links with pointing errors where channel state information (CSI) assisted amplify-and-forward (AF) scheme is considered. Asymptotic closed-form expressions at high signal-to-noise ratio (SNR) are also derived. Coding gain and diversity order for different combining and selection schemes are further discussed. Numerical results are provided to verify and illustrate the analytical results.
Momentum and energy transport by waves in the solar atmosphere and solar wind
NASA Technical Reports Server (NTRS)
Jacques, S. A.
1977-01-01
The fluid equations for the solar wind are presented in a form which includes the momentum and energy flux of waves in a general and consistent way. The concept of conservation of wave action is introduced and is used to derive expressions for the wave energy density as a function of heliocentric distance. The explicit form of the terms due to waves in both the momentum and energy equations are given for radially propagating acoustic, Alfven, and fast mode waves. The effect of waves as a source of momentum is explored by examining the critical points of the momentum equation for isothermal spherically symmetric flow. We find that the principal effect of waves on the solutions is to bring the critical point closer to the sun's surface and to increase the Mach number at the critical point. When a simple model of dissipation is included for acoustic waves, in some cases there are multiple critical points.
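For reference, a hedged sketch of the isothermal critical-point structure the abstract builds on, with the wave contribution written schematically as a momentum source D(r) (the paper's explicit wave terms are more detailed):

$$\left(v - \frac{a^{2}}{v}\right)\frac{dv}{dr} = \frac{2a^{2}}{r} - \frac{GM_{\odot}}{r^{2}} + D(r)$$

For D = 0 this is Parker's isothermal wind equation, with the critical point at r_c = GM_⊙/(2a²) and v(r_c) = a; a positive wave force D(r) shifts r_c sunward, consistent with the behavior reported above.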
Influence of Diagenesis on Bioavailable Phosphorus in Lake Mendota, USA
NASA Astrophysics Data System (ADS)
Hoffman, A.; Armstrong, D.; Lathrop, R.; Penn, M.
2013-12-01
Phosphorus (P) is a major driver of productivity in many freshwater systems, and in excess P can cause a variety of deleterious effects. Lake Mendota, located in Madison, Wisconsin (USA), is a eutrophic calcareous lake that is influenced by both urban and agricultural sources. As measures have been implemented to control point and non-point source pollution, internal sources, including release by sediments, have become more important. We collected multiple sediment cores from seven depositional basins to determine how diagenesis is influencing the bioavailability of sediment P. Cores were sliced in 1-cm intervals and analyzed for total P (TP), various P fractions, total metals, and multiple stable isotopes. While the average amount of total P that was bioavailable was 64.8%, the range noted was 39.2% to 88.6%. Spatial differences existed when comparing TP and bioavailable P among the cores. Depth profiles elucidated temporal differences, as occasional increases in TP with depth were noted. These increases were found to contain a higher percentage of bioavailable P. This variation was explored to determine if it resulted from differences in source material, for example inorganic P formed by diagenesis of organic P (algal derived) rather than soil P from external inputs. Saturation index modeling using MINEQL+ suggests that phosphorus concentrations in Lake Mendota pore waters are influenced by precipitation of vivianite (Fe3(PO4)2·8H2O) and certain calcium phosphates. However, hydroxyl apatite (Ca5(PO4)3(OH)) was highly supersaturated, indicating that its precipitation is hindered and not important in controlling phosphate concentrations in these sediments. Yet even more important than precipitation reactions, the adsorption/desorption characteristics of P seem to play a major role in P bioavailability. Sediment 210Pb and 137Cs activity profiles indicate that sedimentation rates differ among the various depositional sites in Lake Mendota. Implications for the modeling of P cycling and changes in internal loading following external P reduction in lakes will be discussed.
Stroke in Primary Hyperoxaluria Type I
Rao, Neal M.; Yallapragada, Anil; Winden, Kellen D.; Saver, Jeffrey; Liebeskind, David S.
2014-01-01
We report the case of a 27-year-old man with a history of previously undiagnosed renal disease who presented with multiple cerebrovascular infarctions. Workup for traditional causes of cerebrovascular infarction, including cardiac telemetry, multiple echocardiograms, and hypercoagulability testing, was negative. However, transcranial Doppler detected circulating microemboli at a rate of 14 per hour. A serum oxalate level greater than the supersaturation point of calcium oxalate was detected, providing a potential source of the microemboli. Furthermore, serial imaging recorded rapid mineralization of the infarcted territories. In the absence of any proximal vessel irregularities, atherosclerosis, valvular abnormalities, arrhythmias, or systemic shunt as a potential stroke etiology in this patient, we propose that circulating oxalate precipitate may be a potential mechanism for stroke in patients with primary oxalosis. PMID:23551880
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nyland, Kristina; Marvil, Josh; Young, Lisa M.
We present the results of deep, high-resolution, 5 GHz Expanded Very Large Array (EVLA) observations of the nearby dwarf lenticular galaxy and intermediate-mass black hole candidate (M_BH ≈ 4.5 × 10^5 M_⊙), NGC 404. For the first time, radio emission at frequencies above 1.4 GHz has been detected in this galaxy. We found a modestly resolved source in the NGC 404 nucleus with a total radio luminosity of (7.6 ± 0.7) × 10^17 W Hz^-1 at 5 GHz and a spectral index from 5 to 7.45 GHz of α = -0.88 ± 0.30. NGC 404 is only the third central intermediate-mass black hole candidate detected in the radio regime with subarcsecond resolution. The position of the radio source is consistent with the optical center of the galaxy and the location of a known, hard X-ray point source (L_X ≈ 1.2 × 10^37 erg s^-1). The faint radio and X-ray emission could conceivably be produced by an X-ray binary, star formation, a supernova remnant, or a low-luminosity active galactic nucleus powered by an intermediate-mass black hole. In light of our new EVLA observations, we find that the most likely scenario is an accreting intermediate-mass black hole, with other explanations being either incompatible with the observed X-ray and/or radio luminosities or statistically unlikely.
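The quoted spectral index is the standard two-point estimate; a trivial sketch (illustrative only):

```python
import numpy as np

def spectral_index(s1, nu1, s2, nu2):
    """Spectral index alpha, defined by S_nu proportional to nu**alpha."""
    return np.log(s2 / s1) / np.log(nu2 / nu1)

# e.g. spectral_index(s_5ghz, 5.0, s_7p45ghz, 7.45) for the NGC 404 nucleus
```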
Building Facade Reconstruction by Fusing Terrestrial Laser Points and Images
Pu, Shi; Vosselman, George
2009-01-01
Laser data and optical data have a complementary nature for three-dimensional feature extraction. Efficient integration of the two data sources will lead to more reliable and automated extraction of three-dimensional features. This paper presents a semiautomatic building facade reconstruction approach, which efficiently combines information from terrestrial laser point clouds and close-range images. A building facade's general structure is discovered and established using the planar features from laser data. Then strong lines in images are extracted using the Canny extractor and Hough transformation, and compared with current model edges for necessary improvement. Finally, textures with optimal visibility are selected and applied according to accurate image orientations. Solutions to several challenging problems throughout the combined reconstruction, such as referencing between laser points and multiple images and automated texturing, are described. The limitations and remaining work of this approach are also discussed. PMID:22408539
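The line-extraction step maps onto standard OpenCV calls; a minimal sketch with illustrative thresholds:

```python
import cv2
import numpy as np

def strong_lines(image_gray):
    """Canny edge detection followed by a probabilistic Hough transform."""
    edges = cv2.Canny(image_gray, 50, 150)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                            minLineLength=40, maxLineGap=5)
    # Each entry is an (x1, y1, x2, y2) line segment.
    return [] if lines is None else [tuple(l[0]) for l in lines]
```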
Dark Signal Characterization of 1.7 micron cutoff devices for SNAP
NASA Astrophysics Data System (ADS)
Smith, R. M.; SNAP Collaboration
2004-12-01
We report initial progress characterizing non-photometric sources of error -- dark current, noise, and zero-point drift -- for 1.7 micron cutoff HgCdTe and InGaAs detectors under development by Raytheon, Rockwell, and Sensors Unlimited for SNAP. Dark current specifications can already be met with several detector types. Changes to the manufacturing process are being explored to improve the noise reduction available through multiple sampling. In some cases, a significant number of pixels suffer from popcorn noise, with a few percent of all pixels exhibiting a tenfold noise increase. A careful study of zero-point drifts is also under way, since these errors can dominate dark current and may contribute to the noise degradation seen in long exposures.
Support of Multidimensional Parallelism in the OpenMP Programming Model
NASA Technical Reports Server (NTRS)
Jin, Hao-Qiang; Jost, Gabriele
2003-01-01
OpenMP is the current standard for shared-memory programming. While providing ease of parallel programming, the OpenMP programming model also has limitations which often affect the scalability of applications. Examples of these limitations are work distribution and point-to-point synchronization among threads. We propose extensions to the OpenMP programming model which allow the user to easily distribute the work in multiple dimensions and synchronize the workflow among the threads. The proposed extensions include four new constructs and the associated runtime library. They do not require changes to the source code and can be implemented based on the existing OpenMP standard. We illustrate the concept in a prototype translator and test with benchmark codes and a cloud modeling code.
The Chandra Source Catalog 2.0
NASA Astrophysics Data System (ADS)
Evans, Ian N.; Allen, Christopher E.; Anderson, Craig S.; Budynkiewicz, Jamie A.; Burke, Douglas; Chen, Judy C.; Civano, Francesca Maria; D'Abrusco, Raffaele; Doe, Stephen M.; Evans, Janet D.; Fabbiano, Giuseppina; Gibbs, Danny G., II; Glotfelty, Kenny J.; Graessle, Dale E.; Grier, John D.; Hain, Roger; Hall, Diane M.; Harbo, Peter N.; Houck, John C.; Lauer, Jennifer L.; Laurino, Omar; Lee, Nicholas P.; Martínez-Galarza, Juan Rafael; McCollough, Michael L.; McDowell, Jonathan C.; McLaughlin, Warren; Miller, Joseph; Morgan, Douglas L.; Mossman, Amy E.; Nguyen, Dan T.; Nichols, Joy S.; Nowak, Michael A.; Paxson, Charles; Plummer, David A.; Primini, Francis Anthony; Rots, Arnold H.; Siemiginowska, Aneta; Sundheim, Beth A.; Tibbetts, Michael; Van Stone, David W.; Zografou, Panagoula
2018-01-01
The current version of the Chandra Source Catalog (CSC) continues to be well utilized by the astronomical community. Usage over the past year has continued to average more than 15,000 searches per month. Version 1.1 of the CSC, released in 2010, includes properties and data for 158,071 detections, corresponding to 106,586 distinct X-ray sources on the sky. The second major release of the catalog, CSC 2.0, will be made available to the user community in early 2018, and preliminary lists of detections and sources are available now. Release 2.0 will roughly triple the size of the current version of the catalog to an estimated 375,000 detections, corresponding to ~315,000 unique X-ray sources. Compared to release 1.1, the limiting sensitivity for compact sources in CSC 2.0 is significantly enhanced. This improvement is achieved by using a two-stage approach that involves stacking (co-adding) multiple observations of the same field prior to source detection, and then using an improved source detection approach that enables us to detect point sources down to ~5 net counts on-axis for exposures shorter than ~15 ks. In addition to enhanced source detection capabilities, improvements to the Bayesian aperture photometry code included in release 2.0 provide robust photometric probability density functions (PDFs) in crowded fields, even for low-count detections. All post-aperture photometry properties (e.g., hardness ratios, source variability) work directly from the PDFs in release 2.0. CSC 2.0 also adds a Bayesian Blocks analysis of the multi-band aperture photometry PDFs to identify multiple observations of the same source that have similar photometric properties and can therefore be analyzed simultaneously to improve S/N. We briefly describe these and other updates that significantly enhance the scientific utility of CSC 2.0 when compared to the earlier catalog release. This work has been supported by NASA under contract NAS 8-03060 to the Smithsonian Astrophysical Observatory for operation of the Chandra X-ray Center.
NASA Astrophysics Data System (ADS)
Kocifaj, Miroslav
2018-02-01
The mechanism by which multiple scattering influences the radiance of a night sky has until recently been poorly quantified, or even completely unknown from the theoretical point of view. In this paper, the relative contribution of higher-scattering radiances to the total sky radiance is treated analytically for all orders of scattering, showing that a fast and accurate numerical solution to the problem exists. Unlike a class of ray-tracing codes in which CPU requirements increase tremendously with each new scattering mode, the solution developed here requires the same processor time for each scattering mode. This allows for rapid estimation of higher-scattering radiances and of the residual error that is otherwise unknown if these radiances remain undetermined. Such convergence testing is necessary to guarantee the accuracy and stability of the numerical predictions. The performance of the method developed here is demonstrated in a set of numerical experiments aiming to uncover the relative importance of higher-scattering radiances at different distances from a light source. We have shown that multiple scattering effects are generally low if the distance to the light source is below 30 km. At large distances multiple scattering can become important for dark sky elements situated opposite the light source. However, the brightness at this part of the sky is several orders of magnitude smaller than that of a glowing dome of light over a city, so we do not expect that a partial increase or even a doubling of the radiance of otherwise dark sky elements can noticeably affect astronomical observations or living organisms (including humans). Single scattering is an appropriate approximation to the radiance of a night sky in the vast majority of cases.
Quantum Theory of Superresolution for Incoherent Optical Imaging
NASA Astrophysics Data System (ADS)
Tsang, Mankei
Rayleigh's criterion for resolving two incoherent point sources has been the most influential measure of optical imaging resolution for over a century. In the context of statistical image processing, violation of the criterion is especially detrimental to the estimation of the separation between the sources, and modern far-field superresolution techniques rely on suppressing the emission of close sources to enhance the localization precision. Using quantum optics, quantum metrology, and statistical analysis, here we show that, even if two close incoherent sources emit simultaneously, measurements with linear optics and photon counting can estimate their separation from the far field almost as precisely as conventional methods do for isolated sources, rendering Rayleigh's criterion irrelevant to the problem. Our results demonstrate that superresolution can be achieved not only for fluorophores but also for stars. Recent progress in generalizing our theory for multiple sources and spectroscopy will also be discussed. This work is supported by the Singapore National Research Foundation under NRF Grant No. NRF-NRFF2011-07 and the Singapore Ministry of Education Academic Research Fund Tier 1 Project R-263-000-C06-112.
A method for analyzing temporal patterns of variability of a time series from Poincare plots.
Fishman, Mikkel; Jacono, Frank J; Park, Soojin; Jamasebi, Reza; Thungtong, Anurak; Loparo, Kenneth A; Dick, Thomas E
2012-07-01
The Poincaré plot is a popular two-dimensional, time series analysis tool because of its intuitive display of dynamic system behavior. Poincaré plots have been used to visualize heart rate and respiratory pattern variabilities. However, conventional quantitative analysis relies primarily on statistical measurements of the cumulative distribution of points, making it difficult to interpret irregular or complex plots. Moreover, the plots are constructed to reflect highly correlated regions of the time series, reducing the amount of nonlinear information that is presented and thereby hiding potentially relevant features. We propose temporal Poincaré variability (TPV), a novel analysis methodology that uses standard techniques to quantify the temporal distribution of points and to detect nonlinear sources responsible for physiological variability. In addition, the analysis is applied across multiple time delays, yielding a richer insight into system dynamics than the traditional circle return plot. The method is applied to data sets of R-R intervals and to synthetic point process data extracted from the Lorenz time series. The results demonstrate that TPV complements the traditional analysis and can be applied more generally, including Poincaré plots with multiple clusters, and more consistently than the conventional measures and can address questions regarding potential structure underlying the variability of a data set.
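For orientation, a minimal sketch of a lag-tau Poincaré plot with the conventional SD1/SD2 dispersion measures that TPV extends with temporal statistics; names are illustrative:

```python
import numpy as np

def poincare_plot(x, tau=1):
    """Return the lag-tau Poincare point cloud and its SD1/SD2 measures."""
    x = np.asarray(x)
    a, b = x[:-tau], x[tau:]
    sd1 = np.std((b - a) / np.sqrt(2))  # spread across the line of identity
    sd2 = np.std((b + a) / np.sqrt(2))  # spread along the line of identity
    return a, b, sd1, sd2
```

Sweeping tau over a range of delays, as TPV does, probes dynamics that the traditional tau = 1 circle-return plot cannot see.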
Sitki-Green, Diane; Covington, Mary; Raab-Traub, Nancy
2003-01-01
Infection with the Epstein-Barr virus (EBV) is often subclinical in the presence of a healthy immune response; thus, asymptomatic infection is largely uncharacterized. This study analyzed the nature of EBV infection in 20 asymptomatic immunocompetent hosts over time through the identification of EBV strain variants in the peripheral blood and oral cavity. A heteroduplex tracking assay specific for the EBV gene LMP1 precisely identified the presence of multiple EBV strains in each subject. The strains present in the peripheral blood and oral cavity were often completely discordant, indicating the existence of distinct infections, and the strains present and their relative abundance changed considerably between time points. The possible transmission of strains between the oral cavity and peripheral blood compartments could be tracked within subjects, suggesting that reactivation in the oral cavity and subsequent reinfection of B lymphocytes that reenter the periphery contribute to the maintenance of persistence. In addition, distinct virus strains persisted in the oral cavity over many time points, suggesting an important role for epithelial cells in the maintenance of persistence. Asymptomatic individuals without tonsillar tissue, which is believed to be an important source of virus for the oral cavity, also exhibited multiple strains and a cyclic pattern of transmission between compartments. This study revealed that the majority of patients with infectious mononucleosis were infected with multiple strains of EBV that were also compartmentalized, suggesting that primary infection involves the transmission of multiple strains. Both the primary and carrier states of infection with EBV are more complex than previously thought. PMID:12525618
LED-driven backlights for automotive displays
NASA Astrophysics Data System (ADS)
Strauch, Frank
2007-09-01
As a light source, the LED has some advantages over the traditionally used fluorescent tube, such as longer life and lower space consumption. Consequently, customers are asking for LED lighting designs in their products. We introduced white LED technology in a company-owned backlight. This step opens the possibility of access to the components in the display market. Instead of having a finalized display product that needs to be integrated in the head unit of a car, we assemble the backlight, the glass, our own electronics and the housing. A major advantage of this concept is better control of the heat flow generated by the LEDs to the outside, because only a common housing is used for all the components. The requirement for slim products can also be fulfilled. As always, a new technology doesn't come with advantages only. An LED is a point source compared to the well-known tube, thus requiring a mixing zone for the multiple point sources when they enter a light guide. This zone can't be used for the display because of its lack of homogeneity. It is a design goal to minimize this zone, which is helped by the right choice of LED in terms of slimness. A step ahead is the implementation of RGB LEDs because of their higher color rendering abilities. These allow control of the chromaticity point under temperature change but, as a drawback, need a larger mixing zone.
Aquatic exposures of chemical mixtures in urban environments: Approaches to impact assessment.
de Zwart, Dick; Adams, William; Galay Burgos, Malyka; Hollender, Juliane; Junghans, Marion; Merrington, Graham; Muir, Derek; Parkerton, Thomas; De Schamphelaere, Karel A C; Whale, Graham; Williams, Richard
2018-03-01
Urban regions of the world are expanding rapidly, placing additional stress on water resources. Urban water bodies serve many purposes, from washing and sources of drinking water to transport and conduits for storm drainage and effluent discharge. These water bodies receive chemical emissions arising from single or multiple point sources and from diffuse sources, which can be continuous, intermittent, or seasonal. Thus, aquatic organisms in these water bodies are exposed to temporally and compositionally variable mixtures. We have delineated source-specific signatures of these mixtures for diffuse urban runoff and urban point source exposure scenarios to support risk assessment and management of these mixtures. The first step in a tiered approach to assessing chemical exposure has been developed based on the event mean concentration concept, with chemical concentrations in runoff defined by the volumes of water leaving each surface and the chemical exposure mixture profiles for different urban scenarios. Although generalizations can be made about the chemical composition of urban sources and event mean exposure predictions for initial prioritization, such modeling needs to be complemented with biological monitoring data. It is highly unlikely that the current paradigm of routine regulatory chemical monitoring alone will provide a realistic appraisal of urban aquatic chemical mixture exposures. Future consideration is also needed of the role of nonchemical stressors in such highly modified urban water bodies. Environ Toxicol Chem 2018;37:703-714. © 2017 The Authors. Environmental Toxicology and Chemistry published by Wiley Periodicals, Inc. on behalf of SETAC.
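The event mean concentration concept reduces to a flow-weighted average; a minimal sketch, assuming paired concentration and flow time series with illustrative names:

```python
import numpy as np

def event_mean_concentration(conc, flow, dt):
    """EMC = sum(C_i * Q_i * dt_i) / sum(Q_i * dt_i) over a runoff event."""
    conc, flow, dt = map(np.asarray, (conc, flow, dt))
    return np.sum(conc * flow * dt) / np.sum(flow * dt)
```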
[A landscape ecological approach for urban non-point source pollution control].
Guo, Qinghai; Ma, Keming; Zhao, Jingzhu; Yang, Liu; Yin, Chengqing
2005-05-01
Urban non-point source pollution is a new problem that has appeared with the accelerating development of urbanization. The particularity of urban land use and the increase in impervious surface area make urban non-point source pollution different from agricultural non-point source pollution, and more difficult to control. Best Management Practices (BMPs) are the effective practices commonly applied in controlling urban non-point source pollution, mainly adopting local remediation practices to control the pollutants in surface runoff. Because of the close relationship between urban land use patterns and non-point source pollution, it would be rational to combine landscape ecological planning with local BMPs to control urban non-point source pollution. This requires, first, analyzing and evaluating the influence of landscape structure on water bodies, pollution sources and pollutant removal processes, in order to define the relationships between landscape spatial pattern and non-point source pollution and to identify the key polluted fields; and second, adjusting inherent landscape structures or adding new landscape factors to form a new landscape pattern, combining landscape planning and management by applying BMPs within the planning to improve urban landscape heterogeneity and control urban non-point source pollution.
Remote defect imaging for plate-like structures based on the scanning laser source technique
NASA Astrophysics Data System (ADS)
Hayashi, Takahiro; Maeda, Atsuya; Nakao, Shogo
2018-04-01
In defect imaging with the scanning laser source technique, the use of a fixed receiver enables stable measurements of flexural waves generated by the laser at multiple raster points. This study examined defect imaging by remote measurement using a laser Doppler vibrometer as the receiver. Narrow-band burst waves were generated by modulating the laser pulse trains of a fiber laser to enhance the signal-to-noise ratio in the frequency domain. Averaging three images obtained at three different frequencies suppressed spurious distributions due to resonance. The experimental system equipped with these newly devised means enabled us to visualize defects and adhesive objects in plate-like structures such as a plate with complex geometries and a branch pipe.
NASA Astrophysics Data System (ADS)
Link, Paul Karl; Fanning, C. Mark; Beranek, Luke P.
2005-12-01
Detrital-zircon age-spectra effectively define provenance in Holocene and Neogene fluvial sands from the Snake River system of the northern Rockies, U.S.A. SHRIMP U-Pb dates have been measured for forty-six samples (about 2700 zircon grains) of fluvial and aeolian sediment. The detrital-zircon age distributions are repeatable and demonstrate predictable longitudinal variation. By lumping multiple samples to attain populations of several hundred grains, we recognize distinctive, provenance-defining zircon-age distributions or "barcodes," for fluvial sedimentary systems of several scales, within the upper and middle Snake River system. Our detrital-zircon studies effectively define the geochronology of the northern Rocky Mountains. The composite detrital-zircon grain distribution of the middle Snake River consists of major populations of Neogene, Eocene, and Cretaceous magmatic grains plus intermediate and small grain populations of multiply recycled Grenville (~950 to 1300 Ma) grains and Yavapai-Mazatzal province grains (~1600 to 1800 Ma) recycled through the upper Belt Supergroup and Cretaceous sandstones. A wide range of older Paleoproterozoic and Archean grains are also present. The best-case scenario for using detrital-zircon populations to isolate provenance is when there is a point-source pluton with known age that is only found in one location or drainage. We find three such zircon age-populations in fluvial sediments downstream from the point-source plutons: Ordovician in the southern Beaverhead Mountains, Jurassic in northern Nevada, and Oligocene in the Albion Mountains core complex of southern Idaho. Large detrital-zircon age-populations derived from regionally well-defined magmatic or recycled sedimentary sources also serve to delimit the provenance of Neogene fluvial systems. In the Snake River system, defining populations include those derived from the Cretaceous Atlanta lobe of the Idaho batholith (80 to 100 Ma), the Eocene Challis Volcanic Group and associated plutons (~45 to 52 Ma), and the Neogene rhyolitic Yellowstone-Snake River Plain volcanics (~0 to 17 Ma). For first-order drainage basins containing these zircon-rich source terranes, or containing a point-source pluton, a 60-grain random sample is sufficient to define the dominant provenance. The most difficult age-distributions to analyze are those that contain multiple small zircon age-populations and no defining large populations. Examples of these include streams draining the Proterozoic and Paleozoic Cordilleran miogeocline in eastern Idaho and Pleistocene loess on the Snake River Plain. For such systems, large sample bases of hundreds of grains, plus the use of statistical methods, may be necessary to distinguish detrital-zircon age-spectra.
Developing a system for blind acoustic source localization and separation
NASA Astrophysics Data System (ADS)
Kulkarni, Raghavendra
This dissertation presents innovative methodologies for locating, extracting, and separating multiple incoherent sound sources in three-dimensional (3D) space, together with applications of the time reversal (TR) algorithm to pinpoint the hyperactive neural activities inside the brain auditory structure that are correlated to tinnitus pathology. Specifically, an acoustic-modeling-based method is developed for locating arbitrary and incoherent sound sources in 3D space in real time using a minimal number of microphones, and the Point Source Separation (PSS) method is developed for extracting target signals from directly measured mixed signals. Combining these two approaches leads to a novel technology known as Blind Sources Localization and Separation (BSLS) that enables one to locate multiple incoherent sound signals in 3D space and separate the original individual sources simultaneously, based on the directly measured mixed signals. These technologies have been validated through numerical simulations and experiments conducted in various non-ideal environments where there are non-negligible, unspecified sound reflections and reverberation as well as interference from random background noise. Another innovation presented in this dissertation concerns applications of the TR algorithm to pinpoint the exact locations of hyperactive neurons in the brain auditory structure that are directly correlated to the tinnitus perception. Benchmark tests conducted on normal rats have confirmed the localization results provided by the TR algorithm. Results demonstrate that the spatial resolution of this source localization can be as high as the micrometer level. This high-precision localization may lead to a paradigm shift in tinnitus diagnosis, which may in turn produce a more cost-effective treatment for tinnitus than any of the existing ones.
Ostrovski, Fernanda; McMahon, Richard G.; Connolly, Andrew J.; ...
2016-11-17
In this paper, we present the discovery and preliminary characterization of a gravitationally lensed quasar with a source redshift zs = 2.74 and image separation of 2.9 arcsec lensed by a foreground zl = 0.40 elliptical galaxy. Since optical observations of gravitationally lensed quasars show the lens system as a superposition of multiple point sources and a foreground lensing galaxy, we have developed a morphology-independent multi-wavelength approach to the photometric selection of lensed quasar candidates based on Gaussian Mixture Models (GMM) supervised machine learning. Using this technique and gi multicolour photometric observations from the Dark Energy Survey (DES), near-IR JK photometry from the VISTA Hemisphere Survey (VHS) and WISE mid-IR photometry, we have identified a candidate system with two catalogue components with iAB = 18.61 and iAB = 20.44 comprising an elliptical galaxy and two blue point sources. Spectroscopic follow-up with NTT and the use of an archival AAT spectrum show that the point sources can be identified as a lensed quasar with an emission line redshift of z = 2.739 ± 0.003 and a foreground early-type galaxy with z = 0.400 ± 0.002. We model the system as a single isothermal ellipsoid and find the Einstein radius θE ~ 1.47 arcsec, enclosed mass Menc ~ 4 × 10^11 M⊙ and a time delay of ~52 d. Finally, the relatively wide separation, month-scale time delay duration and high redshift make this an ideal system for constraining the expansion rate beyond a redshift of 1.
Multiple-Point Temperature Gradient Algorithm for Ring Laser Gyroscope Bias Compensation
Li, Geng; Zhang, Pengfei; Wei, Guo; Xie, Yuanping; Yu, Xudong; Long, Xingwu
2015-01-01
To further improve ring laser gyroscope (RLG) bias stability, a multiple-point temperature gradient algorithm is proposed for RLG bias compensation in this paper. Based on the multiple-point temperature measurement system, a complete thermo-image of the RLG block is developed. Combined with the multiple-point temperature gradients between different points of the RLG block, the particle swarm optimization algorithm is used to tune the support vector machine (SVM) parameters, and an optimized design for selecting the thermometer locations is also discussed. The experimental results validate the superiority of the introduced method, improving the precision and generalizability of the RLG bias compensation model. PMID:26633401
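A minimal sketch of this compensation scheme, under stated assumptions: the thermometer data and bias model below are synthetic, and a plain grid search stands in for the paper's particle swarm optimization of the SVM hyperparameters.

```python
# Sketch: predict RLG bias from multi-point temperature gradients with
# support vector regression; a grid search replaces the paper's PSO.
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
temps = rng.normal(25.0, 3.0, size=(500, 6))   # 6 thermometer readings per sample
grads = np.diff(temps, axis=1)                 # 5 point-to-point gradients
bias = 0.02 * grads[:, 0] - 0.01 * grads[:, 3] + rng.normal(0, 1e-3, 500)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf"))
search = GridSearchCV(
    model,
    {"svr__C": [1, 10, 100], "svr__gamma": ["scale", 0.1, 1.0]},
    cv=5,
)
search.fit(grads, bias)
compensated = bias - search.predict(grads)     # residual bias after compensation
print("best params:", search.best_params_)
print("bias std before/after:", bias.std(), compensated.std())
```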
Leedham, S J; Preston, S L; McDonald, S A C; Elia, G; Bhandari, P; Poller, D; Harrison, R; Novelli, M R; Jankowski, J A; Wright, N A
2008-01-01
Objectives: Current models of clonal expansion in human Barrett’s oesophagus are based upon heterogeneous, flow-purified biopsy analysis taken at multiple segment levels. Detection of identical mutation fingerprints from these biopsy samples led to the proposal that a mutated clone with a selective advantage can clonally expand to fill an entire Barrett’s segment at the expense of competing clones (selective sweep to fixation model). We aimed to assess clonality at a much higher resolution by microdissecting and genetically analysing individual crypts. The histogenesis of Barrett’s metaplasia and neo-squamous islands has never been demonstrated. We investigated the oesophageal gland squamous ducts as the source of both epithelial sub-types. Methods: Individual crypts across Barrett’s biopsy and oesophagectomy blocks were dissected. Determination of tumour suppressor gene loss of heterozygosity patterns and of p16 and p53 point mutations was carried out on a crypt-by-crypt basis. Cases of contiguous neo-squamous islands and columnar metaplasia with oesophageal squamous ducts were identified. Tissues were isolated by laser capture microdissection and genetically analysed. Results: Individual crypt dissection revealed mutation patterns that were masked in whole biopsy analysis. Dissection across oesophagectomy specimens demonstrated marked clonal heterogeneity, with multiple independent clones present. We identified a p16 point mutation arising in the squamous epithelium of the oesophageal gland duct, which was also present in a contiguous metaplastic crypt, whereas neo-squamous islands arising from squamous ducts were wild-type with respect to surrounding Barrett’s dysplasia. Conclusions: By studying clonality at the crypt level we demonstrate that Barrett’s heterogeneity arises from multiple independent clones, in contrast to the selective sweep to fixation model of clonal expansion previously described. We suggest that the squamous gland ducts situated throughout the oesophagus are the source of a progenitor cell that may be susceptible to gene mutation resulting in conversion to Barrett’s metaplastic epithelium. Additionally, these data suggest that wild-type ducts may be the source of neo-squamous islands. PMID:18305067
NASA Astrophysics Data System (ADS)
Adams, M.; Ji, C.
2017-12-01
The November 14th 2016 MW 7.8 Kaikoura, New Zealand earthquake occurred along the east coast of the northern part of the South Island. The local tectonic setting is complicated. The central South Island is dominated by oblique continental convergence, whereas the southern part of this island experiences eastward subduction of the Australian plate. Available information (e.g., Hamling et al., 2017; Bradley et al., 2017) indicates that this earthquake involved multiple fault segments of the Marlborough fault system (MFS) as the rupture propagated northwards for more than 150 km. Additional slip might also have occurred on the subduction interface of the Pacific plate under the Australian plate, beneath the MFS. However, the exact number of involved fault segments as well as the temporal co-seismic rupture sequence has not been fully determined with geodetic and geological observations. Knowledge of the kinematics of complex fault interactions has important implications for our understanding of global seismic hazards, particularly for relatively unmodeled multi-segment ruptures. Understanding the Kaikoura earthquake will provide insight into how one incorporates multi-fault ruptures in seismic-hazard models. We propose to apply a multiple double-couple inversion to determine the fault geometry and spatiotemporal rupture history using teleseismic and strong motion waveforms, before constraining the detailed slip history using both seismic and geodetic data. The Kaikoura earthquake will be approximated as the summation of multiple subevents, each represented as a double-couple point source characterized by i) fault geometry (strike, dip and rake), ii) seismic moment, iii) centroid time, iv) half-duration and v) location (latitude, longitude and depth), a total of nine variables. We progressively increase the number of point sources until an additional source cannot produce significant improvement to the fit of the observations. Our preliminary results using only teleseismic data indicate that, broadly speaking, the sequence of fault planes dips towards the northwest and the motion of slip is largely to the northeast. The sequence and timing of the rupturing faults are still to be determined.
Common pitfalls in statistical analysis: The perils of multiple testing
Ranganathan, Priya; Pramesh, C. S.; Buyse, Marc
2016-01-01
Multiple testing refers to situations where a dataset is subjected to statistical testing multiple times - either at multiple time-points or through multiple subgroups or for multiple end-points. This amplifies the probability of a false-positive finding. In this article, we look at the consequences of multiple testing and explore various methods to deal with this issue. PMID:27141478
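For illustration, a small sketch of two standard family-wise corrections discussed in this context (Bonferroni and Holm); the p-values are invented.

```python
# Sketch: Bonferroni and Holm adjustments for a family of p-values,
# controlling the family-wise error rate under multiple testing.
import numpy as np

def bonferroni(pvals):
    p = np.asarray(pvals, dtype=float)
    return np.minimum(p * p.size, 1.0)       # adjusted p-values

def holm(pvals):
    p = np.asarray(pvals, dtype=float)
    order = np.argsort(p)                    # step-down through sorted p-values
    m = p.size
    adjusted = np.empty(m)
    running_max = 0.0
    for rank, idx in enumerate(order):
        running_max = max(running_max, (m - rank) * p[idx])
        adjusted[idx] = min(running_max, 1.0)
    return adjusted

pvals = [0.001, 0.012, 0.034, 0.049, 0.2]
print("Bonferroni:", bonferroni(pvals))
print("Holm:      ", holm(pvals))
```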
NASA Astrophysics Data System (ADS)
Wang, Qingdong; Li, Yuzhi; Ma, Qingyu; Guo, Gepu; Tu, Juan; Zhang, Dong
2018-01-01
In order to improve the capability of particle trapping close to the source plane, theoretical and experimental studies on near-field multiple traps of paraxial acoustic vortices (AVs) with a strengthened acoustic gradient force (AGF) generated by a sector transducer array were conducted. By applying the integration of point source radiation, numerical simulations for the acoustic fields generated by the sector transducer array were conducted and compared with those produced by the circular transducer array. It was proved that strengthened AGFs of near-field multiple AVs with higher peak pressures and smaller vortex radii could be produced by the sector transducer array with a small topological charge. The axial distributions of the equivalent potential gradient indicated that the AGFs of paraxial AVs in the near field were much higher than those in the far field, and the distances at the near-field vortex antinodes were also proved to be the ideal trapping positions with relatively higher AGFs. With the established 8-channel AV generation system, theoretical studies were also verified by the experimental measurements of pressure and phase for AVs with various topological charges. The formation of near-field multiple paraxial AVs was verified by the cross-sectional circular pressure distributions with perfect phase spirals around central pressure nulls, and was also proved by the vortex nodes and antinodes along the center axis. The favorable results demonstrated the feasibility of generating near-field multiple traps of paraxial AVs with strengthened AGF using the sector transducer array, and suggested the potential applications of close-range particle trapping in biomedical engineering.
Suh, D C; Manning, W G; Schondelmeyer, S; Hadsall, R S
2000-01-01
OBJECTIVE: To analyze the effect of multiple-source drug entry on price competition after patent expiration in the pharmaceutical industry. DATA SOURCES: Originators and their multiple-source drugs selected from the 35 chemical entities whose patents expired from 1984 through 1987. Data were obtained from various primary and secondary sources for the patents' expiration dates, sales volume and units sold, and characteristics of drugs in the sample markets. STUDY DESIGN: The study was designed to determine significant factors using the study model developed under the assumption that the off-patented market is an imperfectly segmented market. PRINCIPAL FINDINGS: After patent expiration, the originators' prices continued to increase, while the price of multiple-source drugs decreased significantly over time. By the fourth year after patent expiration, originators' sales had decreased 12 percent in dollars and 30 percent in quantity. Multiple-source drugs increased their sales twofold in dollars and threefold in quantity, and possessed about one-fourth (in dollars) and half (in quantity) of the total market three years after entry. CONCLUSION: After patent expiration, multiple-source drugs compete largely with other multiple-source drugs in the price-sensitive sector, but indirectly with the originator in the price-insensitive sector. Originators have first-mover advantages, and therefore have a market that is less price sensitive after multiple-source drugs enter. On the other hand, multiple-source drugs target the price-sensitive sector, using their lower-priced drugs. This trend may indicate that the off-patented market is imperfectly segmented between the price-sensitive and insensitive sector. Consumers as a whole can gain from the entry of multiple-source drugs because the average price of the market continually declines after patent expiration. PMID:10857475
Development open source microcontroller based temperature data logger
NASA Astrophysics Data System (ADS)
Abdullah, M. H.; Che Ghani, S. A.; Zaulkafilai, Z.; Tajuddin, S. N.
2017-10-01
This article discusses the development stages in designing, prototyping, testing and deploying a portable open source microcontroller-based temperature data logger for use in rough industrial environments. The 5 V powered prototype of the data logger is equipped with an open source Arduino microcontroller integrating multiple thermocouple sensors with their modules, secure digital (SD) card storage, a liquid crystal display (LCD), a real-time clock and an electronic enclosure made of acrylic. The data logger is programmed so that eight thermocouple readings are acquired within a 3 s interval and displayed on the LCD simultaneously. The temperature readings recorded at four different points on both hydrodistillation units show similar profile patterns, and the highest yield of extracted oil, 0.004%, was achieved on hydrodistillation unit 2. From the obtained results, this study achieved the objective of developing an inexpensive, portable and robust eight-channel temperature measuring module with the capability to monitor and store real-time data.
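A hedged host-side sketch of such a logging loop is shown below; the serial port name, baud rate, and the comma-separated eight-field line format are assumptions, and the pyserial package provides the link.

```python
# Sketch: poll eight thermocouple channels every 3 s over a serial link
# and append timestamped rows to a CSV file. Port and line format are
# hypothetical placeholders.
import csv, time
import serial  # pyserial

PORT, BAUD, INTERVAL_S = "/dev/ttyUSB0", 9600, 3.0

with serial.Serial(PORT, BAUD, timeout=5) as link, \
        open("templog.csv", "a", newline="") as f:
    writer = csv.writer(f)
    while True:
        line = link.readline().decode("ascii", errors="ignore").strip()
        fields = line.split(",")
        if len(fields) == 8:                  # one reading per channel
            writer.writerow([time.time()] + fields)
            f.flush()
        time.sleep(INTERVAL_S)
```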
The nuclear window to the extragalactic universe
NASA Astrophysics Data System (ADS)
Erdmann, M.; Müller, G.; Urban, M.; Wirtz, M.
2016-12-01
We investigate two recent parameterizations of the galactic magnetic field with respect to their impact on cosmic nuclei traversing the field. We present a comprehensive study of the size of angular deflections, dispersion in the arrival probability distributions, multiplicity in the images of arrival on Earth, variance in field transparency, and influence of the turbulent field components. To remain restricted to ballistic deflections, a cosmic nucleus with energy E and charge Z should have a rigidity above E / Z = 6 EV. In view of the differences resulting from the two field parameterizations as a measure of current knowledge in the galactic field, this rigidity threshold may have to be increased. For a point source search with E/Z ≥ 60 EV, field uncertainties increase the required signal events for discovery moderately for sources in the northern and southern regions, but substantially for sources near the galactic disk.
Estimating discharge measurement uncertainty using the interpolated variance estimator
Cohn, T.; Kiang, J.; Mason, R.
2012-01-01
Methods for quantifying the uncertainty in discharge measurements typically identify various sources of uncertainty and then estimate the uncertainty from each of these sources by applying the results of empirical or laboratory studies. If actual measurement conditions are not consistent with those encountered in the empirical or laboratory studies, these methods may give poor estimates of discharge uncertainty. This paper presents an alternative method for estimating discharge measurement uncertainty that uses statistical techniques and at-site observations. This Interpolated Variance Estimator (IVE) estimates uncertainty based on the data collected during the streamflow measurement and therefore reflects the conditions encountered at the site. The IVE has the additional advantage of capturing all sources of random uncertainty in the velocity and depth measurements. It can be applied to velocity-area discharge measurements that use a velocity meter to measure point velocities at multiple vertical sections in a channel cross section.
Neill, Aaron James; Tetzlaff, Doerthe; Strachan, Norval James Colin; Hough, Rupert Lloyd; Avery, Lisa Marie; Watson, Helen; Soulsby, Chris
2018-01-15
An 11-year dataset of concentrations of E. coli at 10 spatially-distributed sites in a mixed land-use catchment in NE Scotland (52 km²) revealed that concentrations were not clearly associated with flow or season. The lack of a clear flow-concentration relationship may have been due to greater water fluxes from less-contaminated headwaters during high flows diluting downstream concentrations, the importance of persistent anthropogenic and agricultural point sources of E. coli, and possibly the temporal resolution of the dataset. Point sources and year-round grazing of livestock probably obscured clear seasonality in concentrations. Multiple linear regression models identified potential for contamination by anthropogenic point sources as a significant predictor of long-term spatial patterns of low, average and high concentrations of E. coli. Neither arable nor pasture land was significant, even when accounting for hydrological connectivity with a topographic-index method. However, this may have reflected coarse-scale land-cover data inadequately representing "point sources" of agricultural contamination (e.g. direct defecation of livestock into the stream) and temporal changes in availability of E. coli from diffuse sources. Spatial-stream-network models (SSNMs) were applied in a novel context, and had value in making more robust catchment-scale predictions of concentrations of E. coli with estimates of uncertainty, and in enabling identification of potential "hot spots" of faecal contamination. Successfully managing faecal contamination of surface waters is vital for safeguarding public health. Our finding that concentrations of E. coli could not clearly be associated with flow or season may suggest that management strategies should not necessarily target only high flow events or summer, when faecal contamination risk is often assumed to be greatest. Furthermore, we identified SSNMs as valuable tools for identifying possible "hot spots" of contamination which could be targeted for management, and for highlighting areas where additional monitoring could help better constrain predictions relating to faecal contamination. Copyright © 2017 Elsevier B.V. All rights reserved.
Adaptive sampling of information in perceptual decision-making.
Cassey, Thomas C; Evens, David R; Bogacz, Rafal; Marshall, James A R; Ludwig, Casimir J H
2013-01-01
In many perceptual and cognitive decision-making problems, humans sample multiple noisy information sources serially, and integrate the sampled information to make an overall decision. We derive the optimal decision procedure for two-alternative choice tasks in which the different options are sampled one at a time, sources vary in the quality of the information they provide, and the available time is fixed. To maximize accuracy, the optimal observer allocates time to sampling different information sources in proportion to their noise levels. We tested human observers in a corresponding perceptual decision-making task. Observers compared the direction of two random dot motion patterns that were triggered only when fixated. Observers allocated more time to the noisier pattern, in a manner that correlated with their sensory uncertainty about the direction of the patterns. There were several differences between the optimal observer predictions and human behaviour. These differences point to a number of other factors, beyond the quality of the currently available sources of information, that influence the sampling strategy.
Optimal simultaneous superpositioning of multiple structures with missing data.
Theobald, Douglas L; Steindel, Phillip A
2012-08-01
Superpositioning is an essential technique in structural biology that facilitates the comparison and analysis of conformational differences among topologically similar structures. Performing a superposition requires a one-to-one correspondence, or alignment, of the point sets in the different structures. However, in practice, some points are usually 'missing' from several structures, for example, when the alignment contains gaps. Current superposition methods deal with missing data simply by superpositioning a subset of points that are shared among all the structures. This practice is inefficient, as it ignores important data, and it fails to satisfy the common least-squares criterion. In the extreme, disregarding missing positions prohibits the calculation of a superposition altogether. Here, we present a general solution for determining an optimal superposition when some of the data are missing. We use the expectation-maximization algorithm, a classic statistical technique for dealing with incomplete data, to find both maximum-likelihood solutions and the optimal least-squares solution as a special case. The methods presented here are implemented in THESEUS 2.0, a program for superpositioning macromolecular structures. ANSI C source code and selected compiled binaries for various computing platforms are freely available under the GNU open source license from http://www.theseus3d.org. Contact: dtheobald@brandeis.edu. Supplementary data are available at Bioinformatics online.
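A least-squares-only sketch of the idea, assuming complete-data Kabsch superposition as the M-step and imputation of missing coordinates from the current mean structure as the E-step; THESEUS implements the full maximum-likelihood treatment, which this simplification omits.

```python
# Sketch: EM-style least-squares superpositioning with missing points.
import numpy as np

def kabsch(P, Q):
    """Rotation R (applied as P @ R) that best aligns centered P onto Q."""
    U, _, Vt = np.linalg.svd(P.T @ Q)
    d = np.sign(np.linalg.det(U @ Vt))         # guard against reflections
    return U @ np.diag([1.0, 1.0, d]) @ Vt

def superpose_em(coords, mask, n_iter=50):
    # coords: (k, n, 3) structures; mask: (k, n) True where observed
    k, n, _ = coords.shape
    filled = np.where(mask[..., None], coords, np.nan)
    mean = np.nanmean(filled, axis=0)          # initial mean structure
    for _ in range(n_iter):
        aligned = np.empty_like(coords)
        for i in range(k):
            x = np.where(mask[i, :, None], coords[i], mean)  # E-step: impute
            mu_x, mu_m = x.mean(0), mean.mean(0)
            R = kabsch(x - mu_x, mean - mu_m)  # M-step: rigid fit
            aligned[i] = (x - mu_x) @ R + mu_m
        mean = aligned.mean(axis=0)            # update mean structure
    return aligned, mean

rng = np.random.default_rng(7)
coords = rng.normal(size=(5, 40, 3))           # 5 structures, 40 points
mask = rng.random((5, 40)) > 0.1               # ~10% of points missing
aligned, mean = superpose_em(coords, mask)
```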
Isovector charges of the nucleon from 2 + 1 -flavor QCD with clover fermions
Yoon, Boram; Jang, Yong -Chull; Gupta, Rajan; ...
2017-04-13
We present high-statistics estimates of the isovector charges of the nucleon from four 2+1-flavor ensembles generated using Wilson-clover fermions with stout smearing and tree-level tadpole-improved Symanzik gauge action at lattice spacings $a=0.114$ and $0.080$ fm and with $M_\pi \approx 315$ and 200 MeV. The truncated solver method with bias correction and the coherent source sequential propagator construction are used to cost-effectively achieve $O(10^5)$ measurements on each ensemble. Using these data, the analysis of two-point correlation functions is extended to include four states in the fits and of three-point functions to three states. Control over excited-state contamination in the calculation of the nucleon mass, the mass gaps between excited states, and in the matrix elements is demonstrated by the consistency of estimates using this multistate analysis of the spectral decomposition of the correlation functions and from simulations of the three-point functions at multiple values of the source-sink separation. Lastly, the results for all three charges, $g_A$, $g_S$ and $g_T$, are in good agreement with calculations done using the clover-on-HISQ lattice formulation with similar values of the lattice parameters.
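As an illustration of the multistate analysis, a two-state fit of a synthetic two-point correlator $C(t) = A_0 e^{-E_0 t} + A_1 e^{-(E_0+\Delta E) t}$ is sketched below; the paper's fits use up to four states and correlated data, which are omitted here.

```python
# Sketch: two-state spectral fit of a synthetic two-point correlator.
import numpy as np
from scipy.optimize import curve_fit

def two_state(t, A0, E0, A1, dE):
    return A0 * np.exp(-E0 * t) + A1 * np.exp(-(E0 + dE) * t)

t = np.arange(2, 20, dtype=float)
rng = np.random.default_rng(1)
data = two_state(t, 1.0, 0.55, 0.4, 0.6) * (1 + 0.01 * rng.standard_normal(t.size))

popt, pcov = curve_fit(two_state, t, data,
                       p0=[1.0, 0.5, 0.5, 0.5],
                       sigma=0.01 * data, absolute_sigma=True)
print("ground-state energy E0 =", popt[1], "+/-", np.sqrt(pcov[1, 1]))
```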
Qiao, Mu; Liu, Honglin; Pang, Guanghui; Han, Shensheng
2017-08-29
Manipulating light non-invasively through inhomogeneous media is an attractive goal in many disciplines. Wavefront shaping and optical phase conjugation can focus light to a point. The transmission matrix method can control light on multiple output modes simultaneously. Here we report a non-invasive approach which enables three-dimensional (3D) light control between two turbid layers. A digital optical phase conjugation mirror measured and conjugated the diffused wavefront, which originated from a quasi-point source on the front turbid layer and passed through the back turbid layer. Then, because of the memory effect, the phase-conjugated wavefront could be used as a carrier wave to transport a pre-calculated wavefront through the back turbid layer. The pre-calculated wavefront could project a desired 3D light field inside the sample, which, in our experiments, consisted of two 220-grit ground glass plates spaced by a 20 mm distance. The controllable range of light, according to the memory effect, was calculated to be 80 mrad in solid angle and 16 mm on the z-axis. Due to its 3D light control ability, our approach may find applications in photodynamic therapy and optogenetics. Besides, our approach can also be combined with ghost imaging or compressed sensing to achieve 3D imaging between turbid layers.
Inferring Models of Bacterial Dynamics toward Point Sources
Jashnsaz, Hossein; Nguyen, Tyler; Petrache, Horia I.; Pressé, Steve
2015-01-01
Experiments have shown that bacteria can be sensitive to small variations in chemoattractant (CA) concentrations. Motivated by these findings, our focus here is on a regime rarely studied in experiments: bacteria tracking point CA sources (such as food patches or even prey). In tracking point sources, the CA detected by bacteria may show very large spatiotemporal fluctuations which vary with distance from the source. We present a general statistical model to describe how bacteria locate point sources of food on the basis of stochastic event detection, rather than CA gradient information. We show how all model parameters can be directly inferred from single cell tracking data even in the limit of high detection noise. Once parameterized, our model recapitulates bacterial behavior around point sources, such as the “volcano effect”. In addition, while the search by bacteria for point sources such as prey may appear random, our model identifies key statistical signatures of a targeted search for a point source given any arbitrary source configuration. PMID:26466373
DISENTANGLING CONFUSED STARS AT THE GALACTIC CENTER WITH LONG-BASELINE INFRARED INTERFEROMETRY
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stone, Jordan M.; Eisner, J. A.; Monnier, J. D.
2012-08-01
We present simulations of Keck Interferometer ASTRA and VLTI GRAVITY observations of mock star fields in orbit within ~50 mas of Sgr A*. Dual-field phase referencing techniques, as implemented on ASTRA and planned for GRAVITY, will provide the sensitivity to observe Sgr A* with long-baseline infrared interferometers. Our results show an improvement in the confusion noise limit over current astrometric surveys, opening a window to study stellar sources in the region. Since the Keck Interferometer has only a single baseline, the improvement in the confusion limit depends on source position angles. The GRAVITY instrument will yield a more compact and symmetric point-spread function, providing an improvement in confusion noise which will not depend as strongly on position angle. Our Keck results show the ability to characterize the star field as containing zero, few, or many bright stellar sources. We are also able to detect and track a source down to m_K ~ 18 through the least confused regions of our field of view at a precision of ~200 μas along the baseline direction. This level of precision improves with source brightness. Our GRAVITY results show the potential to detect and track multiple sources in the field. GRAVITY will perform ~10 μas astrometry on an m_K = 16.3 source and ~200 μas astrometry on an m_K = 18.8 source in 6 hr of monitoring a crowded field. Monitoring the orbits of several stars will provide the ability to distinguish between multiple post-Newtonian orbital effects, including those due to an extended mass distribution around Sgr A* and to low-order general relativistic effects. ASTRA and GRAVITY both have the potential to detect and monitor sources very close to Sgr A*. Early characterizations of the field by ASTRA, including the possibility of a precise source detection, could provide valuable information for future GRAVITY implementation and observation.
Paladino, Ombretta; Moranda, Arianna; Seyedsalehi, Mahdi
2017-01-01
A procedure for assessing harbour pollution by heavy metals and PAH and the possible sources of contamination is proposed. The procedure is based on a ratio-matching method applied to the results of principal component analysis (PCA), and it allows discrimination between point and nonpoint sources. The approach can be adopted when many sources of pollution can contribute in a very narrow coastal ecosystem, both internal and outside but close to the harbour, and was used to identify the possible point sources of contamination in a Mediterranean Harbour (Port of Vado, Savona, Italy). 235 sediment samples were collected in 81 sampling points during four monitoring campaigns and 28 chemicals were searched for within the collected samples. PCA of total samples allowed the assessment of 8 main possible point sources, while the refining ratio-matching identified 1 sampling point as a possible PAH source, 2 sampling points as Cd point sources, and 3 sampling points as C > 12 point sources. By a map analysis it was possible to assess two internal sources of pollution directly related to terminals activity. The study is the prosecution of a previous work aimed at assessing Savona-Vado Harbour pollution levels and suggested strategies to regulate the harbour activities. PMID:29270328
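A schematic sketch of the two-stage procedure under stated assumptions: the sediment chemistry matrix is random placeholder data, and the candidate-selection and ratio-matching rules used here (component-extreme samples, median relative ratio error below a tolerance) are simplified illustrations rather than the paper's exact criteria.

```python
# Sketch: PCA on a samples x chemicals matrix, then ratio matching of
# each sample against candidate point-source profiles.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

X = np.abs(np.random.default_rng(2).normal(1.0, 0.3, size=(81, 28)))  # 81 samples, 28 chemicals
scores = PCA(n_components=8).fit_transform(StandardScaler().fit_transform(X))

# Candidate point sources: samples with extreme loading on some component
candidates = np.unique(np.argmax(np.abs(scores), axis=0))

def ratio_match(sample, source, tol=0.25):
    """True if the sample's inter-chemical ratios match the source's."""
    r_sample = sample[:, None] / sample[None, :]
    r_source = source[:, None] / source[None, :]
    rel = np.abs(r_sample - r_source) / r_source
    iu = np.triu_indices_from(rel, k=1)        # each chemical pair once
    return np.median(rel[iu]) < tol

for c in candidates:
    matches = [i for i in range(X.shape[0]) if i != c and ratio_match(X[i], X[c])]
    print(f"candidate source sample {c}: {len(matches)} matching samples")
```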
Non-point source pollution is a diffuse source that is difficult to measure and is highly variable due to different rain patterns and other climatic conditions. In many areas, however, non-point source pollution is the greatest source of water quality degradation. Presently, stat...
NASA Astrophysics Data System (ADS)
Lee, Sam; Lucas, Nathan P.; Ellis, R. Darin; Pandya, Abhilash
2012-06-01
This paper presents a seamlessly controlled human multi-robot system comprised of ground and aerial robots of semi-autonomous nature for source localization tasks. The system combines augmented reality interface capabilities with a human supervisor's ability to control multiple robots. The role of this human multi-robot interface is to allow an operator to control groups of heterogeneous robots in real time in a collaborative manner. It uses advanced path-planning algorithms to ensure obstacles are avoided and that the operators are free for higher-level tasks. Each robot knows the environment and obstacles and can automatically generate a collision-free path to any user-selected target. Sensor information from each individual robot is displayed directly on the robot in the video view. In addition, a sensor-data-fused AR view is displayed which helps the users pinpoint source information or helps the operator with the goals of the mission. The paper presents a preliminary human factors evaluation of this system in which several interface conditions are tested for source detection tasks. Results show that the novel augmented reality multi-robot control (Point-and-Go and Path Planning) reduced mission completion times compared to the traditional joystick control for target detection missions. Usability tests and operator workload analysis are also reported.
Agudelo-Calderón, Carlos A; Quiroz-Arcentales, Leonardo; García-Ubaque, Juan C; Robledo-Martínez, Rocío; García-Ubaque, Cesar A
2016-02-01
Objectives: To determine concentrations of PM10, mercury and lead in indoor air of homes, water sources and soil in municipalities near mining operations. Method: 6 points were evaluated in areas of influence and 2 in control areas. For measurements of indoor air, we used the NIOSH 600 method (PM10), NIOSH 6009 (mercury) and NIOSH 7300 (lead). For water analysis we used the IDEAM Guide for monitoring discharges. For soil analysis, we used the cold vapor technique (mercury) and atomic absorption (lead). Results: In almost all selected households, the average PM10 and mercury concentrations in indoor air exceeded applicable air quality standards. Concentrations of lead were below standard levels. In all water sources, high concentrations of lead were found and in some places within the mining areas, high levels of iron, aluminum and mercury were also found. In soil, mercury concentrations were below the detection level and for lead, differences between the monitored points were observed. Conclusions: The results do not establish causal relationships between mining and concentration of these pollutants in the evaluated areas because of the multiplicity of sources in the area. However, such studies provide important information, useful to agents of the environmental health system and researchers. Installation of networks for environmental monitoring to obtain continuous reports is suggested.
Spallation neutron production and the current intra-nuclear cascade and transport codes
NASA Astrophysics Data System (ADS)
Filges, D.; Goldenbaum, F.; Enke, M.; Galin, J.; Herbach, C.-M.; Hilscher, D.; Jahnke, U.; Letourneau, A.; Lott, B.; Neef, R.-D.; Nünighoff, K.; Paul, N.; Péghaire, A.; Pienkowski, L.; Schaal, H.; Schröder, U.; Sterzenbach, G.; Tietze, A.; Tishchenko, V.; Toke, J.; Wohlmuther, M.
A recent resurgence of interest in energetic proton-induced production of neutrons originates largely from the inception of projects for target stations of intense spallation neutron sources, like the planned European Spallation Source (ESS), accelerator-driven nuclear reactors, nuclear waste transmutation, and also from applications for radioactive beams. In the framework of such neutron production, of major importance is the search for the most efficient conversion of the primary beam energy into neutron production. Although the issue has been quite successfully addressed experimentally, by varying the incident proton energy for various target materials and by covering a huge collection of different target geometries (providing an exhaustive matrix of benchmark data), the ultimate challenge is to increase the predictive power of the transport codes currently on the market. To scrutinize these codes, calculations of reaction cross-sections, hadronic interaction lengths, average neutron multiplicities, neutron multiplicity and energy distributions, and the development of hadronic showers are confronted with recent experimental data of the NESSI collaboration. Program packages like HERMES, LCS or MCNPX master the prediction of reaction cross-sections, hadronic interaction lengths, average neutron multiplicities and neutron multiplicity distributions in thick and thin targets for a wide spectrum of incident proton energies, geometrical shapes and target materials, generally within less than 10% deviation, while production cross-section measurements for light charged particles on thin targets point out that appreciable distinctions exist within these models.
NASA Astrophysics Data System (ADS)
Elliott, Mark; MacDonald, Morgan C.; Chan, Terence; Kearton, Annika; Shields, Katherine F.; Bartram, Jamie K.; Hadwen, Wade L.
2017-11-01
Global water research and monitoring typically focus on the household's "main source of drinking-water." Use of multiple water sources to meet daily household needs has been noted in many developing countries but rarely quantified or reported in detail. We gathered self-reported data using a cross-sectional survey of 405 households in eight communities of the Republic of the Marshall Islands (RMI) and five Solomon Islands (SI) communities. Over 90% of households used multiple sources, with differences in sources and uses between wet and dry seasons. Most RMI households had large rainwater tanks and rationed stored rainwater for drinking throughout the dry season, whereas most SI households collected rainwater in small pots, precluding storage across seasons. Use of a source for cooking was strongly positively correlated with use for drinking, whereas use for cooking was negatively correlated or uncorrelated with nonconsumptive uses (e.g., bathing). Dry season water uses implied greater risk of water-borne disease, with fewer (frequently zero) handwashing sources reported and more unimproved sources consumed. Use of multiple sources is fundamental to household water management and feasible to monitor using electronic survey tools. We contend that recognizing multiple water sources can greatly improve understanding of household-level and community-level climate change resilience, that use of multiple sources confounds health impact studies of water interventions, and that incorporating multiple sources into water supply interventions can yield heretofore-unrealized benefits. We propose that failure to consider multiple sources undermines the design and effectiveness of global water monitoring, data interpretation, implementation, policy, and research.
Pointright: a system to redirect mouse and keyboard control among multiple machines
Johanson, Bradley E [Palo Alto, CA; Winograd, Terry A [Stanford, CA; Hutchins, Gregory M [Mountain View, CA
2008-09-30
The present invention provides a software system, PointRight, that allows for smooth and effortless control of pointing and input devices among multiple displays. With PointRight, a single free-floating mouse and keyboard can be used to control multiple screens. When the cursor reaches the edge of a screen it seamlessly moves to the adjacent screen and keyboard control is simultaneously redirected to the appropriate machine. Laptops may also redirect their keyboard and pointing device, and multiple pointers are supported simultaneously. The system automatically reconfigures itself as displays go on, go off, or change the machine they display.
Method and system for controlling the position of a beam of light
Steinkraus, Jr., Robert F.; Johnson, Gary W [Livermore, CA; Ruggiero, Anthony J [Livermore, CA
2011-08-09
A method and system for laser beam tracking and pointing is based on a conventional position sensing detector (PSD) or quadrant cell, but with the use of amplitude-modulated light. A combination of logarithmic automatic gain control, filtering, and synchronous detection offers high angular precision with exceptional dynamic range and sensitivity, while maintaining wide bandwidth. Use of modulated light enables the tracking of multiple beams simultaneously through the use of different modulation frequencies. It also makes the system resistant to interfering light sources such as ambient light. Beam pointing is accomplished by feeding back errors in the measured beam position to a beam steering element, such as a steering mirror. Closed-loop tracking performance is superior to existing methods, especially under conditions of atmospheric scintillation.
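A sketch of the synchronous-detection step on synthetic data follows; the modulation frequency, filter order, and noise levels are arbitrary choices for illustration.

```python
# Sketch: lock-in (synchronous) detection of a position signal carried
# on amplitude-modulated light; unmodulated ambient light is rejected.
import numpy as np
from scipy.signal import butter, filtfilt

fs, f_mod = 100_000.0, 5_000.0               # sample rate, modulation freq (Hz)
t = np.arange(0, 0.05, 1 / fs)
position = 0.3 * np.sin(2 * np.pi * 20 * t)  # slow beam motion
detector = (1 + position) * np.sin(2 * np.pi * f_mod * t) \
    + 0.5 + 0.2 * np.random.default_rng(3).standard_normal(t.size)  # ambient + noise

b, a = butter(4, 200 / (fs / 2))             # 200 Hz low-pass filter
i_ref, q_ref = np.sin(2 * np.pi * f_mod * t), np.cos(2 * np.pi * f_mod * t)
I = filtfilt(b, a, detector * i_ref)         # in-phase component
Q = filtfilt(b, a, detector * q_ref)         # quadrature component
amplitude = 2 * np.hypot(I, Q)               # demodulated envelope ~ 1 + position
```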
CENTAURUS A AS A POINT SOURCE OF ULTRAHIGH ENERGY COSMIC RAYS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Hang Bae, E-mail: hbkim@hanyang.ac.kr
We probe the possibility that Centaurus A (Cen A) is a point source of ultrahigh energy cosmic rays (UHECRs) observed by the Pierre Auger Observatory (PAO), through statistical analysis of the arrival direction distribution. For this purpose, we set up the Cen A dominance model for the UHECR sources, in which Cen A contributes the fraction f_C of the whole UHECR flux with energy above 5.5 × 10^19 eV and the isotropic background contributes the remaining 1 − f_C fraction. The effect of the intergalactic magnetic fields on the bending of the trajectories of Cen A-originated UHECRs is parameterized by the Gaussian smearing angle θ_s. For the statistical analysis, we adopted the correlational angular distance distribution (CADD) for the reduction of the arrival direction distribution and the Kuiper test to compare the observed and the expected CADDs. We identify the excess of UHECRs in the Cen A direction and fit the CADD of the observed PAO data by varying the two parameters f_C and θ_s of the Cen A dominance model. The best-fit parameter values are f_C ≈ 0.1 (the corresponding Cen A fraction observed at PAO is f_C,PAO ≈ 0.15, that is, about 10 out of 69 UHECRs) and θ_s = 5° with the maximum likelihood L_max = 0.29. This result supports the existence of a point source smeared by the intergalactic magnetic fields in the direction of Cen A. If Cen A is actually the source responsible for the observed excess of UHECRs, the rms deflection angle of the excess UHECRs implies an intergalactic magnetic field of order 10 nG in the vicinity of Cen A.
Atom optics in the time domain
NASA Astrophysics Data System (ADS)
Arndt, M.; Szriftgiser, P.; Dalibard, J.; Steane, A. M.
1996-05-01
Atom-optics experiments are presented using a time-modulated evanescent light wave as an atomic mirror in the trampoline configuration, i.e., perpendicular to the direction of the atomic free fall. This modulated mirror is used to accelerate cesium atoms, to focus their trajectories, and to apply a "multiple lens" to separately focus different velocity classes of atoms originating from a point source. We form images of a simple two-slit object to show the resolution of the device. The experiments are modelled by a general treatment analogous to classical ray optics.
NASA Astrophysics Data System (ADS)
Zhang, Shou-ping; Xin, Xiao-kang
2017-07-01
Identification of pollutant sources in river pollution incidents is an important and difficult task in emergency rescue, and an intelligent optimization method can effectively compensate for the weaknesses of traditional methods. An intelligent model for pollutant source identification has been established using the basic genetic algorithm (BGA) as an optimization search tool and applying an analytic solution formula of the one-dimensional unsteady water quality equation to construct the objective function. Experimental tests show that the identification model is effective and efficient: the model can accurately determine the pollutant amounts and positions whether there is a single pollution source or multiple sources. In particular, when the population size of the BGA is set to 10, the computed results agree well with the analytic results for single-source amount and position identification, with relative errors of no more than 5%. For cases of multi-point sources and multiple variables, there are some errors in the computed results because many possible combinations of the pollution sources exist. However, with the help of previous experience to narrow the search scope, the relative errors of the identification results are less than 5%, which proves the established source identification model can be used to direct emergency responses.
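A sketch of the same inverse formulation, assuming an instantaneous point release and the standard 1-D advection-dispersion analytic solution as the forward model; scipy's differential evolution (a related population-based optimizer) stands in for the basic genetic algorithm.

```python
# Sketch: recover source mass and position from downstream readings by
# minimizing misfit against the 1-D advection-dispersion solution.
import numpy as np
from scipy.optimize import differential_evolution

u, D, A = 0.5, 10.0, 50.0          # velocity (m/s), dispersion (m^2/s), area (m^2)

def conc(M, x0, x, t):
    """Concentration at (x, t) for mass M released at x0 at t = 0."""
    return M / (A * np.sqrt(4 * np.pi * D * t)) * \
        np.exp(-(x - x0 - u * t) ** 2 / (4 * D * t))

# Synthetic observations from a "true" source (M = 200 kg at x0 = 1000 m)
x_obs, t_obs = np.array([3000.0, 5000.0]), np.array([3600.0, 7200.0])
observed = conc(200.0, 1000.0, x_obs, t_obs)

def objective(params):
    M, x0 = params
    return np.sum((conc(M, x0, x_obs, t_obs) - observed) ** 2)

result = differential_evolution(objective, bounds=[(1, 1000), (0, 5000)], seed=0)
print("estimated mass and position:", result.x)
```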
Locating arbitrarily time-dependent sound sources in three dimensional space in real time.
Wu, Sean F; Zhu, Na
2010-08-01
This paper presents a method for locating arbitrarily time-dependent acoustic sources in a free field in real time by using only four microphones. This method is capable of handling a wide variety of acoustic signals, including broadband, narrowband, impulsive, and continuous sound over the entire audible frequency range, produced by multiple sources in three-dimensional (3D) space. Locations of acoustic sources are indicated by Cartesian coordinates. The underlying principle of this method is a hybrid approach that consists of modeling of acoustic radiation from a point source in a free field, triangulation, and de-noising to enhance the signal-to-noise ratio (SNR). Numerical simulations are conducted to study the impacts of SNR, microphone spacing, source distance and frequency on the spatial resolution and accuracy of source localization. Based on these results, a simple device is fabricated that consists of four microphones mounted on three mutually orthogonal axes at an optimal distance, a four-channel signal conditioner, and a camera. Experiments are conducted in different environments to assess its effectiveness in locating sources that produce arbitrarily time-dependent acoustic signals, regardless of whether a sound source is stationary or moves in space, even to behind the measurement microphones. Practical limitations of this method are discussed.
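The triangulation step can be sketched as a nonlinear least-squares solve on the time differences of arrival (TDOA) at the four microphones; the de-noising stage and the free-field point-source model are omitted, and the geometry below is illustrative.

```python
# Sketch: TDOA localization with four microphones on orthogonal axes.
import numpy as np
from scipy.optimize import least_squares

c = 343.0                                     # speed of sound (m/s)
mics = np.array([[0, 0, 0], [0.5, 0, 0], [0, 0.5, 0], [0, 0, 0.5]], dtype=float)

def tdoa(src):
    d = np.linalg.norm(mics - src, axis=1)
    return (d[1:] - d[0]) / c                 # delays relative to mic 0

source_true = np.array([2.0, -1.0, 1.5])
measured = tdoa(source_true) + 1e-6 * np.random.default_rng(4).standard_normal(3)

fit = least_squares(lambda s: tdoa(s) - measured, x0=[1.0, 1.0, 1.0])
print("estimated source position:", fit.x)
```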
Estimating vehicle height using homographic projections
Cunningham, Mark F; Fabris, Lorenzo; Gee, Timothy F; Ghebretati, Jr., Frezghi H; Goddard, James S; Karnowski, Thomas P; Ziock, Klaus-peter
2013-07-16
Multiple homography transformations corresponding to different heights are generated in the field of view. A group of salient points within a common estimated height range is identified in a time series of video images of a moving object. Inter-salient point distances are measured for the group of salient points under the multiple homography transformations corresponding to the different heights. Variations in the inter-salient point distances under the multiple homography transformations are compared. The height of the group of salient points is estimated to be the height corresponding to the homography transformation that minimizes the variations.
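A sketch of the selection rule under stated assumptions: the per-height homographies are taken as given 3x3 arrays, and the height whose homography minimizes the temporal variance of inter-salient-point distances is returned.

```python
# Sketch: estimate object height by testing homographies for multiple
# candidate heights and picking the one under which the tracked salient
# points move most rigidly across frames.
import numpy as np

def apply_h(H, pts):
    """Apply a 3x3 homography to (n, 2) pixel points."""
    hom = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return hom[:, :2] / hom[:, 2:3]

def estimate_height(H_by_height, frames):
    # H_by_height: {height: 3x3 array}; frames: list of (n, 2) point arrays
    best_h, best_var = None, np.inf
    for h, H in H_by_height.items():
        dists = []
        for pts in frames:
            world = apply_h(H, pts)
            diff = world[:, None, :] - world[None, :, :]
            iu = np.triu_indices(len(pts), k=1)      # each point pair once
            dists.append(np.linalg.norm(diff, axis=-1)[iu])
        var = np.var(np.stack(dists), axis=0).sum()  # variation over time
        if var < best_var:
            best_h, best_var = h, var
    return best_h
```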
Regression Models for the Analysis of Longitudinal Gaussian Data from Multiple Sources
O’Brien, Liam M.; Fitzmaurice, Garrett M.
2006-01-01
We present a regression model for the joint analysis of longitudinal multiple source Gaussian data. Longitudinal multiple source data arise when repeated measurements are taken from two or more sources, and each source provides a measure of the same underlying variable and on the same scale. This type of data generally produces a relatively large number of observations per subject; thus estimation of an unstructured covariance matrix often may not be possible. We consider two methods by which parsimonious models for the covariance can be obtained for longitudinal multiple source data. The methods are illustrated with an example of multiple informant data arising from a longitudinal interventional trial in psychiatry. PMID:15726666
An improved DPSM technique for modelling ultrasonic fields in cracked solids
NASA Astrophysics Data System (ADS)
Banerjee, Sourav; Kundu, Tribikram; Placko, Dominique
2007-04-01
In recent years the Distributed Point Source Method (DPSM) has been used for modelling various ultrasonic, electrostatic and electromagnetic field problems. In conventional DPSM several point sources are placed near the transducer face, interface and anomaly boundaries. The ultrasonic or electromagnetic field at any point is computed by superimposing the contributions of different layers of strategically placed point sources. The conventional DPSM modelling technique is modified in this paper so that the contributions of the point sources in the shadow region can be removed from the calculations. For this purpose the conventional point sources that radiate in all directions are replaced by Controlled Space Radiation (CSR) sources. CSR sources can take care of the shadow region problem to some extent. Complete removal of the shadow region problem can be achieved by introducing artificial interfaces. Numerically synthesized fields obtained by the conventional DPSM technique, which gives no special consideration to the point sources in the shadow region, and the proposed modified technique, which nullifies their contributions, are compared. One application of this research can be found in the improved modelling of real-time ultrasonic non-destructive evaluation experiments.
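The core DPSM superposition can be sketched as a sum of free-space Green's functions; source strengths are taken as given here, whereas in practice they are solved from boundary conditions, and neither the CSR sources nor the shadow-region correction is modelled.

```python
# Sketch: pressure field as a superposition of point-source Green's
# functions exp(ikr)/r from sources placed just behind a transducer face.
import numpy as np

k = 2 * np.pi * 1e6 / 1500.0                 # wavenumber: 1 MHz in water
sources = np.column_stack([np.linspace(-5e-3, 5e-3, 20),
                           np.zeros(20), np.full(20, -1e-4)])
strengths = np.ones(20)                      # assumed uniform source strengths

def field(points):
    """Complex pressure at (n, 3) field points from all point sources."""
    r = np.linalg.norm(points[:, None, :] - sources[None, :, :], axis=-1)
    return (strengths * np.exp(1j * k * r) / r).sum(axis=1)

grid = np.column_stack([np.zeros(200), np.zeros(200),
                        np.linspace(1e-3, 50e-3, 200)])   # on-axis line
print(np.abs(field(grid))[:5])
```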
On the assessment of spatial resolution of PET systems with iterative image reconstruction
NASA Astrophysics Data System (ADS)
Gong, Kuang; Cherry, Simon R.; Qi, Jinyi
2016-03-01
Spatial resolution is an important metric for performance characterization in PET systems. Measuring spatial resolution is straightforward with a linear reconstruction algorithm, such as filtered backprojection, and can be performed by reconstructing a point source scan and calculating the full-width-at-half-maximum (FWHM) along the principal directions. With the widespread adoption of iterative reconstruction methods, it is desirable to quantify the spatial resolution using an iterative reconstruction algorithm. However, the task can be difficult because the reconstruction algorithms are nonlinear and the non-negativity constraint can artificially enhance the apparent spatial resolution if a point source image is reconstructed without any background. Thus, it was recommended that a background should be added to the point source data before reconstruction for resolution measurement. However, there has been no detailed study on the effect of the point source contrast on the measured spatial resolution. Here we use point source scans from a preclinical PET scanner to investigate the relationship between measured spatial resolution and the point source contrast. We also evaluate whether the reconstruction of an isolated point source is predictive of the ability of the system to resolve two adjacent point sources. Our results indicate that when the point source contrast is below a certain threshold, the measured FWHM remains stable. Once the contrast is above the threshold, the measured FWHM monotonically decreases with increasing point source contrast. In addition, the measured FWHM also monotonically decreases with iteration number for maximum likelihood estimate. Therefore, when measuring system resolution with an iterative reconstruction algorithm, we recommend using a low-contrast point source and a fixed number of iterations.
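A sketch of the recommended measurement, assuming a 1-D profile through the reconstructed point source and a known flat background level; the FWHM is read off with linear interpolation of the half-maximum crossings.

```python
# Sketch: FWHM of a point-source profile above a non-zero background.
import numpy as np

def fwhm(profile, pixel_size, background=0.0):
    p = np.asarray(profile, dtype=float) - background
    half = p.max() / 2.0
    above = np.where(p >= half)[0]
    left, right = above[0], above[-1]
    # sub-pixel crossings via linear interpolation at each edge
    xl = left - 1 + (half - p[left - 1]) / (p[left] - p[left - 1])
    xr = right + (half - p[right]) / (p[right + 1] - p[right])
    return (xr - xl) * pixel_size

x = np.arange(64)
profile = 10.0 + 100.0 * np.exp(-0.5 * ((x - 32) / 2.0) ** 2)  # sigma = 2 px
print(fwhm(profile, pixel_size=0.5, background=10.0))  # ~2.35 (2.355*sigma*0.5)
```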
Satellite Remote Sensing of Harmful Algal Blooms (HABs) and a Potential Synthesized Framework
Shen, Li; Xu, Huiping; Guo, Xulin
2012-01-01
Harmful algal blooms (HABs) are severe ecological disasters threatening aquatic systems throughout the World, which necessitate scientific efforts in detecting and monitoring them. Compared with traditional in situ point observations, satellite remote sensing is considered as a promising technique for studying HABs due to its advantages of large-scale, real-time, and long-term monitoring. The present review summarizes the suitability of current satellite data sources and different algorithms for detecting HABs. It also discusses the spatial scale issue of HABs. Based on the major problems identified from previous literature, including the unsystematic understanding of HABs, the insufficient incorporation of satellite remote sensing, and a lack of multiple oceanographic explanations of the mechanisms causing HABs, this review also attempts to provide a comprehensive understanding of the complicated mechanism of HABs impacted by multiple oceanographic factors. A potential synthesized framework can be established by combining multiple accessible satellite remote sensing approaches including visual interpretation, spectra analysis, parameters retrieval and spatial-temporal pattern analysis. This framework aims to lead to a systematic and comprehensive monitoring of HABs based on satellite remote sensing from multiple oceanographic perspectives. PMID:22969372
Explosion localization and characterization via infrasound using numerical modeling
NASA Astrophysics Data System (ADS)
Fee, D.; Kim, K.; Iezzi, A. M.; Matoza, R. S.; Jolly, A. D.; De Angelis, S.; Diaz Moreno, A.; Szuberla, C.
2017-12-01
Numerous methods have been applied to locate, detect, and characterize volcanic and anthropogenic explosions using infrasound. Far-field localization techniques typically use back-azimuths from multiple arrays (triangulation) or Reverse Time Migration (RTM, or back-projection). At closer ranges, networks surrounding a source may use Time Difference of Arrival (TDOA), semblance, station-pair double difference, etc. However, at volcanoes and regions with topography or obstructions that block the direct path of sound, recent studies have shown that numerical modeling is necessary to provide an accurate source location. A heterogeneous and moving atmosphere (winds) may also affect the location. The time reversal mirror (TRM) application of Kim et al. (2015) back-propagates the wavefield using a Finite Difference Time Domain (FDTD) algorithm, with the source corresponding to the location of peak convergence. Although it provides high-resolution source localization and can account for complex wave propagation, TRM is computationally expensive and limited to individual events. Here we present a new technique, termed RTM-FDTD, which integrates TRM and FDTD. Travel time and transmission loss information is computed from each station to the entire potential source grid from 3-D Green's functions derived via FDTD. The wave energy is then back-projected and stacked at each grid point, with the maximum corresponding to the likely source. We apply our method to detect and characterize thousands of explosions from Yasur Volcano, Vanuatu and Etna Volcano, Italy, which both provide complex wave propagation and multiple source locations. We compare our results with those from more traditional methods (e.g. semblance), and suggest our method is preferred as it is computationally less expensive than TRM but still integrates numerical modeling. RTM-FDTD could be applied to volcanic or anthropogenic sources at a wide variety of ranges and scenarios. Kim, K., Lees, J.M., 2015. Imaging volcanic infrasound sources using time reversal mirror algorithm. Geophysical Journal International 202, 1663-1676.
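A toy version of the back-projection stacking follows; travel times are computed from straight rays at a constant sound speed, whereas the method described above derives them from 3-D FDTD Green's functions, and the waveforms are synthetic impulses.

```python
# Sketch: shift-and-stack back-projection over a 2-D source grid with a
# scan over candidate origin times; the stack maximum marks the source.
import numpy as np

c = 340.0                                        # sound speed (m/s)
fs, nt = 200.0, 2000                             # sample rate (Hz), samples
stations = np.array([[0, 0], [800, 0], [0, 800], [800, 800]], float)
gx, gy = np.meshgrid(np.linspace(0, 800, 81), np.linspace(0, 800, 81))
grid = np.column_stack([gx.ravel(), gy.ravel()])

src, t0 = np.array([300.0, 500.0]), 2.0          # true source and origin time
waves = np.zeros((len(stations), nt))
for i, st in enumerate(stations):                # synthetic impulsive arrivals
    waves[i, int((t0 + np.linalg.norm(st - src) / c) * fs)] = 1.0

tt = np.linalg.norm(grid[:, None, :] - stations[None, :, :], axis=-1) / c
best_val, best_pos = -1.0, None
for n0 in range(nt - 700):                       # scan candidate origin samples
    idx = np.clip(n0 + (tt * fs).astype(int), 0, nt - 1)   # (ngrid, nsta)
    stack = waves[np.arange(len(stations)), idx].sum(axis=1)
    if stack.max() > best_val:
        best_val, best_pos = stack.max(), grid[np.argmax(stack)]
print("estimated source:", best_pos)
```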
Whittington, Richard J; Paul-Pont, Ika; Evans, Olivia; Hick, Paul; Dhand, Navneet K
2018-04-10
Marine herpesviruses are responsible for epizootics in economically, ecologically and culturally significant taxa. The recent emergence of microvariants of Ostreid herpesvirus 1 (OsHV-1) in Pacific oysters Crassostrea gigas has resulted in socioeconomic losses in Europe, New Zealand and Australia; however, there is no information on their origin or mode of transmission. These factors need to be understood because they influence the way the disease may be prevented and controlled. Mortality data obtained from experimental populations of C. gigas during natural epizootics of OsHV-1 disease in Australia were analysed qualitatively. In addition, we compared actual mortality data with those from a Reed-Frost model of direct transmission and analysed incubation periods using Sartwell's method to test for the type of epizootic, point source or propagating. We concluded that outbreaks were initiated from an unknown environmental source which is unlikely to be farmed oysters in the same estuary. While direct oyster-to-oyster transmission may occur in larger oysters if they are in close proximity (< 40 cm), it did not explain the observed epizootics; point source exposure and indirect transmission were more common and important. A conceptual model is proposed for OsHV-1 index case source and transmission, leading to endemicity with recurrent seasonal outbreaks. The findings suggest that prevention and control of OsHV-1 in C. gigas will require multiple interventions. OsHV-1 in C. gigas, which is a sedentary animal once beyond the larval stage, is an informative model when considering marine host-herpesvirus relationships.
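Both analysis tools can be sketched compactly: a Reed-Frost chain-binomial simulation of direct transmission and a Sartwell-style log-normal fit of incubation (onset) times; all numbers below are illustrative, not the study's data.

```python
# Sketch: Reed-Frost chain binomial model plus Sartwell's log-normal
# check of incubation periods (point-source outbreaks tend to show
# approximately log-normal onset-time distributions).
import numpy as np
from scipy import stats

def reed_frost(S0, I0, p, steps, rng=np.random.default_rng(5)):
    """Chain binomial: 1 - p is the per-contact escape probability."""
    S, I, history = S0, I0, []
    for _ in range(steps):
        new_I = rng.binomial(S, 1 - (1 - p) ** I)   # infections this generation
        S, I = S - new_I, new_I
        history.append(new_I)
    return history

print("generation counts:", reed_frost(S0=500, I0=3, p=0.005, steps=10))

# Sartwell's method: fit a log-normal to onset times (days) after exposure
onsets_days = np.array([3, 4, 4, 5, 5, 6, 7, 9, 12], float)
shape, loc, scale = stats.lognorm.fit(onsets_days, floc=0)
print("dispersion factor:", np.exp(shape))          # Sartwell's dispersion
```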
Fontanilla, Ian Kendrich C; Sta Maria, Inna Mikaella P; Garcia, James Rainier M; Ghate, Hemant; Naggs, Fred; Wade, Christopher M
2014-01-01
The Giant African Land Snail, Achatina ( = Lissachatina) fulica Bowdich, 1822, is a tropical crop pest species with a widespread distribution across East Africa, the Indian subcontinent, Southeast Asia, the Pacific, the Caribbean, and North and South America. Its current distribution is attributed primarily to the introduction of the snail to new areas by Man within the last 200 years. This study determined the extent of genetic diversity in global A. fulica populations using the mitochondrial 16S ribosomal RNA gene. A total of 560 individuals were evaluated from 39 global populations obtained from 26 territories. Results reveal 18 distinct A. fulica haplotypes; 14 are found in East Africa and the Indian Ocean islands, but only two haplotypes from the Indian Ocean islands emerged from this region, the C haplotype, now distributed across the tropics, and the D haplotype in Ecuador and Bolivia. Haplotype E from the Philippines, F from New Caledonia and Barbados, O from India and Q from Ecuador are variants of the emergent C haplotype. For the non-native populations, the lack of genetic variation points to founder effects due to the lack of multiple introductions from the native range. Our current data could only point with certainty to the Indian Ocean islands as the earliest known common source of A. fulica across the globe, which necessitates further sampling in East Africa to determine the source populations of the emergent haplotypes.
NASA Astrophysics Data System (ADS)
Ning, Nannan; Tian, Jie; Liu, Xia; Deng, Kexin; Wu, Ping; Wang, Bo; Wang, Kun; Ma, Xibo
2014-02-01
Mathematically, optical molecular imaging modalities, including bioluminescence tomography (BLT), fluorescence tomography (FMT) and Cerenkov luminescence tomography (CLT), are concerned with a similar inverse source problem. They all involve the reconstruction of the 3D location of single or multiple internal luminescent/fluorescent sources based on the 3D surface flux distribution. To achieve that, an accurate fusion between 2D luminescent/fluorescent images and 3D structural images, which may be acquired from micro-CT, MRI or beam scanning, is extremely critical. However, the absence of a universal method that can effectively convert 2D optical information into 3D makes accurate fusion challenging. In this study, to improve the fusion accuracy, a new fusion method for dual-modality tomography (luminescence/fluorescence and micro-CT) based on natural light surface reconstruction (NLSR) and the iterated closest point (ICP) algorithm is presented. It consists of an Octree structure, an exact visual hull from marching cubes, and ICP. Unlike conventional limited-projection methods, it is a 360° free-space registration and utilizes more luminescence/fluorescence distribution information from unlimited multi-orientation 2D optical images. A mouse-mimicking phantom (one XPM-2 Phantom Light Source, XENOGEN Corporation) and an in-vivo BALB/C mouse with one implanted luminescent light source were used to evaluate the performance of the new fusion method. Compared with conventional fusion methods, the average error of preset markers was improved by 0.3 and 0.2 pixels by the new method, respectively. After running the same 3D internal light source reconstruction algorithm on the BALB/C mouse, the distance error between the actual and reconstructed internal source was decreased by 0.19 mm.
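The ICP stage can be sketched as alternating nearest-neighbour matching with a closed-form rigid fit; the Octree, visual-hull and marching-cubes stages that produce the two surfaces are assumed to have run already, and the arrays below are placeholders.

```python
# Sketch: iterated closest point (ICP) with a Kabsch/SVD rigid fit.
import numpy as np
from scipy.spatial import cKDTree

def best_rigid(P, Q):
    """Rigid transform (R, t) minimizing ||P @ R.T + t - Q||."""
    mu_p, mu_q = P.mean(0), Q.mean(0)
    H = (P - mu_p).T @ (Q - mu_q)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, mu_q - R @ mu_p

def icp(source, target, n_iter=30):
    tree = cKDTree(target)
    current = source.copy()
    for _ in range(n_iter):
        _, nn = tree.query(current)           # closest target surface points
        R, t = best_rigid(current, target[nn])
        current = current @ R.T + t
    return current

rng = np.random.default_rng(6)
target = rng.normal(size=(500, 3))            # placeholder CT surface points
source = target[:300] + 0.05                  # shifted subset as a demo
print(icp(source, target)[:3])
```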
Rikkerink, Erik H A
2018-03-08
Organisms face stress from multiple sources simultaneously and require mechanisms to respond to these scenarios if they are to survive in the long term. This overview focuses on a series of key points that illustrate how disorder and post-translational changes can combine to play a critical role in orchestrating the response of organisms to the stress of a changing environment. Increasingly, protein complexes are thought of as dynamic multi-component molecular machines able to adapt through compositional, conformational and/or post-translational modifications to control their largely metabolic outputs. These metabolites then feed into cellular physiological homeostasis or the production of secondary metabolites with novel anti-microbial properties. The control of adaptations to stress operates at multiple levels, including the proteome, and the dynamic nature of proteomic changes suggests a parallel with the equally dynamic epigenetic changes at the level of nucleic acids. Given their properties, I propose that some disordered protein platforms specifically enable organisms to sense and react rapidly as the first line of response to change. Using examples from the highly dynamic host-pathogen and host-stress responses, I illustrate how disordered proteins are key to fulfilling the need for multiple levels of integration of response at different time scales to create robust control points.
IIPImage: Large-image visualization
NASA Astrophysics Data System (ADS)
Pillay, Ruven
2014-08-01
IIPImage is an advanced high-performance feature-rich image server system that enables online access to full-resolution floating point (as well as other bit depth) images at terabyte scales. Paired with the VisiOmatic (ascl:1408.010) celestial image viewer, the system can comfortably handle gigapixel-size images as well as advanced image features such as 8-, 16- and 32-bit depths, CIELAB colorimetric images and scientific imagery such as multispectral images. Streaming is tile-based, which enables viewing, navigating and zooming in real time around gigapixel-size images. Source images can be in either TIFF or JPEG2000 format. Whole images or regions within images can also be rapidly and dynamically resized and exported by the server from a single source image without the need to store multiple files in various sizes.
Nonuniformity of Temperatures in Microwave Steam Heating of Lobster Tail.
Fleischman, Gregory J
2016-11-01
The biennial Conference for Food Protection provides a formal process for all interested parties to influence food safety guidance. At a recent conference, an issue was raised culminating in a formal request to the U.S. Food and Drug Administration to change its Food Code recommendation for safe cooking of seafood using microwave energy when steaming was also employed. The request was to treat microwave steam cooked seafood as a conventionally cooked raw animal product rather than a microwave cooked product, for which the safe cooking recommendation is more extensive owing to the complex temperature distributions in microwave heating. The request was motivated by a literature study that revealed a more uniform temperature distribution in microwave steam cooked whole lobster. In that study, single-point temperatures were recorded in various sections of the whole lobster, but only one temperature was recorded in the tail, although the large size of the tail could translate to multiple hot and cold points. The present study was conducted to examine lobster tail specifically, measuring temperatures at multiple points during microwave steam cooking. Large temperature differences, greater than 60°C at times, were found throughout the heating period. To compensate for such differences, the Food Code recommends a more extensive level of cooking when microwave energy, rather than conventional heat sources, is used. Therefore, a change in the Food Code regarding microwave steam heating cannot be recommended.
Hossain, Md Selim; Saeedi, Ehsan; Kong, Yinan
2017-01-01
In this paper, we propose a novel parallel architecture for fast hardware implementation of elliptic curve point multiplication (ECPM), which is the key operation of an elliptic curve cryptography processor. The point multiplication over binary fields is synthesized on both FPGA and ASIC technology by designing fast elliptic curve group operations in Jacobian projective coordinates. A novel combined point doubling and point addition (PDPA) architecture is proposed for group operations to achieve high speed and low hardware requirements for ECPM. It has been implemented over the binary field which is recommended by the National Institute of Standards and Technology (NIST). The proposed ECPM supports two Koblitz and random curves for the key sizes 233 and 163 bits. For group operations, a finite-field arithmetic operation, e.g. multiplication, is designed on a polynomial basis. The delay of a 233-bit point multiplication is only 3.05 and 3.56 μs, in a Xilinx Virtex-7 FPGA, for Koblitz and random curves, respectively, and 0.81 μs in an ASIC 65-nm technology, which are the fastest hardware implementation results reported in the literature to date. In addition, a 163-bit point multiplication is also implemented in FPGA and ASIC for fair comparison, which takes around 0.33 and 0.46 μs, respectively. The area-time product of the proposed point multiplication is very low compared to similar designs. The performance (1/(Area × Time) = 1/AT) and Area × Time × Energy (ATE) product of the proposed design are far better than those of the most significant studies found in the literature. PMID:28459831
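For readers unfamiliar with ECPM itself, the scalar multiplication being accelerated is the repeated group operation k·P. The sketch below shows the textbook left-to-right double-and-add control flow; it deliberately uses affine coordinates over a tiny prime field for readability, whereas the paper's hardware works over NIST binary fields in Jacobian projective coordinates through a combined PDPA datapath. The toy curve parameters are made up.

```python
# Double-and-add scalar multiplication on y^2 = x^3 + a*x + b over F_p.
def inv_mod(x, p):
    return pow(x, p - 2, p)                  # Fermat inverse (p prime)

def ec_add(P, Q, a, p):
    if P is None: return Q                   # None encodes the point at infinity
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None
    if P == Q:
        lam = (3 * x1 * x1 + a) * inv_mod(2 * y1, p) % p
    else:
        lam = (y2 - y1) * inv_mod(x2 - x1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def ec_mul(k, P, a, p):
    """Left-to-right double-and-add: one doubling per bit, one addition per set bit."""
    R = None
    for bit in bin(k)[2:]:
        R = ec_add(R, R, a, p)               # double
        if bit == '1':
            R = ec_add(R, P, a, p)           # add
    return R

# Toy example: (3, 6) lies on y^2 = x^3 + 2x + 3 over F_97.
print(ec_mul(13, (3, 6), a=2, p=97))
```

Because every scalar bit costs a doubling and roughly half the bits cost an addition, fusing the two operations into one PDPA unit, as the paper does, directly shortens the critical loop.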
UNMIX Methods Applied to Characterize Sources of Volatile Organic Compounds in Toronto, Ontario
Porada, Eugeniusz; Szyszkowicz, Mieczysław
2016-01-01
UNMIX, a receptor modeling routine from the U.S. Environmental Protection Agency (EPA), was used to model volatile organic compound (VOC) receptors at four urban sites in Toronto, Ontario. VOC ambient concentration data acquired in 2000–2009 for 175 VOC species at four air quality monitoring stations were analyzed. UNMIX, by performing multiple modeling attempts on varying VOC menus while rejecting results that were not reliable, allowed the discrimination of sources by their most consistent chemical characteristics. The method assessed occurrences of VOCs in sources typical of the urban environment (traffic, evaporative emissions of fuels, banks of fugitive inert gases), in industrial point sources (plastic-, polymer-, and metalworking manufactures), and in secondary sources (releases from water, sediments, and contaminated urban soil). The remote sensing and robust modeling used here produce chemical profiles of putative VOC sources that, if combined with known environmental fates of VOCs, can be used to assign physical sources' shares of VOC emissions into the atmosphere. This in turn provides a means of assessing the impact of environmental policies on one hand, and industrial activities on the other, on VOC air pollution. PMID:29051416
Mixture-based gatekeeping procedures in adaptive clinical trials.
Kordzakhia, George; Dmitrienko, Alex; Ishida, Eiji
2018-01-01
Clinical trials with data-driven decision rules often pursue multiple clinical objectives such as the evaluation of several endpoints or several doses of an experimental treatment. These complex analysis strategies give rise to "multivariate" multiplicity problems with several components or sources of multiplicity. A general framework for defining gatekeeping procedures in clinical trials with adaptive multistage designs is proposed in this paper. The mixture method is applied to build a gatekeeping procedure at each stage and inferences at each decision point (interim or final analysis) are performed using the combination function approach. An advantage of utilizing the mixture method is that it enables powerful gatekeeping procedures applicable to a broad class of settings with complex logical relationships among the hypotheses of interest. Further, the combination function approach supports flexible data-driven decisions such as a decision to increase the sample size or remove a treatment arm. The paper concludes with a clinical trial example that illustrates the methodology by applying it to develop an adaptive two-stage design with a mixture-based gatekeeping procedure.
The X-Ray Globular Cluster Population in NGC 1399
NASA Technical Reports Server (NTRS)
Angelini, Lorella; Loewenstein, Michael; Mushotzky, Richard F.; White, Nicholas E. (Technical Monitor)
2001-01-01
We report on X-ray sources detected in the Chandra images of the elliptical galaxy NGC 1399 and identified with globular clusters (GCs). The 8' × 8' Chandra image shows that a large fraction of the 2-10 keV X-ray emission is resolved into point sources, with a luminosity threshold of 5 × 10^37 erg s^-1. These sources are most likely Low Mass X-ray Binaries (LMXBs). More than 70% of the X-ray sources, in a region imaged by the Hubble Space Telescope (HST), are located within GCs. Many of these sources have super-Eddington luminosity (for an accreting neutron star) and their average luminosity is higher than that of the remaining sources. This association suggests that, in giant elliptical galaxies, luminous X-ray binaries preferentially form in GCs. The spectral properties of the GC and non-GC sources are in most cases similar to those of LMXBs in our Galaxy. Two of the brightest sources, one of which is in a GC, have much softer spectra, as seen in black holes in the high state. The "apparent" super-Eddington luminosity may in many cases be due to multiple LMXB systems within individual GCs, but with some of the most extreme luminous systems containing massive black holes.
Nutrient mass balance and trends, Mobile River Basin, Alabama, Georgia, and Mississippi
Harned, D.A.; Atkins, J.B.; Harvill, J.S.
2004-01-01
A nutrient mass balance - accounting for nutrient inputs from atmospheric deposition, fertilizer, crop nitrogen fixation, and point source effluents; and nutrient outputs, including crop harvest and storage - was calculated for 18 subbasins in the Mobile River Basin, and trends (1970 to 1997) were evaluated as part of the U.S. Geological Survey National Water Quality Assessment (NAWQA) Program. Agricultural nonpoint nitrogen and phosphorus sources and urban nonpoint nitrogen sources are the most important factors associated with nutrients in this system. More than 30 percent of nitrogen yield in two basins and phosphorus yield in eight basins can be attributed to urban point source nutrient inputs. The total nitrogen yield (1.3 tons per square mile per year) for the Tombigbee River, which drains a greater percentage of agricultural (row crop) land use, was larger than the total nitrogen yield (0.99 tons per square mile per year) for the Alabama River. Decreasing trends of total nitrogen concentrations in the Tombigbee and Alabama Rivers indicate that a reduction occurred from 1975 to 1997 in the nitrogen contributions to Mobile Bay from the Mobile River. Nitrogen concentrations also decreased (1980 to 1995) in the Black Warrior River, one of the major tributaries to the Tombigbee River. Total phosphorus concentrations increased from 1970 to 1996 at three urban influenced sites on the Etowah River in Georgia. Multiple regression analysis indicates a distinct association between water quality in the streams of the Mobile River drainage basin and agricultural activities in the basin.
Nature of the Unidentified TeV Source HESS J1614-518 Revealed by Suzaku and XMM-Newton Observations
NASA Astrophysics Data System (ADS)
Sakai, M.; Yajima, Y.; Matsumoto, H.
2013-03-01
We report new results concerning HESS J1614-518, which exhibits two regions with intense γ-ray emission. The south and center regions of HESS J1614-518 were observed with Suzaku in 2008, while the north region with the 1st brightest peak was observed in 2006. No X-ray counterpart is found at the 2nd brightest peak; the upper limit of the X-ray flux is estimated as 1.6 × 10^-13 erg cm^-2 s^-1 in the 2-10 keV band. A previously known soft X-ray source, Suzaku J1614-5152, is detected at the center of HESS J1614-518. Analyzing the XMM-Newton archival data, we reveal that Suzaku J1614-5152 consists of multiple point sources. The X-ray spectrum of the brightest point source, XMMU J161406.0-515225, could be described by a power-law model with photon index Γ = 5.2 (+0.6/-0.5) or a blackbody model with temperature kT = 0.38 ± 0.04 keV. In the blackbody model, the estimated column density N_H = 1.1 (+0.3/-0.2) × 10^22 cm^-2 is almost the same as that of the hard extended X-ray emission in Suzaku J1614-5141, spatially coincident with the 1st peak position. In this case, XMMU J161406.0-515225 may be physically related to Suzaku J1614-5141 and HESS J1614-518.
The Chandra Source Catalog 2.0: Estimating Source Fluxes
NASA Astrophysics Data System (ADS)
Primini, Francis Anthony; Allen, Christopher E.; Miller, Joseph; Anderson, Craig S.; Budynkiewicz, Jamie A.; Burke, Douglas; Chen, Judy C.; Civano, Francesca Maria; D'Abrusco, Raffaele; Doe, Stephen M.; Evans, Ian N.; Evans, Janet D.; Fabbiano, Giuseppina; Gibbs, Danny G., II; Glotfelty, Kenny J.; Graessle, Dale E.; Grier, John D.; Hain, Roger; Hall, Diane M.; Harbo, Peter N.; Houck, John C.; Lauer, Jennifer L.; Laurino, Omar; Lee, Nicholas P.; Martínez-Galarza, Juan Rafael; McCollough, Michael L.; McDowell, Jonathan C.; McLaughlin, Warren; Morgan, Douglas L.; Mossman, Amy E.; Nguyen, Dan T.; Nichols, Joy S.; Nowak, Michael A.; Paxson, Charles; Plummer, David A.; Rots, Arnold H.; Siemiginowska, Aneta; Sundheim, Beth A.; Tibbetts, Michael; Van Stone, David W.; Zografou, Panagoula
2018-01-01
The Second Chandra Source Catalog (CSC2.0) will provide information on approximately 316,000 point or compact extended X-ray sources, derived from over 10,000 ACIS and HRC-I imaging observations available in the public archive at the end of 2014. As in the previous catalog release (CSC1.1), fluxes for these sources will be determined separately from source detection, using a Bayesian formalism that accounts for background, spatial resolution effects, and contamination from nearby sources. However, the CSC2.0 procedure differs from that used in CSC1.1 in three important aspects. First, for sources in crowded regions in which photometric apertures overlap, fluxes are determined jointly, using an extension of the CSC1.1 algorithm, as discussed in Primini & Kashyap (2014ApJ...796...24P). Second, an MCMC procedure is used to estimate marginalized posterior probability distributions for source fluxes. Finally, for sources observed in multiple observations, a Bayesian Blocks algorithm (Scargle et al. 2013ApJ...764..167S) is used to group observations into blocks of constant source flux. In this poster we present details of the CSC2.0 photometry algorithms and illustrate their performance on actual CSC2.0 datasets. This work has been supported by NASA under contract NAS 8-03060 to the Smithsonian Astrophysical Observatory for operation of the Chandra X-ray Center.
Point source emission reference materials from the Emissions Inventory Improvement Program (EIIP). Provides point source guidance on planning, emissions estimation, data collection, inventory documentation and reporting, and quality assurance/quality control.
NASA Astrophysics Data System (ADS)
Zhu, Lei; Song, JinXi; Liu, WanQing
2017-12-01
Huaxian Section is the last hydrological and water quality monitoring section of the Weihe River Watershed. The Weihe River Watershed above Huaxian Section is taken as the research objective in this paper, and COD is chosen as the water quality parameter. According to the discharge characteristics of point source and non-point source pollution, a new method for estimating pollution loads, the characteristic section load (CSLD) method, is suggested, and the point source and non-point source pollution loads of the Weihe River Watershed above Huaxian Section are calculated for the rainy, normal and dry seasons of 2007. The results show that the monthly point source pollution loads discharge stably, while the monthly non-point source pollution loads change greatly; the non-point source share of the total COD pollution load decreases in the normal, rainy and wet periods in turn.
Calculating NH3-N pollution load of Weihe River Watershed above Huaxian Section using CSLD method
NASA Astrophysics Data System (ADS)
Zhu, Lei; Song, JinXi; Liu, WanQing
2018-02-01
Huaxian Section is the last hydrological and water quality monitoring section of the Weihe River Watershed, so it is taken as the research objective in this paper, with NH3-N chosen as the water quality parameter. According to the discharge characteristics of point source and non-point source pollution, a new method for estimating pollution loads, the characteristic section load (CSLD) method, is suggested, and the point source and non-point source pollution loads of the Weihe River Watershed above Huaxian Section are calculated for the rainy, normal and dry seasons of 2007. The results show that the monthly point source pollution loads discharge stably, while the monthly non-point source pollution loads change greatly. The non-point source share of the total NH3-N pollution load decreases in the normal, rainy and wet periods in turn.
Cronkite-Ratcliff, C.; Phelps, G.A.; Boucher, A.
2012-01-01
This report provides a proof-of-concept to demonstrate the potential application of multiple-point geostatistics for characterizing geologic heterogeneity and its effect on flow and transport simulation. The study presented in this report is the result of collaboration between the U.S. Geological Survey (USGS) and Stanford University. This collaboration focused on improving the characterization of alluvial deposits by incorporating prior knowledge of geologic structure and estimating the uncertainty of the modeled geologic units. In this study, geologic heterogeneity of alluvial units is characterized as a set of stochastic realizations, and uncertainty is indicated by variability in the results of flow and transport simulations for this set of realizations. This approach is tested on a hypothetical geologic scenario developed using data from the alluvial deposits in Yucca Flat, Nevada. Yucca Flat was chosen as a data source for this test case because it includes both complex geologic and hydrologic characteristics and also contains a substantial amount of both surface and subsurface geologic data. Multiple-point geostatistics is used to model geologic heterogeneity in the subsurface. A three-dimensional (3D) model of spatial variability is developed by integrating alluvial units mapped at the surface with vertical drill-hole data. The SNESIM (Single Normal Equation Simulation) algorithm is used to represent geologic heterogeneity stochastically by generating 20 realizations, each of which represents an equally probable geologic scenario. A 3D numerical model is used to simulate groundwater flow and contaminant transport for each realization, producing a distribution of flow and transport responses to the geologic heterogeneity. From this distribution of flow and transport responses, the frequency of exceeding a given contaminant concentration threshold can be used as an indicator of uncertainty about the location of the contaminant plume boundary.
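The "single normal equation" behind SNESIM can be conveyed with a deliberately naive sketch: the conditional facies probability at a cell is read off as the frequency of the cell's known neighbourhood pattern in the training image. The brute-force 2-D toy below (synthetic binary training image, hypothetical conditioning values) shows the idea only; production SNESIM stores the TI statistics in a search tree once and simulates on multiple grids.

```python
# Toy single-normal-equation draw: sample a facies value for one cell from the
# empirical conditional distribution of matching neighbourhood patterns in a TI.
import numpy as np

rng = np.random.default_rng(0)
ti = (rng.random((60, 60)) < 0.3).astype(int)    # hypothetical binary training image

def conditional_draw(ti, known):                 # known: dict {(dy, dx): facies value}
    counts = np.zeros(2)
    h, w = ti.shape
    for y in range(2, h - 2):
        for x in range(2, w - 2):
            if all(ti[y + dy, x + dx] == v for (dy, dx), v in known.items()):
                counts[ti[y, x]] += 1            # tally central value of each match
    if counts.sum() == 0:                        # pattern unseen: fall back to TI proportions
        counts = np.bincount(ti.ravel(), minlength=2)
    return rng.choice(2, p=counts / counts.sum())

# Simulate one cell given two conditioning neighbours (values hypothetical).
print(conditional_draw(ti, {(0, -1): 1, (-1, 0): 0}))
```

Repeating such draws along a random path over the grid produces one stochastic realization; the 20 SNESIM realizations in the study play exactly this role, each honoring the drill-hole hard data.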
46 CFR 111.10-5 - Multiple energy sources.
Code of Federal Regulations, 2010 CFR
2010-10-01
Title 46, Shipping; General Requirements, Power Supply. § 111.10-5 Multiple energy sources. Failure of any single generating set energy source such as a boiler, diesel, gas turbine, or steam turbine must not cause all generating sets...
46 CFR 111.10-5 - Multiple energy sources.
Code of Federal Regulations, 2011 CFR
2011-10-01
Title 46, Shipping; General Requirements, Power Supply. § 111.10-5 Multiple energy sources. Failure of any single generating set energy source such as a boiler, diesel, gas turbine, or steam turbine must not cause all generating sets...
46 CFR 111.10-5 - Multiple energy sources.
Code of Federal Regulations, 2013 CFR
2013-10-01
Title 46, Shipping; General Requirements, Power Supply. § 111.10-5 Multiple energy sources. Failure of any single generating set energy source such as a boiler, diesel, gas turbine, or steam turbine must not cause all generating sets...
46 CFR 111.10-5 - Multiple energy sources.
Code of Federal Regulations, 2014 CFR
2014-10-01
Title 46, Shipping; General Requirements, Power Supply. § 111.10-5 Multiple energy sources. Failure of any single generating set energy source such as a boiler, diesel, gas turbine, or steam turbine must not cause all generating sets...
46 CFR 111.10-5 - Multiple energy sources.
Code of Federal Regulations, 2012 CFR
2012-10-01
Title 46, Shipping; General Requirements, Power Supply. § 111.10-5 Multiple energy sources. Failure of any single generating set energy source such as a boiler, diesel, gas turbine, or steam turbine must not cause all generating sets...
Schmidt, Mark E; Chiao, Ping; Klein, Gregory; Matthews, Dawn; Thurfjell, Lennart; Cole, Patricia E; Margolin, Richard; Landau, Susan; Foster, Norman L; Mason, N Scott; De Santi, Susan; Suhy, Joyce; Koeppe, Robert A; Jagust, William
2015-09-01
In vivo imaging of amyloid burden with positron emission tomography (PET) provides a means for studying the pathophysiology of Alzheimer's and related diseases. Measurement of subtle changes in amyloid burden requires quantitative analysis of image data. Reliable quantitative analysis of amyloid PET scans acquired at multiple sites and over time requires rigorous standardization of acquisition protocols, subject management, tracer administration, image quality control, and image processing and analysis methods. We review critical points in the acquisition and analysis of amyloid PET, identify ways in which technical factors can contribute to measurement variability, and suggest methods for mitigating these sources of noise. Improved quantitative accuracy could reduce the sample size necessary to detect intervention effects when amyloid PET is used as a treatment end point and allow more reliable interpretation of change in amyloid burden and its relationship to clinical course.
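One concrete example of the quantitative endpoints whose standardization is discussed here is the standardized uptake value ratio (SUVR): mean tracer uptake in a cortical target composite divided by that in a reference region such as cerebellar grey. A minimal sketch, assuming the PET volume and boolean region masks are already co-registered; the region choices are placeholders, not a recommendation from the review.

```python
import numpy as np

def suvr(pet, target_mask, reference_mask):
    """pet: 3-D activity image; masks: boolean arrays of the same shape."""
    return pet[target_mask].mean() / pet[reference_mask].mean()
```

Because the endpoint is a ratio of two regional means, variability in any upstream step (reconstruction, subject motion, region definition) propagates directly into it, which is why the review emphasizes rigorous standardization.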
NASA Astrophysics Data System (ADS)
Hashemi, Seyyedhossein; Javaherian, Abdolrahim; Ataee-pour, Majid; Tahmasebi, Pejman; Khoshdel, Hossein
2014-12-01
In facies modeling, the ideal objective is to integrate different sources of data to generate a model that is maximally consistent with reality with respect to geological shapes and their facies architectures. Multiple-point (geo)statistics (MPS) is a tool that offers the opportunity of reaching this goal by defining a training image (TI). A facies modeling workflow was conducted on a carbonate reservoir located in southwest Iran. Through a sequence stratigraphic correlation among the wells, it was revealed that the interval under modeling was deposited in a tidal flat environment. The Bahamas tidal flat environment, one of the most well studied modern carbonate tidal flats, was considered to be the source of the information required for modeling a TI. In parallel, a neural network probability cube was generated based on a set of attributes derived from the 3D seismic cube to be applied in the MPS algorithm as soft conditioning data. Moreover, extracted channel bodies and drilled well log facies entered the modeling as hard data. The combination of these constraints resulted in a facies model that was highly consistent with the geological scenarios. This study showed how analogy with modern occurrences can serve as the foundation for generating a training image. Channel morphology and the facies types currently being deposited, which are crucial for modeling a training image, were inferred from modern occurrences. However, there were some practical considerations concerning the MPS algorithm used for facies simulation. The main limitation was the huge amount of RAM and CPU time needed to perform simulations.
1991-11-01
Tilted Rough Disc," Donald J. Schertler and Nicholas George "Image Deblurring for Multiple-Point Impulse Responses," Bryan J. Stossel and Nicholas George...Rough Disc Donald J. Schertler Nicholas George Image Deblurring for Multiple-Point Impulse Bryan J. Stossel Responses Nicholas George z 0 zw V) w LU 0...number of impulses present in the degradation. IMAGE DEBLURRING FOR MULTIPLE-POINT IMPULSE RESPONSESt Bryan J. Stossel Nicholas George Institute of Optics
Self-Similar Spin Images for Point Cloud Matching
NASA Astrophysics Data System (ADS)
Pulido, Daniel
The rapid growth of Light Detection And Ranging (Lidar) technologies that collect, process, and disseminate 3D point clouds have allowed for increasingly accurate spatial modeling and analysis of the real world. Lidar sensors can generate massive 3D point clouds of a collection area that provide highly detailed spatial and radiometric information. However, a Lidar collection can be expensive and time consuming. Simultaneously, the growth of crowdsourced Web 2.0 data (e.g., Flickr, OpenStreetMap) have provided researchers with a wealth of freely available data sources that cover a variety of geographic areas. Crowdsourced data can be of varying quality and density. In addition, since it is typically not collected as part of a dedicated experiment but rather volunteered, when and where the data is collected is arbitrary. The integration of these two sources of geoinformation can provide researchers the ability to generate products and derive intelligence that mitigate their respective disadvantages and combine their advantages. Therefore, this research will address the problem of fusing two point clouds from potentially different sources. Specifically, we will consider two problems: scale matching and feature matching. Scale matching consists of computing feature metrics of each point cloud and analyzing their distributions to determine scale differences. Feature matching consists of defining local descriptors that are invariant to common dataset distortions (e.g., rotation and translation). Additionally, after matching the point clouds they can be registered and processed further (e.g., change detection). The objective of this research is to develop novel methods to fuse and enhance two point clouds from potentially disparate sources (e.g., Lidar and crowdsourced Web 2.0 datasets). The scope of this research is to investigate both scale and feature matching between two point clouds. The specific focus of this research will be in developing a novel local descriptor based on the concept of self-similarity to aid in the scale and feature matching steps. An open problem in fusion is how best to extract features from two point clouds and then perform feature-based matching. The proposed approach for this matching step is the use of local self-similarity as an invariant measure to match features. In particular, the proposed approach is to combine the concept of local self-similarity with a well-known feature descriptor, Spin Images, and thereby define "Self-Similar Spin Images". This approach is then extended to the case of matching two points clouds in very different coordinate systems (e.g., a geo-referenced Lidar point cloud and stereo-image derived point cloud without geo-referencing). The use of Self-Similar Spin Images is again applied to address this problem by introducing a "Self-Similar Keyscale" that matches the spatial scales of two point clouds. Another open problem is how best to detect changes in content between two point clouds. A method is proposed to find changes between two point clouds by analyzing the order statistics of the nearest neighbors between the two clouds, and thereby define the "Nearest Neighbor Order Statistic" method. Note that the well-known Hausdorff distance is a special case as being just the maximum order statistic. Therefore, by studying the entire histogram of these nearest neighbors it is expected to yield a more robust method to detect points that are present in one cloud but not the other. This approach is applied at multiple resolutions. 
Therefore, changes detected at the coarsest level will yield large missing targets and at finer levels will yield smaller targets.
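The proposed Nearest Neighbor Order Statistic method reduces to a few lines once a spatial index is available: compute the directed nearest-neighbour distances from one cloud to the other, sort them, and inspect the upper order statistics instead of only the maximum (the directed Hausdorff distance). A minimal sketch; the tail quantile is a free parameter of this illustration, not a value from the dissertation.

```python
import numpy as np
from scipy.spatial import cKDTree

def nn_order_statistics(A, B, tail=0.99):
    """Directed comparison of cloud A against cloud B (both (n, 3) arrays)."""
    d = cKDTree(B).query(A)[0]          # nearest-neighbour distance for each point of A
    hausdorff = np.sort(d)[-1]          # maximum order statistic = directed Hausdorff
    threshold = np.quantile(d, tail)    # examine the upper tail instead of the maximum
    changed = A[d > threshold]          # candidate points present in A but not in B
    return hausdorff, changed
```

Running this at several grid resolutions mirrors the multi-resolution strategy described above: coarse levels flag large missing structures, finer levels smaller ones.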
Analysis of Sources of Large Positioning Errors in Deterministic Fingerprinting
2017-01-01
Wi-Fi fingerprinting is widely used for indoor positioning and indoor navigation due to the ubiquity of wireless networks, high proliferation of Wi-Fi-enabled mobile devices, and its reasonable positioning accuracy. The assumption is that the position can be estimated based on the received signal strength intensity from multiple wireless access points at a given point. The positioning accuracy, within a few meters, enables the use of Wi-Fi fingerprinting in many different applications. However, it has been detected that the positioning error might be very large in a few cases, which might prevent its use in applications with high accuracy positioning requirements. Hybrid methods are the new trend in indoor positioning since they benefit from multiple diverse technologies (Wi-Fi, Bluetooth, and Inertial Sensors, among many others) and, therefore, they can provide a more robust positioning accuracy. In order to have an optimal combination of technologies, it is crucial to identify when large errors occur and prevent the use of extremely bad positioning estimations in hybrid algorithms. This paper investigates why large positioning errors occur in Wi-Fi fingerprinting and how to detect them by using the received signal strength intensities. PMID:29186921
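To see where such errors enter, it helps to recall how simple the core estimator usually is: a k-nearest-neighbour match in received-signal-strength space, with the position reported as the centroid of the k closest radio-map fingerprints. A minimal sketch with made-up radio-map values, not data from the paper.

```python
import numpy as np

def knn_position(radio_map, positions, rss, k=3):
    """radio_map: (n_refs, n_aps) RSS in dBm; positions: (n_refs, 2); rss: (n_aps,)."""
    d = np.linalg.norm(radio_map - rss, axis=1)   # distance in signal space
    nearest = np.argsort(d)[:k]
    return positions[nearest].mean(axis=0)        # centroid of the k best fingerprints

radio_map = np.array([[-40., -70., -60.], [-45., -65., -70.], [-70., -40., -55.]])
positions = np.array([[0.0, 0.0], [2.0, 0.0], [5.0, 4.0]])
print(knn_position(radio_map, positions, rss=np.array([-42., -68., -62.])))
```

Large positioning errors arise exactly when neighbours in signal space are not neighbours in physical space, e.g., under multipath, device heterogeneity, or sparse access-point visibility, which is the failure mode the paper sets out to detect.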
NASA Astrophysics Data System (ADS)
Klopfer, Eric; Yoon, Susan; Perry, Judy
2005-09-01
This paper reports on teachers' perceptions of the educational affordances of a handheld application called Participatory Simulations. It presents evidence from five cases representing each of the populations who work with these computational tools. Evidence across multiple data sources yields similar results to previous research evaluations of handheld activities with respect to enhancing motivation, engagement and self-directed learning. Three additional themes are discussed that provide insight into the curricular applicability of Participatory Simulations and suggest a new take on ubiquitous and accessible mobile computing. These themes generally point to the multiple layers of social and cognitive flexibility intrinsic to their design: ease of adaptation to subject-matter content knowledge and curricular integration; facility in attending to teacher-individualized goals; and encouraging the adoption of learner-centered strategies.
Optimal simultaneous superpositioning of multiple structures with missing data
Theobald, Douglas L.; Steindel, Phillip A.
2012-01-01
Motivation: Superpositioning is an essential technique in structural biology that facilitates the comparison and analysis of conformational differences among topologically similar structures. Performing a superposition requires a one-to-one correspondence, or alignment, of the point sets in the different structures. However, in practice, some points are usually ‘missing’ from several structures, for example, when the alignment contains gaps. Current superposition methods deal with missing data simply by superpositioning a subset of points that are shared among all the structures. This practice is inefficient, as it ignores important data, and it fails to satisfy the common least-squares criterion. In the extreme, disregarding missing positions prohibits the calculation of a superposition altogether. Results: Here, we present a general solution for determining an optimal superposition when some of the data are missing. We use the expectation–maximization algorithm, a classic statistical technique for dealing with incomplete data, to find both maximum-likelihood solutions and the optimal least-squares solution as a special case. Availability and implementation: The methods presented here are implemented in THESEUS 2.0, a program for superpositioning macromolecular structures. ANSI C source code and selected compiled binaries for various computing platforms are freely available under the GNU open source license from http://www.theseus3d.org. Contact: dtheobald@brandeis.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:22543369
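The expectation-maximization idea can be sketched compactly: treat the missing coordinates as latent variables, impute them from the current mean structure (E-step), then re-superpose all structures and update the mean (M-step). The sketch below is a simplified least-squares flavour of this scheme, not THESEUS itself (the maximum-likelihood covariance weighting is omitted); it assumes each structure has at least three observed points and every position is observed in at least one structure.

```python
import numpy as np

def kabsch(P, Q):
    """Rotation R and translation t minimising ||(P @ R.T + t) - Q|| (rows = points)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    U, _, Vt = np.linalg.svd((P - cp).T @ (Q - cq))
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cq - cp @ R.T

def em_superpose(coords, observed, n_iter=20):
    """coords: (n_struct, n_pts, 3) float; observed: boolean (n_struct, n_pts)."""
    mean = np.nanmean(np.where(observed[..., None], coords, np.nan), axis=0)
    filled = np.where(observed[..., None], coords, mean)
    for _ in range(n_iter):
        for i in range(len(filled)):
            R, t = kabsch(filled[i][observed[i]], mean[observed[i]])
            filled[i] = filled[i] @ R.T + t
            filled[i][~observed[i]] = mean[~observed[i]]   # E-step: impute missing points
        mean = filled.mean(axis=0)                          # M-step: update mean structure
    return filled, mean
```

In contrast to the shared-subset practice criticized above, every observed coordinate contributes to the fit, and the imputed positions converge together with the mean structure.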
Response of an oscillating superleak transducer to a pointlike heat source
NASA Astrophysics Data System (ADS)
Quadt, A.; Schröder, B.; Uhrmacher, M.; Weingarten, J.; Willenberg, B.; Vennekate, H.
2012-03-01
A new technique of superconducting cavity diagnostics has been introduced by D. L. Hartill at Cornell University, Ithaca, New York. It uses oscillating superleak transducers (OSTs), which detect the heat transferred from a cavity's quench point via Second Sound through the superfluid He bath needed to cool the superconducting cavity. The localization of the quench point is done by triangulation. The observed response of an OST is a nontrivial, but reproducible, pattern of oscillations. A small helium evaporation cryostat was built which allows the investigation of the response of an OST in greater detail. The distance between a pointlike electrical heater and the OST can be varied. The OST can be mounted either parallel or perpendicular to the plate that houses the heat source. If the artificial quench point releases an amount of energy comparable to that of a real quench spot on a cavity's surface, the OST signal starts with a negative pulse, which is usually strong enough to allow automatic detection. Furthermore, the reflection of the Second Sound on the wall is observed. A reflection coefficient R = 0.39 ± 0.05 of the glass wall is measured. This excludes a strong influence of multiple reflections in the complex OST response. Fourier analyses show three main frequencies, found in all OST spectra. They can be interpreted as modes of an oscillating circular membrane.
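Quench (here: heater) localization by triangulation reduces to a small nonlinear least-squares problem: each OST at position s_i records an arrival time t_i = t0 + |x - s_i|/c2, where c2 is the Second Sound velocity (of order 20 m/s in He II, temperature dependent). A sketch with a hypothetical four-sensor layout; the synthetic arrival times are consistent with a source at (0.3, 0.3, 0.3) m.

```python
import numpy as np
from scipy.optimize import least_squares

c2 = 20.0                                            # m/s, assumed Second Sound speed
sensors = np.array([[0.0, 0.0, 0.0], [0.5, 0.0, 0.0],
                    [0.0, 0.5, 0.0], [0.0, 0.0, 0.5]])
t_obs = np.array([0.0310, 0.0285, 0.0285, 0.0285])   # s, synthetic arrival times

def residuals(params):
    x, t0 = params[:3], params[3]                    # quench point and emission time
    return t0 + np.linalg.norm(sensors - x, axis=1) / c2 - t_obs

fit = least_squares(residuals, x0=[0.1, 0.1, 0.1, 0.0])
print("quench point:", fit.x[:3], "emission time:", fit.x[3])
```

The clean negative leading pulse reported above is what makes the arrival times machine-detectable in the first place; wall reflections, with R of about 0.4, are weak enough not to confuse the first-arrival picking.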
Preliminary calibration of the ACP safeguards neutron counter
NASA Astrophysics Data System (ADS)
Lee, T. H.; Kim, H. D.; Yoon, J. S.; Lee, S. Y.; Swinhoe, M.; Menlove, H. O.
2007-10-01
The Advanced Spent Fuel Conditioning Process (ACP), a kind of pyroprocess, has been developed at the Korea Atomic Energy Research Institute (KAERI). Since there are no IAEA safeguards criteria for this process, KAERI has developed a neutron coincidence counter to make it possible to perform material control and accounting (MC&A) for its ACP materials for the purpose of transparency in the peaceful uses of nuclear materials at KAERI. The test results of the ACP Safeguards Neutron Counter (ASNC) show a satisfactory performance for the Doubles count measurement with a low measurement error for its cylindrical sample cavity. The neutron detection efficiency is about 21% with an error of ±1.32% along the axial direction of the cavity. Using two 252Cf neutron sources, we obtained various parameters for the Singles and Doubles rates for the ASNC. The Singles, Doubles, and Triples rates for a 252Cf point source were obtained by using the MCNPX code, and the results for the ft8 cap multiplicity tally option with the values of ε, fd, and ft measured with a strong source most closely match the measurement results to within a 1% error. A preliminary calibration curve for the ASNC was generated by using the point model equation relationship between 244Cm and 252Cf, and the calibration coefficient for the non-multiplying sample is 2.78 × 10^5 (Doubles counts/s/g 244Cm). The preliminary calibration curves for the ACP samples were also obtained by using an MCNPX simulation. A neutron multiplication influence on an increase of the Doubles rate for a metal ingot and UO2 powder is clearly observed. These calibration curves will be modified and complemented when hot calibration samples become available. To verify the validity of this calibration curve, a measurement of spent fuel standards for a known 244Cm mass will be performed in the near future.
Shibata, Tomoyuki; Solo-Gabriele, Helena M.; Fleming, Lora E.; Elmir, Samir
2008-01-01
The microbial water quality at two beaches, Hobie Beach and Crandon Beach, in Miami-Dade County, Florida, USA was measured using multiple microbial indicators for the purpose of evaluating correlations between microbes and for identifying possible sources of contamination. The indicator microbes chosen for this study (enterococci, Escherichia coli, fecal coliform, total coliform and C. perfringens) were evaluated through three different sampling efforts. These efforts included daily measurements at four locations during a wet season month and a dry season month, spatially intensive water sampling during low- and high-tide periods, and a sand sampling effort. Results indicated that concentrations did not vary in a consistent fashion between one indicator microbe and another. Daily water quality frequently exceeded guideline levels at Hobie Beach for all indicator microbes except for fecal coliform, which never exceeded the guideline. Except for total coliform, the concentrations of microbes did not change significantly between seasons in spite of the fact that the physical–chemical parameters (rainfall, temperature, pH, and salinity) changed significantly between the two monitoring periods. Spatially intense water sampling showed that the concentrations of microbes were significantly different with distance from the shoreline. The highest concentrations were observed at shoreline points and decreased at offshore points. Furthermore, the highest indicator microbe concentrations were observed at high tide, when the wash zone area of the beach was submerged. Beach sands within the wash zone tested positive for all indicator microbes, thereby suggesting that this zone may serve as the source of indicator microbes. Ultimate sources of indicator microbes to this zone may include humans, animals, and possibly the survival and regrowth of indicator microbes due to the unique environmental conditions found within this zone. Overall, the results of this study indicated that the concentrations of indicator microbes do not necessarily correlate with one another. Exceedance of water quality guidelines, and thus the frequency of beach advisories, depends upon which indicator microbe is chosen. PMID:15261551
Cope, J.R.; Prosser, A.; Nowicki, S.; Roberts, M.W.; Scheer, D.; Anderson, C.; Longsworth, A.; Parsons, C.; Goldschmidt, D.; Johnston, S.; Bishop, H.; Xiao, L.; Hill, V.; Beach, M.; Hlavsa, M.C.
2015-01-01
The incidence of recreational water-associated outbreaks in the United States has significantly increased, driven, at least in part, by outbreaks both caused by Cryptosporidium and associated with treated recreational water venues. Because of the parasite's extreme chlorine tolerance, transmission can occur even in well-maintained treated recreational water venues (e.g., pools), and a focal cryptosporidiosis outbreak can evolve into a community-wide outbreak associated with multiple recreational water venues and settings (e.g., child care facilities). In August 2004 in Auglaize County, Ohio, multiple cryptosporidiosis cases were identified and anecdotally linked to Pool A. Within 5 days of the first case being reported, Pool A was hyperchlorinated to achieve 99.9% Cryptosporidium inactivation. A case-control study was launched to epidemiologically ascertain the outbreak source 11 days later. A total of 150 confirmed and probable cases were identified; the temporal distribution of illness onset was peaked, indicating a point-source exposure. Cryptosporidiosis was significantly associated with swimming in Pool A (matched odds ratio 121.7, 95% confidence interval 27.4–∞) but not with another venue or setting. The findings of this investigation suggest that proactive implementation of control measures, when increased Cryptosporidium transmission is detected but before an outbreak source is epidemiologically ascertained, might prevent a focal cryptosporidiosis outbreak from evolving into a community-wide outbreak. PMID:25907106
Jia, Mengyu; Chen, Xueying; Zhao, Huijuan; Cui, Shanshan; Liu, Ming; Liu, Lingling; Gao, Feng
2015-01-26
Most analytical methods for describing light propagation in turbid media exhibit low effectiveness in the near field of a collimated source. Motivated by the Charge Simulation Method in electromagnetic theory as well as established discrete-source-based modeling, we herein report an improved explicit model for a semi-infinite geometry, referred to as the "Virtual Source" (VS) diffuse approximation (DA), suited to low-albedo media and short source-detector separations. In this model, the collimated light in the standard DA is analogously approximated as multiple isotropic point sources (VSs) distributed along the incident direction. For performance enhancement, a fitting procedure between the calculated and realistic reflectances is adopted in the near field to optimize the VS parameters (intensities and locations). To be practically applicable, an explicit 2VS-DA model is established based on closed-form derivations of the VS parameters for typical ranges of the optical parameters. This parameterized scheme is shown to inherit the mathematical simplicity of the DA while considerably extending its validity in modeling near-field photon migration in low-albedo media. The superiority of the proposed VS-DA method to the established ones is demonstrated in comparison with Monte-Carlo simulations over wide ranges of the source-detector separation and the medium optical properties.
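The standard-DA building block that the virtual sources reuse can be written down directly: each isotropic point source at depth z0 below the surface contributes the usual extrapolated-boundary dipole reflectance, and the VS model sums several such contributions along the incident beam. In the sketch below the dipole expression follows the common semi-infinite DA form; the depths and weights are arbitrary placeholders, not the paper's fitted 2VS parameters.

```python
import numpy as np

def dipole_reflectance(rho, z0, mua, musp):
    """Semi-infinite DA reflectance at radius rho for an isotropic source at depth z0."""
    D = 1.0 / (3.0 * (mua + musp))
    mueff = np.sqrt(mua / D)
    zb = 2.0 * D                              # extrapolated boundary (matched index, A = 1)
    term = lambda z, r: z * (mueff + 1.0 / r) * np.exp(-mueff * r) / r**2
    r1, r2 = np.hypot(rho, z0), np.hypot(rho, z0 + 2.0 * zb)
    return (term(z0, r1) + term(z0 + 2.0 * zb, r2)) / (4.0 * np.pi)

def vs_reflectance(rho, mua, musp, depths, weights):
    """Virtual-source style sum of point-source dipoles along the incident direction."""
    return sum(w * dipole_reflectance(rho, z, mua, musp) for z, w in zip(depths, weights))

rho = np.linspace(0.05, 1.0, 20)              # source-detector separations, cm
print(vs_reflectance(rho, mua=0.5, musp=5.0,  # low-albedo example values, 1/cm
                     depths=[0.1, 0.3], weights=[0.7, 0.3]))
```

Fitting the depths and weights against reference (e.g., Monte Carlo) reflectance in the near field is precisely the optimization step the abstract describes.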
A multiple pointing-mount control strategy for space platforms
NASA Technical Reports Server (NTRS)
Johnson, C. D.
1992-01-01
A new disturbance-adaptive control strategy for multiple pointing-mount space platforms is proposed and illustrated by consideration of a simplified 3-link dynamic model of a multiple pointing-mount space platform. Simulation results demonstrate the effectiveness of the new platform control strategy. The simulation results also reveal a system 'destabilization phenomenon' that can occur if the set of individual platform-mounted experiment controllers are 'too responsive.'
Vana, Kimberly D; Silva, Graciela E; Muzyka, Diann; Hirani, Lorraine M
2011-06-01
It has been proposed that students' use of an audience response system, commonly called clickers, may promote comprehension and retention of didactic material. Whether this method actually improves students' grades, however, has yet to be determined. The purpose of this study was to evaluate whether a lecture format utilizing multiple-choice PowerPoint slides and an audience response system was more effective than a lecture format using only multiple-choice PowerPoint slides for the comprehension and retention of pharmacological knowledge in baccalaureate nursing students. The study also assessed whether the additional use of clickers positively affected students' satisfaction with their learning. Results from 78 students who attended lecture classes with multiple-choice PowerPoint slides plus clickers were compared with those of 55 students who utilized multiple-choice PowerPoint slides only. Test scores between these two groups were not significantly different. A satisfaction questionnaire showed that 72.2% of the control students did not desire the opportunity to use clickers. Of the group utilizing the clickers, 92.3% recommended the use of this system in future courses. The use of multiple-choice PowerPoint slides and an audience response system did not seem to improve the students' comprehension or retention of pharmacological knowledge as compared with the use of multiple-choice PowerPoint slides alone.
NASA Astrophysics Data System (ADS)
Li, Jia; Shen, Hua; Zhu, Rihong; Gao, Jinming; Sun, Yue; Wang, Jinsong; Li, Bo
2018-06-01
The precision of measurements of aspheric and freeform surfaces remains the primary factor restricting their manufacture and application. One effective means of measuring such surfaces involves using reference or probe beams with angle modulation, as in the tilted-wave interferometer (TWI). It is necessary to improve measurement efficiency by obtaining the optimum point source array for different test pieces before TWI measurements. For the purpose of forming a point source array based on the gradients of different surfaces under test, we established a mathematical model describing the relationship between the point source array and the test surface. However, the optimal point sources are irregularly distributed. In order to achieve a flexible point source array according to the gradient of the test surface, a novel interference setup using a fiber array is proposed in which every point source can be independently switched on and off. Simulations and actual measurement examples of two different surfaces are given in this paper to verify the mathematical model. Finally, we performed an experiment testing an off-axis ellipsoidal surface that proved the validity of the proposed interference system.
Jiang, Jheng Jie; Lee, Chon Lin; Fang, Meng Der; Boyd, Kenneth G.; Gibb, Stuart W.
2015-01-01
This paper presents a methodology based on multivariate data analysis for characterizing potential source contributions of emerging contaminants (ECs) detected in 26 river water samples across multi-scape regions during dry and wet seasons. Based on this methodology, we unveil an approach toward potential source contributions of ECs, a concept we refer to as the “Pharmaco-signature.” Exploratory analysis of data points has been carried out by unsupervised pattern recognition (hierarchical cluster analysis, HCA) and receptor model (principal component analysis-multiple linear regression, PCA-MLR) in an attempt to demonstrate significant source contributions of ECs in different land-use zone. Robust cluster solutions grouped the database according to different EC profiles. PCA-MLR identified that 58.9% of the mean summed ECs were contributed by domestic impact, 9.7% by antibiotics application, and 31.4% by drug abuse. Diclofenac, ibuprofen, codeine, ampicillin, tetracycline, and erythromycin-H2O have significant pollution risk quotients (RQ>1), indicating potentially high risk to aquatic organisms in Taiwan. PMID:25874375
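The PCA-MLR (often called APCS-MLR) apportionment step admits a compact sketch: regress the summed concentration on absolute principal component scores, then read each factor's mean contribution off the regression. Below is a toy NumPy/scikit-learn version with synthetic data standing in for the EC measurements; conventions for converting scores to contributions vary slightly between studies.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
X = rng.lognormal(size=(26, 10))                 # 26 samples x 10 contaminants (synthetic)
mu, sd = X.mean(axis=0), X.std(axis=0)
Z = (X - mu) / sd                                # standardise species

pca = PCA(n_components=3).fit(Z)
scores = pca.transform(Z)
z0 = pca.transform(((np.zeros_like(mu) - mu) / sd)[None, :])
apcs = scores - z0                               # absolute principal component scores

y = X.sum(axis=1)                                # summed concentration per sample
reg = LinearRegression().fit(apcs, y)
mean_contrib = reg.coef_ * apcs.mean(axis=0)     # mean mass attributed to each factor
print(100 * mean_contrib / y.mean())             # percentage contributions per factor
```

The factor-to-source interpretation (domestic wastewater, antibiotics application, drug abuse) then rests on which species load on each component, exactly as in the cluster and loading analysis described above.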
NASA Astrophysics Data System (ADS)
Goix, Sylvaine; Resongles, Eléonore; Point, David; Oliva, Priscia; Duprey, Jean Louis; de la Galvez, Erika; Ugarte, Lincy; Huayta, Carlos; Prunier, Jonathan; Zouiten, Cyril; Gardon, Jacques
2013-12-01
Monitoring atmospheric trace element (TE) levels and tracing their source origins is essential for exposure assessment and human health studies. Epiphytic Tillandsia capillaris plants were used as bioaccumulators of TE in a complex polymetallic mining/smelting urban context (Oruro, Bolivia). Specimens collected from a pristine reference site were transplanted at high spatial resolution (~1 sample/km2) throughout the urban area. Twenty-seven elements were measured after a 4-month exposure, also providing new information values for the reference material BCR482. Statistical power analysis for this biomonitoring mapping approach against classical aerosol surveys performed on the same site showed the better aptitude of T. capillaris to detect geographical trends and to deconvolute multiple contamination sources using geostatistical principal component analysis. Transplanted specimens in the vicinity of the mining and smelting areas were characterized by extreme TE accumulation (Sn > Ag > Sb > Pb > Cd > As > W > Cu > Zn). Three contamination sources were identified: mining (Ag, Pb, Sb), smelting (As, Sn) and road traffic (Zn) emissions, confirming the results of a previous aerosol survey.
ERIC Educational Resources Information Center
Shih, Ching-Hsiang; Cheng, Hsiao-Fen; Li, Chia-Chun; Shih, Ching-Tien; Chiang, Ming-Shan
2010-01-01
This study evaluated whether four persons (two groups) with developmental disabilities would be able to improve their collaborative pointing performance through a Multiple Cursor Automatic Pointing Assistive Program (MCAPAP) with a newly developed mouse driver (i.e., a new mouse driver replaces the standard mouse driver, and is able to…
Development of a High-Average-Power Compton Gamma Source for Lepton Colliders
NASA Astrophysics Data System (ADS)
Pogorelsky, Igor; Polyanskiy, Mikhail N.; Yakimenko, Vitaliy; Platonenko, Viktor T.
2009-01-01
Gamma- (γ-) ray beams of high average power and peak brightness are in demand for a number of applications in high-energy physics, material processing, medicine, etc. One such example is gamma conversion into polarized positrons and muons, which is under consideration for projected lepton colliders. A γ-source based on Compton backscattering from a relativistic electron beam is a promising candidate for this application. Our approach to the high-repetition γ-source places the Compton interaction point inside a CO2 laser cavity. A laser pulse interacts with periodic electron bunches on each round-trip inside the laser cavity, producing a corresponding train of γ-pulses. The round-trip optical losses can be compensated by amplification in the active laser medium. The major challenge for this approach is maintaining a stable amplification rate for a picosecond CO2-laser pulse during multiple resonator round-trips without significant deterioration of its temporal and transverse profiles. Addressing this task, we developed a computer code that allows identifying the directions and priorities in the development of such a multi-pass picosecond CO2 laser. Proof-of-principle experiments help to verify the model and show the viability of the concept. In these tests we demonstrated extended trains of picosecond CO2 laser pulses circulating inside the cavity that incorporates the Compton interaction point.
Magnetic Topology of Coronal Hole Linkages
NASA Technical Reports Server (NTRS)
Titov, V. S.; Mikic, Z.; Linker, J. A.; Lionello, R.; Antiochos, S. K.
2010-01-01
In recent work, Antiochos and coworkers argued that the boundary between the open and closed field regions on the Sun can be extremely complex, with narrow corridors of open flux connecting seemingly disconnected coronal holes from the main polar holes, and that these corridors may be the sources of the slow solar wind. We examine, in detail, the topology of such magnetic configurations using an analytical source surface model that allows for analysis of the field with arbitrary resolution. Our analysis reveals three important new results: First, a coronal hole boundary can join stably to the separatrix boundary of a parasitic polarity region. Second, a single parasitic polarity region can produce multiple null points in the corona and, more important, separator lines connecting these points. Such topologies are extremely favorable for magnetic reconnection, because it can now occur over the entire length of the separators rather than being confined to a small region around the nulls. Finally, the coronal holes are not connected by an open-field corridor of finite width, but instead are linked by a singular line that coincides with the separatrix footprint of the parasitic polarity. We investigate how the topological features described above evolve in response to motion of the parasitic polarity region. The implications of our results for the sources of the slow solar wind and for coronal and heliospheric observations are discussed.
Knox, N C; Weedmark, K A; Conly, J; Ensminger, A W; Hosein, F S; Drews, S J
2017-01-01
An outbreak of Legionnaires' disease occurred in an inner city district in Calgary, Canada. This outbreak spanned a 3-week period in November-December 2012, and a total of eight cases were identified. Four of these cases were critically ill requiring intensive care admission but there was no associated mortality. All cases tested positive for Legionella pneumophila serogroup 1 (LP1) by urinary antigen testing. Five of the eight patients were culture positive for LP1 from respiratory specimens. These isolates were further identified as Knoxville monoclonal subtype and sequence subtype ST222. Whole-genome sequencing revealed that the isolates differed by no more than a single vertically acquired single nucleotide variant, supporting a single point-source outbreak. Hypothesis-based environmental investigation and sampling was conducted; however, a definitive source was not identified. Geomapping of case movements within the affected urban sector revealed a 1·0 km common area of potential exposure, which coincided with multiple active construction sites that used water spray to minimize transient dust. This community point-source Legionnaires' disease outbreak is unique due to its ST222 subtype and occurrence in a relatively dry and cold weather setting in Western Canada. This report suggests community outbreaks of Legionella should not be overlooked as a possibility during late autumn and winter months in the Northern Hemisphere.
Changing Regulations of COD Pollution Load of Weihe River Watershed above TongGuan Section, China
NASA Astrophysics Data System (ADS)
Zhu, Lei; Liu, WanQing
2018-02-01
TongGuan Section of the Weihe River Watershed is a provincial section between Shaanxi Province and Henan Province, China. The Weihe River Watershed above TongGuan Section is taken as the research objective in this paper, and COD is chosen as the water quality parameter. According to the discharge characteristics of point source and non-point source pollution, a method for estimating pollution loads, the characteristic section load (CSLD) method, is suggested, and the point and non-point source pollution loads of the Weihe River Watershed above TongGuan Section are calculated for the rainy, normal and dry seasons of 2013. The results show that the monthly point source pollution loads discharge stably, while the monthly non-point source pollution loads change greatly; the non-point source share of the total COD pollution load decreases in the rainy, wet and normal periods in turn.
GARLIC, A SHIELDING PROGRAM FOR GAMMA RADIATION FROM LINE- AND CYLINDER-SOURCES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roos, M.
1959-06-01
GARLIC is a program for computing the gamma-ray flux or dose rate at a shielded isotropic point detector, due to a line source or the line equivalent of a cylindrical source. The source strength distribution along the line must be either uniform or an arbitrary part of the positive half-cycle of a cosine function. The line source can be oriented arbitrarily with respect to the main shield and the detector, except that the detector must not be located on the line source or on its extension. The main shield is a homogeneous plane slab in which scattered radiation is accounted for by multiplying each point element of the line source by a point-source build-up factor inside the integral over the point elements. Between the main shield and the line source, additional shields can be introduced, which are either plane slabs parallel to the main shield, or cylindrical rings coaxial with the line source. Scattered radiation in the additional shields can only be accounted for by constant build-up factors outside the integral. GARLIC-xyz is an extended version particularly suited for the frequently met problem of shielding a room containing a large number of line sources in different positions. The program computes the angles and linear dimensions of a problem for GARLIC when the positions of the detector point and the end points of the line source are given as points in an arbitrary rectangular coordinate system. As an example, the isodose curves in water are presented for a monoenergetic cosine-distributed line source at several source energies and for an operating fuel element of the Swedish reactor R3.
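What GARLIC evaluates analytically can be mimicked numerically: discretize the line source, attenuate each element along its slant path through the slab, apply a point-source build-up factor inside the sum, and accumulate inverse-square kernels. The geometry, attenuation coefficient, and linear build-up constant below are illustrative assumptions only, not values from the program.

```python
import numpy as np

mu = 0.06        # 1/cm, slab attenuation coefficient (assumed)
T = 10.0         # cm, slab thickness
k_buildup = 1.0  # crude linear build-up model B(x) = 1 + k*x (assumed)

def dose_rate(detector, p0, p1, S_per_cm, n=2000):
    """Uniform line source p0 -> p1 behind a plane slab whose normal is the z axis."""
    ts = np.linspace(0.0, 1.0, n)
    pts = p0 + ts[:, None] * (p1 - p0)            # line elements
    dl = np.linalg.norm(p1 - p0) / n
    vecs = detector - pts
    r = np.linalg.norm(vecs, axis=1)
    cos_th = np.abs(vecs[:, 2]) / r               # obliquity w.r.t. the slab normal
    x = mu * T / cos_th                           # slant attenuation path, in mean free paths
    B = 1.0 + k_buildup * x                       # build-up factor inside the integral
    return np.sum(S_per_cm * dl * B * np.exp(-x) / (4.0 * np.pi * r**2))

print(dose_rate(np.array([0.0, 0.0, 50.0]),       # detector 50 cm behind the slab
                np.array([-25.0, 0.0, 0.0]), np.array([25.0, 0.0, 0.0]),
                S_per_cm=1.0))
```

Keeping the build-up factor inside the integral, as GARLIC does for the main shield, matters because the slant path, and hence the build-up, differs for every line element.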
ERIC Educational Resources Information Center
Shih, Ching-Hsiang; Shih, Ching-Tien; Wu, Hsiao-Ling
2010-01-01
The latest research adopted software technology to redesign the mouse driver, turning a mouse into a useful pointing assistive device for people with multiple disabilities who cannot easily, or cannot at all, use a standard mouse, to improve their pointing performance through a new operation method, the Extended Dynamic Pointing Assistive Program (EDPAP),…
Understanding and Using the Fermi Science Tools
NASA Astrophysics Data System (ADS)
Asercion, Joseph
2018-01-01
The Fermi Science Support Center (FSSC) provides information, documentation, and tools for the analysis of Fermi science data, including both the Large-Area Telescope (LAT) and the Gamma-ray Burst Monitor (GBM). Source and binary versions of the Fermi Science Tools can be downloaded from the FSSC website, and are supported on multiple platforms. An overview document, the Cicerone, provides details of the Fermi mission, the science instruments and their response functions, the science data preparation and analysis process, and interpretation of the results. Analysis Threads and a reference manual available on the FSSC website provide the user with step-by-step instructions for many different types of data analysis: point-source analysis (generating maps, spectra, and light curves); pulsar timing analysis; source identification; and the use of Python for scripting customized analysis chains. We present an overview of the structure of the Fermi science tools and documentation, and how to acquire them. We also provide examples of standard analyses, including tips and tricks for improving Fermi science analysis.
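As one illustration of such a scripted chain, the sketch below uses the Fermitools' gt_apps Python wrapper in the style of the FSSC Analysis Threads; the file names, coordinates, and cut values are placeholders, the parameter lists are abbreviated, and the tool/parameter names follow the public documentation as best recalled, so treat this as a sketch rather than a complete recipe:

```python
# Sketch of a scripted selection/binning chain using the Fermitools'
# gt_apps Python wrapper, in the style of the FSSC Analysis Threads.
# File names and cuts are placeholders; parameter lists are abbreviated.
import gt_apps

select = gt_apps.filter                  # wraps the gtselect tool
select['infile'] = 'events.fits'         # placeholder LAT event file
select['outfile'] = 'events_cut.fits'
select['ra'], select['dec'], select['rad'] = 193.98, -5.82, 15.0
select['emin'], select['emax'] = 100, 300000      # MeV
select['zmax'] = 90
select.run()

binner = gt_apps.evtbin                  # wraps the gtbin tool
binner['evfile'] = 'events_cut.fits'
binner['outfile'] = 'cmap.fits'
binner['algorithm'] = 'CMAP'             # counts map
binner['nxpix'], binner['nypix'], binner['binsz'] = 300, 300, 0.1
binner['coordsys'] = 'CEL'
binner['xref'], binner['yref'], binner['proj'] = 193.98, -5.82, 'AIT'
binner.run()
```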
Parse, simulation, and prediction of NOx emission across the Midwestern United States
NASA Astrophysics Data System (ADS)
Fang, H.; Michalski, G. M.; Spak, S.
2017-12-01
Accurately constraining N emissions in space and time has been a challenge for atmospheric scientists. It has been suggested that 15N isotopes may be a way of tracking N emission sources across various spatial and temporal scales. However, the complexity of multiple N sources that can quickly change in intensity has made this a difficult problem. We have used a SMOKE emission model to parse NOx emissions across the Midwestern United States for a one-year simulation. An isotope mass balance method was used to assign 15N values to road, non-road, point, and area sources. The SMOKE emissions and isotope mass balance were then combined to predict the 15N of NOx emissions (Figure 1). This 15N of NOx emissions model was then incorporated into CMAQ to assess how transport and chemistry would impact the 15N value of NOx through mixing and removal processes. The predicted 15N value of NOx was compared to recent measurements of NOx and atmospheric nitrate.
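The isotope mass balance step amounts to a flux-weighted average over sectors; a minimal sketch, with placeholder emission rates and sectoral 15N values rather than the study's numbers:

```python
# Hedged sketch of the flux-weighted isotope mass balance used to assign a
# bulk d15N to total NOx emissions: d15N_total = sum(f_i * d15N_i), where
# f_i is each sector's share of emissions. Sector values are illustrative
# placeholders, not the paper's numbers.

sectors = {
    #            emission (arb. units), d15N (per mil)
    'road':     (100.0,  -4.0),
    'non-road': ( 40.0, -10.0),
    'point':    ( 60.0, +12.0),
    'area':     ( 20.0,  -2.0),
}

total_emission = sum(e for e, _ in sectors.values())
d15n_total = sum(e / total_emission * d for e, d in sectors.values())
print(f"flux-weighted d15N of NOx: {d15n_total:+.2f} per mil")
```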
Working Memory Capacity as a Dynamic Process
Simmering, Vanessa R.; Perone, Sammy
2013-01-01
A well-known characteristic of working memory (WM) is its limited capacity. The source of such limitations, however, is a continued point of debate. Developmental research is positioned to address this debate by jointly identifying the source(s) of limitations and the mechanism(s) underlying capacity increases. Here we provide a cross-domain survey of studies and theories of WM capacity development, which reveals a complex picture: dozens of studies from 50 papers show nearly universal increases in capacity estimates with age, but marked variation across studies, tasks, and domains. We argue that the full pattern of performance cannot be captured through traditional approaches emphasizing single causes, or even multiple separable causes, underlying capacity development. Rather, we consider WM capacity as a dynamic process that emerges from a unified cognitive system flexibly adapting to the context and demands of each task. We conclude by enumerating specific challenges for researchers and theorists that will need to be met in order to move our understanding forward. PMID:23335902
Benschop, Jackie; Biggs, Patrick J.; Marshall, Jonathan C.; Hayman, David T.S.; Carter, Philip E.; Midwinter, Anne C.; Mather, Alison E.; French, Nigel P.
2017-01-01
During 1998–2012, an extended outbreak of Salmonella enterica serovar Typhimurium definitive type 160 (DT160) affected >3,000 humans and killed wild birds in New Zealand. However, the relationship between DT160 within these 2 host groups and the origin of the outbreak are unknown. Whole-genome sequencing was used to compare 109 Salmonella Typhimurium DT160 isolates from sources throughout New Zealand. We provide evidence that DT160 was introduced into New Zealand around 1997 and rapidly propagated throughout the country, becoming more genetically diverse over time. The genetic heterogeneity was evenly distributed across multiple predicted functional protein groups, and we found no evidence of host group differentiation between isolates collected from human, poultry, bovid, and wild bird sources, indicating ongoing transmission between these host groups. Our findings demonstrate how a comparative genomic approach can be used to gain insight into outbreaks, disease transmission, and the evolution of a multihost pathogen after a probable point-source introduction. PMID:28516864
PSD Applicability Determination for Multiple Owner/Operator Point Sources Within a Single Facility
This document may be of assistance in applying the Title V air operating permit regulations. This document is part of the Title V Policy and Guidance Database available at www2.epa.gov/title-v-operating-permits/title-v-operating-permit-policy-and-guidance-document-index. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.
Imaging objects behind small obstacles using axicon lens
NASA Astrophysics Data System (ADS)
Perinchery, Sandeep M.; Shinde, Anant; Murukeshan, V. M.
2017-06-01
Axicon lenses are conical prisms, which are known to focus a light source to a line comprising multiple points along the optical axis. In this study, we analyze the potential of axicon lenses to view, image, and record objects behind opaque obstacles in free space. The advantage of an axicon lens over a regular lens is demonstrated experimentally. Parameters such as obstacle size and the positions of the object and the obstacle in the context of imaging behind obstacles are tested using Zemax optical simulation. This proposed concept can be easily adapted to most optical imaging methods and microscopy modalities.
NASA Astrophysics Data System (ADS)
Rogulina, L. I.; Moiseenko, V. G.; Odarichenko, E. G.; Voropayeva, E. N.
2018-03-01
The S isotopic composition in the ore-forming minerals galena and sphalerite was studied in different Ag-Pb-Zn deposits of the region. It was pointed out that the δ34S modal values range from -1.2 to +6.7‰ in the minerals, with a positive value for the skarn mineralization. In the flyschoid formation, the vein-type mineralization is characterized by negative and positive values. The narrow range of δ34S values indicates the marginal-continental type of the mineralization and the multiple origins of its sources.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, C.W.; Sjoreen, A.L.; Begovich, C.L.
This code estimates concentrations in air and ground deposition rates for Atmospheric Nuclides Emitted from Multiple Operating Sources. ANEMOS is one component of an integrated Computerized Radiological Risk Investigation System (CRRIS) developed for the US Environmental Protection Agency (EPA) for use in performing radiological assessments and in developing radiation standards. The concentrations and deposition rates calculated by ANEMOS are used in subsequent portions of the CRRIS for estimating doses and risks to man. The calculations made in ANEMOS are based on the use of a straight-line Gaussian plume atmospheric dispersion model with both dry and wet deposition parameter options. The code will accommodate a ground-level or elevated point and area source or windblown source. Adjustments may be made during the calculations for surface roughness, building wake effects, terrain height, wind speed at the height of release, the variation in plume rise as a function of downwind distance, and the in-growth and decay of daughter products in the plume as it travels downwind. ANEMOS can also accommodate multiple particle sizes and clearance classes, and it may be used to calculate the dose from a finite plume of gamma-ray-emitting radionuclides passing overhead. The output of this code is presented for 16 sectors of a circular grid. ANEMOS can calculate both the sector-average concentrations and deposition rates at a given set of downwind distances in each sector and the average of these quantities over an area within each sector bounded by two successive downwind distances. ANEMOS is designed to be used primarily for continuous, long-term radionuclide releases. This report describes the models used in the code, their computer implementation, the uncertainty associated with their use, and the use of ANEMOS in conjunction with other codes in the CRRIS. A listing of the code is included in Appendix C.
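The core of such a code is the straight-line Gaussian plume kernel. A minimal sketch for a single elevated point source with ground reflection follows; the sigma parameterization (a Briggs-style rural class-D fit) and all numbers are illustrative, and deposition, decay, and plume rise are omitted:

```python
# Minimal sketch of the straight-line Gaussian plume at the core of a code
# like ANEMOS, for one elevated point source with total reflection at the
# ground. No deposition, decay, or plume rise; sigma_y/sigma_z use an
# illustrative Briggs-style rural class-D fit.
import math

def plume_concentration(q, u, h, x, y, z):
    """Ground-reflected Gaussian plume, chi(x, y, z) in source-units/m^3.

    q: emission rate (e.g. Bq/s), u: wind speed (m/s), h: release height (m),
    x: downwind, y: crosswind, z: receptor height (all in m).
    """
    sigma_y = 0.08 * x / math.sqrt(1 + 0.0001 * x)
    sigma_z = 0.06 * x / math.sqrt(1 + 0.0015 * x)
    lateral = math.exp(-y**2 / (2 * sigma_y**2))
    vertical = (math.exp(-(z - h)**2 / (2 * sigma_z**2)) +
                math.exp(-(z + h)**2 / (2 * sigma_z**2)))  # ground reflection
    return q / (2 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

# Near-ground air concentration 1 km downwind of a 50 m stack.
print(plume_concentration(q=1.0e9, u=4.0, h=50.0, x=1000.0, y=0.0, z=1.5))
```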
Effect of distance-related heterogeneity on population size estimates from point counts
Efford, Murray G.; Dawson, Deanna K.
2009-01-01
Point counts are used widely to index bird populations. Variation in the proportion of birds counted is a known source of error, and for robust inference it has been advocated that counts be converted to estimates of absolute population size. We used simulation to assess nine methods for the conduct and analysis of point counts when the data included distance-related heterogeneity of individual detection probability. Distance from the observer is a ubiquitous source of heterogeneity, because nearby birds are more easily detected than distant ones. Several recent methods (dependent double-observer, time of first detection, time of detection, independent multiple-observer, and repeated counts) do not account for distance-related heterogeneity, at least in their simpler forms. We assessed bias in estimates of population size by simulating counts with fixed radius w over four time intervals (occasions). Detection probability per occasion was modeled as a half-normal function of distance with scale parameter sigma and intercept g(0) = 1.0. Bias varied with sigma/w; for values of sigma inferred from published studies, bias often reached 50% for a 100-m fixed-radius count. More critically, the bias of adjusted counts sometimes varied more than that of unadjusted counts, and inference from adjusted counts would be less robust. The problem was not solved by using mixture models or including distance as a covariate. Conventional distance sampling performed well in simulations, but its assumptions are difficult to meet in the field. We conclude that no existing method allows effective estimation of population size from point counts.
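A stripped-down version of such a simulation (not one of the paper's nine methods, just the bias demonstration, with illustrative values of density, sigma, and w) shows how raw counts understate true population size under half-normal detection:

```python
# Hedged simulation in the spirit of the paper: birds are placed uniformly
# in a disc of radius w, detected per occasion with half-normal probability
# g(r) = exp(-r^2 / (2 sigma^2)) with g(0) = 1, and the raw count is
# compared with the true number. All parameter values are illustrative.
import math
import random

def simulate_count(n_birds=100, w=100.0, sigma=50.0, occasions=4, seed=1):
    rng = random.Random(seed)
    detected = 0
    for _ in range(n_birds):
        r = w * math.sqrt(rng.random())           # uniform in the disc
        p_occ = math.exp(-r**2 / (2 * sigma**2))  # half-normal detection
        p_any = 1 - (1 - p_occ)**occasions        # detected on any occasion
        detected += rng.random() < p_any
    return detected

counts = [simulate_count(seed=s) for s in range(200)]
mean_count = sum(counts) / len(counts)
print(f"mean count {mean_count:.1f} of 100 -> raw-count bias "
      f"{mean_count / 100 - 1:+.0%} at sigma/w = 0.5")
```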
Nie, Xiaobing; Zheng, Wei Xing; Cao, Jinde
2016-12-01
In this paper, the coexistence and dynamical behaviors of multiple equilibrium points are discussed for a class of memristive neural networks (MNNs) with unbounded time-varying delays and nonmonotonic piecewise linear activation functions. By means of the fixed point theorem, nonsmooth analysis theory and rigorous mathematical analysis, it is proven that under some conditions, such n-neuron MNNs can have 5^n equilibrium points located in ℝ^n, and 3^n of them are locally μ-stable. As a direct application, some criteria are also obtained on the multiple exponential stability, multiple power stability, multiple log-stability and multiple log-log-stability. All these results reveal that the addressed neural networks with activation functions introduced in this paper can generate greater storage capacity than the ones with Mexican-hat-type activation function. Numerical simulations are presented to substantiate the theoretical results.
Halfon, Philippe; Ouzan, Denis; Khiri, Hacène; Pénaranda, Guillaume; Castellani, Paul; Oulès, Valerie; Kahloun, Asma; Amrani, Nolwenn; Fanteria, Lise; Martineau, Agnès; Naldi, Lou; Bourlière, Marc
2012-01-01
Background & Aims Point mutations in the coding region of the interleukin 28 gene (rs12979860) have recently been identified as predicting the outcome of treatment of hepatitis C virus infection. This polymorphism detection was based on whole blood DNA extraction. Alternatively, DNA for genetic diagnosis has been derived from buccal epithelial cells (BEC), dried blood spots (DBS), and genomic DNA from serum. The aim of the study was to investigate the reliability and accuracy of alternative routes of testing for single nucleotide polymorphism allele rs12979860CC. Methods Blood, plasma, and sera samples from 200 patients were extracted (400 µL). Buccal smears were tested using an FTA card. To simulate postal delay, we tested the influence of storage at ambient temperature on the different sources of DNA at five time points (baseline, 48 h, 6 days, 9 days, and 12 days). Results There was 100% concordance between blood, plasma, sera, and BEC, validating the use of DNA extracted from BEC collected on cytology brushes for genetic testing. Genetic variations in the HPRT1 gene were detected using the smear technique in blood smears (3620 copies) as well as in buccal smears (5870 copies). These results are similar to those for whole blood diluted at 1/10. A minimum of 0.04 µL, 4 µL, and 40 µL was necessary to obtain exploitable results for whole blood, sera, and plasma, respectively. No significant variation between time points was observed for the different sources of DNA. IL28B SNP analysis at these different time points showed the same results using the four sources of DNA. Conclusion We demonstrated that genomic DNA extraction from buccal cells, small amounts of serum, and dried blood spots is an alternative to DNA extracted from peripheral blood cells and is helpful in retrospective and prospective studies of multiple genetic markers, specifically in hard-to-reach individuals. PMID:22412970
Information Foraging for Perceptual Decisions
2016-01-01
We tested an information foraging framework to characterize the mechanisms that drive active (visual) sampling behavior in decision problems that involve multiple sources of information. Experiments 1 through 3 involved participants making an absolute judgment about the direction of motion of a single random dot motion pattern. In Experiment 4, participants made a relative comparison between 2 motion patterns that could only be sampled sequentially. Our results show that: (a) information about the noisy motion stimulus grows to an asymptotic level that depends on the quality of the information source; (b) the limited growth is attributable to unequal weighting of the incoming sensory evidence, with early samples being weighted more heavily; (c) little information is lost once a new source of information is being sampled; and (d) the point at which the observer switches from 1 source to another is governed by online monitoring of his or her degree of (un)certainty about the sampled source. These findings demonstrate that the sampling strategy in perceptual decision-making is under some direct control by ongoing cognitive processing. More specifically, participants are able to track a measure of (un)certainty and use this information to guide their sampling behavior. PMID:27819455
NASA Astrophysics Data System (ADS)
Jia, Mengyu; Wang, Shuang; Chen, Xueying; Gao, Feng; Zhao, Huijuan
2016-03-01
Most analytical methods for describing light propagation in turbid media exhibit low effectiveness in the near field of a collimated source. Motivated by the Charge Simulation Method in electromagnetic theory as well as the established discrete-source-based modeling, we have reported on an improved explicit model, referred to as the "Virtual Source" (VS) diffuse approximation (DA), which inherits the mathematical simplicity of the DA while considerably extending its validity in modeling near-field photon migration in low-albedo media. In this model, the collimated light in the standard DA is analogously approximated as multiple isotropic point sources (VS) distributed along the incident direction. For performance enhancement, a fitting procedure between the calculated and realistic reflectances is adopted in the near field to optimize the VS parameters (intensities and locations). To be practically applicable, an explicit 2VS-DA model is established based on closed-form derivations of the VS parameters for the typical ranges of the optical parameters. The proposed VS-DA model is validated by comparison with Monte Carlo simulations, and further introduced into the image reconstruction of a Laminar Optical Tomography system.
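The essence of the VS idea can be sketched by superposing diffusion-approximation point-source kernels; the sketch below uses the infinite-medium Green's function and made-up source depths and weights (the paper instead optimizes these by fitting and handles the boundary properly):

```python
# Sketch of the core VS-DA idea: replace the collimated beam with a few
# isotropic point sources on the incidence axis and superpose standard
# diffusion-approximation Green's functions. The infinite-medium kernel,
# the optical properties, and the source depths/weights are illustrative.
import math

MU_A, MU_S_PRIME = 0.1, 5.0                  # 1/mm, illustrative properties
D = 1.0 / (3.0 * (MU_A + MU_S_PRIME))        # diffusion coefficient
MU_EFF = math.sqrt(MU_A / D)                 # effective attenuation

def point_fluence(r):
    """Infinite-medium DA Green's function for an isotropic point source."""
    return math.exp(-MU_EFF * r) / (4.0 * math.pi * D * r)

# Two virtual sources on the z-axis: (depth in mm, relative intensity).
virtual_sources = [(1.0 / MU_S_PRIME, 0.7), (3.0 / MU_S_PRIME, 0.3)]

def fluence(rho, z=0.0):
    """Fluence at radial distance rho from the point of incidence."""
    return sum(w * point_fluence(math.hypot(rho, z - zs))
               for zs, w in virtual_sources)

for rho in (0.5, 1.0, 2.0, 5.0):
    print(f"rho = {rho:4.1f} mm: fluence ~ {fluence(rho):.4g}")
```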
Mapping of chlorophyll a distributions in coastal zones
NASA Technical Reports Server (NTRS)
Johnson, R. W.
1978-01-01
It is pointed out that chlorophyll a is an important environmental parameter for monitoring water quality, nutrient loads, and pollution effects in coastal zones. High chlorophyll a concentrations occur in areas which have high nutrient inflows from sources such as sewage treatment plants and industrial wastes. Low chlorophyll a concentrations may be due to the addition of toxic substances from industrial wastes or other sources. Remote sensing provides an opportunity to assess distributions of water quality parameters, such as chlorophyll a. A description is presented of the chlorophyll a analysis and a quantitative mapping of the James River, Virginia. An approach considered by Johnson (1977) was used in the analysis. An application of the multiple regression analysis technique to a data set collected over the New York Bight, an environmentally different area of the coastal zone, is also discussed.
A Multi-Camera System for Bioluminescence Tomography in Preclinical Oncology Research
Lewis, Matthew A.; Richer, Edmond; Slavine, Nikolai V.; Kodibagkar, Vikram D.; Soesbe, Todd C.; Antich, Peter P.; Mason, Ralph P.
2013-01-01
Bioluminescent imaging (BLI) of cells expressing luciferase is a valuable noninvasive technique for investigating molecular events and tumor dynamics in the living animal. Current usage is often limited to planar imaging, but tomographic imaging can enhance the usefulness of this technique in quantitative biomedical studies by allowing accurate determination of tumor size and attribution of the emitted light to a specific organ or tissue. Bioluminescence tomography based on a single camera with source rotation or mirrors to provide additional views has previously been reported. We report here in vivo studies using a novel approach with multiple rotating cameras that, when combined with image reconstruction software, provides the desired representation of point source metastases and other small lesions. Comparison with MRI validated the ability to detect lung tumor colonization in mouse lung. PMID:26824926
Laezza, Antonio; Iadonisi, Alfonso; Castro, Cristina De; De Rosa, Mario; Schiraldi, Chiara; Parrilli, Michelangelo; Bedini, Emiliano
2015-07-13
Chemical O-glycosylation of polysaccharides is an almost unexplored reaction. This is mainly due to the difficulties in derivatizing such complex biomacromolecules in a quantitative manner and with fine control of the resulting structural parameters. In this work, chondroitin raw material from a microbial source was chemo- and regioselectively protected to give two polysaccharide intermediates, which acted in turn as glycosyl acceptors in fucosylation reactions. Further manipulations of the fucosylated polysaccharides, including multiple de-O-benzylation and sulfation, furnished for the first time non-animal-sourced fucosylated chondroitin sulfates (fCSs), polysaccharides obtained so far exclusively from sea cucumbers (Echinodermata, Holothuroidea) and showing several very interesting biological activities. A semisynthetic fCS was characterized from a structural point of view by means of 2D-NMR techniques, and preliminarily assayed in an anticoagulant test.
A programmable metasurface with dynamic polarization, scattering and focusing control
NASA Astrophysics Data System (ADS)
Yang, Huanhuan; Cao, Xiangyu; Yang, Fan; Gao, Jun; Xu, Shenheng; Li, Maokun; Chen, Xibi; Zhao, Yi; Zheng, Yuejun; Li, Sijia
2016-10-01
Diverse electromagnetic (EM) responses of a programmable metasurface with a relatively large scale have been investigated, where multiple functionalities are obtained on the same surface. The unit cell in the metasurface is integrated with one PIN diode, and thus a binary coded phase is realized for a single polarization. Exploiting this anisotropic characteristic, reconfigurable polarization conversion is presented first. Then the dynamic scattering performance for two kinds of sources, i.e. a plane wave and a point source, is carefully elaborated. To tailor the scattering properties, a genetic algorithm, normally based on binary coding, is coupled with the scattering pattern analysis to optimize the coding matrix. Besides, an inverse fast Fourier transform (IFFT) technique is also introduced to expedite the optimization process of a large metasurface. Since the coding control of each unit cell allows a local and direct modulation of the EM wave, various EM phenomena including anomalous reflection, diffusion, beam steering and beam forming are successfully demonstrated by both simulations and experiments. It is worthwhile to point out that a real-time switch among these functionalities is also achieved by using a field-programmable gate array (FPGA). All the results suggest that the proposed programmable metasurface has great potential for future applications.
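The IFFT shortcut rests on the fact that, up to element-pattern factors, the far-field array factor of a coding metasurface is the 2-D Fourier transform of its element phase distribution. A minimal sketch with a random 1-bit coding matrix (sizes and the figure of merit are illustrative, not from the paper):

```python
# Illustrative sketch of the IFFT shortcut: for a 1-bit coding metasurface,
# the far-field array factor is (up to element pattern and scan factors)
# the 2-D Fourier transform of the element phase distribution, so an FFT
# scores a candidate coding matrix very cheaply inside an optimizer.
import numpy as np

rng = np.random.default_rng(0)
coding = rng.integers(0, 2, size=(32, 32))        # random 0/1 coding matrix
phases = np.exp(1j * np.pi * coding)              # 1-bit phase: 0 or pi

pattern = np.fft.fftshift(np.fft.fft2(phases, s=(256, 256)))
power_db = 20 * np.log10(np.abs(pattern) + 1e-12)

# A diffusion-style optimizer (e.g. a genetic algorithm) would minimize the
# peak scattering level; here we just report peak-to-mean as a toy score.
print(f"peak-to-mean scattering level: "
      f"{power_db.max() - power_db.mean():.1f} dB")
```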
Experimental demonstration of interferometric imaging using photonic integrated circuits.
Su, Tiehui; Scott, Ryan P; Ogden, Chad; Thurman, Samuel T; Kendrick, Richard L; Duncan, Alan; Yu, Runxiang; Yoo, S J B
2017-05-29
This paper reports the design, fabrication, and demonstration of a silica photonic integrated circuit (PIC) capable of conducting interferometric imaging with multiple baselines around λ = 1550 nm. The PIC consists of four sets of five waveguides (a total of twenty waveguides), each leading to a three-band spectrometer (a total of sixty waveguides), after which a tunable Mach-Zehnder interferometer (MZI) constructs interferograms from each pair of the waveguides. A total of thirty sets of interferograms (ten pairs of three spectral bands) is collected by the detector array at the output of the PIC. The optical path difference (OPD) of each interferometer baseline is kept to within 1 µm to maximize the visibility of the interference measurement. We constructed an experiment to utilize the two baselines for complex visibility measurement on a point source and a variable-width slit. We used the point source to demonstrate a near-unity value of the PIC instrumental visibility, and used the variable slit to demonstrate visibility measurement for a simple extended object. The experimental results demonstrate the visibility on baselines of 5 and 20 mm for slit widths of 0 to 500 µm, in good agreement with theoretical predictions.
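The theoretical prediction in question is the van Cittert-Zernike visibility of a uniformly illuminated slit, |sinc(B·a/(λ·d))|. A quick numerical check, assuming a 2 m slit-to-aperture distance (an illustrative value, not stated above):

```python
# Van Cittert-Zernike check: fringe visibility of a uniform slit of width a
# at distance d, on baseline B, is |sinc(B*a/(lambda*d))|. The stand-off
# distance is an illustrative assumption.
import math

WAVELENGTH = 1550e-9   # m
DISTANCE = 2.0         # m, assumed slit-to-aperture distance

def visibility(baseline_m, slit_width_m):
    x = baseline_m * slit_width_m / (WAVELENGTH * DISTANCE)
    return abs(math.sin(math.pi * x) / (math.pi * x)) if x else 1.0

for width_um in (0, 100, 250, 500):
    v5 = visibility(5e-3, width_um * 1e-6)
    v20 = visibility(20e-3, width_um * 1e-6)
    print(f"slit {width_um:3d} um: V(5 mm) = {v5:.3f}, V(20 mm) = {v20:.3f}")
```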
Computer simulation of reconstructed image for computer-generated holograms
NASA Astrophysics Data System (ADS)
Yasuda, Tomoki; Kitamura, Mitsuru; Watanabe, Masachika; Tsumuta, Masato; Yamaguchi, Takeshi; Yoshikawa, Hiroshi
2009-02-01
This report presents the results of computer simulation of images for image-type Computer-Generated Holograms (CGHs) observable under white light and fabricated with an electron beam lithography system. The simulated image is obtained by calculating the wavelength and intensity of diffracted light traveling toward the viewing point from the CGH. The wavelength and intensity of the diffracted light are calculated using an FFT image generated from the interference fringe data. A parallax image of the CGH corresponding to the viewing point can be easily obtained using this simulation method. The simulated image from the interference fringe data was compared with the reconstructed image of a real CGH made with an electron beam (EB) lithography system. According to the results, the simulated image closely resembled the reconstructed image of the CGH in shape, parallax, coloring, and shade. In addition, two kinds of simulations, the several-light-sources method and the smoothing method, reproduced the changes in chroma saturation and blur that depend on the shape of the light source. Furthermore, as applications of the CGH, a full-color CGH and a CGH with multiple images were simulated. The simulated images of those CGHs also closely resembled the reconstructed images of the real CGHs.
NASA Astrophysics Data System (ADS)
Tong, Daniel Quansong; Kang, Daiwen; Aneja, Viney P.; Ray, John D.
2005-01-01
We present in this study both measurement-based and modeling analyses for elucidation of the source attribution, influence areas, and process budget of reactive nitrogen oxides at two rural southeast United States sites (Great Smoky Mountains national park (GRSM) and Mammoth Cave national park (MACA)). Availability of nitrogen oxides is considered the limiting factor for ozone production in these areas, and the relative source contribution of reactive nitrogen oxides from point or mobile sources is important in understanding why these areas have high ozone. Using two independent observation-based techniques, multiple linear regression analysis and emission inventory analysis, we demonstrate that point sources contribute a minimum of 23% of total NOy at GRSM and 27% at MACA. The influence areas for these two sites, or origins of nitrogen oxides, are investigated using trajectory-cluster analysis. The result shows that air masses from the West and Southwest sweep over GRSM most frequently, while pollutants transported from the eastern half (i.e., East, Northeast, and Southeast) have limited influence (<10% of all air masses) on air quality at GRSM. The processes responsible for the formation and removal of reactive nitrogen oxides are investigated using a comprehensive 3-D air quality model (Multiscale Air Quality Simulation Platform (MAQSIP)). The NOy contribution associated with chemical transformations to NOz and O3, based on process budget analysis, is as follows: 32% and 84% for NOz, and 26% and 80% for O3 at GRSM and MACA, respectively. The similarity between the NOz and O3 process budgets suggests a close association between nitrogen oxides and effective O3 production at these rural locations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Jin-Won; Lee, Yun-Seong, E-mail: leeeeys@kaist.ac.kr; Chang, Hong-Young
2014-08-15
In this study, we attempted to determine the feasibility of multiple inductively coupled plasma (ICP) and helicon plasma sources for large-area processes. Experiments were performed with one and two coils to measure plasma and electrical parameters, and a circuit simulation was performed to measure the current at each coil in the 2-coil experiment. Based on the results, we could determine the feasibility of multiple ICP sources, owing to a direct change of impedance with current and the saturation of impedance due to the skin-depth effect. However, a helicon plasma source is difficult to adapt to multiple sources, owing to the consistent change of real impedance with mode transition and the low uniformity of the B-field confinement. As a result, it is expected that ICP can be adapted to multiple sources for large-area processes.
Diamond, Kevin R; Farrell, Thomas J; Patterson, Michael S
2003-12-21
Steady-state diffusion theory models of fluorescence in tissue have been investigated for recovering fluorophore concentrations and fluorescence quantum yield. Spatially resolved fluorescence and excitation and emission reflectance were generated using Monte Carlo simulations, and measured using a multi-fibre probe on tissue-simulating phantoms containing either aluminium phthalocyanine tetrasulfonate (AlPcS4), Photofrin, or meso-tetra-(4-sulfonatophenyl)-porphine dihydrochloride (TPPS4). The accuracy of the fluorophore concentration and fluorescence quantum yield recovered by three different models of spatially resolved fluorescence was compared. The models were based on: (a) a weighted difference of the excitation and emission reflectance, (b) fluorescence due to a point excitation source or (c) fluorescence due to a pencil beam excitation source. When literature values for the fluorescence quantum yield were used for each of the fluorophores, the fluorophore absorption coefficient (and hence concentration) at the excitation wavelength (mu(a,x,f)) was recovered with a root-mean-square accuracy of 11.4% using the point source model of fluorescence and 8.0% using the more complicated pencil beam excitation model. The accuracy was calculated over a broad range of optical properties and fluorophore concentrations. The weighted difference of reflectance model performed poorly, with a root-mean-square error in concentration of about 50%. Monte Carlo simulations suggest that there are some situations where the weighted difference of reflectance is as accurate as the other two models, although this was not confirmed experimentally. Estimates of the fluorescence quantum yield in multiple scattering media were also made by determining mu(a,x,f) independently from the fitted absorption spectrum and applying the various diffusion theory models. The fluorescence quantum yields for AlPcS4 and TPPS4 were calculated to be 0.59 +/- 0.03 and 0.121 +/- 0.001 respectively using the point source model, and 0.63 +/- 0.03 and 0.129 +/- 0.002 using the pencil beam excitation model. These results are consistent with published values.
40 CFR 51.35 - How can my state equalize the emission inventory effort from year to year?
Code of Federal Regulations, 2012 CFR
2012-07-01
... approach: (1) Each year, collect and report data for all Type A (large) point sources (this is required for all Type A point sources). (2) Each year, collect data for one-third of your sources that are not Type... save 3 years of data and then report all emissions from the sources that are not Type A point sources...
40 CFR 51.35 - How can my state equalize the emission inventory effort from year to year?
Code of Federal Regulations, 2010 CFR
2010-07-01
... approach: (1) Each year, collect and report data for all Type A (large) point sources (this is required for all Type A point sources). (2) Each year, collect data for one-third of your sources that are not Type... save 3 years of data and then report all emissions from the sources that are not Type A point sources...
40 CFR 51.35 - How can my state equalize the emission inventory effort from year to year?
Code of Federal Regulations, 2014 CFR
2014-07-01
... approach: (1) Each year, collect and report data for all Type A (large) point sources (this is required for all Type A point sources). (2) Each year, collect data for one-third of your sources that are not Type... save 3 years of data and then report all emissions from the sources that are not Type A point sources...
Evaluation of Electroencephalography Source Localization Algorithms with Multiple Cortical Sources.
Bradley, Allison; Yao, Jun; Dewald, Jules; Richter, Claus-Peter
2016-01-01
Source localization algorithms often show multiple active cortical areas as the source of electroencephalography (EEG). Yet, there is little data quantifying the accuracy of these results. In this paper, the performance of current source density source localization algorithms for the detection of multiple cortical sources of EEG data has been characterized. EEG data were generated by simulating multiple cortical sources (2-4) with the same strength or two sources with relative strength ratios of 1:1 to 4:1, and adding noise. These data were used to reconstruct the cortical sources using current source density (CSD) algorithms: sLORETA, MNLS, and LORETA using a p-norm with p equal to 1, 1.5 and 2. Precision (percentage of the reconstructed activity corresponding to simulated activity) and Recall (percentage of the simulated sources reconstructed) of each of the CSD algorithms were calculated. While sLORETA has the best performance when only one source is present, when two or more sources are present LORETA with p equal to 1.5 performs better. When the relative strength of one of the sources is decreased, all algorithms have more difficulty reconstructing that source. However, LORETA 1.5 continues to outperform other algorithms. If only the strongest source is of interest sLORETA is recommended, while LORETA with p equal to 1.5 is recommended if two or more of the cortical sources are of interest. These results provide guidance for choosing a CSD algorithm to locate multiple cortical sources of EEG and for interpreting the results of these algorithms.
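The two scores can be sketched on binary activity maps as follows; the toy vertex maps below are a vertex-level proxy for the source-level definitions used above:

```python
# Hedged sketch of the two scores: Precision = fraction of reconstructed
# activity that overlaps simulated activity; Recall = fraction of simulated
# activity that is recovered. Computed here on toy binary vertex maps as a
# proxy for the paper's source-level definitions.
import numpy as np

simulated = np.zeros(1000, dtype=bool)           # cortical "vertices"
simulated[100:120] = simulated[700:720] = True   # two simulated sources

reconstructed = np.zeros(1000, dtype=bool)
reconstructed[95:125] = True                     # recovers source 1, misses 2

tp = np.sum(simulated & reconstructed)
precision = tp / np.sum(reconstructed)
recall = tp / np.sum(simulated)
print(f"precision = {precision:.2f}, recall = {recall:.2f}")
```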
Multiple Sources of Prescription Payment and Risky Opioid Therapy Among Veterans.
Becker, William C; Fenton, Brenda T; Brandt, Cynthia A; Doyle, Erin L; Francis, Joseph; Goulet, Joseph L; Moore, Brent A; Torrise, Virginia; Kerns, Robert D; Kreiner, Peter W
2017-07-01
Opioid overdose and other related harms are a major source of morbidity and mortality among US Veterans, in part due to high-risk opioid prescribing. We sought to determine whether having multiple sources of payment for opioids-as a marker for out-of-system access-is associated with risky opioid therapy among veterans. Cross-sectional study examining the association between multiple sources of payment and risky opioid therapy among all individuals with Veterans Health Administration (VHA) payment for opioid analgesic prescriptions in Kentucky during fiscal year 2014-2015. Source of payment categories: (1) VHA only source of payment (sole source); (2) sources of payment were VHA and at least 1 cash payment [VHA+cash payment(s)] whether or not there was a third source of payment; and (3) at least one other noncash source: Medicare, Medicaid, or private insurance [VHA+noncash source(s)]. Our outcomes were 2 risky opioid therapies: combination opioid/benzodiazepine therapy and high-dose opioid therapy, defined as morphine equivalent daily dose ≥90 mg. Of the 14,795 individuals in the analytic sample, there were 81.9% in the sole source category, 6.6% in the VHA+cash payment(s) category, and 11.5% in the VHA+noncash source(s) category. In logistic regression, controlling for age and sex, persons with multiple payment sources had significantly higher odds of each risky opioid therapy, with those in the VHA+cash having significantly higher odds than those in the VHA+noncash source(s) group. Prescribers should examine the prescription monitoring program as multiple payment sources increase the odds of risky opioid therapy.
The USA Nr Inventory: Dominant Sources and Primary Transport Pathways
NASA Astrophysics Data System (ADS)
Sabo, R. D.; Clark, C.; Sobota, D. J.; Compton, J.; Cooter, E. J.; Schwede, D. B.; Bash, J. O.; Rea, A.; Dobrowolski, J. P.
2016-12-01
Efforts to mitigate the deleterious effects of excess reactive nitrogen (Nr) on human health and ecosystem goods and services, while ensuring food, biofuel, and fiber availability, are among the most pressing environmental management challenges of this century. Effective management of Nr requires up-to-date inventories that quantitatively characterize the sources, transport, and transformation of Nr through the environment. The inherent complexity of the nitrogen cycle, however, with multiple exchange points across air, water, and terrestrial media, renders such inventories difficult to compile and manage. Previous Nr Inventories are for 2002 and 2007, and used data sources that have since been improved. Thus, this recent inventory will substantially advance the methodology across many sectors of the inventory (e.g. deposition and biological fixation in crops and natural systems) and create a recent snapshot that is sorely needed for policy planning and trends analysis. Here we use a simple mass balance approach to estimate the input-output budgets for all United States Geological Survey Hydrologic Unit Code-8 watersheds. We focus on a recent year (i.e. 2012) to update the Nr Inventory, but apply the analytical approach for multiple years where possible to assess trends through time. We also compare various sector estimates using multiple methodologies. Assembling datasets that account for new Nr inputs into watersheds (e.g., atmospheric NOy deposition, food imports, biological N fixation) and internal fluxes of recycled Nr (e.g., manure, Nr emissions/volatilization) provides an unprecedented, data-driven computation of N flux, as sketched below. Input-output budgets will offer insight into 1) the dominant sources of Nr in a watershed (e.g., food imports, atmospheric N deposition, or fertilizer), 2) the primary loss pathways for Nr (e.g., crop N harvest, volatilization/emissions), and 3) which watersheds are net sources versus sinks of Nr. These insights will provide needed clarity for managers looking to minimize the loss of Nr to atmospheric and aquatic compartments, while also providing a foundational database for researchers assessing the dominant controls of N retention and loss in natural and anthropogenically dominated ecosystems. Disclaimer: Views expressed are the authors' and not views or policies of the U.S. EPA.
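A minimal sketch of the watershed-scale input-output budget, with placeholder terms and magnitudes (not inventory values):

```python
# Minimal sketch of the input-output budget idea for one HUC-8 watershed:
# new Nr inputs minus managed outputs gives the residual ("surplus")
# available for loss or storage. All terms and numbers are placeholders.
inputs = {                        # kg N / yr, illustrative
    'fertilizer':             4.0e6,
    'atmospheric_deposition': 1.2e6,
    'biological_fixation':    2.5e6,
    'food_feed_imports':      0.8e6,
}
outputs = {
    'crop_harvest':           5.0e6,
    'volatilization':         0.9e6,
}

surplus = sum(inputs.values()) - sum(outputs.values())
dominant = max(inputs, key=inputs.get)
print(f"dominant Nr source: {dominant}")
print(f"net Nr balance: {surplus:+.2e} kg N/yr "
      f"({'source' if surplus > 0 else 'sink'} behavior)")
```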
Design of system calibration for effective imaging
NASA Astrophysics Data System (ADS)
Varaprasad Babu, G.; Rao, K. M. M.
2006-12-01
A CCD-based characterization setup, comprising a light source, a CCD linear array, electronics for signal conditioning/amplification, and a PC interface, has been developed to generate images at varying densities and at multiple view angles. This arrangement is used to simulate and evaluate images by the super-resolution technique with multiple overlaps and yaw-rotated images at different view angles. This setup also generates images at different densities to analyze the response of the detector port-wise separately. The light intensity produced by the source needs to be calibrated for proper imaging by the highly sensitive CCD detector over the FOV. One approach is to design a complex integrating sphere arrangement, which is costly for such applications. Another approach is to provide a suitable intensity feedback correction wherein the current through the lamp is controlled in a closed-loop arrangement; this method is generally used in applications where the light source is a point source. The third method is to control the time of exposure inversely to the lamp variations where the lamp intensity cannot be controlled directly; in this method, the light intensity at the start of each line is sampled and the correction factor is applied to the full line. The fourth method is to provide correction through a Look-Up Table whereby the responses of all the detectors are normalized through the digital transfer function. The fifth method is to have a light-line arrangement where light from multiple fiber optic cables is derived from a single source and the cables are arranged in a line; this is generally applicable and economical for low-width cases. In our application, a new method is used wherein an inverse multi-density filter is designed, which provides an effective calibration for the full swath even at low light intensities. The light intensity along the length is measured, an inverse density is computed, and a correction filter is generated and implemented in the CCD-based characterization setup. This paper describes certain novel techniques for the design and implementation of system calibration for effective imaging, to produce a better quality data product, especially while handling high-resolution data.
Integrating Low-Cost Mems Accelerometer Mini-Arrays (mama) in Earthquake Early Warning Systems
NASA Astrophysics Data System (ADS)
Nof, R. N.; Chung, A. I.; Rademacher, H.; Allen, R. M.
2016-12-01
Current operational Earthquake Early Warning Systems (EEWS) acquire data with networks of single seismic stations, and compute source parameters assuming earthquakes to be point sources. For large events, the point-source assumption leads to an underestimation of magnitude, and the use of single stations leads to large uncertainties in the locations of events outside the network. We propose the use of mini-arrays to improve EEWS. Mini-arrays have the potential to: (a) estimate reliable hypocentral locations by beam-forming (FK-analysis) techniques; (b) characterize the rupture dimensions and account for finite-source effects, leading to more reliable estimates for large magnitudes. Previously, the high price of multiple seismometers has made creating arrays cost-prohibitive. However, we propose setting up mini-arrays of a new seismometer based on a low-cost (<$150), high-performance MEMS accelerometer around conventional seismic stations. The expected benefits of such an approach include decreasing alert times, improving real-time shaking predictions and mitigating false alarms. We use low-resolution 14-bit Quake Catcher Network (QCN) data collected during the Rapid Aftershock Mobilization Program (RAMP) in Christchurch, NZ, following the M7.1 Darfield earthquake in September 2010. As the QCN network was so dense, we were able to use small sub-arrays of up to ten sensors, spread over an area of at most 1.7 x 2.2 km2, to demonstrate our approach and to solve for the BAZ of two events (Mw4.7 and Mw5.1) with less than ±10° error. We will also present details, benchmarks, and real-time measurements of the new 24-bit device.
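The FK/beam-forming step can be sketched with a toy delay-and-sum scan: a plane wave crossing the array is time-shifted according to candidate slowness vectors, and the back-azimuth maximizing beam power is selected. Geometry, slowness, and noise below are illustrative, and the slowness magnitude is held fixed for brevity:

```python
# Toy delay-and-sum (time-domain FK) sketch: a plane wave produces
# inter-station delays tau_i = s . x_i; scanning candidate slowness vectors
# and picking the maximum beam power recovers the back-azimuth (BAZ).
import numpy as np

rng = np.random.default_rng(0)
fs = 100.0                                         # Hz
stations = np.array([[0, 0], [800, 0], [400, 700], [0, 600]], float)  # m

true_baz, slow = 60.0, 2.5e-4                      # deg, s/m (~4 km/s)
a0 = np.deg2rad(true_baz)
s_true = slow * np.array([np.sin(a0), np.cos(a0)])

t = np.arange(0, 4, 1 / fs)
wavelet = np.exp(-((t - 2.0) ** 2) / 0.01)         # Gaussian pulse
traces = np.array([np.interp(t - s_true @ xy, t, wavelet)
                   + 0.02 * rng.standard_normal(t.size)
                   for xy in stations])

best = (None, -np.inf)
for baz in np.arange(0.0, 360.0, 2.0):
    a = np.deg2rad(baz)
    s = slow * np.array([np.sin(a), np.cos(a)])    # fixed |s| for brevity
    beam = np.mean([np.interp(t + s @ xy, t, tr)   # undo candidate delays
                    for xy, tr in zip(stations, traces)], axis=0)
    power = np.sum(beam ** 2)
    if power > best[1]:
        best = (baz, power)
print(f"recovered back-azimuth: {best[0]:.0f} deg (true {true_baz} deg)")
```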
Organic contaminant transport and fate in the subsurface: Evolution of knowledge and understanding
NASA Astrophysics Data System (ADS)
Essaid, Hedeff I.; Bekins, Barbara A.; Cozzarelli, Isabelle M.
2015-07-01
Toxic organic contaminants may enter the subsurface as slightly soluble and volatile nonaqueous phase liquids (NAPLs) or as dissolved solutes resulting in contaminant plumes emanating from the source zone. A large body of research published in Water Resources Research has been devoted to characterizing and understanding processes controlling the transport and fate of these organic contaminants and the effectiveness of natural attenuation, bioremediation, and other remedial technologies. These contributions include studies of NAPL flow, entrapment, and interphase mass transfer that have advanced from the analysis of simple systems with uniform properties and equilibrium contaminant phase partitioning to complex systems with pore-scale and macroscale heterogeneity and rate-limited interphase mass transfer. Understanding of the fate of dissolved organic plumes has advanced from when biodegradation was thought to require oxygen to recognition of the importance of anaerobic biodegradation, multiple redox zones, microbial enzyme kinetics, and mixing of organic contaminants and electron acceptors at plume fringes. Challenges remain in understanding the impacts of physical, chemical, biological, and hydrogeological heterogeneity, pore-scale interactions, and mixing on the fate of organic contaminants. Further effort is needed to successfully incorporate these processes into field-scale predictions of transport and fate. Regulations have greatly reduced the frequency of new point-source contamination problems; however, remediation at many legacy plumes remains challenging. A number of fields of current relevance are benefiting from research advances from point-source contaminant research. These include geologic carbon sequestration, nonpoint-source contamination, aquifer storage and recovery, the fate of contaminants from oil and gas development, and enhanced bioremediation.
NASA Astrophysics Data System (ADS)
Zackay, Barak; Ofek, Eran O.
2017-02-01
Stacks of digital astronomical images are combined in order to increase image depth. The variable seeing conditions, sky background, and transparency of ground-based observations make the coaddition process nontrivial. We present image coaddition methods that maximize the signal-to-noise ratio (S/N) and are optimized for source detection and flux measurement. We show that for these purposes, the best way to combine images is to apply a matched filter to each image using its own point-spread function (PSF) and only then to sum the images with the appropriate weights. Methods that either match the filter after coaddition or perform PSF homogenization prior to coaddition will result in a loss of sensitivity. We argue that our method provides an increase of between a few percent and 25% in the survey speed of deep ground-based imaging surveys compared with weighted coaddition techniques. We demonstrate this claim using simulated data as well as data from the Palomar Transient Factory data release 2. We present a variant of this coaddition method that is optimal for PSF or aperture photometry. We also provide an analytic formula for calculating the S/N for PSF photometry on single or multiple observations. In the next paper in this series, we present a method for image coaddition in the limit of background-dominated noise, which is optimal for any statistical test or measurement on the constant-in-time image (e.g., source detection, shape or flux measurement, or star-galaxy separation), making the original data redundant. We provide an implementation of these algorithms in MATLAB.
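A minimal sketch of the advocated rule: cross-correlate each image with its own PSF (a matched filter for point sources), weight by flux zero-point over variance, and sum. The Gaussian PSFs, weights, and the S/N proxy are illustrative toys, not the paper's pipeline:

```python
# Sketch of matched-filter coaddition: filter each image with its own PSF,
# weight by (zero-point / variance), and sum - rather than homogenizing
# PSFs first. PSFs, noise levels, and the S/N proxy are illustrative.
import numpy as np

def gaussian_psf(shape, sigma):
    y, x = np.indices(shape)
    cy, cx = (shape[0] - 1) / 2, (shape[1] - 1) / 2
    p = np.exp(-((x - cx)**2 + (y - cy)**2) / (2 * sigma**2))
    return p / p.sum()

def matched_filter_coadd(images, psfs, variances, zeropoints):
    """Sum_j (F_j / sigma_j^2) * (P_j cross-correlated with M_j)."""
    coadd = np.zeros_like(images[0])
    for img, psf, var, f in zip(images, psfs, variances, zeropoints):
        matched = np.real(np.fft.ifft2(np.fft.fft2(img) *
                                       np.conj(np.fft.fft2(psf))))
        coadd += (f / var) * matched
    return coadd

rng = np.random.default_rng(1)
shape = (64, 64)
truth = np.zeros(shape)
truth[32, 32] = 100.0                     # a single point source
images, psfs = [], []
for sigma in (1.5, 2.5, 4.0):             # seeing varies between epochs
    psf = gaussian_psf(shape, sigma)
    img = np.real(np.fft.ifft2(np.fft.fft2(truth) * np.fft.fft2(psf)))
    images.append(img + rng.normal(0.0, 0.5, shape))   # add noise
    psfs.append(psf)

coadd = matched_filter_coadd(images, psfs, [0.25] * 3, [1.0] * 3)
print("peak-to-noise proxy:", coadd.max() / coadd.std())
```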
NASA Astrophysics Data System (ADS)
Dupas, Rémi; Tittel, Jörg; Jordan, Phil; Musolff, Andreas; Rode, Michael
2018-05-01
A common assumption in phosphorus (P) load apportionment studies is that P loads in rivers consist of flow independent point source emissions (mainly from domestic and industrial origins) and flow dependent diffuse source emissions (mainly from agricultural origin). Hence, rivers dominated by point sources will exhibit highest P concentration during low-flow, when flow dilution capacity is minimal, whereas rivers dominated by diffuse sources will exhibit highest P concentration during high-flow, when land-to-river hydrological connectivity is maximal. Here, we show that Soluble Reactive P (SRP) concentrations in three forested catchments free of point sources exhibited seasonal maxima during the summer low-flow period, i.e. a pattern expected in point source dominated areas. A load apportionment model (LAM) is used to show how point sources contribution may have been overestimated in previous studies, because of a biogeochemical process mimicking a point source signal. Almost twenty-two years (March 1995-September 2016) of monthly monitoring data of SRP, dissolved iron (Fe) and nitrate-N (NO3) were used to investigate the underlying mechanisms: SRP and Fe exhibited similar seasonal patterns and opposite to that of NO3. We hypothesise that Fe oxyhydroxide reductive dissolution might be the cause of SRP release during the summer period, and that NO3 might act as a redox buffer, controlling the seasonality of SRP release. We conclude that LAMs may overestimate the contribution of P point sources, especially during the summer low-flow period, when eutrophication risk is maximal.
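Load apportionment models of the kind discussed here are commonly written (following Bowes et al.) as C(Q) = A·Q^(B-1) + C·Q^(D-1), with the B < 1 term diluting like a point source and the D > 1 term growing like diffuse runoff; whether this paper uses exactly this form is not stated, and the synthetic data below are illustrative:

```python
# Sketch of a common load apportionment model form (after Bowes et al.):
# C(Q) = A*Q**(B-1) + C*Q**(D-1). The B < 1 term dilutes with flow like a
# point source; the D > 1 term grows like diffuse runoff. Synthetic data,
# coefficients, and bounds are illustrative.
import numpy as np
from scipy.optimize import curve_fit

def lam(q, a, b, c, d):
    return a * q**(b - 1.0) + c * q**(d - 1.0)

rng = np.random.default_rng(2)
q_obs = rng.uniform(0.1, 5.0, 200)                    # discharge, m^3/s
c_obs = lam(q_obs, 0.02, 0.2, 0.01, 1.6) * rng.lognormal(0.0, 0.1, q_obs.size)

popt, _ = curve_fit(lam, q_obs, c_obs, p0=[0.01, 0.5, 0.01, 1.5],
                    bounds=([0, 0, 0, 1], [1, 1, 1, 3]))
a, b, c, d = popt
point_share = a * q_obs**(b - 1.0) / lam(q_obs, *popt)
print(f"fitted exponents b = {b:.2f}, d = {d:.2f}; "
      f"mean apparent point-source share = {point_share.mean():.1%}")
```

As the abstract argues, a biogeochemical process that raises concentrations at low flow would be absorbed into the first term of such a fit and read, misleadingly, as a point source.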
NASA Astrophysics Data System (ADS)
Zhang, S.; Tang, L.
2007-05-01
Panjiakou Reservoir is an important drinking water resource in the Haihe River Basin, Hebei Province, People's Republic of China. The upstream watershed area is about 35,000 square kilometers. Recently, the water pollution in the reservoir has become more serious owing to non-point source pollution as well as point source pollution in the upstream watershed. To effectively manage the reservoir and watershed and develop a plan to reduce pollutant loads, the loading of non-point and point source pollution and its distribution over the upstream watershed must be fully understood. The SWAT model is used to simulate the production and transport of non-point source pollutants in the upstream watershed of the Panjiakou Reservoir. The loadings of non-point source pollutants are calculated for different hydrologic years and the spatial and temporal characteristics of non-point source pollution are studied. The stream network and the topographic characteristics of the stream network and sub-basins are all derived from the DEM with ArcGIS software. The soil and land use data are reclassified and a soil physical properties database file is created for the model. The SWAT model was calibrated with observed data from several hydrologic monitoring stations in the study area. The results of the calibration show that the model performs fairly well. The calibrated model was then used to calculate the loadings of non-point source pollutants for a wet year, a normal year and a dry year respectively. The time and space distributions of flow, sediment and non-point source pollution were analyzed based on the simulated results. The differences in the calculated results among the hydrologic years are dramatic. The loading of non-point source pollution is relatively large in the wet year but small in the dry year, since non-point source pollutants are mainly transported by runoff. The pollution loading within a year is mainly produced in the flood season. Because SWAT is a distributed model, it is possible to view model output as it varies across the basin, so critical areas and reaches can be identified in the study area. According to the simulation results, it is found that different land uses yield different results and that fertilization in the rainy season has an important impact on non-point source pollution. The limitations of the SWAT model are also discussed and measures for the control and prevention of non-point source pollution for the Panjiakou Reservoir are presented based on an analysis of the model results.
NASA Astrophysics Data System (ADS)
Nishiura, Takanobu; Nakamura, Satoshi
2002-11-01
Capturing distant-talking speech with high quality is very important for a hands-free speech interface, and a microphone array is an ideal candidate for this purpose. However, this approach requires localizing the target talker. Conventional talker localization algorithms in multiple sound source environments not only have difficulty localizing multiple sound sources accurately, but also have difficulty identifying the target talker among known multiple sound source positions. To cope with these problems, we propose a new talker localization algorithm consisting of two parts. One is a DOA (direction of arrival) estimation algorithm for multiple sound source localization based on the CSP (cross-power spectrum phase) coefficient addition method. The other is a statistical sound source identification algorithm based on a GMM (Gaussian mixture model) for localizing the target talker position among the localized multiple sound sources. In this paper, we particularly focus on the talker localization performance of the combination of these two algorithms with a microphone array. We conducted evaluation experiments in real noisy reverberant environments. As a result, we confirmed that multiple sound signals can be accurately identified as "speech" or "non-speech" by the proposed algorithm. [Work supported by ATR, and MEXT of Japan.]
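The CSP coefficient at the heart of the DOA stage is the phase-transform-weighted cross-correlation (GCC-PHAT). Below is a minimal single-microphone-pair sketch estimating a time difference of arrival; the paper's coefficient addition across pairs and the GMM identification stage are not reproduced, and all signals are synthetic.

```python
# Minimal CSP (cross-power spectrum phase, i.e. GCC-PHAT) sketch for
# estimating the time difference of arrival (TDOA) at one microphone pair.
import numpy as np

def csp_tdoa(x1, x2, fs):
    n = len(x1) + len(x2)
    X1, X2 = np.fft.rfft(x1, n), np.fft.rfft(x2, n)
    G = X1 * np.conj(X2)
    G /= np.abs(G) + 1e-12          # phase transform: keep phase, drop magnitude
    csp = np.fft.irfft(G, n)        # CSP coefficient as a function of lag
    max_lag = n // 2
    csp = np.concatenate((csp[-max_lag:], csp[:max_lag]))
    return (np.argmax(csp) - max_lag) / fs

fs = 16000
t = np.arange(0, 0.1, 1 / fs)
s = np.sin(2 * np.pi * 440 * t) * np.exp(-20 * t)   # toy source signal
delay = 12                                          # samples (talker off-axis)
x1 = np.concatenate((np.zeros(delay), s))           # mic 1 hears the talker later
x2 = np.concatenate((s, np.zeros(delay)))
est = csp_tdoa(x1, x2, fs)
print(f"estimated TDOA {est*1e6:.0f} us (true {delay/fs*1e6:.0f} us)")
```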
THE TOP 10 SPITZER YOUNG STELLAR OBJECTS IN 30 DORADUS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Walborn, Nolan R.; Barba, Rodolfo H.; Sewilo, Marta M., E-mail: walborn@stsci.edu, E-mail: rbarba@dfuls.cl, E-mail: mmsewilo@pha.jhu.edu
2013-04-15
The most luminous Spitzer point sources in the 30 Doradus triggered second generation are investigated coherently in the 3-8 μm region. Remarkable diversity and complexity in their natures are revealed. Some are also among the brightest JHK sources, while others are not. Several of them are multiple when examined at higher angular resolutions with Hubble Space Telescope NICMOS and WFPC2/WFC3 as available, or with VISTA/VMC otherwise. One is a dusty compact H II region near the far northwestern edge of the complex, containing a half-dozen bright I-band sources. Three others appear closely associated with luminous WN stars and causal connections are suggested. Some are in the heads of dust pillars oriented toward R136, as previously discussed from the NICMOS data. One resides in a compact cluster of much fainter sources, while another appears monolithic at the highest resolutions. Surprisingly, one is the brighter of the two extended "mystery spots" associated with Knot 2 of Walborn et al. Masses are derived from young stellar object models for unresolved sources and lie in the 10-30 M☉ range. Further analysis of the IR sources in this unique region will advance understanding of triggered massive star formation, perhaps in some unexpected and unprecedented ways.
NASA Astrophysics Data System (ADS)
Gerardy, I.; Rodenas, J.; Van Dycke, M.; Gallardo, S.; Tondeur, F.
2008-02-01
Brachytherapy is a radiotherapy treatment in which encapsulated radioactive sources are introduced within a patient. Depending on the technique used, such sources can produce high, medium or low local dose rates. The Monte Carlo method is a powerful tool for simulating sources and devices in order to help physicists in treatment planning. In multiple types of gynaecological cancer, intracavitary brachytherapy (HDR Ir-192 source) is used in combination with other treatments to give an additional local dose to the tumour. Different types of applicators are used in order to increase the dose imparted to the tumour and to limit the effect on healthy surrounding tissues. The aim of this work is to model both the applicator and the HDR source in order to evaluate the dose at a reference point as well as the effect of the applicator materials on the near-field dose. The MCNP5 code, based on the Monte Carlo method, has been used for the simulation. Dose calculations have been performed with the *F8 energy deposition tally, taking into account photons and electrons. Results from the simulation have been compared with experimental in-phantom dose measurements. Differences between calculations and measurements are lower than 5%. The importance of the source position has been underlined.
Sound source localization inspired by the ears of the Ormia ochracea
NASA Astrophysics Data System (ADS)
Kuntzman, Michael L.; Hall, Neal A.
2014-07-01
The parasitoid fly Ormia ochracea has the remarkable ability to locate crickets using audible sound. This ability is remarkable because the fly's hearing mechanism spans only 1.5 mm, which is 50× smaller than the wavelength of sound emitted by the cricket. The hearing mechanism is, for all practical purposes, a point in space with no significant interaural time or level differences to draw from. It has been discovered that evolution has equipped the fly with a hearing mechanism that utilizes multiple vibration modes to amplify interaural time and level differences. Here, we present a fully integrated, man-made mimic of the Ormia's hearing mechanism capable of replicating the sound localization ability of this special fly. A silicon-micromachined prototype is presented which uses multiple piezoelectric sensing ports to simultaneously transduce two orthogonal vibration modes of the sensing structure, thereby enabling simultaneous measurement of sound pressure and pressure gradient.
Exciton localization in polar and semipolar (112̅2) In0.2Ga0.8N/GaN multiple quantum wells
NASA Astrophysics Data System (ADS)
Dinh, Duc V.; Presa, Silvino; Maaskant, Pleun P.; Corbett, Brian; Parbrook, Peter J.
2016-08-01
The exciton localization (ELZ) in polar (0001) and semipolar (112̅2) In0.2Ga0.8N/GaN multiple-quantum-well (MQW) structures has been studied by excitation power density and temperature dependent photoluminescence. The ELZ in the (112̅2) MQW was found to be much stronger (ELZ degree σE ~ 40-70 meV) than in the (0001) MQW (σE ~ 5-11 meV), which was attributed to the anisotropic growth on the (112̅2) surface. This strong ELZ was found to cause a blue-shift of the (112̅2) MQW exciton emission with rising temperature from 200 to 340 K, irrespective of the excitation source used. The lower luminescence efficiency of the (112̅2) MQW was attributed to its anisotropic growth and to higher concentrations of unintentional impurities and point defects than in the (0001) MQW.
A management and optimisation model for water supply planning in water deficit areas
NASA Astrophysics Data System (ADS)
Molinos-Senante, María; Hernández-Sancho, Francesc; Mocholí-Arce, Manuel; Sala-Garrido, Ramón
2014-07-01
The integrated water resources management approach has proven to be a suitable option for efficient, equitable and sustainable water management. In water-poor regions experiencing acute and/or chronic shortages, optimisation techniques are a useful tool for supporting the decision process of water allocation. In order to maximise the value of water use, an optimisation model was developed which involves multiple supply sources (conventional and non-conventional) and multiple users. Penalties, representing monetary losses in the event of an unfulfilled water demand, have been incorporated into the objective function. This model represents a novel approach which considers water distribution efficiency and the physical connections between water supply and demand points. Subsequent empirical testing using data from a Spanish Mediterranean river basin demonstrated the usefulness of the global optimisation model to solve existing water imbalances at the river basin level.
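To make the structure of such a model concrete, here is a minimal linear-programming sketch with capacity-limited sources, multiple demand points, and penalty terms for unmet demand in the objective. The network, costs, and penalty values are illustrative assumptions, not the paper's formulation.

```python
# Minimal LP sketch: allocate water from capacity-limited sources to demand
# points, with a penalty for unmet demand in the objective.
import numpy as np
from scipy.optimize import linprog

cap = np.array([60.0, 40.0])              # source capacities (hm3/yr)
dem = np.array([30.0, 45.0, 35.0])        # demands at three points
cost = np.array([[1.0, 2.0, 4.0],         # unit delivery cost, source i -> user j
                 [3.0, 1.5, 2.5]])
penalty = np.array([10.0, 8.0, 12.0])     # monetary loss per unit unmet demand

ns, nu = cost.shape
c = np.concatenate((cost.ravel(), penalty))   # variables: x[i,j], then shortfall s[j]

A_ub = np.zeros((ns, ns * nu + nu))           # sum_j x[i,j] <= cap[i]
for i in range(ns):
    A_ub[i, i * nu:(i + 1) * nu] = 1.0

A_eq = np.zeros((nu, ns * nu + nu))           # sum_i x[i,j] + s[j] = dem[j]
for j in range(nu):
    for i in range(ns):
        A_eq[j, i * nu + j] = 1.0
    A_eq[j, ns * nu + j] = 1.0

res = linprog(c, A_ub=A_ub, b_ub=cap, A_eq=A_eq, b_eq=dem, method="highs")
print(res.x[:ns * nu].reshape(ns, nu))        # optimal allocations
print(res.x[ns * nu:])                        # shortfalls absorbed by penalties
```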
NASA Astrophysics Data System (ADS)
Hanyu, Ryosuke; Tsuji, Toshiaki
This paper proposes a whole-body haptic sensing system that has multiple supporting points between the body frame and the end-effector. The system consists of an end-effector and multiple force sensors. Using this mechanism, the position of a contact force on the surface can be calculated without any sensor array. A haptic sensing system with a single supporting point has previously been developed by the present authors; however, that system has drawbacks such as low stiffness and low strength. Therefore, in this study, a mechanism with multiple supporting points is proposed and its performance verified. The basic concept of the mechanism is first introduced, followed by an experimental evaluation of the proposed method.
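The sensor-array-free localization rests on the rigid-body relation between the net contact force F, the net moment M (here the sum over the supporting-point sensors), and the contact point r: M = r × F determines r up to a line along F, and the known end-effector surface selects the point. A toy sketch under these assumptions, not the authors' algorithm:

```python
# Recover a single contact location from net force and moment. The
# surface model (a plane at a known height) and all numbers are illustrative.
import numpy as np

def contact_point(F, M, plane_z):
    """Solve M = r x F for r on the plane z = plane_z.

    All points r(t) = (F x M)/|F|^2 + t*F produce the same moment; the
    known surface picks out a unique one (assumes F has a z component).
    """
    r0 = np.cross(F, M) / np.dot(F, F)
    t = (plane_z - r0[2]) / F[2]
    return r0 + t * F

F = np.array([1.0, -2.0, -5.0])            # net contact force (sensor frame)
r_true = np.array([0.03, -0.01, 0.02])     # true contact point on the shell
M = np.cross(r_true, F)                    # net moment the sensors would report
print(contact_point(F, M, plane_z=r_true[2]))  # -> [ 0.03 -0.01  0.02]
```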
Hong, Peilong; Li, Liming; Liu, Jianji; Zhang, Guoquan
2016-03-29
Young's double-slit or two-beam interference is of fundamental importance for understanding various interference effects, in which the stationary phase difference between the two beams plays the key role in first-order coherence. Different from the case of first-order coherence, in high-order optical coherence the statistical behavior of the optical phase plays the key role. In this article, by employing a fundamental interfering configuration with two classical point sources, we show that the high-order optical coherence between two classical point sources can be actively designed by controlling the statistical behavior of the relative phase difference between the two point sources. Synchronous position Nth-order subwavelength interference with an effective wavelength of λ/M was demonstrated, in which λ is the wavelength of the point sources and M is an integer not larger than N. Interestingly, we found that the synchronous position Nth-order interference fringe fingerprints the statistical trace of the random phase fluctuation of the two classical point sources; it therefore provides an effective way to characterize the statistical properties of phase fluctuation for incoherent light sources.
MCNP-REN - A Monte Carlo Tool for Neutron Detector Design Without Using the Point Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abhold, M.E.; Baker, M.C.
1999-07-25
The development of neutron detectors makes extensive use of the predictions of detector response through the use of Monte Carlo techniques in conjunction with the point reactor model. Unfortunately, the point reactor model fails to accurately predict detector response in common applications. For this reason, the general Monte Carlo N-Particle code (MCNP) was modified to simulate the pulse streams that would be generated by a neutron detector and normally analyzed by a shift register. This modified code, MCNP - Random Exponentially Distributed Neutron Source (MCNP-REN), along with the Time Analysis Program (TAP), predicts neutron detector response without using the point reactor model, making it unnecessary for the user to decide whether or not the assumptions of the point model are met for their application. MCNP-REN is capable of simulating standard neutron coincidence counting as well as neutron multiplicity counting. Measurements of MOX fresh fuel made using the Underwater Coincidence Counter (UWCC) as well as measurements of HEU reactor fuel using the active neutron Research Reactor Fuel Counter (RRFC) are compared with calculations. The method used in MCNP-REN is demonstrated to be fundamentally sound and shown to eliminate the need to use the point model for detector performance predictions.
Development of a Multi-Point Microwave Interferometry (MPMI) Method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Specht, Paul Elliott; Cooper, Marcia A.; Jilek, Brook Anton
2015-09-01
A multi-point microwave interferometer (MPMI) concept was developed for non-invasively tracking a shock, reaction, or detonation front in energetic media. Initially, a single-point, heterodyne microwave interferometry capability was established. The design, construction, and verification of the single-point interferometer provided a knowledge base for the creation of the MPMI concept. The MPMI concept uses an electro-optic (EO) crystal to impart a time-varying phase lag onto a laser at the microwave frequency. Polarization optics converts this phase lag into an amplitude modulation, which is analyzed in a heterodyne interferometer to detect Doppler shifts in the microwave frequency. A version of the MPMI was constructed to experimentally measure the frequency of a microwave source through the EO modulation of a laser. The successful extraction of the microwave frequency proved the underlying physical concept of the MPMI design, and highlighted the challenges associated with the longer microwave wavelength. The frequency measurements made with the current equipment contained too much uncertainty for an accurate velocity measurement. Potential alterations to the current construction are presented to improve the quality of the measured signal and enable multiple accurate velocity measurements.
Comparison of two stand-alone CADe systems at multiple operating points
NASA Astrophysics Data System (ADS)
Sahiner, Berkman; Chen, Weijie; Pezeshk, Aria; Petrick, Nicholas
2015-03-01
Computer-aided detection (CADe) systems are typically designed to work at a given operating point: the device displays a mark if and only if the level of suspiciousness of a region of interest is above a fixed threshold. To compare the standalone performances of two systems, one approach is to select the parameters of the systems to yield a target false-positive rate that defines the operating point, and to compare the sensitivities at that operating point. Increasingly, CADe developers offer multiple operating points, so that comparing two CADe systems involves multiple comparisons. To control the Type I error, multiple-comparison correction is needed to keep the family-wise error rate (FWER) below a given alpha level. The sensitivities of a single modality at different operating points are correlated. In addition, the sensitivities of the two modalities at the same or different operating points are also likely to be correlated. It has been shown in the literature that when test statistics are correlated, well-known methods for controlling the FWER are conservative. In this study, we compared the FWER and power of three methods, namely the Bonferroni, step-up, and adjusted step-up methods, in comparing the sensitivities of two CADe systems at multiple operating points, where the adjusted step-up method uses the estimated correlations. Our results indicate that the adjusted step-up method has a substantial advantage over the other two methods in terms of both the FWER and power.
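Of the three procedures compared, Bonferroni and a standard step-up rule are easy to state; the sketch below implements Bonferroni and Hochberg's step-up procedure (one plausible reading of "step-up" here; the correlation-adjusted variant studied in the paper is not reproduced).

```python
# Bonferroni vs. Hochberg step-up FWER control over m p-values.
import numpy as np

def bonferroni(pvals, alpha=0.05):
    pvals = np.asarray(pvals)
    return pvals <= alpha / len(pvals)            # rejection flags

def hochberg(pvals, alpha=0.05):
    pvals = np.asarray(pvals)
    m = len(pvals)
    reject = np.zeros(m, dtype=bool)
    for rank, idx in enumerate(np.argsort(pvals)[::-1]):
        # rank 0 compares the largest p to alpha/1, rank 1 to alpha/2, ...
        if pvals[idx] <= alpha / (rank + 1):
            reject[np.argsort(pvals)[:m - rank]] = True  # reject all smaller p
            break
    return reject

# Toy p-values from sensitivity comparisons at four operating points.
p = [0.010, 0.040, 0.041, 0.042]
print(bonferroni(p))   # conservative: rejects only the smallest p here
print(hochberg(p))     # step-up: rejects all four in this example
```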
We evaluate the influence of multiple sources of faecal indicator bacteria in recreational water bodies on potential human health risk by considering waters impacted by human and animal sources, human and non-pathogenic sources, and animal and non-pathogenic sources. We illustrat...
Incomplete Multisource Transfer Learning.
Ding, Zhengming; Shao, Ming; Fu, Yun
2018-02-01
Transfer learning is generally exploited to adapt well-established source knowledge for learning tasks in a weakly labeled or unlabeled target domain. Nowadays, it is common to see multiple sources available for knowledge transfer, each of which, however, may not include complete class information of the target domain. Naively merging multiple sources together would lead to inferior results due to the large divergence among the sources. In this paper, we attempt to utilize incomplete multiple sources for effective knowledge transfer to facilitate the learning task in the target domain. To this end, we propose incomplete multisource transfer learning with knowledge transfer in two directions: cross-domain transfer from each source to the target, and cross-source transfer. In particular, in the cross-domain direction, we deploy latent low-rank transfer learning guided by iterative structure learning to transfer knowledge from each single source to the target domain. This compensates for missing data in each source using the complete target data. In the cross-source direction, an unsupervised manifold regularizer and effective multisource alignment are explored to jointly compensate for missing data from one portion of the sources to another. In this way, both marginal and conditional distribution discrepancies in the two directions are mitigated. Experimental results on standard cross-domain benchmarks and synthetic data sets demonstrate the effectiveness of our proposed model in knowledge transfer from incomplete multiple sources.
Scattering of focused ultrasonic beams by cavities in a solid half-space.
Rahni, Ehsan Kabiri; Hajzargarbashi, Talieh; Kundu, Tribikram
2012-08-01
The ultrasonic field generated by a point focused acoustic lens placed in a fluid medium adjacent to a solid half-space, containing one or more spherical cavities, is modeled. The semi-analytical distributed point source method (DPSM) is followed for the modeling. This technique properly takes into account the interaction effect between the cavities placed in the focused ultrasonic field, fluid-solid interface and the lens surface. The approximate analytical solution that is available in the literature for the single cavity geometry is very restrictive and cannot handle multiple cavity problems. Finite element solutions for such problems are also prohibitively time consuming at high frequencies. Solution of this problem is necessary to predict when two cavities placed in close proximity inside a solid can be distinguished by an acoustic lens placed outside the solid medium and when such distinction is not possible.
High-speed spatial scanning pyrometer
NASA Technical Reports Server (NTRS)
Cezairliyan, A.; Chang, R. F.; Foley, G. M.; Miller, A. P.
1993-01-01
A high-speed spatial scanning pyrometer has been designed and developed to measure spectral radiance temperatures at multiple target points along the length of a rapidly heating/cooling specimen in dynamic thermophysical experiments at high temperatures (above about 1800 K). The design, which is based on a self-scanning linear silicon array containing 1024 elements, enables the pyrometer to measure spectral radiance temperatures (nominally at 650 nm) at 1024 equally spaced points along a 25-mm target length. The elements of the array are sampled consecutively every 1 microsec, thereby permitting one cycle of measurements to be completed in approximately 1 msec. Procedures for calibration and temperature measurement as well as the characteristics and performance of the pyrometer are described. The details of sources and estimated magnitudes of possible errors are given. An example of measurements of radiance temperatures along the length of a tungsten rod, during its cooling following rapid resistive pulse heating, is presented.
Composite analysis for Escherichia coli at coastal beaches
Bertke, E.E.
2007-01-01
At some coastal beaches, concentrations of fecal-indicator bacteria can differ substantially between multiple points at the same beach at the same time. Because of this spatial variability, the recreational water quality at beaches is sometimes determined by stratifying a beach into several areas and collecting a sample from each area to analyze for the concentration of fecal-indicator bacteria. The average concentration of bacteria from those points is often compared to the recreational standard for advisory postings. Alternatively, if funds are limited, a single sample is collected to represent the beach. Compositing the samples collected from each section of the beach may yield data as accurate as averaging concentrations from multiple points, at a reduced cost. In the study described herein, water samples were collected at multiple points from three Lake Erie beaches and analyzed for Escherichia coli on modified mTEC agar (EPA Method 1603). From the multiple-point samples, a composite sample (n = 116) was formed at each beach by combining equal aliquots of well-mixed water from each point. Results from this study indicate that E. coli concentrations from the arithmetic average of multiple-point samples and from composited samples are not significantly different (t = 1.59, p = 0.1139) and yield similar measures of recreational water quality; additionally, composite samples could result in a significant cost savings.
Hunt, D C; Tanioka, Kenkichi; Rowlands, J A
2007-12-01
The flat-panel detector (FPD) is the state-of-the-art detector for digital radiography. The FPD can acquire images in real-time, has superior spatial resolution, and is free of the problems of x-ray image intensifiers-veiling glare, pin-cushion and magnetic distortion. However, FPDs suffer from poor signal to noise ratio performance at typical fluoroscopic exposure rates where the quantum noise is reduced to the point that it becomes comparable to the fixed electronic noise. It has been shown previously that avalanche multiplication gain in amorphous selenium (a-Se) can provide the necessary amplification to overcome the electronic noise of the FPD. Avalanche multiplication, however, comes with its own intrinsic contribution to the noise in the form of gain fluctuation noise. In this article a cascaded systems analysis is used to present a modified metric related to the detective quantum efficiency. The modified metric is used to study a diagnostic x-ray imaging system in the presence of intrinsic avalanche multiplication noise independently from other noise sources, such as electronic noise. An indirect conversion imaging system is considered to make the study independent of other avalanche multiplication related noise sources, such as the fluctuations arising from the depth of x-ray absorption. In this case all the avalanche events are initiated at the surface of the avalanche layer, and there are no fluctuations in the depth of absorption. Experiments on an indirect conversion x-ray imaging system using avalanche multiplication in a layer of a-Se are also presented. The cascaded systems analysis shows that intrinsic noise of avalanche multiplication will not have any deleterious influence on detector performance at zero spatial frequency in x-ray imaging provided the product of conversion gain, coupling efficiency, and optical quantum efficiency are much greater than a factor of 2. The experimental results show that avalanche multiplication in a-Se behaves as an intrinsic noise free avalanche multiplication, in accordance with our theory. Provided good coupling efficiency and high optical quantum efficiency are maintained, avalanche multiplication in a-Se has the potential to increase the gain and make negligible contribution to the noise, thereby improving the performance of indirect FPDs in fluoroscopy.
Ghost imaging with bucket detection and point detection
NASA Astrophysics Data System (ADS)
Zhang, De-Jian; Yin, Rao; Wang, Tong-Biao; Liao, Qing-Hua; Li, Hong-Guo; Liao, Qinghong; Liu, Jiang-Tao
2018-04-01
We experimentally investigate ghost imaging with bucket detection and point detection using three types of illuminating sources: (a) a pseudo-thermal light source; (b) an amplitude-modulated true thermal light source; (c) an amplitude-modulated laser source. Experimental results show that the quality of ghost images reconstructed with true thermal light or a laser beam is insensitive to the use of a bucket or point detector; however, the quality of ghost images reconstructed with pseudo-thermal light is better in the bucket-detector case than in the point-detector case. Our theoretical analysis shows that the reason for this lies in the first-order transverse coherence of the illuminating source.
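The reconstruction itself is a second-order intensity correlation. A minimal bucket-detector sketch with synthetic speckle is below; it deliberately omits the source-coherence modelling that drives the bucket-versus-point differences reported above.

```python
# Correlation ghost imaging with a bucket detector:
#   G(x) = <I(x)*B> - <I(x)><B>
# recovers the object from speckle frames that never resolve it directly.
# Speckle statistics and the object are illustrative.
import numpy as np

rng = np.random.default_rng(0)
npix, nshots = 64, 5000
obj = np.zeros(npix)
obj[20:28] = 1.0                                # transmission function (toy object)
obj[40:44] = 0.6

I = rng.exponential(1.0, size=(nshots, npix))   # pseudo-thermal speckle frames
B = (I * obj).sum(axis=1)                       # bucket signal per frame
G = (I * B[:, None]).mean(axis=0) - I.mean(axis=0) * B.mean()
print(np.round(G[16:48], 1))                    # correlation peaks trace the object
```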
Distinguishing dark matter from unresolved point sources in the Inner Galaxy with photon statistics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Samuel K.; Lisanti, Mariangela; Safdi, Benjamin R., E-mail: samuelkl@princeton.edu, E-mail: mlisanti@princeton.edu, E-mail: bsafdi@princeton.edu
2015-05-01
Data from the Fermi Large Area Telescope suggests that there is an extended excess of GeV gamma-ray photons in the Inner Galaxy. Identifying potential astrophysical sources that contribute to this excess is an important step in verifying whether the signal originates from annihilating dark matter. In this paper, we focus on the potential contribution of unresolved point sources, such as millisecond pulsars (MSPs). We propose that the statistics of the photons, in particular the flux probability density function (PDF) of the photon counts below the point-source detection threshold, can potentially distinguish between the dark-matter and point-source interpretations. We calculate the flux PDF via the method of generating functions for these two models of the excess. Working in the framework of Bayesian model comparison, we then demonstrate that the flux PDF can potentially provide evidence for an unresolved MSP-like point-source population.
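The intuition can be checked with a quick Monte Carlo: smooth emission and an unresolved point-source population with the same mean produce different photon-count histograms. The source-count law and all numbers below are illustrative assumptions, not the paper's generating-function calculation.

```python
# Smooth (dark-matter-like) emission vs. unresolved point sources with the
# same mean flux: the point-source sky has a heavier photon-count tail.
import numpy as np

rng = np.random.default_rng(1)
npix, mean_counts = 100_000, 5.0

smooth = rng.poisson(mean_counts, npix)            # pure Poisson counts

def draw_fluxes(n):
    return rng.pareto(1.5, n) + 1.0                # classical Pareto: dN/dS ~ S**-2.5

n_src = rng.poisson(0.5, npix)                     # sources per pixel
s = np.array([draw_fluxes(n).sum() for n in n_src])
s *= mean_counts / s.mean()                        # match the mean flux per pixel
clumpy = rng.poisson(s)

for name, c in (("smooth", smooth), ("point sources", clumpy)):
    print(f"{name:14s} mean={c.mean():.2f} var={c.var():.2f} "
          f"P(k>=15)={np.mean(c >= 15):.4f}")      # heavier tail for point sources
```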
Pointing with Power or Creating with Chalk
ERIC Educational Resources Information Center
Rudow, Sasha R.; Finck, Joseph E.
2015-01-01
This study examines the attitudes of students on the use of PowerPoint and chalk/white boards in college science lecture classes. Students were asked to complete a survey regarding their experiences with PowerPoint and chalk/white boards in their science classes. Both multiple-choice and short answer questions were used. The multiple-choice…
NASA Astrophysics Data System (ADS)
Akita, Manabu; Yoshida, Satoru; Nakamura, Yoshitaka; Morimoto, Takeshi; Ushio, Tomoo; Kawasaki, Zen-Ichiro; Wang, Daohong
The Lightning Research Group of Osaka University (LRG-OU) has been developing and improving the VHF broadband digital interferometer (DITF) for thunderstorm observations. It enables us to locate the impulsive VHF radiation sources caused by lightning discharges with extremely high resolution. As a result of the VHF observations during the 2007-2008 winter season in the Japan Sea coastal area, cloud-to-ground (CG) flashes that neutralize multiple charge regions inside thunderclouds were visualized by the VHF broadband DITF. The first flash is a positive CG flash that neutralizes multiple positive charge regions in a single flash. The second flash is a bipolar lightning flash that neutralizes both positive and negative charge inside thunderclouds. In the case of the bipolar lightning flashes, some tens of milliseconds after the return strokes, the subsequent negative breakdowns initiated near the initiation points of the preceding negative stepped leaders. It was also found that the altitudes of the negative charge regions are lower than 2 km. The bipolar lightning flashes observed in this campaign neutralize positive charge after lowering negative charge to the ground.
NASA Astrophysics Data System (ADS)
Yang, X.; Luo, X.; Zheng, Z.
2012-04-01
It is increasingly realized that non-point pollution sources contribute significantly to water environment deterioration in China. Compared to developed countries, non-point source pollution in China has the unique characteristics of strong intensity and compositional complexity due to its special socioeconomic conditions. First, more than 50% of its 1.3 billion people are rural. Sewage from the majority of rural households is discharged either without or with only minimal treatment. The large amount of erratic rural sewage discharge is a significant source of water pollution. Second, China is plagued with serious agricultural pollution due to widespread improper application of fertilizers and pesticides. Finally, there is insufficient disposal and recycling of rural wastes such as livestock manure and crop straw. Pollutant loads from various sources have far exceeded the environmental assimilation capacity in many parts of China. The Lake Tai basin is one typical example. Lake Tai is the third largest freshwater lake in China. The basin is located in the highly developed and densely populated Yangtze River Delta. While accounting for 0.4% of China's land area and 2.9% of its population, the Lake Tai basin generates more than 14% of China's Gross Domestic Product (GDP), and the basin's GDP per capita is 3.5 times the national average. Lake Tai is vital to the basin's socio-economic development, providing multiple services including water supply for municipal, industrial, and agricultural needs, navigation, flood control, fishery, and tourism. Unfortunately, the fast economic development has been accompanied by serious water environment deterioration in the Lake Tai basin. The lake has become increasingly eutrophic and has frequently suffered from cyanobacterial blooms in recent decades. The Chinese government has made tremendous investments to mitigate water pollution in the basin. Nevertheless, the trend of deteriorating water quality has yet to be reversed. At least two factors contribute to the dichotomy between huge investment and limited results. First, the majority of the efforts have been limited to engineering approaches to water pollution control, ignoring the important roles of non-engineering approaches and stakeholder participation. Second, the complex hydrological regime of the basin may aggravate the impacts of various pollutant sources. Using the Yincungang canal, one major tributary of Lake Tai, as an example, we discuss our work on both the hydrological and socio-economic factors affecting the water quality of the canal, as well as the grand challenges of coupling hydrological and socio-economic systems in the region.
Keywords: non-point source pollution, rural sewage, agricultural pollution, spatio-temporal pattern, stakeholder participation
STATISTICS OF GAMMA-RAY POINT SOURCES BELOW THE FERMI DETECTION LIMIT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Malyshev, Dmitry; Hogg, David W., E-mail: dm137@nyu.edu
2011-09-10
An analytic relation between the statistics of photons in pixels and the number counts of multi-photon point sources is used to constrain the distribution of gamma-ray point sources below the Fermi detection limit at energies above 1 GeV and at latitudes below and above 30 deg. The derived source-count distribution is consistent with the distribution found by the Fermi Collaboration based on the first Fermi point-source catalog. In particular, we find that the contribution of resolved and unresolved active galactic nuclei (AGNs) to the total gamma-ray flux is below 20%-25%. In the best-fit model, the AGN-like point-source fraction is 17% ± 2%. Using the fact that the Galactic emission varies across the sky while the extragalactic diffuse emission is isotropic, we put a lower limit of 51% on Galactic diffuse emission and an upper limit of 32% on the contribution from extragalactic weak sources, such as star-forming galaxies. Possible systematic uncertainties are discussed.
Casper, Andrew; Liu, Dalong; Ebbini, Emad S
2012-01-01
A system for the realtime generation and control of multiple-focus ultrasound phased-array heating patterns is presented. The system employs a 1-MHz, 64-element array and driving electronics capable of fine spatial and temporal control of the heating pattern. The driver is integrated with a realtime 2-D temperature imaging system implemented on a commercial scanner. The coordinates of the temperature control points are defined on B-mode guidance images from the scanner, together with the temperature set points and controller parameters. The temperature at each point is controlled by an independent proportional, integral, and derivative controller that determines the focal intensity at that point. Optimal multiple-focus synthesis is applied to generate the desired heating pattern at the control points. The controller dynamically reallocates the power available among the foci from the shared power supply upon reaching the desired temperature at each control point. Furthermore, anti-windup compensation is implemented at each control point to improve the system dynamics. In vitro experiments in tissue-mimicking phantom demonstrate the robustness of the controllers for short (2-5 s) and longer multiple-focus high-intensity focused ultrasound exposures. Thermocouple measurements in the vicinity of the control points confirm the dynamics of the temperature variations obtained through noninvasive feedback. © 2011 IEEE
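A toy version of the control scheme, with one PID per control point, a shared power budget that is reallocated when the total request exceeds it, a clamping anti-windup, and a first-order thermal plant, is sketched below; every model element and gain is an assumption for illustration, not the paper's controller.

```python
# Toy multi-focus temperature control: one PID per control point requests
# focal intensity; a shared power budget is rescaled when oversubscribed.
import numpy as np

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integ, self.prev_err = 0.0, 0.0
    def step(self, err):
        self.integ = np.clip(self.integ + err * self.dt, -5.0, 5.0)  # anti-windup
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integ + self.kd * deriv

dt, budget = 0.05, 10.0
setpoint = np.array([10.0, 6.0])          # desired temperature rise (deg C) per focus
T = np.zeros(2)
pids = [PID(2.0, 1.0, 0.05, dt) for _ in setpoint]
for _ in range(400):                      # 20 s of closed-loop heating
    u = np.array([p.step(sp - t) for p, sp, t in zip(pids, setpoint, T)])
    u = np.clip(u, 0.0, None)
    if u.sum() > budget:                  # reallocate the shared supply
        u *= budget / u.sum()
    T += dt * (u - 0.5 * T)               # first-order plant: heating minus loss
print(np.round(T, 2))                     # -> close to the set points
```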
MODELING PHOTOCHEMISTRY AND AEROSOL FORMATION IN POINT SOURCE PLUMES WITH THE CMAQ PLUME-IN-GRID
Emissions of nitrogen oxides and sulfur oxides from the tall stacks of major point sources are important precursors of a variety of photochemical oxidants and secondary aerosol species. Plumes released from point sources exhibit rather limited dimensions and their growth is gradu...
X-ray Point Source Populations in Spiral and Elliptical Galaxies
NASA Astrophysics Data System (ADS)
Colbert, E.; Heckman, T.; Weaver, K.; Ptak, A.; Strickland, D.
2001-12-01
In the era of the Einstein and ASCA satellites, it was known that the total hard X-ray luminosity from non-AGN galaxies was fairly well correlated with the total blue luminosity. However, the origin of this hard component was not well understood. Possibilities that were considered included X-ray binaries, extended upscattered far-infrared light via the inverse-Compton process, extended hot 10^7 K gas (especially in elliptical galaxies), or even an active nucleus. Now, for the first time, we know from Chandra images that a significant amount of the total hard X-ray emission comes from individual X-ray point sources. We present here spatial and spectral analyses of Chandra data for X-ray point sources in a sample of ~40 galaxies, including both spiral galaxies (starbursts and non-starbursts) and elliptical galaxies. We discuss the relationship between the X-ray point source population and the properties of the host galaxies. We show that the slopes of the point-source X-ray luminosity functions differ for different host galaxy types and discuss possible reasons why. We also present detailed X-ray spectral analyses of several of the most luminous X-ray point sources (i.e., IXOs, a.k.a. ULXs), and discuss various scenarios for their origin.
Aloisio, Kathryn M.; Swanson, Sonja A.; Micali, Nadia; Field, Alison; Horton, Nicholas J.
2015-01-01
Clustered data arise in many settings, particularly within the social and biomedical sciences. As an example, multiple-source reports are commonly collected in child and adolescent psychiatric epidemiologic studies, where researchers use various informants (e.g. parent and adolescent) to provide a holistic view of a subject's symptomatology. Fitzmaurice et al. (1995) have described estimation of multiple-source models using a standard generalized estimating equation (GEE) framework. However, these studies often have missing data due to the additional stages of consent and assent required. The usual GEE is unbiased when missingness is Missing Completely at Random (MCAR) in the sense of Little and Rubin (2002). This is a strong assumption that may not be tenable. Other options, such as weighted generalized estimating equations (WGEEs), are computationally challenging when missingness is non-monotone. Multiple imputation is an attractive method for fitting incomplete data models while only requiring the less restrictive Missing at Random (MAR) assumption. Estimation for partially observed clustered data was previously computationally challenging, but recent developments in Stata have facilitated its use in practice. We demonstrate how to utilize multiple imputation in conjunction with a GEE to investigate the prevalence of disordered eating symptoms in adolescents as reported by parents and adolescents, as well as factors associated with concordance and prevalence. The methods are motivated by the Avon Longitudinal Study of Parents and their Children (ALSPAC), a cohort study that enrolled more than 14,000 pregnant mothers in 1991-92 and has followed the health and development of their children at regular intervals. While point estimates were fairly similar to the GEE under MCAR, the MAR model had smaller standard errors while requiring less stringent assumptions regarding missingness. PMID:25642154
NASA Astrophysics Data System (ADS)
Sarangapani, R.; Jose, M. T.; Srinivasan, T. K.; Venkatraman, B.
2017-07-01
Methods for the determination of the efficiency of an aged high-purity germanium (HPGe) detector for gaseous sources are presented in this paper. X-ray radiography of the detector was performed to obtain detector dimensions for computational purposes. The dead layer thickness of the HPGe detector was ascertained from experiments and Monte Carlo computations. Experimental work with standard point and liquid sources in several cylindrical geometries was undertaken to obtain the energy-dependent efficiency. Monte Carlo simulations were performed to compute efficiencies for point, liquid and gaseous sources. Self-absorption correction factors were obtained using mathematical equations for volume sources and MCNP simulations. Self-absorption correction and point source methods were used to estimate the efficiency for gaseous sources. The efficiencies determined in the present work have been used to estimate the activity of a cover gas sample from a fast reactor.
Rogers, Geoffrey
2018-06-01
The Yule-Nielsen effect is an influence on halftone color caused by the diffusion of light within the paper upon which the halftone ink is printed. The diffusion can be characterized by a point spread function. In this paper, a point spread function for paper is derived using the multiple-path model of reflection. This model treats the interaction of light with turbid media as a random walk. Using the multiple-path point spread function, a general expression is derived for the average reflectance of light from a frequency-modulated halftone, in which dot size is constant and the number of dots is varied, with the arrangement of dots random. It is also shown that the line spread function derived from the multiple-path model has the form of a Lorentzian function.
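A toy numerical version of the effect: a random fixed-dot-size (FM) halftone is convolved with a Lorentzian-shaped spread function and the enter-spread-exit average reflectance is evaluated. The Lorentzian shape echoes the line spread function mentioned above, but the widths, coverage, and ink transmittance are illustrative assumptions.

```python
# Toy Yule-Nielsen calculation for a random FM halftone: light crosses the
# ink (transmittance t_ink at dots), spreads in the paper, and crosses the
# ink again, so average reflectance is <T(x) * (PSF conv T)(x)>.
import numpy as np

rng = np.random.default_rng(3)
n, dot_w, n_dots, t_ink = 4096, 8, 180, 0.1
T = np.ones(n)
for start in rng.integers(0, n - dot_w, n_dots):  # random equal-size dots
    T[start:start + dot_w] = t_ink

x = np.arange(n) - n // 2
for width in (0.5, 4.0, 32.0):                    # scattering length in paper (px)
    psf = 1.0 / (1.0 + (x / width) ** 2)          # Lorentzian-shaped spread
    psf /= psf.sum()
    spread = np.fft.ifft(np.fft.fft(T) * np.fft.fft(np.fft.ifftshift(psf))).real
    print(f"spread width {width:5.1f} px -> R = {np.mean(T * spread):.3f}")
# Wider spreading lowers R (optical dot gain): more light that entered the
# paper between dots exits under a dot and is absorbed on both passes.
```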
Purdy, P H; Tharp, N; Stewart, T; Spiller, S F; Blackburn, H D
2010-10-15
Boar semen is typically collected, diluted and cooled for AI use over numerous days, or frozen immediately after shipping to capable laboratories. The storage temperature and pH of diluted, cooled boar semen could influence the fertility of boar sperm. Therefore, the purpose of this study was to determine the effects of pH and storage temperature on fresh and frozen-thawed boar sperm motility end points. Semen samples (n = 199) were collected at four boar stud facilities, diluted, cooled and shipped overnight to the National Animal Germplasm Program laboratory for freezing and analysis. The temperature, pH and motility characteristics, determined using computer-automated semen analysis, were measured on arrival. Samples were then cryopreserved and post-thaw motility was determined. The commercial stud was a significant source of variation for mean semen temperature and pH, as well as for total and progressive motility and numerous other sperm motility characteristics. Based on multiple regression analysis, pH was not a significant source of variation for fresh or frozen-thawed boar sperm motility end points. However, significant models were derived which demonstrated that storage temperature, boar, and commercial stud influenced sperm motility end points and the potential for surviving cryopreservation. We inferred that maintaining cooled boar semen at approximately 16 °C during storage will result in higher fresh and frozen-thawed boar sperm quality, which should result in greater fertility. Copyright © 2010 Elsevier Inc. All rights reserved.
Volumetric Acoustic Vector Intensity Probe
NASA Technical Reports Server (NTRS)
Klos, Jacob
2006-01-01
A new measurement tool capable of imaging the acoustic intensity vector throughout a large volume is discussed. This tool consists of an array of fifty microphones that form a spherical surface of radius 0.2 m. A simultaneous measurement of the pressure field across all the microphones provides time-domain near-field holograms. Near-field acoustical holography is used to convert the measured pressure into a volumetric vector intensity field as a function of frequency on a grid of points ranging from the center of the spherical surface to a radius of 0.4 m. The volumetric intensity is displayed on three-dimensional plots that are used to locate noise sources outside the volume. There is no restriction on the type of noise source that can be studied. The sphere is mobile and can be moved from location to location to hunt for unidentified noise sources. An experiment inside a Boeing 757 aircraft in flight successfully tested the ability of the array to locate low-noise-excited sources on the fuselage. Reference transducers located on suspected noise source locations can also be used to increase the ability of this device to separate and identify multiple noise sources at a given frequency by using the theory of partial field decomposition. The frequency range of operation is 0 to 1400 Hz. This device is ideal for the study of noise sources in commercial and military transportation vehicles in air, on land and underwater.
Pan, Huiyun; Lu, Xinwei; Lei, Kai
2017-12-31
A detailed investigation was conducted to study heavy metal contamination in road dust from four regions of Xi'an, Northwest China. The concentrations of eight heavy metals (Co, Cr, Cu, Mn, Ni, Pb, Zn and V) were determined by X-ray fluorescence. The mean concentrations of these elements were: 30.9 mg kg⁻¹ Co, 145.0 mg kg⁻¹ Cr, 54.7 mg kg⁻¹ Cu, 510.5 mg kg⁻¹ Mn, 30.8 mg kg⁻¹ Ni, 124.5 mg kg⁻¹ Pb, 69.6 mg kg⁻¹ V and 268.6 mg kg⁻¹ Zn. There was significant enrichment of Pb, Zn, Co, Cu and Cr based on the geo-accumulation index. Multivariate statistical analysis showed that the levels of Cu, Pb, Zn, Co and Cr were controlled by anthropogenic activities, while the levels of Mn, Ni and V were associated with natural sources. Principal component analysis and multiple linear regression were applied to determine the source apportionment. The results showed that traffic was the main source, with a contribution of 53.4%. Natural sources contributed 26.5%, and other anthropogenic pollution sources contributed 20.1%. Clear heavy metal pollution hotspots were identified by GIS mapping. The location of point pollution sources and the prevailing wind direction were found to be important factors in the spatial distribution of heavy metals. Copyright © 2017 Elsevier B.V. All rights reserved.
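In the spirit of the PCA/multiple-linear-regression apportionment used here, the sketch below extracts factor scores and regresses each metal on them to split its concentration among factors. The data are random placeholders and the contribution heuristic is a simplification of published APCS-MLR procedures, not this study's exact method.

```python
# PCA factor scores as source proxies, then a per-metal regression whose
# scaled coefficients give rough factor contributions.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
traffic, nature = rng.lognormal(0, 0.4, (2, 200))      # latent source strengths
X = np.column_stack([                                   # toy metal concentrations
    120 * traffic + 10 * nature,                        # Pb-like: traffic dominated
    250 * traffic + 30 * nature,                        # Zn-like
    5 * traffic + 500 * nature,                         # Mn-like: natural
]) + rng.normal(0, 5, (200, 3))

scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(X))
for name, y in zip(["Pb", "Zn", "Mn"], X.T):
    reg = LinearRegression().fit(scores, y)
    contrib = np.abs(reg.coef_) * scores.std(axis=0)    # rough share per factor
    print(name, np.round(contrib / contrib.sum(), 2))   # fraction from each PC
```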
Wang, R; Li, X A
2001-02-01
The dose parameters for the beta-particle emitting 90Sr/90Y source for intravascular brachytherapy (IVBT) have been calculated by different investigators. At distances far from the source, noticeable differences are seen in the parameters calculated using different Monte Carlo codes. The purpose of this work is to quantify as well as to understand these differences. We have compared a series of calculations using the EGS4, EGSnrc, and MCNP Monte Carlo codes. The data calculated and compared include the depth-dose curve for a broad parallel beam of electrons, and radial dose distributions for point electron sources (monoenergetic or polyenergetic) and for a real 90Sr/90Y source. For the 90Sr/90Y source, the doses at the reference position (2 mm radial distance) calculated by the three codes agree within 2%. However, the differences between the doses calculated by the three codes can be over 20% in the radial distance range of interest in IVBT. The difference increases with radial distance from the source, reaching 30% at the tail of the dose curve. These differences may be partially attributed to the different multiple scattering theories and Monte Carlo models for electron transport adopted in the three codes. Doses calculated by the EGSnrc code are more accurate than those by EGS4. The two calculations agree within 5% for radial distances <6 mm.
Discrimination between diffuse and point sources of arsenic at Zimapán, Hidalgo state, Mexico.
Sracek, Ondra; Armienta, María Aurora; Rodríguez, Ramiro; Villaseñor, Guadalupe
2010-01-01
There are two principal sources of arsenic in Zimapán. Point sources are linked to mining and smelting activities, and especially to mine tailings. Diffuse sources are not well defined and are linked to regional flow systems in carbonate rocks. Both sources are caused by the oxidation of arsenic-rich sulfidic mineralization. Point sources are characterized by a Ca-SO4-HCO3 ground water type and relatively enriched values of δD, δ¹⁸O, and δ³⁴S(SO4). Diffuse sources are characterized by a Ca-Na-HCO3 ground water type and more depleted values of δD, δ¹⁸O, and δ³⁴S(SO4). Values of δD and δ¹⁸O indicate a similar altitude of recharge for both arsenic sources and a stronger impact of evaporation for point sources in mine tailings. There are also different values of δ³⁴S(SO4) for the two sources, presumably due to different types of mineralization or isotopic zonality in the deposits. In Principal Component Analysis (PCA), principal component 1 (PC1), which describes the impact of sulfide oxidation and neutralization by the dissolution of carbonates, has higher values in samples from point sources. In spite of similar concentrations of As in ground water affected by diffuse and point sources (mean values 0.21 mg L⁻¹ and 0.31 mg L⁻¹, respectively, over the years 2003 to 2008), the diffuse sources have more impact on the health of the population in Zimapán. This is caused by the extraction of ground water from wells tapping the regional flow system. In contrast, wells located in the proximity of mine tailings are generally not used for water supply.
NASA Astrophysics Data System (ADS)
Chhetri, R.; Ekers, R. D.; Morgan, J.; Macquart, J.-P.; Franzen, T. M. O.
2018-06-01
We use Murchison Widefield Array observations of interplanetary scintillation (IPS) to determine the source counts of point (<0.3 arcsecond extent) sources and of all sources with some subarcsecond structure, at 162 MHz. We have developed the methodology to derive these counts directly from the IPS observables, while taking into account changes in sensitivity across the survey area. The counts of sources with compact structure follow the behaviour of the dominant source population above ~3 Jy, but below this they show Euclidean behaviour. We compare our counts to those predicted by simulations and find good agreement for our counts of sources with compact structure, but significant disagreement for point source counts. Using low radio frequency SEDs from the GLEAM survey, we classify point sources as Compact Steep-Spectrum (CSS), flat spectrum, or peaked. If we consider the CSS sources to be the more evolved counterparts of the peaked sources, the two categories combined comprise approximately 80% of the point source population. We calculate densities of potential calibrators brighter than 0.4 Jy at low frequencies and find 0.2 sources per square degree for point sources, rising to 0.7 sources per square degree if sources with more complex arcsecond structure are included. We extrapolate to estimate 4.6 sources per square degree at 0.04 Jy. We find that a peaked spectrum is an excellent predictor of compactness at low frequencies, increasing the number of good calibrators by a factor of three compared to the usual flat-spectrum criterion.
Zhang, Mingyuan; Fiol, Guilherme Del; Grout, Randall W.; Jonnalagadda, Siddhartha; Medlin, Richard; Mishra, Rashmi; Weir, Charlene; Liu, Hongfang; Mostafa, Javed; Fiszman, Marcelo
2014-01-01
Online knowledge resources such as Medline can address most clinicians' patient care information needs. Yet, significant barriers, notably lack of time, limit the use of these sources at the point of care. The most common information needs raised by clinicians are treatment-related. Comparative effectiveness studies allow clinicians to consider multiple treatment alternatives for a particular problem. Still, solutions are needed to enable efficient and effective consumption of comparative effectiveness research at the point of care. Objective: Design and assess an algorithm for automatically identifying comparative effectiveness studies and extracting the interventions investigated in these studies. Methods: The algorithm combines semantic natural language processing, Medline citation metadata, and machine learning techniques. We assessed the algorithm in a case study of treatment alternatives for depression. Results: Both precision and recall for identifying comparative studies were 0.83. A total of 86% of the interventions extracted perfectly or partially matched the gold standard. Conclusion: Overall, the algorithm achieved reasonable performance. The method provides building blocks for the automatic summarization of comparative effectiveness research to inform point-of-care decision-making. PMID:23920677
An experimental evaluation of monochromatic x-ray beam position monitors at Diamond Light Source
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bloomer, Chris, E-mail: chris.bloomer@diamond.ac.uk; Rehm, Guenther; Dolbnya, Igor P.
Maintaining the stability of the X-ray beam relative to the sample point is of paramount importance for beamlines and users wanting to perform cutting-edge experiments. The ability to detect, and subsequently compensate for, variations in X-ray beam position with effective diagnostics has multiple benefits: a reduction in commissioning and start-up time, less 'down-time', and an improvement in the quality of acquired data. At Diamond Light Source a methodical evaluation of a selection of monochromatic X-ray Beam Position Monitors (XBPMs), using a range of position detection techniques, and from a range of suppliers, was carried out. The results of these experiments are presented, showing the measured RMS noise on the position measurement of each device for a given flux, energy, beam size, and bandwidth. A discussion of the benefits and drawbacks of each of the various devices and techniques is also included.
Social care and support for elderly men and women in an urban and a rural area of Nepal.
Kshetri, Dan Bahadur Baidwar; Smith, Cairns S; Khadka, Mira
2012-09-01
This study aimed to describe the care and support for urban and rural elderly people of Bhaktapur district, Nepal. Efforts were made to identify features of general well-being associated with mental health, the person responsible for care and support, the capability to perform daily routine activities, sources of finance, and ownership of property. More than half of the respondents were found to have one or more features of loneliness, anxiety, depression and insomnia. The point prevalence of loneliness was higher among urban respondents above 80 years of age. Almost 9 in 10 respondents were able to dress, walk and maintain personal hygiene by themselves, and the majority of them were assisted by a spouse or son/daughter-in-law. Family support was a common source of income, and ownership of property was high.
Sensitivity Kernels for the Cross-Convolution Measure: Eliminate the Source in Waveform Tomography
NASA Astrophysics Data System (ADS)
Menke, W. H.
2017-12-01
We use the adjoint method to derive sensitivity kernels for the cross-convolution measure, a goodness-of-fit criterion that is applicable to seismic data containing closely spaced multiple arrivals, such as reverberating compressional waves and split shear waves. In addition to a general formulation, specific expressions for the sensitivity with respect to density, Lamé parameter and shear modulus are derived for an isotropic elastic solid. As is typical of adjoint methods, the kernels depend upon an adjoint field, the source of which, in this case, is the reference displacement field pre-multiplied by a matrix of cross-correlations of components of the observed field. We use a numerical simulation to evaluate the resolving power of a tomographic inversion that employs the cross-convolution measure. The estimated resolution kernel is point-like, indicating that the cross-convolution measure will perform well in waveform tomography settings.
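For orientation, the cross-convolution measure for a two-component record is commonly written as follows (after Menke and Levin's construction; the normalization and the two-component restriction are assumptions here):

```latex
% If observed components u_1, u_2 and predicted components v_1, v_2 share the
% same (unknown) source wavelet, then u_1 * v_2 = u_2 * v_1, so the misfit
E \;=\; \int \left[ (u_1 \ast v_2)(t) - (u_2 \ast v_1)(t) \right]^{2} \, dt
% vanishes for a perfect model without the source ever being estimated.
```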
[Medical image compression: a review].
Noreña, Tatiana; Romero, Eduardo
2013-01-01
Modern medicine is an increasingly complex, evidence-based activity; it relies on information from multiple sources: medical record text, sound recordings, and images and videos generated by a large number of devices. Medical imaging is one of the most important of these sources, since it offers comprehensive support of medical procedures for diagnosis and follow-up. However, the amount of information generated by image-capturing devices quickly exceeds the storage available in radiology services, generating additional costs for devices with greater storage capacity. Besides, the current trend of developing applications in cloud computing has limitations: even though virtual storage is available from anywhere, connections are made through the internet. In these scenarios, the optimal use of information necessarily requires powerful compression algorithms adapted to the needs of medical practice. In this paper we present a review of compression techniques used for image storage, and a critical analysis of them from the point of view of their use in clinical settings.
McAllister, Jane; Casino, Carmela; Davidson, Fiona; Power, Joan; Lawlor, Emer; Yap, Peng Lee; Simmonds, Peter; Smith, Donald B.
1998-01-01
The long-term evolution of the hepatitis C virus hypervariable region (HVR) and flanking regions of the E1 and E2 envelope proteins have been studied in a cohort of women infected from a common source of anti-D immunoglobulin. Whereas virus sequences in the infectious source were relatively homogeneous, distinct HVR variants were observed in each anti-D recipient, indicating that this region can evolve in multiple directions from the same point. Where HVR variants with dissimilar sequences were present in a single individual, the frequency of synonymous substitution in the flanking regions suggested that the lineages diverged more than a decade previously. Even where a single major HVR variant was present in an infected individual, this lineage was usually several years old. Multiple lineages can therefore coexist during long periods of chronic infection without replacement. The characteristics of amino acid substitution in the HVR were not consistent with the random accumulation of mutations and imply that amino acid replacement in the HVR was strongly constrained. Another variable region of E2 centered on codon 60 shows similar constraints, while HVR2 was relatively unconstrained. Several of these features are difficult to explain if a neutralizing immune response against the HVR is the only selective force operating on E2. The impact of PCR artifacts such as nucleotide misincorporation and the shuffling of dissimilar templates is discussed. PMID:9573256
Iwai, Kosuke; Shih, Kuan Cheng; Lin, Xiao; Brubaker, Thomas A; Sochol, Ryan D; Lin, Liwei
2014-10-07
Point-of-care (POC) and disposable biomedical applications demand low-power microfluidic systems with pumping components that provide controlled pressure sources. Unfortunately, external pumps have hindered the implementation of such microfluidic systems due to limitations associated with portability and power requirements. Here, we propose and demonstrate a 'finger-powered' integrated pumping system as a modular element to provide pressure head for a variety of advanced microfluidic applications, including finger-powered on-chip microdroplet generation. By utilizing a human finger for the actuation force, electrical power sources that are typically needed to generate pressure head were obviated. Passive fluidic diodes were designed and implemented to enable distinct fluids from multiple inlet ports to be pumped using a single actuation source. Both multilayer soft lithography and injection molding processes were investigated for device fabrication and performance. Experimental results revealed that the pressure head generated from a human finger could be tuned based on the geometric characteristics of the pumping system, with a maximum observed pressure of 7.6 ± 0.1 kPa. In addition to the delivery of multiple, distinct fluids into microfluidic channels, we also employed the finger-powered pumping system to achieve the rapid formation of both water-in-oil droplets (106.9 ± 4.3 μm in diameter) and oil-in-water droplets (75.3 ± 12.6 μm in diameter) as well as the encapsulation of endothelial cells in droplets without using any external or electrical controllers.
Binary Sources and Binary Lenses in Microlensing Surveys of MACHOs
NASA Astrophysics Data System (ADS)
Petrovic, N.; Di Stefano, R.; Perna, R.
2003-12-01
Microlensing is an intriguing phenomenon which may yield information about the nature of dark matter. Early observational searches identified hundreds of microlensing light curves. The data set consisted mainly of point-lens light curves and binary-lens events in which the light curves exhibit caustic crossings. Very few mildly perturbed light curves were observed, although this latter type should constitute the majority of binary lens light curves. Di Stefano (2001) has suggested that the failure to take binary effects into account may have influenced the estimates of optical depth derived from microlensing surveys. The work we report on here is the first step in a systematic analysis of binary lenses and binary sources and their impact on the results of statistical microlensing surveys. In order to assess the problem, we ran Monte Carlo simulations of various microlensing events involving binary stars (both as the source and as the lens). For each event with peak magnification > 1.34, we sampled the characteristic light curve and recorded the chi-squared value when fitting the curve with a point lens model; we used this to assess the perturbation rate. We also recorded the parameters of each system, the maximum magnification, the times at which each light curve started and ended and the number of caustic crossings. We found that both the binarity of sources and the binarity of lenses increased the lensing rate. While the binarity of sources had a negligible effect on the perturbation rates of the light curves, the binarity of lenses had a notable effect. The combination of binary sources and binary lenses produces an observable rate of interesting events exhibiting multiple "repeats" in which the magnification rises above and dips below 1.34 several times. Finally, the binarity of lenses impacted both the durations of the events and the maximum magnifications. This work was supported in part by the SAO intern program (NSF grant AST-9731923) and NASA contracts NAS8-39073 and NAS8-38248 (CXC).
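The "peak magnification > 1.34" selection used above comes from the standard point-source/point-lens light curve: A(u) = (u² + 2)/(u√(u² + 4)), and A(1) = 3/√5 ≈ 1.34, so the cut is equivalent to an impact parameter u0 < 1. A minimal sketch (parameter values are illustrative only):

```python
import numpy as np

def point_lens_magnification(t, t0, tE, u0):
    """Paczynski magnification for a point source lensed by a point mass."""
    u = np.hypot(u0, (t - t0) / tE)          # projected lens-source separation
    return (u**2 + 2) / (u * np.sqrt(u**2 + 4))

# A(u=1) = 3/sqrt(5) ~ 1.3416, hence "peak A > 1.34" <=> u0 < 1.
t = np.linspace(-40.0, 40.0, 400)            # days
lc = point_lens_magnification(t, t0=0.0, tE=20.0, u0=0.3)
print(lc.max())                              # peak magnification of this event
```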
Uncertainty Analyses for Back Projection Methods
NASA Astrophysics Data System (ADS)
Zeng, H.; Wei, S.; Wu, W.
2017-12-01
So far few comprehensive error analyses for back projection methods have been conducted, although it is evident that high frequency seismic waves can be easily affected by earthquake depth, focal mechanisms and the Earth's 3D structures. Here we perform 1D and 3D synthetic tests for two back projection methods, MUltiple SIgnal Classification (MUSIC) (Meng et al., 2011) and Compressive Sensing (CS) (Yao et al., 2011). We generate synthetics for both point sources and finite rupture sources with different depths, focal mechanisms, as well as 1D and 3D structures in the source region. The 3D synthetics are generated through a hybrid scheme of Direct Solution Method and Spectral Element Method. Then we back project the synthetic data using MUSIC and CS. The synthetic tests show that the depth phases can be back projected as artificial sources both in space and time. For instance, for a source depth of 10 km, back projection gives a strong signal 8 km away from the true source. Such bias increases with depth, e.g., the error of horizontal location could be larger than 20 km for a depth of 40 km. If the array is located around the nodal direction of direct P-waves, the teleseismic P-waves are dominated by the depth phases. Therefore, back projections are actually imaging the reflection points of depth phases more than the rupture front. Besides depth phases, the strong and long-lasting coda waves due to 3D effects near the trench can lead to additional complexities, which are also tested here. The strength contrast of different frequency contents in the rupture models also produces some variations to the back projection results. In the synthetic tests, MUSIC and CS derive consistent results. While MUSIC is more computationally efficient, CS works better for sparse arrays. In summary, our analyses indicate that the impact of various factors mentioned above should be taken into consideration when interpreting back projection images, before we can use them to infer the earthquake rupture physics.
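As a minimal illustration of what back projection does before MUSIC- or CS-style refinements are applied, a plain time-domain stack over a grid of candidate sources can be sketched as follows; the array geometry, the single apparent velocity, and all variable names are assumptions for illustration.

```python
import numpy as np

def back_project(waveforms, t, station_xy, grid_xy, v):
    """Minimal time-domain back projection (linear stack).

    waveforms: (n_sta, n_t) array of high-frequency records;
    station_xy, grid_xy: (n, 2) coordinates in km; v: assumed apparent
    velocity in km/s. Returns stacked power per grid node. MUSIC replaces
    this stack with a subspace estimator, CS with a sparsity constraint.
    """
    dt = t[1] - t[0]
    power = np.zeros(len(grid_xy))
    for g, xy in enumerate(grid_xy):
        delays = np.linalg.norm(station_xy - xy, axis=1) / v
        shifts = np.round((delays - delays.min()) / dt).astype(int)
        # Align each trace on the predicted travel time, then stack.
        stack = sum(np.roll(w, -s) for w, s in zip(waveforms, shifts))
        power[g] = np.sum(stack**2)
    return power
```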
Alberti, Luca; Colombo, Loris; Formentin, Giovanni
2018-04-15
The Lombardy Region in Italy is one of the most urbanized and industrialized areas in Europe. The presence of countless sources of groundwater pollution is therefore a matter of environmental concern. The sources of groundwater contamination can be classified into two different categories: 1) Point Sources (PS), which correspond to areas releasing plumes of high concentrations (i.e. hot-spots) and 2) Multiple-Point Sources (MPS), consisting of a series of unidentifiable small sources clustered within large areas, generating an anthropogenic diffuse contamination. The latter category frequently predominates in European Functional Urban Areas (FUA) and cannot be managed through standard remediation techniques, mainly because detecting the many different source areas releasing small contaminant mass in groundwater is unfeasible. A specific legislative action has been recently enacted at Regional level (DGR IX/3510-2012), in order to identify areas prone to anthropogenic diffuse pollution and their level of contamination. With a view to defining a management plan, it is necessary to find where MPS are most likely positioned. This paper describes a methodology devised to identify the areas with the highest likelihood to host potential MPS. A groundwater flow model was implemented for a pilot area located in the Milan FUA and, through the PEST code, a Null-Space Monte Carlo method was applied in order to generate a suite of several hundred hydraulic conductivity field realizations, each maintaining the model in a calibrated state and each consistent with the modelers' expert knowledge. Thereafter, the MODPATH code was applied to generate back-traced advective flowpaths for each of the models built using the conductivity field realizations. Maps were then created displaying the number of backtracked particles that crossed each model cell in each stochastic calibrated model. The result is considered to be representative of the FUA areas with the highest likelihood to host MPS responsible for diffuse contamination. Copyright © 2017 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aartsen, M. G.; Abraham, K.; Ackermann, M.
Observation of a point source of astrophysical neutrinos would be a “smoking gun” signature of a cosmic-ray accelerator. While IceCube has recently discovered a diffuse flux of astrophysical neutrinos, no localized point source has been observed. Previous IceCube searches for point sources in the southern sky were restricted by either an energy threshold above a few hundred TeV or poor neutrino angular resolution. Here we present a search for southern sky point sources with greatly improved sensitivities to neutrinos with energies below 100 TeV. By selecting charged-current ν_μ interacting inside the detector, we reduce the atmospheric background while retaining efficiency for astrophysical neutrino-induced events reconstructed with sub-degree angular resolution. The new event sample covers three years of detector data and leads to a factor of 10 improvement in sensitivity to point sources emitting below 100 TeV in the southern sky. No statistically significant evidence of point sources was found, and upper limits are set on neutrino emission from individual sources. A posteriori analysis of the highest-energy (∼100 TeV) starting event in the sample found that this event alone represents a 2.8σ deviation from the hypothesis that the data consists only of atmospheric background.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, S.; Barua, A.; Zhou, M., E-mail: min.zhou@me.gatech.edu
2014-05-07
Accounting for the combined effect of multiple sources of stochasticity in material attributes, we develop an approach that computationally predicts the probability of ignition of polymer-bonded explosives (PBXs) under impact loading. The probabilistic nature of the specific ignition processes is assumed to arise from two sources of stochasticity. The first source involves random variations in material microstructural morphology; the second source involves random fluctuations in grain-binder interfacial bonding strength. The effect of the first source of stochasticity is analyzed with multiple sets of statistically similar microstructures and constant interfacial bonding strength. Subsequently, each of the microstructures in the multiple sets is assigned multiple instantiations of randomly varying grain-binder interfacial strengths to analyze the effect of the second source of stochasticity. Critical hotspot size-temperature states reaching the threshold for ignition are calculated through finite element simulations that explicitly account for microstructure and bulk and interfacial dissipation to quantify the time to criticality (t_c) of individual samples, allowing the probability distribution of the time to criticality that results from each source of stochastic variation for a material to be analyzed. Two probability superposition models are considered to combine the effects of the multiple sources of stochasticity. The first is a parallel and series combination model, and the second is a nested probability function model. Results show that the nested Weibull distribution provides an accurate description of the combined ignition probability. The approach developed here represents a general framework for analyzing the stochasticity in the material behavior that arises out of multiple types of uncertainty associated with the structure, design, synthesis and processing of materials.
Sources of fine particle composition in the northeastern US
NASA Astrophysics Data System (ADS)
Song, Xin-Hua; Polissar, Alexandr V.; Hopke, Philip K.
Fine particle composition data obtained at three sampling sites in the northeastern US were studied using a relatively new type of factor analysis, positive matrix factorization (PMF). The three sites are Washington, DC, Brigantine, NJ and Underhill, VT. The PMF method uses the estimates of the error in the data to provide optimal point-by-point weighting and permits efficient treatment of missing and below-detection-limit values. It also imposes the non-negativity constraint on the factors. Eight, nine and 11 sources were resolved from the Washington, Brigantine and Underhill data, respectively. The factors were normalized by using aerosol fine mass concentration data through multiple linear regression so that the quantitative source contributions for each resolved factor were obtained. Among the sources resolved at the three sites, six are common. These six sources exhibit not only similar chemical compositions, but also similar seasonal variations at all three sites. They are secondary sulfate with a high concentration of S and strong seasonal variation trend peaking in summer time; coal combustion with the presence of S and Se and its seasonal variation peaking in winter time; oil combustion characterized by Ni and V; soil represented by Al, Ca, Fe, K, Si and Ti; incinerator with the presence of Pb and Zn; sea salt with the high concentrations of Na and S. Among the other sources, nitrate (dominated by NO3-) and motor vehicle (with high concentrations of organic carbon (OC) and elemental carbon (EC), and with the presence of some soil dust components) were obtained for the Washington data, while the three additional sources for the Brigantine data were nitrate, motor vehicle and wood smoke (OC, EC, K). At the Underhill site, five other sources were resolved. They are wood smoke, Canadian Mn, Canadian Cu smelter, Canadian Ni smelter, and another salt source with high concentrations of Cl and Na. A nitrate source similar to that found at the other sites could not be obtained at Underhill since NO3- was not measured at this site. Generally, most of the sources at the three sites showed similar chemical composition profiles and seasonal variation patterns. The study indicated that PMF was a powerful factor analysis method to extract sources from the ambient aerosol concentration data.
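The essentials of PMF, point-by-point weighting by measurement uncertainty plus non-negativity of the factors, can be sketched with weighted multiplicative updates; note that Paatero's actual PMF solver is different, so the code below is only an illustrative stand-in for the idea.

```python
import numpy as np

def weighted_pmf(X, sigma, k, n_iter=500, eps=1e-12, seed=0):
    """Error-weighted non-negative factorization X ~ G @ F (sketch).

    X: (samples, species) concentrations; sigma: per-point uncertainties,
    giving the point-by-point weights W = 1/sigma**2; k: number of sources.
    Multiplicative updates keep G (contributions) and F (profiles) >= 0.
    """
    rng = np.random.default_rng(seed)
    W = 1.0 / sigma**2
    n, m = X.shape
    G, F = rng.random((n, k)), rng.random((k, m))
    for _ in range(n_iter):
        GF = G @ F
        G *= ((W * X) @ F.T) / ((W * GF) @ F.T + eps)
        GF = G @ F
        F *= (G.T @ (W * X)) / (G.T @ (W * GF) + eps)
    return G, F
```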
WASS: An open-source pipeline for 3D stereo reconstruction of ocean waves
NASA Astrophysics Data System (ADS)
Bergamasco, Filippo; Torsello, Andrea; Sclavo, Mauro; Barbariol, Francesco; Benetazzo, Alvise
2017-10-01
Stereo 3D reconstruction of ocean waves is gaining more and more popularity in the oceanographic community and industry. Indeed, recent advances of both computer vision algorithms and computer processing power now allow the study of the spatio-temporal wave field with unprecedented accuracy, especially at small scales. Even if simple in theory, multiple details are difficult to be mastered for a practitioner, so that the implementation of a sea-waves 3D reconstruction pipeline is in general considered a complex task. For instance, camera calibration, reliable stereo feature matching and mean sea-plane estimation are all factors for which a well designed implementation can make the difference to obtain valuable results. For this reason, we believe that the open availability of a well-tested software package that automates the reconstruction process from stereo images to a 3D point cloud would be a valuable addition for future research in this area. We present WASS (http://www.dais.unive.it/wass), an Open-Source stereo processing pipeline for sea waves 3D reconstruction. Our tool completely automates all the steps required to estimate dense point clouds from stereo images. Namely, it computes the extrinsic parameters of the stereo rig so that no delicate calibration has to be performed in the field. It implements a fast 3D dense stereo reconstruction procedure based on the consolidated OpenCV library and, lastly, it includes a set of filtering techniques both on the disparity map and the produced point cloud to remove the vast majority of erroneous points that can naturally arise while analyzing the optically complex nature of the water surface. In this paper, we describe the architecture of WASS and the internal algorithms involved. The pipeline workflow is shown step-by-step and demonstrated on real datasets acquired at sea.
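The dense-stereo core that pipelines like WASS automate can be sketched with the OpenCV primitives mentioned above; the file names, focal length, and baseline below are hypothetical, and WASS's automated calibration, sea-plane estimation, and point-cloud filtering steps are omitted.

```python
import cv2
import numpy as np

# Hypothetical rectified image pair; real pipelines first calibrate and
# rectify the stereo rig.
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128,
                                blockSize=5)
disp = matcher.compute(left, right).astype(np.float32) / 16.0  # SGBM is fixed-point

f, B = 2500.0, 2.5              # assumed focal length (px) and baseline (m)
valid = disp > 0
depth = np.where(valid, f * B / np.maximum(disp, 1e-6), np.nan)  # metres
```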
Code of Federal Regulations, 2010 CFR
2010-07-01
... (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Beet Sugar Processing... a point source where the sugar beet processing capacity of the point source does not exceed 1090 kkg... results, in whole or in part, from barometric condensing operations and any other beet sugar processing...
Code of Federal Regulations, 2011 CFR
2011-07-01
... (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Beet Sugar Processing... a point source where the sugar beet processing capacity of the point source does not exceed 1090 kkg... results, in whole or in part, from barometric condensing operations and any other beet sugar processing...
Code of Federal Regulations, 2013 CFR
2013-07-01
... (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Beet Sugar Processing... a point source where the sugar beet processing capacity of the point source does not exceed 1090 kkg... results, in whole or in part, from barometric condensing operations and any other beet sugar processing...
Code of Federal Regulations, 2014 CFR
2014-07-01
... (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Beet Sugar Processing... a point source where the sugar beet processing capacity of the point source does not exceed 1090 kkg... results, in whole or in part, from barometric condensing operations and any other beet sugar processing...
Code of Federal Regulations, 2012 CFR
2012-07-01
... (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS SUGAR PROCESSING POINT SOURCE CATEGORY Beet Sugar Processing... a point source where the sugar beet processing capacity of the point source does not exceed 1090 kkg... results, in whole or in part, from barometric condensing operations and any other beet sugar processing...
Theory of Parabolic Arcs in Interstellar Scintillation Spectra
NASA Astrophysics Data System (ADS)
Cordes, James M.; Rickett, Barney J.; Stinebring, Daniel R.; Coles, William A.
2006-01-01
Interstellar scintillation (ISS), observed as time variation in the intensity of a compact radio source, is caused by small-scale structure in the electron density of the interstellar plasma. Dynamic spectra of ISS show modulation in radio frequency and time. Here we relate the (two-dimensional) power spectrum of the dynamic spectrum-the secondary spectrum-to the scattered image of the source. Recent work has identified remarkable parabolic arcs in secondary spectra. Each point in a secondary spectrum corresponds to interference between points in the scattered image with a certain Doppler shift and a certain delay. The parabolic arc corresponds to the quadratic relation between differential Doppler shift and delay through their common dependence on scattering angle. We show that arcs will occur in all media that scatter significant power at angles larger than the rms angle. Thus, effects such as source diameter, steep spectra, and dissipation scales, which truncate high angle scattering, also truncate arcs. Arcs are equally visible in simulations of nondispersive scattering. They are enhanced by anisotropic scattering when the spatial structure is elongated perpendicular to the velocity. In weak scattering the secondary spectrum is directly mapped from the scattered image, and this mapping can be inverted. We discuss additional observed phenomena including multiple arcs and reverse arclets oriented oppositely to the main arc. These phenomena persist for many refractive scattering times, suggesting that they are due to large-scale density structures, rather than low-frequency components of Kolmogorov turbulence.
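The quadratic relation behind the arcs follows from the standard thin-screen, small-angle expressions; a compact derivation, assuming an effective screen distance D_eff, effective velocity V_eff, and wavelength λ, is:

```latex
% Geometric delay and differential Doppler shift for a ray scattered at angle \theta:
\tau = \frac{D_\mathrm{eff}\,\theta^{2}}{2c},
\qquad
f_D = \frac{V_\mathrm{eff}\,\theta_{x}}{\lambda}.
% Eliminating \theta for power scattered along the velocity direction
% (\theta = \theta_x) gives the parabola traced in the secondary spectrum:
\tau = \eta\, f_D^{2},
\qquad
\eta = \frac{D_\mathrm{eff}\,\lambda^{2}}{2\,c\,V_\mathrm{eff}^{2}}.
```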
Electrophysiological correlates of cocktail-party listening.
Lewald, Jörg; Getzmann, Stephan
2015-10-01
Detecting, localizing, and selectively attending to a particular sound source of interest in complex auditory scenes composed of multiple competing sources is a remarkable capacity of the human auditory system. The neural basis of this so-called "cocktail-party effect" has remained largely unknown. Here, we studied the cortical network engaged in solving the "cocktail-party" problem, using event-related potentials (ERPs) in combination with two tasks demanding horizontal localization of a naturalistic target sound presented either in silence or in the presence of multiple competing sound sources. Presentation of multiple sound sources, as compared to single sources, induced an increased P1 amplitude, a reduction in N1, and a strong N2 component, resulting in a pronounced negativity in the ERP difference waveform (N2d) around 260 ms after stimulus onset. About 100 ms later, the anterior contralateral N2 subcomponent (N2ac) occurred in the multiple-sources condition, as computed from the amplitude difference for targets in the left minus right hemispaces. Cortical source analyses of the ERP modulation, resulting from the contrast of multiple vs. single sources, generally revealed an initial enhancement of electrical activity in right temporo-parietal areas, including auditory cortex, by multiple sources (at P1) that is followed by a reduction, with the primary sources shifting from right inferior parietal lobule (at N1) to left dorso-frontal cortex (at N2d). Thus, cocktail-party listening, as compared to single-source localization, appears to be based on a complex chronology of successive electrical activities within a specific cortical network involved in spatial hearing in complex situations. Copyright © 2015 Elsevier B.V. All rights reserved.
The Multiple Control of Verbal Behavior
Michael, Jack; Palmer, David C; Sundberg, Mark L
2011-01-01
Amid the novel terms and original analyses in Skinner's Verbal Behavior, the importance of his discussion of multiple control is easily missed, but multiple control of verbal responses is the rule rather than the exception. In this paper we summarize and illustrate Skinner's analysis of multiple control and introduce the terms convergent multiple control and divergent multiple control. We point out some implications for applied work and discuss examples of the role of multiple control in humor, poetry, problem solving, and recall. Joint control and conditional discrimination are discussed as special cases of multiple control. We suggest that multiple control is a useful analytic tool for interpreting virtually all complex behavior, and we consider the concepts of derived relations and naming as cases in point. PMID:22532752
Illumination-based synchronization of high-speed vision sensors.
Hou, Lei; Kagami, Shingo; Hashimoto, Koichi
2010-01-01
To acquire images of dynamic scenes from multiple points of view simultaneously, the acquisition times of the vision sensors should be synchronized. This paper describes an illumination-based synchronization method derived from the phase-locked loop (PLL) algorithm. Incident light to a vision sensor from an intensity-modulated illumination source serves as the reference signal for synchronization. Analog and digital computation within the vision sensor forms a PLL to regulate the output signal, which corresponds to the vision frame timing, to be synchronized with the reference. Simulated and experimental results show that a 1,000 Hz frame rate vision sensor was successfully synchronized with a jitter of 32 μs.
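A software analogue of the described mechanism is a discrete-time PLL whose phase detector mixes the sampled illumination signal with a local oscillator; the sketch below is a generic PI-loop PLL, not the authors' mixed analog/digital in-sensor circuit, and the loop gains are illustrative.

```python
import numpy as np

def software_pll(reference, fs, f0, kp=0.2, ki=0.01):
    """Minimal discrete-time PLL locking an NCO to a modulated-light reference.

    reference: sampled photodetector signal (assumed roughly sinusoidal);
    fs: sample rate (Hz); f0: nominal modulation frequency (Hz). Returns
    the NCO phase, whose wrap-arounds would trigger frame capture.
    """
    phase, freq, integ = 0.0, 2 * np.pi * f0 / fs, 0.0
    phases = np.empty(len(reference))
    for n, x in enumerate(reference):
        err = x * -np.sin(phase)            # mixer as phase detector
        integ += ki * err                   # integral branch of PI filter
        phase += freq + kp * err + integ    # drive the NCO
        phases[n] = phase
    return phases
```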
Dong, Jianghu J; Wang, Liangliang; Gill, Jagbir; Cao, Jiguo
2017-01-01
This article is motivated by some longitudinal clinical data of kidney transplant recipients, where kidney function progression is recorded as the estimated glomerular filtration rates at multiple time points post kidney transplantation. We propose to use the functional principal component analysis method to explore the major source of variations of glomerular filtration rate curves. We find that the estimated functional principal component scores can be used to cluster glomerular filtration rate curves. Ordering functional principal component scores can detect abnormal glomerular filtration rate curves. Finally, functional principal component analysis can effectively estimate missing glomerular filtration rate values and predict future glomerular filtration rate values.
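On a common time grid, the FPCA decomposition described above reduces to an SVD of the centered curve matrix; the sketch below ignores the irregular sampling and missing values that real clinical eGFR data require handling (e.g., by basis smoothing or PACE-style conditioning).

```python
import numpy as np

def fpca(curves, n_comp=2):
    """Discrete sketch of functional PCA for longitudinal trajectories.

    curves: (n_subjects, n_times) matrix of values observed on a common
    grid. Returns the mean curve, the leading discretized eigenfunctions,
    and the per-subject FPC scores used for clustering or outlier ranking.
    """
    mean = curves.mean(axis=0)
    centered = curves - mean
    U, S, Vt = np.linalg.svd(centered, full_matrices=False)
    components = Vt[:n_comp]            # discretized eigenfunctions
    scores = centered @ components.T    # FPC scores per subject
    return mean, components, scores
```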
An Investigation of the Cold Interstellar Medium of the Outer Galaxy
NASA Technical Reports Server (NTRS)
Heyer, Mark H.
1997-01-01
The primary objective of this proposal was to determine the relationship between the molecular gas and dust components of the interstellar medium of the Outer Galaxy. It made use of the High Resolution IRAS Galaxy Atlas and the FCRAO CO Survey of the Outer Galaxy. These HIRES images greatly augment the spatial dynamic range of the IRAS Survey data and the ability to discriminate multiple point sources within a compact region. Additionally, the HIRES far infrared images allow for more direct comparisons with molecular line data observed at 45 arcsec resolution. With funding from this proposal, we completed two papers for publication in a refereed journal.
Occurrence of Surface Water Contaminations: An Overview
NASA Astrophysics Data System (ADS)
Shahabudin, M. M.; Musa, S.
2018-04-01
Water is a part of our life and is needed by all organisms. Over time, growing human demands have degraded water quality. Surface water is contaminated in various ways, classified as point sources and non-point sources. Point sources can be traced to a distinguishable origin, such as a drain or a factory, whereas non-point contamination always occurs as a mixture of pollutant elements. This paper reviews the occurrence of these contaminations and the effects that occur around us. Pollutant factors of natural or anthropogenic origin, such as nutrients, pathogens, and chemical elements, contribute to contamination. Most of the effects of contaminated surface water fall on public health as well as on the environment.
2011 Radioactive Materials Usage Survey for Unmonitored Point Sources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sturgeon, Richard W.
This report provides the results of the 2011 Radioactive Materials Usage Survey for Unmonitored Point Sources (RMUS), which was updated by the Environmental Protection (ENV) Division's Environmental Stewardship (ES) at Los Alamos National Laboratory (LANL). ES classifies LANL emission sources into one of four Tiers, based on the potential effective dose equivalent (PEDE) calculated for each point source. Detailed descriptions of these tiers are provided in Section 3. The usage survey is conducted annually; in odd-numbered years the survey addresses all monitored and unmonitored point sources and in even-numbered years it addresses all Tier III and various selected other sources. This graded approach was designed to ensure that the appropriate emphasis is placed on point sources that have higher potential emissions to the environment. For calendar year (CY) 2011, ES has divided the usage survey into two distinct reports, one covering the monitored point sources (to be completed later this year) and this report covering all unmonitored point sources. This usage survey includes the following release points: (1) all unmonitored sources identified in the 2010 usage survey, (2) any new release points identified through the new project review (NPR) process, and (3) other release points as designated by the Rad-NESHAP Team Leader. Data for all unmonitored point sources at LANL is stored in the survey files at ES. LANL uses this survey data to help demonstrate compliance with Clean Air Act radioactive air emissions regulations (40 CFR 61, Subpart H). The remainder of this introduction provides a brief description of the information contained in each section. Section 2 of this report describes the methods that were employed for gathering usage survey data and for calculating usage, emissions, and dose for these point sources. It also references the appropriate ES procedures for further information. Section 3 describes the RMUS and explains how the survey results are organized. The RMUS Interview Form with the attached RMUS Process Form(s) provides the radioactive materials survey data by technical area (TA) and building number. The survey data for each release point includes information such as: exhaust stack identification number, room number, radioactive material source type (i.e., potential source or future potential source of air emissions), radionuclide, usage (in curies) and usage basis, physical state (gas, liquid, particulate, solid, or custom), release fraction (from Appendix D to 40 CFR 61, Subpart H), and process descriptions. In addition, the interview form also calculates emissions (in curies), lists mrem/Ci factors, calculates PEDEs, and states the location of the critical receptor for that release point. [The critical receptor is the maximum exposed off-site member of the public, specific to each individual facility.] Each of these data fields is described in this section. The Tier classification of release points, which was first introduced with the 1999 usage survey, is also described in detail in this section. Section 4 includes a brief discussion of the dose estimate methodology, and includes a discussion of several release points of particular interest in the CY 2011 usage survey report. It also includes a table of the calculated PEDEs for each release point at its critical receptor. Section 5 describes ES's approach to Quality Assurance (QA) for the usage survey.
Satisfactory completion of the survey requires that team members responsible for Rad-NESHAP (National Emissions Standard for Hazardous Air Pollutants) compliance accurately collect and process several types of information, including radioactive materials usage data, process information, and supporting information. They must also perform and document the QA reviews outlined in Section 5.2.6 (Process Verification and Peer Review) of ES-RN, 'Quality Assurance Project Plan for the Rad-NESHAP Compliance Project' to verify that all information is complete and correct.
Cold Atom Source Containing Multiple Magneto-Optical Traps
NASA Technical Reports Server (NTRS)
Ramirez-Serrano, Jaime; Kohel, James; Kellogg, James; Lim, Lawrence; Yu, Nan; Maleki, Lute
2007-01-01
An apparatus that serves as a source of a cold beam of atoms contains multiple two-dimensional (2D) magneto-optical traps (MOTs). (Cold beams of atoms are used in atomic clocks and in diverse scientific experiments and applications.) The multiple-2D-MOT design of this cold atom source stands in contrast to single-2D-MOT designs of prior cold atom sources of the same type. The principal advantage afforded by the present design is that this apparatus is smaller than prior designs.
Mainhagu, Jon; Morrison, C.; Truex, Michael J.; ...
2014-08-05
A method termed vapor-phase tomography has recently been proposed to characterize the distribution of volatile organic contaminant mass in vadose-zone source areas, and to measure associated three-dimensional distributions of local contaminant mass discharge. The method is based on measuring the spatial variability of vapor flux, and thus inherent to its effectiveness is the premise that the magnitudes and temporal variability of vapor concentrations measured at different monitoring points within the interrogated area will be a function of the geospatial positions of the points relative to the source location. A series of flow-cell experiments was conducted to evaluate this premise. A well-defined source zone was created by injection and extraction of a non-reactive gas (SF6). Spatial and temporal concentration distributions obtained from the tests were compared to simulations produced with a mathematical model describing advective and diffusive transport. Tests were conducted to characterize both areal and vertical components of the application. Decreases in concentration over time were observed for monitoring points located on the opposite side of the source zone from the local extraction point, whereas increases were observed for monitoring points located between the local extraction point and the source zone. These results illustrate that comparison of temporal concentration profiles obtained at various monitoring points gives a general indication of the source location with respect to the extraction and monitoring points.
New fast DCT algorithms based on Loeffler's factorization
NASA Astrophysics Data System (ADS)
Hong, Yoon Mi; Kim, Il-Koo; Lee, Tammy; Cheon, Min-Su; Alshina, Elena; Han, Woo-Jin; Park, Jeong-Hoon
2012-10-01
This paper proposes a new 32-point fast discrete cosine transform (DCT) algorithm based on the Loeffler 16-point transform. Fast integer realizations of 16-point and 32-point transforms are also provided based on the proposed transform. For the recent development of High Efficiency Video Coding (HEVC), simplified quantization and de-quantization processes are proposed. Three different forms of implementation with essentially the same performance, namely matrix multiplication, partial butterfly, and full factorization, can be chosen according to the given platform. In terms of the number of multiplications required for the realization, our proposed full factorization is 3~4 times faster than a partial butterfly, and about 10 times faster than direct matrix multiplication.
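The "partial butterfly" form referred to above can be sketched as a recursive DCT-II in which even-indexed outputs recurse on folded sums and odd-indexed outputs use a direct half-size product; Loeffler-style factorizations go further by also decomposing the odd part, which this floating-point sketch (for power-of-two lengths, unnormalized) does not attempt.

```python
import numpy as np

def dct_ii(x):
    """Partial-butterfly DCT-II: X[k] = sum_n x[n] cos(pi*(2n+1)*k/(2N)).

    The input-side butterfly folds x into sums y and differences z of
    mirrored samples; even coefficients are the half-size DCT of y,
    odd coefficients come from a direct product with z. N must be a
    power of two.
    """
    x = np.asarray(x, dtype=float)
    N = len(x)
    if N == 1:
        return x.copy()
    y = x[:N // 2] + x[::-1][:N // 2]      # butterfly: folded sums
    z = x[:N // 2] - x[::-1][:N // 2]      # butterfly: folded differences
    X = np.empty(N)
    X[0::2] = dct_ii(y)                    # even outputs recurse
    n = np.arange(N // 2)
    for k in range(N // 2):                # odd outputs: direct product
        X[2 * k + 1] = np.sum(z * np.cos(np.pi * (2 * n + 1) * (2 * k + 1) / (2 * N)))
    return X
```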
NASA Technical Reports Server (NTRS)
Deepak, A.; Fluellen, A.
1978-01-01
An efficient numerical method of multiple quadratures, the Conroy method, is applied to the problem of computing multiple scattering contributions in the radiative transfer through realistic planetary atmospheres. A brief error analysis of the method is given and comparisons are drawn with the more familiar Monte Carlo method. Both methods are stochastic problem-solving models of a physical or mathematical process and utilize the sampling scheme for points distributed over a definite region. In the Monte Carlo scheme the sample points are distributed randomly over the integration region. In the Conroy method, the sample points are distributed systematically, such that the point distribution forms a unique, closed, symmetrical pattern which effectively fills the region of the multidimensional integration. The methods are illustrated by two simple examples: one, of multidimensional integration involving two independent variables, and the other, of computing the second order scattering contribution to the sky radiance.
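The contrast between the two sampling schemes can be sketched on a 2-D integral; a Fibonacci lattice is used below as a stand-in for Conroy's closed symmetric point patterns (whose tables are not reproduced here), since both distribute sample points systematically rather than randomly.

```python
import numpy as np

def mc_integrate(f, n, rng):
    """Plain Monte Carlo estimate of the mean of f over the unit square."""
    pts = rng.random((n, 2))
    return f(pts[:, 0], pts[:, 1]).mean()

def lattice_integrate(f, n):
    """Systematic (number-theoretic) point set over the unit square;
    a golden-ratio lattice standing in for Conroy-type patterns."""
    k = np.arange(n)
    x = (k + 0.5) / n
    y = (k * 0.6180339887498949) % 1.0   # golden-ratio shift fills the square
    return f(x, y).mean()

f = lambda x, y: np.exp(-(x**2 + y**2))  # example 2-D integrand
rng = np.random.default_rng(1)
print(mc_integrate(f, 1024, rng), lattice_integrate(f, 1024))
```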
Point and Condensed Hα Sources in the Interior of M33
NASA Astrophysics Data System (ADS)
Moody, J. Ward; Hintz, Eric G.; Roming, Peter; Joner, Michael D.; Bucklein, Brian
2017-01-01
A variety of interesting objects such as Wolf-Rayet stars, tight OB associations, planetary nebulae, X-ray binaries, etc. can be discovered as point or condensed sources in Hα surveys. How these objects are distributed through a galaxy sheds light on the galaxy's star formation rate and history, mass distribution, and dynamics. The nearby galaxy M33 is an excellent place to study the distribution of Hα-bright point sources in a flocculent spiral galaxy. We have reprocessed an archived WIYN continuum-subtracted Hα image of the inner 6.5' of the galaxy and, employing both eye and machine searches, have tabulated sources with a flux greater than 1 × 10⁻¹⁵ erg cm⁻² s⁻¹. We have identified 152 unresolved point sources and 122 marginally resolved condensed sources, 38 of which have not been previously cataloged. We present a map of these sources and discuss their probable identifications.
UNDERSTANDING THE SOURCES OF DIABETES DISTRESS IN ADULTS WITH TYPE 1 DIABETES
Fisher, Lawrence; Polonsky, William H.; Hessler, Danielle M.; Masharani, Umesh; Blumer, Ian; Peters, Anne L.; Strycker, Lisa A.; Bowyer, Vicky
2015-01-01
Aims To identify the unique sources of diabetes distress (DD) for adults with type 1 diabetes (T1D). Methods Sources of DD were developed from qualitative interviews with 25 T1D adults and 10 diabetes health care providers. Survey items were then developed and analyzed using both exploratory (EFA) and confirmatory (CFA) factor analyses on two patient samples. Construct validity was assessed by correlations with depressive symptoms (PHQ8), complications, HbA1C, BMI, and the hypoglycemia worry scale (HWS). Scale cut-points were created using multiple regression. Results An EFA with 305 U.S. participants yielded 7 coherent, reliable sources of distress that were replicated by a CFA with 109 Canadian participants: Powerlessness, Negative Social Perceptions, Physician Distress, Friend/Family Distress, Hypoglycemia Distress, Management Distress, Eating Distress. Prevalence of DD was high, with 41.6% reporting at least moderate DD. Higher DD was reported by women, those with complications, those with poor glycemic control, younger patients, those without a partner, and non-White patients. Conclusions We identified a profile of seven major sources of DD among T1D adults using a newly developed assessment instrument. The prevalence of DD is high and is related to glycemic control and several demographic and disease-related patient characteristics, arguing for a need to address DD in clinical care. PMID:25765489
A guide to differences between stochastic point-source and stochastic finite-fault simulations
Atkinson, G.M.; Assatourians, K.; Boore, D.M.; Campbell, K.; Motazedian, D.
2009-01-01
Why do stochastic point-source and finite-fault simulation models not agree on the predicted ground motions for moderate earthquakes at large distances? This question was posed by Ken Campbell, who attempted to reproduce the Atkinson and Boore (2006) ground-motion prediction equations for eastern North America using the stochastic point-source program SMSIM (Boore, 2005) in place of the finite-source stochastic program EXSIM (Motazedian and Atkinson, 2005) that was used by Atkinson and Boore (2006) in their model. His comparisons suggested that a higher stress drop is needed in the context of SMSIM to produce an average match, at larger distances, with the model predictions of Atkinson and Boore (2006) based on EXSIM; this is so even for moderate magnitudes, which should be well-represented by a point-source model. Why? The answer to this question is rooted in significant differences between point-source and finite-source stochastic simulation methodologies, specifically as implemented in SMSIM (Boore, 2005) and EXSIM (Motazedian and Atkinson, 2005) to date. Point-source and finite-fault methodologies differ in general in several important ways: (1) the geometry of the source; (2) the definition and application of duration; and (3) the normalization of finite-source subsource summations. Furthermore, the specific implementation of the methods may differ in their details. The purpose of this article is to provide a brief overview of these differences, their origins, and implications. This sets the stage for a more detailed companion article, "Comparing Stochastic Point-Source and Finite-Source Ground-Motion Simulations: SMSIM and EXSIM," in which Boore (2009) provides modifications and improvements in the implementations of both programs that narrow the gap and result in closer agreement. These issues are important because both SMSIM and EXSIM have been widely used in the development of ground-motion prediction equations and in modeling the parameters that control observed ground motions.
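A schematic of the point-source side of this comparison, an omega-square spectrum with random phase in the style of SMSIM, is sketched below; the Brune corner-frequency formula is standard, but the absolute amplitude scaling, duration windowing, and Q(f) attenuation of the real programs are deliberately omitted, so the output is illustrative only.

```python
import numpy as np

def stochastic_point_source(M0, stress_drop, R, beta=3.5, dt=0.005,
                            n=4096, kappa=0.03, seed=0):
    """Schematic stochastic point-source acceleration trace.

    Brune omega-square spectrum with corner frequency
    f0 = 4.9e6 * beta * (stress_drop / M0)**(1/3)
    (beta in km/s, stress drop in bars, M0 in dyne-cm), 1/R geometric
    spreading, kappa high-frequency diminution, and random phase.
    """
    rng = np.random.default_rng(seed)
    f = np.fft.rfftfreq(n, dt)[1:]                       # skip DC
    f0 = 4.9e6 * beta * (stress_drop / M0) ** (1.0 / 3.0)
    spec = (2 * np.pi * f)**2 / (1 + (f / f0)**2) / R * np.exp(-np.pi * kappa * f)
    phase = rng.uniform(0, 2 * np.pi, len(f))
    return np.fft.irfft(np.concatenate(([0], spec * np.exp(1j * phase))), n)
```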
X-ray Point Source Populations in Spiral and Elliptical Galaxies
NASA Astrophysics Data System (ADS)
Colbert, E.; Heckman, T.; Weaver, K.; Strickland, D.
2002-01-01
The hard X-ray luminosity of non-active galaxies has been known to be fairly well correlated with the total blue luminosity since the days of the Einstein satellite. However, the origin of this hard component was not well understood. Some possibilities that were considered included X-ray binaries, extended upscattered far-infrared light via the inverse-Compton process, extended hot 10⁷ K gas (especially in elliptical galaxies), or even an active nucleus. Chandra images of normal, elliptical and starburst galaxies now show that a significant amount of the total hard X-ray emission comes from individual point sources. We present here spatial and spectral analyses of the point sources in a small sample of Chandra observations of starburst galaxies, and compare with Chandra point-source analyses from comparison galaxies (elliptical, Seyfert and normal galaxies). We discuss possible relationships between the number and total hard luminosity of the X-ray point sources and various measures of the galaxy star formation rate, and discuss possible options for the numerous compact sources that are observed.
NASA Technical Reports Server (NTRS)
Fares, Nabil; Li, Victor C.
1986-01-01
An image method algorithm is presented for the derivation of elastostatic solutions for point sources in bonded half-spaces, assuming the corresponding infinite-space point-source solution is known. Specific cases were worked out and shown to coincide with well-known solutions in the literature.
Code of Federal Regulations, 2010 CFR
2010-07-01
... subcategory of direct discharge point sources that do not use end-of-pipe biological treatment. 414.100... AND STANDARDS ORGANIC CHEMICALS, PLASTICS, AND SYNTHETIC FIBERS Direct Discharge Point Sources That Do Not Use End-of-Pipe Biological Treatment § 414.100 Applicability; description of the subcategory of...
Better Assessment Science Integrating Point and Non-point Sources (BASINS)
Better Assessment Science Integrating Point and Nonpoint Sources (BASINS) is a multipurpose environmental analysis system designed to help regional, state, and local agencies perform watershed- and water quality-based studies.
Xie, Qing; Tao, Junhan; Wang, Yongqiang; Geng, Jianghai; Cheng, Shuyi; Lü, Fangcheng
2014-08-01
Fast and accurate positioning of partial discharge (PD) sources in transformer oil is very important for the safe, stable operation of power systems because it allows timely elimination of insulation faults. There is usually more than one PD source once an insulation fault occurs in the transformer oil. This study, which has both theoretical and practical significance, proposes a method of identifying multiple PD sources in the transformer oil. The method combines the two-sided correlation transformation algorithm for broadband signal focusing with the modified Gerschgorin disk estimator. The multiple signal classification method is used to determine the directions of arrival of signals from multiple PD sources. The ultrasonic array positioning method is based on multi-platform direction finding and global optimization searching. Both the 4 × 4 square planar ultrasonic sensor array and the ultrasonic array detection platform were built to test the method of identifying and positioning multiple PD sources. The obtained results verify the validity and the engineering practicability of this method.
Common source-multiple load vs. separate source-individual load photovoltaic system
NASA Technical Reports Server (NTRS)
Appelbaum, Joseph
1989-01-01
A comparison of system performance is made for two possible system setups: (1) individual loads powered by separate solar cell sources; and (2) multiple loads powered by a common solar cell source. A proof for resistive loads is given that shows the advantage of a common source over a separate source photovoltaic system for a large range of loads. For identical loads, both systems perform the same.
Stamer, J.K.; Cherry, R.N.; Faye, R.E.; Kleckner, R.L.
1978-01-01
On an average annual basis and during the storm period of March 12-15, 1976, nonpoint-source loads for most constituents were larger than point-source loads at the Whitesburg station, located on the Chattahoochee River about 40 miles downstream from Atlanta, GA. Most of the nonpoint-source constituent loads in the Atlanta to Whitesburg reach were from urban areas. Average annual point-source discharges accounted for about 50 percent of the dissolved nitrogen, total nitrogen, and total phosphorus loads and about 70 percent of the dissolved phosphorus loads at Whitesburg. During a low-flow period, June 1-2, 1977, five municipal point sources contributed 63 percent of the ultimate biochemical oxygen demand and 97 percent of the ammonium nitrogen loads at the Franklin station, at the upstream end of West Point Lake. Dissolved-oxygen concentrations of 4.1 to 5.0 milligrams per liter occurred in a 22-mile reach of the river downstream from Atlanta, due about equally to nitrogenous and carbonaceous oxygen demands. The heat load from two thermoelectric powerplants caused a decrease in dissolved-oxygen concentration of about 0.2 milligrams per liter. Phytoplankton concentrations in West Point Lake, about 70 miles downstream from Atlanta, could exceed three million cells per milliliter during extended low-flow periods in the summer with present point-source phosphorus loads. (Woodard-USGS)
Unidentified point sources in the IRAS minisurvey
NASA Technical Reports Server (NTRS)
Houck, J. R.; Soifer, B. T.; Neugebauer, G.; Beichman, C. A.; Aumann, H. H.; Clegg, P. E.; Gillett, F. C.; Habing, H. J.; Hauser, M. G.; Low, F. J.
1984-01-01
Nine bright, point-like 60 micron sources have been selected from the sample of 8709 sources in the IRAS minisurvey. These sources have no counterparts in a variety of catalogs of nonstellar objects. Four objects have no visible counterparts, while five have faint stellar objects visible in the error ellipse. These sources do not resemble objects previously known to be bright infrared sources.
Ferdous, Jannatul; Sultana, Rebeca; Rashid, Ridwan B; Tasnimuzzaman, Md; Nordland, Andreas; Begum, Anowara; Jensen, Peter K M
2018-01-01
Bangladesh is a cholera endemic country with a population at high risk of cholera. Toxigenic and non-toxigenic Vibrio cholerae (V. cholerae) can cause cholera and cholera-like diarrheal illness and outbreaks. Drinking water is one of the primary routes of cholera transmission in Bangladesh. The aim of this study was to conduct a comparative assessment of the presence of V. cholerae between point-of-drinking water and source water, and to investigate the variability of virulence profile using molecular methods, in a densely populated low-income settlement of Dhaka, Bangladesh. Water samples were collected and tested for V. cholerae from "point-of-drinking" and "source" in 477 study households in routine visits at 6-week intervals over a period of 14 months. We studied the virulence profiles of V. cholerae positive water samples using 22 different virulence gene markers present in toxigenic O1/O139 and non-O1/O139 V. cholerae using polymerase chain reaction (PCR). A total of 1,463 water samples were collected, with 1,082 samples from point-of-drinking water in 388 households and 381 samples from 66 water sources. V. cholerae was detected in 10% of point-of-drinking water samples and in 9% of source water samples. Twenty-three percent of households and 38% of the sources were positive for V. cholerae in at least one visit. Samples collected from point-of-drinking and linked sources in a 7-day interval showed significantly higher odds (p < 0.05) of V. cholerae presence in point-of-drinking water compared to source water [OR = 17.24 (95% CI = 7.14-42.89)]. Based on the 7-day interval data, 53% (17/32) of source water samples were negative for V. cholerae while linked point-of-drinking water samples were positive. There were significantly higher odds (p < 0.05) of the presence of V. cholerae O1 [OR = 9.13 (95% CI = 2.85-29.26)] and V. cholerae O139 [OR = 4.73 (95% CI = 1.19-18.79)] in source water samples than in point-of-drinking water samples. Contamination of water at the point-of-drinking is less likely to depend on the contamination at the water source. Hygiene education interventions and programs should focus on water at the point-of-drinking, including repeated cleaning of drinking vessels, which is of paramount importance in preventing cholera.
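The reported associations are odds ratios with confidence intervals, which can be computed from a 2×2 table as sketched below (Wald interval on the log scale); the counts in the example are hypothetical, not the study's data.

```python
import numpy as np

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio with Wald 95% CI from a 2x2 table
    [[exposed positive, exposed negative],
     [unexposed positive, unexposed negative]] = [[a, b], [c, d]]."""
    or_ = (a * d) / (b * c)
    se = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log(OR)
    lo, hi = or_ * np.exp(-z * se), or_ * np.exp(z * se)
    return or_, (lo, hi)

print(odds_ratio_ci(30, 10, 20, 40))   # hypothetical counts -> OR = 6.0
```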
NASA Astrophysics Data System (ADS)
Zhang, Tianhe C.; Grill, Warren M.
2010-12-01
Deep brain stimulation (DBS) has emerged as an effective treatment for movement disorders; however, the fundamental mechanisms by which DBS works are not well understood. Computational models of DBS can provide insights into these fundamental mechanisms and typically require two steps: calculation of the electrical potentials generated by DBS and, subsequently, determination of the effects of the extracellular potentials on neurons. The objective of this study was to assess the validity of using a point source electrode to approximate the DBS electrode when calculating the thresholds and spatial distribution of activation of a surrounding population of model neurons in response to monopolar DBS. Extracellular potentials in a homogenous isotropic volume conductor were calculated using either a point current source or a geometrically accurate finite element model of the Medtronic DBS 3389 lead. These extracellular potentials were coupled to populations of model axons, and thresholds and spatial distributions were determined for different electrode geometries and axon orientations. Median threshold differences between DBS and point source electrodes for individual axons varied between -20.5% and 9.5% across all orientations, monopolar polarities and electrode geometries utilizing the DBS 3389 electrode. Differences in the percentage of axons activated at a given amplitude by the point source electrode and the DBS electrode were between -9.0% and 12.6% across all monopolar configurations tested. The differences in activation between the DBS and point source electrodes occurred primarily in regions close to conductor-insulator interfaces and around the insulating tip of the DBS electrode. The robustness of the point source approximation in modeling several special cases—tissue anisotropy, a long active electrode and bipolar stimulation—was also examined. Under the conditions considered, the point source was shown to be a valid approximation for predicting excitation of populations of neurons in response to DBS.
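The point-source approximation being tested is the closed-form monopole potential in a homogeneous isotropic volume conductor, V = I/(4πσr); a short sketch, with an assumed tissue conductivity, is:

```python
import numpy as np

def point_source_potential(I, sigma, xyz_electrode, xyz_points):
    """Extracellular potential of a monopolar point current source.

    I: source current (A); sigma: conductivity (S/m); coordinates in m.
    This closed form is what geometrically accurate finite element
    models of the DBS lead are compared against.
    """
    r = np.linalg.norm(xyz_points - xyz_electrode, axis=-1)
    return I / (4 * np.pi * sigma * r)

# Example: potentials 1-5 mm lateral to a -1 mA cathodic source in tissue
# with an assumed conductivity of 0.2 S/m.
pts = np.column_stack([np.linspace(1e-3, 5e-3, 5), np.zeros(5), np.zeros(5)])
print(point_source_potential(-1e-3, 0.2, np.zeros(3), pts))
```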
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kastner, J. H.; Montez, R. Jr.; Rapson, V.
2012-08-15
We present an overview of the initial results from the Chandra Planetary Nebula Survey (CHANPLANS), the first systematic (volume-limited) Chandra X-Ray Observatory survey of planetary nebulae (PNe) in the solar neighborhood. The first phase of CHANPLANS targeted 21 mostly high-excitation PNe within ~1.5 kpc of Earth, yielding four detections of diffuse X-ray emission and nine detections of X-ray-luminous point sources at the central stars (CSPNe) of these objects. Combining these results with those obtained from Chandra archival data for all (14) other PNe within ~1.5 kpc that have been observed to date, we find an overall X-ray detection rate of ~70% for the 35 sample objects. Roughly 50% of the PNe observed by Chandra harbor X-ray-luminous CSPNe, while soft, diffuse X-ray emission tracing shocks (in most cases, 'hot bubbles') formed by energetic wind collisions is detected in ~30%; five objects display both diffuse and point-like emission components. The presence (or absence) of X-ray sources appears correlated with PN density structure, in that molecule-poor, elliptical nebulae are more likely to display X-ray emission (either point-like or diffuse) than molecule-rich, bipolar, or Ring-like nebulae. All but one of the point-like CSPNe X-ray sources display X-ray spectra that are harder than expected from hot (~100 kK) central stars emitting as simple blackbodies; the lone apparent exception is the central star of the Dumbbell nebula, NGC 6853. These hard X-ray excesses may suggest a high frequency of binary companions to CSPNe. Other potential explanations include self-shocking winds or PN mass fallback. Most PNe detected as diffuse X-ray sources are elliptical nebulae that display a nested shell/halo structure and bright ansae; the diffuse X-ray emission regions are confined within inner, sharp-rimmed shells. All sample PNe that display diffuse X-ray emission have inner shell dynamical ages ≲ 5 × 10³ yr, placing firm constraints on the timescale for strong shocks due to wind interactions in PNe. The high-energy emission arising in such wind shocks may contribute to the high excitation states of certain archetypical 'hot bubble' nebulae (e.g., NGC 2392, 3242, 6826, and 7009).
NASA Astrophysics Data System (ADS)
Khizhanok, Andrei
Development of a compact source of high-spectral-brilliance, high-pulse-frequency gamma rays has been in the scope of Fermi National Accelerator Laboratory for quite some time. The main goal of the project is to develop a setup to support gamma-ray detection tests and gamma-ray spectroscopy. Potential applications include, but are not limited to, nuclear astrophysics, nuclear medicine, and oncology (the 'gamma knife'). The present work covers multiple interconnected stages of development of the interaction region to ensure high levels of structural strength and vibrational resistance. Inverse Compton scattering is a complex phenomenon in which a charged particle transfers part of its energy to a photon. It requires extreme precision, as the interaction point is estimated to be 20 μm across. The slightest deflection of the mirrors will reduce the conversion effectiveness by orders of magnitude. For acceptable conversion efficiency, the laser cavity must also have a finesse value >1000, which requires a trade-off between the size, mechanical stability, complexity, and price of the setup. This work focuses on the advantages and weak points of different interaction-region designs, as well as an in-depth description of the analyses performed. These include laser cavity amplification and finesse estimates, natural frequency mapping, and harmonic analysis. Structural analysis is required because the interaction must occur under high-vacuum conditions.
NASA Astrophysics Data System (ADS)
Salido-Monzú, David; Wieser, Andreas
2018-04-01
The intermode beats generated by direct detection of a mode-locked femtosecond laser represent inherent high-quality and high-frequency modulations suitable for electro-optical distance measurement (EDM). This approach has already been demonstrated as a robust alternative to standard long-distance EDM techniques. Here, we extend this idea to intermode beating of a wideband source obtained by spectral broadening of a femtosecond laser. We aim at establishing a technological basis for accurate and flexible multiwavelength distance measurement. Results are presented from experiments using beat notes at 1 GHz generated by two bandpass-filtered regions from both extremes of a coherent supercontinuum ranging from 550 to 1050 nm. The displacement measurements performed simultaneously on both colors on a short-distance setup show that noise and coherence of the wideband laser are adequate for achieving accuracies of about 0.01 mm on each channel, with a potential improvement by accessing higher beat notes. Pointing and power instabilities have been identified as dominant sources of systematic deviations. Nevertheless, the results demonstrate the basic feasibility of the proposed technique. We consider this a promising starting point for the further development of multiwavelength EDM enabling increased accuracy over long distances through dispersion-based integral refractivity compensation and for remote surface material probing along with distance measurement in laser scanning.
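The underlying distance readout is the standard phase-measurement EDM relation d = cφ/(4πnf); a small sketch, with an assumed group refractive index for air, shows the roughly 15 cm unambiguous range of a 1 GHz beat note that coarser measurement scales must resolve.

```python
import numpy as np

C = 299_792_458.0                      # speed of light in vacuum, m/s

def distance_from_phase(phi_rad, f_beat=1e9, n_group=1.00027):
    """Phase of an intermode beat note -> (ambiguous) distance.

    Two-way phase-measurement EDM relation d = c*phi/(4*pi*n*f);
    n_group is an assumed group refractive index of air. The unambiguous
    range is c/(2*n*f) ~ 0.15 m at a 1 GHz beat frequency.
    """
    return C * phi_rad / (4 * np.pi * n_group * f_beat)

print(distance_from_phase(np.pi / 2))  # quarter-cycle example, ~37 mm
```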
NASA Astrophysics Data System (ADS)
Tenkès, Lucille-Marie; Hollerbach, Rainer; Kim, Eun-jin
2017-12-01
A probabilistic description is essential for understanding growth processes in non-stationary states. In this paper, we compute time-dependent probability density functions (PDFs) in order to investigate stochastic logistic and Gompertz models, which are two of the most popular growth models. We consider different types of short-correlated multiplicative and additive noise sources and compare the time-dependent PDFs in the two models, elucidating the effects of the additive and multiplicative noises on the form of PDFs. We demonstrate an interesting transition from a unimodal to a bimodal PDF as the multiplicative noise increases for a fixed value of the additive noise. A much weaker (leaky) attractor in the Gompertz model leads to a significant (singular) growth of the population of a very small size. We point out the limitation of using stationary PDFs, mean value and variance in understanding statistical properties of the growth in non-stationary states, highlighting the importance of time-dependent PDFs. We further compare these two models from the perspective of information change that occurs during the growth process. Specifically, we define an infinitesimal distance at any time by comparing two PDFs at times infinitesimally apart and sum these distances in time. The total distance along the trajectory quantifies the total number of different states that the system undergoes in time, and is called the information length. We show that the time-evolution of the two models become more similar when measured in units of the information length and point out the merit of using the information length in unifying and understanding the dynamic evolution of different growth processes.
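The time-dependent PDFs discussed above can be estimated numerically by histogramming an Euler-Maruyama ensemble of either model; in this sketch the short-correlated noises are idealized as white, and all parameter values are illustrative.

```python
import numpy as np

def simulate(model, n_paths=20000, n_steps=2000, dt=1e-3,
             gamma=1.0, D_mult=0.05, D_add=1e-4, x0=0.01, seed=0):
    """Euler-Maruyama ensemble for the stochastic logistic and Gompertz
    models with multiplicative and additive white noise. Histogram the
    ensemble at any step to obtain the time-dependent PDF."""
    rng = np.random.default_rng(seed)
    x = np.full(n_paths, x0)
    for _ in range(n_steps):
        if model == "logistic":
            drift = gamma * x * (1 - x)
        else:                                      # Gompertz
            drift = -gamma * x * np.log(np.maximum(x, 1e-12))
        dW_m = rng.normal(0, np.sqrt(dt), n_paths)
        dW_a = rng.normal(0, np.sqrt(dt), n_paths)
        x = x + drift * dt + np.sqrt(2 * D_mult) * x * dW_m \
              + np.sqrt(2 * D_add) * dW_a
        x = np.maximum(x, 0.0)                     # population stays non-negative
    return x

pdf, edges = np.histogram(simulate("logistic"), bins=200, density=True)
```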
Multiband super-resolution imaging of graded-index photonic crystal flat lens
NASA Astrophysics Data System (ADS)
Xie, Jianlan; Wang, Junzhong; Ge, Rui; Yan, Bei; Liu, Exian; Tan, Wei; Liu, Jianjun
2018-05-01
Multiband super-resolution imaging of a point source is achieved by a graded-index photonic crystal flat lens. By calculating six bands of a common photonic crystal (CPC) constructed with scatterers of different refractive indices, it is found that super-resolution imaging of a point source can be realized through different physical mechanisms in three different bands. In the first band, the imaging of the point source is based on the far-field condition of the spherical wave, while in the second band it is based on the negative effective refractive index and exhibits higher imaging quality than that of the CPC. In the fifth band, the imaging of the point source is mainly based on negative refraction of anisotropic equi-frequency surfaces. This method of employing different physical mechanisms to achieve multiband super-resolution imaging of a point source is highly relevant to the field of imaging.
Long Term Temporal and Spectral Evolution of Point Sources in Nearby Elliptical Galaxies
NASA Astrophysics Data System (ADS)
Durmus, D.; Guver, T.; Hudaverdi, M.; Sert, H.; Balman, Solen
2016-06-01
We present the results of an archival study of all the point sources detected in the lines of sight of the elliptical galaxies NGC 4472, NGC 4552, NGC 4649, M32, Maffei 1, NGC 3379, IC 1101, M87, NGC 4477, NGC 4621, and NGC 5128, with both the Chandra and XMM-Newton observatories. Specifically, we studied the temporal and spectral evolution of these point sources over the course of the observations of the galaxies, mostly covering the 2000 - 2015 period. In this poster we present the first results of this study, which allows us to further constrain the X-ray source population in nearby elliptical galaxies and also better understand the nature of individual point sources.
Very Luminous X-ray Point Sources in Starburst Galaxies
NASA Astrophysics Data System (ADS)
Colbert, E.; Heckman, T.; Ptak, A.; Weaver, K. A.; Strickland, D.
Extranuclear X-ray point sources in external galaxies with luminosities above 10^39.0 erg/s are quite common in elliptical, disk, and dwarf galaxies, with an average of ~0.5 sources per galaxy. These objects may be a new class of object, perhaps accreting intermediate-mass black holes, or beamed stellar-mass black hole binaries. Starburst galaxies tend to have a larger number of these intermediate-luminosity X-ray objects (IXOs), as well as a large number of lower-luminosity (10^37 - 10^39 erg/s) point sources. These point sources dominate the total hard X-ray emission in starburst galaxies. We present a review of both types of objects and discuss possible schemes for their formation.
Alvarsson, Jonathan; Andersson, Claes; Spjuth, Ola; Larsson, Rolf; Wikberg, Jarl E S
2011-05-20
Compound profiling and drug screening generate large amounts of data and are generally based on microplate assays. Current information systems used for handling such data are mainly commercial, closed source, expensive, and heavyweight, and there is a need for a flexible, lightweight, open system for handling plate design, validation, and preparation of data. A Bioclipse plugin consisting of a client part and a relational database was constructed. A multiple-step plate layout point-and-click interface was implemented inside Bioclipse. The system contains a data validation step, where outliers can be removed, and finally a plate report with all relevant calculated data, including dose-response curves. Brunn is capable of handling the data from microplate assays. It can create dose-response curves and calculate IC50 values. Using a system of this sort facilitates work in the laboratory. Being able to reuse already constructed plates and plate layouts by starting out from an earlier step in the plate layout design process saves time and cuts down on error sources.
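Since the abstract's key outputs are dose-response curves and IC50 values, here is a minimal sketch of the underlying calculation, assuming a standard four-parameter logistic (4PL) model; the concentrations and responses are invented, and this is not Brunn's actual Java implementation.

```python
# Sketch: fitting a four-parameter logistic (4PL) dose-response curve and
# reading off the IC50, as plate-analysis tools like Brunn do internally.
# The data below are made up for illustration.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, top, bottom, ic50, hill):
    """4PL model: response as a function of concentration."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

conc = np.array([1e-9, 1e-8, 1e-7, 1e-6, 1e-5, 1e-4])   # molar
resp = np.array([98.0, 95.0, 80.0, 45.0, 12.0, 4.0])    # % viability

params, _ = curve_fit(four_pl, conc, resp,
                      p0=[100.0, 0.0, 1e-6, 1.0], maxfev=10_000)
top, bottom, ic50, hill = params
print(f"IC50 ~ {ic50:.2e} M (Hill slope {hill:.2f})")
```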
Energy Spectral Behaviors of Communication Networks of Open-Source Communities
Yang, Jianmei; Yang, Huijie; Liao, Hao; Wang, Jiangtao; Zeng, Jinqun
2015-01-01
Large-scale online collaborative production in open-source communities must be accompanied by large-scale communication. Nowadays, the production activities of open-source communities, and especially their communication activities, receive more and more attention. Taking the CodePlex C# community as an example, this paper constructs complex-network models of 12 periods of the community's communication structure based on real data; it then discusses the basic concepts of quantum mappings of complex networks, and points out that the purpose of such mappings is to study the structures of complex networks following the idea used in quantum mechanics to study the structures of large molecules; finally, following this idea, it analyzes and compares the fractal features of the spectra under different quantum mappings of the networks, and concludes that multiple self-similarity and criticality are present in the communication structures of the community. In addition, this paper discusses the insights offered by different quantum mappings, and the conditions for their application, in revealing structural characteristics. The proposed quantum mapping method can also be applied to structural studies of other large-scale organizations. PMID:26047331
An IR Navigation System for Pleural PDT
NASA Astrophysics Data System (ADS)
Zhu, Timothy; Liang, Xing; Kim, Michele; Finlay, Jarod; Dimofte, Andreea; Rodriguez, Carmen; Simone, Charles; Friedberg, Joseph; Cengel, Keith
2015-03-01
Pleural photodynamic therapy (PDT) has been used as an adjuvant treatment with lung-sparing surgery for malignant pleural mesothelioma (MPM). In the current pleural PDT protocol, a moving fiber-based point source is used to deliver the light, and the light fluences at multiple locations are monitored by several isotropic detectors placed in the pleural cavity. To improve the uniformity of the delivered light fluence, an infrared (IR) navigation system is used to track the motion of the light source in real time at a rate of 20-60 Hz. A treatment planning system uses the laser source positions obtained from the IR camera to calculate the light fluence distribution and monitor the light dose uniformity on the surface of the pleural cavity. A novel reconstruction algorithm is used to determine the pleural cavity surface contour. A dual-correction method is used to match the calculated fluences at the detector locations to the detector readings. Preliminary data from a phantom show superior light uniformity using this method. Light fluence uniformity from patient treatments is also shown with and without the correction method.
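As a rough illustration of the fluence bookkeeping described above, the sketch below accumulates fluence on surface points from tracked source positions with a bare inverse-square kernel and applies a single global scale factor in place of the paper's dual-correction method; every geometry and number here is invented.

```python
# Sketch: accumulating light fluence on surface points from a tracked moving
# point source, then rescaling so that calculated fluences match isotropic
# detector readings. The inverse-square kernel ignores scattered light, and
# the single global correction stands in for the paper's dual-correction.
import numpy as np

def accumulate_fluence(surface_pts, src_positions, power_w, dt_s):
    """Fluence (J/cm^2) at surface points from source samples at 20-60 Hz."""
    fluence = np.zeros(len(surface_pts))
    for s in src_positions:
        r2 = np.sum((surface_pts - s) ** 2, axis=1)      # cm^2
        fluence += power_w * dt_s / (4.0 * np.pi * r2)   # primary light only
    return fluence

rng = np.random.default_rng(1)
surface = rng.normal(0, 5, (500, 3))         # stand-in cavity surface points
track = rng.normal(0, 2, (1200, 3))          # IR-tracked source path samples

calc = accumulate_fluence(surface, track, power_w=1.0, dt_s=1 / 20)
measured = calc[:4] * 1.15                   # pretend isotropic-detector readings
scale = np.mean(measured / calc[:4])         # one global correction factor
print("corrected max/mean fluence:", (scale * calc).max(), (scale * calc).mean())
```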
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dong, Xiquan
In this proposed research, we will investigate how different meteorological regimes and aerosol sources affect DCS properties, diurnal and life cycles, and precipitation using multiple observational platforms (surface, satellite, and aircraft) and NARR reanalysis at the ARM SGP site. The Feng et al. (2011, 2012) DCS results will serve as a starting point for this proposed research, and help us to address some fundamental issues of DCSs, such as convective initiation, rain rate, areal extent (including stratiform and convective regions), and longevity. Convective properties will be stratified by meteorological regime (synoptic/mesoscale patterns) identified by reanalysis. Aerosol information obtained from the ARM SGP site will also be stratified by meteorological regimes to understand their effects on convection. Finally, the aircraft in-situ measurements and various radar observations and retrievals during the MC3E campaign will provide a “cloud-truth” dataset and are an invaluable data source for verifying the findings and investigating the proposed hypotheses in Objective 1.
The resolution of point sources of light as analyzed by quantum detection theory
NASA Technical Reports Server (NTRS)
Helstrom, C. W.
1972-01-01
The resolvability of point sources of incoherent light is analyzed by quantum detection theory in terms of two hypothesis-testing problems. In the first, the observer must decide whether there are two sources of equal radiant power at given locations, or whether there is only one source of twice the power located midway between them. In the second problem, either one, but not both, of two point sources is radiating, and the observer must decide which it is. The decisions are based on optimum processing of the electromagnetic field at the aperture of an optical instrument. In both problems the density operators of the field under the two hypotheses do not commute. The error probabilities, determined as functions of the separation of the points and the mean number of received photons, characterize the ultimate resolvability of the sources.
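For reference, the minimum error probability in such binary quantum hypothesis tests has a standard closed form, the Helstrom bound; the statement below is a textbook result and is not quoted from the abstract.

```latex
% Helstrom bound: minimum error probability for discriminating two
% equiprobable quantum states \rho_1, \rho_2 (non-commuting in general).
P_e^{\min} \;=\; \frac{1}{2}\Bigl(1 - \tfrac{1}{2}\lVert \rho_1 - \rho_2 \rVert_1\Bigr),
\qquad
\lVert A \rVert_1 \;=\; \operatorname{Tr}\sqrt{A^{\dagger}A}.
```

As the separation of the point sources or the mean number of received photons decreases, the two density operators become less distinguishable, the trace distance tends to zero, and P_e approaches the guessing value 1/2, which is how the error probabilities quantify the ultimate resolvability of the sources.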
Profiling Students' Multiple Source Use by Question Type
ERIC Educational Resources Information Center
List, Alexandra; Grossnickle, Emily M.; Alexander, Patricia A.
2016-01-01
The present study examined undergraduate students' multiple source use in response to two different types of academic questions, one discrete and one open-ended. Participants (N = 240) responded to two questions using a library of eight digital sources, varying in source type (e.g., newspaper article) and reliability (e.g., authors' credentials).…
ERIC Educational Resources Information Center
Lee, Kwan Min; Nass, Clifford
2004-01-01
Two experiments examine the effect of multiple synthetic voices in an e-commerce context. In Study 1, participants (N=40) heard five positive reviews about a book from five different synthetic voices or from a single synthetic voice. Consistent with the multiple source effect, results showed that participants hearing multiple synthetic voices…
Ouwehand, Kim; van Gog, Tamara; Paas, Fred
2016-10-01
Research showed that source memory functioning declines with ageing. Evidence suggests that encoding visual stimuli with manual pointing in addition to visual observation can have a positive effect on spatial memory compared with visual observation only. The present study investigated whether pointing at picture locations during encoding would lead to better spatial source memory than naming (Experiment 1) and visual observation only (Experiment 2) in young and older adults. Experiment 3 investigated whether response modality during the test phase would influence spatial source memory performance. Experiments 1 and 2 supported the hypothesis that pointing during encoding led to better source memory for picture locations than naming or observation only. Young adults outperformed older adults on the source memory but not the item memory task in both Experiments 1 and 2. In Experiments 1 and 2, participants manually responded in the test phase. Experiment 3 showed that if participants had to verbally respond in the test phase, the positive effect of pointing compared with naming during encoding disappeared. The results suggest that pointing at picture locations during encoding can enhance spatial source memory in both young and older adults, but only if the response modality is congruent in the test phase.
Kaduszkiewicz, Hanna; Zimmermann, Thomas; Beck-Bornholdt, Hans-Peter; van den Bussche, Hendrik
2005-01-01
Objectives Pharmacological treatment of Alzheimer's disease focuses on correcting the cholinergic deficiency in the central nervous system with cholinesterase inhibitors. Three cholinesterase inhibitors are currently recommended: donepezil, rivastigmine, and galantamine. This review assessed the scientific evidence for the recommendation of these agents. Data sources The terms “donepezil”, “rivastigmine”, and “galantamine”, limited by “randomized-controlled-trials” were searched in Medline (1989-November 2004), Embase (1989-November 2004), and the Cochrane Database of Systematic Reviews without restriction for language. Study selection All published, double blind, randomised controlled trials examining efficacy on the basis of clinical outcomes, in which treatment with donepezil, rivastigmine, or galantamine was compared with placebo in patients with Alzheimer's disease, were included. Each study was assessed independently, following a predefined checklist of criteria of methodological quality. Results 22 trials met the inclusion criteria. Follow-up ranged from six weeks to three years. 12 of 14 studies measuring the cognitive outcome by means of the 70 point Alzheimer's disease assessment scale—cognitive subscale showed differences ranging from 1.5 points to 3.9 points in favour of the respective cholinesterase inhibitors. Benefits were also reported from all 12 trials that used the clinician's interview based impression of change scale with input from caregivers. Methodological assessment of all studies found considerable flaws—for example, multiple testing without correction for multiplicity or exclusion of patients after randomisation. Conclusion Because of flawed methods and small clinical benefits, the scientific basis for recommendations of cholinesterase inhibitors for the treatment of Alzheimer's disease is questionable. PMID:16081444
Integration of optical imaging with a small animal irradiator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weersink, Robert A., E-mail: robert.weersink@rmp.uhn.on.ca; Ansell, Steve; Wang, An
Purpose: The authors describe the integration of optical imaging with a targeted small animal irradiator device, focusing on design, instrumentation, 2D to 3D image registration, 2D targeting, and the accuracy of recovering and mapping the optical signal to a 3D surface generated from the cone-beam computed tomography (CBCT) imaging. The integration of optical imaging will improve targeting of the radiation treatment and offer longitudinal tracking of tumor response of small animal models treated using the system. Methods: The existing image-guided small animal irradiator consists of a variable kilovolt (peak) x-ray tube mounted opposite an aSi flat panel detector, both mounted on a c-arm gantry. The tube is used for both CBCT imaging and targeted irradiation. The optical component employs a CCD camera perpendicular to the x-ray treatment/imaging axis with a computer controlled filter for spectral decomposition. Multiple optical images can be acquired at any angle as the gantry rotates. The optical to CBCT registration, which uses a standard pinhole camera model, was modeled and tested using phantoms with markers visible in both optical and CBCT images. Optically guided 2D targeting in the anterior/posterior direction was tested on an anthropomorphic mouse phantom with embedded light sources. The accuracy of the mapping of optical signal to the CBCT surface was tested using the same mouse phantom. A surface mesh of the phantom was generated based on the CBCT image and optical intensities projected onto the surface. The measured surface intensity was compared to the calculated surface intensity for a point source at the actual source position. The point-source position was also optimized to provide the closest match between measured and calculated intensities, and the distance between the optimized and actual source positions was then calculated. This process was repeated for multiple wavelengths and sources. Results: The optical to CBCT registration error was 0.8 mm. Two-dimensional targeting of a light source in the mouse phantom based on optical imaging along the anterior/posterior direction was accurate to 0.55 mm. The mean square residual error in the normalized measured projected surface intensities versus the calculated normalized intensities ranged between 0.0016 and 0.006. Optimizing the position reduced this error to between 0.00016 and 0.0004, with distances ranging between 0.7 and 1 mm between the actual and calculated source positions. Conclusions: The integration of optical imaging on an existing small animal irradiation platform has been accomplished. A targeting accuracy of 1 mm can be achieved in rigid, homogeneous phantoms. The combination of optical imaging with a CBCT image-guided small animal irradiator offers the potential to deliver functionally targeted dose distributions, as well as monitor spatial and temporal functional changes that occur with radiation therapy.
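The position-optimization step lends itself to a compact illustration. The sketch below recovers a point-source position by least-squares matching of normalized surface intensities, assuming a bare inverse-square forward model; the real system models diffuse light transport in tissue, so this is only a schematic analogue with invented geometry.

```python
# Sketch: recovering a point-source position by minimizing the residual between
# normalized measured and calculated surface intensities. A bare 1/r^2 forward
# model is assumed for illustration; the real system models diffuse light.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
surf = rng.normal(0, 10, (300, 3))                    # mm, stand-in surface mesh
true_src = np.array([1.0, -2.0, 3.0])

def normalized_intensity(src):
    inten = 1.0 / np.sum((surf - src) ** 2, axis=1)
    return inten / inten.sum()

meas = normalized_intensity(true_src)
meas = meas + rng.normal(0, 0.02 * meas.mean(), meas.size)   # measurement noise

def cost(src):
    return np.sum((normalized_intensity(src) - meas) ** 2)

fit = minimize(cost, x0=np.zeros(3), method="Nelder-Mead")
print("recovered source:", fit.x,
      " error [mm]:", np.linalg.norm(fit.x - true_src))
```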
Management of Globally Distributed Software Development Projects in Multiple-Vendor Constellations
NASA Astrophysics Data System (ADS)
Schott, Katharina; Beck, Roman; Gregory, Robert Wayne
Global information systems development outsourcing is an apparent trend that is expected to continue in the foreseeable future. IS-related services are increasingly provided not only from different geographical sites simultaneously but also by multiple service providers based in different countries. The purpose of this paper is to understand how the involvement of multiple service providers affects the management of globally distributed information systems development projects. As research on this topic is scarce, we applied an exploratory, in-depth single-case study design as our research approach. The case we analyzed comprises a global software development outsourcing project initiated by a German bank together with several globally distributed vendors. For data collection and analysis we adopted techniques suggested by the grounded theory method. Whereas the extant literature points out the increased management overhead associated with multi-sourcing, the analysis of our case suggests that the effort required for managing global outsourcing projects with multiple vendors depends, among other things, on the maturity of the cooperation within the vendor portfolio. Furthermore, our data indicate that this interplay maturity is positively influenced by knowledge about the client derived from already existing client-vendor relationships. The paper concludes by offering theoretical and practical implications.
Multiple excitation nano-spot generation and confocal detection for far-field microscopy.
Mondal, Partha Pratim
2010-03-01
An imaging technique is developed for the controlled generation of multiple excitation nano-spots for far-field microscopy. The system point spread function (PSF) is obtained by interfering two counter-propagating extended depth-of-focus PSF (DoF-PSF), resulting in highly localized multiple excitation spots along the optical axis. The technique permits (1) simultaneous excitation of multiple planes in the specimen; (2) control of the number of spots by confocal detection; and (3) overcoming the point-by-point based excitation. Fluorescence detection from the excitation spots can be efficiently achieved by Z-scanning the detector/pinhole assembly. The technique complements most of the bioimaging techniques and may find potential application in high resolution fluorescence microscopy and nanoscale imaging.
WATER QUALITY IN SOURCE WATER, TREATMENT, AND DISTRIBUTION SYSTEMS
Most drinking water utilities practice the multiple-barrier concept as the guiding principle for providing safe water. This chapter discusses multiple barriers as they relate to the basic criteria for selecting and protecting source waters, including known and potential sources ...
This paper focuses on trading schemes in which regulated point sources are allowed to avoid upgrading their pollution control technology to meet water quality-based effluent limits if they pay for equivalent (or greater) reductions in nonpoint source pollution.
The Microbial Source Module (MSM) estimates microbial loading rates to land surfaces from non-point sources, and to streams from point sources for each subwatershed within a watershed. A subwatershed, the smallest modeling unit, represents the common basis for information consume...
Code of Federal Regulations, 2010 CFR
2010-07-01
... ORGANIC CHEMICALS, PLASTICS, AND SYNTHETIC FIBERS Direct Discharge Point Sources That Use End-of-Pipe... subcategory of direct discharge point sources that use end-of-pipe biological treatment. 414.90 Section 414.90... that use end-of-pipe biological treatment. The provisions of this subpart are applicable to the process...
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 29 2010-07-01 2010-07-01 false BAT and NSPS Effluent Limitations for Priority Pollutants for Direct Discharge Point Sources That use End-of-Pipe Biological Treatment 4 Table 4... Limitations for Priority Pollutants for Direct Discharge Point Sources That use End-of-Pipe Biological...
Multi-rate, real time image compression for images dominated by point sources
NASA Technical Reports Server (NTRS)
Huber, A. Kris; Budge, Scott E.; Harris, Richard W.
1993-01-01
An image compression system recently developed for compression of digital images dominated by point sources is presented. Encoding consists of minimum-mean removal, vector quantization, adaptive threshold truncation, and modified Huffman encoding. Simulations are presented showing that the peaks corresponding to point sources can be transmitted losslessly for low signal-to-noise ratios (SNR) and high point source densities while maintaining a reduced output bit rate. Encoding and decoding hardware has been built and tested which processes 552,960 12-bit pixels per second at compression rates of 10:1 and 4:1. Simulation results are presented for the 10:1 case only.
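A toy sketch of the first two encoder stages may clarify how peaks survive losslessly: per-block mean removal followed by adaptive threshold truncation that passes point-source samples through exactly while coarsely quantizing the background. Vector quantization and the modified Huffman coder are omitted, and all thresholds below are invented.

```python
# Toy sketch of two encoder stages: block mean removal and adaptive threshold
# truncation that keeps bright point-source samples exact while coarsely
# quantizing the background. VQ and the modified Huffman coder are omitted.
import numpy as np

def encode_block(block, k_sigma=4.0, background_bits=4):
    mean = block.mean()
    resid = block - mean
    thresh = k_sigma * resid.std()                 # adaptive, per-block threshold
    peaks = np.abs(resid) > thresh                 # point-source samples: exact
    step = max(np.ptp(resid[~peaks]), 1e-9) / (2 ** background_bits)
    coarse = np.round(resid / step) * step         # background: coarse quantization
    recon = np.where(peaks, resid, coarse) + mean
    return recon, peaks

rng = np.random.default_rng(3)
img = rng.normal(100, 5, (16, 16))
img[3, 7] += 400.0                                 # a bright point source
recon, peaks = encode_block(img)
assert np.allclose(recon[peaks], img[peaks])       # peaks survive losslessly
```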
The Treatment Train approach to reducing non-point source pollution from agriculture
NASA Astrophysics Data System (ADS)
Barber, N.; Reaney, S. M.; Barker, P. A.; Benskin, C.; Burke, S.; Cleasby, W.; Haygarth, P.; Jonczyk, J. C.; Owen, G. J.; Snell, M. A.; Surridge, B.; Quinn, P. F.
2016-12-01
An experimental approach has been applied to an agricultural catchment in NW England, where non-point pollution adversely affects freshwater ecology. The aim of the work (as part of the River Eden Demonstration Test Catchment project) is to develop techniques to manage agricultural runoff whilst maintaining food production. The approach used is the Treatment Train (TT), which applies multiple connected mitigation options that control nutrient and fine sediment pollution at source, and address polluted runoff pathways at increasing spatial scale. The principal agricultural practices in the study sub-catchment (1.5 km²) are dairy and stock production. Farm yards can act as significant pollution sources by housing large numbers of animals; these areas are addressed initially with infrastructure improvements, e.g. clean/dirty water separation and upgraded waste storage. In-stream high resolution monitoring of hydrology and water quality parameters showed high-discharge events to account for the majority of pollutant exports (~80% total phosphorus; ~95% fine sediment), and primary transfer routes to be surface and shallow sub-surface flow pathways, including drains. To manage these pathways and reduce hydrological connectivity, a series of mitigation features were constructed to intercept and temporarily store runoff. Farm tracks, field drains, first order ditches and overland flow pathways were all targeted. The efficacy of the mitigation features has been monitored at event and annual scale, using inflow-outflow sampling and sediment/nutrient accumulation measurements, respectively. Data presented here show varied but positive results in terms of reducing acute and chronic sediment and nutrient losses. An aerial fly-through of the catchment is used to demonstrate how the TT has been applied to a fully-functioning agricultural landscape. The elevated perspective provides a better understanding of the spatial arrangement of mitigation features, and how they can be implemented without impacting on the farm's primary function. The TT has the potential to yield benefits beyond those associated with water quality. Increasing catchment resilience through the use of landscape interventions can provide multiple benefits by mitigating for floods and droughts and creating ecological habitat.
Zhou, Liang; Xu, Jian-Gang; Sun, Dong-Qi; Ni, Tian-Hua
2013-02-01
Agricultural non-point source pollution is an important factor in river deterioration, so identifying key source areas and concentrating control efforts on them are the most effective approaches to non-point source pollution control. This study adopts an inventory method to analyze four kinds of pollution sources and their emission intensities for chemical oxygen demand (COD), total nitrogen (TN), and total phosphorus (TP) in 173 counties (cities, districts) in the Huaihe River Basin. The four pollution sources are livestock breeding, rural life, farmland cultivation, and aquaculture. The paper mainly addresses the identification of non-point pollution sensitivity areas, key pollution sources, and their spatial distribution characteristics through clustering, sensitivity evaluation, and spatial analysis. A geographic information system (GIS) and SPSS were used to carry out this study. The results show that the COD, TN, and TP emissions of agricultural non-point sources in the Huaihe River Basin in 2009 were 206.74 × 10^4 t, 66.49 × 10^4 t, and 8.74 × 10^4 t, respectively; the emission intensities were 7.69, 2.47, and 0.32 t·hm^-2; and the proportions of COD, TN, and TP emissions were 73%, 24%, and 3%. The major pollution sources of COD, TN, and TP were livestock breeding and rural life. The sensitivity areas and priority pollution control areas for non-point source pollution within the basin are sub-basins of the upper branches of the Huaihe River, such as the Shahe, Yinghe, Beiru, Jialu, and Qingyi rivers; livestock breeding is the key pollution source in the priority pollution control areas. Finally, the paper concludes that rural life has the highest pollution contribution rate, while comprehensive pollution is a type that is hard to control.
1991-11-01
Nicholas George "Image Deblurring for Multiple-Point Impulse Responses," Bryan J. Stossel and Nicholas George 14. SUBJECT TERMS 15. NUMBER OF PAGES...Keith B. Farr Nicholas George Backscatter from a Tilted Rough Disc Donald J. Schertler Nicholas George Image Deblurring for Multiple-Point Impulse ...correlation components. Uf) c)z 0 CL C/) Ix I- z 0 0 LL C,z -J a 0l IMAGE DEBLURRING FOR MULTIPLE-POINT IMPULSE RESPONSES Bryan J. Stossel and Nicholas George
Jiang, Mengzhen; Chen, Haiying; Chen, Qinghui
2013-11-01
With the purpose of providing a scientific basis for environmental planning on non-point source pollution prevention and control, and of improving pollution regulation efficiency, this paper established a Grid Landscape Contrast Index based on the Location-weighted Landscape Contrast Index, following the "source-sink" theory. The spatial distribution of non-point source pollution in the Jiulongjiang Estuary could be worked out using high-resolution remote sensing images. The results showed that the "source" area for nitrogen and phosphorus in the Jiulongjiang Estuary was 534.42 km² in 2008, and the "sink" area was 172.06 km². The "source" of non-point source pollution was distributed mainly over Xiamen Island, most of Haicang, the east of Jiaomei, and the river banks of Gangwei and Shima; the "sink" was distributed over the southwest of Xiamen Island and the west of Shima. Generally speaking, the intensity of the "source" weakens as the distance from the sea boundary increases, while the "sink" strengthens. Copyright © 2013 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Shih, Ching-Hsiang; Chang, Man-Ling; Shih, Ching-Tien
2009-01-01
This study evaluated whether two people with multiple disabilities and minimal motor behavior would be able to improve their pointing performance using finger poke ability with a mouse wheel through a Dynamic Pointing Assistive Program (DPAP) and a newly developed mouse driver (i.e., a new mouse driver replaces standard mouse driver, changes a…
ERIC Educational Resources Information Center
Shih, Ching-Hsiang; Chiu, Sheng-Kai; Chu, Chiung-Ling; Shih, Ching-Tien; Liao, Yung-Kun; Lin, Chia-Chen
2010-01-01
This study evaluated whether two people with multiple disabilities would be able to improve their pointing performance using hand swing with a standard mouse through an Extended Dynamic Pointing Assistive Program (EDPAP) and a newly developed mouse driver (i.e., a new mouse driver replaces standard mouse driver, and changes a mouse into a precise…
Some Characteristics of Current Star Formation in the 30 Doradus Nebula Revealed by HST/NICMOS
NASA Astrophysics Data System (ADS)
Walborn, Nolan R.; Barbá, Rodolfo H.; Brandner, Wolfgang; Rubio, Mónica; Grebel, Eva K.; Probst, Ronald G.
1999-01-01
The extensive ``second generation'' of star formation within the 30 Doradus Nebula, evidently triggered by the R136 central cluster around its periphery, has been imaged with the Near-Infrared Camera and Multi-Object Spectrometer (NICMOS) on the Hubble Space Telescope. Many new IR sources, including multiple systems, clusters, and nebular structures, are found in these images. Six of the NICMOS fields are described here, in comparison with the WFPC2 images of the same fields. Knots 1-3 of Walborn & Blades (early O stars embedded in dense nebular knots) are all found to be compact multiple systems. Knot 1 is shown to reside at the top of a massive dust pillar oriented directly toward R136, whose summit has just been removed, exposing the newborn stellar system. Knots 1 and 3 are also near the brightest IR sources in the region, while parsec-scale jet structures are discovered in association with Knots 2 and 3. The Knot 2 structures consist of detached, nonstellar IR sources aligned on either side of the stellar system, which are interpreted as impact points of a highly collimated, possibly rotating bipolar jet on the surrounding dark clouds; the H_2O maser found by Whiteoak et al. is also in this field. These outflows from young massive stars in 30 Dor are the first extragalactic examples of the phenomenon. In the field of the pillars south of R136, recently discussed in comparison with the M16 pillars by Scowen et al., a new luminous stellar IR source has been discovered. These results establish the 30 Doradus Nebula as a prime region in which to investigate the formation and very early evolution of massive stars and multiple systems. The theme of triggered formation within the heads of extensive dust pillars oriented toward R136 is strong. In addition, these results provide further insights into the global structure and evolution of 30 Doradus, which are significant in view of its status as the best resolved extragalactic starburst. This paper is dedicated to W. W. Morgan, who taught me the power of morphology to uncover new phenomena in astronomy.-N. R. W.
NASA Astrophysics Data System (ADS)
Tomljenovic, Ivan; Tiede, Dirk; Blaschke, Thomas
2016-10-01
In the past two decades Object-Based Image Analysis (OBIA) established itself as an efficient approach for the classification and extraction of information from remote sensing imagery and, increasingly, from non-image based sources such as Airborne Laser Scanner (ALS) point clouds. ALS data is represented in the form of a point cloud with recorded multiple returns and intensities. In our work, we combined OBIA with ALS point cloud data in order to identify and extract buildings as 2D polygons representing roof outlines in a top down mapping approach. We performed rasterization of the ALS data into a height raster for the purpose of the generation of a Digital Surface Model (DSM) and a derived Digital Elevation Model (DEM). Further objects were generated in conjunction with point statistics from the linked point cloud. With the use of class modelling methods, we generated the final target class of objects representing buildings. The approach was developed for a test area in Biberach an der Riß (Germany). In order to point out the possibilities of adaptation-free transferability to another data set, the algorithm has been applied "as is" to the ISPRS Benchmarking data set of Toronto (Canada). The obtained results show high accuracies for the initial study area (thematic accuracies of around 98%, geometric accuracy of above 80%). The very high performance within the ISPRS Benchmark without any modification of the algorithm and without any adaptation of parameters is particularly noteworthy.
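The rasterization step described above reduces to a per-cell aggregation, sketched below with per-cell maxima for the DSM and per-cell minima as a crude ground surface; real DEM generation uses proper ground filtering and interpolation, so all data and parameters here are only schematic.

```python
# Sketch: rasterizing an ALS point cloud into a DSM (per-cell max height) and
# a crude ground raster (per-cell min); their difference (nDSM) highlights
# elevated objects such as buildings. Cell size and data are illustrative.
import numpy as np

def rasterize(xyz, cell=1.0):
    x, y, z = xyz.T
    ix = ((x - x.min()) / cell).astype(int)
    iy = ((y - y.min()) / cell).astype(int)
    shape = (iy.max() + 1, ix.max() + 1)
    dsm = np.full(shape, -np.inf)
    dem = np.full(shape, np.inf)
    np.maximum.at(dsm, (iy, ix), z)    # highest return per cell -> DSM
    np.minimum.at(dem, (iy, ix), z)    # lowest return per cell -> crude ground
    return dsm, dem

rng = np.random.default_rng(4)
pts = rng.uniform(0, 100, (20_000, 3))
pts[:, 2] *= 0.02                                        # gentle terrain
bld = (pts[:, 0] < 30) & (pts[:, 1] < 30)
bld &= rng.random(len(pts)) < 0.8                        # some returns reach ground
pts[bld, 2] += 8.0                                       # a "building" block
dsm, dem = rasterize(pts)
print("cells with nDSM > 2 m:", int(((dsm - dem) > 2.0).sum()))
```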
Terrestrial laser scanning in monitoring of anthropogenic objects
NASA Astrophysics Data System (ADS)
Zaczek-Peplinska, Janina; Kowalska, Maria
2017-12-01
The registered xyz coordinates in the form of a point cloud captured by a terrestrial laser scanner and the intensity values (I) assigned to them make it possible to perform geometric and spectral analyses. Comparison of point clouds registered in different time periods requires conversion of the data to a common coordinate system and proper data selection. Factors like point distribution dependent on the distance between the scanner and the surveyed surface, angle of incidence, tasked scan density and intensity value have to be taken into consideration. A prerequisite for running a correct analysis of the point clouds registered during periodic measurements using a laser scanner is the ability to determine the quality and accuracy of the analysed data. The article presents a concept of spectral data adjustment based on geometric analysis of a surface, as well as examples of analyses integrating geometric and physical data in one cloud of points: point coordinates, recorded intensity values, and thermal images of an object. The experiments described here demonstrate the many possible uses of terrestrial laser scanning data and the necessity of multi-aspect, multi-source analyses in anthropogenic object monitoring. The article presents examples of multisource data analyses with regard to intensity value correction due to the beam's incidence angle. The measurements were performed using a Leica Nova MS50 scanning total station, a Z+F Imager 5010 scanner and the integrated Z+F T-Cam thermal camera.
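As a minimal illustration of incidence-angle intensity correction, the sketch below applies a Lambertian cos(alpha) model together with a 1/r² range term, a common first-order correction; the exact model used in the article may differ, and the wall geometry is invented.

```python
# Sketch: first-order correction of recorded TLS intensity for incidence angle
# (Lambertian cos(alpha) model) and range (1/r^2), a common starting point for
# making scan epochs comparable.
import numpy as np

def correct_intensity(I, pts, normals, scanner_pos, r_ref=10.0):
    vec = pts - scanner_pos
    r = np.linalg.norm(vec, axis=1)
    cos_a = np.abs(np.sum(-vec / r[:, None] * normals, axis=1))
    cos_a = np.clip(cos_a, 0.05, 1.0)        # avoid blow-up at grazing angles
    return I * (r / r_ref) ** 2 / cos_a

# Example: a flat wall scanned obliquely from one station.
pts = np.column_stack([np.linspace(5, 25, 100), np.zeros(100), np.zeros(100)])
normals = np.tile([0.0, 1.0, 0.0], (100, 1))
I_raw = np.ones(100)                          # pretend recorded intensities
I_corr = correct_intensity(I_raw, pts, normals, scanner_pos=np.array([0.0, 10.0, 0.0]))
print("corrected intensity range:", I_corr.min(), "-", I_corr.max())
```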
Statistical approaches for the determination of cut points in anti-drug antibody bioassays.
Schaarschmidt, Frank; Hofmann, Matthias; Jaki, Thomas; Grün, Bettina; Hothorn, Ludwig A
2015-03-01
Cut points in immunogenicity assays are used to classify future specimens into anti-drug antibody (ADA) positive or negative. To determine a cut point during pre-study validation, drug-naive specimens are often analyzed on multiple microtiter plates taking sources of future variability into account, such as runs, days, analysts, gender, drug-spiked and the biological variability of un-spiked specimens themselves. Five phenomena may complicate the statistical cut point estimation: i) drug-naive specimens may contain already ADA-positives or lead to signals that erroneously appear to be ADA-positive, ii) mean differences between plates may remain after normalization of observations by negative control means, iii) experimental designs may contain several factors in a crossed or hierarchical structure, iv) low sample sizes in such complex designs lead to low power for pre-tests on distribution, outliers and variance structure, and v) the choice between normal and log-normal distribution has a serious impact on the cut point. We discuss statistical approaches to account for these complex data: i) mixture models, which can be used to analyze sets of specimens containing an unknown, possibly larger proportion of ADA-positive specimens, ii) random effects models, followed by the estimation of prediction intervals, which provide cut points while accounting for several factors, and iii) diagnostic plots, which allow the post hoc assessment of model assumptions. All methods discussed are available in the corresponding R add-on package mixADA. Copyright © 2015 Elsevier B.V. All rights reserved.
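A minimal numerical sketch of approach (ii) in its simplest form: a parametric cut point computed as an upper 95% prediction limit on the log scale. It deliberately ignores the plate/run random-effects structure, the mixture modelling, and the outlier handling that the paper and the mixADA package address; the data below are simulated.

```python
# Sketch: a parametric screening cut point from drug-naive signals, computed
# as an upper 95% prediction limit on the log scale and back-transformed.
# This ignores random effects, mixtures, and outlier diagnostics.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
signal = rng.lognormal(mean=0.0, sigma=0.25, size=60)   # drug-naive specimens

logs = np.log(signal)
n = logs.size
m, s = logs.mean(), logs.std(ddof=1)
t = stats.t.ppf(0.95, df=n - 1)
cut_point = np.exp(m + t * s * np.sqrt(1 + 1 / n))      # prediction limit
print(f"screening cut point: {cut_point:.3f} "
      f"({(signal > cut_point).mean():.1%} of naive specimens above it)")
```

The choice between the normal and log-normal model enters here through the log transform; fitting on the raw scale instead would shift the cut point, which is exactly the sensitivity the paper warns about.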
NASA Astrophysics Data System (ADS)
Liu, Jinxin; Chen, Xuefeng; Gao, Jiawei; Zhang, Xingwu
2016-12-01
Air vehicles, space vehicles and underwater vehicles, the cabins of which can be viewed as variable section cylindrical structures, have multiple rotational vibration sources (e.g., engines, propellers, compressors and motors), making the spectrum of noise multiple-harmonic. The suppression of such noise has been a focus of interests in the field of active vibration control (AVC). In this paper, a multiple-source multiple-harmonic (MSMH) active vibration suppression algorithm with feed-forward structure is proposed based on reference amplitude rectification and conjugate gradient method (CGM). An AVC simulation scheme called finite element model in-loop simulation (FEMILS) is also proposed for rapid algorithm verification. Numerical studies of AVC are conducted on a variable section cylindrical structure based on the proposed MSMH algorithm and FEMILS scheme. It can be seen from the numerical studies that: (1) the proposed MSMH algorithm can individually suppress each component of the multiple-harmonic noise with an unified and improved convergence rate; (2) the FEMILS scheme is convenient and straightforward for multiple-source simulations with an acceptable loop time. Moreover, the simulations have similar procedure to real-life control and can be easily extended to physical model platform.
Searching Information Sources in Networks
2017-06-14
During the course of this project, we made significant progress in multiple directions of information source detection, including: (1) the first result on information source detection on non-tree networks; (2) the development of information source localization algorithms to detect multiple information sources. The algorithms have provable performance guarantees and outperform existing algorithms.
Improved Multiple-Species Cyclotron Ion Source
NASA Technical Reports Server (NTRS)
Soli, George A.; Nichols, Donald K.
1990-01-01
Use of the pure isotope 86Kr instead of natural krypton in a multiple-species ion source enables the source to produce krypton ions separated from argon ions by tuning the cyclotron with which the source is used. This adds the capability to produce and separate krypton ions at the kinetic energies of 150 to 400 MeV necessary for simulation of the worst-case ions occurring in outer space.
The Kumamoto Mw7.1 mainshock: deep initiation triggered by the shallow foreshocks
NASA Astrophysics Data System (ADS)
Shi, Q.; Wei, S.
2017-12-01
The Kumamoto Mw7.1 earthquake and its Mw6.2 foreshock struck the central Kyushu region in mid-April 2016. The surface ruptures are characterized by multiple fault segments and a mix of strike-slip and normal motion extending from the intersection of the Hinagu and Futagawa faults to the southwest of Mt. Aso. Despite the complex surface ruptures, most finite fault inversions use two fault segments to approximate the fault geometry. To study the rupture process and the complex fault geometry of this earthquake, we performed a multiple point source inversion for the mainshock using the data from 93 K-NET and KiK-net stations. With path calibration from the Mw6.0 foreshock, we selected the frequency ranges for the Pnl waves (0.02-0.26 Hz) and surface waves (0.02-0.12 Hz), as well as the components that can be well modeled with the 1D velocity model. Our four-point-source results reveal a unilateral rupture towards Mt. Aso and varying fault geometries. The first sub-event is a high-angle (~79°) right-lateral strike-slip event at a depth of 16 km on the north end of the Hinagu fault. Notably, the two M>6 foreshocks were located by our previous studies near the north end of the Hinagu fault at depths of 5-9 km, which may give rise to stress concentration at depth. The following three sub-events are distributed along the surface rupture of the Futagawa fault, with focal depths within 4-10 km. Their focal mechanisms present similar right-lateral fault slips with relatively small dip angles (62-67°) and an apparent normal-fault component. Thus, the mainshock rupture initiated from the relatively deep part of the Hinagu fault and propagated through the fault bend toward the NE along the relatively shallow part of the Futagawa fault until it was terminated near Mt. Aso. Based on the four-point-source solution, we conducted a finite-fault inversion and obtained a kinematic rupture model of the mainshock. We then performed Coulomb stress analyses on the two foreshocks and the mainshock. The results support that the stress alteration after the foreshocks may have triggered the failure on the fault plane of the Mw7.1 earthquake. Therefore, the 2016 Kumamoto earthquake sequence is dominated by a series of large mutually triggered events whose initiation is associated with the geometric barrier at the intersection of the Futagawa and Hinagu faults.
Ferdous, Jannatul; Sultana, Rebeca; Rashid, Ridwan B.; Tasnimuzzaman, Md.; Nordland, Andreas; Begum, Anowara; Jensen, Peter K. M.
2018-01-01
Bangladesh is a cholera endemic country with a population at high risk of cholera. Toxigenic and non-toxigenic Vibrio cholerae (V. cholerae) can cause cholera and cholera-like diarrheal illness and outbreaks. Drinking water is one of the primary routes of cholera transmission in Bangladesh. The aim of this study was to conduct a comparative assessment of the presence of V. cholerae between point-of-drinking water and source water, and to investigate the variability of virulence profile using molecular methods of a densely populated low-income settlement of Dhaka, Bangladesh. Water samples were collected and tested for V. cholerae from “point-of-drinking” and “source” in 477 study households in routine visits at 6 week intervals over a period of 14 months. We studied the virulence profiles of V. cholerae positive water samples using 22 different virulence gene markers present in toxigenic O1/O139 and non-O1/O139 V. cholerae using polymerase chain reaction (PCR). A total of 1,463 water samples were collected, with 1,082 samples from point-of-drinking water in 388 households and 381 samples from 66 water sources. V. cholerae was detected in 10% of point-of-drinking water samples and in 9% of source water samples. Twenty-three percent of households and 38% of the sources were positive for V. cholerae in at least one visit. Samples collected from point-of-drinking and linked sources in a 7 day interval showed significantly higher odds (P < 0.05) of V. cholerae presence in point-of-drinking compared to source [OR = 17.24 (95% CI = 7.14–42.89)] water. Based on the 7 day interval data, 53% (17/32) of source water samples were negative for V. cholerae while linked point-of-drinking water samples were positive. There were significantly higher odds (p < 0.05) of the presence of V. cholerae O1 [OR = 9.13 (95% CI = 2.85–29.26)] and V. cholerae O139 [OR = 4.73 (95% CI = 1.19–18.79)] in source water samples than in point-of-drinking water samples. Contamination of water at the point-of-drinking is less likely to depend on the contamination at the water source. Hygiene education interventions and programs should focus and emphasize on water at the point-of-drinking, including repeated cleaning of drinking vessels, which is of paramount importance in preventing cholera. PMID:29616005
Modeling of Pixelated Detector in SPECT Pinhole Reconstruction.
Feng, Bing; Zeng, Gengsheng L
2014-04-10
A challenge for the pixelated detector is that the detector response to a gamma-ray photon varies with the incident angle and the incident location within a crystal. The normalization map obtained by measuring the flood of a point-source at a large distance can lead to artifacts in reconstructed images. In this work, we investigated a method of generating normalization maps by ray-tracing through the pixelated detector based on the imaging geometry and the photo-peak energy of the specific isotope. The normalization is defined for each pinhole as the normalized detector response for a point-source placed at the focal point of the pinhole. Ray-tracing is used to generate the ideal flood image for a point-source. Each crystal pitch area on the back of the detector is divided into 60 × 60 sub-pixels. Lines are obtained by connecting a point-source with the centers of the sub-pixels inside each crystal pitch area. For each line, ray-tracing starts from the entrance point at the detector face and ends at the center of a sub-pixel on the back of the detector. Only the attenuation by NaI(Tl) crystals along each ray is assumed to contribute directly to the flood image. The attenuation by the silica (SiO2) reflector is also included in the ray-tracing. To calculate the normalization for a pinhole, we need the ideal flood for a point-source at 360 mm distance (where the point-source was placed for the regular flood measurement) and the ideal flood image for the point-source at the pinhole focal point, together with the flood measurement at 360 mm distance. The normalizations are incorporated in the iterative OSEM reconstruction as a component of the projection matrix. Applications to single-pinhole and multi-pinhole imaging showed that this method greatly reduced the reconstruction artifacts.
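The heart of the ray-tracing normalization can be sketched compactly: for each ray, the photopeak detection probability is taken as 1 - exp(-mu*L), with L the path length inside the crystal. The code below reduces the geometry to a single crystal row treated as a continuous slab, omits the reflector and solid-angle weighting, and uses an invented attenuation coefficient, so it is only a schematic of the method described above.

```python
# Sketch: toy ray-traced flood for one crystal row. Detection probability per
# ray is 1 - exp(-mu * L); L is the straight-line chord inside the crystal
# slab. Gaps, reflector, and solid-angle weighting are omitted.
import numpy as np

MU_NAI = 0.27               # 1/mm, illustrative attenuation coefficient
PITCH, THICK = 2.0, 10.0    # crystal pitch and thickness in mm

def flood_row(src_x, src_z, n_crystals=32, subpix=60):
    flood = np.zeros(n_crystals)
    for i in range(n_crystals):
        # sub-pixel centers on the BACK of the detector (z = THICK)
        xs = i * PITCH + (np.arange(subpix) + 0.5) * PITCH / subpix
        dx, dz = xs - src_x, THICK - src_z
        L = np.sqrt(dx**2 + dz**2) * THICK / dz   # chord inside the slab
        flood[i] = np.sum(1.0 - np.exp(-MU_NAI * L))
    return flood / flood.max()

near = flood_row(src_x=32.0, src_z=-50.0)    # source close to the detector
far = flood_row(src_x=32.0, src_z=-360.0)    # flood-measurement distance
print("max normalized flood difference, near vs far:", np.abs(near - far).max())
```

The nonzero difference between the near and far floods is precisely why a flood measured at a large distance cannot normalize pinhole data acquired with the source effectively at the focal point.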
THE SCREENING AND RANKING ALGORITHM FOR CHANGE-POINTS DETECTION IN MULTIPLE SAMPLES
Song, Chi; Min, Xiaoyi; Zhang, Heping
2016-01-01
The chromosome copy number variation (CNV) is the deviation of genomic regions from their normal copy number states, which may associate with many human diseases. Current genetic studies usually collect hundreds to thousands of samples to study the association between CNV and diseases. CNVs can be called by detecting the change-points in mean for sequences of array-based intensity measurements. Although multiple samples are of interest, the majority of the available CNV calling methods are single sample based. Only a few multiple sample methods have been proposed using scan statistics that are computationally intensive and designed toward either common or rare change-points detection. In this paper, we propose a novel multiple sample method by adaptively combining the scan statistic of the screening and ranking algorithm (SaRa), which is computationally efficient and is able to detect both common and rare change-points. We prove that asymptotically this method can find the true change-points with almost certainty and show in theory that multiple sample methods are superior to single sample methods when shared change-points are of interest. Additionally, we report extensive simulation studies to examine the performance of our proposed method. Finally, using our proposed method as well as two competing approaches, we attempt to detect CNVs in the data from the Primary Open-Angle Glaucoma Genes and Environment study, and conclude that our method is faster and requires less information while our ability to detect the CNVs is comparable or better. PMID:28090239
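A minimal sketch of the SaRa-style local scan underlying the method: the diagnostic at position x is the difference of means over flanking windows of width h, and local maxima of the squared diagnostics, here naively summed across samples, are candidate change-points. The adaptive combination proposed in the paper is more sophisticated; window width and threshold below are invented.

```python
# Sketch: SaRa-style local scan for change-points in mean, combined across
# samples by a plain sum of squared diagnostics (the paper's adaptive
# combination is more refined).
import numpy as np

def sara_diagnostic(y, h):
    c = np.cumsum(np.concatenate([[0.0], y]))
    left = (c[h:-h] - c[:-2*h]) / h        # mean over the left window
    right = (c[2*h:] - c[h:-h]) / h        # mean over the right window
    return right - left                    # indexed by position x = h .. n-h

def find_change_points(Y, h=20, thresh=3.0):
    D2 = sum(sara_diagnostic(y, h) ** 2 for y in Y)   # combine samples
    return [i + h for i in range(1, len(D2) - 1)
            if D2[i] > thresh and D2[i] >= D2[i-1] and D2[i] >= D2[i+1]]

rng = np.random.default_rng(6)
n, k = 500, 8                              # sequence length, number of samples
Y = rng.normal(0, 1, (k, n))
Y[:, 250:] += 0.8                          # shared change-point at position 250
print("candidate change-points:", find_change_points(Y))
```

Because the diagnostics are summed across samples before thresholding, a change-point shared by many samples stands out even when each per-sample jump is weak, which is the intuition behind the multiple-sample advantage shown in the paper.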
NASA Astrophysics Data System (ADS)
Lubow, S.; Budavári, T.
2013-10-01
We have created an initial catalog of objects observed by the WFPC2 and ACS instruments on the Hubble Space Telescope (HST). The catalog is based on observations taken on more than 6000 visits (telescope pointings) of ACS/WFC and more than 25000 visits of WFPC2. The catalog is obtained by cross matching by position in the sky all Hubble Legacy Archive (HLA) Source Extractor source lists for these instruments. The source lists describe properties of source detections within a visit. The calculations are performed on a SQL Server database system. First we collect overlapping images into groups, e.g., Eta Car, and determine nearby (approximately matching) pairs of sources from different images within each group. We then apply a novel algorithm for improving the cross matching of pairs of sources by adjusting the astrometry of the images. Next, we combine pairwise matches into maximal sets of possible multi-source matches. We apply a greedy Bayesian method to split the maximal matches into more reliable matches. We test the accuracy of the matches by comparing the fluxes of the matched sources. The result is a set of information that ties together multiple observations of the same object. A byproduct of the catalog is greatly improved relative astrometry for many of the HST images. We also provide information on nondetections that can be used to determine dropouts. With the catalog, for the first time, one can carry out time domain, multi-wavelength studies across a large set of HST data. The catalog is publicly available. Much more can be done to expand the catalog capabilities.
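The pairwise matching stage can be illustrated with a KD-tree query on tangent-plane coordinates, as sketched below; the astrometric adjustment and the greedy Bayesian splitting described above are omitted, and the source lists and matching radius are invented.

```python
# Sketch: the pairwise stage of positional cross-matching. Find source pairs
# from two overlapping source lists that lie within a matching radius, using
# a KD-tree on tangent-plane coordinates (arcsec).
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(7)
list_a = rng.uniform(0, 100, (1_000, 2))              # visit-A source positions
jitter = rng.normal(0, 0.05, (800, 2))                # astrometric scatter
list_b = np.vstack([list_a[:800] + jitter,            # 800 shared sources
                    rng.uniform(0, 100, (300, 2))])   # 300 unmatched sources

tree = cKDTree(list_b)
dist, idx = tree.query(list_a, distance_upper_bound=0.3)   # 0.3" radius
matched = np.isfinite(dist)                # unmatched queries return dist=inf
print(f"{matched.sum()} of {len(list_a)} visit-A sources matched to visit B")
```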
Searches for point sources in the Galactic Center region
NASA Astrophysics Data System (ADS)
di Mauro, Mattia; Fermi-LAT Collaboration
2017-01-01
Several groups have demonstrated the existence of an excess in the gamma-ray emission around the Galactic Center (GC) with respect to the predictions from a variety of Galactic Interstellar Emission Models (GIEMs) and point source catalogs. The origin of this excess, peaked at a few GeV, is still under debate. A possible interpretation is that it comes from a population of unresolved Millisecond Pulsars (MSPs) in the Galactic bulge. We investigate the detection of point sources in the GC region using new tools which the Fermi-LAT Collaboration is developing in the context of searches for Dark Matter (DM) signals. These new tools perform very fast scans, iteratively testing for an additional point source at each pixel of the region of interest. We also show how to discriminate between point sources and structural residuals from the GIEM. We apply these methods to the GC region considering different GIEMs and testing the DM and MSP interpretations of the GC excess. Additionally, we create a list of promising MSP candidates that could represent the brightest sources of a bulge MSP population.
Single-channel mixed signal blind source separation algorithm based on multiple ICA processing
NASA Astrophysics Data System (ADS)
Cheng, Xiefeng; Li, Ji
2017-01-01
Taking the separation of the fetal heart sound signal from the mixed signal obtained with an electronic stethoscope as the research background, this paper puts forward a single-channel mixed-signal blind source separation algorithm based on multiple ICA processing. Firstly, using empirical mode decomposition (EMD), the single-channel mixed signal is decomposed into multiple orthogonal signal components, which are processed by ICA. The resulting independent signal components are called independent sub-components of the mixed signal. Then, by combining the multiple independent sub-components with the single-channel mixed signal, the single channel is expanded into multiple channels, which turns the under-determined blind source separation problem into a well-posed one. Further, an estimate of the source signal is obtained by ICA processing. Finally, if the separation is not satisfactory, the previous separation result is combined with the single-channel mixed signal and the ICA processing is repeated until the desired estimate of the source signal is obtained. The simulation results show that the algorithm has a good separation effect for single-channel mixed physiological signals.
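A minimal sketch of the EMD-plus-ICA expansion, assuming the PyEMD package (installed as EMD-signal) and scikit-learn; the synthetic "heart sound" mixture is invented, and the paper's repeated-ICA refinement loop is omitted.

```python
# Sketch: expand a single-channel mixture into multiple channels via EMD,
# then apply FastICA to the stacked (mixture + IMF) channels. Assumes the
# PyEMD package (pip install EMD-signal) and scikit-learn.
import numpy as np
from PyEMD import EMD
from sklearn.decomposition import FastICA

fs = 1000
t = np.arange(0, 4, 1 / fs)
maternal = np.sin(2 * np.pi * 1.2 * t) * (np.sin(2 * np.pi * 25 * t) > 0.95)
fetal = 0.4 * np.sin(2 * np.pi * 2.3 * t) * (np.sin(2 * np.pi * 40 * t) > 0.95)
mix = maternal + fetal + 0.05 * np.random.default_rng(8).normal(size=t.size)

imfs = EMD()(mix)                        # intrinsic mode functions of the mixture
channels = np.vstack([mix, imfs])        # expand 1 channel -> multi-channel
ica = FastICA(n_components=min(4, channels.shape[0]), random_state=0)
sources = ica.fit_transform(channels.T)  # columns: estimated source signals
print("estimated sources shape:", sources.shape)
```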
NASA Astrophysics Data System (ADS)
Fang, Huaiyang; Lu, Qingshui; Gao, Zhiqiang; Shi, Runhe; Gao, Wei
2013-09-01
China's economy has grown rapidly since 1978. Rapid economic growth led to fast growth of fertilizer and pesticide consumption, and a significant portion of fertilizers and pesticides entered the water and caused water quality degradation. At the same time, rapid economic growth also caused more and more point source pollution to be discharged into the water, and eutrophication has become a major threat to water bodies. Worsening environmental problems forced governments to take measures to control water pollution. We extracted land cover from Landsat TM images, calculated point source pollution with the export coefficient method, and then ran the SWAT model to simulate non-point source pollution. We found that the annual TP load from industrial pollution into rivers is 115.0 t in the entire watershed. Average annual TP loads from each sub-basin ranged from 0 to 189.4 t. Higher TP loads from livestock and human living mainly occur in areas that are far from large towns or cities and where the TP loads from industry are relatively low. The mean annual TP load delivered to the streams was 246.4 t, with the highest loads occurring in the north part of this area and the lowest mainly in the middle part. Therefore, point source pollution accounts for a high proportion of the load in this area, and governments should take measures to control it.
Yang, Liping; Mei, Kun; Liu, Xingmei; Wu, Laosheng; Zhang, Minghua; Xu, Jianming; Wang, Fan
2013-08-01
Water quality degradation in river systems has caused great concern all over the world. Identifying the spatial distribution and sources of water pollutants is the very first step for efficient water quality management. A set of water samples collected bimonthly at 12 monitoring sites in 2009 and 2010 were analyzed to determine the spatial distribution of critical parameters and to apportion the sources of pollutants in the Wen-Rui-Tang (WRT) river watershed, near the East China Sea. The 12 monitoring sites were divided into urban, suburban, and rural administrative zones, considering differences in land use and population density. Multivariate statistical methods [one-way analysis of variance, principal component analysis (PCA), and absolute principal component score-multiple linear regression (APCS-MLR)] were used to investigate the spatial distribution of water quality and to apportion the pollution sources. Results showed that most water quality parameters had no significant difference between the urban and suburban zones, whereas these two zones showed worse water quality than the rural zone. Based on the PCA and APCS-MLR analyses, urban domestic sewage and commercial/service pollution, suburban domestic sewage along with fluorine point source pollution, and agricultural nonpoint source pollution with rural domestic sewage pollution were identified as the main pollution sources in the urban, suburban, and rural zones, respectively. Understanding the water pollution characteristics of different administrative zones provides insights for effective water management policy-making, especially in areas that span multiple administrative zones.
Airborne Dioxins, Furans and Polycyclic Aromatic Hydrocarbons Exposure to Military Personnel in Iraq
Masiol, Mauro; Mallon, Timothy; Haines, Kevin M.; Utell, Mark J.; Hopke, Philip K.
2016-01-01
Objectives The objective was to use ambient polycyclic aromatic hydrocarbon (PAH), polychlorinated dibenzo-p-dioxins (PCDD) and polychlorinated dibenzofurans (PCDF) concentrations measured at Joint Base Balad in Iraq in 2007 to identify the sources of these species and their spatial patterns. Methods The ratios of the measured species were compared to literature data for likely emission sources. Using the multiple site measurements on specific days, contour maps have been drawn using inverse distance weighting (IDW). Results These analyses suggest multiple sources including the burn pit (primarily a source of PCDD/PCDFs), the transportation field (primarily as source of PAHs) and other sources of PAHs that include aircraft, space heating, and diesel power generation. Conclusions The nature and locations of the sources were identified. PCDD/PCDFs were emitted by the burn pit. Multiple PAH sources exist across the base. PMID:27501100
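The contour-mapping step is plain inverse distance weighting; below is a minimal sketch with invented site coordinates and concentrations.

```python
# Sketch: inverse distance weighting (IDW) interpolation of multi-site
# concentration measurements onto a grid, as used to draw contour maps.
# The power parameter and all data are illustrative.
import numpy as np

def idw(sites, values, grid_pts, power=2.0, eps=1e-12):
    d = np.linalg.norm(grid_pts[:, None, :] - sites[None, :, :], axis=2)
    w = 1.0 / (d + eps) ** power           # closer sites dominate the estimate
    return (w * values).sum(axis=1) / w.sum(axis=1)

sites = np.array([[0.0, 0.0], [1.0, 0.2], [0.3, 0.9], [0.8, 0.8]])  # km
conc = np.array([5.0, 2.0, 7.0, 3.0])                               # ng/m^3
gx, gy = np.meshgrid(np.linspace(0, 1, 50), np.linspace(0, 1, 50))
grid = np.column_stack([gx.ravel(), gy.ravel()])
field = idw(sites, conc, grid).reshape(gx.shape)
print("interpolated range:", field.min(), "-", field.max())
```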
Churkin, Alexander; Barash, Danny
2008-01-01
Background RNAmute is an interactive Java application which, given an RNA sequence, calculates the secondary structure of all single point mutations and organizes them into categories according to their similarity to the predicted structure of the wild type. The secondary structure predictions are performed using the Vienna RNA package. A more efficient implementation of RNAmute is needed, however, to extend from the case of single point mutations to the general case of multiple point mutations, which may often be desired for computational predictions alongside mutagenesis experiments. But analyzing multiple point mutations, a process that requires traversing all possible mutations, becomes highly expensive since the running time is O(n^m) for a sequence of length n with m-point mutations. Using Vienna's RNAsubopt, we present a method that selects, based on stability considerations, only those mutations that are likely to be conformationally rearranging. The approach is best examined using the dot plot representation of RNA secondary structure. Results Using RNAsubopt, the suboptimal solutions for a given wild-type sequence are calculated once. Then, specific mutations are selected that are most likely to cause a conformational rearrangement. For an RNA sequence of about 100 nts and 3-point mutations (n = 100, m = 3), for example, the proposed method reduces the running time from several hours or even days to several minutes, thus enabling the practical application of RNAmute to the analysis of multiple-point mutations. Conclusion A highly efficient addition to RNAmute that is as user friendly as the original application but that facilitates the practical analysis of multiple-point mutations is presented. Such an extension can now be exploited prior to site-directed mutagenesis experiments by virologists, for example, who investigate the change of function in an RNA virus via mutations that disrupt important motifs in its secondary structure. A complete explanation of the application, called MultiRNAmute, is available at [1]. PMID:18445289
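To see where the O(n^m) cost comes from: a length-n sequence has C(n, m)·3^m distinct m-point mutants, each requiring a separate structure prediction in the exhaustive approach. A short Python illustration (the sequence is a toy example):

```python
# Why exhaustive m-point mutation analysis scales as O(n^m): a length-n
# RNA has C(n, m) * 3^m distinct m-point mutants.
from itertools import combinations, product
from math import comb

ALPHABET = "ACGU"

def m_point_mutants(seq, m):
    """Yield all sequences differing from seq at exactly m positions."""
    for positions in combinations(range(len(seq)), m):
        alternatives = [[b for b in ALPHABET if b != seq[i]] for i in positions]
        for subs in product(*alternatives):
            mutant = list(seq)
            for i, b in zip(positions, subs):
                mutant[i] = b
            yield "".join(mutant)

seq = "GCGCUUCGCC"                                 # toy 10-nt sequence
print(sum(1 for _ in m_point_mutants(seq, 2)))     # 405 = C(10,2) * 3^2
print(comb(100, 3) * 3**3)   # ~4.4 million 3-point mutants of a 100-nt RNA
```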
Lisbon 1755, a multiple-rupture earthquake
NASA Astrophysics Data System (ADS)
Fonseca, J. F. B. D.
2017-12-01
The Lisbon earthquake of 1755 poses a challenge to seismic hazard assessment. Reports pointing to MMI 8 or above at distances of the order of 500 km led to magnitude estimates near M9 in classic studies. A refined analysis of the coeval sources lowered the estimates to 8.7 (Johnston, 1998) and 8.5 (Martinez-Solares, 2004). I posit that even these lower magnitude values reflect the combined effect of multiple ruptures. Attempts to identify a single source capable of explaining the damage reports with published ground motion models did not gather consensus and, compounding the challenge, the analysis of tsunami traveltimes has led to disparate source models, sometimes separated by a few hundred kilometers. From this viewpoint, the most credible source would combine a sub-set of the multiple active structures identifiable in SW Iberia. No individual moment magnitude needs to be above M8.1, thus rendering the search for candidate structures less challenging. The possible combinations of active structures should be ranked as a function of their explaining power, for macroseismic intensities and tsunami traveltimes taken together. I argue that the Lisbon 1755 earthquake is an example of a distinct class of intraplate earthquake previously unrecognized, of which the Indian Ocean earthquake of 2012 is the first instrumentally recorded example, showing space and time correlation over scales of the order of a few hundred km and a few minutes. Other examples may exist in the historical record, such as the M8 1556 Shaanxi earthquake, with an unusually large damage footprint (MMI equal to or above 6 in 10 provinces; 830,000 fatalities). The ability to trigger seismicity globally, observed after the 2012 Indian Ocean earthquake, may be a characteristic of this type of event: occurrences in Massachusetts (M5.9 Cape Ann earthquake on 18/11/1755), Morocco (M6.5 Fez earthquake on 27/11/1755) and Germany (M6.1 Duren earthquake on 18/02/1756) had in all likelihood a causal link to the Lisbon earthquake. This may reflect the very long period of surface waves generated by the combined sources as a result of the delays between ruptures. Recognition of this new class of large intraplate earthquakes may pave the way to a better understanding of the mechanisms driving intraplate deformation.
NASA Astrophysics Data System (ADS)
Song, Seok Goo; Kwak, Sangmin; Lee, Kyungbook; Park, Donghee
2017-04-01
Predicting the intensity and variability of strong ground motions is a critical element of seismic hazard assessment. The characteristics and variability of the earthquake rupture process may be a dominant factor in determining the intensity and variability of near-source strong ground motions. Song et al. (2014) demonstrated that the variability of earthquake rupture scenarios can be effectively quantified in the framework of 1-point and 2-point statistics of earthquake source parameters, constrained by rupture dynamics and past events. The developed pseudo-dynamic source modeling schemes were also validated against recorded ground motion data from past events and empirical ground motion prediction equations (GMPEs) on the broadband platform (BBP) developed by the Southern California Earthquake Center (SCEC). Recently we improved the computational efficiency of the pseudo-dynamic source-modeling scheme by adopting a nonparametric co-regionalization algorithm originally introduced and applied in geostatistics. We also investigated the effect of the earthquake rupture process on near-source ground motion characteristics in the framework of 1-point and 2-point statistics, particularly focusing on the forward directivity region. Finally, we will discuss whether pseudo-dynamic source modeling can reproduce the variability (standard deviation) of empirical GMPEs, and the efficiency of 1-point and 2-point statistics in addressing the variability of ground motions.
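One way to picture the 1-point/2-point framework is to draw a random slip distribution whose marginal distribution (1-point statistics) and spatial autocorrelation (2-point statistics) are both prescribed. The Python sketch below does this along a 1-D strike profile with an assumed exponential covariance and lognormal marginal; it illustrates the idea only and is not the authors' scheme:

```python
# Illustrative sketch: draw a random slip field with prescribed 1-point
# (lognormal marginal) and 2-point (exponential autocorrelation)
# statistics via a Cholesky factorization. All parameters hypothetical.
import numpy as np

rng = np.random.default_rng(1)
n = 64                                    # points along strike
x = np.linspace(0.0, 32.0, n)             # km
corr_len = 5.0                            # correlation length (km)

# 2-point statistics: exponential covariance C(h) = exp(-|h| / a)
C = np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)
L = np.linalg.cholesky(C + 1e-10 * np.eye(n))
g = L @ rng.standard_normal(n)            # correlated Gaussian field

# 1-point statistics: map to a lognormal slip with chosen mean and CV
mean_slip, cv = 2.0, 0.8                  # m, coefficient of variation
sigma = np.sqrt(np.log(1 + cv**2))
mu = np.log(mean_slip) - 0.5 * sigma**2
slip = np.exp(mu + sigma * g)
print(f"mean slip {slip.mean():.2f} m, std {slip.std():.2f} m")
```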
Direction-Sensitive Hand-Held Gamma-Ray Spectrometer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mukhopadhyay, S.
2012-10-04
A novel, light-weight, hand-held gamma-ray detector with directional sensitivity is being designed. The detector uses a set of multiple rings around two cylindrical surfaces, which provides precise locations of two interaction points on two concentric cylindrical planes, from which the source location can be traced back by back projection and/or Compton imaging techniques. The detectors are 2.0 × 2.0 mm europium-doped strontium iodide (SrI2:Eu2+) crystals, whose light output has been measured to exceed 120,000 photons/MeV, making it one of the brightest scintillators in existence. The crystal's energy resolution, less than 3% at 662 keV, is also excellent, and the response is highly linear over a wide range of gamma-ray energies. The emission of SrI2:Eu2+ is well matched to both photomultiplier tubes and blue-enhanced silicon photodiodes. The solid-state photomultipliers used in this design (each 2.0 × 2.0 mm) are arrays of active pixel sensors (avalanche photodiodes driven beyond their breakdown voltage in reverse bias); each pixel acts as a binary photon detector, and their summed output is an analog representation of the total photon energy, while the individual pixels accurately define the point of interaction. A simple back-projection algorithm involving cone-surface mapping is being modeled. The back projection for an event cone is a conical surface defining the possible locations of the source. The cone axis is the straight line passing through the first and second interaction points.
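For a two-interaction event, the cone axis follows from the two measured interaction points, and the opening angle from the Compton relation cos θ = 1 − m_e c²(1/E2 − 1/(E1 + E2)), with E1 the first-scatter energy deposit and E2 the absorbed remainder. A minimal Python sketch with hypothetical geometry and energies:

```python
# Event-cone construction in Compton back-projection: the two
# interaction points give the cone axis; the deposited energies give
# the opening angle via cos(theta) = 1 - m_e*c^2 * (1/E2 - 1/(E1+E2)).
# Geometry and energies below are hypothetical.
import numpy as np

ME_C2 = 511.0  # electron rest energy, keV

def event_cone(p1, p2, e1, e2):
    """Return (apex, axis unit vector, half-angle) of the source cone."""
    axis = (np.asarray(p1) - np.asarray(p2)).astype(float)
    axis /= np.linalg.norm(axis)        # points from 2nd toward 1st interaction
    cos_theta = 1.0 - ME_C2 * (1.0 / e2 - 1.0 / (e1 + e2))
    return np.asarray(p1, float), axis, np.arccos(np.clip(cos_theta, -1, 1))

# 662 keV photon: 200 keV deposited at the first ring, 462 keV at the second
apex, axis, half_angle = event_cone([0, 0, 0], [0, 0, -10.0], 200.0, 462.0)
print(f"half-angle = {np.degrees(half_angle):.1f} deg about axis {axis}")
```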
Evaluating emissions of HCHO, HONO, NO2, and SO2 from point sources using portable Imaging DOAS
NASA Astrophysics Data System (ADS)
Pikelnaya, O.; Tsai, C.; Herndon, S. C.; Wood, E. C.; Fu, D.; Lefer, B. L.; Flynn, J. H.; Stutz, J.
2011-12-01
Our ability to quantitatively describe urban air pollution to a large extent depends on an accurate understanding of anthropogenic emissions. In areas with a high density of individual point sources of pollution, such as petrochemical facilities with multiple flares or regions with active commercial ship traffic, this is particularly challenging as access to facilities and ships is often restricted. Direct formaldehyde emissions from flares may play an important role for ozone chemistry, acting as an initial radical precursor and enhancing the degradation of co-emitted hydrocarbons. HONO is also recognized as an important OH source throughout the day. However, very little is known about direct HCHO and HONO emissions. Imaging Differential Optical Absorption Spectroscopy (I-DOAS), a relatively new remote sensing technique, provides an opportunity to investigate emissions from these sources from a distance, making this technique attractive for fence-line monitoring. In this presentation, we will describe I-DOAS measurements during the FLAIR campaign in the spring/summer of 2009. We performed measurements outside of various industrial facilities in the larger Houston area as well as in the Houston Ship Channel to visualize and quantify the emissions of HCHO, NO2, HONO, and SO2 from flares of petrochemical facilities and ship smoke stacks. We will present the column density images of pollutant plumes as well as fluxes from individual flares calculated from I-DOAS observations. Fluxes from individual flares and smoke stacks determined from the I-DOAS measurements vary widely in time and by the emission sources. We will also present HONO/NOx ratios in ship smoke stacks derived from the combination of I-DOAS and in-situ measurements, and discuss other trace gas ratios in plumes derived from the I-DOAS observations. Finally, we will show images of HCHO, NO2 and SO2 plumes from control burn forest fires observed in November of 2009 at Vandenberg Air Force Base, Santa Maria, CA.
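A plume flux of the kind quoted here can, in principle, be obtained by integrating the column densities along a transect across the plume and multiplying by the wind speed component perpendicular to the transect. A hedged Python sketch with invented columns, spacing, and wind speed (not the campaign's actual retrieval chain):

```python
# Flux from a column-density transect: molecules crossing the transect
# per second = wind speed * integral of column density across the plume.
# All numbers are illustrative placeholders.
import numpy as np

scd = np.array([0.2, 0.8, 2.5, 4.1, 2.9, 1.0, 0.3]) * 1e16  # molec/cm^2
dx_cm = 100.0            # transect step between columns (cm), hypothetical
wind_cm_s = 300.0        # 3 m/s wind perpendicular to the transect

# Rectangle-rule integration across the plume cross-section
flux_molec_s = wind_cm_s * scd.sum() * dx_cm

# Convert to mass flux for SO2 (molar mass 64 g/mol)
N_A = 6.022e23
flux_g_s = flux_molec_s / N_A * 64.0
print(f"SO2 flux ~ {flux_g_s:.2f} g/s")
```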
NASA Astrophysics Data System (ADS)
Yang, Bisheng; Dong, Zhen; Liu, Yuan; Liang, Fuxun; Wang, Yongjun
2017-04-01
Updating the inventory of road infrastructure based on field work is labor intensive, time consuming, and costly. Fortunately, vehicle-based mobile laser scanning (MLS) systems provide an efficient solution for rapidly capturing three-dimensional (3D) point clouds of road environments with high flexibility and precision. However, robust recognition of road facilities from huge volumes of 3D point clouds is still a challenging issue because of complicated and incomplete structures, occlusions, and varied point densities. Most existing methods utilize point- or object-based features to recognize object candidates, and can only extract limited types of objects, with a relatively low recognition rate, especially for incomplete and small objects. To overcome these drawbacks, this paper proposes a semantic labeling framework that combines multiple aggregation levels (point-segment-object) of features with contextual features to recognize road facilities, such as road surfaces, road boundaries, buildings, guardrails, street lamps, traffic signs, roadside trees, power lines, and cars, for highway infrastructure inventory. The proposed method first separates ground and non-ground points, and extracts road surface facilities from the ground points. Non-ground points are segmented into individual candidate objects using the proposed multi-rule region growing method. Then, the multiple aggregation levels of features and the contextual features (relative positions, relative directions, and spatial patterns) associated with each candidate object are calculated and fed into an SVM classifier to label the corresponding candidate object. The recognition performance of combining multiple aggregation levels and contextual features was compared with single-level (point, segment, or object) features using large-scale highway scene point clouds. Comparative studies demonstrated that the proposed semantic labeling framework significantly improves the precision (90.6%) and recall (91.2%) of road facility recognition, particularly for incomplete and small objects.
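The classification stage reduces to concatenating the point-, segment-, object-level, and contextual features of each candidate object and feeding them to an SVM. A minimal sklearn sketch on synthetic features (the feature dimensions and class set are placeholders, not the paper's feature design):

```python
# Sketch of the SVM labeling stage: each candidate object is described
# by concatenated multi-level and contextual features, and an SVM
# assigns the facility class. All data below are synthetic.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(2)
n_objects = 300
point_feats = rng.normal(size=(n_objects, 6))    # e.g., local normals, density
segment_feats = rng.normal(size=(n_objects, 5))  # e.g., segment shape statistics
object_feats = rng.normal(size=(n_objects, 4))   # e.g., bounding-box dimensions
context_feats = rng.normal(size=(n_objects, 3))  # relative position/direction

X = np.hstack([point_feats, segment_feats, object_feats, context_feats])
y = rng.integers(0, 4, size=n_objects)           # 4 toy classes (lamp, sign, ...)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
clf.fit(X[:240], y[:240])                        # train on 240 objects
print("held-out accuracy:", clf.score(X[240:], y[240:]))
```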
Point and Compact Hα Sources in the Interior of M33
NASA Astrophysics Data System (ADS)
Moody, J. Ward; Hintz, Eric G.; Joner, Michael D.; Roming, Peter W. A.; Hintz, Maureen L.
2017-12-01
A variety of interesting objects such as Wolf-Rayet stars, tight OB associations, planetary nebulae, X-ray binaries, etc., can be discovered as point or compact sources in Hα surveys. How these objects are distributed through a galaxy sheds light on the galaxy's star formation rate and history, mass distribution, and dynamics. The nearby galaxy M33 is an excellent place to study the distribution of Hα-bright point sources in a flocculent spiral galaxy. We have reprocessed an archived WIYN continuum-subtracted Hα image of the inner 6.5 × 6.5 arcmin of M33 and, employing both eye and machine searches, have tabulated sources with a flux greater than approximately 10^-15 erg cm^-2 s^-1. We have effectively recovered previously mapped H II regions and have identified 152 unresolved point sources and 122 marginally resolved compact sources, of which 39 have not been previously identified in any archive. An additional 99 Hα sources were found to have sufficient archival flux values to generate a spectral energy distribution (SED). Using the SED, flux values, Hα flux value, and compactness, we classified 67 of these sources.
Xu, Hua-Shan; Xu, Zong-Xue; Liu, Pin
2013-03-01
One of the key techniques in establishing and implementing a TMDL (total maximum daily load) is to use a hydrological model to quantify non-point source pollutant loads, establish BMP scenarios, and reduce those loads. Non-point source pollutant loads in different year types (wet, normal, and dry) were estimated using the SWAT model in the Zhangweinan River basin, and the spatial distribution characteristics of the loads were analyzed on the basis of the simulation results. During wet years, total nitrogen (TN) and total phosphorus (TP) accounted for 0.07% and 27.24% of the total non-point source pollutant loads, respectively. Spatially, agricultural and residential land on steep slopes contributes the most non-point source pollutant load in the basin. Relative to the baseline period, 47 BMP scenarios were set up to simulate their reduction efficiency for five pollutants (organic nitrogen, organic phosphorus, nitrate nitrogen, dissolved phosphorus, and mineral phosphorus) in eight priority control subbasins. By comparing cost-effectiveness among the scenarios, constructing vegetated ditches was identified as the best measure to reduce TN and TP, with unit reduction costs of 16.11-151.28 yuan·kg^-1 for TN and 100-862.77 yuan·kg^-1 for TP, making it the most cost-effective of the 47 BMP scenarios. The results could provide a scientific basis and technical support for environmental protection and sustainable utilization of water resources in the Zhangweinan River basin.
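The cost-effectiveness comparison boils down to ranking scenarios by cost per kilogram of pollutant removed. A toy Python illustration with invented scenario names, costs, and reductions (not the study's figures):

```python
# Rank BMP scenarios by unit cost of pollutant reduction (yuan per kg
# of TN removed). Scenario names and numbers are invented placeholders.
bmp_scenarios = {
    "vegetated_ditch": {"cost_yuan": 1.6e6, "tn_removed_kg": 9.0e4},
    "buffer_strip":    {"cost_yuan": 2.4e6, "tn_removed_kg": 6.5e4},
    "fertilizer_cut":  {"cost_yuan": 0.9e6, "tn_removed_kg": 2.0e4},
}

# Sort scenarios from most to least cost-effective
for name, s in sorted(bmp_scenarios.items(),
                      key=lambda kv: kv[1]["cost_yuan"] / kv[1]["tn_removed_kg"]):
    unit_cost = s["cost_yuan"] / s["tn_removed_kg"]
    print(f"{name:16s} {unit_cost:7.2f} yuan/kg TN removed")
```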
Content Integration across Multiple Documents Reduces Memory for Sources
ERIC Educational Resources Information Center
Braasch, Jason L. G.; McCabe, Rebecca M.; Daniel, Frances
2016-01-01
The current experiments systematically examined semantic content integration as a mechanism for explaining source inattention and forgetting when reading-to-remember multiple texts. For all 3 experiments, degree of semantic overlap was manipulated amongst messages provided by various information sources. In Experiment 1, readers' source…
NASA Technical Reports Server (NTRS)
Salikuddin, M.; Brown, W. H.; Ramakrishnan, R.; Tanna, H. K.
1983-01-01
An improved acoustic impulse technique was developed and used to study the transmission characteristics of duct/nozzle systems. To accomplish this objective, various problems associated with the existing spark-discharge impulse technique were first studied. These included (1) the nonlinear behavior of high intensity pulses, (2) the contamination of the signal with flow noise, (3) low signal-to-noise ratio at high exhaust velocities, and (4) the inability to control or shape the signal generated by the source, especially when multiple spark points were used as the source. The first step in resolving these problems was the replacement of the spark-discharge source with electroacoustic driver(s). The resulting refinements included (1) synthesizing an acoustic impulse with acoustic driver(s) to control and shape the output signal, (2) time domain signal averaging to remove flow noise from the contaminated signal, (3) signal editing to remove unwanted portions of the time history, (4) spectral averaging, and (5) numerical smoothing. The acoustic power measurement technique was improved by taking multiple in-duct measurements and by a modal decomposition process to account for the contribution of higher order modes in the power computation. The improved acoustic impulse technique was then validated by comparing its results with those derived by an impedance tube method. The mechanism of acoustic power loss that occurs when sound is transmitted through nozzle terminations was investigated. Finally, the refined impulse technique was applied to obtain more accurate results for the acoustic transmission characteristics of a conical nozzle and a multi-lobe, multi-tube suppressor nozzle.
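The time-domain signal averaging step exploits the fact that the triggered impulse repeats while the flow noise does not, so averaging N synchronized records suppresses the noise by roughly √N. A small synthetic Python demonstration (the pulse shape, record length, and noise level are assumptions):

```python
# Time-domain signal averaging: averaging N trigger-synchronized records
# leaves the repeatable impulse intact while uncorrelated flow noise
# drops by ~1/sqrt(N). Signal and noise below are synthetic.
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0, 1e-2, 2000)                   # 10 ms record
impulse = np.exp(-((t - 2e-3) / 2e-4) ** 2)      # repeatable Gaussian pulse

n_records = 64
records = impulse + rng.normal(scale=1.0, size=(n_records, t.size))
averaged = records.mean(axis=0)                  # coherent average

noise_single = records[0] - impulse
noise_avg = averaged - impulse
print(f"single-record noise rms: {noise_single.std():.3f}")
print(f"averaged noise rms:      {noise_avg.std():.3f}  (~1/sqrt({n_records}))")
```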