Comparative Analysis of Aerosol Retrievals from MODIS, OMI and MISR Over Sahara Region
NASA Technical Reports Server (NTRS)
Lyapustin, A.; Wang, Y.; Hsu, C.; Torres, O.; Leptoukh, G.; Kalashnikova, O.; Korkin, S.
2011-01-01
MODIS is a wide field-of-view sensor providing daily global observations of the Earth. Currently, global MODIS aerosol retrievals over land are performed with the main Dark Target algorithm, complemented by the Deep Blue (DB) algorithm over bright deserts. The Dark Target algorithm relies on a surface parameterization that relates reflectance in the MODIS visible bands to that in the 2.1 micrometer region, whereas the Deep Blue algorithm uses an ancillary angular distribution model of surface reflectance developed from the time series of clear-sky MODIS observations. Recently, a new Multi-Angle Implementation of Atmospheric Correction (MAIAC) algorithm has been developed for MODIS. MAIAC uses time-series and image-based processing to perform simultaneous retrievals of aerosol properties and surface bidirectional reflectance. It is a generic algorithm that works over both dark vegetated surfaces and bright deserts and performs retrievals at 1 km resolution. In this work, we provide a comparative analysis of DB, MAIAC, MISR and OMI aerosol products over the bright deserts of northern Africa.
Satellite aerosol retrieval using dark target algorithm by coupling BRDF effect over AERONET site
NASA Astrophysics Data System (ADS)
Yang, Leiku; Xue, Yong; Guang, Jie; Li, Chi
2012-11-01
For most satellite aerosol retrieval algorithms, even for multi-angle instruments, a simple forward model (FM) based on the Lambertian surface assumption is employed to simulate top-of-atmosphere (TOA) spectral reflectance, which does not fully account for the surface bidirectional reflectance distribution function (BRDF) effect. This approximate forward model greatly simplifies the radiative transfer model, reduces the size of the look-up tables, and yields a faster algorithm. At the same time, it introduces systematic biases in the aerosol optical depth (AOD) retrieval. The AOD product derived from Moderate Resolution Imaging Spectroradiometer (MODIS) data with the dark target algorithm is considered one of the most accurate satellite aerosol products at present. Although it performs well at the global scale, uncertainties have still been found at regional scales in many studies. The Lambertian surface assumption employed in the retrieval algorithm may be one of the contributing factors. In this study, we first use radiative transfer simulations over dark targets to assess the extent of the uncertainty introduced by the Lambertian surface assumption. The results show that the uncertainty in retrieved AOD can reach up to ±0.3. Then the Lambertian FM (L_FM) and the BRDF FM (BRDF_FM) are each employed in dark target AOD retrievals from MODARNSS (MODIS/Terra and MODIS/Aqua Atmosphere AERONET Subsetting Product) data over the Beijing AERONET site. The validation shows that the accuracy of the AOD retrieval is improved by employing the BRDF_FM, which accounts for the surface BRDF effect: the regression slope of retrieved AOD against AERONET AOD increases from 0.7163 (for L_FM) to 0.7776 (for BRDF_FM), and the intercept decreases from 0.0778 (for L_FM) to 0.0627 (for BRDF_FM).
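For reference, the slope, intercept and correlation quoted above are ordinary least-squares statistics of retrieved versus AERONET AOD. A minimal sketch follows; the array names `aod_retrieved` and `aod_aeronet` are placeholders, not names from the paper.

```python
import numpy as np

def validation_regression(aod_retrieved, aod_aeronet):
    """Least-squares slope/intercept of retrieved AOD vs. AERONET AOD,
    plus the Pearson correlation: the summary statistics quoted above."""
    slope, intercept = np.polyfit(aod_aeronet, aod_retrieved, 1)
    r = np.corrcoef(aod_aeronet, aod_retrieved)[0, 1]
    return slope, intercept, r
```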
Comparing MODIS C6 'Deep Blue' and 'Dark Target' Aerosol Data
NASA Technical Reports Server (NTRS)
Hsu, N. C.; Sayer, A. M.; Bettenhausen, C.; Lee, J.; Levy, R. C.; Mattoo, S.; Munchak, L. A.; Kleidman, R.
2014-01-01
The MODIS Collection 6 Atmospheres product suite includes refined versions of both the 'Deep Blue' (DB) and 'Dark Target' (DT) aerosol algorithms, with the DB dataset now expanded to include coverage over vegetated land surfaces. This means that, over much of the global land surface, users will have both DB and DT data to choose from. A 'merged' dataset is also provided, primarily for visualization purposes, which takes retrievals from either or both algorithms based on regional and seasonal climatologies of normalized difference vegetation index (NDVI). This poster presents some comparisons of these two C6 aerosol algorithms, focusing on AOD at 550 nm derived from MODIS Aqua measurements, with each other and with Aerosol Robotic Network (AERONET) data, with the intent of facilitating user decisions about the suitability of the two datasets for their desired applications.
Algorithm for Detecting a Bright Spot in an Image
NASA Technical Reports Server (NTRS)
2009-01-01
An algorithm processes the pixel intensities of a digitized image to detect and locate a circular bright spot, the approximate size of which is known in advance. The algorithm is used to find images of the Sun in cameras aboard the Mars Exploration Rovers. (The images are used in estimating orientations of the Rovers relative to the direction to the Sun.) The algorithm can also be adapted to tracking of circular bright targets in other diverse applications. The first step in the algorithm is to calculate a dark-current ramp, a correction necessitated by the scheme that governs the readout of pixel charges in the charge-coupled-device camera in the original Mars Exploration Rover application. In this scheme, the fraction of each frame period during which dark current is accumulated in a given pixel (and, hence, the dark-current contribution to the pixel image-intensity reading) is proportional to the pixel row number. For the purpose of the algorithm, the dark-current contribution to the intensity reading from each pixel is assumed to equal the average of intensity readings from all pixels in the same row, and the factor of proportionality is estimated on the basis of this assumption. Then the product of the row number and the factor of proportionality is subtracted from the reading from each pixel to obtain a dark-current-corrected intensity reading. The next step in the algorithm is to determine the best location, within the overall image, for a window of N × N pixels (where N is an odd number) large enough to contain the bright spot of interest plus a small margin. (In the original application, the overall image contains 1,024 by 1,024 pixels, the image of the Sun is about 22 pixels in diameter, and N is chosen to be 29.)
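The two steps described above (row-proportional dark-current subtraction and choosing an N × N window) can be sketched as follows. This is a minimal illustration, not the flight code: the proportionality factor is estimated here from a straight-line fit to the row means, and "best window location" is stood in for by the window with the largest summed intensity.

```python
import numpy as np

def correct_dark_current_ramp(image):
    """Subtract a row-proportional dark-current ramp.

    The dark-current contribution to each pixel is assumed proportional
    to its row number; the factor is estimated from the trend of the row
    means (an illustrative simplification of the scheme in the abstract).
    """
    rows = np.arange(image.shape[0])
    row_means = image.mean(axis=1)
    factor = np.polyfit(rows, row_means, 1)[0]  # slope of row mean vs. row
    return image - factor * rows[:, None]

def find_bright_spot_window(image, n=29):
    """Return the top-left corner of the N x N window with the largest
    summed intensity (a simple stand-in for the 'best window location')."""
    # Integral image makes each window sum O(1) to evaluate.
    s = np.pad(image, ((1, 0), (1, 0))).cumsum(0).cumsum(1)
    sums = s[n:, n:] - s[:-n, n:] - s[n:, :-n] + s[:-n, :-n]
    return np.unravel_index(np.argmax(sums), sums.shape)
```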
Research on multi-source image fusion technology in haze environment
NASA Astrophysics Data System (ADS)
Ma, GuoDong; Piao, Yan; Li, Bing
2017-11-01
In a haze environment, the visible image collected by a single sensor can express the details of the target's shape, color and texture very well, but because of the haze its sharpness is low and parts of the target subject are lost. An infrared image collected by a single sensor, which captures thermal radiation and penetrates haze well, can clearly express the target subject, but it loses detail information. Therefore, a multi-source image fusion method is proposed to exploit their respective advantages. Firstly, an improved Dark Channel Prior algorithm is used to preprocess the hazy visible image. Secondly, an improved SURF algorithm is used to register the infrared image and the dehazed visible image. Finally, a weighted fusion algorithm based on information complementarity is used to fuse the images. Experiments show that the proposed method can improve the clarity of the visible target and highlight the occluded infrared target for target recognition.
A modeling approach for aerosol optical depth analysis during forest fire events
NASA Astrophysics Data System (ADS)
Aube, Martin P.; O'Neill, Normand T.; Royer, Alain; Lavoue, David
2004-10-01
Measurements of aerosol optical depth (AOD) are important indicators of aerosol particle behavior. Up to now, the two standard techniques used for retrieving AOD are: (i) sun photometry, which provides measurements of high temporal frequency but sparse spatial coverage, and (ii) satellite-based approaches such as DDV (Dense Dark Vegetation) inversion algorithms, which yield AOD over dark targets in remotely sensed imagery. Although the latter techniques allow AOD retrieval over appreciable spatial domains, the irregular spatial pattern of dark targets and the typically low repeat frequencies of imaging satellites exclude the acquisition of AOD databases on a continuous spatio-temporal basis. We attempt to fill gaps in spatio-temporal AOD measurements using a new assimilation methodology that links AOD measurements to the predictions of a particulate matter transport model. This modelling package (AODSEM V2.0, the Aerosol Optical Depth Spatio-temporal Evolution Model) uses a size- and aerosol-type-segregated semi-Lagrangian trajectory algorithm driven by analysed meteorological data. Its novelty resides in the fact that the model evolution may be tied to both ground-based and satellite-level AOD measurements, and all physical processes have been optimized to track this important and robust parameter. We applied this methodology to a significant smoke event that occurred over the eastern part of North America in July 2002.
The Time Series Technique for Aerosol Retrievals over Land from MODIS: Algorithm MAIAC
NASA Technical Reports Server (NTRS)
Lyapustin, Alexei; Wang, Yujie
2008-01-01
Atmospheric aerosols interact with sunlight by scattering and absorbing radiation. By changing the irradiance at the Earth's surface, modifying cloud fractional cover and microphysical properties, and through a number of other mechanisms, they affect the energy balance, hydrological cycle, and planetary climate [IPCC, 2007]. In many world regions there is a growing impact of aerosols on air quality and human health. The Earth Observing System [NASA, 1999] initiated high quality global Earth observations and operational aerosol retrievals over land. With the wide swath (2300 km) of the MODIS instrument, the MODIS Dark Target algorithm [Kaufman et al., 1997; Remer et al., 2005; Levy et al., 2007], currently complemented with the Deep Blue method [Hsu et al., 2004], provides a daily global view of planetary atmospheric aerosol. The MISR algorithm [Martonchik et al., 1998; Diner et al., 2005] makes high quality aerosol retrievals in 300 km swaths, covering the globe in 8 days. While the MODIS aerosol program has been very successful, there are still several unresolved issues in the retrieval algorithms. The current processing is pixel-based and relies on single-orbit data. Such an approach produces a single measurement for every pixel characterized by two main unknowns, aerosol optical thickness (AOT) and surface reflectance (SR). This lack of information constitutes a fundamental problem of remote sensing which cannot be resolved without a priori information. For example, the MODIS Dark Target algorithm makes spectral assumptions about surface reflectance, whereas the Deep Blue method uses an ancillary global database of surface reflectance composed from minimum monthly measurements with Rayleigh correction. Both algorithms use a Lambertian surface model. The surface-related assumptions in the aerosol retrievals may affect the subsequent atmospheric correction in unintended ways. For example, the Dark Target algorithm uses an empirical relationship to predict SR in the Blue (B3) and Red (B1) bands from the 2.1 μm channel (B7) for the purpose of aerosol retrieval. Obviously, the subsequent atmospheric correction will produce the same SR in the red and blue bands as predicted, i.e. an empirical function of the 2.1 μm reflectance. In other words, the spectral, spatial and temporal variability of surface reflectance in the Blue and Red bands appears borrowed from band B7. This may have certain implications for vegetation and global carbon analysis because the chlorophyll-sensing bands B1 and B3 are effectively substituted, in terms of variability, by band B7, which is sensitive to plant liquid water. This chapter describes a recently developed generic aerosol-surface retrieval algorithm for MODIS. The Multi-Angle Implementation of Atmospheric Correction (MAIAC) algorithm simultaneously retrieves AOT and the surface bidirectional reflectance factor (BRF) using the time series of MODIS measurements.
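The empirical visible-from-2.1 μm relationship mentioned above is, in its simplest and most widely cited form, a pair of fixed ratios from the early dark-target literature (red ≈ one half, blue ≈ one quarter of the 2.1 μm surface reflectance). The sketch below uses those first-order ratios purely as an illustration; the operational relationships have since been refined with NDVI_SWIR and scattering-angle dependence.

```python
def predict_visible_surface_reflectance(rho_2p1):
    """First-order dark-target surface parameterization: predict red (B1)
    and blue (B3) surface reflectance from the 2.1 um channel (B7).

    The fixed 1/2 and 1/4 ratios are the classic first-order values only;
    the operational algorithm refines them with NDVI_SWIR and geometry.
    """
    rho_red = 0.5 * rho_2p1
    rho_blue = 0.25 * rho_2p1
    return rho_red, rho_blue
```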
Target Recognition Using Neural Networks for Model Deformation Measurements
NASA Technical Reports Server (NTRS)
Ross, Richard W.; Hibler, David L.
1999-01-01
Optical measurements provide a non-invasive method for measuring deformation of wind tunnel models. Model deformation systems use targets mounted or painted on the surface of the model to identify known positions, and photogrammetric methods are used to calculate 3-D positions of the targets on the model from digital 2-D images. Under ideal conditions, the reflective targets are placed against a dark background and provide high-contrast images, aiding in target recognition. However, glints of light reflecting from the model surface, or reduced contrast caused by light source or model smoothness constraints, can compromise accurate target determination using current algorithmic methods. This paper describes a technique using a neural network and image processing technologies which increases the reliability of target recognition systems. Unlike algorithmic methods, the neural network can be trained to identify the characteristic patterns that distinguish targets from other objects of similar size and appearance and can adapt to changes in lighting and environmental conditions.
MODIS-VIIRS Intercalibration for Dark Target Aerosol Retrieval Over Ocean
NASA Astrophysics Data System (ADS)
Sawyer, V. R.; Levy, R. C.; Mattoo, S.; Quinn, G.; Veglio, P.
2016-12-01
Any future climate record for satellite aerosol retrieval will require continuity over multiple decades, longer than the lifespan of an individual satellite instrument. The Dark Target algorithm was developed for MODIS, which began taking observations in 1999; the two MODIS instruments currently in orbit are not expected to continue taking observations beyond the early 2020s. However, the algorithm is portable, and a Dark Target product for VIIRS is scheduled for release December 2016. Because MODIS and VIIRS operate at different wavelengths, resolutions, fields of view and orbital timing, the transition can introduce artifacts that must be corrected. Without these corrections, it will be difficult to find any changes that may occur in the global aerosol climate record over time periods that span the transition from MODIS to VIIRS retrievals. The University of Wisconsin-Madison SIPS team found thousands of matches between 2012 and 2016 in which Aqua-MODIS and Suomi-NPP VIIRS observe the same location at similar times and view angles. These matched cases are used to identify corresponding matches in the Intermediate File Format (IFF) aerosol retrievals for MODIS and VIIRS, which are compared to one another in turn. Because most known sources of disagreement between the two instruments have already been corrected during the IFF retrieval, the direct comparison between near-collocated cases shows only the differences that remain at local and regional scales. The comparison is further restricted to clear-sky cases over ocean, so that the investigation of seasonal, diurnal and geographic variation is not affected by uncertainties in the land surface or cloud contamination.
Results from the DarkSide-50 Dark Matter Experiment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fan, Alden
2016-01-01
While there is tremendous astrophysical and cosmological evidence for dark matter, its precise nature is one of the most significant open questions in modern physics. Weakly interacting massive particles (WIMPs) are a particularly compelling class of dark matter candidates with masses of the order 100 GeV and couplings to ordinary matter at the weak scale. Direct detection experiments are aiming to observe the low energy (<100 keV) scattering of dark matter off normal matter. With the liquid noble technology leading the way in WIMP sensitivity, no conclusive signals have been observed yet. The DarkSide experiment is looking for WIMP dark matter using a liquid argon target in a dual-phase time projection chamber located deep underground at Gran Sasso National Laboratory (LNGS) in Italy. Currently filled with argon obtained from underground sources, which is greatly reduced in radioactive 39Ar, DarkSide-50 recently made the most sensitive measurement of the 39Ar activity in underground argon and used it to set the strongest WIMP dark matter limit using liquid argon to date. This work describes the full chain of analysis used to produce the recent dark matter limit, from reconstruction of raw data to evaluation of the final exclusion curve. The DarkSide-50 apparatus is described in detail, followed by discussion of the low level reconstruction algorithms. The algorithms are then used to arrive at three broad analysis results: the electroluminescence signals in DarkSide-50 are used to perform a precision measurement of longitudinal electron diffusion in liquid argon. A search is performed on the underground argon data to identify the delayed coincidence signature of 85Kr decays to the 85mRb state, a crucial ingredient in the measurement of the 39Ar activity in the underground argon. Finally, a full description of the WIMP search is given, including development of cuts, efficiencies, energy scale, and exclusion curve in the WIMP mass vs. spin-independent WIMP-nucleon scattering cross section plane. This work was supervised by Hanguo Wang and was completed in collaboration with members of the DarkSide collaboration.
Bayesian aerosol retrieval algorithm for MODIS AOD retrieval over land
NASA Astrophysics Data System (ADS)
Lipponen, Antti; Mielonen, Tero; Pitkänen, Mikko R. A.; Levy, Robert C.; Sawyer, Virginia R.; Romakkaniemi, Sami; Kolehmainen, Ville; Arola, Antti
2018-03-01
We have developed a Bayesian aerosol retrieval (BAR) algorithm for the retrieval of aerosol optical depth (AOD) over land from the Moderate Resolution Imaging Spectroradiometer (MODIS). In the BAR algorithm, we simultaneously retrieve all dark land pixels in a granule, utilize spatial correlation models for the unknown aerosol parameters, use a statistical prior model for the surface reflectance, and take into account the uncertainties due to fixed aerosol models. The retrieved parameters are total AOD at 0.55 µm, fine-mode fraction (FMF), and surface reflectances at four different wavelengths (0.47, 0.55, 0.64, and 2.1 µm). The accuracy of the new algorithm is evaluated by comparing the AOD retrievals to Aerosol Robotic Network (AERONET) AOD. The results show that the BAR significantly improves the accuracy of AOD retrievals over the operational Dark Target (DT) algorithm. A reduction of about 29 % in the AOD root mean square error and a decrease of about 80 % in the median bias of AOD were found globally when the BAR was used instead of the DT algorithm. Furthermore, the fraction of AOD retrievals inside the ±(0.05+15 %) expected error envelope increased from 55 to 76 %. In addition to retrieving the values of AOD, FMF, and surface reflectance, the BAR also gives pixel-level posterior uncertainty estimates for the retrieved parameters. The BAR algorithm always results in physical, non-negative AOD values, and the average computation time for a single granule was less than a minute on a modern personal computer.
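The evaluation statistics quoted above (RMSE, median bias, and the fraction of retrievals inside the ±(0.05 + 15 %) expected-error envelope) are straightforward to compute from matched satellite and AERONET AOD pairs. A minimal sketch, with placeholder array names:

```python
import numpy as np

def aod_comparison_stats(aod_sat, aod_aeronet):
    """RMSE, median bias, and fraction of retrievals inside the
    +/-(0.05 + 15%) expected-error envelope, as used in the abstract."""
    err = aod_sat - aod_aeronet
    rmse = np.sqrt(np.mean(err ** 2))
    median_bias = np.median(err)
    ee = 0.05 + 0.15 * aod_aeronet        # expected-error envelope half-width
    frac_in_ee = np.mean(np.abs(err) <= ee)
    return rmse, median_bias, frac_in_ee
```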
NASA Astrophysics Data System (ADS)
Loria-Salazar, S. Marcela
The aim of the present work is to carry out a detailed analysis of ground and columnar aerosol properties obtained by an in-situ Photoacoustic and Integrated Nephelometer (PIN), a Cimel CE-318 sunphotometer and the MODIS instrument onboard the Aqua and Terra satellites, for semi-arid Reno, Nevada, USA in the local summer months of 2012. Satellite determination of local aerosol pollution is desirable because of the potential for broad spatial and temporal coverage. However, retrieval of quantitative measures of air pollution such as Aerosol Optical Depth (AOD) from satellite measurements is challenging because the underlying surface albedo is heterogeneous in space and time. Therefore, comparisons of satellite retrievals with measurements from ground-based sun photometers are crucial for validation, testing, and further development of instruments and retrieval algorithms. Ground-based sunphotometry and in-situ ground observations show that seasonal weather changes and fire plumes have a great influence on atmospheric aerosol optics. The Apparent Optical Height (AOH) follows the shape of the development of the Convective Boundary Layer (CBL) when fire conditions were not present. However, significant fine particle optical depth was inferred beyond the CBL, thereby complicating the use of remote sensing measurements for near-ground aerosol pollution measurements. A meteorological analysis was performed to help diagnose the nature of the aerosols above Reno. The calculation of a Zephyr index and back trajectory analysis demonstrated that a local circulation often induces aerosol transport from Northern CA over the Sierra Nevada Mountains that doubles the Aerosol Optical Depth (AOD) at 500 nm. Sunphotometer measurements were used as a `ground truth' for satellite retrievals to evaluate the current state of the science retrievals in this challenging location. Satellite-retrieved AOD showed the presence of wildfires in Northern CA during August. AOD retrieved using the "dark-target algorithm" may be unrealistically high over the Great Basin. Low correlation was found between AERONET AOD and dark-target algorithm AOD retrievals from Aqua and Terra during June and July. During fire conditions the dark-target algorithm AOD values correlated better with AERONET measurements in August. Use of the Deep Blue algorithm for MODIS data to retrieve AOD did not provide enough points to compare with AERONET in June and July. In August, AOD from Deep Blue and AERONET retrievals exhibited low correlation. AEE from MODIS products and AERONET exhibited low correlation during every month. Apparently satellite AOD retrievals need much improvement for areas like semi-arid Reno.
NASA Astrophysics Data System (ADS)
Leonardi, E.; Piperno, G.; Raggi, M.
2017-10-01
A possible solution to the Dark Matter problem postulates that it interacts with Standard Model particles through a new force mediated by a “portal”. If the new force has a U(1) gauge structure, the “portal” is a massive photon-like vector particle, called the dark photon or A'. The PADME experiment at the DAΦNE Beam-Test Facility (BTF) in Frascati is designed to detect dark photons produced in positron-on-fixed-target annihilations decaying to dark matter (e+e- → γA') by measuring the final state missing mass. One of the key roles in the experiment will be played by the electromagnetic calorimeter, which will be used to measure the properties of the final state recoil γ. The calorimeter will be composed of 616 21×21×230 mm³ BGO crystals oriented with the long axis parallel to the beam direction and arranged in a roughly circular shape with a central hole to avoid the pile-up due to the large number of low-angle Bremsstrahlung photons. The total energy and position of the electromagnetic shower generated by a photon impacting on the calorimeter can be reconstructed by collecting the energy deposits in the cluster of crystals involved in the shower. In PADME we are testing two different clustering algorithms, PADME-Radius and PADME-Island, based on two complementary strategies. In this paper we describe the two algorithms, with their respective implementations, and report on the results obtained with them at the PADME energy scale (< 1 GeV), both with a GEANT4-based simulation and with an existing 5×5 matrix of BGO crystals tested at the DAΦNE BTF.
Infrared traffic image enhancement algorithm based on dark channel prior and gamma correction
NASA Astrophysics Data System (ADS)
Zheng, Lintao; Shi, Hengliang; Gu, Ming
2017-07-01
The infrared traffic images acquired by intelligent traffic surveillance equipment have low contrast, little perceptible hierarchical difference, and a blurred visual effect. Therefore, infrared traffic image enhancement, being an indispensable key step, is applied in nearly all infrared-imaging-based traffic engineering applications. In this paper, we propose an infrared traffic image enhancement algorithm based on the dark channel prior and gamma correction. The dark channel prior, well known as an image dehazing method, is here used for infrared image enhancement for the first time. In the proposed algorithm, the original degraded infrared traffic image is first transformed with the dark channel prior to obtain an initial enhanced result. A further adjustment based on the gamma curve is then applied because the initial enhanced result has low brightness. Comprehensive validation experiments reveal that the proposed algorithm outperforms the current state-of-the-art algorithms.
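For orientation, the sketch below shows the generic dark-channel-prior transform for a single-channel image followed by gamma correction. It is a minimal illustration under common default choices (patch minimum as the "dark channel", atmospheric light from the brightest dark-channel pixels, gamma < 1 to brighten low tones); the paper's exact adaptation to infrared traffic imagery may differ.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dcp_enhance_gray(img, patch=15, omega=0.95, t0=0.1, gamma=0.7):
    """Dark-channel-prior style enhancement of a single-channel image in
    [0, 1], followed by gamma correction. Generic DCP transform only."""
    dark = minimum_filter(img, size=patch)                 # local 'dark channel'
    A = img[dark >= np.percentile(dark, 99.9)].max()       # atmospheric light
    t = np.clip(1.0 - omega * minimum_filter(img / A, size=patch), t0, 1.0)
    restored = np.clip((img - A) / t + A, 0.0, 1.0)        # invert haze model
    return restored ** gamma                               # gamma < 1 brightens
```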
Retrieval of aerosol optical properties using MERIS observations: Algorithm and some first results.
Mei, Linlu; Rozanov, Vladimir; Vountas, Marco; Burrows, John P; Levy, Robert C; Lotz, Wolfhardt
2017-08-01
The MEdium Resolution Imaging Spectrometer (MERIS) instrument on board ESA Envisat made measurements from 2002 to 2012. Although MERIS was limited in spectral coverage, accurate Aerosol Optical Thickness (AOT) can be retrieved from MERIS data by using appropriate additional information. We introduce a new AOT retrieval algorithm for MERIS over land surfaces, referred to as the eXtensible Bremen AErosol Retrieval (XBAER). XBAER is similar to the "dark-target" (DT) retrieval algorithm used for the Moderate Resolution Imaging Spectroradiometer (MODIS), in that it uses a lookup table (LUT) to match the satellite-observed reflectance and derive the AOT. Instead of a global parameterization of surface spectral reflectance, XBAER uses a set of spectral coefficients to prescribe surface properties. In this manner, XBAER is not limited to dark surfaces (vegetation) and retrieves AOT over bright surfaces (desert, semiarid, and urban areas). Preliminary validation of the MERIS-derived AOT against ground-based Aerosol Robotic Network (AERONET) measurements yields good agreement; the resulting regression is y = (0.92 ± 0.07)x + (0.05 ± 0.01), with a Pearson correlation coefficient of R = 0.78. Global monthly means of AOT have been compared among XBAER, MODIS and other satellite-derived datasets.
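The LUT matching step described above can be sketched as a one-dimensional inversion, assuming the table has already been interpolated to the observation geometry and that simulated TOA reflectance increases monotonically with AOT. The names are placeholders and this is not the XBAER implementation.

```python
import numpy as np

def retrieve_aot_from_lut(rho_toa_obs, lut_aot, lut_rho_toa):
    """Invert AOT by matching an observed TOA reflectance to LUT nodes
    (lut_rho_toa simulated at the observation geometry, increasing with
    lut_aot). Simple 1-D interpolation; placeholder names throughout."""
    return float(np.interp(rho_toa_obs, lut_rho_toa, lut_aot))
```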
MODIS 3km Aerosol Product: Algorithm and Global Perspective
NASA Technical Reports Server (NTRS)
Remer, L. A.; Mattoo, S.; Levy, R. C.; Munchak, L.
2013-01-01
After more than a decade of producing a nominal 10 km aerosol product based on the dark target method, the MODIS aerosol team will be releasing a nominal 3 km product as part of their Collection 6 release. The new product differs from the original 10 km product only in the manner in which reflectance pixels are ingested, organized and selected by the aerosol algorithm. Overall, the 3 km product closely mirrors the 10 km product. However, the finer resolution product is able to retrieve over ocean closer to islands and coastlines, and is better able to resolve fine aerosol features such as smoke plumes over both ocean and land. In some situations, it provides retrievals over entire regions that the 10 km product barely samples. In situations traditionally difficult for the dark target algorithm, such as over bright or urban surfaces, the 3 km product introduces isolated spikes of artificially high aerosol optical depth (AOD) that the 10 km algorithm avoids. Over land, globally, the 3 km product appears to be 0.01 to 0.02 higher than the 10 km product, while over ocean, the 3 km algorithm is retrieving a proportionally greater number of very low aerosol loading situations. Based on collocations with ground-based observations for only six months, expected errors associated with the 3 km land product are determined to be greater than for the 10 km product: ±0.05 ± 0.25 AOD. Over ocean, the suggestion is for expected errors to be the same as for the 10 km product: ±0.03 ± 0.05 AOD. The advantage of the product is on the local scale, which will require continued evaluation not addressed here. Nevertheless, the new 3 km product is expected to provide important information complementary to existing satellite-derived products and become an important tool for the aerosol community.
The Collection 6 'dark-target' MODIS Aerosol Products
NASA Technical Reports Server (NTRS)
Levy, Robert C.; Mattoo, Shana; Munchak, Leigh A.; Kleidman, Richard G.; Patadia, Falguni; Gupta, Pawan; Remer, Lorraine
2013-01-01
Aerosol retrieval algorithms are applied to the Moderate resolution Imaging Spectroradiometer (MODIS) sensors on both Terra and Aqua, creating two streams of decade-plus aerosol information. Products of aerosol optical depth (AOD) and aerosol size are used for many applications, but the primary concern is that these global products are comprehensive and consistent enough for use in climate studies. One of our major customers is the international modeling comparison study known as AEROCOM, which relies on the MODIS data as a benchmark. In order to keep up with the needs of AEROCOM and other MODIS data users, while utilizing new science and tools, we have improved the algorithms and products. The code, and the associated products, will be known as Collection 6 (C6). While not a major overhaul from the previous Collection 5 (C5) version, there are enough changes that there are significant impacts on the products and their interpretation. In its entirety, the C6 algorithm comprises three sub-algorithms for retrieving aerosol properties over different surfaces: the dark-target (DT) algorithms to retrieve over (1) ocean and (2) vegetated/dark-soiled land, plus (3) the Deep Blue (DB) algorithm, originally developed to retrieve over desert/arid land. Focusing on the two DT algorithms, we have updated assumptions for central wavelengths, Rayleigh optical depths and gas (H2O, O3, CO2, etc.) absorption corrections, while relaxing the solar zenith angle limit (up to 84°) to increase pole-ward coverage. For DT-land, we have updated the cloud mask to allow heavy smoke retrievals, fine-tuned the assignments for aerosol type as a function of season and location, corrected bugs in the Quality Assurance (QA) logic, and added diagnostic parameters such as topographic altitude. For DT-ocean, improvements include a revised cloud mask for thin-cirrus detection, inclusion of wind speed dependence in the retrieval, updates to the logic of QA Confidence flag (QAC) assignment, and additions of important diagnostic information. At the same time as we have introduced algorithm changes, we have also accounted for upstream changes including: new instrument calibration, revised land-sea masking, and changed cloud masking. Upstream changes also impact the coverage and global statistics of the retrieved AOD. Although our responsibility is to the DT code and products, we have also added a product that merges the DT and DB products over semi-arid land surfaces to provide a more gap-free dataset, primarily for visualization purposes. Preliminary validation shows that, compared to surface-based sunphotometer data, the C6, Level 2 (along swath) DT products compare at least as well as those from C5. C6 will include new diagnostic information about clouds in the aerosol field, including an aerosol cloud mask at 500 m resolution, and calculations of the distance to the nearest cloud from clear pixels. Finally, we have revised the strategy for aggregating and averaging the Level 2 (swath) data to become Level 3 (gridded) data. Altogether, the changes to the DT algorithms will result in reduced global AOD (by 0.02) over ocean and increased AOD (by 0.02) over land, along with changes in spatial coverage. Changes in calibration will have more impact on Terra's time series, especially over land. This will result in a significant reduction in artificial differences between the Terra and Aqua datasets, and will stabilize the MODIS data as a target for AEROCOM studies.
Modeling of Aerosol Optical Depth Variability during the 1998 Canadian Forest Fire Smoke Event
NASA Astrophysics Data System (ADS)
Aubé, M.; O`Neill, N. T.; Royer, A.; Lavoué, D.
2003-04-01
Monitoring of aerosol optical depth (AOD) is of particular importance due to the significant role of aerosols in the atmospheric radiative budget. Up to now, the two standard techniques used for retrieving AOD are: (i) sun photometry, which provides measurements of high temporal frequency but sparse spatial coverage, and (ii) satellite-based approaches such as DDV (Dense Dark Vegetation) based inversion algorithms, which extract AOD over dark targets in remotely sensed imagery. Although the latter techniques allow AOD retrieval over appreciable spatial domains, the irregular spatial pattern of dark targets and the typically low repeat frequencies of imaging satellites exclude the acquisition of AOD databases on a continuous spatio-temporal basis. We attempt to fill gaps in spatio-temporal AOD measurements using a new methodology that links AOD measurements and a particulate matter transport model using a data assimilation approach. This modelling package (AODSEM, the Aerosol Optical Depth Spatio-temporal Evolution Model) uses a size- and aerosol-type-segregated semi-Lagrangian-Eulerian trajectory algorithm driven by analysed meteorological data. Its novelty resides in the fact that the model evolution is tied to both ground-based and satellite-level AOD measurements, and all physical processes have been optimized to track this important but crude parameter. We applied this methodology to a significant smoke event that occurred over Canada in August 1998. The results show the potential of this approach inasmuch as residuals between the AODSEM assimilated analysis and measurements are smaller than the typical errors associated with remotely sensed AOD (satellite or ground based). The AODSEM assimilation approach also gives better results than classical interpolation techniques. This improvement is especially evident when the available number of AOD measurements is small.
WFC3/UVIS Dark Calibration: Monitoring Results and Improvements to Dark Reference Files
NASA Astrophysics Data System (ADS)
Bourque, M.; Baggett, S.
2016-04-01
The Wide Field Camera 3 (WFC3) UVIS detector possesses an intrinsic signal during exposures, even in the absence of light, known as dark current. A daily monitor program is employed every HST cycle to characterize and measure this current as well as to create calibration files which serve to subtract the dark current from science data. We summarize the results of the daily monitor program for all on-orbit data. We also introduce a new algorithm for generating the dark reference files that provides several improvements to their overall quality. Key features of the new algorithm include correcting the dark frames for Charge Transfer Efficiency (CTE) losses, using an anneal-cycle average value to measure the dark current, and generating reference files on a daily basis. This new algorithm is part of the release of the CALWF3 v3.3 calibration pipeline on February 23, 2016 (also known as "UVIS 2.0"). Improved dark reference files have been regenerated and re-delivered to the Calibration Reference Data System (CRDS) for all on-orbit data. Observers with science data taken prior to the release of CALWF3 v3.3 may request their data through the Mikulski Archive for Space Telescopes (MAST) to obtain the improved products.
Robotic Spectroscopy at the Dark Sky Observatory
NASA Astrophysics Data System (ADS)
Rosenberg, Daniel E.; Gray, Richard O.; Mashburn, Jonathan; Swenson, Aaron W.; McGahee, Courtney E.; Briley, Michael M.
2018-06-01
Spectroscopic observations using the classification-resolution Gray-Miller spectrograph attached to the Dark Sky Observatory 32 inch telescope (Appalachian State University, North Carolina) have been automated with a robotic script called the “Robotic Spectroscopist” (RS). RS runs autonomously during the night and controls all operations related to spectroscopic observing. At the heart of RS are a number of algorithms that first select and center the target star in the field of an imaging camera and then on the spectrograph slit. RS monitors the observatory weather station, and suspends operations and closes the dome when weather conditions warrant, and can reopen and resume observations when the weather improves. RS selects targets from a list using a queue-observing protocol based on observer-assigned priorities, but also uses target-selection criteria based on weather conditions, especially seeing. At the end of the night RS transfers the data files to the main campus, where they are reduced with an automatic pipeline. Our experience has shown that RS is more efficient and consistent than a human observer, and produces data sets that are ideal for automatic reduction. RS should be adaptable for use at other similar observatories, and so we are making the code freely available to the astronomical community.
Creating a consistent dark-target aerosol optical depth record from MODIS and VIIRS
NASA Astrophysics Data System (ADS)
Levy, R. C.; Mattoo, S.; Munchak, L. A.; Patadia, F.; Holz, R.
2014-12-01
To answer fundamental questions about our changing climate, we must quantify how aerosols are changing over time. This is a global question that requires regional characterization, because in some places aerosols are increasing and in others they are decreasing. Although NASA's Moderate Resolution Imaging Spectroradiometer (MODIS) sensors have provided quantitative information about global aerosol optical depth (AOD) for more than a decade, the creation of an aerosol climate data record (CDR) requires consistent multi-decadal data. With the Visible and Infrared Imaging Radiometer Suite (VIIRS) aboard Suomi-NPP, there is potential to continue the MODIS aerosol time series. Yet, since the operational VIIRS aerosol product is produced by a different algorithm, it is not suitable for continuing MODIS to create an aerosol CDR. Therefore, we have applied the MODIS Dark-Target (DT) algorithm to VIIRS observations, taking into account the slight differences in wavelengths, resolutions and geometries between the two sensors. More specifically, we applied the MODIS DT algorithm to a dataset known as the Intermediate File Format (IFF), created by the University of Wisconsin. The IFF is produced for both MODIS and VIIRS, with the idea that a single (MODIS-like or ML) algorithm can be run on either dataset, which can in turn be compared to the MODIS Collection 6 (M6) retrieval that is run on standard MODIS data. After minimizing or characterizing the remaining differences between ML on MODIS-IFF (ML-M) and M6, we have performed an apples-to-apples comparison between ML-M and ML on VIIRS-IFF (ML-V). Examples of these comparisons include time series of monthly global means, monthly and seasonal global maps at 1° resolution, and collocations compared to AERONET. We concentrate on the overlapping period January 2012 through June 2014, and discuss some of the remaining discrepancies between the ML-V and ML-M datasets.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Binder, Gary A.; /Caltech /SLAC
2010-08-25
In order to make accurate measurements of dark energy, a system is needed to monitor the focus and alignment of the Dark Energy Camera (DECam) to be located on the Blanco 4m Telescope for the upcoming Dark Energy Survey. One new approach under development is to fit out-of-focus star images to a point spread function from which information about the focus and tilt of the camera can be obtained. As a first test of a new algorithm using this idea, simulated star images produced from a model of DECam in the optics software Zemax were fitted. Then, real images from the Mosaic II imager currently installed on the Blanco telescope were used to investigate the algorithm's capabilities. A number of problems with the algorithm were found, and more work is needed to understand its limitations and improve its capabilities so it can reliably predict camera alignment and focus.
Prospects for detection of target-dependent annual modulation in direct dark matter searches
Nobile, Eugenio Del; Gelmini, Graciela B.; Witte, Samuel J.
2016-02-03
Earth's rotation about the Sun produces an annual modulation in the expected scattering rate at direct dark matter detection experiments. The annual modulation as a function of the recoil energy E_R imparted by the dark matter particle to a target nucleus is expected to vary depending on the detector material. However, for most interactions a change of variables from E_R to v_min, the minimum speed a dark matter particle must have to impart a fixed E_R to a target nucleus, produces an annual modulation independent of the target element. We recently showed that if the dark matter-nucleus cross section contains a non-factorizable target and dark matter velocity dependence, the annual modulation as a function of v_min can be target dependent. Here we examine more extensively the necessary conditions for target-dependent modulation, its observability in present-day experiments, and the extent to which putative signals could identify a dark matter-nucleus differential cross section with a non-factorizable dependence on the dark matter velocity.
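For standard elastic scattering, the E_R to v_min change of variables mentioned above is the textbook kinematic relation (stated here for context, not taken from the paper):

```latex
v_{\min}(E_R) = \sqrt{\frac{m_N E_R}{2\,\mu_{\chi N}^{2}}},
\qquad
\mu_{\chi N} = \frac{m_\chi m_N}{m_\chi + m_N},
```

where m_N is the target nucleus mass, m_χ the dark matter mass, and μ_χN their reduced mass; a non-factorizable velocity dependence in the cross section is what breaks the target independence of the modulation expressed in these variables.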
Directional detection of dark matter with two-dimensional targets
Hochberg, Yonit; Kahn, Yonatan; Lisanti, Mariangela; ...
2017-09-01
We propose two-dimensional materials as targets for direct detection of dark matter. Using graphene as an example, we focus on the case where dark matter scattering deposits sufficient energy on a valence-band electron to eject it from the target. Here, we show that the sensitivity of graphene to dark matter of MeV to GeV mass can be comparable, for similar exposure and background levels, to that of semiconductor targets such as silicon and germanium. Moreover, a two-dimensional target is an excellent directional detector, as the ejected electron retains information about the angular dependence of the incident dark matter particle. Our proposal can be implemented by the PTOLEMY experiment, presenting for the first time an opportunity for directional detection of sub-GeV dark matter.
Directional detection of dark matter with two-dimensional targets
NASA Astrophysics Data System (ADS)
Hochberg, Yonit; Kahn, Yonatan; Lisanti, Mariangela; Tully, Christopher G.; Zurek, Kathryn M.
2017-09-01
We propose two-dimensional materials as targets for direct detection of dark matter. Using graphene as an example, we focus on the case where dark matter scattering deposits sufficient energy on a valence-band electron to eject it from the target. We show that the sensitivity of graphene to dark matter of MeV to GeV mass can be comparable, for similar exposure and background levels, to that of semiconductor targets such as silicon and germanium. Moreover, a two-dimensional target is an excellent directional detector, as the ejected electron retains information about the angular dependence of the incident dark matter particle. This proposal can be implemented by the PTOLEMY experiment, presenting for the first time an opportunity for directional detection of sub-GeV dark matter.
Directional detection of dark matter with two-dimensional targets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hochberg, Yonit; Kahn, Yonatan; Lisanti, Mariangela
We propose two-dimensional materials as targets for direct detection of dark matter. Using graphene as an example, we focus on the case where dark matter scattering deposits sufficient energy on a valence-band electron to eject it from the target. Here, we show that the sensitivity of graphene to dark matter of MeV to GeV mass can be comparable, for similar exposure and background levels, to that of semiconductor targets such as silicon and germanium. Moreover, a two-dimensional target is an excellent directional detector, as the ejected electron retains information about the angular dependence of the incident dark matter particle. Our proposal can be implemented by the PTOLEMY experiment, presenting for the first time an opportunity for directional detection of sub-GeV dark matter.
NASA Technical Reports Server (NTRS)
Di Tomaso, Enza; Schutgens, Nick A. J.; Jorba, Oriol; Perez Garcia-Pando, Carlos
2017-01-01
A data assimilation capability has been built for the NMMB-MONARCH chemical weather prediction system, with a focus on mineral dust, a prominent type of aerosol. An ensemble-based Kalman filter technique (namely the local ensemble transform Kalman filter - LETKF) has been utilized to optimally combine model background and satellite retrievals. Our implementation of the ensemble is based on known uncertainties in the physical parametrizations of the dust emission scheme. Experiments showed that MODIS AOD retrievals using the Dark Target algorithm can help NMMB-MONARCH to better characterize atmospheric dust. This is particularly true for the analysis of the dust outflow in the Sahel region and over the African Atlantic coast. The assimilation of MODIS AOD retrievals based on the Deep Blue algorithm has a further positive impact in the analysis downwind from the strongest dust sources of the Sahara and in the Arabian Peninsula. An analysis-initialized forecast performs better (lower forecast error and higher correlation with observations) than a standard forecast, with the exception of underestimating dust in the long-range Atlantic transport and degradation of the temporal evolution of dust in some regions after day 1. Particularly relevant is the improved forecast over the Sahara throughout the forecast range thanks to the assimilation of Deep Blue retrievals over areas not easily covered by other observational datasets. The present study on mineral dust is a first step towards data assimilation with a complete aerosol prediction system that includes multiple aerosol species.
NASA Astrophysics Data System (ADS)
Di Tomaso, Enza; Schutgens, Nick A. J.; Jorba, Oriol; Pérez García-Pando, Carlos
2017-03-01
A data assimilation capability has been built for the NMMB-MONARCH chemical weather prediction system, with a focus on mineral dust, a prominent type of aerosol. An ensemble-based Kalman filter technique (namely the local ensemble transform Kalman filter - LETKF) has been utilized to optimally combine model background and satellite retrievals. Our implementation of the ensemble is based on known uncertainties in the physical parametrizations of the dust emission scheme. Experiments showed that MODIS AOD retrievals using the Dark Target algorithm can help NMMB-MONARCH to better characterize atmospheric dust. This is particularly true for the analysis of the dust outflow in the Sahel region and over the African Atlantic coast. The assimilation of MODIS AOD retrievals based on the Deep Blue algorithm has a further positive impact in the analysis downwind from the strongest dust sources of the Sahara and in the Arabian Peninsula. An analysis-initialized forecast performs better (lower forecast error and higher correlation with observations) than a standard forecast, with the exception of underestimating dust in the long-range Atlantic transport and degradation of the temporal evolution of dust in some regions after day 1. Particularly relevant is the improved forecast over the Sahara throughout the forecast range thanks to the assimilation of Deep Blue retrievals over areas not easily covered by other observational datasets. The present study on mineral dust is a first step towards data assimilation with a complete aerosol prediction system that includes multiple aerosol species.
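The ensemble Kalman update at the heart of such a system combines a model ensemble with satellite AOD. The sketch below is a simplified global stochastic ensemble Kalman update, not the localized transform filter (LETKF) used in NMMB-MONARCH, and all array names are placeholders.

```python
import numpy as np

def enkf_update(X, y, H, R, rng=None):
    """Simplified stochastic ensemble Kalman update (global, no localization).

    X: (n_state, n_ens) ensemble of model states (e.g. dust fields)
    y: (n_obs,) observations (e.g. MODIS AOD retrievals)
    H: (n_obs, n_state) linear observation operator
    R: (n_obs, n_obs) observation-error covariance
    """
    rng = np.random.default_rng(0) if rng is None else rng
    n_ens = X.shape[1]
    A = X - X.mean(axis=1, keepdims=True)        # state perturbations
    Y = H @ X
    B = Y - Y.mean(axis=1, keepdims=True)        # predicted-obs perturbations
    Pxy = A @ B.T / (n_ens - 1)
    Pyy = B @ B.T / (n_ens - 1) + R
    K = np.linalg.solve(Pyy, Pxy.T).T            # Kalman gain (Pyy symmetric)
    y_pert = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, n_ens).T
    return X + K @ (y_pert - Y)                  # analysis ensemble
```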
Detection of sub-MeV dark matter with three-dimensional Dirac materials
NASA Astrophysics Data System (ADS)
Hochberg, Yonit; Kahn, Yonatan; Lisanti, Mariangela; Zurek, Kathryn M.; Grushin, Adolfo G.; Ilan, Roni; Griffin, Sinéad M.; Liu, Zhen-Fei; Weber, Sophie F.; Neaton, Jeffrey B.
2018-01-01
We propose the use of three-dimensional Dirac materials as targets for direct detection of sub-MeV dark matter. Dirac materials are characterized by a linear dispersion for low-energy electronic excitations, with a small band gap of O(meV) if lattice symmetries are broken. Dark matter at the keV scale carrying kinetic energy as small as a few meV can scatter and excite an electron across the gap. Alternatively, bosonic dark matter as light as a few meV can be absorbed by the electrons in the target. We develop the formalism for dark matter scattering and absorption in Dirac materials and calculate the experimental reach of these target materials. We find that Dirac materials can play a crucial role in detecting dark matter in the keV to MeV mass range that scatters with electrons via a kinetically mixed dark photon, as the dark photon does not develop an in-medium effective mass. The same target materials provide excellent sensitivity to absorption of light bosonic dark matter in the meV to hundreds of meV mass range, superior to all other existing proposals when the dark matter is a kinetically mixed dark photon.
Detection of sub-MeV dark matter with three-dimensional Dirac materials
Hochberg, Yonit; Kahn, Yonatan; Lisanti, Mariangela; ...
2018-01-08
Here, we propose the use of three-dimensional Dirac materials as targets for direct detection of sub-MeV dark matter. Dirac materials are characterized by a linear dispersion for low-energy electronic excitations, with a small band gap of Ο(meV) if lattice symmetries are broken. Dark matter at the keV scale carrying kinetic energy as small as a few meV can scatter and excite an electron across the gap. Alternatively, bosonic dark matter as light as a few meV can be absorbed by the electrons in the target. We develop the formalism for dark matter scattering and absorption in Dirac materials and calculate the experimental reach of these target materials. We find that Dirac materials can play a crucial role in detecting dark matter in the keV to MeV mass range that scatters with electrons via a kinetically mixed dark photon, as the dark photon does not develop an in-medium effective mass. The same target materials provide excellent sensitivity to absorption of light bosonic dark matter in the meV to hundreds of meV mass range, superior to all other existing proposals when the dark matter is a kinetically mixed dark photon.
Detection of sub-MeV dark matter with three-dimensional Dirac materials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hochberg, Yonit; Kahn, Yonatan; Lisanti, Mariangela
Here, we propose the use of three-dimensional Dirac materials as targets for direct detection of sub-MeV dark matter. Dirac materials are characterized by a linear dispersion for low-energy electronic excitations, with a small band gap of Ο(meV) if lattice symmetries are broken. Dark matter at the keV scale carrying kinetic energy as small as a few meV can scatter and excite an electron across the gap. Alternatively, bosonic dark matter as light as a few meV can be absorbed by the electrons in the target. We develop the formalism for dark matter scattering and absorption in Dirac materials and calculate the experimental reach of these target materials. We find that Dirac materials can play a crucial role in detecting dark matter in the keV to MeV mass range that scatters with electrons via a kinetically mixed dark photon, as the dark photon does not develop an in-medium effective mass. The same target materials provide excellent sensitivity to absorption of light bosonic dark matter in the meV to hundreds of meV mass range, superior to all other existing proposals when the dark matter is a kinetically mixed dark photon.
DAEδALUS and dark matter detection
Kahn, Yonatan; Krnjaic, Gordan; Thaler, Jesse; ...
2015-03-05
Among laboratory probes of dark matter, fixed-target neutrino experiments are particularly well suited to search for light weakly coupled dark sectors. Here in this paper, we show that the DAEδALUS source setup (an 800 MeV proton beam impinging on a target of graphite and copper) can improve the present LSND bound on dark photon models by an order of magnitude over much of the accessible parameter space for light dark matter when paired with a suitable neutrino detector such as LENA. Interestingly, both DAEδALUS and LSND are sensitive to dark matter produced from off-shell dark photons. We show for the first time that LSND can be competitive with searches for visible dark photon decays and that fixed-target experiments have sensitivity to a much larger range of heavy dark photon masses than previously thought. We review the mechanism for dark matter production and detection through a dark photon mediator, discuss the beam-off and beam-on backgrounds, and present the sensitivity in dark photon kinetic mixing for both the DAEδALUS/LENA setup and LSND in both the on- and off-shell regimes.
NASA Astrophysics Data System (ADS)
Kurugol, Sila; Dy, Jennifer G.; Rajadhyaksha, Milind; Gossage, Kirk W.; Weissmann, Jesse; Brooks, Dana H.
2011-03-01
The examination of the dermis/epidermis junction (DEJ) is clinically important for skin cancer diagnosis. Reflectance confocal microscopy (RCM) is an emerging tool for detection of skin cancers in vivo. However, visual localization of the DEJ in RCM images, with high accuracy and repeatability, is challenging, especially in fair skin, due to low contrast, heterogeneous structure and high inter- and intra-subject variability. We recently proposed a semi-automated algorithm to localize the DEJ in z-stacks of RCM images of fair skin, based on feature segmentation and classification. Here we extend the algorithm to dark skin. The extended algorithm first decides the skin type and then applies the appropriate DEJ localization method. In dark skin, strong backscatter from the pigment melanin causes the basal cells above the DEJ to appear with high contrast. To locate those high contrast regions, the algorithm operates on small tiles (regions) and finds the peaks of the smoothed average intensity depth profile of each tile. However, for some tiles, due to heterogeneity, multiple peaks in the depth profile exist and the strongest peak might not be the basal layer peak. To select the correct peak, basal cells are represented with a vector of texture features. The peak with the most similar features to this feature vector is selected. The results show that the algorithm detected the skin types correctly for all 17 stacks tested (8 fair, 9 dark). The DEJ detection algorithm achieved an average distance from the ground truth DEJ surface of around 4.7 μm for dark skin and around 7-14 μm for fair skin.
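The tile-wise depth-profile step described for dark skin can be sketched as follows, assuming the z-stack is a (depth, rows, cols) array; the texture-feature peak selection is not reproduced, and the parameter values are illustrative only.

```python
import numpy as np
from scipy.ndimage import uniform_filter1d
from scipy.signal import find_peaks

def tile_depth_profile_peaks(stack, tile=16, smooth=5):
    """For each lateral tile of an RCM z-stack (depth, rows, cols), compute
    the smoothed mean-intensity depth profile and return its peak indices,
    strongest first; the strongest peak is a first guess for the bright
    basal layer in dark skin."""
    nz, ny, nx = stack.shape
    peaks_per_tile = {}
    for y0 in range(0, ny - tile + 1, tile):
        for x0 in range(0, nx - tile + 1, tile):
            profile = stack[:, y0:y0 + tile, x0:x0 + tile].mean(axis=(1, 2))
            profile = uniform_filter1d(profile, size=smooth)
            peaks, _ = find_peaks(profile)
            peaks_per_tile[(y0, x0)] = peaks[np.argsort(profile[peaks])[::-1]]
    return peaks_per_tile
```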
Real-time image dehazing using local adaptive neighborhoods and dark-channel-prior
NASA Astrophysics Data System (ADS)
Valderrama, Jesus A.; Díaz-Ramírez, Víctor H.; Kober, Vitaly; Hernandez, Enrique
2015-09-01
A real-time algorithm for single image dehazing is presented. The algorithm is based on calculation of local neighborhoods of a hazed image inside a moving window. The local neighborhoods are constructed by computing rank-order statistics. Next the dark-channel-prior approach is applied to the local neighborhoods to estimate the transmission function of the scene. By using the suggested approach there is no need for applying a refining algorithm to the estimated transmission such as the soft matting algorithm. To achieve high-rate signal processing the proposed algorithm is implemented exploiting massive parallelism on a graphics processing unit (GPU). Computer simulation results are carried out to test the performance of the proposed algorithm in terms of dehazing efficiency and speed of processing. These tests are performed using several synthetic and real images. The obtained results are analyzed and compared with those obtained with existing dehazing algorithms.
NASA Astrophysics Data System (ADS)
Zhang, Hai; Kondragunta, Shobha; Laszlo, Istvan; Liu, Hongqing; Remer, Lorraine A.; Huang, Jingfeng; Superczynski, Stephen; Ciren, Pubu
2016-09-01
The Visible/Infrared Imager Radiometer Suite (VIIRS) on board the Suomi National Polar-orbiting Partnership (S-NPP) satellite has been retrieving aerosol optical thickness (AOT), operationally and globally, over ocean and land since shortly after S-NPP launch in 2011. However, the current operational VIIRS AOT retrieval algorithm over land has two limitations in its assumptions for land surfaces: (1) it only retrieves AOT over dark surfaces and (2) it assumes that the global surface reflectance ratios between VIIRS bands are constants. In this work, we develop a surface reflectance ratio database over land with a spatial resolution of 0.1° × 0.1° using 2 years of VIIRS top of atmosphere reflectances. We enhance the current operational VIIRS AOT retrieval algorithm by applying the surface reflectance ratio database in the algorithm. The enhanced algorithm is able to retrieve AOT over both dark and bright surfaces. Over bright surfaces, the VIIRS AOT retrievals from the enhanced algorithm have a correlation of 0.79, mean bias of -0.008, and standard deviation (STD) of error of 0.139 when compared against the ground-based observations at the global AERONET (Aerosol Robotic Network) sites. Over dark surfaces, the VIIRS AOT retrievals using the surface reflectance ratio database improve the root-mean-square error from 0.150 to 0.123. The use of the surface reflectance ratio database also increases the data coverage by more than 20% over dark surfaces. The AOT retrievals over bright surfaces are comparable to MODIS Deep Blue AOT retrievals.
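A 0.1° surface reflectance ratio climatology of the kind described above could be accumulated from clear-sky samples roughly as in the sketch below; the band names, the simple arithmetic mean, and the assumption that samples are already cloud- and atmosphere-screened are all illustrative choices, not the paper's procedure.

```python
import numpy as np

def grid_band_ratio(lat, lon, rho_band1, rho_band2, res=0.1):
    """Accumulate a gridded mean of the band1/band2 surface reflectance
    ratio on a res-degree global grid from 1-D sample arrays."""
    ny, nx = int(180 / res), int(360 / res)
    iy = np.clip(((lat + 90.0) / res).astype(int), 0, ny - 1)
    ix = np.clip(((lon + 180.0) / res).astype(int), 0, nx - 1)
    ratio_sum = np.zeros((ny, nx))
    count = np.zeros((ny, nx))
    np.add.at(ratio_sum, (iy, ix), rho_band1 / rho_band2)
    np.add.at(count, (iy, ix), 1)
    with np.errstate(invalid="ignore"):
        return np.where(count > 0, ratio_sum / count, np.nan)
```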
A synthetic genetic edge detection program.
Tabor, Jeffrey J; Salis, Howard M; Simpson, Zachary Booth; Chevalier, Aaron A; Levskaya, Anselm; Marcotte, Edward M; Voigt, Christopher A; Ellington, Andrew D
2009-06-26
Edge detection is a signal processing algorithm common in artificial intelligence and image recognition programs. We have constructed a genetically encoded edge detection algorithm that programs an isogenic community of E. coli to sense an image of light, communicate to identify the light-dark edges, and visually present the result of the computation. The algorithm is implemented using multiple genetic circuits. An engineered light sensor enables cells to distinguish between light and dark regions. In the dark, cells produce a diffusible chemical signal that diffuses into light regions. Genetic logic gates are used so that only cells that sense light and the diffusible signal produce a positive output. A mathematical model constructed from first principles and parameterized with experimental measurements of the component circuits predicts the performance of the complete program. Quantitatively accurate models will facilitate the engineering of more complex biological behaviors and inform bottom-up studies of natural genetic regulatory networks.
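A toy numerical analogue of the logic described above can be written in a few lines: dark regions emit a diffusible signal (approximated here by a Gaussian blur), and output appears only where light and signal coincide. The sigma and threshold values are illustrative assumptions, not parameters from the paper.

```python
# Toy analogue of the genetic edge detector: dark cells produce a signal that
# diffuses (Gaussian blur); an AND gate fires only where light AND signal meet.
import numpy as np
from scipy.ndimage import gaussian_filter

def edge_like_response(light_mask, sigma=3.0, threshold=0.05):
    """light_mask: 2-D boolean array, True where the projected image is bright."""
    signal_source = (~light_mask).astype(float)        # dark cells produce the signal
    diffused = gaussian_filter(signal_source, sigma)    # signal diffuses into light regions
    # Output only where there is light *and* enough diffused signal: the edge band.
    return light_mask & (diffused > threshold)
```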
A Synthetic Genetic Edge Detection Program
Tabor, Jeffrey J.; Salis, Howard; Simpson, Zachary B.; Chevalier, Aaron A.; Levskaya, Anselm; Marcotte, Edward M.; Voigt, Christopher A.; Ellington, Andrew D.
2009-01-01
Edge detection is a signal processing algorithm common in artificial intelligence and image recognition programs. We have constructed a genetically encoded edge detection algorithm that programs an isogenic community of E. coli to sense an image of light, communicate to identify the light-dark edges, and visually present the result of the computation. The algorithm is implemented using multiple genetic circuits. An engineered light sensor enables cells to distinguish between light and dark regions. In the dark, cells produce a diffusible chemical signal that diffuses into light regions. Genetic logic gates are used so that only cells that sense light and the diffusible signal produce a positive output. A mathematical model constructed from first principles and parameterized with experimental measurements of the component circuits predicts the performance of the complete program. Quantitatively accurate models will facilitate the engineering of more complex biological behaviors and inform bottom-up studies of natural genetic regulatory networks. PMID:19563759
An improved dehazing algorithm of aerial high-definition image
NASA Astrophysics Data System (ADS)
Jiang, Wentao; Ji, Ming; Huang, Xiying; Wang, Chao; Yang, Yizhou; Li, Tao; Wang, Jiaoying; Zhang, Ying
2016-01-01
For unmanned aerial vehicle (UAV) imaging, the sensor cannot acquire high-quality images in fog and haze. To solve this problem, an improved dehazing algorithm for aerial high-definition images is proposed. Based on the dark channel prior model, the new algorithm first extracts the edges from the crudely estimated transmission map and expands the extracted edges. Then, according to the expanded edges, the algorithm sets a threshold value to divide the crude transmission map into different areas and applies different guided filtering to these areas to compute the optimized transmission map. The experimental results demonstrate that the performance of the proposed algorithm is substantially the same as that of the algorithm based on the dark channel prior and guided filter, while the average computation time of the new algorithm is around 40% of the original and the detection capability for UAV images in fog and haze is improved effectively.
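The region-adaptive refinement idea can be sketched roughly as below, with box filtering standing in for the guided filter and all radii and thresholds chosen for illustration only.

```python
# Rough sketch of region-adaptive transmission refinement: detect and expand
# edges in the crude transmission map, then smooth edge and non-edge regions
# with different filter radii. Box filtering stands in for the guided filter.
import numpy as np
from scipy.ndimage import sobel, binary_dilation, uniform_filter

def refine_transmission(crude_t, edge_thresh=0.05, small_win=5, large_win=31):
    grad = np.hypot(sobel(crude_t, axis=0), sobel(crude_t, axis=1))
    edges = binary_dilation(grad > edge_thresh, iterations=3)   # expanded edge regions
    fine = uniform_filter(crude_t, size=small_win)    # preserve detail near edges
    coarse = uniform_filter(crude_t, size=large_win)  # stronger smoothing elsewhere
    return np.where(edges, fine, coarse)
```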
Searching for a dark photon with DarkLight
NASA Astrophysics Data System (ADS)
Corliss, R.; DarkLight Collaboration
2017-09-01
Despite compelling astrophysical evidence for the existence of dark matter in the universe, we have yet to positively identify it in any terrestrial experiment. If such matter is indeed particle in nature, it may have a new interaction as well, carried by a dark counterpart to the photon. The DarkLight experiment proposes to search for such a beyond-the-standard-model dark photon through complete reconstruction of the final states of electron-proton collisions. In order to accomplish this, the experiment requires a moderate-density target and a very high intensity, low energy electron beam. I describe DarkLight's approach and focus on the implications this has for the design of the experiment, which centers on the use of an internal gas target in Jefferson Lab's Low Energy Recirculating Facility. I also discuss upcoming beam tests, where we will place our target and solenoidal magnet in the beam for the first time.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, M; Kang, S; Lee, S
Purpose: Implant-supported dentures seem particularly appropriate for the predicament of becoming edentulous, and cancer patients are no exceptions. As the number of people having dental implants has increased across different ages, critical dosimetric verification of metal artifact effects is required for more accurate head and neck radiation therapy. The purpose of this study is to verify the theoretical analysis of the metal (streak and dark) artifacts and to evaluate the dosimetric effect caused by dental implants, using CT images of patients and a humanoid phantom with patient teeth and implants inserted. Methods: The phantom comprises a cylinder shaped to simulate the anatomical structures of a human head and neck. By applying various clinical cases, a phantom closely resembling a human was made. The developed phantom covers two configurations: (i) closed mouth and (ii) opened mouth. RapidArc plans of 4 cases were created in the Eclipse planning system. A total dose of 2000 cGy in 10 fractions is prescribed to the whole planning target volume (PTV) using 6 MV photon beams. The Acuros XB (AXB) advanced dose calculation algorithm, the Analytical Anisotropic Algorithm (AAA), and the progressive resolution optimizer were used in dose optimization and calculation. Results: In the closed- and opened-mouth phantoms, because dark artifacts formed extensively around the metal implants, dose variation was relatively higher than that of streak artifacts. When the PTV was delineated on the dark regions or large streak artifact regions, a maximum 7.8% dose error and an average 3.2% difference were observed. The averaged minimum dose to the PTV predicted by AAA was about 5.6% higher, and OAR doses were also 5.2% higher, compared to AXB. Conclusion: The results of this study showed that AXB dose calculation involving high-density materials is more accurate than AAA calculation, and AXB was superior to AAA in dose predictions beyond the dark artifact/air cavity regions when compared against the measurements.
Absorption of light dark matter in semiconductors
Hochberg, Yonit; Lin, Tongyan; Zurek, Kathryn M.
2017-01-01
Semiconductors are by now well-established targets for direct detection of MeV to GeV dark matter via scattering off electrons. We show that semiconductor targets can also detect significantly lighter dark matter via an absorption process. When the dark matter mass is above the band gap of the semiconductor (around an eV), absorption proceeds by excitation of an electron into the conduction band. Below the band gap, multiphonon excitations enable absorption of dark matter in the 0.01 eV to eV mass range. Energetic dark matter particles emitted from the sun can also be probed for masses below an eV. We derive the reach for absorption of a relic kinetically mixed dark photon or pseudoscalar in germanium and silicon, and show that existing direct detection results already probe new parameter space. Finally, with only a moderate exposure, low-threshold semiconductor target experiments can exceed current astrophysical and terrestrial constraints on sub-keV bosonic dark matter.
Autofocus algorithm using one-dimensional Fourier transform and Pearson correlation
NASA Astrophysics Data System (ADS)
Bueno Mario, A.; Alvarez-Borrego, Josue; Acho, L.
2004-10-01
A new autofocus algorithm based on the one-dimensional Fourier transform and Pearson correlation for a Z-automatized microscope is proposed. Our goal is to determine the best focused plane through an algorithm, with fast response time and accuracy. We capture, in bright and dark field, several image sets at different Z distances from a biological organism sample. The algorithm uses the one-dimensional Fourier transform to obtain the frequency content of a previously defined vector pattern in each image; by comparing the Pearson correlation of these frequency vectors against the frequency vector of the reference image (the most out-of-focus image), we find the best focus. Experimental results showed the algorithm has fast response time and accuracy in finding the best focus plane from the captured images. In conclusion, the algorithm can be implemented in real-time systems due to its fast response time, accuracy, and robustness. The algorithm can be used to get focused images in bright and dark field, and it can be extended to include fusion techniques to construct multifocus final images, which is beyond the scope of this paper.
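A minimal sketch of the stated criterion follows: compare the 1-D frequency content of each frame against that of the most out-of-focus (reference) frame via Pearson correlation. Treating the least-correlated frame as best focused is one plausible reading of the criterion, and using the central image row as the 1-D vector pattern is an assumption.

```python
# Autofocus sketch: 1-D spectra of each frame are Pearson-correlated against
# the spectrum of the most defocused (reference) frame; the least similar
# frame is taken as best focused. Central-row pattern is an assumption.
import numpy as np

def frequency_vector(image):
    row = image[image.shape[0] // 2, :].astype(float)   # 1-D pattern: central row
    return np.abs(np.fft.rfft(row))                      # one-dimensional spectrum

def best_focus_index(stack, reference_index=0):
    ref = frequency_vector(stack[reference_index])
    corr = [np.corrcoef(frequency_vector(frame), ref)[0, 1] for frame in stack]
    return int(np.argmin(corr))   # least similar to the defocused reference
```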
Crowded Cluster Cores. Algorithms for Deblending in Dark Energy Survey Images
Zhang, Yuanyuan; McKay, Timothy A.; Bertin, Emmanuel; ...
2015-10-26
Deep optical images are often crowded with overlapping objects. We found that this is especially true in the cores of galaxy clusters, where images of dozens of galaxies may lie atop one another. Accurate measurements of cluster properties require deblending algorithms designed to automatically extract a list of individual objects and decide what fraction of the light in each pixel comes from each object. In this article, we introduce a new software tool called the Gradient And Interpolation based (GAIN) deblender. GAIN is used as a secondary deblender to improve the separation of overlapping objects in galaxy cluster cores in Dark Energy Survey images. It uses image intensity gradients and an interpolation technique originally developed to correct flawed digital images. Our paper is dedicated to describing the algorithm of the GAIN deblender and its applications, but we additionally include modest tests of the software based on real Dark Energy Survey co-add images. GAIN helps to extract an unbiased photometry measurement for blended sources and improve detection completeness, while introducing few spurious detections. When applied to processed Dark Energy Survey data, GAIN serves as a useful quick fix when a high level of deblending is desired.
Large Oil Spill Classification Using SAR Images Based on Spatial Histogram
NASA Astrophysics Data System (ADS)
Schvartzman, I.; Havivi, S.; Maman, S.; Rotman, S. R.; Blumberg, D. G.
2016-06-01
Among the different types of marine pollution, oil spill is a major threat to sea ecosystems. Remote sensing is used in oil spill response. Synthetic Aperture Radar (SAR) is an active microwave sensor that operates under all weather conditions and provides information about surface roughness over large areas at high spatial resolution. SAR is widely used to identify and track pollutants in the sea, which may arise as a secondary effect of a large natural disaster or from a man-made one. The detection of oil spills in SAR imagery relies on the decrease of backscattering from the sea surface, due to the increased viscosity, resulting in a dark formation that contrasts with the brightness of the surrounding area. Most use of SAR images for oil spill detection is done by visual interpretation: trained interpreters scan the image and mark areas of low backscatter and asymmetrical shape. It is very difficult to apply this method over a wide area. In contrast to visual interpretation, automatic detection algorithms have been suggested, mainly based on scanning for dark formations, extracting features, and applying big-data analysis. We propose a new algorithm that applies a nonlinear spatial filter that detects dark formations and is not susceptible to noise, such as internal or speckle noise. The advantages of this algorithm lie both in run time and in the results retrieved. The algorithm was tested in simulations as well as on COSMO-SkyMed images, detecting the Deepwater Horizon oil spill in the Gulf of Mexico (which occurred on 20 April 2010). The simulation results show that even in a noisy environment the oil spill is detected. Applied to the Deepwater Horizon oil spill, the algorithm classified the spill better than an algorithm focusing on dark formations alone. Furthermore, the results were validated against National Oceanic and Atmospheric Administration (NOAA) data.
V2.1.4 L2AS Detailed Release Description September 27, 2001
Atmospheric Science Data Center
2013-03-14
... 27, 2001 Algorithm Changes Change method of selecting radiance pixels to use in aerosol retrieval over ... het. surface retrieval algorithm over areas of 100% dark water. Modify algorithm for selecting a default aerosol model to use in ...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lisanti, Mariangela; Mishra-Sharma, Siddharth; Rodd, Nicholas L.
Dark matter in the halos surrounding galaxy groups and clusters can annihilate to high-energy photons. Recent advancements in the construction of galaxy group catalogs provide many thousands of potential extragalactic targets for dark matter. In this paper, we outline a procedure to infer the dark matter signal associated with a given galaxy group. Applying this procedure to a catalog of sources, one can create a full-sky map of the brightest extragalactic dark matter targets in the nearby Universe (z≲0.03), supplementing sources of dark matter annihilation from within the local group. As with searches for dark matter in dwarf galaxies, these extragalactic targets can be stacked together to enhance the signals associated with dark matter. We validate this procedure on mock Fermi gamma-ray data sets using a galaxy catalog constructed from the DarkSky N-body cosmological simulation and demonstrate that the limits are robust, at O(1) levels, to systematic uncertainties on halo mass and concentration. We also quantify other sources of systematic uncertainty arising from the analysis and modeling assumptions. Lastly, our results suggest that a stacking analysis using galaxy group catalogs provides a powerful opportunity to discover extragalactic dark matter and complements existing studies of Milky Way dwarf galaxies.
NASA Astrophysics Data System (ADS)
Lisanti, Mariangela; Mishra-Sharma, Siddharth; Rodd, Nicholas L.; Safdi, Benjamin R.; Wechsler, Risa H.
2018-03-01
Dark matter in the halos surrounding galaxy groups and clusters can annihilate to high-energy photons. Recent advancements in the construction of galaxy group catalogs provide many thousands of potential extragalactic targets for dark matter. In this paper, we outline a procedure to infer the dark matter signal associated with a given galaxy group. Applying this procedure to a catalog of sources, one can create a full-sky map of the brightest extragalactic dark matter targets in the nearby Universe (z ≲ 0.03), supplementing sources of dark matter annihilation from within the local group. As with searches for dark matter in dwarf galaxies, these extragalactic targets can be stacked together to enhance the signals associated with dark matter. We validate this procedure on mock Fermi gamma-ray data sets using a galaxy catalog constructed from the DarkSky N-body cosmological simulation and demonstrate that the limits are robust, at O(1) levels, to systematic uncertainties on halo mass and concentration. We also quantify other sources of systematic uncertainty arising from the analysis and modeling assumptions. Our results suggest that a stacking analysis using galaxy group catalogs provides a powerful opportunity to discover extragalactic dark matter and complements existing studies of Milky Way dwarf galaxies.
Detecting superlight dark matter with Fermi-degenerate materials
Hochberg, Yonit; Pyle, Matt; Zhao, Yue; ...
2016-08-08
We examine in greater detail the recent proposal of using superconductors for detecting dark matter as light as the warm dark matter limit of O(keV). Detection of such light dark matter is possible if the entire kinetic energy of the dark matter is extracted in the scattering, and if the experiment is sensitive to O(meV) energy depositions. This is the case for Fermi-degenerate materials in which the Fermi velocity exceeds the dark matter velocity dispersion in the Milky Way of ~10⁻³. We focus on a concrete experimental proposal using a superconducting target with a transition edge sensor in order to detect the small energy deposits from the dark matter scatterings. Considering a wide variety of constraints, from dark matter self-interactions to the cosmic microwave background, we show that models consistent with cosmological/astrophysical and terrestrial constraints are observable with such detectors. A wider range of viable models with dark matter mass below an MeV is available if dark matter or mediator properties (such as couplings or masses) differ at the BBN epoch or in stellar interiors from those in superconductors. We also show that metal targets suffer a strong in-medium suppression for kinetically mixed mediators; this suppression is alleviated with insulating targets.
Lisanti, Mariangela; Mishra-Sharma, Siddharth; Rodd, Nicholas L.; ...
2018-03-09
Dark matter in the halos surrounding galaxy groups and clusters can annihilate to high-energy photons. Recent advancements in the construction of galaxy group catalogs provide many thousands of potential extragalactic targets for dark matter. In this paper, we outline a procedure to infer the dark matter signal associated with a given galaxy group. Applying this procedure to a catalog of sources, one can create a full-sky map of the brightest extragalactic dark matter targets in the nearby Universe (z≲0.03), supplementing sources of dark matter annihilation from within the local group. As with searches for dark matter in dwarf galaxies, these extragalactic targets can be stacked together to enhance the signals associated with dark matter. We validate this procedure on mock Fermi gamma-ray data sets using a galaxy catalog constructed from the DarkSky N-body cosmological simulation and demonstrate that the limits are robust, at O(1) levels, to systematic uncertainties on halo mass and concentration. We also quantify other sources of systematic uncertainty arising from the analysis and modeling assumptions. Lastly, our results suggest that a stacking analysis using galaxy group catalogs provides a powerful opportunity to discover extragalactic dark matter and complements existing studies of Milky Way dwarf galaxies.
A cascade method for TFT-LCD defect detection
NASA Astrophysics Data System (ADS)
Yi, Songsong; Wu, Xiaojun; Yu, Zhiyang; Mo, Zhuoya
2017-07-01
In this paper, we propose a novel cascade detection algorithm which focuses on point and line defects on TFT-LCD panels. In the first step of the algorithm, we use the gray-level difference of sub-images to segment the abnormal area. The second step is based on the phase-only transform (POT), which corresponds to the Discrete Fourier Transform (DFT) normalized by its magnitude; it can remove regularities such as texture and noise. After that, we improve the setting of regions of interest (ROI) using edge segmentation and polar transformation. The algorithm has outstanding performance in both computation speed and accuracy. It can handle most defect detection cases, including dark points, light points, dark lines, etc.
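The phase-only transform step can be sketched compactly: normalize the 2-D DFT by its magnitude and invert it, so that regular texture is suppressed and irregularities stand out. The smoothing and threshold below are illustrative assumptions, not the paper's tuned parameters.

```python
# Phase-only transform (POT) sketch: keep the DFT phase, drop the magnitude,
# invert, and threshold the response to flag candidate point/line defects.
import numpy as np
from scipy.ndimage import gaussian_filter

def phase_only_map(image, sigma=2.0):
    spectrum = np.fft.fft2(image.astype(float))
    phase_only = spectrum / np.maximum(np.abs(spectrum), 1e-12)  # unit-magnitude spectrum
    response = np.abs(np.fft.ifft2(phase_only))
    return gaussian_filter(response, sigma)

def defect_mask(image, k=4.0):
    r = phase_only_map(image)
    return r > r.mean() + k * r.std()   # simple statistical threshold on the response
```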
Purpose-Driven Communities in Multiplex Networks: Thresholding User-Engaged Layer Aggregation
2016-06-01
dark networks is a non-trivial yet useful task. Because terrorists work hard to hide their relationships/network, analysts have an incomplete picture...them identify meaningful terrorist communities. This thesis introduces a general-purpose algorithm for community detection in multiplex dark networks...aggregation, dark networks, conductance, cluster adequacy, modularity, Louvain method, shortest path interdiction
An adaptive enhancement algorithm for infrared video based on modified k-means clustering
NASA Astrophysics Data System (ADS)
Zhang, Linze; Wang, Jingqi; Wu, Wen
2016-09-01
In this paper, we propose a video enhancement algorithm to improve the output of infrared cameras. The video obtained by an infrared camera is sometimes very dark when there is no clear target. In this case, the infrared video is divided into frame images by frame extraction in order to carry out image enhancement. The first frame image is divided into k sub-images using K-means clustering according to the gray intervals they occupy, and each sub-image is histogram-equalized according to the amount of information it contains; a method is used to solve the problem that the final cluster centers can be close to each other in some cases. For the other frame images, the initial cluster centers are determined by the final cluster centers of the previous frame, and the histogram equalization of each sub-image is carried out after image segmentation based on K-means clustering. The histogram equalization maps the gray values of the image onto the whole gray range, and the gray range of each sub-image is determined by its ratio of pixels to the whole frame. Experimental results show that this algorithm can improve the contrast of infrared video in which a night target is not obvious, leading to a dim scene, and can adaptively reduce, within a certain range, the negative effect of overexposed pixels.
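A rough sketch of the per-cluster equalization scheme is given below; using scikit-learn's KMeans, warm-starting the next frame with the previous centers, and the exact span allocation are assumptions about one way to realize the described steps.

```python
# Sketch: cluster pixel intensities with k-means, equalize each cluster, and
# give each cluster an output span proportional to its pixel count. The next
# frame can be warm-started with the returned cluster centers.
import numpy as np
from sklearn.cluster import KMeans

def enhance_frame(frame, k=3, init_centers=None):
    values = frame.reshape(-1, 1).astype(float)
    km = KMeans(n_clusters=k, n_init=1,
                init=init_centers if init_centers is not None else "k-means++")
    labels = km.fit_predict(values)
    order = np.argsort(km.cluster_centers_.ravel())          # dark-to-bright cluster order
    out = np.zeros(values.size)
    start = 0.0
    for cluster in order:
        idx = labels == cluster
        span = 255.0 * idx.sum() / values.size               # output span proportional to pixel count
        ranks = np.argsort(np.argsort(values.ravel()[idx]))  # equalize within the cluster
        out[idx] = start + span * ranks / max(idx.sum() - 1, 1)
        start += span
    return out.reshape(frame.shape).astype(np.uint8), km.cluster_centers_
```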
MODIS Aerosol Optical Depth retrieval over land considering surface BRDF effects
NASA Astrophysics Data System (ADS)
Wu, Yerong; de Graaf, Martin; Menenti, Massimo
2016-04-01
Aerosols in the atmosphere play an important role in the climate system and human health. The retrieval of Aerosol Optical Depth (AOD), one of the most important aerosol optical properties, from satellite data has been extensively investigated. Benefiting from high spatial and temporal resolution and the maturity of the retrieval algorithm, the MOderate Resolution Imaging Spectroradiometer (MODIS) Dark Target AOD product has been extensively applied in other scientific research such as climate change and air pollution. The latest product, MODIS Collection 6 Dark Target AOD (C6_DT), has been released. However, the accuracy of C6_DT AOD over land (global mean ±0.03) is still too low for constraining radiative forcing in the climate system, where the uncertainty should be reduced to ±0.02. The major uncertainty lies in the underestimation/overestimation of the surface contribution to the Top Of Atmosphere (TOA) radiance, since a Lambertian surface is assumed in the C6_DT land algorithm. In the real world, the heterogeneity of surface reflection must be considered in the radiative transfer process. Based on this, we developed a new algorithm to retrieve AOD by considering surface Bidirectional Reflectance Distribution Function (BRDF) effects. The surface BRDF is much more complicated than isotropic reflection; it is described by four elements: directional-directional, directional-hemispherical, hemispherical-directional and hemispherical-hemispherical reflectance, which are coupled into the radiative transfer equation to generate an accurate top-of-atmosphere reflectance. The limited MODIS measurements (three channels available) allow us to retrieve only three parameters: AOD, the surface directional-directional reflectance, and the fine aerosol ratio η. The other three elements of the surface reflectance are expected to be constrained by ancillary data and assumptions or "a priori" information, since there are more unknowns than MODIS measurements in our algorithm. We validated three case studies with AErosol Robotic NETwork (AERONET) AOD, and the results show that the AOD retrieval is improved compared to C6_DT AOD, with the fraction of retrievals within the expected accuracy ±(0.05 + 15%) increasing by 2.7% to 7.5% for the best quality only (Quality Assurance = 3), and by 5.8% to 9.5% for marginal and better quality (Quality Assurance ≥ 1).
Fishing for Northern Pike in Minnesota: A comparison of anglers and dark house spearers
Schroeder, Susan A.; Fulton, David C.
2014-01-01
In order to project fishing effort and demand of individuals targeting Northern Pike Esox lucius in Minnesota, it is important to understand the catch orientations, management preferences, and site choice preferences of those individuals. Northern Pike are specifically targeted by about 35% of the approximately 1.5 million licensed anglers in Minnesota and by approximately 14,000–15,000 dark house spearers. Dark house spearing is a traditional method of harvesting fish through the ice in winter. Mail surveys were distributed to three research strata: anglers targeting Northern Pike, dark house spearing license holders spearing Northern Pike, and dark house spearing license holders angling for Northern Pike. Dark house spearers, whether spearing or angling, reported a stronger orientation toward keeping Northern Pike than did anglers. Anglers reported a stronger orientation toward catching large Northern Pike than did dark house spearers when spearing or angling. Northern Pike regulations were the most important attribute affecting site choice for respondents in all three strata. Models for all strata indicated a preference for lakes without protected slot limits. However, protected slot limits had a stronger negative influence on lake preference for dark house spearing licensees (whether spearing or angling) than for anglers.
Real-time single image dehazing based on dark channel prior theory and guided filtering
NASA Astrophysics Data System (ADS)
Zhang, Zan
2017-10-01
Images and videos taken outdoors on foggy days are seriously degraded. In order to restore degraded images taken in fog and to overcome the traditional dark channel prior algorithm's problem of residual fog at edges, we propose a new dehazing method. We first find the fog area in the dark primary color (dark channel) map using a quadtree to obtain the estimated value of the transmittance. Then we regard the guided-filtered gray-scale image as the atmospheric light map and remove haze based on it. Box processing and image down-sampling are also used to improve the processing speed. Finally, the atmospheric light scattering model is used to restore the image. Plenty of experiments show that the algorithm is effective, efficient, and has a wide range of applications.
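A minimal sketch of quadtree-based airlight estimation and scattering-model restoration in the spirit of the description above; the window sizes and the lower bound on transmission are illustrative assumptions.

```python
# Sketch: recursively keep the brightest quadrant of the dark-channel map to
# locate the airlight region, then invert the atmospheric scattering model.
import numpy as np

def quadtree_airlight(dark_channel, image, min_size=32):
    r0, r1, c0, c1 = 0, dark_channel.shape[0], 0, dark_channel.shape[1]
    while (r1 - r0) > min_size and (c1 - c0) > min_size:
        rm, cm = (r0 + r1) // 2, (c0 + c1) // 2
        quads = [(r0, rm, c0, cm), (r0, rm, cm, c1), (rm, r1, c0, cm), (rm, r1, cm, c1)]
        # Keep the quadrant with the highest mean dark-channel value (likely haze/sky).
        r0, r1, c0, c1 = max(quads, key=lambda q: dark_channel[q[0]:q[1], q[2]:q[3]].mean())
    return image[r0:r1, c0:c1].reshape(-1, image.shape[2]).mean(axis=0)

def restore(image, airlight, transmission, t_min=0.1):
    t = np.maximum(transmission, t_min)[..., None]
    # Invert the atmospheric scattering model I = J*t + A*(1 - t).
    return np.clip((image - airlight) / t + airlight, 0.0, 1.0)
```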
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hochberg, Yonit; Pyle, Matt; Zhao, Yue
We examine in greater detail the recent proposal of using superconductors for detecting dark matter as light as the warm dark matter limit of O(keV). Detection of such light dark matter is possible if the entire kinetic energy of the dark matter is extracted in the scattering, and if the experiment is sensitive to O(meV) energy depositions. This is the case for Fermi-degenerate materials in which the Fermi velocity exceeds the dark matter velocity dispersion in the Milky Way of ~10⁻³. We focus on a concrete experimental proposal using a superconducting target with a transition edge sensor in order to detect the small energy deposits from the dark matter scatterings. Considering a wide variety of constraints, from dark matter self-interactions to the cosmic microwave background, we show that models consistent with cosmological/astrophysical and terrestrial constraints are observable with such detectors. A wider range of viable models with dark matter mass below an MeV is available if dark matter or mediator properties (such as couplings or masses) differ at the BBN epoch or in stellar interiors from those in superconductors. We also show that metal targets suffer a strong in-medium suppression for kinetically mixed mediators; this suppression is alleviated with insulating targets.
FPGA implementation of image dehazing algorithm for real time applications
NASA Astrophysics Data System (ADS)
Kumar, Rahul; Kaushik, Brajesh Kumar; Balasubramanian, R.
2017-09-01
Weather degradation such as haze, fog, mist, etc. severely reduces the effective range of visual surveillance. This degradation is a spatially varying phenomenon, which makes the problem non-trivial. Dehazing is an essential preprocessing stage in applications such as long-range imaging, border security, intelligent transportation systems, etc. However, these applications require low latency of the preprocessing block. In this work, the single-image dark channel prior algorithm is modified and implemented for fast processing with comparable visual quality of the restored image/video. Although the conventional single-image dark channel prior algorithm is computationally expensive, it yields impressive results. Moreover, a two-stage image dehazing architecture is introduced, wherein the dark channel and airlight are estimated in the first stage, while the transmission map and intensity restoration are computed in the next stages. The algorithm is implemented using Xilinx Vivado software and validated using a Xilinx zc702 development board, which contains an Artix7-equivalent Field Programmable Gate Array (FPGA) and an ARM Cortex A9 dual-core processor. Additionally, a high-definition multimedia interface (HDMI) has been incorporated for video feed and display purposes. The results show that the dehazing algorithm attains 29 frames per second at an image resolution of 1920x1080, which is suitable for real-time applications. The design utilizes 9 18K_BRAM, 97 DSP_48, 6508 FFs and 8159 LUTs.
Portfolio Management Decision Support Tools Analysis Relating to Management Value Metrics
2007-03-01
creative activities that have been labeled “management dark matter” (Housel and Kanevsky, 2007). Further, this new source of data can be used, not...organization. “The idea of management dark matter is introduced in this literature as the use of manager’s creative insights when they attempt to...we account for the dark matter or intuitive (i.e., non-algorithmically definable) heuristics that allow a manager to make creative management
Searching for Dark Photons with the SeaQuest Spectrometer
NASA Astrophysics Data System (ADS)
Uemura, Sho; SeaQuest Collaboration
2017-09-01
The existence of a dark sector, containing families of particles that do not couple directly to the Standard Model, is motivated as a possible model for dark matter. A "dark photon" - a massive vector boson that couples weakly to electric charge - is a common component of dark sector models. The SeaQuest spectrometer at Fermilab is designed to detect dimuon pairs produced by the interaction of a 120 GeV proton beam with a rotating set of thin fixed targets. An iron-filled magnet downstream of the target, 5 meters in length, serves as a beam dump. The SeaQuest spectrometer is sensitive to dark photons that are mostly produced in the beam dump and decay to dimuons, and a SeaQuest search for dark sector particles was approved as Fermilab experiment E1067. As part of E1067, a displaced-vertex trigger was built, installed and commissioned this year. This trigger uses two planes of extruded scintillators to identify dimuons originating far downstream of the target, and is sensitive to dark photons that travel deep inside the beam dump before decaying to dimuons. This trigger will be used to take data parasitically with the primary SeaQuest physics program. In this talk I will present the displaced-vertex trigger and its performance, and projected sensitivity from future running.
Searching for Dark Matter Annihilation in the Smith High-Velocity Cloud
NASA Technical Reports Server (NTRS)
Drlica-Wagner, Alex; Gomez-Vargas, German A.; Hewitt, John W.; Linden, Tim; Tibaldo, Luigi
2014-01-01
Recent observations suggest that some high-velocity clouds may be confined by massive dark matter halos. In particular, the proximity and proposed dark matter content of the Smith Cloud make it a tempting target for the indirect detection of dark matter annihilation. We argue that the Smith Cloud may be a better target than some Milky Way dwarf spheroidal satellite galaxies and use gamma-ray observations from the Fermi Large Area Telescope to search for a dark matter annihilation signal. No significant gamma-ray excess is found coincident with the Smith Cloud, and we set strong limits on the dark matter annihilation cross section assuming a spatially extended dark matter profile consistent with dynamical modeling of the Smith Cloud. Notably, these limits exclude the canonical thermal relic cross section (approximately 3 × 10⁻²⁶ cm³ s⁻¹) for dark matter masses less than or approximately 30 GeV annihilating via the b b̄ or τ⁺τ⁻ channels for certain assumptions of the dark matter density profile; however, uncertainties in the dark matter content of the Smith Cloud may significantly weaken these constraints.
Searching For Dark Matter Annihilation In The Smith High-Velocity Cloud
Drlica-Wagner, Alex; Gómez-Vargas, Germán A.; Hewitt, John W.; ...
2014-06-27
Recent observations suggest that some high-velocity clouds may be confined by massive dark matter halos. In particular, the proximity and proposed dark matter content of the Smith Cloud make it a tempting target for the indirect detection of dark matter annihilation. We argue that the Smith Cloud may be a better target than some Milky Way dwarf spheroidal satellite galaxies and use γ-ray observations from the Fermi Large Area Telescope to search for a dark matter annihilation signal. No significant γ-ray excess is found coincident with the Smith Cloud, and we set strong limits on the dark matter annihilation cross section assuming a spatially extended dark matter profile consistent with dynamical modeling of the Smith Cloud. Notably, these limits exclude the canonical thermal relic cross section (~3 × 10⁻²⁶ cm³ s⁻¹) for dark matter masses ≲ 30 GeV annihilating via the b b̄ or τ⁺τ⁻ channels for certain assumptions of the dark matter density profile; however, uncertainties in the dark matter content of the Smith Cloud may significantly weaken these constraints.
NASA Astrophysics Data System (ADS)
Kaufman, Y. J.; Tanré, D.; Remer, L. A.; Vermote, E. F.; Chu, A.; Holben, B. N.
1997-07-01
Daily distribution of the aerosol optical thickness and columnar mass concentration will be derived over the continents, from the EOS moderate resolution imaging spectroradiometer (MODIS) using dark land targets. Dark land covers are mainly vegetated areas and dark soils observed in the red and blue channels; therefore the method will be limited to the moist parts of the continents (excluding water and ice cover). After the launch of MODIS the distribution of elevated aerosol concentrations, for example, biomass burning in the tropics or urban industrial aerosol in the midlatitudes, will be continuously monitored. The algorithm takes advantage of the MODIS wide spectral range and high spatial resolution and the strong spectral dependence of the aerosol opacity for most aerosol types that result in low optical thickness in the mid-IR (2.1 and 3.8 μm). The main steps of the algorithm are (1) identification of dark pixels in the mid-IR; (2) estimation of their reflectance at 0.47 and 0.66 μm; and (3) derivation of the optical thickness and mass concentration of the accumulation mode from the detected radiance. To differentiate between dust and aerosol dominated by accumulation mode particles, for example, smoke or sulfates, ratios of the aerosol path radiance at 0.47 and 0.66 μm are used. New dynamic aerosol models for biomass burning aerosol, dust and aerosol from industrial/urban origin, are used to determine the aerosol optical properties used in the algorithm. The error in the retrieved aerosol optical thicknesses, τa is expected to be Δτa = 0.05±0.2τa. Daily values are stored on a resolution of 10×10 pixels (1 km nadir resolution). Weighted and gridded 8-day and monthly composites of the optical thickness, the aerosol mass concentration and spectral radiative forcing are generated for selected scattering angles to increase the accuracy. The daily aerosol information over land and oceans [Tanré et al., this issue], combined with continuous aerosol remote sensing from the ground, will be used to study aerosol climatology, to monitor the sources and sinks of specific aerosol types, and to study the interaction of aerosol with water vapor and clouds and their radiative forcing of climate. The aerosol information will also be used for atmospheric corrections of remotely sensed surface reflectance. In this paper, examples of applications and validations are provided.
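The three retrieval steps listed above can be sketched schematically as follows; the 0.05-0.25 dark-pixel window and the 0.25/0.5 visible-to-2.1 μm surface reflectance ratios are commonly quoted approximations used here as assumptions, and `lut_toa_vs_aod` is a hypothetical lookup-table function for a single observation geometry.

```python
# Schematic dark-target retrieval: select dark pixels in the mid-IR, estimate
# the visible surface reflectance from the 2.1 um channel, and invert AOD by
# interpolating a precomputed TOA-reflectance-vs-AOD lookup table.
import numpy as np

def retrieve_aod_047(rho_toa_047, rho_21, lut_aod, lut_toa_vs_aod):
    """lut_aod: 1-D array of AOD nodes; lut_toa_vs_aod(aod, rho_surf) -> TOA reflectance."""
    if not (0.05 <= rho_21 <= 0.25):
        return None                     # not a dark land target
    rho_surf_047 = 0.25 * rho_21        # assumed VIS / 2.1 um surface relationship
    simulated = np.array([lut_toa_vs_aod(a, rho_surf_047) for a in lut_aod])
    return float(np.interp(rho_toa_047, simulated, lut_aod))  # assumes TOA increases with AOD
```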
Dark Matter "Collider" from Inelastic Boosted Dark Matter.
Kim, Doojin; Park, Jong-Chul; Shin, Seodong
2017-10-20
We propose a novel dark matter (DM) detection strategy for models with a nonminimal dark sector. The main ingredients in the underlying DM scenario are a boosted DM particle and a heavier dark sector state. The relativistic DM impinging on the target material scatters inelastically into the heavier state, which subsequently decays into DM along with lighter states including visible (standard model) particles. The expected signal event therefore includes a visible signature from the secondary cascade process together with the recoil of the target particle, differing from the typical neutrino signal, which does not involve the secondary signature. We then discuss various kinematic features followed by DM detection prospects at large-volume neutrino detectors with a model framework where a dark gauge boson is the mediator between the standard model particles and DM.
Effect of Age and Glaucoma on the Detection of Darks and Lights
Zhao, Linxi; Sendek, Caroline; Davoodnia, Vandad; Lashgari, Reza; Dul, Mitchell W.; Zaidi, Qasim; Alonso, Jose-Manuel
2015-01-01
Purpose: We have shown previously that normal observers detect dark targets faster and more accurately than light targets, when presented in noisy backgrounds. We investigated how these differences in detection time and accuracy are affected by age and ganglion cell pathology associated with glaucoma. Methods: We asked 21 glaucoma patients, 21 age-similar controls, and 5 young control observers to report as fast as possible the number of 1 to 3 light or dark targets. The targets were positioned at random in a binary noise background, within the central 30° of the visual field. Results: We replicate previous findings that darks are detected faster and more accurately than lights. We extend these findings by demonstrating that differences in detection of darks and lights are found reliably across different ages and in observers with glaucoma. We show that differences in detection time increase at a rate of approximately 55 msec/dB at early stages of glaucoma and then remain constant at later stages at approximately 800 msec. In normal subjects, differences in detection time increase with age at a rate of approximately 8 msec/y. We also demonstrate that the accuracy to detect lights and darks is significantly correlated with the severity of glaucoma and that the mean detection time is significantly longer for subjects with glaucoma than age-similar controls. Conclusions: We conclude that differences in detection of darks and lights can be demonstrated over a wide range of ages, and asymmetries in dark/light detection increase with age and early stages of glaucoma. PMID:26513506
Effect of Age and Glaucoma on the Detection of Darks and Lights.
Zhao, Linxi; Sendek, Caroline; Davoodnia, Vandad; Lashgari, Reza; Dul, Mitchell W; Zaidi, Qasim; Alonso, Jose-Manuel
2015-10-01
We have shown previously that normal observers detect dark targets faster and more accurately than light targets, when presented in noisy backgrounds. We investigated how these differences in detection time and accuracy are affected by age and ganglion cell pathology associated with glaucoma. We asked 21 glaucoma patients, 21 age-similar controls, and 5 young control observers to report as fast as possible the number of 1 to 3 light or dark targets. The targets were positioned at random in a binary noise background, within the central 30° of the visual field. We replicate previous findings that darks are detected faster and more accurately than lights. We extend these findings by demonstrating that differences in detection of darks and lights are found reliably across different ages and in observers with glaucoma. We show that differences in detection time increase at a rate of approximately 55 msec/dB at early stages of glaucoma and then remain constant at later stages at approximately 800 msec. In normal subjects, differences in detection time increase with age at a rate of approximately 8 msec/y. We also demonstrate that the accuracy to detect lights and darks is significantly correlated with the severity of glaucoma and that the mean detection time is significantly longer for subjects with glaucoma than age-similar controls. We conclude that differences in detection of darks and lights can be demonstrated over a wide range of ages, and asymmetries in dark/light detection increase with age and early stages of glaucoma.
Automated extraction of subdural electrode grid from post-implant MRI scans for epilepsy surgery
NASA Astrophysics Data System (ADS)
Pozdin, Maksym A.; Skrinjar, Oskar
2005-04-01
This paper presents an automated algorithm for extraction of the Subdural Electrode Grid (SEG) from post-implant MRI scans for epilepsy surgery. Post-implant MRI scans are corrupted by image artifacts caused by the implanted electrodes. The artifacts appear as dark spherical voids, and given that the cerebrospinal fluid is also dark in T1-weighted MRI scans, it is a difficult and time-consuming task to manually locate the SEG position relative to brain structures of interest. The proposed algorithm reliably and accurately extracts the SEG from a post-implant MRI scan, i.e., finds its shape and position relative to brain structures of interest. The algorithm was validated against manually determined electrode locations, and the average error was 1.6 mm for the three tested subjects.
Primakoff Prize Talk: The Search for Dark Sectors
NASA Astrophysics Data System (ADS)
Essig, Rouven
2015-04-01
Dark sectors, consisting of new, light, weakly-coupled particles that do not interact with the known strong, weak, or electromagnetic forces, are a particularly interesting possibility for new physics. Nature may contain numerous dark sectors, each with their own beautiful structure, distinct particles, and forces. Examples of dark sector particles include dark photons, axions, axion-like particles, and dark matter. In many cases, the exploration of dark sectors can proceed with existing facilities and comparatively modest experiments. This talk summarizes the physics motivation for dark sectors and the exciting opportunities for experimental exploration. Particular emphasis will be given to the search for dark photons, the mediators of a broken dark U(1) gauge theory that kinetically mixes with the Standard Model hypercharge, with masses in the MeV-to-GeV range. Experimental searches include low-energy e+e- colliders, new and old high-intensity fixed-target experiments, and high-energy colliders. The talk will highlight the APEX and HPS experiments at Jefferson Lab, which are pioneering, low-cost experiments to search for dark photons in fixed target electroproduction. Over the next few years, they have the potential for a transformative discovery.
Searching for a dark photon with DarkLight
Corliss, R.
2016-07-30
Here, we describe the current status of the DarkLight experiment at Jefferson Laboratory. DarkLight is motivated by the possibility that a dark photon in the mass range 10 to 100 MeV/c² could couple the dark sector to the Standard Model. DarkLight will precisely measure electron-proton scattering using the 100 MeV electron beam of intensity 5 mA at the Jefferson Laboratory energy recovering linac incident on a windowless gas target of molecular hydrogen. We will detect the complete final state including the scattered electron, recoil proton, and e⁺e⁻ pair. A phase-I experiment has been funded and is expected to take data in the next eighteen months. The complete phase-II experiment is under final design and could run within two years after phase-I is completed. The DarkLight experiment drives development of new technology for beam, target, and detector and provides a new means to carry out electron scattering experiments at low momentum transfers.
Combining Visible and Infrared Spectral Tests for Dust Identification
NASA Technical Reports Server (NTRS)
Zhou, Yaping; Levy, Robert; Kleidman, Richard; Remer, Lorraine; Mattoo, Shana
2016-01-01
The MODIS Dark Target aerosol algorithm over Ocean (DT-O) uses spectral reflectance in the visible, near-IR and SWIR wavelengths to determine aerosol optical depth (AOD) and Angstrom Exponent (AE). Even though DT-O does have "dust-like" models to choose from, dust is not identified a priori before inversion. The "dust-like" models are not true "dust models" as they are spherical and do not have enough absorption at short wavelengths, so retrieved AOD and AE for dusty regions tend to be biased. The inference of "dust" is based on postprocessing criteria for AOD and AE applied by users. Dust aerosol has known spectral signatures in the near-UV (deep blue), visible, and thermal infrared (TIR) wavelength regions. Multiple dust detection algorithms have been developed over the years with varying detection capabilities. Here, we test a few of these dust detection algorithms to determine whether they can be useful to help inform the choices made by the DT-O algorithm. We evaluate the following methods: the multichannel imager (MCI) algorithm uses spectral threshold tests in the (0.47, 0.64, 0.86, 1.38, 2.26, 3.9, 11.0, 12.0 micrometer) channels and a spatial uniformity test [Zhao et al., 2010]; the NOAA dust aerosol index (DAI) uses spectral contrast in the blue channels (412 nm and 440 nm) [Ciren and Kundragunta, 2014]. The MCI is already included as tests within the "Wisconsin" (MOD35) cloud mask algorithm.
Computation of dark frames in digital imagers
NASA Astrophysics Data System (ADS)
Widenhorn, Ralf; Rest, Armin; Blouke, Morley M.; Berry, Richard L.; Bodegom, Erik
2007-02-01
Dark current is caused by electrons that are thermally excited into the conduction band. These electrons are collected by the well of the CCD and add a false signal to the chip. We will present an algorithm that automatically corrects for dark current. It uses a calibration protocol to characterize the image sensor at different temperatures. For a given exposure time, the dark current of every pixel is characteristic of a specific temperature. The dark current of every pixel can therefore be used as an indicator of the temperature. Hot pixels have the highest signal-to-noise ratio and are the best temperature sensors. We use the dark current of several hundred hot pixels to sense the chip temperature and predict the dark current of all pixels on the chip. Dark current computation is not a new concept, but our approach is unique. Some advantages of our method include applicability for poorly temperature-controlled camera systems and the possibility of ex post facto dark current correction.
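A simple sketch of the hot-pixel thermometry idea: calibrate each pixel's dark signal at several known temperatures, infer the effective chip temperature from the hot pixels of the frame to correct, and interpolate the calibration to predict the full dark frame. The least-squares temperature search is an illustrative implementation choice, not the authors' exact procedure.

```python
# Hot-pixel thermometry sketch: infer the effective chip temperature from hot
# pixels, then interpolate the per-pixel calibration to predict the dark frame.
import numpy as np
from scipy.interpolate import interp1d

def predict_dark_frame(calib_temps, calib_frames, hot_idx, observed_hot, grid_pts=200):
    """
    calib_temps  : (T,) calibration temperatures
    calib_frames : (T, H, W) dark frames measured at those temperatures (same exposure)
    hot_idx      : (rows, cols) index arrays of the chosen hot pixels
    observed_hot : measured dark signal of those hot pixels in the frame to correct
    """
    hot_calib = calib_frames[:, hot_idx[0], hot_idx[1]]          # (T, n_hot)
    temps = np.linspace(calib_temps.min(), calib_temps.max(), grid_pts)
    curves = interp1d(calib_temps, hot_calib, axis=0)(temps)     # (grid_pts, n_hot)
    # Effective chip temperature: best least-squares match to the hot-pixel readings.
    best_t = temps[np.argmin(((curves - observed_hot) ** 2).sum(axis=1))]
    # Predicted dark frame for every pixel at the inferred temperature.
    return interp1d(calib_temps, calib_frames, axis=0)(best_t)
```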
Frequency-Modulated, Continuous-Wave Laser Ranging Using Photon-Counting Detectors
NASA Technical Reports Server (NTRS)
Erkmen, Baris I.; Barber, Zeb W.; Dahl, Jason
2014-01-01
Optical ranging is a problem of estimating the round-trip flight time of a phase- or amplitude-modulated optical beam that reflects off of a target. Frequency-modulated, continuous-wave (FMCW) ranging systems obtain this estimate by performing an interferometric measurement between a local frequency-modulated laser beam and a delayed copy returning from the target. The range estimate is formed by mixing the target-return field with the local reference field on a beamsplitter and detecting the resultant beat modulation. In conventional FMCW ranging, the source modulation is linear in instantaneous frequency, the reference-arm field has many more photons than the target-return field, and the time-of-flight estimate is generated by balanced difference-detection of the beamsplitter output, followed by a frequency-domain peak search. This work focused on determining the maximum-likelihood (ML) estimation algorithm when continuous-time photon-counting detectors are used. It is founded on a rigorous statistical characterization of the (random) photoelectron emission times as a function of the incident optical field, including the deleterious effects caused by dark current and dead time. These statistics enable derivation of the Cramér-Rao lower bound (CRB) on the accuracy of FMCW ranging, and derivation of the ML estimator, whose performance approaches this bound at high photon flux. The estimation algorithm was developed, and its optimality properties were shown in simulation. Experimental data show that it performs better than the conventional estimation algorithms used. The demonstrated improvement is a factor of 1.414 over frequency-domain-based estimation. If the target interrogating photons and the local reference field photons are costed equally, the optimal allocation of photons between these two arms is to have them equally distributed. This is different from the state of the art, in which the local field is stronger than the target return. The optimal processing of the photocurrent processes at the outputs of the two detectors is to perform log-matched filtering followed by a summation and peak detection. This implies that neither difference detection, nor Fourier-domain peak detection, which are the staples of the state-of-the-art systems, is optimal when a weak local oscillator is employed.
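The log-matched-filter recipe can be illustrated by treating each detector's photon arrivals as an inhomogeneous Poisson process whose rate depends on the candidate delay, evaluating the log-likelihood on a delay grid, summing the two detectors' contributions, and taking the peak. The sinusoidal beat-rate model and its parameters below are illustrative assumptions, not the paper's exact signal model.

```python
# Sketch of ML delay estimation for photon-counting FMCW ranging: Poisson
# log-likelihood (log-matched filter) summed over both beamsplitter outputs,
# maximized over a grid of candidate round-trip delays.
import numpy as np

def beat_rate(t, delay, chirp_rate, mean_rate, visibility, sign=+1):
    """Expected photon rate at one beamsplitter output for a linear FM chirp."""
    return mean_rate * (1.0 + sign * visibility * np.cos(2 * np.pi * chirp_rate * delay * t))

def ml_delay(arrivals_plus, arrivals_minus, delays, duration,
             chirp_rate=1e12, mean_rate=1e5, visibility=0.9):
    log_like = np.zeros_like(delays, dtype=float)
    for i, d in enumerate(delays):
        lam_p = beat_rate(arrivals_plus, d, chirp_rate, mean_rate, visibility, +1)
        lam_m = beat_rate(arrivals_minus, d, chirp_rate, mean_rate, visibility, -1)
        # Poisson log-likelihood: sum of log-rates at arrivals minus the integrated rate
        # (the oscillatory parts of the two outputs cancel in the integral).
        log_like[i] = np.log(lam_p).sum() + np.log(lam_m).sum() - 2 * mean_rate * duration
    return delays[np.argmax(log_like)]
```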
A New Target Object for Constraining Annihilating Dark Matter
NASA Astrophysics Data System (ADS)
Chan, Man Ho
2017-07-01
In the past decade, gamma-ray observations and radio observations of our Milky Way and the Milky Way dwarf spheroidal satellite galaxies put very strong constraints on annihilation cross sections of dark matter. In this paper, we suggest a new target object (NGC 2976) that can be used for constraining annihilating dark matter. The radio and X-ray data of NGC 2976 can put very tight constraints on the leptophilic channels of dark matter annihilation. The lower limits of dark matter mass annihilating via the e⁺e⁻, μ⁺μ⁻, and τ⁺τ⁻ channels are 200 GeV, 130 GeV, and 110 GeV, respectively, with the canonical thermal relic cross section. We suggest that this kind of large nearby dwarf galaxy with a relatively high magnetic field can be a good candidate for constraining annihilating dark matter in future analyses.
NASA Astrophysics Data System (ADS)
Rossi, B.; Agnes, P.; Alexander, T.; Alton, A.; Arisaka, K.; Back, H. O.; Baldin, B.; Biery, K.; Bonfini, G.; Bossa, M.; Brigatti, A.; Brodsky, J.; Budano, F.; Calaprice, F.; Canci, N.; Candela, A.; Cariello, M.; Cavalcante, P.; Catalanotti, S.; Chavarria, A.; Chepurnov, A.; Cocco, A. G.; Covone, G.; D'Angelo, D.; D'Incecco, M.; De Deo, M.; Derbin, A.; Devoto, A.; Di Eusanio, F.; Edkins, E.; Empl, A.; Fan, A.; Fiorillo, G.; Fomenko, K.; Franco, D.; Gabriele, F.; Galbiati, C.; Goretti, A.; Grandi, L.; Guan, M. Y.; Guardincerri, Y.; Hackett, B.; Herner, K.; Hungerford, E. V.; Ianni, Al.; Ianni, An.; Kendziora, C.; Koh, G.; Korablev, D.; Korga, G.; Kurlej, A.; Li, P. X.; Lombardi, P.; Luitz, S.; Machulin, I.; Mandarano, A.; Mari, S.; Maricic, J.; Marini, L.; Martoff, C. J.; Meyers, P. D.; Montanari, D.; Montuschi, M.; Monzani, M. E.; Musico, P.; Odrowski, S.; Orsini, M.; Ortica, F.; Pagani, L.; Pallavicini, M.; Pantic, E.; Papp, L.; Parmeggiano, S.; Pelliccia, N.; Perasso, S.; Pocar, A.; Pordes, S.; Qian, H.; Randle, K.; Ranucci, G.; Razeto, A.; Reinhold, B.; Renshaw, A.; Romani, A.; Rossi, N.; Rountree, S. D.; Sablone, D.; Saldanha, R.; Sands, W.; Segreto, E.; Shields, E.; Smirnov, O.; Sotnikov, A.; Stanford, C.; Suvorov, Y.; Tartaglia, R.; Tatarowicz, J.; Testera, G.; Tonazzo, A.; Unzhakov, E.; Vogelaar, R. B.; Wada, M.; Walker, S.; Wang, H.; Watson, A.; Westerdale, S.; Wojcik, M.; Xiang, X.; Xu, J.; Yang, C. G.; Yoo, J.; Zavatarelli, S.; Zec, A.; Zhu, C.; Zuzel, G.
2016-07-01
DarkSide-50 at Gran Sasso underground laboratory (LNGS), Italy, is a direct dark matter search experiment based on a liquid argon TPC. DS-50 has completed its first dark matter run using atmospheric argon as target. The detector performances and the results of the first physics run are presented in this proceeding.
Validation and Uncertainty Estimates for MODIS Collection 6 "Deep Blue" Aerosol Data
NASA Technical Reports Server (NTRS)
Sayer, A. M.; Hsu, N. C.; Bettenhausen, C.; Jeong, M.-J.
2013-01-01
The "Deep Blue" aerosol optical depth (AOD) retrieval algorithm was introduced in Collection 5 of the Moderate Resolution Imaging Spectroradiometer (MODIS) product suite, and complemented the existing "Dark Target" land and ocean algorithms by retrieving AOD over bright arid land surfaces, such as deserts. The forthcoming Collection 6 of MODIS products will include a "second generation" Deep Blue algorithm, expanding coverage to all cloud-free and snow-free land surfaces. The Deep Blue dataset will also provide an estimate of the absolute uncertainty on AOD at 550 nm for each retrieval. This study describes the validation of Deep Blue Collection 6 AOD at 550 nm (Tau(sub M)) from MODIS Aqua against Aerosol Robotic Network (AERONET) data from 60 sites to quantify these uncertainties. The highest quality (denoted quality assurance flag value 3) data are shown to have an absolute uncertainty of approximately (0.086+0.56Tau(sub M))/AMF, where AMF is the geometric air mass factor. For a typical AMF of 2.8, this is approximately 0.03+0.20Tau(sub M), comparable in quality to other satellite AOD datasets. Regional variability of retrieval performance and comparisons against Collection 5 results are also discussed.
NASA Technical Reports Server (NTRS)
Sherman, James P.; Gupta, Pawan; Levy, Robert C.; Sherman, Peter J.
2016-01-01
The literature shows that aerosol optical depth (AOD) derived from the MODIS Collection 5 (C5) dark target algorithm has been extensively validated by spatiotemporal collocation with AERONET sites on both global and regional scales. Although generally comparing well over the eastern US region, poor performance over mountains in other regions indicates the need to evaluate the MODIS product over a mountain site. This study compares MODIS C5 AOD at 550 nm to AOD measured at the Appalachian State University AERONET site in Boone, NC over 30 months between August 2010 and September 2013. For the combined Aqua and Terra datasets, although more than 70% of the 500 MODIS AOD measurements agree with collocated AERONET AOD to within the error envelope of ±(0.05 + 15%), MODIS tends to have a low bias (0.02-0.03). The agreement between MODIS and AERONET AOD does not depend on MODIS quality assurance confidence (QAC) value. However, when stratified by satellite, MODIS-Terra data does not perform as well as Aqua, with especially poor correlation (r = 0.39) for low aerosol loading conditions (AERONET AOD less than 0.15). Linear regressions between Terra and AERONET possess statistically-different slopes for AOD < 0.15 and AOD ≥ 0.15. AERONET AOD measured only during MODIS overpass hours is highly correlated with daily-averaged AERONET AOD. MODIS monthly-averaged AOD also tracks that of AERONET over the study period. These results indicate that MODIS is sensitive to the day-to-day variability, as well as the annual cycle of AOD over the Appalachian State AERONET site. The complex topography and high seasonality in AOD and vegetation indices allow us to specifically evaluate MODIS dark target algorithm surface albedo and aerosol model assumptions at a regionally-representative SE US mountain site.
Unattended real-time re-establishment of visibility in high dynamic range video and stills
NASA Astrophysics Data System (ADS)
Abidi, B.
2014-05-01
We describe a portable unattended persistent surveillance system that corrects for harsh illumination conditions, where bright sunlight creates mixed contrast effects, i.e., heavy shadows and washouts. These effects result in high dynamic range scenes, where illuminance can vary from a few lux to a six-figure value. When using regular monitors and cameras, such a wide span of illuminations can only be visualized if the actual range of values is compressed, leading to the creation of saturated and/or dark noisy areas and a loss of information in these areas. Images containing extreme mixed contrast cannot be fully enhanced from a single exposure, simply because all information is not present in the original data. Active intervention in the acquisition process is required. A software package, capable of integrating multiple types of COTS and custom cameras, ranging from Unmanned Aerial Systems (UAS) data links to digital single-lens reflex cameras (DSLR), is described. Hardware and software are integrated via a novel smart data acquisition algorithm, which communicates to the camera the parameters that would maximize information content in the final processed scene. A fusion mechanism is then applied to the smartly acquired data, resulting in an enhanced scene where information in both dark and bright areas is revealed. Multi-threading and parallel processing are exploited to produce automatic real-time, full-motion corrected video. A novel enhancement algorithm was also devised to process data from legacy and non-controllable cameras. The software accepts and processes pre-recorded sequences and stills, enhances visible, night vision, and infrared data, and successfully applies to nighttime and dark scenes. Various user options are available, integrating custom functionalities of the application into intuitive and easy-to-use graphical interfaces. The ensuing increase in visibility in surveillance video and intelligence imagery will expand the performance and timely decision making of the human analyst, as well as that of unmanned systems performing automatic data exploitation, such as target detection and identification.
Content-aware dark image enhancement through channel division.
Rivera, Adin Ramirez; Ryu, Byungyong; Chae, Oksam
2012-09-01
Current contrast enhancement algorithms occasionally result in artifacts, overenhancement, and unnatural effects in the processed images. These drawbacks increase for images taken under poor illumination conditions. In this paper, we propose a content-aware algorithm that enhances dark images, sharpens edges, reveals details in textured regions, and preserves the smoothness of flat regions. The algorithm produces an ad hoc transformation for each image, adapting the mapping functions to each image's characteristics to produce the maximum enhancement. We analyze the contrast of the image in the boundary and textured regions, and group the information with common characteristics. These groups model the relations within the image, from which we extract the transformation functions. The results are then adaptively mixed, by considering the human vision system characteristics, to boost the details in the image. Results show that the algorithm can automatically process a wide range of images (e.g., mixed shadow and bright areas, outdoor and indoor lighting, and face images) without introducing artifacts, which is an improvement over many existing methods.
Direct detection of sub-GeV dark matter with semiconductor targets
Essig, Rouven; Fernández-Serra, Marivi; Mardon, Jeremy; ...
2016-05-09
Dark matter in the sub-GeV mass range is a theoretically motivated but largely unexplored paradigm. Such light masses are out of reach for conventional nuclear recoil direct detection experiments, but may be detected through the small ionization signals caused by dark matter-electron scattering. Semiconductors are well-studied and particularly promising target materials because their O(1 eV) band gaps allow for ionization signals from dark matter particles as light as a few hundred keV. Current direct detection technologies are being adapted for dark matter-electron scattering. In this paper, we provide the theoretical calculations for the dark matter-electron scattering rate in semiconductors, overcoming several complications that stem from the many-body nature of the problem. We use density functional theory to numerically calculate the rates for dark matter-electron scattering in silicon and germanium, and estimate the sensitivity for upcoming experiments such as DAMIC and SuperCDMS. We find that the reach of these upcoming experiments has the potential to be orders of magnitude beyond current direct detection constraints and that sub-GeV dark matter has a sizable modulation signal. We also give the first direct detection limits on sub-GeV dark matter from its scattering off electrons in a semiconductor target (silicon), based on published results from DAMIC. We make our code, QEdark, publicly available; with it we calculate our results, and experimental collaborations can use it to compute their own sensitivities based on their specific setups. In conclusion, the searches we propose will probe vast new regions of unexplored dark matter model and parameter space.
Target-projectile interaction during impact melting at Kamil Crater, Egypt
NASA Astrophysics Data System (ADS)
Fazio, Agnese; D'Orazio, Massimo; Cordier, Carole; Folco, Luigi
2016-05-01
In small meteorite impacts, the projectile may survive through fragmentation; in addition, it may melt and chemically and physically interact with both shocked and melted target rocks. However, the mixing/mingling between projectile and target melts is a process still not completely understood. Kamil Crater (45 m in diameter; Egypt), generated by the hypervelocity impact of the Gebel Kamil Ni-rich ataxite on a sandstone target, allows the study of target-projectile interaction in a simple and fresh geological setting. We conducted a petrographic and geochemical study of macroscopic impact melt lapilli and bombs ejected from the crater, which were collected during our geophysical campaign in February 2010. Two types of glasses constitute the impact melt lapilli and bombs: a white glass and a dark glass. The white glass is mostly made of SiO2 and is devoid of inclusions. Its negligible Ni and Co contents suggest derivation from the target rocks without interaction with the projectile (<0.1 wt% of projectile contamination). The dark glass is a silicate melt with variable contents of Al2O3 (0.84-18.7 wt%), FeOT (1.83-61.5 wt%), and NiO (<0.01-10.2 wt%). The dark glass typically includes fragments (from a few μm to several mm in size) of shocked sandstone, diaplectic glass, lechatelierite, and Ni-Fe metal blebs. The metal blebs are enriched in Ni compared to the Gebel Kamil meteorite. The dark glass is thus a mixture of target and projectile melts (11-12 wt% of projectile contamination). Based on recently proposed models for target-projectile interaction and for impact glass formation, we suggest a scenario for glass formation at Kamil. During the transition from the contact and compression stage to the excavation stage, projectile and target liquids formed at their interface and chemically interacted in a restricted zone. Projectile contamination affected only a shallow portion of the target rocks. The SiO2 melt that eventually solidified as white glass behaved as an immiscible liquid and did not interact with the projectile. During the excavation stage, the dark glass melt engulfed and coated the white glass melt and target fragments, and adhered to iron meteorite shrapnel fragments. This model could also explain the common formation of white and dark glasses in small impact craters generated by iron bodies (e.g., Wabar).
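The projectile-contamination estimates quoted above rest on simple end-member mixing of a siderophile tracer such as Ni. A minimal sketch of such a mass balance is given below; the concentrations are illustrative placeholders, not the paper's measured values.

```python
# Minimal two-component mixing sketch of the kind used to estimate projectile
# contamination from a siderophile tracer (here Ni). Concentrations are
# illustrative placeholders, not the measured values from the paper.
def projectile_fraction(c_glass, c_target, c_projectile):
    """Mass fraction of projectile melt needed to explain the tracer content
    of the glass, assuming simple linear mixing of two end-members."""
    return (c_glass - c_target) / (c_projectile - c_target)

ni_glass, ni_target, ni_projectile = 2.3, 0.01, 20.0   # wt% Ni, hypothetical values
frac = projectile_fraction(ni_glass, ni_target, ni_projectile)
print(f"projectile contamination ~ {100 * frac:.1f} wt%")
```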
NASA Astrophysics Data System (ADS)
Davini, S.; Agnes, P.; Alexander, T.; Alton, A.; Arisaka, K.; Back, H. O.; Baldin, B.; Biery, K.; Bonfini, G.; Bossa, M.; Brigatti, A.; Brodsky, J.; Budano, F.; Calaprice, F.; Canci, N.; Candela, A.; Cariello, M.; Cavalcante, P.; Chavarria, A.; Chepurnov, A.; Cocco, A. G.; D'Angelo, D.; D'Incecco, M.; De Deo, M.; Derbin, A.; Devoto, A.; Di Eusanio, F.; Edkins, E.; Empl, A.; Fan, A.; Fiorillo, G.; Fomenko, K.; Franco, D.; Gabriele, F.; Galbiati, C.; Goretti, A.; Grandi, L.; Guan, M. Y.; Guardincerri, Y.; Hackett, B.; Herner, K.; Hungerford, E. V.; Ianni, Al.; Ianni, An.; Kendziora, C.; Koh, G.; Korablev, D.; Korga, G.; Kurlej, A.; Li, P. X.; Lombardi, P.; Luitz, S.; Machulin, I.; Mandarano, A.; Mari, S.; Maricic, J.; Marini, L.; Martoff, C. J.; Meyers, P. D.; Montanari, D.; Montuschi, M.; Monzani, M. E.; Musico, P.; Odrowski, S.; Orsini, M.; Ortica, F.; Pagani, L.; Pantic, E.; Papp, L.; Parmeggiano, S.; Pelliccia, N.; Perasso, S.; Pocar, A.; Pordes, S.; Qian, H.; Randle, K.; Ranucci, G.; Razeto, A.; Reinhold, B.; Renshaw, A.; Romani, A.; Rossi, B.; Rossi, N.; Rountree, S. D.; Sablone, D.; Saldanha, R.; Sands, W.; Segreto, E.; Shields, E.; Smirnov, O.; Sotnikov, A.; Stanford, C.; Suvorov, Y.; Tatarowicz, J.; Testera, G.; Tonazzo, A.; Unzhakov, E.; Vogelaar, R. B.; Wada, M.; Walker, S.; Wang, H.; Watson, A.; Westerdale, S.; Wojcik, M.; Xiang, X.; Xu, J.; Yang, C. G.; Yoo, J.; Zavatarelli, S.; Zec, A.; Zhu, C.; Zuzel, G.
2016-04-01
DarkSide-50 (DS-50), at the Gran Sasso underground laboratory (LNGS), Italy, is a direct dark matter search experiment based on a TPC with liquid argon. DS-50 has completed its first dark matter run using atmospheric argon as the target. The DS-50 detector performance and the results of the first physics run are reviewed in this proceeding.
Davini, S.; Agnes, P.; Alexander, T.; ...
2016-05-31
DarkSide-50 (DS-50), at the Gran Sasso underground laboratory (LNGS), Italy, is a direct dark matter search experiment based on a TPC with liquid argon. DS-50 has completed its first dark matter run using atmospheric argon as the target. Here, the DS-50 detector performance and the results of the first physics run are reviewed.
The DESI Experiment Part I: Science, Targeting, and Survey Design
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aghamousa, Amir; et al.
DESI (Dark Energy Spectroscopic Instrument) is a Stage IV ground-based dark energy experiment that will study baryon acoustic oscillations (BAO) and the growth of structure through redshift-space distortions with a wide-area galaxy and quasar redshift survey. To trace the underlying dark matter distribution, spectroscopic targets will be selected in four classes from imaging data. We will measure luminous red galaxies up to z = 1.0. To probe the Universe out to even higher redshift, DESI will target bright [O II] emission line galaxies up to z = 1.7. Quasars will be targeted both as direct tracers of the underlying dark matter distribution and, at higher redshifts (2.1 < z < 3.5), for the Lyman-alpha forest absorption features in their spectra, which will be used to trace the distribution of neutral hydrogen. When moonlight prevents efficient observations of the faint targets of the baseline survey, DESI will conduct a magnitude-limited Bright Galaxy Survey comprising approximately 10 million galaxies with a median z ≈ 0.2. In total, more than 30 million galaxy and quasar redshifts will be obtained to measure the BAO feature and determine the matter power spectrum, including redshift space distortions.
Observing a light dark matter beam with neutrino experiments
NASA Astrophysics Data System (ADS)
Deniverville, Patrick; Pospelov, Maxim; Ritz, Adam
2011-10-01
We consider the sensitivity of fixed-target neutrino experiments at the luminosity frontier to light stable states, such as those present in models of MeV-scale dark matter. To ensure the correct thermal relic abundance, such states must annihilate via light mediators, which in turn provide an access portal for direct production in colliders or fixed targets. Indeed, this framework endows the neutrino beams produced at fixed-target facilities with a companion "dark matter beam," which may be detected via an excess of elastic scattering events off electrons or nuclei in the (near-)detector. We study the high-luminosity proton fixed-target experiments at LSND and MiniBooNE, and determine that the ensuing sensitivity to light dark matter generally surpasses that of other direct probes. For scenarios with a kinetically mixed U(1)' vector mediator of mass m_V, we find that a large volume of parameter space is excluded for m_DM ~ 1-5 MeV, covering vector masses 2m_DM ≲ m_V ≲ m_η and a range of kinetic mixing parameters reaching as low as κ ~ 10^-5. The corresponding MeV-scale dark matter scenarios motivated by an explanation of the galactic 511 keV line are thus strongly constrained.
NASA Astrophysics Data System (ADS)
Lovejoy, McKenna R.; Wickert, Mark A.
2017-05-01
A known problem with infrared imaging devices is their non-uniformity. This non-uniformity is the result of dark current and amplifier mismatch, as well as the individual photo response of the detectors. To improve performance, non-uniformity correction (NUC) techniques are applied. Standard calibration techniques use linear or piecewise linear models to approximate the non-uniform gain and offset characteristics as well as the nonlinear response. Piecewise linear models perform better than the one- and two-point models, but in many cases require storing an unmanageable number of correction coefficients. Most nonlinear NUC algorithms use a second-order polynomial to improve performance and allow for a minimal number of stored coefficients. However, advances in technology now make higher-order polynomial NUC algorithms feasible. This study comprehensively tests higher-order polynomial NUC algorithms targeted at short wave infrared (SWIR) imagers. Using data collected from actual SWIR cameras, the nonlinear techniques and corresponding performance metrics are compared with current linear methods, including the standard one- and two-point algorithms. Machine learning, including principal component analysis, is explored for identifying and replacing bad pixels. The data sets are analyzed and the impact of hardware implementation is discussed. Average floating-point results show 30% less non-uniformity in post-corrected data when using a third-order polynomial correction algorithm rather than a second-order algorithm. To maximize overall performance, a trade-off analysis on polynomial order and coefficient precision is performed. Comprehensive testing across multiple data sets provides next-generation model validation and performance benchmarks for higher-order polynomial NUC methods.
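A hedged sketch of what a per-pixel higher-order polynomial NUC might look like is given below: it fits a third-order polynomial for each pixel from synthetic calibration frames and applies it by Horner evaluation. The data, polynomial order, and fitting choices are assumptions, not the study's actual procedure.

```python
# Hedged sketch of a per-pixel third-order polynomial non-uniformity correction:
# for each pixel, fit raw response at several calibration illumination levels
# against the frame-mean ("ideal") response, then apply the fitted polynomial.
# Synthetic data stand in for real SWIR calibration frames.
import numpy as np

rng = np.random.default_rng(1)
levels, h, w = 8, 16, 16
ideal = np.linspace(0.1, 0.9, levels)                       # target response per level
gain = 1 + 0.1 * rng.standard_normal((h, w))                # per-pixel gain mismatch
offset = 0.05 * rng.standard_normal((h, w))
raw = ideal[:, None, None] * gain + offset + 0.02 * ideal[:, None, None] ** 2

order = 3
coeffs = np.empty((order + 1, h, w))
for i in range(h):
    for j in range(w):
        coeffs[:, i, j] = np.polyfit(raw[:, i, j], ideal, order)

def correct(frame, coeffs):
    """Evaluate the stored per-pixel polynomial on a raw frame (Horner scheme)."""
    out = np.zeros_like(frame)
    for c in coeffs:                     # highest-order coefficient first
        out = out * frame + c
    return out

print(np.abs(correct(raw[3], coeffs) - ideal[3]).max())     # residual non-uniformity
```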
The DarkSide direct dark matter search with liquid argon
NASA Astrophysics Data System (ADS)
Edkins, E.; Agnes, P.; Alexander, T.; Alton, A.; Arisaka, K.; Back, H. O.; Baldin, B.; Biery, K.; Bonfini, G.; Bossa, M.; Brigatti, A.; Brodsky, J.; Budano, F.; Cadonati, L.; Calaprice, F.; Canci, N.; Candela, A.; Cao, H.; Cariello, M.; Cavalcante, P.; Chavarria, A.; Chepurnov, A.; Cocco, A. G.; Crippa, L.; D'Angelo, D.; D'Incecco, M.; Davini, S.; De Deo, M.; Derbin, A.; Devoto, A.; Di Eusanio, F.; Di Pietro, G.; Empl, A.; Fan, A.; Fiorillo, G.; Fomenko, K.; Forster, G.; Franco, D.; Gabriele, F.; Galbiati, C.; Goretti, A.; Grandi, L.; Gromov, M.; Guan, M. Y.; Guardincerri, Y.; Hackett, B.; Herner, K.; Humble, P.; Hungerford, E. V.; Ianni, Al.; Ianni, An.; Jollet, C.; Keeter, K.; Kendziora, C.; Kobychev, V.; Koh, G.; Korablev, D.; Korga, G.; Kurlej, A.; Li, P. X.; Loer, B.; Lombardi, P.; Love, C.; Ludhova, L.; Luitz, S.; Ma, Y. Q.; Machulin, I.; Mandarano, A.; Mari, S.; Maricic, J.; Marini, L.; Martoff, C. J.; Meregaglia, A.; Meroni, E.; Meyers, P. D.; Milincic, R.; Montanari, D.; Montuschi, M.; Monzani, M. E.; Mosteiro, P.; Mount, B.; Muratova, V.; Musico, P.; Nelson, A.; Odrowski, S.; Okounkova, M.; Orsini, M.; Ortica, F.; Pagani, L.; Pallavicini, M.; Pantic, E.; Papp, L.; Parmeggiano, S.; Parsells, R.; Pelczar, K.; Pelliccia, N.; Perasso, S.; Pocar, A.; Pordes, S.; Pugachev, D.; Qian, H.; Randle, K.; Ranucci, G.; Razeto, A.; Reinhold, B.; Renshaw, A.; Romani, A.; Rossi, B.; Rossi, N.; Rountree, S. D.; Sablone, D.; Saggese, P.; Saldanha, R.; Sands, W.; Sangiorgio, S.; Segreto, E.; Semenov, D.; Shields, E.; Skorokhvatov, M.; Smirnov, O.; Sotnikov, A.; Stanford, C.; Suvorov, Y.; Tartaglia, R.; Tatarowicz, J.; Testera, G.; Tonazzo, A.; Unzhakov, E.; Vogelaar, R. B.; Wada, M.; Walker, S.; Wang, H.; Wang, Y.; Watson, A.; Westerdale, S.; Wojcik, M.; Wright, A.; Xiang, X.; Xu, J.; Yang, C. G.; Yoo, J.; Zavatarelli, S.; Zec, A.; Zhu, C.; Zuzel, G.
2017-11-01
The DarkSide-50 direct dark matter detector is a liquid argon time projection chamber (TPC) surrounded by a liquid scintillator neutron veto (LSV) and a water Cerenkov muon veto (WCV). Located under 3800 m.w.e. at the Laboratori Nazionali del Gran Sasso, Italy, it is the only direct dark matter experiment currently operating background free. The atmospheric argon target was replaced with argon from underground sources in April 2015. The level of 39Ar, a β emitter present in atmospheric argon (AAr), has been shown to have been reduced by a factor of (1.4 ± 0.2) × 10^3. The combined spin-independent WIMP exclusion limit of 2.0 × 10^-44 cm^2 (mχ = 100 GeV/c^2) is currently the best limit on a liquid argon target.
ERIC Educational Resources Information Center
Cohen, David B.
1978-01-01
Informal observation suggested that dark-haired/light eyed females (target group) might have a liability to psychopathology. Questionnaire data obtained from eight large undergraduate classes during a four year period (1974-77) yielded consistently higher percentages of target group individuals reporting hospitalization of first-degree relatives…
Image defog algorithm based on open close filter and gradient domain recursive bilateral filter
NASA Astrophysics Data System (ADS)
Liu, Daqian; Liu, Wanjun; Zhao, Qingguo; Fei, Bowen
2017-11-01
To address the fuzzy details, color distortion, and low brightness of images produced by the dark channel prior defog algorithm, an image defog algorithm based on an open-close filter and a gradient domain recursive bilateral filter, referred to as OCRBF, is put forward. The algorithm first uses a weighted quadtree to obtain a more accurate global atmospheric value, then applies multiple-structure-element morphological open and close filters to the minimum channel map to obtain a rough scattering map via the dark channel prior, uses a variogram to correct the transmittance map, and applies the gradient domain recursive bilateral filter for smoothing. Finally, it recovers the image through the image degradation model and adjusts contrast to obtain a bright, clear, fog-free image. A large number of experimental results show that the proposed defog method removes fog well and recovers color and definition for foggy images containing close-range content, strong perspective, and bright areas. Compared with other image defog algorithms, it obtains clearer and more natural fog-free images with more visible detail; moreover, the time complexity of the SIDA algorithm scales linearly with the number of image pixels.
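For readers unfamiliar with the baseline method this abstract builds on, the sketch below implements the standard dark channel prior dehazing steps (dark channel, atmospheric light, transmission, radiance recovery). It deliberately omits the paper's morphological open-close filtering and gradient-domain recursive bilateral filtering, and the patch size and parameters are conventional assumptions, not the paper's values.

```python
# A compact sketch of the standard dark channel prior dehazing steps that the
# abstract builds on (dark channel, atmospheric light, transmission, recovery).
# It omits the paper's morphological filtering and gradient-domain refinement.
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, patch=15):
    """Per-pixel minimum over color channels and a local patch."""
    return minimum_filter(img.min(axis=2), size=patch)

def dehaze(img, patch=15, omega=0.95, t0=0.1):
    dark = dark_channel(img, patch)
    # Atmospheric light: mean color of the brightest 0.1% dark-channel pixels.
    n = max(1, int(dark.size * 0.001))
    idx = np.unravel_index(np.argsort(dark, axis=None)[-n:], dark.shape)
    A = img[idx].mean(axis=0)
    # Transmission estimate and scene radiance recovery.
    t = 1 - omega * dark_channel(img / A, patch)
    return (img - A) / np.maximum(t, t0)[..., None] + A

hazy = np.random.default_rng(2).random((64, 64, 3)) * 0.5 + 0.4   # synthetic hazy image
print(dehaze(hazy).shape)
```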
WMD Intent Identification and Interaction Analysis Using the Dark Web
2016-04-01
WMD Intent Identification and Interaction Analysis Using the Dark Web. Distribution Statement A: Approved for public release; distribution is... Organization/Institution: University of Arizona. Project Title: WMD Intent Identification and Interaction Analysis Using the Dark Web. Report Period: Final. ...and social media analytics. We are leveraging our highly successful Dark Web project as our research testbed (for identifying target adversarial...
New approach to the retrieval of AOD and its uncertainty from MISR observations over dark water
NASA Astrophysics Data System (ADS)
Witek, Marcin L.; Garay, Michael J.; Diner, David J.; Bull, Michael A.; Seidel, Felix C.
2018-01-01
A new method for retrieving aerosol optical depth (AOD) and its uncertainty from Multi-angle Imaging SpectroRadiometer (MISR) observations over dark water is outlined. MISR's aerosol retrieval algorithm calculates cost functions between observed and pre-simulated radiances for a range of AODs (from 0.0 to 3.0) and a prescribed set of aerosol mixtures. The previous version 22 (V22) operational algorithm considered only the AOD that minimized the cost function for each aerosol mixture and then used a combination of these values to compute the final, best-estimate AOD and associated uncertainty. The new approach considers the entire range of cost functions associated with each aerosol mixture. The uncertainty of the reported AOD depends on a combination of (a) the absolute values of the cost functions for each aerosol mixture, (b) the widths of the cost function distributions as a function of AOD, and (c) the spread of the cost function distributions among the ensemble of mixtures. A key benefit of the new approach is that, unlike the V22 algorithm, it does not rely on empirical thresholds imposed on the cost function to determine the success or failure of a particular mixture. Furthermore, a new aerosol retrieval confidence index (ARCI) is established that can be used to screen high-AOD retrieval blunders caused by cloud contamination or other factors. Requiring ARCI ≥ 0.15 as a condition for retrieval success is supported through statistical analysis and outperforms the thresholds used in the V22 algorithm. The described changes to the MISR dark water algorithm will become operational in the new MISR aerosol product (V23), planned for release in 2017.
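The following is only a schematic illustration of the idea described above, not the operational MISR V23 code: given per-mixture cost functions over the AOD grid, it converts them to weights and derives a best-estimate AOD and an uncertainty from the spread of those weights. The synthetic cost functions and the Gaussian-style weighting are assumptions.

```python
# Schematic of turning per-mixture cost functions over the AOD grid into a
# best-estimate AOD plus an uncertainty derived from the weight distribution.
import numpy as np

aod_grid = np.linspace(0.0, 3.0, 61)                 # AOD values evaluated by the retrieval
rng = np.random.default_rng(3)
true_aod = 0.4
# Hypothetical chi-square cost functions for a handful of aerosol mixtures:
costs = np.stack([(aod_grid - true_aod - b) ** 2 / 0.02 + rng.uniform(0, 2)
                  for b in rng.normal(0, 0.05, size=8)])

weights = np.exp(-0.5 * costs)                       # low cost -> high weight
weights /= weights.sum()
best_aod = (weights * aod_grid).sum()                # weighted over mixtures and AODs
uncertainty = np.sqrt((weights * (aod_grid - best_aod) ** 2).sum())
print(f"AOD = {best_aod:.3f} +/- {uncertainty:.3f}")
```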
New Approach to the Retrieval of AOD and its Uncertainty from MISR Observations Over Dark Water
NASA Astrophysics Data System (ADS)
Witek, M. L.; Garay, M. J.; Diner, D. J.; Bull, M. A.; Seidel, F.
2017-12-01
A new method for retrieving aerosol optical depth (AOD) and its uncertainty from Multi-angle Imaging SpectroRadiometer (MISR) observations over dark water is outlined. MISR's aerosol retrieval algorithm calculates cost functions between observed and pre-simulated radiances for a range of AODs (from 0.0 to 3.0) and a prescribed set of aerosol mixtures. The previous Version 22 (V22) operational algorithm considered only the AOD that minimized the cost function for each aerosol mixture, then used a combination of these values to compute the final, "best estimate" AOD and associated uncertainty. The new approach considers the entire range of cost functions associated with each aerosol mixture. The uncertainty of the reported AOD depends on a combination of a) the absolute values of the cost functions for each aerosol mixture, b) the widths of the cost function distributions as a function of AOD, and c) the spread of the cost function distributions among the ensemble of mixtures. A key benefit of the new approach is that, unlike the V22 algorithm, it does not rely on arbitrary thresholds imposed on the cost function to determine the success or failure of a particular mixture. Furthermore, a new Aerosol Retrieval Confidence Index (ARCI) is established that can be used to screen high-AOD retrieval blunders caused by cloud contamination or other factors. Requiring ARCI≥0.15 as a condition for retrieval success is supported through statistical analysis and outperforms the thresholds used in the V22 algorithm. The described changes to the MISR dark water algorithm will become operational in the new MISR aerosol product (V23), planned for release in 2017.
Searching for light dark matter with the SLAC millicharge experiment.
Diamond, M; Schuster, P
2013-11-27
New sub-GeV gauge forces ("dark photons") that kinetically mix with the photon provide a promising scenario for MeV-GeV dark matter and are the subject of a program of searches at fixed-target and collider facilities around the world. In such models, dark photons produced in collisions may decay invisibly into dark-matter states, thereby evading current searches. We reexamine results of the SLAC mQ electron beam dump experiment designed to search for millicharged particles and find that it was strongly sensitive to any secondary beam of dark matter produced by electron-nucleus collisions in the target. The constraints are competitive for dark photon masses in the ~1-30 MeV range, covering part of the parameter space that can reconcile the apparent (g-2)_μ anomaly. Simple adjustments to the original SLAC search for millicharges may extend sensitivity to cover a sizable portion of the remaining (g-2)_μ anomaly-motivated region. The mQ sensitivity is therefore complementary to ongoing searches for visible decays of dark photons. Compared to existing direct-detection searches, mQ sensitivity to electron-dark-matter scattering cross sections is more than an order of magnitude better for a significant range of masses and couplings in simple models.
Retrieval and Validation of Aerosol Optical Depth by using the GF-1 Remote Sensing Data
NASA Astrophysics Data System (ADS)
Zhang, L.; Xu, S.; Wang, L.; Cai, K.; Ge, Q.
2017-05-01
Based on the characteristics of GF-1 remote sensing data, a method and data processing procedure to retrieve Aerosol Optical Depth (AOD) are developed in this study. The surface contribution over dense vegetation and over urban bright target areas is removed using the dark target and deep blue algorithms, respectively. Our method is applied to three heavily polluted regions: Beijing-Tianjin-Hebei (BTH), the Yangtze River Delta (YRD), and the Pearl River Delta (PRD). The retrieved AOD is validated against ground-based AERONET data from the Beijing, Hangzhou, and Hong Kong sites. Our results show that: 1) heavy aerosol loadings are usually found over cities with high industrial emissions and dense populations, with AOD values near 1; 2) there is good agreement between satellite retrievals and in-situ observations, with correlation coefficients of 0.71 (BTH), 0.55 (YRD), and 0.54 (PRD); 3) the GF-1 retrieval uncertainties arise mainly from cloud contamination, high surface reflectance, and the assumed aerosol model.
Szörényi, Tamás; Pereszlényi, Ádám; Gerics, Balázs; Hegedüs, Ramón; Barta, András
2017-01-01
Horseflies (Tabanidae) are polarotactic, being attracted to linearly polarized light when searching for water or host animals. Although it is well known that horseflies prefer sunlit dark and strongly polarizing hosts, the reason for this preference is unknown. According to our hypothesis, horseflies use their polarization sensitivity to look for targets with higher degrees of polarization in their optical environment, which as a result facilitates detection of sunlit dark host animals. In this work, we tested this hypothesis. Using imaging polarimetry, we measured the reflection–polarization patterns of a dark host model and a living black cow under various illumination conditions and with different vegetation backgrounds. We focused on the intensity and degree of polarization of light originating from dark patches of vegetation and the dark model/cow. We compared the chances of successful host selection based on either intensity or degree of polarization of the target and the combination of these two parameters. We show that the use of polarization information considerably increases the effectiveness of visual detection of dark host animals even in front of sunny–shady–patchy vegetation. Differentiation between a weakly polarizing, shady (dark) vegetation region and a sunlit, highly polarizing dark host animal increases the efficiency of host search by horseflies. PMID:29291065
Horváth, Gábor; Szörényi, Tamás; Pereszlényi, Ádám; Gerics, Balázs; Hegedüs, Ramón; Barta, András; Åkesson, Susanne
2017-11-01
Horseflies (Tabanidae) are polarotactic, being attracted to linearly polarized light when searching for water or host animals. Although it is well known that horseflies prefer sunlit dark and strongly polarizing hosts, the reason for this preference is unknown. According to our hypothesis, horseflies use their polarization sensitivity to look for targets with higher degrees of polarization in their optical environment, which as a result facilitates detection of sunlit dark host animals. In this work, we tested this hypothesis. Using imaging polarimetry, we measured the reflection-polarization patterns of a dark host model and a living black cow under various illumination conditions and with different vegetation backgrounds. We focused on the intensity and degree of polarization of light originating from dark patches of vegetation and the dark model/cow. We compared the chances of successful host selection based on either intensity or degree of polarization of the target and the combination of these two parameters. We show that the use of polarization information considerably increases the effectiveness of visual detection of dark host animals even in front of sunny-shady-patchy vegetation. Differentiation between a weakly polarizing, shady (dark) vegetation region and a sunlit, highly polarizing dark host animal increases the efficiency of host search by horseflies.
Multi-Objective UAV Mission Planning Using Evolutionary Computation
2008-03-01
[Extracted fragments only: Figure 4.3, "Crowding distance calculation; dark points are non-dominated solutions" [14]; SPEA2 was developed by Zitzler [64] as an improvement to the original SPEA algorithm [65]; reference fragment: Homberger, Joerg and Hermann Gehring, "Two Evolutionary Metaheuristics for the..."]
Automated transient identification in the Dark Energy Survey
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goldstein, D. A.
2015-08-20
We describe an algorithm for identifying point-source transients and moving objects on reference-subtracted optical images containing artifacts of processing and instrumentation. The algorithm makes use of the supervised machine learning technique known as Random Forest. We present results from its use in the Dark Energy Survey Supernova program (DES-SN), where it was trained using a sample of 898,963 signal and background events generated by the transient detection pipeline. After reprocessing the data collected during the first DES-SN observing season (2013 September through 2014 February) using the algorithm, the number of transient candidates eligible for human scanning decreased by a factor of 13.4, while only 1.0 percent of the artificial Type Ia supernovae (SNe) injected into search images to monitor survey efficiency were lost, most of which were very faint events. Here we characterize the algorithm's performance in detail, and we discuss how it can inform pipeline design decisions for future time-domain imaging surveys, such as the Large Synoptic Survey Telescope and the Zwicky Transient Facility.
Automated transient identification in the Dark Energy Survey
Goldstein, D. A.; D'Andrea, C. B.; Fischer, J. A.; ...
2015-09-01
We describe an algorithm for identifying point-source transients and moving objects on reference-subtracted optical images containing artifacts of processing and instrumentation. The algorithm makes use of the supervised machine learning technique known as Random Forest. We present results from its use in the Dark Energy Survey Supernova program (DES-SN), where it was trained using a sample of 898,963 signal and background events generated by the transient detection pipeline. After reprocessing the data collected during the first DES-SN observing season (2013 September through 2014 February) using the algorithm, the number of transient candidates eligible for human scanning decreased by a factor of 13.4, while only 1.0% of the artificial Type Ia supernovae (SNe) injected into search images to monitor survey efficiency were lost, most of which were very faint events. Furthermore, we characterize the algorithm's performance in detail, and we discuss how it can inform pipeline design decisions for future time-domain imaging surveys, such as the Large Synoptic Survey Telescope and the Zwicky Transient Facility.
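A minimal sketch of the classification step described in these records is shown below, using scikit-learn's RandomForestClassifier on synthetic features. The real autoscan pipeline trains on features measured from difference-image stamps, so the features, labels, and score threshold here are stand-ins.

```python
# Minimal sketch: Random Forest classification of "signal vs. artifact"
# candidates on synthetic features (placeholders for stamp-derived features).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
n = 5000
X = rng.normal(size=(n, 10))                  # hypothetical stamp features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

# Score each candidate; keep only those above a threshold for human scanning.
scores = clf.predict_proba(X_te)[:, 1]
print(f"candidates passed to scanners: {(scores > 0.5).mean():.2%}")
```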
de Lasarte, Marta; Pujol, Jaume; Arjona, Montserrat; Vilaseca, Meritxell
2007-01-10
We present an optimized linear algorithm for the spatial nonuniformity correction of a CCD color camera's imaging system and the experimental methodology developed for its implementation. Using a uniform radiance field provided by an integrator cube, we assess the influence of the algorithm's variables, namely the dark image, the base correction image, and the reference level, on the quality of the correction, as well as the range of application of the correction. The best spatial nonuniformity correction is achieved by having a nonzero dark image, by using an image with a mean digital level placed in the linear response range of the camera as the base correction image, and by taking the mean digital level of the image as the reference digital level. The response of the CCD color camera's imaging system to the uniform radiance field shows a high level of spatial uniformity after the optimized algorithm has been applied, which also allows us to achieve a high-quality spatial nonuniformity correction of captured images under different exposure conditions.
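A hedged sketch of the linear correction described above follows: subtract the dark image, normalize by the dark-subtracted base correction image, and rescale to the chosen reference level. The synthetic frames and the mean-level reference are assumptions consistent with the abstract's recommendations, not the authors' data.

```python
# Hedged sketch of a linear spatial non-uniformity (flat-field) correction.
import numpy as np

def flat_field_correct(raw, dark, base, reference_level=None):
    """Dark-subtract, normalize by the base correction image, rescale to the reference."""
    flat = base - dark
    if reference_level is None:
        reference_level = flat.mean()          # reference taken as the mean level
    return (raw - dark) / np.maximum(flat, 1e-6) * reference_level

rng = np.random.default_rng(5)
gain = 1 + 0.05 * rng.standard_normal((32, 32))       # pixel-to-pixel sensitivity
dark = 10 + rng.standard_normal((32, 32))             # nonzero dark image, as recommended
base = dark + 2000 * gain                              # uniform-field base correction image
raw  = dark + 1200 * gain                              # scene frame of a uniform field
corrected = flat_field_correct(raw, dark, base)
print(corrected.std() / corrected.mean())              # residual non-uniformity
```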
Concept for a dark matter detector using liquid helium-4
NASA Astrophysics Data System (ADS)
Guo, W.; McKinsey, D. N.
2013-06-01
Direct searches for light dark matter particles (mass < 10 GeV) are especially challenging because of the low energies transferred in elastic scattering to typical heavy nuclear targets. We investigate the possibility of using liquid helium-4 as a target material, taking advantage of the favorable kinematic matching of the helium nucleus to light dark matter particles. Monte Carlo simulations are performed to calculate the charge, scintillation, and triplet helium molecule signals produced by recoil He ions, for a variety of energies and electric fields. We show that excellent background rejection might be achieved based on the ratios between different signal channels. The sensitivity of the helium-based detector to light dark matter particles is estimated for various electric fields and light collection efficiencies.
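The "favorable kinematic matching" argument can be made concrete with a back-of-the-envelope calculation of the maximum elastic recoil energy, E_max = 2 μ² v² / m_N, on helium versus a heavy target such as xenon; the masses and velocity below are illustrative values, not the paper's simulation inputs.

```python
# Back-of-the-envelope: maximum elastic nuclear recoil energy for light dark
# matter on a light (He-4) vs. heavy (Xe-131) target, E_max = 2 mu^2 v^2 / m_N.
m_dm = 1.0                         # dark matter mass, GeV (illustrative)
v = 1.8e-3                         # ~540 km/s velocity scale, in units of c

def max_recoil_keV(m_dm, m_nucleus_GeV, v):
    mu = m_dm * m_nucleus_GeV / (m_dm + m_nucleus_GeV)     # reduced mass, GeV
    return 2 * mu**2 * v**2 / m_nucleus_GeV * 1e6          # GeV -> keV

for name, m_n in [("He-4", 3.73), ("Xe-131", 122.0)]:      # nuclear masses in GeV
    print(f"{name}: E_max ~ {max_recoil_keV(m_dm, m_n, v):.2f} keV")
```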
Search for Invisible Decays of Sub-GeV Dark Photons in Missing-Energy Events at the CERN SPS.
Banerjee, D; Burtsev, V; Cooke, D; Crivelli, P; Depero, E; Dermenev, A V; Donskov, S V; Dubinin, F; Dusaev, R R; Emmenegger, S; Fabich, A; Frolov, V N; Gardikiotis, A; Gninenko, S N; Hösgen, M; Kachanov, V A; Karneyeu, A E; Ketzer, B; Kirpichnikov, D V; Kirsanov, M M; Kovalenko, S G; Kramarenko, V A; Kravchuk, L V; Krasnikov, N V; Kuleshov, S V; Lyubovitskij, V E; Lysan, V; Matveev, V A; Mikhailov, Yu V; Myalkovskiy, V V; Peshekhonov, V D; Peshekhonov, D V; Petuhov, O; Polyakov, V A; Radics, B; Rubbia, A; Samoylenko, V D; Tikhomirov, V O; Tlisov, D A; Toropin, A N; Trifonov, A Yu; Vasilishin, B; Vasquez Arenas, G; Ulloa, P; Zhukov, K; Zioutas, K
2017-01-06
We report on a direct search for sub-GeV dark photons (A'), which might be produced in the reaction e-Z → e-ZA' via kinetic mixing with photons by 100 GeV electrons incident on an active target in the NA64 experiment at the CERN SPS. The dark photons would decay invisibly into dark matter particles, resulting in events with large missing energy. No evidence for such decays was found with 2.75×10^9 electrons on target. We set new limits on the γ-A' mixing strength and exclude the invisible A' with a mass ≲ 100 MeV as an explanation of the muon g_μ-2 anomaly.
ShellFit: Reconstruction in the MiniCLEAN Detector
NASA Astrophysics Data System (ADS)
Seibert, Stanley
2010-02-01
The MiniCLEAN dark matter experiment is an ultra-low background liquid cryogen detector with a fiducial volume of approximately 150 kg. Dark matter candidate events produce ultraviolet scintillation light in argon at 128 nm and in neon at 80 nm. In order to detect this scintillation light, the target volume is enclosed by acrylic plates forming a spherical shell upon which an organic fluor, tetraphenyl butadiene (TPB), has been applied. TPB absorbs UV light and re-emits visible light isotropically, which can be detected by photomultiplier tubes. Two significant sources of background events in MiniCLEAN are decays of radon daughters embedded in the acrylic surface and external sources of neutrons, such as the photomultiplier tubes themselves. Both of these backgrounds can be mitigated by reconstructing the origin of the scintillation light and cutting events beyond a particular radius. The scrambling of photon trajectories at the TPB surface makes this task very challenging. The "ShellFit" algorithm for reconstructing event position and energy in a detector with a spherical wavelength-shifting shell will be described. The performance of ShellFit will be demonstrated using Monte Carlo simulation of several event types in the MiniCLEAN detector.
Retrieval of aerosol optical depth over bare soil surfaces using time series of MODIS imagery
NASA Astrophysics Data System (ADS)
Yuan, Zhengwu; Yuan, Ranyin; Zhong, Bo
2014-11-01
Aerosol Optical Depth (AOD) is one of the key parameters that not only reflects the characterization of atmospheric turbidity, but also identifies the climate effects of aerosol. The current MODIS aerosol estimation algorithm over land is based on the "dark-target" approach, which works only over densely vegetated surfaces. For non-densely vegetated surfaces (such as snow/ice, desert, and bare soil), this method fails. In this study, we develop an algorithm to derive AOD over bare soil surfaces. Firstly, the method uses the time series of MODIS imagery to detect the "clearest" observations during the non-growing season in multiple years for each pixel. Secondly, the "clearest" observations, after suitable atmospheric correction, are used to fit the bare soil's bidirectional reflectance distribution function (BRDF) using a kernel model. Once the bare soil's BRDF is established, the surface reflectance of "hazy" observations can be simulated, and the AOD over the bare soil surfaces is derived. Preliminary validation by comparison with ground measurements from the AERONET Xianghe site shows good agreement.
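A hedged sketch of the BRDF-fitting step is given below: a linear kernel-driven model (isotropic + volumetric + geometric terms) is fit to the atmospherically corrected "clearest" observations by least squares and then used to predict surface reflectance for a new geometry. The kernel values are random placeholders; in practice they would come from Ross-Li-type kernel formulas evaluated for each sun/view geometry.

```python
# Hedged sketch: least-squares fit of a linear kernel-driven BRDF model
# (f_iso + f_vol*K_vol + f_geo*K_geo) to clear-sky surface reflectances.
import numpy as np

rng = np.random.default_rng(6)
n_obs = 20
k_vol, k_geo = rng.uniform(-0.2, 0.6, n_obs), rng.uniform(-1.0, 0.2, n_obs)
true_params = np.array([0.12, 0.05, 0.03])            # f_iso, f_vol, f_geo (hypothetical)

A = np.column_stack([np.ones(n_obs), k_vol, k_geo])   # design matrix of kernels
rho_clear = A @ true_params + rng.normal(0, 0.005, n_obs)   # "clearest" reflectances

f_iso, f_vol, f_geo = np.linalg.lstsq(A, rho_clear, rcond=None)[0]
print(f"fitted BRDF parameters: {f_iso:.3f}, {f_vol:.3f}, {f_geo:.3f}")
# Predicted surface reflectance for a new geometry with kernels (0.3, -0.5):
print(f"predicted surface reflectance: {f_iso + 0.3 * f_vol - 0.5 * f_geo:.3f}")
```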
Search for vector mediator of dark matter production in invisible decay mode
NASA Astrophysics Data System (ADS)
Banerjee, D.; Burtsev, V. E.; Chumakov, A. G.; Cooke, D.; Crivelli, P.; Depero, E.; Dermenev, A. V.; Donskov, S. V.; Dubinin, F.; Dusaev, R. R.; Emmenegger, S.; Fabich, A.; Frolov, V. N.; Gardikiotis, A.; Gerassimov, S. G.; Gninenko, S. N.; Hösgen, M.; Karneyeu, A. E.; Ketzer, B.; Kirpichnikov, D. V.; Kirsanov, M. M.; Konorov, I. V.; Kovalenko, S. G.; Kramarenko, V. A.; Kravchuk, L. V.; Krasnikov, N. V.; Kuleshov, S. V.; Lyubovitskij, V. E.; Lysan, V.; Matveev, V. A.; Mikhailov, Yu. V.; Peshekhonov, D. V.; Polyakov, V. A.; Radics, B.; Rojas, R.; Rubbia, A.; Samoylenko, V. D.; Tikhomirov, V. O.; Tlisov, D. A.; Toropin, A. N.; Trifonov, A. Yu.; Vasilishin, B. I.; Vasquez Arenas, G.; Ulloa, P.; NA64 Collaboration
2018-04-01
A search is performed for a new sub-GeV vector boson (A') mediated production of dark matter (χ) in the fixed-target experiment NA64 at the CERN SPS. The A', called the dark photon, can be generated in the reaction e-Z → e-ZA' of 100 GeV electrons dumped against an active target, followed by its prompt invisible decay A' → χχ̄. The experimental signature of this process would be an event with an isolated electron and large missing energy in the detector. From the analysis of the data sample collected in 2016, corresponding to 4.3×10^10 electrons on target, no evidence of such a process has been found. New stringent constraints on the A' mixing strength with photons, 10^-5 ≲ ε ≲ 10^-2, for the A' mass range m_A' ≲ 1 GeV are derived. For models considering scalar and fermionic thermal dark matter interacting with the visible sector through the vector portal, the 90% C.L. limits 10^-11 ≲ y ≲ 10^-6 on the dark-matter parameter y = ε²α_D(m_χ/m_A')⁴ are obtained for the dark coupling constant α_D = 0.5 and dark-matter masses 0.001 ≲ m_χ ≲ 0.5 GeV. The lower limits α_D ≳ 10^-3 for pseudo-Dirac dark matter in the mass region m_χ ≲ 0.05 GeV are more stringent than the corresponding bounds from beam dump experiments. The results are obtained by using exact tree-level calculations of the A' production cross sections, which turn out to be significantly smaller than those obtained in the Weizsäcker-Williams approximation for the mass region m_A' ≳ 0.1 GeV.
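The dark-matter parameter quoted above, y = ε²α_D(m_χ/m_A')⁴, can be evaluated directly; the benchmark values in the snippet below (ε, m_χ, and the common m_A' = 3m_χ choice) are illustrative assumptions, not the paper's benchmarks.

```python
# Direct evaluation of y = eps^2 * alpha_D * (m_chi / m_A')^4 for example values.
def dm_parameter_y(eps, alpha_d, m_chi, m_aprime):
    return eps**2 * alpha_d * (m_chi / m_aprime) ** 4

# Illustrative benchmark: alpha_D = 0.5 and m_A' = 3 * m_chi (assumed here).
print(dm_parameter_y(eps=1e-4, alpha_d=0.5, m_chi=0.1, m_aprime=0.3))
```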
Current Status of the dark matter experiment DarkSide-50
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marini, L.; Pagani, Ioanna; Agnes, P.
2016-07-12
DarkSide-50 is a dark matter direct search experiment at LNGS, searching for rare nuclear recoils possibly induced by WIMPs. It has two nested vetoes and a dual-phase liquid argon TPC as the dark matter detector. Key features of this experiment are the use of underground argon as a radio-pure target and of active muon and neutron vetoes to suppress the background. The first data-taking campaign ran from November 2013 to April 2015 with an atmospheric argon target and a reduced-efficiency neutron veto due to internal contamination. Nevertheless, an upper limit on the WIMP-nucleon cross section of 6.1×10^-44 cm^2 at 90% CL was obtained for a WIMP mass of 100 GeV/c^2 and an exposure of (1422 ± 67) kg·d. DarkSide-50 has now started a three-year run, intended to be background-free because the neutron veto was successfully recovered and underground argon replaced the atmospheric argon. Additionally, calibration campaigns for both the TPC and the neutron veto were completed. Thanks to the good performance of the background rejection, the results obtained so far suggest the scalability of DarkSide-50 to a ton-scale detector, which will play a key role in the dark matter search scenario.
Current status of the dark matter experiment DarkSide-50
NASA Astrophysics Data System (ADS)
Marini, L.; Pagani, L.; Agnes, P.; Alexander, T.; Alton, A.; Arisaka, K.; Back, H. O.; Baldin, B.; Biery, K.; Bonfini, G.; Bossa, M.; Brigatti, A.; Brodsky, J.; Budano, F.; Cadonati, L.; Calaprice, F.; Canci, N.; Candela, A.; Cao, H.; Cariello, M.; Cavalcante, P.; Chavarria, A.; Chepurnov, A.; Cocco, A. G.; D'Angelo, D.; D'Incecco, M.; Davini, S.; De Deo, M.; Derbin, A.; Devoto, A.; Di Eusanio, F.; Di Pietro, G.; Edkins, E.; Empl, A.; Fan, A.; Fiorillo, G.; Fomenko, K.; Forster, G.; Franco, D.; Gabriele, F.; Galbiati, C.; Goretti, A.; Grandi, L.; Gromov, M.; Guan, M. Y.; Guardincerri, Y.; Hackett, B.; Herner, K.; Humble, P.; Hungerford, E. V.; Ianni, Al.; Ianni, An.; Jollet, C.; Keeter, K.; Kendziora, C.; Kidner, S.; Kobychev, V.; Koh, G.; Korablev, D.; Korga, G.; Kurlej, A.; Li, P. X.; Lombardi, P.; Love, C.; Ludhova, L.; Luitz, S.; Ma, Y. Q.; Machulin, I.; Mandarano, A.; Mari, S.; Maricic, J.; Martoff, C. J.; Meregaglia, A.; Meroni, E.; Meyers, P. D.; Milincic, R.; Montanari, D.; Montuschi, M.; Monzani, M. E.; Mosteiro, P.; Mount, B.; Muratova, V.; Musico, P.; Nelson, A.; Odrowski, S.; Okounkova, M.; Orsini, M.; Ortica, F.; Pallavicini, M.; Pantic, E.; Papp, L.; Parmeggiano, S.; Parsells, R.; Pelczar, K.; Pelliccia, N.; Perasso, S.; Pocar, A.; Pordes, S.; Pugachev, D.; Qian, H.; Randle, K.; Ranucci, G.; Razeto, A.; Reinhold, B.; Renshaw, A.; Romani, A.; Rossi, B.; Rossi, N.; Rountree, S. D.; Sablone, D.; Saggese, P.; Saldanha, R.; Sands, W.; Sangiorgio, S.; Segreto, E.; Semenov, D.; Shields, E.; Skorokhvatov, M.; Smirnov, O.; Sotnikov, A.; Stanford, C.; Suvorov, Y.; Tartaglia, R.; Tatarowicz, J.; Testera, G.; Tonazzo, A.; Unzhakov, E.; Vogelaar, R. B.; Wada, M.; Walker, S.; Wang, H.; Wang, Y.; Watson, A.; Westerdale, S.; Wojcik, M.; Wright, A.; Xiang, X.; Xu, J.; Yang, C. G.; Yoo, J.; Zavatarelli, S.; Zec, A.; Zhu, C.; Zuzel, G.; DarkSide Collaboration
2016-01-01
DarkSide-50 is a dark matter direct search experiment at LNGS, searching for rare nuclear recoils possibly induced by WIMPs. It has two nested vetoes and a dual-phase liquid argon TPC as the dark matter detector. Key features of this experiment are the use of underground argon as a radio-pure target and of active muon and neutron vetoes to suppress the background. The first data-taking campaign ran from November 2013 to April 2015 with an atmospheric argon target and a reduced-efficiency neutron veto due to internal contamination. Nevertheless, an upper limit on the WIMP-nucleon cross section of 6.1×10^-44 cm^2 at 90% CL was obtained for a WIMP mass of 100 GeV/c^2 and an exposure of (1422 ± 67) kg·d. DarkSide-50 has now started a three-year run, intended to be background-free because the neutron veto was successfully recovered and underground argon replaced the atmospheric argon. Additionally, calibration campaigns for both the TPC and the neutron veto were completed. Thanks to the good performance of the background rejection, the results obtained so far suggest the scalability of DarkSide-50 to a ton-scale detector, which will play a key role in the dark matter search scenario.
NASA Astrophysics Data System (ADS)
Bottino, B.; Aalseth, C. E.; Acconcia, G.; Acerbi, F.; Agnes, P.; Agostino, L.; Albuquerque, I. F. M.; Alexander, T.; Alton, A.; Ampudia, P.; Ardito, R.; Arisaka, K.; Arnquist, I. J.; Asner, D. M.; Back, H. O.; Baldin, B.; Batignani, G.; Biery, K.; Bisogni, M. G.; Bocci, V.; Bondar, A.; Bonfini, G.; Bonivento, W.; Bossa, M.; Brigatti, A.; Brodsky, J.; Budano, F.; Bunker, R.; Bussino, S.; Buttafava, M.; Buzulutskov, A.; Cadeddu, M.; Cadoni, M.; Calandri, N.; Calaprice, F.; Calvo, J.; Campajola, L.; Canci, N.; Candela, A.; Cantini, C.; Cao, H.; Caravati, M.; Cariello, M.; Carlini, M.; Carpinelli, M.; Castellani, A.; Catalanotti, S.; Cavalcante, P.; Chepurnov, A.; Cicalò, C.; Citterio, M.; Cocco, A. G.; Corgiolu, S.; Covone, G.; Crivelli, P.; D'Angelo, D.; D'Incecco, M.; Daniel, M.; Davini, S.; De Cecco, S.; De Deo, M.; De Guido, G.; De Vincenzi, M.; Demontis, P.; Derbin, A.; Devoto, A.; Di Eusanio, F.; Di Pietro, G.; Dionisi, C.; Dolgov, A.; Dromia, I.; Dussoni, S.; Edkins, E.; Empl, A.; Fan, A.; Ferri, A.; Filip, C. O.; Fiorillo, G.; Fomenko, K.; Forster, G.; Franco, D.; Froudakis, G. E.; Gabriele, F.; Gabrieli, A.; Galbiati, C.; Gendotti, A.; Ghioni, M.; Ghisi, A.; Giagu, S.; Gibertoni, G.; Giganti, C.; Giorgi, M.; Giovannetti, G. K.; Gligan, M. L.; Gola, A.; Goretti, A.; Granato, F.; Grassi, M.; Grate, J. W.; Gromov, M.; Guan, M.; Guardincerri, Y.; Gulinatti, A.; Haaland, R. K.; Hackett, B.; Harrop, B.; Herner, K.; Hoppe, E. W.; Horikawa, S.; Hungerford, E.; Ianni, Al.; Ianni, An.; Ivashchuk, O.; James, I.; Johnson, T. N.; Jollet, C.; Keeter, K.; Kendziora, C.; Kobychev, V.; Koh, G.; Korablev, D.; Korga, G.; Kubankin, A.; Kuss, M. W.; Lissia, M.; Li, X.; Lodi, G. U.; Lombardi, P.; Longo, G.; Loverre, P.; Luitz, S.; Lussana, R.; Luzzi, L.; Ma, Y.; Machado, A. A.; Machulin, I.; Mais, L.; Mandarano, A.; Mapelli, L.; Marcante, M.; Mari, S.; Mariani, M.; Maricic, J.; Marinelli, M.; Marini, L.; Martoff, C. J.; Mascia, M.; Meregaglia, A.; Meyers, P. D.; Miletic, T.; Milincic, R.; Miller, J. D.; Moioli, S.; Monasterio, S.; Montanari, D.; Monte, A.; Montuschi, M.; Monzani, M. E.; Morrocchi, M.; Mosteiro, P.; Mount, B.; Mu, W.; Muratova, V. N.; Murphy, S.; Musico, P.; Napolitano, J.; Nelson, A.; Nosov, V.; Nurakhov, N. N.; Odrowski, S.; Oleinik, A.; Orsini, M.; Ortica, F.; Pagani, L.; Pallavicini, M.; Palmas, S.; Pantic, E.; Paoloni, E.; Parmeggiano, S.; Paternoster, G.; Pazzona, F.; Pelczar, K.; Pellegrini, L. A.; Pelliccia, N.; Perasso, S.; Peronio, P.; Perotti, F.; Perruzza, R.; Piemonte, C.; Pilo, F.; Pocar, A.; Pordes, S.; Pugachev, D.; Qian, H.; Radics, B.; Randle, K.; Ranucci, G.; Razeti, M.; Razeto, A.; Rech, I.; Regazzoni, V.; Regenfus, C.; Reinhold, B.; Renshaw, A.; Rescigno, M.; Ricotti, M.; Riffard, Q.; Rizzardini, S.; Romani, A.; Romero, L.; Rossi, B.; Rossi, N.; Rountree, D.; Rubbia, A.; Ruggeri, A.; Sablone, D.; Saggese, P.; Salatino, P.; Salemme, L.; Sands, W.; Sangiorgio, S.; Sant, M.; Santorelli, R.; Sanzaro, M.; Savarese, C.; Sechi, E.; Segreto, E.; Semenov, D.; Shchagin, A.; Shekhtman, L.; Shemyakina, E.; Shields, E.; Simeone, M.; Singh, P. N.; Skorokhvatov, M.; Smallcomb, M.; Smirnov, O.; Sokolov, A.; Sotnikov, A.; Stanford, C.; Suffritti, G. 
B.; Suvorov, Y.; Tamborini, D.; Tartaglia, R.; Tatarowicz, J.; Testera, G.; Tonazzo, A.; Tosi, A.; Trinchese, P.; Unzhakov, E.; Vacca, A.; Verducci, M.; Viant, T.; Villa, F.; Vishneva, A.; Vogelaar, B.; Wada, M.; Walker, S.; Wang, H.; Wang, Y.; Watson, A.; Westerdale, S.; Wilhelmi, J.; Wojcik, M.; Wu, S.; Xiang, X.; Xu, J.; Yang, C.; Yoo, J.; Zappa, F.; Zappalà, G.; Zavatarelli, S.; Zec, A.; Zhong, W.; Zhu, C.; Zullo, A.; Zullo, M.; Zuzel, G.
2017-01-01
DarkSide is a dark matter direct search experiment at Laboratori Nazionali del Gran Sasso (LNGS). DarkSide is based on the detection of rare nuclear recoils possibly induced by hypothetical dark matter particles, which are expected to be neutral, massive (m > 10 GeV), and weakly interacting (WIMPs). The dark matter detector is a two-phase time projection chamber (TPC) filled with ultra-pure liquid argon. The TPC is placed inside active muon and neutron vetoes to suppress the background. Using argon as the active target has many advantages; key features are the strong discrimination power between nuclear and electron recoils, the spatial reconstruction, and easy scalability to multi-ton size. At the moment DarkSide-50 is filled with ultra-pure argon extracted from underground sources, and since April 2015 it has been taking data in its final configuration. When combined with the preceding search with an atmospheric argon target, it is possible to set a 90% CL upper limit on the WIMP-nucleon spin-independent cross section of 2.0×10^-44 cm^2 for a WIMP mass of 100 GeV/c^2. The next phase of the experiment, DarkSide-20k, will be the construction of a new detector with an active mass of ~20 tons.
ERIC Educational Resources Information Center
Zhou, Liu; He, Zijiang J.; Ooi, Teng Leng
2013-01-01
Dimly lit targets in the dark are perceived as located about an implicit slanted surface that delineates the visual system's intrinsic bias (Ooi, Wu, & He, 2001). If the intrinsic bias reflects the internal model of visual space--as proposed here--its influence should extend beyond target localization. Our first 2 experiments demonstrated that…
A Quantitative Methodology for Vetting Dark Network Intelligence Sources for Social Network Analysis
2012-06-01
...first algorithm by Erdös and Rényi (Erdös & Rényi, 1959). This earliest algorithm suffers from the fact that its degree distribution is not scale-free... [Reference fragments: Fundamental Media Understanding. Norderstedt: atpress; Erdös, P., & Rényi, A. (1959). On random graphs. Publicationes Mathematicae, 6, 290-297.]
NASA Astrophysics Data System (ADS)
Diakogiannis, Foivos I.; Lewis, Geraint F.; Ibata, Rodrigo A.; Guglielmo, Magda; Kafle, Prajwal R.; Wilkinson, Mark I.; Power, Chris
2017-09-01
Dwarf galaxies, among the most dark matter dominated structures of our Universe, are excellent test-beds for dark matter theories. Unfortunately, mass modelling of these systems suffers from the well-documented mass-velocity anisotropy degeneracy. For the case of spherically symmetric systems, we describe a method for non-parametric modelling of the radial and tangential velocity moments. The method is a numerical velocity anisotropy 'inversion' with parametric mass models, where the radial velocity dispersion profile, σ_rr^2, is modelled as a B-spline, and the optimization is a three-step process that consists of (I) an evolutionary modelling to determine the mass model form and the best B-spline basis to represent σ_rr^2; (II) an optimization of the smoothing parameters; and (III) a Markov chain Monte Carlo analysis to determine the physical parameters. The mass-anisotropy degeneracy is reduced to mass model inference, irrespective of kinematics. We test our method using synthetic data. Our algorithm constructs the best kinematic profile and discriminates between competing dark matter models. We apply our method to the Fornax dwarf spheroidal galaxy. Using a King brightness profile and testing various dark matter mass models, our model inference favours a simple mass-follows-light system. We find that the anisotropy profile of Fornax is tangential (β(r) < 0) and we estimate a total mass of M_tot = 1.613^{+0.050}_{-0.075} × 10^8 M_⊙, and a mass-to-light ratio of Υ_V = 8.93^{+0.32}_{-0.47} (M_⊙/L_⊙). The algorithm we present is a robust and computationally inexpensive method for non-parametric modelling of spherical clusters independent of the mass-anisotropy degeneracy.
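As a toy illustration of representing σ_rr² as a B-spline (the paper's evolutionary search, smoothing optimization, and MCMC stages are not reproduced), the snippet below fits a smoothing cubic B-spline to a synthetic dispersion profile with SciPy; the profile shape and smoothing factor are assumptions.

```python
# Toy sketch: smoothing cubic B-spline representation of a noisy radial
# velocity dispersion profile (stand-in for sigma_rr^2 data).
import numpy as np
from scipy.interpolate import splrep, splev

rng = np.random.default_rng(7)
r = np.linspace(0.05, 2.0, 40)                        # projected radius, kpc (synthetic)
sigma2 = 120.0 * np.exp(-r / 1.5) + rng.normal(0, 5, r.size)   # noisy "sigma_rr^2" data

tck = splrep(r, sigma2, k=3, s=len(r) * 25.0)         # cubic B-spline with smoothing s
r_fine = np.linspace(0.05, 2.0, 200)
print(splev(r_fine, tck)[:5])                         # smoothed profile samples
```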
ERRATUM: “Automated Transient Identification in the Dark Energy Survey” (2015, AJ, 150, 82)
Goldstein, D. A.; D’Andrea, C. B.; Fischer, J. A.; ...
2015-08-20
Here, we describe an algorithm for identifying point-source transients and moving objects on reference-subtracted optical images containing artifacts of processing and instrumentation. The algorithm makes use of the supervised machine learning technique known as Random Forest. We present results from its use in the Dark Energy Survey Supernova program (DES-SN), where it was trained using a sample of 898,963 signal and background events generated by the transient detection pipeline. After reprocessing the data collected during the first DES-SN observing season (2013 September through 2014 February) using the algorithm, the number of transient candidates eligible for human scanning decreased by a factor of 13.4, while only 1.0% of the artificial Type Ia supernovae (SNe) injected into search images to monitor survey efficiency were lost, most of which were very faint events. Here we characterize the algorithm's performance in detail, and we discuss how it can inform pipeline design decisions for future time-domain imaging surveys, such as the Large Synoptic Survey Telescope and the Zwicky Transient Facility. An implementation of the algorithm and the training data used in this paper are available at http://portal.nersc.gov/project/dessn/autoscan.
The DarkLight Experiment at the JLab FEL
NASA Astrophysics Data System (ADS)
Fisher, Peter
2013-10-01
DarkLight will study the production of gauge bosons associated with Dark Forces theories in the scattering of 100 MeV electrons on a proton target. DarkLight is a spectrometer to measure all the final state particles in e- + p → e- + p + e- + e+. QED allows this process, and the invariant mass distribution of the e+e- pair is a continuum from nearly zero to nearly the electron beam energy. Dark Forces theories, which allow the dark matter mass scale to be over 1 TeV, predict a gauge boson A' in the mass range of 10-1,000 MeV that decays to an electron-positron pair with an invariant mass of m_A'. We aim to search for this process using the 100 MeV, 10 mA electron beam at the JLab Free Electron Laser impinging on a hydrogen target with a density of 10^19 cm^-2. The resulting luminosity of 6×10^35 cm^-2 s^-1 gives the experiment enough sensitivity to probe A' couplings of 10^-9 α. DarkLight is unique in its design to detect all four particles in the final state. The leptons will be measured in a large high-rate TPC and a silicon sensor will measure the protons. A 0.5 T solenoidal magnetic field provides the momentum resolution and focuses the copious Møller scattering background down the beam line, away from the detectors. A first beam test has shown the FEL beam is compatible with the target design and that the hall backgrounds are manageable. The experiment has been approved by Jefferson Lab for first running in 2017.
Direct detection of sub-GeV dark matter with scintillating targets
Derenzo, Stephen; Essig, Rouven; Massari, Andrea; ...
2017-07-28
We suggest a novel experimental concept for detecting MeV-to-GeV-mass dark matter, in which the dark matter scatters off electrons in a scintillating target and produces a signal of one or a few photons. New large-area photodetectors are needed to measure the photon signal with negligible dark counts, which could be constructed from transition edge sensor (TES) or microwave kinetic inductance detector (MKID) technology. Alternatively, detecting two photons in coincidence may allow the use of conventional photodetectors like photomultiplier tubes. Here we describe why scintillators may have distinct advantages over other experiments searching for a low ionization signal from sub-GeV dark matter, as there are fewer potential sources of spurious backgrounds. We discuss various target choices, but focus on calculating the expected dark matter-electron scattering rates in three scintillating crystals: sodium iodide (NaI), cesium iodide (CsI), and gallium arsenide (GaAs). Among these, GaAs has the lowest band gap (1.52 eV) compared to NaI (5.9 eV) or CsI (6.4 eV), which in principle allows it to probe dark matter masses as low as ~0.5 MeV, compared to ~1.5 MeV with NaI or CsI. We compare these scattering rates with those expected in silicon (Si) and germanium (Ge). The proposed experimental concept presents an important complementary path to existing efforts, and its potential advantages may make it the most sensitive direct-detection probe of dark matter down to MeV masses.
Direct detection of sub-GeV dark matter with scintillating targets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Derenzo, Stephen; Essig, Rouven; Massari, Andrea
We suggest a novel experimental concept for detecting MeV-to-GeV-mass dark matter, in which the dark matter scatters off electrons in a scintillating target and produces a signal of one or a few photons. New large-area photodetectors are needed to measure the photon signal with negligible dark counts, which could be constructed from transition edge sensor (TES) or microwave kinetic inductance detector (MKID) technology. Alternatively, detecting two photons in coincidence may allow the use of conventional photodetectors like photomultiplier tubes. Here we describe why scintillators may have distinct advantages over other experiments searching for a low ionization signal from sub-GeV dark matter, as there are fewer potential sources of spurious backgrounds. We discuss various target choices, but focus on calculating the expected dark matter-electron scattering rates in three scintillating crystals: sodium iodide (NaI), cesium iodide (CsI), and gallium arsenide (GaAs). Among these, GaAs has the lowest band gap (1.52 eV) compared to NaI (5.9 eV) or CsI (6.4 eV), which in principle allows it to probe dark matter masses as low as ~0.5 MeV, compared to ~1.5 MeV with NaI or CsI. We compare these scattering rates with those expected in silicon (Si) and germanium (Ge). The proposed experimental concept presents an important complementary path to existing efforts, and its potential advantages may make it the most sensitive direct-detection probe of dark matter down to MeV masses.
Quantum partial search for uneven distribution of multiple target items
NASA Astrophysics Data System (ADS)
Zhang, Kun; Korepin, Vladimir
2018-06-01
The quantum partial search algorithm is an approximate search. It aims to find a target block (the block that contains the target items). It runs slightly faster than a full Grover search. In this paper, we consider the quantum partial search algorithm for multiple target items unevenly distributed in a database (target blocks have different numbers of target items). The algorithm we describe can locate one of the target blocks. The efficiency of the algorithm is measured by the number of queries to the oracle. We optimize the algorithm in order to improve efficiency. By a perturbation method, we find that the algorithm runs fastest when the target items are evenly distributed in the database.
NASA Astrophysics Data System (ADS)
Wu, Yerong; de Graaf, Martin; Menenti, Massimo
2017-08-01
Global quantitative aerosol information has been derived from MODerate Resolution Imaging SpectroRadiometer (MODIS) observations since early 2000 and is widely used for air quality and climate change research. However, the operational MODIS Aerosol Optical Depth (AOD) products of Collection 6 (C6) can still be biased because of uncertainty in the assumed aerosol optical properties and aerosol vertical distribution. This study investigates the impact of the aerosol vertical distribution on the AOD retrieval. We developed a new algorithm that considers dynamic vertical profiles, as an adaptation of the MODIS C6 Dark Target (C6_DT) algorithm over land. The new algorithm uses the aerosol vertical profile extracted from Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO) measurements to generate a more accurate top-of-the-atmosphere (TOA) reflectance for the AOD retrieval, where the profile is assumed to be a single layer and is represented as a Gaussian function with the mean height as the single variable. To test the impact, a comparison was made between MODIS DT and Aerosol Robotic Network (AERONET) AOD over dust and smoke regions. The results show that the aerosol vertical distribution has a strong impact on the AOD retrieval. Aerosol layers assumed to lie close to the ground can negatively bias the retrievals in C6_DT. For the evaluated smoke and dust layers, the new algorithm can improve the retrieval by reducing the negative biases by 3-5%.
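The single-layer Gaussian profile with the mean height as the only free parameter, as described above, can be written down compactly. The sketch below is an illustrative parameterization; the layer width and altitude grid are assumed values, not the paper's settings.

```python
import numpy as np

def gaussian_aerosol_profile(z_km, aod, mean_height_km, sigma_km=1.0):
    """Aerosol extinction profile (per km) modelled as a single Gaussian layer,
    normalized so its vertical integral equals the column AOD.
    mean_height_km is the single free variable; sigma_km is an assumed width."""
    dz = z_km[1] - z_km[0]
    shape = np.exp(-0.5 * ((z_km - mean_height_km) / sigma_km) ** 2)
    return aod * shape / (shape.sum() * dz)

z = np.linspace(0.0, 10.0, 201)                      # altitude grid, km
ext = gaussian_aerosol_profile(z, aod=0.5, mean_height_km=3.0)
print(round(float(ext.sum() * (z[1] - z[0])), 3))    # 0.5 = prescribed column AOD
```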
Suppression of vegetation in LANDSAT ETM+ remote sensing images
NASA Astrophysics Data System (ADS)
Yu, Le; Porwal, Alok; Holden, Eun-Jung; Dentith, Michael
2010-05-01
Vegetation cover is an impediment to the interpretation of multispectral remote sensing images for geological applications, especially in densely vegetated terrains. In order to enhance the underlying geological information in such terrains, it is desirable to suppress the reflectance component of vegetation. One form of spectral unmixing that has been successfully used for vegetation reflectance suppression in multispectral images is called "forced invariance". It is based on segregating components of the reflectance spectrum that are invariant with respect to a specific spectral index such as the NDVI. The forced invariance method uses algorithms such as software defoliation. However, the outputs of software defoliation are single-channel data, which are not amenable to geological interpretation. Crippen and Blom (2001) proposed a new forced invariance algorithm that utilizes band statistics rather than band ratios. The authors demonstrated the effectiveness of their algorithm on a LANDSAT TM scene from Nevada, USA, especially in open canopy areas in mixed and semi-arid terrains. In this presentation, we report the results of our experimentation with this algorithm on a densely to sparsely vegetated Landsat ETM+ scene. We selected a scene (Path 119, Row 39) acquired on 18 July 2004. Two study areas located around the city of Hangzhou, eastern China, were tested. One of them covers uninhabited hilly terrain characterized by low rugged topography, parts of which are densely vegetated; the other covers both inhabited urban areas and uninhabited, densely vegetated hilly terrain. Crippen and Blom's algorithm is implemented in the following sequential steps: (1) dark pixel correction; (2) vegetation index calculation; (3) estimation of the statistical relationship between the vegetation index and the digital number (DN) values for each band; (4) calculation of a smooth best-fit curve for the above relationships; and finally, (5) selection of a target average DN value and scaling of all pixels at each vegetation index level by an amount that shifts the curve to the target DN. The main drawback of their algorithm is severe distortion of the DN values of non-vegetated areas; a suggested solution is masking outliers such as cloud, water, etc. We therefore extend the algorithm by masking non-vegetated areas. Our algorithm comprises the following three steps: (1) masking of barren or sparsely vegetated areas using a threshold on a vegetation index calculated after atmospheric correction (dark pixel correction and ATCOR were compared), in order to preserve their original spectral information through the subsequent processing; (2) applying Crippen and Blom's forced invariance algorithm to suppress the spectral response of vegetation only in vegetated areas; and (3) combining the processed vegetated areas with the masked barren or sparsely vegetated areas, followed by histogram equalization to eliminate the differences in color scales between these two types of areas and to enhance the integrated image. The output images of both study areas showed significant improvement over the original images in terms of suppression of vegetation reflectance and enhancement of the underlying geological information. The processed images show clear banding, probably associated with lithological variations in the underlying rock formations.
The colors of non-vegetated pixels are distorted in the unmasked results, whereas at the same locations the masked results show regions of higher contrast. We conclude that the algorithm offers an effective way to enhance geological information in LANDSAT TM/ETM+ images of terrains with significant vegetation cover. It is also suitable for other multispectral satellite data that have bands in similar wavelength regions. In addition, an application of this method to hyperspectral data may be possible as long as the vegetation band ratios can be provided.
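For illustration, the sketch below implements a stripped-down version of the forced-invariance rescaling (steps 3-5 above) together with the masking extension; the binning, the use of bin means as the "best-fit curve", and all parameter values are simplifications rather than the authors' exact procedure.

```python
import numpy as np

def forced_invariance(band_dn, ndvi, target_dn=None, nbins=50, veg_mask=None):
    """Simplified forced-invariance vegetation suppression for one band.

    band_dn : 2-D array of (dark-pixel-corrected) digital numbers.
    ndvi    : 2-D vegetation index, same shape.
    veg_mask: boolean array; only True pixels are rescaled (masking extension).
    """
    out = band_dn.astype(float).copy()
    if veg_mask is None:
        veg_mask = np.ones(band_dn.shape, bool)
    if target_dn is None:
        target_dn = out[veg_mask].mean()

    # crude "best-fit curve": mean DN per NDVI bin
    bins = np.linspace(ndvi[veg_mask].min(), ndvi[veg_mask].max(), nbins + 1)
    idx = np.clip(np.digitize(ndvi, bins) - 1, 0, nbins - 1)
    for b in range(nbins):
        sel = veg_mask & (idx == b)
        if sel.any():
            curve_dn = out[sel].mean()
            # scale pixels at this NDVI level so the curve shifts to target_dn
            out[sel] *= target_dn / max(curve_dn, 1e-6)
    return out

rng = np.random.default_rng(0)
dn = rng.integers(20, 200, (100, 100)).astype(float)
ndvi = rng.uniform(-0.2, 0.9, (100, 100))
suppressed = forced_invariance(dn, ndvi, veg_mask=ndvi > 0.3)
```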
Automated Transient Identification in the Dark Energy Survey
NASA Astrophysics Data System (ADS)
Goldstein, D. A.; D'Andrea, C. B.; Fischer, J. A.; Foley, R. J.; Gupta, R. R.; Kessler, R.; Kim, A. G.; Nichol, R. C.; Nugent, P. E.; Papadopoulos, A.; Sako, M.; Smith, M.; Sullivan, M.; Thomas, R. C.; Wester, W.; Wolf, R. C.; Abdalla, F. B.; Banerji, M.; Benoit-Lévy, A.; Bertin, E.; Brooks, D.; Carnero Rosell, A.; Castander, F. J.; da Costa, L. N.; Covarrubias, R.; DePoy, D. L.; Desai, S.; Diehl, H. T.; Doel, P.; Eifler, T. F.; Fausti Neto, A.; Finley, D. A.; Flaugher, B.; Fosalba, P.; Frieman, J.; Gerdes, D.; Gruen, D.; Gruendl, R. A.; James, D.; Kuehn, K.; Kuropatkin, N.; Lahav, O.; Li, T. S.; Maia, M. A. G.; Makler, M.; March, M.; Marshall, J. L.; Martini, P.; Merritt, K. W.; Miquel, R.; Nord, B.; Ogando, R.; Plazas, A. A.; Romer, A. K.; Roodman, A.; Sanchez, E.; Scarpine, V.; Schubnell, M.; Sevilla-Noarbe, I.; Smith, R. C.; Soares-Santos, M.; Sobreira, F.; Suchyta, E.; Swanson, M. E. C.; Tarle, G.; Thaler, J.; Walker, A. R.
2015-09-01
We describe an algorithm for identifying point-source transients and moving objects on reference-subtracted optical images containing artifacts of processing and instrumentation. The algorithm makes use of the supervised machine learning technique known as Random Forest. We present results from its use in the Dark Energy Survey Supernova program (DES-SN), where it was trained using a sample of 898,963 signal and background events generated by the transient detection pipeline. After reprocessing the data collected during the first DES-SN observing season (2013 September through 2014 February) using the algorithm, the number of transient candidates eligible for human scanning decreased by a factor of 13.4, while only 1.0% of the artificial Type Ia supernovae (SNe) injected into search images to monitor survey efficiency were lost, most of which were very faint events. Here we characterize the algorithm’s performance in detail, and we discuss how it can inform pipeline design decisions for future time-domain imaging surveys, such as the Large Synoptic Survey Telescope and the Zwicky Transient Facility. An implementation of the algorithm and the training data used in this paper are available at http://portal.nersc.gov/project/dessn/autoscan.
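As a rough illustration of the kind of supervised classification described (not the actual autoscan feature set, training data, or hyperparameters), the following sketch trains a Random Forest on synthetic candidate features and scores held-out candidates.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-ins for candidate features and labels; the real classifier
# is trained on image-derived features from difference-image stamps.
rng = np.random.default_rng(1)
n = 5000
X = rng.normal(size=(n, 10))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.8, size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_tr, y_tr)
scores = clf.predict_proba(X_te)[:, 1]    # per-candidate "ML score"
print("held-out AUC:", round(roc_auc_score(y_te, scores), 3))
```

Thresholding such a score is what reduces the number of candidates sent to human scanners while keeping the loss of injected supernovae small.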
Searching for dark absorption with direct detection experiments
Bloch, Itay M.; Essig, Rouven; Tobioka, Kohsaku; ...
2017-06-16
We consider the absorption by bound electrons of dark matter in the form of dark photons and axion-like particles, as well as of dark photons from the Sun, in current and next-generation direct detection experiments. Experiments sensitive to electron recoils can detect such particles with masses between a few eV and more than 10 keV. For dark photon dark matter, we update a previous bound based on XENON10 data and derive new bounds based on data from XENON100 and CDMSlite. We find these experiments to disfavor previously allowed parameter space. Moreover, we derive sensitivity projections for SuperCDMS at SNOLAB for silicon and germanium targets, as well as for various possible experiments with scintillating targets (cesium iodide, sodium iodide, and gallium arsenide). The projected sensitivity can probe large new regions of parameter space. For axion-like particles, the same current direct-detection data improve on previously known direct-detection constraints but do not bound new parameter space beyond known stellar cooling bounds. However, projected sensitivities of the upcoming SuperCDMS SNOLAB using germanium can go beyond these and even probe parameter space consistent with possible hints from the white dwarf luminosity function. We find similar results for dark photons from the Sun. For all cases, direct-detection experiments can have unprecedented sensitivity to dark-sector particles.
NASA Astrophysics Data System (ADS)
Bal, A.; Alam, M. S.; Aslan, M. S.
2006-05-01
Often sensor ego-motion or fast target movement causes the target to temporarily go out of the field of view, leading to the reappearing-target detection problem in target tracking applications. Since the target goes out of the current frame and reenters at a later frame, the reentering location and the variations in rotation, scale, and other 3D orientations of the target are not known, which complicates detection. A detection algorithm has been developed using the Fukunaga-Koontz Transform (FKT) and a distance classifier correlation filter (DCCF). The detection algorithm uses target and background information, extracted from training samples, to detect possible candidate target images. The detected candidate target images are then introduced into the second algorithm, the DCCF-based clutter rejection module, which rejects false candidates; once the target coordinates are determined, the tracking algorithm is initiated. The performance of the proposed FKT-DCCF based target detection algorithm has been tested using real-world forward looking infrared (FLIR) video sequences.
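For reference, a minimal sketch of the Fukunaga-Koontz Transform itself is given below: the summed target-plus-clutter covariance is whitened, after which the two class covariances share eigenvectors and their eigenvalues sum to one, giving features that favor one class over the other. The toy data and dimensions are assumptions, not the paper's FLIR training set.

```python
import numpy as np

def fukunaga_koontz_basis(X_target, X_clutter):
    """Fukunaga-Koontz Transform (FKT) sketch.

    X_target, X_clutter : (n_samples, n_features) training chips for the
    target and background classes. Returns a shared basis and the
    target-class eigenvalues: values near 1 mark directions dominated by
    the target class, values near 0 by the clutter class.
    """
    S_t = np.cov(X_target, rowvar=False)
    S_c = np.cov(X_clutter, rowvar=False)

    # Whiten the summed covariance: with S_t + S_c = V D V^T, set P = D^{-1/2} V^T
    evals, V = np.linalg.eigh(S_t + S_c)
    keep = evals > 1e-10
    P = (V[:, keep] / np.sqrt(evals[keep])).T

    # In the whitened space the two class covariances share eigenvectors
    lam, U = np.linalg.eigh(P @ S_t @ P.T)
    basis = P.T @ U                      # columns = FKT features
    return basis, lam

rng = np.random.default_rng(0)
basis, lam = fukunaga_koontz_basis(rng.normal(size=(200, 16)),
                                   rng.normal(scale=2.0, size=(300, 16)))
print(lam.min(), lam.max())              # eigenvalues lie in [0, 1]
```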
Comparison of human and algorithmic target detection in passive infrared imagery
NASA Astrophysics Data System (ADS)
Weber, Bruce A.; Hutchinson, Meredith
2003-09-01
We have designed an experiment that compares the performance of human observers and a scale-insensitive target detection algorithm that uses pixel level information for the detection of ground targets in passive infrared imagery. The test database contains targets near clutter whose detectability ranged from easy to very difficult. Results indicate that human observers detect more "easy-to-detect" targets, and with far fewer false alarms, than the algorithm. For "difficult-to-detect" targets, human and algorithm detection rates are considerably degraded, and algorithm false alarms are excessive. Analysis of detections as a function of observer confidence shows that algorithm confidence attribution does not correspond to human attribution, and does not adequately correlate with correct detections. The best target detection score for any human observer was 84%, as compared to 55% for the algorithm at the same false alarm rate. At 81%, the maximum detection score for the algorithm, the same human observer had 6 false alarms per frame as compared to 29 for the algorithm. Detector ROC curves and observer-confidence analysis benchmark the algorithm and provide insights into algorithm deficiencies and possible paths to improvement.
Results from the first use of low radioactivity argon in a dark matter search
NASA Astrophysics Data System (ADS)
Agnes, P.; Agostino, L.; Albuquerque, I. F. M.; Alexander, T.; Alton, A. K.; Arisaka, K.; Back, H. O.; Baldin, B.; Biery, K.; Bonfini, G.; Bossa, M.; Bottino, B.; Brigatti, A.; Brodsky, J.; Budano, F.; Bussino, S.; Cadeddu, M.; Cadonati, L.; Cadoni, M.; Calaprice, F.; Canci, N.; Candela, A.; Cao, H.; Cariello, M.; Carlini, M.; Catalanotti, S.; Cavalcante, P.; Chepurnov, A.; Cocco, A. G.; Covone, G.; Crippa, L.; D'Angelo, D.; D'Incecco, M.; Davini, S.; De Cecco, S.; De Deo, M.; De Vincenzi, M.; Derbin, A.; Devoto, A.; Di Eusanio, F.; Di Pietro, G.; Edkins, E.; Empl, A.; Fan, A.; Fiorillo, G.; Fomenko, K.; Forster, G.; Franco, D.; Gabriele, F.; Galbiati, C.; Giganti, C.; Goretti, A. M.; Granato, F.; Grandi, L.; Gromov, M.; Guan, M.; Guardincerri, Y.; Hackett, B. R.; Herner, K.; Hungerford, E. V.; Ianni, Al.; Ianni, An.; James, I.; Jollet, C.; Keeter, K.; Kendziora, C. L.; Kobychev, V.; Koh, G.; Korablev, D.; Korga, G.; Kubankin, A.; Li, X.; Lissia, M.; Lombardi, P.; Luitz, S.; Ma, Y.; Machulin, I. N.; Mandarano, A.; Mari, S. M.; Maricic, J.; Marini, L.; Martoff, C. J.; Meregaglia, A.; Meyers, P. D.; Miletic, T.; Milincic, R.; Montanari, D.; Monte, A.; Montuschi, M.; Monzani, M.; Mosteiro, P.; Mount, B. J.; Muratova, V. N.; Musico, P.; Napolitano, J.; Nelson, A.; Odrowski, S.; Orsini, M.; Ortica, F.; Pagani, L.; Pallavicini, M.; Pantic, E.; Parmeggiano, S.; Pelczar, K.; Pelliccia, N.; Perasso, S.; Pocar, A.; Pordes, S.; Pugachev, D. A.; Qian, H.; Randle, K.; Ranucci, G.; Razeto, A.; Reinhold, B.; Renshaw, A. L.; Romani, A.; Rossi, B.; Rossi, N.; Rountree, D.; Sablone, D.; Saggese, P.; Saldanha, R.; Sands, W.; Sangiorgio, S.; Savarese, C.; Segreto, E.; Semenov, D. A.; Shields, E.; Singh, P. N.; Skorokhvatov, M. D.; Smirnov, O.; Sotnikov, A.; Stanford, C.; Suvorov, Y.; Tartaglia, R.; Tatarowicz, J.; Testera, G.; Tonazzo, A.; Trinchese, P.; Unzhakov, E. V.; Vishneva, A.; Vogelaar, B.; Wada, M.; Walker, S.; Wang, H.; Wang, Y.; Watson, A. W.; Westerdale, S.; Wilhelmi, J.; Wojcik, M. M.; Xiang, X.; Xu, J.; Yang, C.; Yoo, J.; Zavatarelli, S.; Zec, A.; Zhong, W.; Zhu, C.; Zuzel, G.; DarkSide Collaboration
2016-04-01
Liquid argon is a bright scintillator with potent particle identification properties, making it an attractive target for direct-detection dark matter searches. The DarkSide-50 dark matter search here reports the first WIMP search results obtained using a target of low-radioactivity argon. DarkSide-50 is a dark matter detector, using a two-phase liquid argon time projection chamber, located at the Laboratori Nazionali del Gran Sasso. The underground argon is shown to contain 39Ar at a level reduced by a factor (1.4 ± 0.2) × 10^3 relative to atmospheric argon. We report a background-free null result from (2616 ± 43) kg d of data, accumulated over 70.9 live days. When combined with our previous search using an atmospheric-argon target, the 90% C.L. upper limit on the WIMP-nucleon spin-independent cross section, based on zero events found in the WIMP search regions, is 2.0 × 10^-44 cm^2 (8.6 × 10^-44 cm^2, 8.0 × 10^-43 cm^2) for a WIMP mass of 100 GeV/c^2 (1 TeV/c^2, 10 TeV/c^2).
A feature-preserving hair removal algorithm for dermoscopy images.
Abbas, Qaisar; Garcia, Irene Fondón; Emre Celebi, M; Ahmad, Waqar
2013-02-01
Accurate segmentation and repair of hair-occluded information from dermoscopy images are challenging tasks for computer-aided detection (CAD) of melanoma. Currently, many hair-restoration algorithms have been developed, but most of these fail to identify hairs accurately, and their removal technique is slow and disturbs the lesion's pattern. In this article, a novel hair-restoration algorithm is presented, which is capable of preserving skin lesion features such as color and texture and of segmenting both dark and light hairs. Our algorithm is based on three major steps: rough hairs are segmented using matched filtering with the first derivative of Gaussian (MF-FDOG) and thresholding, which generates strong responses for both dark and light hairs; the hair masks are then refined by morphological edge-based techniques; and finally the hair pixels are repaired through a fast marching inpainting method. Diagnostic accuracy (DA) and texture-quality measure (TQM) metrics, based on dermatologist-drawn manual hair masks used as ground truth, are utilized to evaluate the performance of the system. The hair-restoration algorithm is tested on 100 dermoscopy images. Comparisons have been made with (i) linear interpolation, (ii) inpainting by non-linear partial differential equations (PDE), and (iii) exemplar-based repairing techniques. Among the different hair detection and removal techniques, our proposed algorithm obtained the highest values of DA: 93.3% and TQM: 90%. The experimental results indicate that the proposed algorithm is highly accurate, robust and able to restore hair pixels without damaging the lesion texture. This method is fully automatic and can be easily integrated into a CAD system. © 2011 John Wiley & Sons A/S.
Inelastic Boosted Dark Matter at direct detection experiments
NASA Astrophysics Data System (ADS)
Giudice, Gian F.; Kim, Doojin; Park, Jong-Chul; Shin, Seodong
2018-05-01
We explore a novel class of multi-particle dark sectors, called Inelastic Boosted Dark Matter (iBDM). These models are constructed by combining properties of particles that scatter off matter by making transitions to heavier states (Inelastic Dark Matter) with properties of particles that are produced with a large Lorentz boost in annihilation processes in the galactic halo (Boosted Dark Matter). This combination leads to new signals that can be observed at ordinary direct detection experiments, but require unconventional searches for energetic recoil electrons in coincidence with displaced multi-track events. Related experimental strategies can also be used to probe MeV-range boosted dark matter via their interactions with electrons inside the target material.
A difference tracking algorithm based on discrete sine transform
NASA Astrophysics Data System (ADS)
Liu, HaoPeng; Yao, Yong; Lei, HeBing; Wu, HaoKun
2018-04-01
Target tracking is an important field of computer vision. Template matching tracking algorithms based on squared-difference matching (SSD) and normalized correlation coefficient (NCC) matching are very sensitive to changes in image gray level. When the brightness or gray level changes, the tracking algorithm is affected by high-frequency information, tracking accuracy is reduced, and the tracked target may be lost. In this paper, a difference tracking algorithm based on the discrete sine transform is proposed to reduce the influence of changes in image gray level or brightness. The algorithm, which combines the discrete sine transform with a difference operation, maps the target image into a digital sequence. A Kalman filter predicts the target position. The Hamming distance determines the degree of similarity between the target and the template, and the window closest to the template is taken as the target to be tracked; the tracked target then updates the template. Target tracking is achieved on this basis. The algorithm is tested in this paper. Compared with the SSD and NCC template matching algorithms, the proposed algorithm tracks the target stably when the image gray level or brightness changes, and the tracking speed can meet real-time requirements.
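A minimal sketch of the core idea, combining a discrete sine transform with a difference step and a Hamming-distance comparison, is shown below. The patch size, the number of retained coefficients, and the mean subtraction are assumptions for illustration, not the authors' exact design; SciPy is assumed available.

```python
import numpy as np
from scipy.fft import dstn

def dst_signature(patch, k=8):
    """Binary signature of a gray-scale patch: subtract the mean, take the 2-D
    discrete sine transform, keep the k x k lowest-frequency coefficients and
    encode the sign of successive differences (the 'difference' step)."""
    p = patch.astype(float)
    p -= p.mean()                                   # discard overall brightness
    coeffs = dstn(p, type=2)[:k, :k].ravel()
    return np.diff(coeffs) > 0

def hamming_distance(a, b):
    return int(np.count_nonzero(a != b))

rng = np.random.default_rng(0)
template = rng.random((32, 32))
relit = 1.6 * template + 20.0                       # global gray-level change
unrelated = rng.random((32, 32))

sig = dst_signature(template)
print(hamming_distance(sig, dst_signature(relit)))      # 0: unchanged by the gray shift
print(hamming_distance(sig, dst_signature(unrelated)))  # large: different content
```

Sliding such a signature over the windows near the Kalman-predicted position and keeping the smallest Hamming distance is one simple way to pick the window "closest to the template".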
Research on target tracking algorithm based on spatio-temporal context
NASA Astrophysics Data System (ADS)
Li, Baiping; Xu, Sanmei; Kang, Hongjuan
2017-07-01
In this paper, a novel target tracking algorithm based on spatio-temporal context is proposed. During the tracking process, camera shake or occlusion may lead to tracking failure; the proposed algorithm can solve this problem effectively. The method uses the spatio-temporal context algorithm as the main research object. The target region in the first frame is selected with the mouse, and the spatio-temporal context algorithm is then used to track the target through the sequence of frames. During this process, a similarity measure function based on a perceptual hash algorithm is used to judge the tracking results; if tracking fails, the initial value of the Mean Shift algorithm is reset for subsequent target tracking. Experimental results show that the proposed algorithm can achieve real-time and stable tracking when the camera shakes or the target is occluded.
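As an illustration of the similarity check described above, the sketch below implements a simple perceptual average-hash and a normalized Hamming-distance similarity; the hash size and the surrounding logic are assumptions, not necessarily the exact hash used in the paper.

```python
import numpy as np

def average_hash(patch, hash_size=8):
    """Perceptual average-hash: block-average a gray-scale patch down to
    hash_size x hash_size and threshold at the mean (here 64 bits)."""
    h, w = patch.shape
    bh, bw = h // hash_size, w // hash_size
    small = (patch[:bh * hash_size, :bw * hash_size]
             .reshape(hash_size, bh, hash_size, bw).mean(axis=(1, 3)))
    return small > small.mean()

def similarity(hash_a, hash_b):
    """1.0 for identical hashes, ~0.5 for unrelated patches."""
    return 1.0 - np.count_nonzero(hash_a != hash_b) / hash_a.size

rng = np.random.default_rng(0)
tracked = rng.random((40, 40))
print(similarity(average_hash(tracked), average_hash(0.7 * tracked + 0.1)))   # 1.0
print(similarity(average_hash(tracked), average_hash(rng.random((40, 40)))))  # ~0.5
```

In a tracker, a similarity falling below a chosen threshold would flag tracking failure and trigger the Mean Shift reinitialization described above.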
DOE Office of Scientific and Technical Information (OSTI.GOV)
Calvo, J.; Cantini, C.; Crivelli, P.
The Argon Dark Matter (ArDM) experiment consists of a liquid argon (LAr) time projection chamber (TPC) sensitive to nuclear recoils resulting from the scattering of hypothetical Weakly Interacting Massive Particles (WIMPs) on argon targets. With an active target mass of 850 kg, ArDM represents an important milestone towards developments for large LAr Dark Matter detectors. Here we present the experimental apparatus currently installed underground at the Laboratorio Subterráneo de Canfranc (LSC), Spain. We show data on gaseous or liquid argon targets recorded in 2015 during the commissioning of ArDM in single phase at zero E-field (ArDM Run I). The data confirm the overall good and stable performance of the ArDM tonne-scale LAr detector.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Laloum, D., E-mail: david.laloum@cea.fr; CEA, LETI, MINATEC Campus, 17 rue des Martyrs, 38054 Grenoble Cedex 9; STMicroelectronics, 850 rue Jean Monnet, 38926 Crolles
2015-01-15
X-ray tomography is widely used in materials science. However, X-ray scanners are often based on polychromatic radiation that creates artifacts such as dark streaks. We show that this artifact is not always due to beam hardening. It may appear when scanning samples with high-Z elements inside a low-Z matrix because of the high-Z element absorption edge: X-rays whose energy is above this edge are strongly absorbed, violating the exponential decay assumption of reconstruction algorithms and generating dark streaks. A method is proposed to limit the absorption edge effect and is applied to a microelectronic case to suppress dark streaks between interconnections.
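The violation of the exponential-decay (Beer-Lambert) assumption can be illustrated with a toy two-energy beam whose attenuation coefficient jumps at an absorption edge; the numbers below are purely illustrative, not material data from the paper.

```python
import numpy as np

# Toy two-energy beam: one component just below and one just above a high-Z
# absorption edge; attenuation coefficients (1/mm) and weights are made up.
weights = np.array([0.5, 0.5])            # spectral weights of the beam
mu = np.array([0.2, 2.0])                 # attenuation below / above the edge

t = np.linspace(0.5, 5.0, 10)             # path length through the sample, mm
I_over_I0 = (weights[None, :] * np.exp(-np.outer(t, mu))).sum(axis=1)
p = -np.log(I_over_I0)                    # projection value seen by the scanner

# For a monoenergetic beam p/t would be constant; here the effective
# attenuation falls as the strongly absorbed above-edge component dies out,
# which is what breaks the reconstruction's exponential-decay assumption.
print(np.round(p / t, 3))
```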
DarkSide search for dark matter
NASA Astrophysics Data System (ADS)
Alexander, T.; Alton, D.; Arisaka, K.; Back, H. O.; Beltrame, P.; Benziger, J.; Bonfini, G.; Brigatti, A.; Brodsky, J.; Bussino, S.; Cadonati, L.; Calaprice, F.; Candela, A.; Cao, H.; Cavalcante, P.; Chepurnov, A.; Chidzik, S.; Cocco, A. G.; Condon, C.; D'Angelo, D.; Davini, S.; De Vincenzi, M.; De Haas, E.; Derbin, A.; Di Pietro, G.; Dratchnev, I.; Durben, D.; Empl, A.; Etenko, A.; Fan, A.; Fiorillo, G.; Franco, D.; Fomenko, K.; Forster, G.; Gabriele, F.; Galbiati, C.; Gazzana, S.; Ghiano, C.; Goretti, A.; Grandi, L.; Gromov, M.; Guan, M.; Guo, C.; Guray, G.; Hungerford, E. V.; Ianni, Al; Ianni, An; Joliet, C.; Kayunov, A.; Keeter, K.; Kendziora, C.; Kidner, S.; Klemmer, R.; Kobychev, V.; Koh, G.; Komor, M.; Korablev, D.; Korga, G.; Li, P.; Loer, B.; Lombardi, P.; Love, C.; Ludhova, L.; Luitz, S.; Lukyanchenko, L.; Lund, A.; Lung, K.; Ma, Y.; Machulin, I.; Mari, S.; Maricic, J.; Martoff, C. J.; Meregaglia, A.; Meroni, E.; Meyers, P.; Mohayai, T.; Montanari, D.; Montuschi, M.; Monzani, M. E.; Mosteiro, P.; Mount, B.; Muratova, V.; Nelson, A.; Nemtzow, A.; Nurakhov, N.; Orsini, M.; Ortica, F.; Pallavicini, M.; Pantic, E.; Parmeggiano, S.; Parsells, R.; Pelliccia, N.; Perasso, L.; Perasso, S.; Perfetto, F.; Pinsky, L.; Pocar, A.; Pordes, S.; Randle, K.; Ranucci, G.; Razeto, A.; Romani, A.; Rossi, B.; Rossi, N.; Rountree, S. D.; Saggese, P.; Saldanha, R.; Salvo, C.; Sands, W.; Seigar, M.; Semenov, D.; Shields, E.; Skorokhvatov, M.; Smirnov, O.; Sotnikov, A.; Sukhotin, S.; Suvarov, Y.; Tartaglia, R.; Tatarowicz, J.; Testera, G.; Thompson, J.; Tonazzo, A.; Unzhakov, E.; Vogelaar, R. B.; Wang, H.; Westerdale, S.; Wojcik, M.; Wright, A.; Xu, J.; Yang, C.; Zavatarelli, S.; Zehfus, M.; Zhong, W.; Zuzel, G.
2013-11-01
The DarkSide staged program utilizes a two-phase time projection chamber (TPC) with liquid argon as the target material for the scattering of dark matter particles. Efficient background reduction is achieved using low-radioactivity underground argon as well as several experimental handles such as pulse shape, the ratio of ionization over scintillation signal, 3D event reconstruction, and active neutron and muon vetos. The DarkSide-10 prototype detector has demonstrated a high scintillation light yield, which is a particularly important parameter as it sets the energy threshold for the pulse shape discrimination technique. The DarkSide-50 detector system, currently in its commissioning phase at the Gran Sasso Underground Laboratory, will reach a sensitivity to the dark matter spin-independent scattering cross section of 10^-45 cm^2 within 3 years of operation.
Next Generation of Air Quality Measurements from Geo Orbits: Breaking The Temporal Barrier
NASA Astrophysics Data System (ADS)
Gupta, P.; Levy, R. C.; Mattoo, S.; Remer, L.; Heidinger, A.
2017-12-01
NASA's Dark Target (DT) aerosol algorithm provides operational retrieval of atmospheric aerosols from multiple polar orbiting satellites. The DT algorithm, initially developed for MODIS observations, has been continuously improved since the first MODIS launch in early 2000. Now, we are adapting the DT algorithm to retrieve on new-generation geostationary (GEO) sensors, including the Advanced Himawari Imager (AHI) on Japan's Himawari-8 (H8) satellite and the Advanced Baseline Imager (ABI) on NOAA's GOES-16 (GOES-R). H8 is a geostationary weather satellite operating since July 2015, and AHI observes the earth-atmosphere system over the Asia-Pacific region at spatial resolutions of 1 km or less. GOES-R was launched in November 2016 and provides high temporal resolution observations over the Americas. Both imagers have 16 spectral channels, including 7 bands that observe wavelengths similar to the MODIS bands used for DT aerosol retrieval. Most exciting, however, is that both ABI and AHI provide full-disk observations every 10-15 minutes and zoom-mode observations every 30 seconds to 2.5 minutes. Therefore, the spectral, spatial and temporal resolution of observations from these GEO satellites provides the opportunity to monitor atmospheric aerosols in the region, plus a new capability to monitor aerosol transport and aerosol/cloud diurnal cycles. In this paper, we will introduce retrieval results from AHI using the DT algorithm during the KORUS-AQ field campaign in summer 2016. These results are evaluated against surface measurements (e.g. AERONET). We will also discuss its potential applications in monitoring diurnal cycles of urban pollution, smoke and dust in the region. The same DT algorithm will also be adapted to retrieve aerosol properties using GOES-16 over the Americas.
Dark Targets, Aerosols, Clouds and Toys
NASA Astrophysics Data System (ADS)
Remer, L. A.
2015-12-01
Today if you use the Thomson-Reuters Science Citations Index to search for "aerosol*", across all scientific disciplines and years, with no constraints, and you sort by number of citations, you will find a 2005 paper published in the Journal of the Atmospheric Sciences in the top 20. This is the "The MODIS Aerosol Algorithm, Products and Validation". Although I am the first author, there are in total 12 co-authors who each made a significant intellectual contribution to the paper or to the algorithm, products and validation described. This paper, that algorithm, those people lie at the heart of a lineage of scientists whose collaborations and linked individual pursuits have made a significant contribution to our understanding of radiative transfer and climate, of aerosol properties and the global aerosol system, of cloud physics and aerosol-cloud interaction, and how to measure these parameters and maximize the science that can be obtained from those measurements. The 'lineage' had its origins across the globe, from Soviet Russia to France, from the U.S. to Israel, from the Himalayas, the Sahel, the metropolises of Sao Paulo, Taipei, and the cities of east and south Asia. It came together in the 1990s and 2000s at the NASA Goddard Space Flight Center, using cultural diversity as a strength to form a common culture of scientific creativity that continues to this day. The original algorithm has spawned daughter algorithms that are being applied to new satellite and airborne sensors. The original MODIS products have been fundamental to analyses as diverse as air quality monitoring and aerosol-cloud forcing. AERONET, designed originally for the need of validation, is now its own thriving institution, and the lineage continues to push forward to provide new technology for the coming generations.
NASA Astrophysics Data System (ADS)
Inc, Mustafa; Aliyu, Aliyu Isa; Yusuf, Abdullahi; Baleanu, Dumitru
2018-01-01
This paper obtains the dark, bright, dark-bright or combined optical and singular solitons of the perturbed nonlinear Schrödinger-Hirota equation (SHE) with spatio-temporal dispersion (STD) and Kerr law nonlinearity in optical fibers. The integration algorithm is the Sine-Gordon equation method (SGEM). Furthermore, the modulation instability (MI) analysis of the equation is studied based on the standard linear-stability analysis, and the MI gain spectrum is obtained.
Coloma, Pilar; Dobrescu, Bogdan A.; Frugiuele, Claudia; ...
2016-04-08
High-intensity neutrino beam facilities may produce a beam of light dark matter when protons strike the target. Searches for such a dark matter beam using its scattering in a nearby detector must overcome the large neutrino background. We characterize the spatial and energy distributions of the dark matter and neutrino beams, focusing on their differences to enhance the sensitivity to dark matter. We find that a dark matter beam produced by a Z' boson in the GeV mass range is both broader and more energetic than the neutrino beam. The reach for dark matter is maximized for a detector sensitive to hard neutral-current scatterings, placed at a sizable angle off the neutrino beam axis. In the case of the Long-Baseline Neutrino Facility (LBNF), a detector placed at roughly 6 degrees off axis and at a distance of about 200 m from the target would be sensitive to Z' couplings as low as 0.05. This search can proceed symbiotically with neutrino measurements. We also show that the MiniBooNE and MicroBooNE detectors, which are on Fermilab’s Booster beamline, happen to be at an optimal angle from the NuMI beam and could perform searches with existing data. As a result, this illustrates potential synergies between LBNF and the short-baseline neutrino program if the detectors are positioned appropriately.
Laha, Ranjan
2018-02-01
Directional detection is an important way to detect dark matter. An input for these experiments is the dark matter velocity distribution. Recent hydrodynamical simulations have shown that the dark matter velocity distribution differs substantially from the Standard Halo Model. We study the impact of some of these updated velocity distributions on dark matter directional detection experiments. Here, we calculate the ratio of events required to confirm the forward-backward asymmetry and the existence of the ring of maximum recoil rate using different dark matter velocity distributions for 19F and Xe targets. We show that with the use of updated dark matter velocity profiles, the forward-backward asymmetry and the ring of maximum recoil rate can be confirmed using a factor of ~2-3 fewer events than with the Standard Halo Model.
Final Technical Report for ``Paths to Discovery at the LHC : Dark Matter and Track Triggering"
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hahn, Kristian
Particle Dark Matter (DM) is perhaps the most compelling and experimentally well-motivated new physics scenario anticipated at the Large Hadron Collider (LHC). The DE-SC0014073 award allowed the PI to define and pursue a path to the discovery of Dark Matter in Run-2 of the LHC with the Compact Muon Solenoid (CMS) experiment. CMS can probe regions of Dark Matter phase-space that direct and indirect detection experiments are unable to constrain. The PI’s team initiated the exploration of these regions, searching specifically for the associated production of Dark Matter with top quarks. The effort focuses on the high-yield, hadronic decays of W bosons produced in top decay, which provide the highest sensitivity to DM produced through low-mass spin-0 mediators. The group developed identification algorithms that achieve high efficiency and purity in the selection of hadronic top decays, and analysis techniques that provide powerful signal discrimination in Run-2. The ultimate reach of new physics searches with CMS will be established at the high-luminosity LHC (HL-LHC). To fully realize the sensitivity the HL-LHC promises, CMS must minimize the impact of soft, inelastic (“pileup”) interactions on the real-time “trigger” system the experiment uses for data refinement. Charged particle trajectory information (“tracking”) will be essential for pileup mitigation at the HL-LHC. The award allowed the PI’s team to develop firmware-based data delivery and track fitting algorithms for an unprecedented, real-time tracking trigger to sustain the experiment’s sensitivity to new physics in the next decade.
Dark Skies Rangers - Fighting light pollution and simulating dark skies
NASA Astrophysics Data System (ADS)
Doran, Rosa; Correia, Nelson; Guerra, Rita; Costa, Ana
2015-03-01
Dark Skies Rangers is an awareness program aimed at students of all ages, stimulating them to carry out an audit of light pollution in their school or district. The young light pollution fighters evaluate the level of light pollution and how much energy is being wasted, and produce a report to be delivered to the local authorities. They are also advised to run a light pollution awareness campaign for the local community, addressing not only dark skies but also other implications such as effects on our health and on the flora and fauna.
2015-04-08
The target of this observation, as seen by NASA's Mars Reconnaissance Orbiter, is a circular depression in a dark-toned unit associated with a field of cones to the northeast. At the image scale of a Context Camera image, the depression appears to expose layers, especially on its sides or walls, which are overlain by dark sands presumably associated with the dark-toned unit. HiRISE resolution, which is far higher than that of the Context Camera (which has a larger footprint), can help identify possible layers. http://photojournal.jpl.nasa.gov/catalog/PIA19358
Texture orientation-based algorithm for detecting infrared maritime targets.
Wang, Bin; Dong, Lili; Zhao, Ming; Wu, Houde; Xu, Wenhai
2015-05-20
Infrared maritime target detection is a key technology for maritime target searching systems. However, in infrared maritime images (IMIs) taken under complicated sea conditions, background clutters such as ocean waves, clouds or sea fog usually have high intensity that can easily overwhelm the brightness of real targets, which is difficult for traditional target detection algorithms to deal with. To mitigate this problem, this paper proposes a novel target detection algorithm based on texture orientation. The algorithm first extracts suspected targets by analyzing the intersubband correlation between the horizontal and vertical wavelet subbands of the original IMI on the first scale. Then self-adaptive wavelet threshold denoising and local singularity analysis of the original IMI are combined to further remove false alarms. Experiments show that compared with traditional algorithms, this algorithm can suppress background clutter much better and realize better single-frame detection of infrared maritime targets. Besides, in order to further guarantee accurate target extraction, a pipeline-filtering algorithm is adopted to eliminate residual false alarms. The high practical value and applicability of the proposed strategy are backed strongly by experimental data acquired under different environmental conditions.
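A minimal sketch of the first step, measuring the local co-activity of the first-scale horizontal and vertical wavelet detail subbands, is shown below; PyWavelets and SciPy are assumed available, and the Haar wavelet, the window size, and the synthetic frame are illustrative choices rather than the paper's settings.

```python
import numpy as np
import pywt
from scipy.ndimage import uniform_filter

def subband_correlation_map(image, window=5):
    """Local co-activity of the horizontal and vertical detail subbands of a
    one-level 2-D wavelet decomposition (a crude stand-in for the
    intersubband correlation described above). Compact targets respond in
    both orientations, while oriented wave clutter favors one of them."""
    _, (cH, cV, _) = pywt.dwt2(image.astype(float), "haar")
    return uniform_filter(np.abs(cH * cV), size=window)

rng = np.random.default_rng(0)
frame = rng.normal(0.0, 1.0, (64, 64))
frame[30:33, 40:43] += 8.0                            # small bright "target"
corr = subband_correlation_map(frame)
print(np.unravel_index(np.argmax(corr), corr.shape))  # peaks near the target (half resolution)
```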
2015-12-01
The use of social network analysis (SNA) has recently allowed the military to map dark networks of terrorist organizations, selectively target key elements, and use such data to improve security cooperation (SC). Subject terms: social network analysis, dark networks, light networks, dim networks, security cooperation, Southeast Asia.
Pons, Carmen; Mazade, Reece; Jin, Jianzhong; Dul, Mitchell W; Zaidi, Qasim; Alonso, Jose-Manuel
2017-12-01
Artists and astronomers noticed centuries ago that humans perceive dark features in an image differently from light ones; however, the neuronal mechanisms underlying these dark/light asymmetries remained unknown. Based on computational modeling of neuronal responses, we have previously proposed that such perceptual dark/light asymmetries originate from a luminance/response saturation within the ON retinal pathway. Consistent with this prediction, here we show that stimulus conditions that increase ON luminance/response saturation (e.g., dark backgrounds) or its effect on light stimuli (e.g., optical blur) impair the perceptual discrimination and salience of light targets more than dark targets in human vision. We also show that, in cat visual cortex, the magnitude of the ON luminance/response saturation remains relatively constant under a wide range of luminance conditions that are common indoors, and only shifts away from the lowest luminance contrasts under low mesopic light. Finally, we show that the ON luminance/response saturation affects visual salience mostly when the high spatial frequencies of the image are reduced by poor illumination or optical blur. Because both low luminance and optical blur are risk factors in myopia, our results suggest a possible neuronal mechanism linking myopia progression with the function of the ON visual pathway.
Infrared small target tracking based on SOPC
NASA Astrophysics Data System (ADS)
Hu, Taotao; Fan, Xiang; Zhang, Yu-Jin; Cheng, Zheng-dong; Zhu, Bin
2011-01-01
The paper presents a low-cost FPGA-based solution for a real-time infrared small target tracking system. A specialized architecture is presented, based on a soft RISC processor capable of running a kernel-based mean shift tracking algorithm. The mean shift tracking algorithm is realized in a NIOS II soft core with SOPC (System on a Programmable Chip) technology. Although the mean shift algorithm is widely used for target tracking, the original algorithm cannot be applied directly to infrared small target tracking, because an infrared small target only provides intensity information. An improved mean shift algorithm is therefore presented in this paper. How the target is described determines whether it can be tracked by the mean shift algorithm. Because color targets are tracked well by mean shift, and by analogy with a color image representation, a spatial component and a temporal component are introduced to describe the target, forming a pseudo-color image. To improve the processing speed, parallel and pipeline techniques are adopted: two RAMs store images alternately using a ping-pong scheme, and a FLASH memory stores bulk temporary data. The experimental results show that infrared small targets are tracked stably in complicated backgrounds.
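For reference, a bare-bones mean-shift iteration on a per-pixel weight map (such as a back-projection of the pseudo-color target model described above) might look like the following; the synthetic frame, window size, and weight definition are illustrative assumptions, not the FPGA implementation.

```python
import numpy as np

def mean_shift(weight, start, win=15, iters=20):
    """Plain mean-shift iteration on a per-pixel weight map."""
    y, x = start
    h, w = weight.shape
    for _ in range(iters):
        y0, y1 = max(0, y - win), min(h, y + win + 1)
        x0, x1 = max(0, x - win), min(w, x + win + 1)
        patch = weight[y0:y1, x0:x1]
        total = patch.sum()
        if total <= 0:
            break
        ys, xs = np.mgrid[y0:y1, x0:x1]
        ny = int(round((ys * patch).sum() / total))
        nx = int(round((xs * patch).sum() / total))
        if (ny, nx) == (y, x):       # converged
            break
        y, x = ny, nx
    return y, x

# Synthetic "infrared" frame: a dim Gaussian blob at (80, 50) on clipped noise.
rng = np.random.default_rng(0)
yy, xx = np.mgrid[0:128, 0:128]
frame = 5.0 * np.exp(-((yy - 80) ** 2 + (xx - 50) ** 2) / 50.0) + rng.normal(0, 0.3, (128, 128))
print(mean_shift(np.clip(frame, 0.0, None), start=(60, 60)))   # ends up near (80, 50)
```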
NASA Astrophysics Data System (ADS)
Levy, R. C.; Munchak, L. A.; Mattoo, S.; Patadia, F.; Remer, L. A.; Holz, R. E.
2015-10-01
To answer fundamental questions about aerosols in our changing climate, we must quantify both the current state of aerosols and how they are changing. Although NASA's Moderate Resolution Imaging Spectroradiometer (MODIS) sensors have provided quantitative information about global aerosol optical depth (AOD) for more than a decade, this period is still too short to create an aerosol climate data record (CDR). The Visible Infrared Imaging Radiometer Suite (VIIRS) was launched on the Suomi-NPP satellite in late 2011, with additional copies planned for future satellites. Can the MODIS aerosol data record be continued with VIIRS to create a consistent CDR? When compared to ground-based AERONET data, the VIIRS Environmental Data Record (V_EDR) has similar validation statistics as the MODIS Collection 6 (M_C6) product. However, the V_EDR and M_C6 are offset in regards to global AOD magnitudes, and tend to provide different maps of 0.55 μm AOD and 0.55/0.86 μm-based Ångström Exponent (AE). One reason is that the retrieval algorithms are different. Using the Intermediate File Format (IFF) for both MODIS and VIIRS data, we have tested whether we can apply a single MODIS-like (ML) dark-target algorithm on both sensors that leads to product convergence. Except for catering the radiative transfer and aerosol lookup tables to each sensor's specific wavelength bands, the ML algorithm is the same for both. We run the ML algorithm on both sensors between March 2012 and May 2014, and compare monthly mean AOD time series with each other and with M_C6 and V_EDR products. Focusing on the March-April-May (MAM) 2013 period, we compared additional statistics that include global and gridded 1° × 1° AOD and AE, histograms, sampling frequencies, and collocations with ground-based AERONET. Over land, use of the ML algorithm clearly reduces the differences between the MODIS and VIIRS-based AOD. However, although global offsets are near zero, some regional biases remain, especially in cloud fields and over brighter surface targets. Over ocean, use of the ML algorithm actually increases the offset between VIIRS and MODIS-based AOD (to ~ 0.025), while reducing the differences between AE. We characterize algorithm retrievability through statistics of retrieval fraction. In spite of differences between retrieved AOD magnitudes, the ML algorithm will lead to similar decisions about "whether to retrieve" on each sensor. Finally, we discuss how issues of calibration, as well as instrument spatial resolution may be contributing to the statistics and the ability to create a consistent MODIS → VIIRS aerosol CDR.
Toward Optimal Target Placement for Neural Prosthetic Devices
Cunningham, John P.; Yu, Byron M.; Gilja, Vikash; Ryu, Stephen I.; Shenoy, Krishna V.
2008-01-01
Neural prosthetic systems have been designed to estimate continuous reach trajectories (motor prostheses) and to predict discrete reach targets (communication prostheses). In the latter case, reach targets are typically decoded from neural spiking activity during an instructed delay period before the reach begins. Such systems use targets placed in radially symmetric geometries independent of the tuning properties of the neurons available. Here we seek to automate the target placement process and increase decode accuracy in communication prostheses by selecting target locations based on the neural population at hand. Motor prostheses that incorporate intended target information could also benefit from this consideration. We present an optimal target placement algorithm that approximately maximizes decode accuracy with respect to target locations. In simulated neural spiking data fit from two monkeys, the optimal target placement algorithm yielded statistically significant improvements up to 8 and 9% for two and sixteen targets, respectively. For four and eight targets, gains were more modest, as the target layouts found by the algorithm closely resembled the canonical layouts. We trained a monkey in this paradigm and tested the algorithm with experimental neural data to confirm some of the results found in simulation. In all, the algorithm can serve not only to create new target layouts that outperform canonical layouts, but it can also confirm or help select among multiple canonical layouts. The optimal target placement algorithm developed here is the first algorithm of its kind, and it should both improve decode accuracy and help automate target placement for neural prostheses. PMID:18829845
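As a toy illustration of the underlying idea, choosing target locations to suit the recorded population, the sketch below simulates cosine-tuned units with Poisson spiking, evaluates maximum-likelihood decode accuracy for a candidate layout, and compares a canonical radially symmetric layout against a crude random search. The paper's algorithm is an analytic optimization; every parameter here is a made-up assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
n_units, n_targets, trials = 8, 4, 300
pref = rng.uniform(0, 2 * np.pi, n_units)     # preferred directions
base, mod = 2.0, 1.5                          # baseline / modulation (spike counts)

def mean_counts(angles):
    return base + mod * np.cos(angles[:, None] - pref[None, :])   # (targets, units)

def decode_accuracy(angles):
    lam = mean_counts(angles)
    correct = 0
    for t in range(n_targets):
        counts = rng.poisson(lam[t], size=(trials, n_units))
        # Poisson log-likelihood per candidate target, constants dropped
        loglik = counts @ np.log(lam).T - lam.sum(axis=1)
        correct += np.count_nonzero(loglik.argmax(axis=1) == t)
    return correct / (trials * n_targets)

canonical = np.arange(n_targets) * 2 * np.pi / n_targets   # radially symmetric layout
best_angles, best_acc = canonical, decode_accuracy(canonical)
for _ in range(100):                                        # crude random search
    cand = np.sort(rng.uniform(0, 2 * np.pi, n_targets))
    acc = decode_accuracy(cand)
    if acc > best_acc:
        best_angles, best_acc = cand, acc
# note: the searched accuracy is optimistic, being selected on a noisy estimate
print(f"canonical: {decode_accuracy(canonical):.2f}  searched: {best_acc:.2f}")
```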
The Effects of Enhanced Disparity on Manual Control Stereopsis and Tracking Performance.
1981-06-22
… had both eyes open, but only one of the optical channels transmitted an image. [Figure 1. Diagram of the … (labels: IPD, JOYSTICK).] To enhance target detectability, a dark blue cloth was draped behind the targets. This dark background spanned ± 50 and had a luminance of …
GEANT4-based full simulation of the PADME experiment at the DAΦNE BTF
NASA Astrophysics Data System (ADS)
Leonardi, E.; Kozhuharov, V.; Raggi, M.; Valente, P.
2017-10-01
A possible solution to the dark matter problem postulates that dark particles can interact with Standard Model particles only through a new force mediated by a “portal”. If the new force has a U(1) gauge structure, the “portal” is a massive photon-like vector particle, called dark photon or A'. The PADME experiment at the DAΦNE Beam-Test Facility (BTF) in Frascati is designed to detect dark photons produced in positron on fixed target annihilations decaying to dark matter (e+e- → γA') by measuring the final state missing mass. The experiment will be composed of a thin active diamond target where a 550 MeV positron beam will impinge to produce e+e- annihilation events. The surviving beam will be deflected with a magnet while the photons produced in the annihilation will be measured by a calorimeter composed of BGO crystals. To reject the background from Bremsstrahlung gamma production, a set of segmented plastic scintillator vetoes will be used to detect positrons exiting the target with an energy lower than that of the beam, while a fast small angle calorimeter will be used to reject the e+e- → γγ(γ) background. To optimize the experimental layout in terms of signal acceptance and background rejection, the full layout of the experiment was modelled with the GEANT4 simulation package. In this paper we will describe the details of the simulation and report on the results obtained with the software.
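The missing-mass technique referred to above has the standard form (written here for reference in generic notation, not quoted from the paper): with the beam positron four-momentum P_{e+}, the target electron at rest, and the measured photon four-momentum P_γ,

```latex
M_{\mathrm{miss}}^{2} \;=\; \bigl( P_{e^{+}} + P_{e^{-}} - P_{\gamma} \bigr)^{2},
\qquad P_{e^{+}} = \bigl(E_{\mathrm{beam}},\,\vec{p}_{\mathrm{beam}}\bigr),
\quad P_{e^{-}} = \bigl(m_{e},\,\vec{0}\bigr),
```

so that a dark photon produced in e+e- → γA' would appear as a peak in the missing-mass distribution at the A' mass.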
Dark matter in dwarf spheroidal galaxies and indirect detection: a review
NASA Astrophysics Data System (ADS)
Strigari, Louis E.
2018-05-01
Indirect dark matter searches targeting dwarf spheroidal galaxies (dSphs) have matured rapidly during the past decade. This has been because of the substantial increase in kinematic data sets from the dSphs, the new dSphs that have been discovered, and the operation of the Fermi-LAT and many ground-based gamma-ray experiments. Here we review the analysis methods that have been used to determine the dSph dark matter distributions, in particular the ‘J-factors’, comparing and contrasting them, and detailing the underlying systematics that still affect the analysis. We discuss prospects for improving measurements of dark matter distributions, and how these interplay with future indirect dark matter searches.
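For reference, the 'J-factor' mentioned above is conventionally defined as the integral of the squared dark matter density along the line of sight and over the solid angle subtended by the dwarf (a standard definition, not a formula specific to this review):

```latex
J(\Delta\Omega) \;=\; \int_{\Delta\Omega} \mathrm{d}\Omega \int_{\mathrm{l.o.s.}} \rho_{\mathrm{DM}}^{2}\bigl(r(\ell,\Omega)\bigr)\,\mathrm{d}\ell ,
```

so that, for annihilating dark matter, the expected gamma-ray flux from a dwarf scales linearly with J, which is why the kinematic determination of the density profile dominates the systematics discussed here.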
MadDM: Computation of dark matter relic abundance
NASA Astrophysics Data System (ADS)
Backović, Mihailo; Kong, Kyoungchul; McCaskey, Mathew
2017-12-01
MadDM computes the dark matter relic abundance and dark matter-nucleus scattering rates in a generic model. The code is based on the existing MadGraph 5 architecture and as such is easily integrable into any MadGraph collider study. A simple Python interface offers a level of user-friendliness characteristic of MadGraph 5 without sacrificing functionality. MadDM is able to calculate the dark matter relic abundance in models which include a multi-component dark sector, resonance annihilation channels and co-annihilations. The direct detection module of MadDM calculates spin-independent and spin-dependent dark matter-nucleon cross sections and differential recoil rates as a function of recoil energy, angle and time. The code provides a simplified simulation of detector effects for a wide range of target materials and volumes.
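For context, the relic-abundance part of such a calculation numerically solves the standard Boltzmann equation for the dark matter number density n_\chi (generic textbook form, not MadDM's internal notation):

\frac{\mathrm{d}n_\chi}{\mathrm{d}t} + 3 H n_\chi = -\langle \sigma v \rangle \left( n_\chi^{2} - n_{\chi,\mathrm{eq}}^{2} \right),

which for a standard thermal relic gives the familiar rough scaling \Omega_\chi h^{2} \sim 3\times10^{-27}\,\mathrm{cm^{3}\,s^{-1}} / \langle\sigma v\rangle.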
GeV-scale dark matter: Production at the main injector
Dobrescu, Bogdan A.; Frugiuele, Claudia
2015-02-03
Assuming that dark matter particles interact with quarks via a GeV-scale mediator, we study dark matter production in fixed-target collisions. The ensuing signal in a neutrino near detector consists of neutral-current events with an energy distribution peaked at higher values than the neutrino background. We find that for a Z' boson of mass around a few GeV that decays to dark matter particles, the dark matter beam produced by the Main Injector at Fermilab allows the exploration of a range of values for the gauge coupling that currently satisfy all experimental constraints. The NOνA near detector is well positioned for probing the presence of a dark matter beam, and future LBNF near detectors would provide more sensitive probes.
Model-independent comparison of annual modulation and total rate with direct detection experiments
NASA Astrophysics Data System (ADS)
Kahlhoefer, Felix; Reindl, Florian; Schäffner, Karoline; Schmidt-Hoberg, Kai; Wild, Sebastian
2018-05-01
The relative sensitivity of different direct detection experiments depends sensitively on the astrophysical distribution and particle physics nature of dark matter, prohibiting a model-independent comparison. The situation changes fundamentally if two experiments employ the same target material. We show that in this case one can compare measurements of an annual modulation and exclusion bounds on the total rate while making no assumptions on astrophysics and no (or only very general) assumptions on particle physics. In particular, we show that the dark matter interpretation of the DAMA/LIBRA signal can be conclusively tested with COSINUS, a future experiment employing the same target material. We find that if COSINUS excludes a dark matter scattering rate of about 0.01 kg^-1 day^-1 with an energy threshold of 1.8 keV and resolution of 0.2 keV, it will rule out all explanations of DAMA/LIBRA in terms of dark matter scattering off sodium and/or iodine.
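For orientation, both quantities being compared (the total rate and its annual modulation) derive from the standard differential event rate, written here in a generic spin-independent form (not the paper's halo-independent parametrization):

\frac{\mathrm{d}R}{\mathrm{d}E_R}(t) = \frac{\rho_\chi\, \sigma_0\, F^{2}(E_R)}{2\, m_\chi\, \mu^{2}} \int_{v > v_{\min}(E_R)} \frac{f(\mathbf{v}, t)}{v}\, \mathrm{d}^{3}v ,

where the time dependence of the local velocity distribution f(\mathbf{v}, t), induced by the Earth's orbital motion, produces an approximately annual modulation R(t) \approx R_0 + S_m \cos[\omega (t - t_0)].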
Biased normalized cuts for target detection in hyperspectral imagery
NASA Astrophysics Data System (ADS)
Zhang, Xuewen; Dorado-Munoz, Leidy P.; Messinger, David W.; Cahill, Nathan D.
2016-05-01
The Biased Normalized Cuts (BNC) algorithm is a useful technique for detecting targets or objects in RGB imagery. In this paper, we propose modifying BNC for the purpose of target detection in hyperspectral imagery. As opposed to other target detection algorithms that typically encode target information prior to dimensionality reduction, our proposed algorithm encodes target information after dimensionality reduction, enabling a user to detect different targets in interactive mode. To assess the proposed BNC algorithm, we utilize hyperspectral imagery (HSI) from the SHARE 2012 data campaign, and we explore the relationship between the number and the position of expert-provided target labels and the precision/recall of the remaining targets in the scene.
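As a rough illustration of the biased normalized cuts idea, the sketch below follows the common formulation (bias the non-trivial generalized eigenvectors of the graph Laplacian toward a user-provided seed set). The affinity construction, parameter values and seed handling are illustrative assumptions, not the authors' implementation.

import numpy as np
from scipy.linalg import eigh

def biased_ncut(X, seed_idx, sigma=1.0, k=8, gamma=-0.1):
    """X: (n_pixels, n_bands) spectra; seed_idx: indices of user-marked target pixels.
    Returns a per-pixel score; large values indicate pixels grouped with the seeds."""
    # Affinity from pairwise spectral distances (dense; fine for small illustrative scenes).
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    W = np.exp(-d2 / (2.0 * sigma ** 2))
    D = np.diag(W.sum(axis=1))
    L = D - W                                   # unnormalized graph Laplacian
    vals, vecs = eigh(L, D)                     # generalized eigenproblem L u = lambda D u
    # Seed indicator, shifted to be D-orthogonal to the constant (trivial) eigenvector.
    s = np.zeros(X.shape[0]); s[seed_idx] = 1.0
    s -= (D @ s).sum() / np.trace(D)
    # Biased combination of the non-trivial eigenvectors (skip the constant mode).
    x = np.zeros(X.shape[0])
    for i in range(1, k):
        x += (vecs[:, i] @ (D @ s)) / (vals[i] - gamma) * vecs[:, i]
    return x

# toy usage: 100 "pixels" with 5 spectral bands, seeds at indices 0-4
scores = biased_ncut(np.random.rand(100, 5), seed_idx=[0, 1, 2, 3, 4])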
The DarkSide multiton detector for the direct dark matter search
Aalseth, C. E.; Agnes, P.; Alton, A.; ...
2015-01-01
Although the existence of dark matter is supported by many lines of evidence based on astrophysical measurements, its nature is still completely unknown. One major candidate is represented by weakly interacting massive particles (WIMPs), which could in principle be detected through their collisions with ordinary nuclei in a sensitive target, producing observable low-energy (<100 keV) nuclear recoils. The DarkSide program aims at WIMP detection using a liquid argon time projection chamber (LAr-TPC). In this paper we briefly review the DarkSide program, focusing in particular on the next-generation experiment DarkSide-G2, a 3.6-ton LAr-TPC. The different detector components are described, as well as the improvements needed to scale the detector from DarkSide-50 (a 50 kg LAr-TPC) up to DarkSide-G2. Finally, preliminary results on background suppression and the expected sensitivity are presented.
Coherent photon scattering background in sub- GeV / c 2 direct dark matter searches
Robinson, Alan E.
2017-01-18
Here, proposed dark matter detectors with eV-scale sensitivities will detect a large background of atomic (nuclear) recoils from coherent photon scattering of MeV-scale photons. This background climbs steeply below ~10 eV, far exceeding the declining rate of low-energy Compton recoils. The upcoming generation of dark matter detectors will not be limited by this background, but further development of eV-scale and sub-eV detectors will require strategies, including the use of low nuclear mass target materials, to maximize dark matter sensitivity while minimizing the coherent photon scattering background.
Final Technical Report for DE-SC0012297
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dell'Antonio, Ian
This is the final report on the work performed under award DE-SC0012297, Cosmic Frontier work in support of the LSST Dark Energy Science Collaboration's effort to develop algorithms, simulations, and statistical tests to ensure optimal extraction of the dark energy properties from galaxy clusters observed with LSST. This work focused on effects that could produce a systematic error on the measurement of cluster masses (which will be used to probe the effects of dark energy on the growth of structure). These effects stem from the deviations from pure ellipticity of the gravitational lensing signal and from the blending of light of neighboring galaxies. Both these effects are expected to be more significant for LSST than for Stage III experiments such as the Dark Energy Survey. We calculate the magnitude of the mass error (or bias) for the first time and demonstrate that it can be treated as a multiplicative correction and calibrated out, allowing mass measurements of clusters from gravitational lensing to meet the requirements of LSST's dark energy investigation.
NASA Astrophysics Data System (ADS)
Chen, Xiao; Li, Yaan; Yu, Jing; Li, Yuxing
2018-01-01
For fast and more effective tracking of multiple targets in a cluttered environment, we propose a multiple target tracking (MTT) algorithm, called maximum entropy fuzzy c-means clustering joint probabilistic data association, that combines fuzzy c-means clustering with the joint probabilistic data association (PDA) algorithm. The algorithm uses the membership value to express the probability that a target originated a given measurement. The membership value is obtained from a fuzzy c-means clustering objective function optimized by the maximum entropy principle. To account for the effect of shared measurements, we use a correction factor to adjust the association probability matrix when estimating the state of each target. Because this algorithm avoids splitting the confirmation matrix, it alleviates the high computational load of the joint PDA algorithm. Simulations and analysis of tracking neighboring parallel targets and crossing targets in cluttered environments of different densities show that the proposed algorithm can perform MTT quickly and efficiently in clutter. Further, the performance of the proposed algorithm remains nearly constant with increasing process noise variance. The proposed algorithm is efficient and has a low computational load, which ensures good performance when tracking multiple targets in a dense cluttered environment.
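A minimal sketch of the maximum-entropy fuzzy c-means step described above is given below (the entropy-regularized membership update only; parameter names and the simple Euclidean distance are assumptions, and the JPDA gating and correction-factor stages are omitted).

import numpy as np

def me_fcm_memberships(measurements, centers, beta=1.0, n_iter=20):
    """Entropy-regularized fuzzy c-means: memberships are a softmax of negative
    squared distances, then centers are re-estimated as membership-weighted means.
    measurements: (m, d) array; centers: (c, d) initial target-position estimates."""
    Z = np.asarray(measurements, float)
    C = np.asarray(centers, float).copy()
    for _ in range(n_iter):
        d2 = ((Z[:, None, :] - C[None, :, :]) ** 2).sum(-1)   # (m, c) squared distances
        U = np.exp(-d2 / beta)
        U /= U.sum(axis=1, keepdims=True)                     # membership of measurement j in target i
        C = (U.T @ Z) / U.sum(axis=0)[:, None]                # weighted-mean center update
    return U, C

# toy usage: 6 measurements, 2 predicted target positions
U, C = me_fcm_memberships(np.random.rand(6, 2), np.array([[0.2, 0.2], [0.8, 0.8]]))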
Serendipity in dark photon searches
NASA Astrophysics Data System (ADS)
Ilten, Philip; Soreq, Yotam; Williams, Mike; Xue, Wei
2018-06-01
Searches for dark photons provide serendipitous discovery potential for other types of vector particles. We develop a framework for recasting dark photon searches to obtain constraints on more general theories, which includes a data-driven method for determining hadronic decay rates. We demonstrate our approach by deriving constraints on a vector that couples to the B-L current, a leptophobic B boson that couples directly to baryon number and to leptons via B- γ kinetic mixing, and on a vector that mediates a protophobic force. Our approach can easily be generalized to any massive gauge boson with vector couplings to the Standard Model fermions, and software to perform any such recasting is provided at
NEWSdm: Nuclear Emulsions for WIMP Search with directional measurement
NASA Astrophysics Data System (ADS)
Di Crescenzo, A.
2017-12-01
Direct dark matter searches are nowadays one of the most exciting research topics. Several experimental efforts are concentrated on the development, construction, and operation of detectors looking for the scattering of target nuclei with Weakly Interacting Massive Particles (WIMPs). The measurement of the direction of WIMP-induced nuclear recoils is a challenging strategy to extend dark matter searches beyond the neutrino floor and provide an unambiguous signature of the detection of Galactic dark matter. Current directional experiments are based on gas TPCs, whose sensitivity is strongly limited by the small achievable detector mass. We present an innovative directional experiment based on a solid target made of newly developed nuclear emulsions and read-out systems reaching a position resolution of the order of 10 nm.
Phenomenology of ELDER dark matter
NASA Astrophysics Data System (ADS)
Kuflik, Eric; Perelstein, Maxim; Lorier, Nicolas Rey-Le; Tsai, Yu-Dai
2017-08-01
We explore the phenomenology of Elastically Decoupling Relic (ELDER) dark matter. ELDER is a thermal relic whose present density is determined primarily by the cross-section of its elastic scattering off Standard Model (SM) particles. Assuming that this scattering is mediated by a kinetically mixed dark photon, we argue that the ELDER scenario makes robust predictions for electron-recoil direct-detection experiments, as well as for dark photon searches. These predictions are independent of the details of interactions within the dark sector. Together with the closely related Strongly-Interacting Massive Particle (SIMP) scenario, the ELDER predictions provide a physically motivated, well-defined target region, which will be almost entirely accessible to the next generation of searches for sub-GeV dark matter and dark photons. We provide useful analytic approximations for various quantities of interest in the ELDER scenario, and discuss two simple renormalizable toy models which incorporate the required strong number-changing interactions among the ELDERs, as well as explicitly implement the coupling to electrons via the dark photon portal.
M$^3$: A New Muon Missing Momentum Experiment to Probe $(g-2)_{\mu}$ and Dark Matter at Fermilab
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kahn, Yonatan; Krnjaic, Gordan; Tran, Nhan
New light, weakly-coupled particles are commonly invoked to address the persistent $\sim 4\sigma$ anomaly in $(g-2)_\mu$ and serve as mediators between dark and visible matter. If such particles couple predominantly to heavier generations and decay invisibly, much of their best-motivated parameter space is inaccessible with existing experimental techniques. In this paper, we present a new fixed-target, missing-momentum search strategy to probe invisibly decaying particles that couple preferentially to muons. In our setup, a relativistic muon beam impinges on a thick active target. The signal consists of events in which a muon loses a large fraction of its incident momentum inside the target without initiating any detectable electromagnetic or hadronic activity in downstream veto systems. We propose a two-phase experiment, M$^3$ (Muon Missing Momentum), based at Fermilab. Phase 1 with $\sim 10^{10}$ muons on target can test the remaining parameter space for which light invisibly-decaying particles can resolve the $(g-2)_\mu$ anomaly, while Phase 2 with $\sim 10^{13}$ muons on target can test much of the predictive parameter space over which sub-GeV dark matter achieves freeze-out via muon-philic forces, including gauged $U(1)_{L_\mu - L_\tau}$.
NASA Astrophysics Data System (ADS)
Ly, Canh
2004-08-01
The Scan-MUSIC (SMUSIC) algorithm, developed by the U.S. Army Research Laboratory (ARL), improves angular resolution for target detection using a single rotatable radar scanning the angular region of interest. This algorithm has been adapted and extended from the MUSIC algorithm used for a linear sensor array. Previously, it was shown that the SMUSIC algorithm and a millimeter-wave radar can be used to resolve two closely spaced point targets that exhibit constructive interference, but not targets that exhibit destructive interference; the algorithm therefore had some limitations for point targets. In this paper, the SMUSIC algorithm is applied to the problem of resolving real, complex scatterer-type targets, which is more useful and of greater practical interest, particularly for future Army radar systems. The paper presents results of the angular resolution of two targets, an M60 tank and an M113 Armored Personnel Carrier (APC), that are within the mainlobe of a Ka-band radar antenna. In particular, we applied the algorithm to resolve the centroids of the targets when they were placed within the beamwidth of the antenna. The coherent data collected with the stepped-frequency radar were converted to magnitude data for the SMUSIC calculation. Even though the signal returns differed significantly for different orientations and offsets of the two targets, we resolved the two target centroids when they were as close as about 1/3 of the antenna beamwidth.
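For readers unfamiliar with the underlying method, a minimal sketch of the classical MUSIC pseudospectrum for a uniform linear array follows (standard textbook MUSIC, not ARL's Scan-MUSIC adaptation for a single rotating radar; array geometry and parameters are illustrative).

import numpy as np

def music_spectrum(snapshots, n_sources, d_over_lambda=0.5, angles=np.linspace(-90, 90, 721)):
    """snapshots: (n_elements, n_snapshots) complex array data.
    Returns the MUSIC pseudospectrum evaluated over 'angles' (degrees)."""
    n_el = snapshots.shape[0]
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]       # sample covariance
    eigvals, eigvecs = np.linalg.eigh(R)                          # ascending eigenvalues
    En = eigvecs[:, : n_el - n_sources]                           # noise subspace
    k = np.arange(n_el)
    spectrum = []
    for theta in np.deg2rad(angles):
        a = np.exp(2j * np.pi * d_over_lambda * k * np.sin(theta))  # steering vector
        spectrum.append(1.0 / np.real(a.conj() @ En @ En.conj().T @ a))
    return angles, np.array(spectrum)

# peaks of the pseudospectrum indicate source (target-centroid) directions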
Prospects for distinguishing dark matter models using annual modulation
Witte, Samuel J.; Gluscevic, Vera; McDermott, Samuel D.
2017-02-24
It has recently been demonstrated that, in the event of a putative signal in dark matter direct detection experiments, properly identifying the underlying dark matter-nuclei interaction promises to be a challenging task. Given the most optimistic expectations for the number counts of recoil events in the forthcoming Generation 2 experiments, differentiating between interactions that produce distinct features in the recoil energy spectra will only be possible if a strong signal is observed simultaneously on a variety of complementary targets. However, there is a wide range of viable theories that give rise to virtually identical energy spectra and may differ only in the dependence of the recoil rate on the dark matter velocity. In this work, we investigate how the degeneracy between such competing models may be broken by analyzing the time dependence of nuclear recoils, i.e. the annual modulation of the rate. For this purpose, we simulate dark matter events for a variety of interactions and experiments, and perform a Bayesian model-selection analysis on all simulated data sets, evaluating the chance of correctly identifying the input model for a given experimental setup. We find that including information on the annual modulation of the rate may significantly enhance the ability of a single target to distinguish dark matter models with nearly degenerate recoil spectra, but only with exposures beyond the expectations of Generation 2 experiments.
MULTINEST: an efficient and robust Bayesian inference tool for cosmology and particle physics
NASA Astrophysics Data System (ADS)
Feroz, F.; Hobson, M. P.; Bridges, M.
2009-10-01
We present further development and the first public release of our multimodal nested sampling algorithm, called MULTINEST. This Bayesian inference tool calculates the evidence, with an associated error estimate, and produces posterior samples from distributions that may contain multiple modes and pronounced (curving) degeneracies in high dimensions. The developments presented here lead to further substantial improvements in sampling efficiency and robustness, as compared to the original algorithm presented in Feroz & Hobson, which itself significantly outperformed existing Markov chain Monte Carlo techniques in a wide range of astrophysical inference problems. The accuracy and economy of the MULTINEST algorithm are demonstrated by application to two toy problems and to a cosmological inference problem focusing on the extension of the vanilla Λ cold dark matter model to include spatial curvature and a varying equation of state for dark energy. The MULTINEST software, which is fully parallelized using MPI and includes an interface to COSMOMC, is available at http://www.mrao.cam.ac.uk/software/multinest/. It will also be released as part of the SUPERBAYES package, for the analysis of supersymmetric theories of particle physics, at http://www.superbayes.org.
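To illustrate the evidence accumulation that nested sampling performs, here is a toy, rejection-sampling version on a one-dimensional problem (this is not the MULTINEST ellipsoidal sampler; the prior, likelihood and all parameters below are invented for illustration).

import numpy as np

rng = np.random.default_rng(0)
loglike = lambda x: -0.5 * ((x - 0.3) / 0.05) ** 2       # toy Gaussian likelihood on [0, 1]

n_live, n_iter = 200, 1000
live = rng.uniform(0, 1, n_live)
live_logl = loglike(live)
logZ = -np.inf
log_w = np.log(1.0 - np.exp(-1.0 / n_live))              # width of the first prior shell

for i in range(n_iter):
    worst = np.argmin(live_logl)                         # lowest-likelihood live point
    logZ = np.logaddexp(logZ, log_w + live_logl[worst])  # accumulate evidence
    # replace it with a new prior sample at higher likelihood (simple rejection step)
    while True:
        x = rng.uniform(0, 1)
        if loglike(x) > live_logl[worst]:
            live[worst], live_logl[worst] = x, loglike(x)
            break
    log_w -= 1.0 / n_live                                # prior volume shrinks each iteration

# add the contribution of the remaining live points (prior volume left ~ exp(-n_iter/n_live))
logZ = np.logaddexp(logZ, -n_iter / n_live + np.log(np.mean(np.exp(live_logl))))
print("log-evidence estimate:", logZ)   # analytic answer for this toy: ln(0.05*sqrt(2*pi)) ~ -2.08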
Global Long-Term SeaWiFS Deep Blue Aerosol Products available at NASA GES DISC
NASA Technical Reports Server (NTRS)
Shen, Suhung; Sayer, A. M.; Bettenhausen, Corey; Wei, Jennifer C.; Ostrenga, Dana M.; Vollmer, Bruce E.; Hsu, Nai-Yung; Kempler, Steven J.
2012-01-01
Long-term climate data records of aerosols are needed in order to improve understanding of air quality, radiative forcing, and for many other applications. The Sea-viewing Wide Field-of-view Sensor (SeaWiFS) provides a global, well-calibrated 13-year (1997-2010) record of top-of-atmosphere radiance, suitable for use in retrieval of atmospheric aerosol optical depth (AOD). Recently, global aerosol products derived from SeaWiFS with the Deep Blue algorithm (SWDB) have become available for the entire mission, as part of the NASA Making Earth Science data records for Use in Research for Earth Science (MEaSUREs) program. The latest Deep Blue algorithm retrieves aerosol properties not only over bright desert surfaces, but also over vegetated surfaces, oceans, and inland water bodies. Comparisons with AERONET observations have shown that the data are suitable for quantitative scientific use [1],[2]. The resolution of Level 2 pixels is 13.5 x 13.5 km^2 at the center of the swath. Level 3 daily and monthly data are composed from the best-quality Level 2 pixels at resolutions of both 0.5° x 0.5° and 1.0° x 1.0°. Focusing on the southwest Asia region, this presentation shows seasonal variations of AOD and the results of comparing 5 years (2003-2007) of AOD from SWDB (Version 3) and MODIS Aqua (Version 5.1) for the Dark Target (MYD-DT) and Deep Blue (MYD-DB) algorithms.
An Augmented Reality Endoscope System for Ureter Position Detection.
Yu, Feng; Song, Enmin; Liu, Hong; Li, Yunlong; Zhu, Jun; Hung, Chih-Cheng
2018-06-25
Iatrogenic injury of the ureter during clinical operations may cause serious complications and kidney damage. To avoid such medical accidents, it is necessary to provide ureter position information to the doctor. For the detection of the ureter position, a ureter position detection and display system with augmented reality is proposed to detect a ureter that is covered by human tissue. There are two key issues which should be considered in this new system. One is how to detect the covered ureter that cannot be captured by the electronic endoscope, and the other is how to display the ureter position with stable and high-quality images. At the same time, any processing delay in the system would disturb the surgery. An aided-hardware detection method and target detection algorithms are proposed in this system. To mark the ureter position, a surface-lighting plastic optical fiber (POF) with encoded light-emitting diode (LED) light is used to indicate the ureter position. The monochrome channel filtering algorithm (MCFA) is proposed to locate the ureter region more precisely. The ureter position is extracted using the proposed automatic region growing algorithm (ARGA), which utilizes the statistical information of the monochrome channel for the selection of the growing seed point. In addition, according to the pulse signal of the encoded light, the recognition of bright and dark frames based on the aided hardware (BDAH) is proposed to expedite processing. Experimental results demonstrate that the proposed endoscope system can identify 92.04% of the ureter region on average.
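A minimal sketch of the kind of seeded region growing described above is shown below (a generic intensity-threshold region grower on a single channel; the seed-selection statistics and encoded-light handling of the actual MCFA/ARGA methods are not reproduced, and all parameters are assumptions).

import numpy as np
from collections import deque

def region_grow(channel, seed, tol=12.0):
    """Grow a region from 'seed' (row, col) over 4-connected pixels whose intensity
    stays within 'tol' of the running region mean. channel: 2-D float array."""
    h, w = channel.shape
    mask = np.zeros((h, w), bool)
    mask[seed] = True
    total, count = float(channel[seed]), 1
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < h and 0 <= cc < w and not mask[rr, cc]:
                if abs(channel[rr, cc] - total / count) <= tol:
                    mask[rr, cc] = True
                    total += float(channel[rr, cc]); count += 1
                    queue.append((rr, cc))
    return mask

# usage: mask = region_grow(monochrome_channel.astype(float), seed=(240, 320))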
Multi exposure image fusion algorithm based on YCbCr space
NASA Astrophysics Data System (ADS)
Yang, T. T.; Fang, P. Y.
2018-05-01
To address the difficulty of preserving scene details and visual quality in high-dynamic-range image synthesis, we propose a multi-exposure image fusion algorithm that processes low-dynamic-range images in YCbCr space and performs weighted blending of the luminance and chrominance components separately. The experimental results show that the method can retain the color of the fused image while balancing the details of the bright and dark regions of the high-dynamic-range image.
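A compact sketch of the general approach is given below (BT.601 RGB-to-YCbCr conversion and a simple well-exposedness weight on luminance; the specific weight functions used by the authors are not given in the abstract, so the Gaussian weighting here is an assumption).

import numpy as np

def rgb_to_ycbcr(img):            # img: (H, W, 3) floats in [0, 1], BT.601 full-range
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 0.5 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 0.5 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def fuse_exposures(images):
    """images: list of (H, W, 3) low-dynamic-range exposures in [0, 1]."""
    ys, cbs, crs, ws = [], [], [], []
    for img in images:
        y, cb, cr = rgb_to_ycbcr(img)
        w = np.exp(-((y - 0.5) ** 2) / (2 * 0.2 ** 2)) + 1e-6    # favor well-exposed pixels
        ys.append(y); cbs.append(cb); crs.append(cr); ws.append(w)
    W = np.stack(ws); W /= W.sum(axis=0)                         # normalize weights per pixel
    fuse = lambda ch: (W * np.stack(ch)).sum(axis=0)
    y_f, cb_f, cr_f = fuse(ys), fuse(cbs), fuse(crs)             # weighted blend per component
    # back to RGB (inverse BT.601)
    r = y_f + 1.402 * (cr_f - 0.5)
    g = y_f - 0.344136 * (cb_f - 0.5) - 0.714136 * (cr_f - 0.5)
    b = y_f + 1.772 * (cb_f - 0.5)
    return np.clip(np.stack([r, g, b], axis=-1), 0, 1)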
Strong constraints on sub-GeV dark sectors from SLAC beam dump E137.
Batell, Brian; Essig, Rouven; Surujon, Ze'ev
2014-10-24
We present new constraints on sub-GeV dark matter and dark photons from the electron beam-dump experiment E137 conducted at SLAC in 1980-1982. Dark matter interacting with electrons (e.g., via a dark photon) could have been produced in the electron-target collisions and scattered off electrons in the E137 detector, producing the striking, zero-background signature of a high-energy electromagnetic shower that points back to the beam dump. E137 probes new and significant ranges of parameter space and constrains the well-motivated possibility that dark photons that decay to light dark-sector particles can explain the ∼3.6σ discrepancy between the measured and standard model value of the muon anomalous magnetic moment. It also restricts the parameter space in which the relic density of dark matter in these models is obtained from thermal freeze-out. E137 also convincingly demonstrates that (cosmic) backgrounds can be controlled and thus serves as a powerful proof of principle for future beam-dump searches for sub-GeV dark-sector particles scattering off electrons in the detector.
Analyzing the Discovery Potential for Light Dark Matter.
Izaguirre, Eder; Krnjaic, Gordan; Schuster, Philip; Toro, Natalia
2015-12-18
In this Letter, we determine the present status of sub-GeV thermal dark matter annihilating through standard model mixing, with special emphasis on interactions through the vector portal. Within representative simple models, we carry out a complete and precise calculation of the dark matter abundance and of all available constraints. We also introduce a concise framework for comparing different experimental approaches, and use this comparison to identify important ranges of dark matter mass and couplings to better explore in future experiments. The requirement that dark matter be a thermal relic sets a sharp sensitivity target for terrestrial experiments, and so we highlight complementary experimental approaches that can decisively reach this milestone sensitivity over the entire sub-GeV mass range.
Clustering analysis of moving target signatures
NASA Astrophysics Data System (ADS)
Martone, Anthony; Ranney, Kenneth; Innocenti, Roberto
2010-04-01
Previously, we developed a moving target indication (MTI) processing approach to detect and track slow-moving targets inside buildings, which successfully detected moving targets (MTs) from data collected by a low-frequency, ultra-wideband radar. Our MTI algorithms include change detection, automatic target detection (ATD), clustering, and tracking. The MTI algorithms can be implemented in a real-time or near-real-time system; however, a person-in-the-loop is needed to select input parameters for the clustering algorithm. Specifically, the number of clusters to input into the cluster algorithm is unknown and requires manual selection. A critical need exists to automate all aspects of the MTI processing formulation. In this paper, we investigate two techniques that automatically determine the number of clusters: the adaptive knee-point (KP) algorithm and the recursive pixel finding (RPF) algorithm. The KP algorithm is based on a well-known heuristic approach for determining the number of clusters. The RPF algorithm is analogous to the image processing, pixel labeling procedure. Both algorithms are used to analyze the false alarm and detection rates of three operational scenarios of personnel walking inside wood and cinderblock buildings.
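The knee-point idea can be sketched as follows (a generic elbow heuristic that picks the point of the error-vs-k curve farthest from the chord between its endpoints; the ARL adaptive KP and RPF algorithms themselves are not reproduced, and the k-means error criterion is an assumption).

import numpy as np
from sklearn.cluster import KMeans

def knee_point_k(points, k_max=10):
    """Pick the number of clusters at the 'knee' of the within-cluster-SSE curve."""
    ks = np.arange(1, k_max + 1)
    sse = np.array([KMeans(n_clusters=k, n_init=10, random_state=0).fit(points).inertia_
                    for k in ks])
    # normalize both axes, then take the point farthest from the chord between endpoints
    x = (ks - ks[0]) / (ks[-1] - ks[0])
    y = (sse - sse.min()) / (sse.max() - sse.min())
    chord = np.array([x[-1] - x[0], y[-1] - y[0]])
    chord /= np.linalg.norm(chord)
    rel = np.stack([x - x[0], y - y[0]], axis=1)
    dist = np.abs(rel[:, 0] * chord[1] - rel[:, 1] * chord[0])   # perpendicular distance
    return int(ks[np.argmax(dist)])

# usage: k = knee_point_k(detections_xy); labels = KMeans(n_clusters=k).fit_predict(detections_xy)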
An Improved Vision-based Algorithm for Unmanned Aerial Vehicles Autonomous Landing
NASA Astrophysics Data System (ADS)
Zhao, Yunji; Pei, Hailong
In the vision-based autonomous landing system of a UAV, the efficiency of target detection and tracking directly affects the control system. An improved SURF (Speeded-Up Robust Features) algorithm is proposed to address the inefficiency of the standard SURF algorithm in the autonomous landing system. The improved algorithm is composed of three steps: first, detect the region of the target using Camshift; second, detect feature points within the acquired region using the SURF algorithm; third, match the template target against the target region in each frame. Experimental results and theoretical analysis confirm the efficiency of the algorithm.
NASA Astrophysics Data System (ADS)
Pinales, J. C.; Graber, H. C.; Hargrove, J. T.; Caruso, M. J.
2016-02-01
Previous studies have demonstrated the ability to detect and classify marine hydrocarbon films with spaceborne synthetic aperture radar (SAR) imagery. The dampening effects of hydrocarbon discharges on small surface capillary-gravity waves render the ocean surface "radar dark" compared with the surrounding wind-roughened ocean surface. Given the scope and impact of events like the Deepwater Horizon oil spill, the need for improved, automated and expedient monitoring of hydrocarbon-related marine anomalies has become a pressing and complex issue for governments and the extraction industry. The research presented here describes the development, training, and utilization of an algorithm that detects marine oil spills in an automated, semi-supervised manner, utilizing X-, C-, or L-band SAR data as the primary input. Ancillary datasets include related radar-borne variables (incidence angle, etc.), environmental data (wind speed, etc.) and textural descriptors. Shapefiles produced by an experienced human analyst served as targets (validation) during the training portion of the investigation. Training and testing datasets were chosen for development and assessment of algorithm effectiveness as well as optimal conditions for oil detection in SAR data. The algorithm detects oil spills by following a 3-step methodology: object detection, feature extraction, and classification. Previous oil spill detection and classification methodologies such as machine learning algorithms, artificial neural networks (ANN), and multivariate classification methods like partial least squares-discriminant analysis (PLS-DA) are evaluated and compared. Statistical, transform, and model-based image texture techniques, commonly used for object mapping directly or as inputs for more complex methodologies, are explored to determine optimal textures for an oil spill detection system. The influence of the ancillary variables is explored, with a particular focus on the role of strong vs. weak wind forcing.
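The three-step structure (dark-object detection, feature extraction, classification) can be sketched generically as follows (a simple global threshold, a few region features and a random-forest classifier are stand-ins; the actual texture descriptors, ancillary wind inputs and classifier comparisons studied in this work are not reproduced).

import numpy as np
from skimage.measure import label, regionprops
from sklearn.ensemble import RandomForestClassifier

def dark_object_features(sigma0_db):
    """Detect radar-dark candidate objects in a calibrated SAR backscatter image (dB)
    and return per-object feature vectors plus the labeled object map."""
    dark = sigma0_db < (sigma0_db.mean() - 1.5 * sigma0_db.std())   # step 1: object detection
    labels = label(dark)
    feats = []
    for region in regionprops(labels):                               # step 2: feature extraction
        region_pixels = sigma0_db[labels == region.label]
        feats.append([region.area,
                      region.eccentricity,
                      region_pixels.mean(),
                      sigma0_db.mean() - region_pixels.mean()])      # contrast to background
    return np.array(feats), labels

# step 3: classification (training features/labels come from analyst-delineated scenes)
# clf = RandomForestClassifier(n_estimators=200).fit(train_feats, train_is_oil)
# oil_flags = clf.predict(test_feats)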
A Simple and Universal Aerosol Retrieval Algorithm for Landsat Series Images Over Complex Surfaces
NASA Astrophysics Data System (ADS)
Wei, Jing; Huang, Bo; Sun, Lin; Zhang, Zhaoyang; Wang, Lunche; Bilal, Muhammad
2017-12-01
Operational aerosol optical depth (AOD) products are available at coarse spatial resolutions from several to tens of kilometers. These resolutions limit the application of these products for monitoring atmospheric pollutants at the city level. Therefore, a simple, universal, and high-resolution (30 m) Landsat aerosol retrieval algorithm over complex urban surfaces is developed. The surface reflectance is estimated from a combination of top of atmosphere reflectance at short-wave infrared (2.22 μm) and Landsat 4-7 surface reflectance climate data records over densely vegetated areas and bright areas. The aerosol type is determined using the historical aerosol optical properties derived from the local urban Aerosol Robotic Network (AERONET) site (Beijing). AERONET ground-based sun photometer AOD measurements from five sites located in urban and rural areas are obtained to validate the AOD retrievals. Terra MODerate resolution Imaging Spectrometer Collection (C) 6 AOD products (MOD04) including the dark target (DT), the deep blue (DB), and the combined DT and DB (DT&DB) retrievals at 10 km spatial resolution are obtained for comparison purposes. Validation results show that the Landsat AOD retrievals at a 30 m resolution are well correlated with the AERONET AOD measurements (R2 = 0.932) and that approximately 77.46% of the retrievals fall within the expected error with a low mean absolute error of 0.090 and a root-mean-square error of 0.126. Comparison results show that Landsat AOD retrievals are overall better and less biased than MOD04 AOD products, indicating that the new algorithm is robust and performs well in AOD retrieval over complex surfaces. The new algorithm can provide continuous and detailed spatial distributions of AOD during both low and high aerosol loadings.
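A small sketch of the kind of validation statistics quoted above (R², MAE, RMSE and the fraction of retrievals within an expected-error envelope). The ±(0.05 + 0.15·AOD) envelope used here is the common MODIS-style convention and is an assumption, not necessarily the envelope used in this study.

import numpy as np

def aod_validation(aeronet, retrieved):
    a, r = np.asarray(aeronet, float), np.asarray(retrieved, float)
    resid = r - a
    rmse = np.sqrt(np.mean(resid ** 2))
    mae = np.mean(np.abs(resid))
    r2 = np.corrcoef(a, r)[0, 1] ** 2
    ee = 0.05 + 0.15 * a                          # assumed expected-error envelope
    frac_in_ee = np.mean(np.abs(resid) <= ee)
    return {"R2": r2, "MAE": mae, "RMSE": rmse, "within_EE": frac_in_ee}

# usage: stats = aod_validation(aeronet_aod_550, landsat_aod_550)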
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jaskowiak, J; Ahmad, S; Ali, I
Purpose: To investigate quantitatively the performance of different deformable image registration (DIR) algorithms with helical CT (HCT), axial CT (ACT) and cone-beam CT (CBCT) by evaluating the variations in the CT numbers and lengths of targets moving with controlled motion patterns. Methods: Four DIR algorithms, including demons, fast demons, Horn-Schunck and Lucas-Kanade from the DIRART software, are used to register CT images of a mobile phantom. The mobile phantom is scanned with different imaging techniques that include helical, axial and cone-beam CT. The phantom includes three targets of different lengths that are made from water-equivalent material and inserted in low-density foam, which is moved with adjustable motion amplitudes and frequencies. Results: Most of the DIR algorithms are able to reproduce the lengths of the stationary targets; however, they do not reproduce the CT-number values in CBCT. The image artifacts induced by motion are more regular in CBCT imaging, where the elongation of the mobile target increases linearly with motion amplitude. In ACT and HCT, the motion artifacts are irregular, and some mobile targets are elongated or shrunk depending on the motion phase during imaging. The DIR algorithms are successful in deforming the images of the mobile targets to the images of the stationary targets, reproducing the CT-number values and length of the target, for motion amplitudes < 20 mm. Similarly, in ACT, all DIR algorithms reproduced the actual CT number and length of the stationary targets for motion amplitudes < 15 mm. As stronger motion artifacts are induced in HCT and ACT, the DIR algorithms fail to reproduce the CT values and shape of the stationary targets, and the fast-demons algorithm has the worst performance. Conclusion: Most DIR algorithms reproduce the CT-number values and lengths of the stationary targets in HCT and ACT images with motion artifacts induced by small motion amplitudes. As motion amplitudes increase, the DIR algorithms fail to deform the mobile-target images to the stationary images in HCT and ACT. In CBCT, DIR algorithms are successful in reproducing the length and shape of the stationary targets; however, they fail to reproduce the accurate CT-number level.
Adaptive block online learning target tracking based on super pixel segmentation
NASA Astrophysics Data System (ADS)
Cheng, Yue; Li, Jianzeng
2018-04-01
Video target tracking has made considerable progress through the unremitting exploration of many researchers, but many problems remain unsolved. This paper proposes a new target tracking algorithm based on image segmentation technology. First, we segment the selected region using the simple linear iterative clustering (SLIC) algorithm; then we partition the area into blocks with an improved density-based spatial clustering of applications with noise (DBSCAN) algorithm. Each sub-block independently trains a classifier and is tracked; the algorithm then discards sub-blocks whose tracking fails and reintegrates the remaining sub-blocks into the tracking box to complete the target tracking. The experimental results show that, compared with current mainstream algorithms, our algorithm works effectively under occlusion, rotation, scale change and many other challenges in target tracking.
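A minimal sketch of the segmentation-and-blocking front end described above is given below (scikit-image SLIC followed by DBSCAN over superpixel centroids and mean colors; the improved DBSCAN variant, the per-block classifiers, the failure handling and all parameter values are illustrative assumptions).

import numpy as np
from skimage.segmentation import slic
from sklearn.cluster import DBSCAN

def superpixel_blocks(roi_rgb, n_segments=80, eps=0.25, min_samples=2):
    """roi_rgb: (H, W, 3) float image in [0, 1] of the selected target region.
    Returns the superpixel label map and a block id for each superpixel."""
    segments = slic(roi_rgb, n_segments=n_segments, compactness=10, start_label=0)
    h, w = segments.shape
    feats = []
    for sp in range(segments.max() + 1):
        ys, xs = np.nonzero(segments == sp)
        mean_rgb = roi_rgb[ys, xs].mean(axis=0)
        centroid = np.array([ys.mean() / h, xs.mean() / w])      # normalized position
        feats.append(np.concatenate([centroid, mean_rgb]))
    blocks = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(np.array(feats))
    return segments, blocks   # each block would then get its own classifier/tracker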
Fuzzy Neural Network-Based Interacting Multiple Model for Multi-Node Target Tracking Algorithm
Sun, Baoliang; Jiang, Chunlan; Li, Ming
2016-01-01
An interacting multiple model for multi-node target tracking algorithm was proposed based on a fuzzy neural network (FNN) to solve the multi-node target tracking problem of wireless sensor networks (WSNs). Measured error variance was adaptively adjusted during the multiple model interacting output stage using the difference between the theoretical and estimated values of the measured error covariance matrix. The FNN fusion system was established during multi-node fusion to integrate with the target state estimated data from different nodes and consequently obtain network target state estimation. The feasibility of the algorithm was verified based on a network of nine detection nodes. Experimental results indicated that the proposed algorithm could trace the maneuvering target effectively under sensor failure and unknown system measurement errors. The proposed algorithm exhibited great practicability in the multi-node target tracking of WSNs. PMID:27809271
Performance of resonant radar target identification algorithms using intra-class weighting functions
NASA Astrophysics Data System (ADS)
Mustafa, A.
The use of calibrated resonant-region radar cross section (RCS) measurements of targets for the classification of large aircraft is discussed. Errors in the RCS estimate of full-scale aircraft flying over an ocean, introduced by ionospheric variability and sea conditions, were studied. The Weighted Target Representative (WTR) classification algorithm was developed, implemented, tested and compared with the nearest neighbor (NN) algorithm. The WTR algorithm has low sensitivity to uncertainty in the aspect angle of the unknown target returns. In addition, this algorithm is based on a new catalog of representative data which reduces the storage requirements and increases the computational efficiency of the classification system compared to the NN algorithm. Experiments were designed to study and evaluate the characteristics of the WTR and NN algorithms, investigate the classifiability of targets, and study the relative behavior of the number of misclassifications as a function of the target backscatter features. The classification results and statistics are shown in the form of performance curves, performance tables and confusion tables.
The Spin and Orientation of Dark Matter Halos Within Cosmic Filaments
NASA Astrophysics Data System (ADS)
Zhang, Youcai; Yang, Xiaohu; Faltenbacher, Andreas; Springel, Volker; Lin, Weipeng; Wang, Huiyuan
2009-11-01
Clusters, filaments, sheets, and voids are the building blocks of the cosmic web. Forming dark matter halos respond to these different large-scale environments, and this in turn affects the properties of galaxies hosted by the halos. It is therefore important to understand the systematic correlations of halo properties with the morphology of the cosmic web, as this informs both about galaxy formation physics and possible systematics of weak lensing studies. In this study, we present and compare two distinct algorithms for finding cosmic filaments and sheets, a task which is far less well established than the identification of dark matter halos or voids. One method is based on the smoothed dark matter density field and the other uses the halo distributions directly. We apply both techniques to one high-resolution N-body simulation and reconstruct the filamentary/sheet-like network of the dark matter density field. We focus on investigating the properties of the dark matter halos inside these structures, in particular on the directions of their spins and the orientation of their shapes with respect to the directions of the filaments and sheets. We find that both the spin and the major axes of filament halos with masses ≲ 10^13 h^-1 M_sun are preferentially aligned with the direction of the filaments. The spins and major axes of halos in sheets tend to lie parallel to the sheets. There is an opposite mass dependence of the alignment strength for the spin (negative) and major (positive) axes, i.e. with increasing halo mass the major axis tends to be more strongly aligned with the direction of the filament, whereas the alignment between halo spin and filament becomes weaker with increasing halo mass. The alignment strength as a function of the distance to the most massive node halo indicates a transition in the large-scale environmental impact: from the two-dimensional collapse phase of the filament to the three-dimensional collapse phase of the cluster/node halo at small separation. Overall, the two algorithms for filament/sheet identification investigated here agree well with each other. The method based on halos alone can be easily adapted for use with observational data sets.
Alsbou, Nesreen; Ahmad, Salahuddin; Ali, Imad
2016-05-17
A motion algorithm has been developed to extract the length, CT number level and motion amplitude of a mobile target from cone-beam CT (CBCT) images. The algorithm uses three measurable parameters (the apparent length, the CT number level and the gradient of the blurred CT number distribution of a mobile target obtained from CBCT images) to determine the length, the CT-number value of the stationary target, and the motion amplitude. The predictions of this algorithm are tested with mobile targets of different well-known sizes, made from tissue-equivalent gel, which are inserted into a thorax phantom. The phantom moves sinusoidally in one direction to simulate respiratory motion, using eight amplitudes ranging from 0 to 20 mm. Using this motion algorithm, three unknown parameters are extracted from the CBCT images: the length of the target, the CT number level, and the speed or motion amplitude of the mobile target. The motion algorithm solves for the three unknown parameters using the measured length, CT number level and gradient for a well-defined mobile target obtained from CBCT images. The motion model agrees with the measured lengths, which depend on the target length and motion amplitude. The gradient of the CT number distribution of the mobile target depends on the stationary CT number level, the target length and the motion amplitude. Motion frequency and phase do not affect the elongation and CT number distribution of the mobile target and could not be determined. In summary, a motion algorithm has been developed to extract three parameters (length, CT number level, and motion amplitude or speed of mobile targets) directly from reconstructed CBCT images without prior knowledge of the stationary target parameters. This algorithm provides an alternative to 4D-CBCT without requiring motion tracking and sorting of the images into different breathing phases. The motion model developed here works well for tumors that have simple shapes, high contrast relative to surrounding tissues, and a nearly regular motion pattern that can be approximated with a simple sinusoidal function. This algorithm has potential applications in diagnostic CT imaging and radiotherapy for motion management.
First direct detection limits on sub-GeV dark matter from XENON10.
Essig, Rouven; Manalaysay, Aaron; Mardon, Jeremy; Sorensen, Peter; Volansky, Tomer
2012-07-13
The first direct detection limits on dark matter in the MeV to GeV mass range are presented, using XENON10 data. Such light dark matter can scatter with electrons, causing ionization of atoms in a detector target material and leading to single- or few-electron events. We use 15 kg-days of data acquired in 2006 to set limits on the dark matter-electron scattering cross section. The strongest bound is obtained at 100 MeV, where σ_e < 3×10^-38 cm^2 at 90% C.L., while dark matter masses between 20 MeV and 1 GeV are bounded by σ_e < 10^-37 cm^2 at 90% C.L. This analysis provides a first proof of principle that direct detection experiments can be sensitive to dark matter candidates with masses well below the GeV scale.
Low-Mass Dark Matter Search with the DarkSide-50 Experiment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Agnes, P.; et al.
We present the results of a search for dark matter WIMPs in the mass range below 20 GeV/c^2 using a target of low-radioactivity argon. The data were obtained using the DarkSide-50 apparatus at Laboratori Nazionali del Gran Sasso (LNGS). The analysis is based on the ionization signal, for which the DarkSide-50 time projection chamber is fully efficient at 0.1 keVee. The observed rate in the detector at 0.5 keVee is about 1.5 events/keVee/kg/day and is almost entirely accounted for by known background sources. We obtain a 90% C.L. exclusion limit above 1.8 GeV/c^2 for the spin-independent cross section of dark matter WIMPs on nucleons, extending the exclusion region for dark matter below previous limits in the range 1.8-6 GeV/c^2.
NASA Technical Reports Server (NTRS)
Ackermann, M.; Ajello, M.; Albert, A.; Atwood, W. B.; Baldini, L.; Ballet, J.; Barbiellini, G.; Bastieri, D.; Bechtol, K.; Bellazzini, R.;
2011-01-01
Satellite galaxies of the Milky Way are among the most promising targets for dark matter searches in gamma rays. We present a search for dark matter consisting of weakly interacting massive particles, applying a joint likelihood analysis to 10 satellite galaxies with 24 months of data of the Fermi Large Area Telescope. No dark matter signal is detected. Including the uncertainty in the dark matter distribution, robust upper limits are placed on dark matter annihilation cross sections. The 95% confidence level upper limits range from about 10^-26 cm^3/s at 5 GeV to about 5 x 10^-23 cm^3/s at 1 TeV, depending on the dark matter annihilation final state. For the first time, using gamma rays, we are able to rule out models with the most generic cross section (approximately 3 x 10^-26 cm^3/s for a purely s-wave cross section), without assuming additional boost factors.
First results from the DarkSide-50 dark matter experiment at Laboratori Nazionali del Gran Sasso
NASA Astrophysics Data System (ADS)
Agnes, P.; Alexander, T.; Alton, A.; Arisaka, K.; Back, H. O.; Baldin, B.; Biery, K.; Bonfini, G.; Bossa, M.; Brigatti, A.; Brodsky, J.; Budano, F.; Cadonati, L.; Calaprice, F.; Canci, N.; Candela, A.; Cao, H.; Cariello, M.; Cavalcante, P.; Chavarria, A.; Chepurnov, A.; Cocco, A. G.; Crippa, L.; D'Angelo, D.; D'Incecco, M.; Davini, S.; De Deo, M.; Derbin, A.; Devoto, A.; Di Eusanio, F.; Di Pietro, G.; Edkins, E.; Empl, A.; Fan, A.; Fiorillo, G.; Fomenko, K.; Forster, G.; Franco, D.; Gabriele, F.; Galbiati, C.; Goretti, A.; Grandi, L.; Gromov, M.; Guan, M. Y.; Guardincerri, Y.; Hackett, B.; Herner, K.; Hungerford, E. V.; Ianni, Al.; Ianni, An.; Jollet, C.; Keeter, K.; Kendziora, C.; Kidner, S.; Kobychev, V.; Koh, G.; Korablev, D.; Korga, G.; Kurlej, A.; Li, P. X.; Loer, B.; Lombardi, P.; Love, C.; Ludhova, L.; Luitz, S.; Ma, Y. Q.; Machulin, I.; Mandarano, A.; Mari, S.; Maricic, J.; Marini, L.; Martoff, C. J.; Meregaglia, A.; Meroni, E.; Meyers, P. D.; Milincic, R.; Montanari, D.; Monte, A.; Montuschi, M.; Monzani, M. E.; Mosteiro, P.; Mount, B.; Muratova, V.; Musico, P.; Nelson, A.; Odrowski, S.; Okounkova, M.; Orsini, M.; Ortica, F.; Pagani, L.; Pallavicini, M.; Pantic, E.; Papp, L.; Parmeggiano, S.; Parsells, R.; Pelczar, K.; Pelliccia, N.; Perasso, S.; Pocar, A.; Pordes, S.; Pugachev, D.; Qian, H.; Randle, K.; Ranucci, G.; Razeto, A.; Reinhold, B.; Renshaw, A.; Romani, A.; Rossi, B.; Rossi, N.; Rountree, S. D.; Sablone, D.; Saggese, P.; Saldanha, R.; Sands, W.; Sangiorgio, S.; Segreto, E.; Semenov, D.; Shields, E.; Skorokhvatov, M.; Smirnov, O.; Sotnikov, A.; Stanford, C.; Suvorov, Y.; Tartaglia, R.; Tatarowicz, J.; Testera, G.; Tonazzo, A.; Unzhakov, E.; Vogelaar, R. B.; Wada, M.; Walker, S.; Wang, H.; Wang, Y.; Watson, A.; Westerdale, S.; Wojcik, M.; Wright, A.; Xiang, X.; Xu, J.; Yang, C. G.; Yoo, J.; Zavatarelli, S.; Zec, A.; Zhu, C.; Zuzel, G.
2015-04-01
We report the first results of DarkSide-50, a direct search for dark matter operating in the underground Laboratori Nazionali del Gran Sasso (LNGS) and searching for the rare nuclear recoils possibly induced by weakly interacting massive particles (WIMPs). The dark matter detector is a Liquid Argon Time Projection Chamber with a (46.4 ± 0.7) kg active mass, operated inside a 30 t organic liquid scintillator neutron veto, which is in turn installed at the center of a 1 kt water Cherenkov veto for the residual flux of cosmic rays. We report here the null results of a dark matter search for a (1422 ± 67) kg d exposure with an atmospheric argon fill. This is the most sensitive dark matter search performed with an argon target, corresponding to a 90% CL upper limit on the WIMP-nucleon spin-independent cross section of 6.1 x 10^-44 cm^2 for a WIMP mass of 100 GeV/c^2.
NASA Astrophysics Data System (ADS)
Ackermann, M.; Ajello, M.; Albert, A.; Atwood, W. B.; Baldini, L.; Ballet, J.; Barbiellini, G.; Bastieri, D.; Bechtol, K.; Bellazzini, R.; Berenji, B.; Blandford, R. D.; Bloom, E. D.; Bonamente, E.; Borgland, A. W.; Bregeon, J.; Brigida, M.; Bruel, P.; Buehler, R.; Burnett, T. H.; Buson, S.; Caliandro, G. A.; Cameron, R. A.; Cañadas, B.; Caraveo, P. A.; Casandjian, J. M.; Cecchi, C.; Charles, E.; Chekhtman, A.; Chiang, J.; Ciprini, S.; Claus, R.; Cohen-Tanugi, J.; Conrad, J.; Cutini, S.; de Angelis, A.; de Palma, F.; Dermer, C. D.; Digel, S. W.; Do Couto E Silva, E.; Drell, P. S.; Drlica-Wagner, A.; Falletti, L.; Favuzzi, C.; Fegan, S. J.; Ferrara, E. C.; Fukazawa, Y.; Funk, S.; Fusco, P.; Gargano, F.; Gasparrini, D.; Gehrels, N.; Germani, S.; Giglietto, N.; Giordano, F.; Giroletti, M.; Glanzman, T.; Godfrey, G.; Grenier, I. A.; Guiriec, S.; Gustafsson, M.; Hadasch, D.; Hayashida, M.; Hays, E.; Hughes, R. E.; Jeltema, T. E.; Jóhannesson, G.; Johnson, R. P.; Johnson, A. S.; Kamae, T.; Katagiri, H.; Kataoka, J.; Knödlseder, J.; Kuss, M.; Lande, J.; Latronico, L.; Lionetto, A. M.; Llena Garde, M.; Longo, F.; Loparco, F.; Lott, B.; Lovellette, M. N.; Lubrano, P.; Madejski, G. M.; Mazziotta, M. N.; McEnery, J. E.; Mehault, J.; Michelson, P. F.; Mitthumsiri, W.; Mizuno, T.; Monte, C.; Monzani, M. E.; Morselli, A.; Moskalenko, I. V.; Murgia, S.; Naumann-Godo, M.; Norris, J. P.; Nuss, E.; Ohsugi, T.; Okumura, A.; Omodei, N.; Orlando, E.; Ormes, J. F.; Ozaki, M.; Paneque, D.; Parent, D.; Pesce-Rollins, M.; Pierbattista, M.; Piron, F.; Pivato, G.; Porter, T. A.; Profumo, S.; Rainò, S.; Razzano, M.; Reimer, A.; Reimer, O.; Ritz, S.; Roth, M.; Sadrozinski, H. F.-W.; Sbarra, C.; Scargle, J. D.; Schalk, T. L.; Sgrò, C.; Siskind, E. J.; Spandre, G.; Spinelli, P.; Strigari, L.; Suson, D. J.; Tajima, H.; Takahashi, H.; Tanaka, T.; Thayer, J. G.; Thayer, J. B.; Thompson, D. J.; Tibaldo, L.; Tinivella, M.; Torres, D. F.; Troja, E.; Uchiyama, Y.; Vandenbroucke, J.; Vasileiou, V.; Vianello, G.; Vitale, V.; Waite, A. P.; Wang, P.; Winer, B. L.; Wood, K. S.; Wood, M.; Yang, Z.; Zimmer, S.; Kaplinghat, M.; Martinez, G. D.
2011-12-01
Satellite galaxies of the Milky Way are among the most promising targets for dark matter searches in gamma rays. We present a search for dark matter consisting of weakly interacting massive particles, applying a joint likelihood analysis to 10 satellite galaxies with 24 months of data of the Fermi Large Area Telescope. No dark matter signal is detected. Including the uncertainty in the dark matter distribution, robust upper limits are placed on dark matter annihilation cross sections. The 95% confidence level upper limits range from about 10^-26 cm^3 s^-1 at 5 GeV to about 5 x 10^-23 cm^3 s^-1 at 1 TeV, depending on the dark matter annihilation final state. For the first time, using gamma rays, we are able to rule out models with the most generic cross section (~3 x 10^-26 cm^3 s^-1 for a purely s-wave cross section), without assuming additional boost factors.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ackermann, M.
Satellite galaxies of the Milky Way are among the most promising targets for dark matter searches in gamma rays. We present a search for dark matter consisting of weakly interacting massive particles, applying a joint likelihood analysis to 10 satellite galaxies with 24 months of data of the Fermi Large Area Telescope. No dark matter signal is detected. Including the uncertainty in the dark matter distribution, robust upper limits are placed on dark matter annihilation cross sections. The 95% confidence level upper limits range from about 10^-26 cm^3 s^-1 at 5 GeV to about 5 x 10^-23 cm^3 s^-1 at 1 TeV, depending on the dark matter annihilation final state. For the first time, using gamma rays, we are able to rule out models with the most generic cross section (~3 x 10^-26 cm^3 s^-1 for a purely s-wave cross section), without assuming additional boost factors.
Iterative initial condition reconstruction
NASA Astrophysics Data System (ADS)
Schmittfull, Marcel; Baldauf, Tobias; Zaldarriaga, Matias
2017-07-01
Motivated by recent developments in perturbative calculations of the nonlinear evolution of large-scale structure, we present an iterative algorithm to reconstruct the initial conditions in a given volume starting from the dark matter distribution in real space. In our algorithm, objects are first moved back iteratively along estimated potential gradients, with a progressively reduced smoothing scale, until a nearly uniform catalog is obtained. The linear initial density is then estimated as the divergence of the cumulative displacement, with an optional second-order correction. This algorithm should undo nonlinear effects up to one-loop order, including the higher-order infrared resummation piece. We test the method using dark matter simulations in real space. At redshift z = 0, we find that after eight iterations the reconstructed density is more than 95% correlated with the initial density at k ≤ 0.35 h Mpc^-1. The reconstruction also reduces the power in the difference between reconstructed and initial fields by more than 2 orders of magnitude at k ≤ 0.2 h Mpc^-1, and it extends the range of scales where the full broadband shape of the power spectrum matches linear theory by a factor of 2-3. As a specific application, we consider measurements of the baryonic acoustic oscillation (BAO) scale that can be improved by reducing the degradation effects of large-scale flows. In our idealized dark matter simulations, the method improves the BAO signal-to-noise ratio by a factor of 2.7 at z = 0 and by a factor of 2.5 at z = 0.6, improving standard BAO reconstruction by 70% at z = 0 and 30% at z = 0.6, and matching the optimal BAO signal and signal-to-noise ratio of the linear density in the same volume. For BAO, the iterative nature of the reconstruction is the most important aspect.
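One iteration of the displacement step described above can be sketched with FFTs on a periodic grid (a simplified single-step Zel'dovich-like reconstruction; the grid assignment, smoothing scale and the paper's iteration schedule and second-order correction are not reproduced, and all parameter values are assumptions).

import numpy as np

def reconstruction_step(pos, boxsize, ngrid=64, smooth=10.0):
    """Move particles back along the smoothed, estimated displacement field (one iteration).
    pos: (N, 3) positions in a periodic box of side 'boxsize'; smooth in the same units."""
    # nearest-grid-point density assignment
    delta = np.zeros((ngrid,) * 3)
    idx = np.floor(pos / boxsize * ngrid).astype(int) % ngrid
    np.add.at(delta, (idx[:, 0], idx[:, 1], idx[:, 2]), 1.0)
    delta = delta / delta.mean() - 1.0

    k = 2 * np.pi * np.fft.fftfreq(ngrid, d=boxsize / ngrid)
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0                                        # avoid dividing by the zero mode
    dk = np.fft.fftn(delta) * np.exp(-0.5 * k2 * smooth**2)  # Gaussian smoothing
    phik = -dk / k2                                          # solve grad^2(phi) = delta
    phik[0, 0, 0] = 0.0
    # linear-theory displacement estimate psi = -grad(phi), pointing toward overdensities
    psi = [np.real(np.fft.ifftn(-1j * kk * phik)) for kk in (kx, ky, kz)]
    shift = np.stack([p[idx[:, 0], idx[:, 1], idx[:, 2]] for p in psi], axis=1)
    return (pos - shift) % boxsize                           # moved-back (more uniform) catalog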
Increasing the sensitivity of LXe TPCs to dark matter by doping with helium or neon
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lippincott, W. Hugh; Alexander, Thomas R.; Hime, Andrew
Next generation liquid xenon TPCs are poised to increase our sensitivity to dark matter by two orders of magnitude over a wide range of possible dark matter candidates. This proceedings describes an idea to expand the reach and flexibility of such detectors even further, by adding helium and neon to the xenon to enable searches for very light dark matter and combining high and low Z targets in the same detector. Adding helium or neon to LXe-TPCs has many advantages. First, the helium or neon target benefits from the excellent self-shielding provided by a large liquid xenon detector. Second, the same instrumentation, PMTs, and data acquisition can be used. Third, light nuclei are more robust to the systematic uncertainties that affect light WIMP searches. Fourth, helium and neon recoils will likely produce larger signals in liquid xenon than xenon recoils, achieving lower energy thresholds, and further increasing the sensitivity to light WIMPs. Finally, by adding He/Ne in sequence after a Xe-only run, the source of any observed signal can be isolated.
LSST Probes of Dark Energy: New Energy vs New Gravity
NASA Astrophysics Data System (ADS)
Bradshaw, Andrew; Tyson, A.; Jee, M. J.; Zhan, H.; Bard, D.; Bean, R.; Bosch, J.; Chang, C.; Clowe, D.; Dell'Antonio, I.; Gawiser, E.; Jain, B.; Jarvis, M.; Kahn, S.; Knox, L.; Newman, J.; Wittman, D.; Weak Lensing, LSST; LSS Science Collaborations
2012-01-01
Is the late time acceleration of the universe due to new physics in the form of stress-energy or a departure from General Relativity? LSST will measure the shape, magnitude, and color of 4 x 10^9 galaxies to high S/N over 18,000 square degrees. These data will be used to separately measure the gravitational growth of mass structure and distance vs redshift to unprecedented precision by combining multiple probes in a joint analysis. Of the five LSST probes of dark energy, weak gravitational lensing (WL) and baryon acoustic oscillation (BAO) probes are particularly effective in combination. By measuring the 2-D BAO scale in ugrizy-band photometric redshift-selected samples, LSST will determine the angular diameter distance to a dozen redshifts with sub-percent-level errors. Reconstruction of the WL shear power spectrum on linear and weakly non-linear scales, and of the cross-correlation of shear measured in different photometric redshift bins provides a constraint on the evolution of dark energy that is complementary to the purely geometric measures provided by supernovae and BAO. Cross-correlation of the WL shear and BAO signal within redshift shells minimizes the sensitivity to systematics. LSST will also detect shear peaks, providing independent constraints. Tomographic study of the shear of background galaxies as a function of redshift allows a geometric test of dark energy. To extract the dark energy signal and distinguish between the two forms of new physics, LSST will rely on accurate stellar point-spread functions (PSF) and unbiased reconstruction of galaxy image shapes from hundreds of exposures. Although a weighted co-added deep image has high S/N, it is a form of lossy compression. Bayesian forward modeling algorithms can in principle use all the information. We explore systematic effects on shape measurements and present tests of an algorithm called Multi-Fit, which appears to avoid PSF-induced shear systematics in a computationally efficient way.
Optimization of coronagraph design for segmented aperture telescopes
NASA Astrophysics Data System (ADS)
Jewell, Jeffrey; Ruane, Garreth; Shaklan, Stuart; Mawet, Dimitri; Redding, Dave
2017-09-01
The goal of directly imaging Earth-like planets in the habitable zone of other stars has motivated the design of coronagraphs for use with large segmented aperture space telescopes. In order to achieve an optimal trade-off between planet light throughput and diffracted starlight suppression, we consider coronagraphs comprised of a stage of phase control implemented with deformable mirrors (or other optical elements), pupil plane apodization masks (gray scale or complex valued), and focal plane masks (either amplitude only or complex-valued, including phase only such as the vector vortex coronagraph). The optimization of these optical elements, with the goal of achieving 10 or more orders of magnitude in the suppression of on-axis (starlight) diffracted light, represents a challenging non-convex optimization problem with a nonlinear dependence on control degrees of freedom. We develop a new algorithmic approach to the design optimization problem, which we call the "Auxiliary Field Optimization" (AFO) algorithm. The central idea of the algorithm is to embed the original optimization problem, for either phase or amplitude (apodization) in various planes of the coronagraph, into a problem containing additional degrees of freedom, specifically fictitious "auxiliary" electric fields which serve as targets to inform the variation of our phase or amplitude parameters leading to good feasible designs. We present the algorithm, discuss details of its numerical implementation, and prove convergence to local minima of the objective function (here taken to be the intensity of the on-axis source in a "dark hole" region in the science focal plane). Finally, we present results showing application of the algorithm to both unobscured off-axis and obscured on-axis segmented telescope aperture designs. The application of the AFO algorithm to the coronagraph design problem has produced solutions which are capable of directly imaging planets in the habitable zone, provided end-to-end telescope system stability requirements can be met. Ongoing work includes advances of the AFO algorithm reported here to design in additional robustness to a resolved star, and other phase or amplitude aberrations to be encountered in a real segmented aperture space telescope.
NASA Astrophysics Data System (ADS)
Qian, Kun; Zhou, Huixin; Rong, Shenghui; Wang, Bingjian; Cheng, Kuanhong
2017-05-01
Infrared small target tracking plays an important role in applications including military reconnaissance, early warning and terminal guidance. In this paper, an effective algorithm based on Singular Value Decomposition (SVD) and an improved Kernelized Correlation Filter (KCF) is presented for infrared small target tracking. First, the strength of the SVD-based step is that it exploits the image's global information to obtain a background estimate of the infrared frame; a dim target is enhanced by subtracting the continuously updated background estimate from the original image. Second, the KCF algorithm is combined with a Gaussian Curvature Filter (GCF) to eliminate the excursion (drift) problem: the GCF preserves edges and suppresses noise in the base sample of the KCF algorithm, helping to compute the classifier parameters for a small target. Finally, the target position is estimated from a response map obtained via the kernelized classifier. Experimental results demonstrate that the presented algorithm performs favorably in terms of efficiency and accuracy compared with several state-of-the-art algorithms.
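As a rough illustration of the SVD-based background estimation step described above (not the authors' exact implementation), the low-rank part of an infrared frame can be reconstructed from its largest singular values and subtracted to enhance a dim small target. The rank and the synthetic frame below are assumptions made for the sketch.

```python
import numpy as np

def enhance_small_target(frame, rank=3):
    """Estimate the background of an IR frame as a low-rank SVD reconstruction
    and subtract it to enhance a dim small target.
    `rank` is an illustrative choice, not a value from the paper."""
    U, s, Vt = np.linalg.svd(frame.astype(float), full_matrices=False)
    background = (U[:, :rank] * s[:rank]) @ Vt[:rank, :]   # low-rank background estimate
    residual = frame - background                          # target and noise dominate here
    return np.clip(residual, 0, None)

# Toy usage: a flat background with one slightly brighter pixel as the "target".
frame = np.full((64, 64), 100.0)
frame[32, 32] += 25.0
enhanced = enhance_small_target(frame)
print(enhanced.max())  # the dim target stands out once the background is removed
```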
Buckley, Matthew R.; Charles, Eric; Gaskins, Jennifer M.; ...
2015-05-05
At a distance of 50 kpc and with a dark matter mass of ~10^10 M⊙, the Large Magellanic Cloud (LMC) is a natural target for indirect dark matter searches. We use five years of data from the Fermi Large Area Telescope (LAT) and updated models of the gamma-ray emission from standard astrophysical components to search for a dark matter annihilation signal from the LMC. We perform a rotation curve analysis to determine the dark matter distribution, setting a robust minimum on the amount of dark matter in the LMC, which we use to set conservative bounds on the annihilation cross section. The LMC emission is generally very well described by the standard astrophysical sources, with at most a 1–2σ excess identified near the kinematic center of the LMC once systematic uncertainties are taken into account. As a result, we place competitive bounds on the dark matter annihilation cross section as a function of dark matter particle mass and annihilation channel.
Rogers, Katherine H; Le, Marina T; Buckels, Erin E; Kim, Mikayla; Biesanz, Jeremy C
2018-02-19
The Dark Tetrad traits (subclinical psychopathy, narcissism, Machiavellianism, and everyday sadism) have interpersonal consequences. At present, however, how these traits are associated with the accuracy and positivity of first impressions is not well understood. The present article addresses three primary questions. First, to what extent are perceiver levels of Dark Tetrad traits associated with differing levels of perceptive accuracy? Second, to what extent are target levels of Dark Tetrad traits associated with differing levels of expressive accuracy? Finally, to what extent can Dark Tetrad traits be differentiated when examining perceptions of and by others? In a round-robin design, undergraduate participants (N = 412) in small groups engaged in brief, naturalistic, unstructured dyadic interactions before providing impressions of their partner. Dark Tetrad traits were associated with being viewed and viewing others less distinctively accurately and more negatively. Interpersonal perceptions that included an individual scoring highly on one of the Dark Tetrad traits differed in important ways from interactions among individuals with more benevolent personalities. Notably, despite the similarities among the Dark Tetrad traits, each trait had unique associations with interpersonal perceptions. © 2018 Wiley Periodicals, Inc.
Open reading frames associated with cancer in the dark matter of the human genome.
Delgado, Ana Paula; Brandao, Pamela; Chapado, Maria Julia; Hamid, Sheilin; Narayanan, Ramaswamy
2014-01-01
The uncharacterized proteins (open reading frames, ORFs) in the human genome offer an opportunity to discover novel targets for cancer. A systematic analysis of the dark matter of the human proteome for druggability and biomarker discovery is crucial to mining the genome. Numerous data mining tools are available to mine these ORFs to develop a comprehensive knowledge base for future target discovery and validation. Using the Genetic Association Database, the ORFs of the human dark matter proteome were screened for evidence of association with neoplasms. The Phenome-Genome Integrator tool was used to establish phenotypic association with disease traits including cancer. Batch analysis of the tools for protein expression analysis, gene ontology and motifs and domains was used to characterize the ORFs. Sixty-two ORFs were identified for neoplasm association. The expression Quantitative Trait Loci (eQTL) analysis identified thirteen ORFs related to cancer traits. Protein expression, motifs and domain analysis and genome-wide association studies verified the relevance of these OncoORFs in diverse tumors. The OncoORFs are also associated with a wide variety of human diseases and disorders. Our results link the OncoORFs to diverse diseases and disorders. This suggests a complex landscape of the uncharacterized proteome in human diseases. These results open the dark matter of the proteome to novel cancer target research. Copyright© 2014, International Institute of Anticancer Research (Dr. John G. Delinasios), All rights reserved.
Chen, Yuantao; Xu, Weihong; Kuang, Fangjun; Gao, Shangbing
2013-01-01
Efficient target tracking algorithms have become a central research focus in intelligent robotics. The main difficulty of the target tracking process for a mobile robot is environmental uncertainty: target states are hard to estimate, and illumination changes, target shape changes, complex backgrounds, occlusion, and other factors all affect tracking robustness. To further improve tracking accuracy and reliability, we present a novel target tracking algorithm that uses visual saliency and an adaptive support vector machine (ASVM). The algorithm is based on a mixture of saliency features of the image, including color, brightness, and motion features; these common characteristics are combined to express the target's saliency. Numerous experiments demonstrate the effectiveness and timeliness of the proposed target tracking algorithm in video sequences where the target objects undergo large changes in pose, scale, and illumination.
A diamond active target for the PADME experiment
NASA Astrophysics Data System (ADS)
Chiodini, G.
2017-02-01
The PADME (Positron Annihilation into Dark Mediator Experiment) collaboration searches for dark photons produced in the annihilation e++e-→γ+A' of accelerated positrons with atomic electrons of a fixed target at the Beam Test Facility of Laboratori Nazionali di Frascati. The apparatus can detect dark photons decaying into visible A'→e+e- and invisible A'→χχ channels, where χ's are particles of a secluded sector that interact only weakly and therefore go undetected. In order to improve the missing mass resolution and to measure the beam flux, PADME has an active target able to reconstruct the beam spot position and the bunch multiplicity. In this work the active target is described; it is made of a detector-grade polycrystalline synthetic diamond with strip electrodes on both surfaces. The electrode segmentation allows the beam profile to be measured along X and Y and the average beam position to be evaluated bunch by bunch. The results of beam tests for the first two diamond detector prototypes are shown: one has innovative graphitic electrodes built with a custom process developed in the laboratory, and the other has traditional, commercially available Cr-Au electrodes. The front-end electronics used in the test beam is discussed and the observed performance is presented. Finally, the design of the final target, to be realized at the beginning of 2017 and ready for data taking in 2018, is illustrated.
Maneuver Algorithm for Bearings-Only Target Tracking with Acceleration and Field of View Constraints
NASA Astrophysics Data System (ADS)
Roh, Heekun; Shim, Sang-Wook; Tahk, Min-Jea
2018-05-01
This paper proposes a maneuver algorithm for an agent performing target tracking with bearing angle information only. The goal of the agent is to estimate the target position and velocity based only on the bearing angle data. The methods of bearings-only target state estimation are outlined, and the nature of the bearings-only target tracking problem is then addressed. Based on the insight from the above-mentioned properties, a maneuver algorithm for the agent is suggested. The proposed algorithm is composed of a nonlinear, hysteresis guidance law and estimation accuracy assessment criteria based on the theory of the Cramer-Rao bound. The proposed guidance law generates a lateral acceleration command based on the current field-of-view angle. The accuracy criteria supply the expected estimation variance, which acts as a terminal criterion for the proposed algorithm. The algorithm is verified with a two-dimensional simulation.
NASA Astrophysics Data System (ADS)
Forero-Romero, J. E.
2017-07-01
This talk summarizes different algorithms that can be used to trace the cosmic web both in simulations and in observations. We present different applications in galaxy formation and cosmology. Finally, we show how the Dark Energy Spectroscopic Instrument (DESI) could be a good place to apply these techniques.
NASA Astrophysics Data System (ADS)
Xu, Zhipeng; Wei, Jun; Li, Jianwei; Zhou, Qianting
2010-11-01
An imaging spectrometer on a space remote sensing satellite requires a shortwave band from 2.1 μm to 3 μm, one of the most important bands in remote sensing. We designed the infrared sub-system of the imaging spectrometer using a homemade 640x1 InGaAs shortwave infrared sensor operating in a focal plane array (FPA) system, which requires high uniformity and a low level of dark current. The working temperature should be -15 ± 0.2 °C. This paper studies the noise model of the FPA system, investigates the relationship between temperature and dark-current noise, and adopts an incremental PID algorithm to generate a PWM wave in order to control the temperature of the sensor. The FPGA design is composed of four modules, all coded in VHDL and implemented in the FPGA device APA300. Experiments show that the intelligent temperature control system succeeds in controlling the temperature of the sensor.
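For illustration only, a minimal incremental (velocity-form) PID update of a PWM duty cycle might look like the sketch below; the gains, duty-cycle limits, and toy thermal response are placeholders rather than values from the paper, whose controller is implemented in VHDL on the FPGA.

```python
class IncrementalPID:
    """Velocity-form (incremental) PID: each step computes the *change* in output,
    which maps naturally onto updating a PWM duty cycle."""
    def __init__(self, kp, ki, kd, setpoint):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.e1 = 0.0  # error at step k-1
        self.e2 = 0.0  # error at step k-2

    def update(self, measurement, duty):
        # Error sign chosen so a temperature above the setpoint increases cooling duty.
        e = measurement - self.setpoint
        # Incremental form: du = Kp*(e_k - e_{k-1}) + Ki*e_k + Kd*(e_k - 2*e_{k-1} + e_{k-2})
        du = (self.kp * (e - self.e1)
              + self.ki * e
              + self.kd * (e - 2 * self.e1 + self.e2))
        self.e2, self.e1 = self.e1, e
        return min(max(duty + du, 0.0), 1.0)  # clamp PWM duty cycle to [0, 1]

# Toy usage with hypothetical gains, regulating toward -15 degrees Celsius.
pid = IncrementalPID(kp=0.05, ki=0.01, kd=0.0, setpoint=-15.0)
duty, temperature = 0.5, -12.0
for _ in range(10):
    duty = pid.update(temperature, duty)
    temperature += -2.0 * duty + 0.9   # crude stand-in for the cooler and thermal load
```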
CHAM: a fast algorithm of modelling non-linear matter power spectrum in the sCreened HAlo Model
NASA Astrophysics Data System (ADS)
Hu, Bin; Liu, Xue-Wen; Cai, Rong-Gen
2018-05-01
We present a fast numerical screened halo model algorithm (CHAM, which stands for the sCreened HAlo Model) for modelling the non-linear power spectrum of alternatives to Λ cold dark matter. This method has three obvious advantages. First, it is not restricted to a specific dark energy/modified gravity model; in principle, all screened scalar-tensor theories can be treated. Second, it makes the fewest possible assumptions in the calculation, so the physical picture is easy to understand. Third, it is predictive and does not rely on calibration against N-body simulations. As an example, we show the case of Hu-Sawicki f(R) gravity. In this case, the typical CPU time with the current parallel PYTHON script (eight threads) is roughly within 10 min. The resulting spectra are in good agreement with N-body data to within a few per cent accuracy up to k ∼ 1 h Mpc^-1.
Refining atmosphere light to improve the dark channel prior algorithm
NASA Astrophysics Data System (ADS)
Gan, Ling; Li, Dagang; Zhou, Can
2017-05-01
The defogged image obtained through the dark channel prior algorithm has some shortcomings, such as color distortion, dim lighting, and loss of detail near the observer. The main reasons are that the atmospheric light is estimated as a single value and its variation with scene depth is not considered. We therefore model the atmospheric light, one parameter of the defogging model. First, we discretize the atmospheric light into equivalent points and build a discrete model of the light. Second, we build several rough candidate models by analyzing the relationship between the atmospheric light and the medium transmission. Finally, by analyzing the results of many experiments qualitatively and quantitatively, we obtain the selected and optimized model. Although this method slightly increases the running time, the evaluation metrics, histogram correlation coefficient and peak signal-to-noise ratio, are improved significantly and the defogging result conforms better to human vision. The color and the details near the observer in the defogged image are also better than those achieved by the original method.
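For context, the baseline dark channel prior that the paper refines estimates a per-pixel transmission from the patch-wise dark channel. A minimal sketch of that standard formulation is below; the patch size and ω are conventional choices, and the single global atmospheric light A is exactly the assumption the paper replaces with a depth-dependent model.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, patch=15):
    """Dark channel: per-pixel minimum over color channels, then a local minimum filter."""
    return minimum_filter(img.min(axis=2), size=patch)

def estimate_transmission(img, A, omega=0.95, patch=15):
    """Standard dark-channel-prior transmission estimate: t = 1 - omega * dark(I / A)."""
    normalized = img / A.reshape(1, 1, 3)
    return 1.0 - omega * dark_channel(normalized, patch)

def dehaze(img, A, t, t0=0.1):
    """Invert the haze model I = J*t + A*(1 - t), clamping t to avoid noise amplification."""
    t = np.clip(t, t0, 1.0)[..., None]
    return (img - A.reshape(1, 1, 3)) / t + A.reshape(1, 1, 3)

# Toy usage on a synthetic image; in practice A is taken from the brightest dark-channel pixels.
img = np.random.rand(120, 160, 3) * 0.5 + 0.4
A = np.array([0.95, 0.95, 0.95])
J = dehaze(img, A, estimate_transmission(img, A))
```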
Sparse Reconstruction of the Merging A520 Cluster System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peel, Austin; Lanusse, François; Starck, Jean-Luc, E-mail: austin.peel@cea.fr
2017-09-20
Merging galaxy clusters present a unique opportunity to study the properties of dark matter in an astrophysical context. These are rare and extreme cosmic events in which the bulk of the baryonic matter becomes displaced from the dark matter halos of the colliding subclusters. Since all mass bends light, weak gravitational lensing is a primary tool to study the total mass distribution in such systems. Combined with X-ray and optical analyses, mass maps of cluster mergers reconstructed from weak-lensing observations have been used to constrain the self-interaction cross-section of dark matter. The dynamically complex Abell 520 (A520) cluster is an exceptional case, even among merging systems: multi-wavelength observations have revealed a surprisingly high mass-to-light concentration of dark mass, the interpretation of which is difficult under the standard assumption of effectively collisionless dark matter. We revisit A520 using a new sparsity-based mass-mapping algorithm to independently assess the presence of the puzzling dark core. We obtain high-resolution mass reconstructions from two separate galaxy shape catalogs derived from Hubble Space Telescope observations of the system. Our mass maps agree well overall with the results of previous studies, but we find important differences. In particular, although we are able to identify the dark core at a certain level in both data sets, it is at much lower significance than has been reported before using the same data. As we cannot confirm the detection in our analysis, we do not consider A520 as posing a significant challenge to the collisionless dark matter scenario.
Light detection and the wavelength shifter deposition in DEAP-3600
NASA Astrophysics Data System (ADS)
Broerman, B.; Retière, F.
2016-02-01
The Dark matter Experiment using Argon Pulse-shape discrimination (DEAP) uses liquid argon as a target medium to perform a direct-detection dark matter search. The 3600 kg liquid argon target volume is housed in a spherical acrylic vessel and viewed by a surrounding array of photomultiplier tubes. Ionizing particles in the argon volume produce scintillation light which must be wavelength shifted to be detected by the photomultiplier tubes. Argon scintillation and wavelength shifting, along with details on the application of the wavelength shifter to the inner surface of the acrylic vessel are presented.
2012-08-17
This image shows the calibration target for the Chemistry and Camera (ChemCam) instrument on NASA's Curiosity rover. The calibration target is one square and a group of nine circles that look dark in the black-and-white image.
Enabling Super-Nyquist Wavefront Control on WFIRST
NASA Astrophysics Data System (ADS)
Bendek, Eduardo; Belikov, Ruslan; Sirbu, Dan; Shaklan, Stuart B.; Eldorado Riggs, A. J.
2018-01-01
A large fraction of Sun-like stars is contained in binary systems. Within 10 pc there are 70 FGK stars, of which 43 belong to a multi-star system, and 28 of them have companion leakage greater than 1e-9 contrast assuming typical Hubble-quality space optics. Currently, those binary stars are not included in the WFIRST-CGI target list, but they could be observed if high-contrast imaging around binary star systems using WFIRST is possible, increasing by 70% the number of possible FGK targets for the mission. The Multi-Star Wavefront Control (MSWC) algorithm can be used to suppress the companion star leakage. If the targets have angular separations larger than the Nyquist-controllable region of the Deformable Mirror, the MSWC must operate in its Super-Nyquist (SN) mode. This mode requires a target star replica within the SN region in order to provide the energy and coherent light necessary to null speckles at SN angular separations. For the case of WFIRST, about half of the targets that can be observed using MSWC have angular separations larger than the Nyquist-controllable region of the 48x48-actuator Deformable Mirror (DM) to be used. Here, we discuss multiple alternatives to generate those PSF replicas with minimal or no impact to the WFIRST Coronagraph instrument, such as 1) the addition of a movable diffractive pupil mounted on the Shaped Pupil wheel; 2) a modified Shaped Pupil design able to create a dark zone and at the same time diffract a small fraction of the starlight into the SN region; 3) predicting the minimum residual quilting on the Xinetics DM that would allow observing a given target.
Heterogeneous Vision Data Fusion for Independently Moving Cameras
2010-03-01
… target detection, tracking, and identification over a large terrain. The goal of the project is to investigate and evaluate the existing image fusion algorithms, develop new real-time algorithms for Category-II image fusion, and apply these algorithms in moving target detection and tracking. The … moving target detection and classification. Subject terms: image fusion, target detection, moving cameras, IR camera, EO camera.
Sentiment analysis enhancement with target variable in Kumar’s Algorithm
NASA Astrophysics Data System (ADS)
Arman, A. A.; Kawi, A. B.; Hurriyati, R.
2016-04-01
Sentiment analysis (also known as opinion mining) refers to the use of text analysis and computational linguistics to identify and extract subjective information in source materials. Sentiment analysis is widely applied to reviews and discussions in social media for many purposes, ranging from marketing and customer service to gauging public opinion of public policy. One popular algorithm for sentiment analysis is the Kumar algorithm, developed by Kumar and Sebastian. The Kumar algorithm can identify the sentiment score of a statement, sentence or tweet, but cannot determine the object or target to which the analysed sentiment relates. This research proposes a solution to that challenge by adding a component representing the object or target to the existing Kumar algorithm. The result of this research is a modified algorithm that can give a sentiment score with respect to a given object or target.
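The paper does not give pseudocode for the modified algorithm; purely to illustrate the idea of tying a sentiment score to a target term, a toy scorer that only counts opinion words occurring near the target might look like this. The lexicon, window size, and scoring are hypothetical and are not Kumar and Sebastian's method.

```python
# Hypothetical toy: score sentiment only for opinion words near a given target term.
LEXICON = {"good": 1, "great": 2, "love": 2, "bad": -1, "terrible": -2, "slow": -1}

def target_sentiment(text, target, window=2):
    tokens = text.lower().split()
    positions = [i for i, tok in enumerate(tokens) if tok == target.lower()]
    score = 0
    for pos in positions:
        lo, hi = max(0, pos - window), min(len(tokens), pos + window + 1)
        score += sum(LEXICON.get(tok, 0) for tok in tokens[lo:hi])
    return score

print(target_sentiment("the camera is great but the battery is terrible", "battery"))  # -> -2
```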
Campos, Andre N.; Souza, Efren L.; Nakamura, Fabiola G.; Nakamura, Eduardo F.; Rodrigues, Joel J. P. C.
2012-01-01
Target tracking is an important application of wireless sensor networks. The networks' ability to locate and track an object is directly linked to the nodes' ability to locate themselves. Consequently, localization systems are essential for target tracking applications. In addition, sensor networks are often deployed in remote or hostile environments. Therefore, density control algorithms are used to increase network lifetime while maintaining its sensing capabilities. In this work, we analyze the impact of localization algorithms (RPE and DPE) and density control algorithms (GAF, A3 and OGDC) on target tracking applications. We adapt the density control algorithms to address the k-coverage problem. In addition, we analyze the impact of network density, residual integration with density control, and k-coverage on both target tracking accuracy and network lifetime. Our results show that DPE is a better choice for target tracking applications than RPE. Moreover, OGDC is the best option among the three evaluated density control algorithms. Although the choice of the density control algorithm has little impact on the tracking precision, OGDC outperforms GAF and A3 in terms of tracking time. PMID:22969329
Synthetic aperture radar image formation for the moving-target and near-field bistatic cases
NASA Astrophysics Data System (ADS)
Ding, Yu
This dissertation addresses topics in two areas of synthetic aperture radar (SAR) image formation: time-frequency based SAR imaging of moving targets and a fast backprojection (BP) algorithm for near-field bistatic SAR imaging. SAR imaging of a moving target is a challenging task due to unknown motion of the target. We approach this problem in a theoretical way, by analyzing the Wigner-Ville distribution (WVD) based SAR imaging technique. We derive approximate closed-form expressions for the point-target response of the SAR imaging system, which quantify the image resolution, and show how the blurring in conventional SAR imaging can be eliminated, while the target shift still remains. Our analyses lead to accurate prediction of the target position in the reconstructed images. The derived expressions also enable us to further study additional aspects of WVD-based SAR imaging. Bistatic SAR imaging is more involved than the monostatic SAR case, because of the separation of the transmitter and the receiver, and possibly the changing bistatic geometry. For near-field bistatic SAR imaging, we develop a novel fast BP algorithm, motivated by a newly proposed fast BP algorithm in computed tomography. First we show that the BP algorithm is the spatial-domain counterpart of the benchmark ω-k algorithm in bistatic SAR imaging, yet it avoids the frequency-domain interpolation in the ω-k algorithm, which may cause artifacts in the reconstructed image. We then derive the band-limited property for BP methods in both monostatic and bistatic SAR imaging, which is the basis for developing the fast BP algorithm. We compare our algorithm with other frequency-domain based algorithms, and show that it achieves better reconstructed image quality, while having the same computational complexity as that of the frequency-domain based algorithms.
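To make the time-domain backprojection idea concrete, a generic (non-optimized) bistatic BP loop sums the range-compressed pulses at each pixel's bistatic delay. This is a textbook-style sketch under simplified geometry, with carrier phase compensation omitted for brevity; it is not the fast factorized algorithm developed in the dissertation, and all array shapes are assumptions of the sketch.

```python
import numpy as np

def bistatic_backprojection(pulses, tx_pos, rx_pos, fast_time, grid_x, grid_y, c=3e8):
    """Naive bistatic time-domain backprojection onto the z = 0 ground plane.
    pulses:    (num_pulses, num_samples) complex range-compressed data
    tx_pos/rx_pos: (num_pulses, 3) transmitter/receiver positions per pulse
    fast_time: (num_samples,) fast-time axis of the range-compressed data"""
    image = np.zeros((len(grid_y), len(grid_x)), dtype=complex)
    X, Y = np.meshgrid(grid_x, grid_y)
    pixels = np.stack([X, Y, np.zeros_like(X)], axis=-1)          # (ny, nx, 3)
    for p in range(pulses.shape[0]):
        r_tx = np.linalg.norm(pixels - tx_pos[p], axis=-1)        # transmitter-to-pixel range
        r_rx = np.linalg.norm(pixels - rx_pos[p], axis=-1)        # pixel-to-receiver range
        delay = (r_tx + r_rx) / c                                 # bistatic delay per pixel
        real = np.interp(delay, fast_time, pulses[p].real, left=0.0, right=0.0)
        imag = np.interp(delay, fast_time, pulses[p].imag, left=0.0, right=0.0)
        image += real + 1j * imag                                 # coherent accumulation
    return image
```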
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robinson, Alan E.
Here, proposed dark matter detectors with eV-scale sensitivities will detect a large background of atomic (nuclear) recoils from coherent photon scattering of MeV-scale photons. This background climbs steeply below ~10 eV, far exceeding the declining rate of low-energy Compton recoils. The upcoming generation of dark matter detectors will not be limited by this background, but further development of eV-scale and sub-eV detectors will require strategies, including the use of low nuclear mass target materials, to maximize dark matter sensitivity while minimizing the coherent photon scattering background.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ali, I; Ahmad, S; Alsbou, N
Purpose: To develop a 4D cone-beam CT (CBCT) algorithm based on motion modeling that extracts the actual length, CT number level and motion amplitude of a mobile target retrospectively to image reconstruction. Methods: The algorithm used three measurable parameters obtained from CBCT images of a mobile target, the apparent length and the level and gradient of the blurred CT number distribution, to determine the actual length, the CT number value of the stationary target, and the motion amplitude. The predictions of this algorithm were tested with mobile targets of different well-known sizes made from tissue-equivalent gel inserted into a thorax phantom. The phantom moved sinusoidally in one direction to simulate respiratory motion using eight amplitudes ranging from 0 to 20 mm. Results: Using this 4D-CBCT algorithm, three unknown parameters were extracted retrospectively to image reconstruction: the length of the target, the CT number level, and the speed or motion amplitude of the mobile target. The motion algorithm solved for the three unknown parameters using the measurable apparent length, CT number level and gradient of a well-defined mobile target obtained from CBCT images. The motion model agreed with the measured apparent lengths, which depended on the actual target length and motion amplitude. The gradient of the CT number distribution of the mobile target depends on the stationary CT number level, the actual target length and the motion amplitude. Motion frequency and phase did not affect the elongation and CT number distribution of the mobile target and could not be determined. Conclusion: A 4D-CBCT motion algorithm was developed to extract three parameters, the actual length, CT number level and motion amplitude or speed of mobile targets, directly from reconstructed CBCT images without prior knowledge of the stationary target parameters. This algorithm provides an alternative to 4D-CBCT without requiring motion tracking or sorting of the images into different breathing phases, which has potential applications in diagnostic CT imaging and radiotherapy.
Inversion method applied to the rotation curves of galaxies
NASA Astrophysics Data System (ADS)
Márquez-Caicedo, L. A.; Lora-Clavijo, F. D.; Sanabria-Gómez, J. D.
2017-07-01
We used simulated annealing, Monte Carlo and genetic algorithm methods for matching both numerical data of density and velocity profiles in some low surface brightness galaxies with theoretical models of Boehmer-Harko, Navarro-Frenk-White and Pseudo Isothermal profiles for galaxies with dark matter halos. We found that the Navarro-Frenk-White model does not fit at all, in contrast with the other two models, which fit very well. Inversion methods have been widely used in various branches of science including astrophysics (Charbonneau 1995, ApJS, 101, 309). In this work we have used three different parametric inversion methods (Monte Carlo, genetic algorithm and simulated annealing) in order to determine the best fit of the observed data of the density and velocity profiles of a set of low surface brightness galaxies (de Blok et al. 2001, AJ, 122, 2396) with three models of galaxies containing dark matter. The parameters adjusted by the inversion methods were the central density and a characteristic distance in the Boehmer-Harko BH (Boehmer & Harko 2007, JCAP, 6, 25), Navarro-Frenk-White NFW (Navarro et al. 1997, ApJ, 490, 493) and Pseudo Isothermal Profile PI (Robles & Matos 2012, MNRAS, 422, 282) models. The results obtained showed that the BH and PI profile dark matter galaxies fit very well both the density and the velocity profiles; in contrast, the NFW model did not give good adjustments to the profiles in any analyzed galaxy.
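As a schematic of the parametric inversion, the sketch below fits the two free parameters (central density rho0 and core radius Rc) of the pseudo-isothermal rotation curve v^2(r) = 4*pi*G*rho0*Rc^2*[1 - (Rc/r)*arctan(r/Rc)] to synthetic data with a bare-bones simulated annealing loop. The synthetic data, step sizes, and cooling schedule are illustrative choices, not those used in the study.

```python
import numpy as np

G = 4.30091e-6  # kpc * (km/s)^2 / Msun

def v_pseudo_iso(r, rho0, rc):
    """Circular velocity of a pseudo-isothermal halo (r, rc in kpc; rho0 in Msun/kpc^3)."""
    return np.sqrt(4 * np.pi * G * rho0 * rc**2 * (1 - (rc / r) * np.arctan(r / rc)))

def chi2(params, r, v_obs, sigma):
    rho0, rc = params
    return np.sum(((v_obs - v_pseudo_iso(r, rho0, rc)) / sigma) ** 2)

def anneal(r, v_obs, sigma, start, steps=20000, t0=100.0, seed=0):
    """Minimal simulated annealing over (rho0, rc)."""
    rng = np.random.default_rng(seed)
    current = np.array(start, dtype=float)
    current_chi2 = chi2(current, r, v_obs, sigma)
    best, best_chi2 = current.copy(), current_chi2
    for i in range(steps):
        temp = t0 * (1 - i / steps) + 1e-3                              # linear cooling schedule
        trial = np.abs(current * (1 + 0.05 * rng.standard_normal(2)))   # multiplicative random step
        trial_chi2 = chi2(trial, r, v_obs, sigma)
        if trial_chi2 < current_chi2 or rng.random() < np.exp((current_chi2 - trial_chi2) / temp):
            current, current_chi2 = trial, trial_chi2                   # Metropolis acceptance
            if current_chi2 < best_chi2:
                best, best_chi2 = current.copy(), current_chi2
    return best

# Synthetic "observed" rotation curve for illustration only.
r = np.linspace(0.5, 15.0, 30)
v_obs = v_pseudo_iso(r, rho0=5e7, rc=2.0) + np.random.default_rng(1).normal(0, 3.0, r.size)
print(anneal(r, v_obs, sigma=3.0, start=(1e7, 1.0)))
```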
DOE Office of Scientific and Technical Information (OSTI.GOV)
Horiuchi, Shunsaku, E-mail: horiuchi@vt.edu
2016-06-21
The cold dark matter paradigm has been extremely successful in explaining the large-scale structure of the Universe. However, it continues to face issues when confronted by observations on sub-Galactic scales. A major caveat, now being addressed, has been the incomplete treatment of baryon physics. We first summarize the small-scale issues surrounding cold dark matter and discuss the solutions explored by modern state-of-the-art numerical simulations including treatment of baryonic physics. We identify the too-big-to-fail problem in field galaxies as among the best targets to study modifications to dark matter, and discuss the particular connection with sterile neutrino warm dark matter. We also discuss how the recently detected anomalous 3.55 keV X-ray lines, when interpreted as sterile neutrino dark matter decay, provide a very good description of small-scale observations of the Local Group.
Direct Search for Dark Matter with DarkSide
NASA Astrophysics Data System (ADS)
Agnes, P.; Alexander, T.; Alton, A.; Arisaka, K.; Back, H. O.; Baldin, B.; Biery, K.; Bonfini, G.; Bossa, M.; Brigatti, A.; Brodsky, J.; Budano, F.; Cadonati, L.; Calaprice, F.; Canci, N.; Candela, A.; Cao, H.; Cariello, M.; Cavalcante, P.; Chavarria, A.; Chepurnov, A.; Cocco, A. G.; Crippa, L.; D'Angelo, D.; D'Incecco, M.; Davini, S.; De Deo, M.; Derbin, A.; Devoto, A.; Di Eusanio, F.; Di Pietro, G.; Edkins, E.; Empl, A.; Fan, A.; Fiorillo, G.; Fomenko, K.; Forster, G.; Franco, D.; Gabriele, F.; Galbiati, C.; Goretti, A.; Grandi, L.; Gromov, M.; Guan, M. Y.; Guardincerri, Y.; Hackett, B.; Herner, K.; Hungerford, E. V.; Ianni, Al; Ianni, An; Jollet, C.; Keeter, K.; Kendziora, C.; Kidner, S.; Kobychev, V.; Koh, G.; Korablev, D.; Korga, G.; Kurlej, A.; Li, P. X.; Loer, B.; Lombardi, P.; Love, C.; Ludhova, L.; Luitz, S.; Ma, Y. Q.; Machulin, I.; Mandarano, A.; Mari, S.; Maricic, J.; Marini, L.; Martoff, C. J.; Meregaglia, A.; Meroni, E.; Meyers, P. D.; Milincic, R.; Montanari, D.; Montuschi, M.; Monzani, M. E.; Mosteiro, P.; Mount, B.; Muratova, V.; Musico, P.; Nelson, A.; Odrowski, S.; Okounkova, M.; Orsini, M.; Ortica, F.; Pagani, L.; Pallavicini, M.; Pantic, E.; Papp, L.; Parmeggiano, S.; Parsells, R.; Pelczar, K.; Pelliccia, N.; Perasso, S.; Pocar, A.; Pordes, S.; Pugachev, D.; Qian, H.; Randle, K.; Ranucci, G.; Razeto, A.; Reinhold, B.; Renshaw, A.; Romani, A.; Rossi, B.; Rossi, N.; Rountree, S. D.; Sablone, D.; Saggese, P.; Saldanha, R.; Sands, W.; Sangiorgio, S.; Segreto, E.; Semenov, D.; Shields, E.; Skorokhvatov, M.; Smirnov, O.; Sotnikov, A.; Stanford, C.; Suvorov, Y.; Tartaglia, R.; Tatarowicz, J.; Testera, G.; Tonazzo, A.; Unzhakov, E.; Vogelaar, R. B.; Wada, M.; Walker, S.; Wang, H.; Wang, Y.; Watson, A.; Westerdale, S.; Wojcik, M.; Wright, A.; Xiang, X.; Xu, J.; Yang, C. G.; Yoo, J.; Zavatarelli, S.; Zec, A.; Zhu, C.; Zuzel, G.
2015-11-01
The DarkSide experiment is designed for the direct detection of Dark Matter with a double-phase liquid argon TPC operating underground at Laboratori Nazionali del Gran Sasso. The TPC is placed inside a 30 ton liquid organic scintillator sphere, acting as a neutron veto, which is in turn installed inside a 1 kt water Cherenkov detector. The current detector has been running since November 2013 with a 50 kg atmospheric argon fill and we report here the first null results of a Dark Matter search for a (1422 ± 67) kg d exposure. This result corresponds to a 90% CL upper limit on the WIMP-nucleon cross section of 6.1 × 10^-44 cm^2 (for a WIMP mass of 100 GeV/c^2) and it is currently the most sensitive limit obtained with an argon target.
Infrared measurement and composite tracking algorithm for air-breathing hypersonic vehicles
NASA Astrophysics Data System (ADS)
Zhang, Zhao; Gao, Changsheng; Jing, Wuxing
2018-03-01
Air-breathing hypersonic vehicles have capabilities of hypersonic speed and strong maneuvering, and thus pose a significant challenge to conventional tracking methodologies. To achieve desirable tracking performance for hypersonic targets, this paper investigates the problems related to measurement model design and tracking model mismatch. First, owing to the severe aerothermal effect of hypersonic motion, an infrared measurement model in near space is designed and analyzed based on target infrared radiation and an atmospheric model. Second, using information from infrared sensors, a composite tracking algorithm is proposed via a combination of the interactive multiple models (IMM) algorithm, a fitting dynamics model, and a strong tracking filter. During the procedure, the IMM algorithm generates tracking data to establish a fitting dynamics model of the target. Then, the strong tracking unscented Kalman filter is employed to estimate the target states for suppressing the impact of target maneuvers. Simulations are performed to verify the feasibility of the presented composite tracking algorithm. The results demonstrate that the designed infrared measurement model effectively and continuously observes hypersonic vehicles, and the proposed composite tracking algorithm accurately and stably tracks these targets.
Foliage penetration by using 4-D point cloud data
NASA Astrophysics Data System (ADS)
Méndez Rodríguez, Javier; Sánchez-Reyes, Pedro J.; Cruz-Rivera, Sol M.
2012-06-01
Real-time awareness and rapid target detection are critical for the success of military missions. New technologies capable of detecting targets concealed in forest areas are needed in order to track and identify possible threats. Currently, LAser Detection And Ranging (LADAR) systems are capable of detecting obscured targets; however, tracking capabilities are severely limited. Now, a new LADAR-derived technology is under development to generate 4-D datasets (3-D video in a point cloud format). As such, there is a new need for algorithms that are able to process data in real time. We propose an algorithm capable of removing vegetation and other objects that may occlude concealed targets in a real 3-D environment. The algorithm is based on wavelets and can be used as a pre-processing step in a target recognition algorithm. Applications of the algorithm in a real-time 3-D system could help make pilots aware of high-risk hidden targets such as tanks and weapons, among others. We use simulated 4-D point cloud data to demonstrate the capabilities of our algorithm.
An improved multi-domain convolution tracking algorithm
NASA Astrophysics Data System (ADS)
Sun, Xin; Wang, Haiying; Zeng, Yingsen
2018-04-01
Along with the wide application of deep learning in the field of computer vision, deep learning has become a mainstream direction in the field of object tracking. The tracking algorithm in this paper is based on an improved multi-domain convolutional neural network, pre-trained on the VOT video set with a multi-domain training strategy. In the process of online tracking, the network evaluates candidate targets sampled with a Gaussian distribution from the vicinity of the predicted target in the previous frame, and the candidate target with the highest score is recognized as the predicted target of this frame. A bounding box regression model is introduced to make the predicted target closer to the ground-truth target box of the test set. A grouping-update strategy is used to extract and select useful update samples in each frame, which effectively prevents overfitting and adapts to changes in both target and environment. To improve the speed of the algorithm while maintaining its performance, the number of candidate targets is adjusted dynamically with a self-adaptive parameter strategy. Finally, the algorithm is tested on the OTB set and compared with other high-performance tracking algorithms, and plots of success rate and accuracy are drawn, which illustrate the outstanding performance of the tracking algorithm in this paper.
FAST-PT: a novel algorithm to calculate convolution integrals in cosmological perturbation theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
McEwen, Joseph E.; Fang, Xiao; Hirata, Christopher M.
2016-09-01
We present a novel algorithm, FAST-PT, for performing convolution or mode-coupling integrals that appear in nonlinear cosmological perturbation theory. The algorithm uses several properties of gravitational structure formation—the locality of the dark matter equations and the scale invariance of the problem—as well as Fast Fourier Transforms to describe the input power spectrum as a superposition of power laws. This yields extremely fast performance, enabling mode-coupling integral computations fast enough to embed in Monte Carlo Markov Chain parameter estimation. We describe the algorithm and demonstrate its application to calculating nonlinear corrections to the matter power spectrum, including one-loop standard perturbation theory and the renormalization group approach. We also describe our public code (in Python) to implement this algorithm. The code, along with a user manual and example implementations, is available at https://github.com/JoeMcEwen/FAST-PT.
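The core trick FAST-PT relies on, decomposing a log-sampled power spectrum into power laws via an FFT so that mode-coupling integrals reduce to convolutions, can be sketched as follows. This only demonstrates the decomposition and reconstruction step, not the full one-loop calculation; the bias ν and the toy spectrum are arbitrary choices made for the sketch (the authors' public code is at https://github.com/JoeMcEwen/FAST-PT).

```python
import numpy as np

# Sample a toy power spectrum on a log-uniform grid in k.
N = 256
k = np.logspace(-3, 1, N)                      # h/Mpc
P = 2e4 * k / (1 + (k / 0.02) ** 2) ** 1.4     # toy broken power law, not a real P(k)

# Decompose P(k) = k^nu * sum_m c_m * exp(i * eta_m * ln(k/k0)), eta_m = 2*pi*m / (N * dlnk).
nu = -2.0                                      # power-law bias (a tunable choice)
dlnk = np.log(k[1] / k[0])
c_m = np.fft.fft(P * k ** (-nu)) / N           # complex power-law coefficients
eta_m = 2 * np.pi * np.fft.fftfreq(N, d=dlnk)  # imaginary exponents of each power law

# Reconstruct P(k) from the power-law expansion to verify the decomposition is exact on the grid.
n_idx = np.arange(N)
P_rec = k ** nu * np.real(
    (c_m[None, :] * np.exp(1j * eta_m[None, :] * n_idx[:, None] * dlnk)).sum(axis=1)
)
print(np.max(np.abs(P_rec / P - 1)))           # ~1e-15: machine-precision agreement
```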
NASA Astrophysics Data System (ADS)
Zhou, Tingting; Gu, Lingjia; Ren, Ruizhi; Cao, Qiong
2016-09-01
With the rapid development of remote sensing technology, the spatial resolution and temporal resolution of satellite imagery have also increased greatly. Meanwhile, high-spatial-resolution images are becoming increasingly popular for commercial applications. Remote sensing image technology has broad application prospects in intelligent traffic. Compared with traditional traffic information collection methods, vehicle information extraction using high-resolution remote sensing imagery has the advantages of high resolution and wide coverage. This has great guiding significance for urban planning, transportation management, travel route choice and so on. Firstly, this paper preprocessed the acquired high-resolution multi-spectral and panchromatic remote sensing images. After that, on the one hand, in order to get the optimal threshold for image segmentation, histogram equalization and linear enhancement technologies were applied to the preprocessing results. On the other hand, considering the distribution characteristics of roads, the normalized difference vegetation index (NDVI) and normalized difference water index (NDWI) were used to suppress water and vegetation information in the preprocessing results. Then, the above two processing results were combined. Finally, geometric characteristics were used to complete the road information extraction. The extracted road vector was used to limit the target vehicle area. Target vehicle extraction was divided into bright vehicle extraction and dark vehicle extraction. Eventually, the extraction results of the two kinds of vehicles were combined to get the final results. The experimental results demonstrate that the proposed algorithm has high precision for vehicle information extraction from different high-resolution remote sensing images. Among these results, the average false detection rate was about 5.36%, the average residual rate was about 13.60% and the average accuracy was approximately 91.26%.
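As a small illustration of the index-based suppression step, NDVI and NDWI can be computed from the multispectral bands and used to mask vegetation and water before road and vehicle extraction. The band names and thresholds below are generic assumptions, not the paper's calibration.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized difference vegetation index: (NIR - Red) / (NIR + Red)."""
    return (nir - red) / (nir + red + 1e-9)

def ndwi(green, nir):
    """Normalized difference water index: (Green - NIR) / (Green + NIR)."""
    return (green - nir) / (green + nir + 1e-9)

def suppress_vegetation_and_water(bands, veg_thresh=0.3, water_thresh=0.2):
    """Return a boolean mask of pixels kept for road/vehicle extraction.
    `bands` is a dict of float arrays; the thresholds are illustrative."""
    keep = np.ones_like(bands["red"], dtype=bool)
    keep &= ndvi(bands["nir"], bands["red"]) < veg_thresh      # drop vegetation
    keep &= ndwi(bands["green"], bands["nir"]) < water_thresh  # drop water
    return keep

# Toy usage with random reflectances standing in for real imagery.
rng = np.random.default_rng(0)
bands = {b: rng.random((100, 100)) for b in ("red", "green", "nir")}
mask = suppress_vegetation_and_water(bands)
```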
Two novel motion-based algorithms for surveillance video analysis on embedded platforms
NASA Astrophysics Data System (ADS)
Vijverberg, Julien A.; Loomans, Marijn J. H.; Koeleman, Cornelis J.; de With, Peter H. N.
2010-05-01
This paper proposes two novel motion-vector based techniques for target detection and target tracking in surveillance videos. The algorithms are designed to operate on a resource-constrained device, such as a surveillance camera, and to reuse the motion vectors generated by the video encoder. The first novel algorithm for target detection uses motion vectors to construct a consistent motion mask, which is combined with a simple background segmentation technique to obtain a segmentation mask. The second proposed algorithm aims at multi-target tracking and uses motion vectors to assign blocks to targets employing five features. The weights of these features are adapted based on the interaction between targets. These algorithms are combined in one complete analysis application. The performance of this application for target detection has been evaluated for the i-LIDS sterile zone dataset and achieves an F1-score of 0.40-0.69. The performance of the analysis algorithm for multi-target tracking has been evaluated using the CAVIAR dataset and achieves an MOTP of around 9.7 and MOTA of 0.17-0.25. On a selection of targets in videos from other datasets, the achieved MOTP and MOTA are 8.8-10.5 and 0.32-0.49 respectively. The execution time on a PC-based platform is 36 ms. This includes the 20 ms for generating motion vectors, which are also required by the video encoder.
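A stripped-down version of the first idea, flagging blocks whose encoder motion vectors are consistently non-zero over several frames and intersecting that with a background-subtraction mask, might be sketched as follows. The block size, thresholds, and persistence length are assumptions of the sketch, not the paper's tuned values.

```python
import numpy as np

def update_motion_mask(mv_magnitude, history, mv_thresh=1.0, persistence=3):
    """mv_magnitude: (H_blocks, W_blocks) motion-vector magnitudes from the encoder.
    history: integer array counting consecutive frames a block exceeded the threshold.
    Returns (consistent_motion_mask, updated_history)."""
    moving = mv_magnitude > mv_thresh
    history = np.where(moving, history + 1, 0)     # reset the count when a block stops moving
    return history >= persistence, history

def segmentation_mask(motion_mask, background, frame, bg_thresh=20.0):
    """Combine the block-level motion mask with a simple per-pixel background difference."""
    fg = np.abs(frame - background) > bg_thresh
    block_h = frame.shape[0] // motion_mask.shape[0]
    block_w = frame.shape[1] // motion_mask.shape[1]
    motion_full = np.kron(motion_mask, np.ones((block_h, block_w), dtype=bool))
    return fg & motion_full

# Toy usage: 16x16-pixel blocks on a 240x320 frame.
rng = np.random.default_rng(0)
history = np.zeros((15, 20), dtype=int)
for _ in range(4):
    mv = rng.random((15, 20)) * 2.0                # stand-in for encoder motion vectors
    motion_mask, history = update_motion_mask(mv, history)
```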
Real time target allocation in cooperative unmanned aerial vehicles
NASA Astrophysics Data System (ADS)
Kudleppanavar, Ganesh
The prolific development of Unmanned Aerial Vehicles (UAVs) in recent years has the potential to provide tremendous advantages in military, commercial and law enforcement applications. While safety and performance take precedence in the development lifecycle, autonomous operations and, in particular, cooperative missions have the ability to significantly enhance the usability of these vehicles. The success of cooperative missions relies on the optimal allocation of targets while taking into consideration the resource limitations of each vehicle. The task allocation process can be centralized or decentralized. This effort presents the development of a real-time target allocation algorithm that considers the available stored energy in each vehicle while minimizing the communication between UAVs. The algorithm utilizes a nearest neighbor search to locate new targets with respect to existing targets. Simulations show that this novel algorithm compares favorably to the mixed integer linear programming method, which is computationally more expensive. The implementation of this algorithm on Arduino and XBee wireless modules shows the capability of the algorithm to execute efficiently on hardware with minimal computational complexity.
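A bare-bones version of the allocation idea, assigning each newly detected target to the closest UAV that still has enough stored energy to reach it, could look like the following. The energy model and the example values are placeholders for illustration, not the thesis' algorithm.

```python
import math

def allocate_target(target, uavs, energy_per_km=1.0):
    """Assign `target` (x, y) to the nearest UAV with sufficient remaining energy.
    `uavs` is a list of dicts with 'pos' (x, y) and 'energy' in consistent units."""
    best, best_dist = None, float("inf")
    for uav in uavs:
        dist = math.dist(uav["pos"], target)
        if dist < best_dist and uav["energy"] >= dist * energy_per_km:
            best, best_dist = uav, dist
    if best is not None:
        best["energy"] -= best_dist * energy_per_km   # commit the energy cost of the trip
        best.setdefault("targets", []).append(target)
    return best

uavs = [{"pos": (0.0, 0.0), "energy": 5.0}, {"pos": (10.0, 0.0), "energy": 20.0}]
print(allocate_target((8.0, 0.0), uavs))  # the nearest feasible UAV is the second one
```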
DOE Office of Scientific and Technical Information (OSTI.GOV)
Witte, Samuel J.; Gluscevic, Vera; McDermott, Samuel D.
It has recently been demonstrated that, in the event of a putative signal in dark matter direct detection experiments, properly identifying the underlying dark matter-nuclei interaction promises to be a challenging task. Given the most optimistic expectations for the number counts of recoil events in the forthcoming Generation 2 experiments, differentiating between interactions that produce distinct features in the recoil energy spectra will only be possible if a strong signal is observed simultaneously on a variety of complementary targets. However, there is a wide range of viable theories that give rise to virtually identical energy spectra, and may only differ by the dependence of the recoil rate on the dark matter velocity. In this work, we investigate how degeneracy between such competing models may be broken by analyzing the time dependence of nuclear recoils, i.e. the annual modulation of the rate. For this purpose, we simulate dark matter events for a variety of interactions and experiments, and perform a Bayesian model-selection analysis on all simulated data sets, evaluating the chance of correctly identifying the input model for a given experimental setup. Lastly, we find that including information on the annual modulation of the rate may significantly enhance the ability of a single target to distinguish dark matter models with nearly degenerate recoil spectra, but only with exposures beyond the expectations of Generation 2 experiments.
First results from the DarkSide-50 dark matter experiment at Laboratori Nazionali del Gran Sasso
Agnes, P.
2015-03-11
We report the first results of DarkSide-50, a direct search for dark matter operating in the underground Laboratori Nazionali del Gran Sasso (LNGS) and searching for the rare nuclear recoils possibly induced by weakly interacting massive particles (WIMPs). The dark matter detector is a Liquid Argon Time Projection Chamber with a (46.4 ± 0.7) kg active mass, operated inside a 30 t organic liquid scintillator neutron veto, which is in turn installed at the center of a 1 kt water Cherenkov veto for the residual flux of cosmic rays. We report here the null results of a dark matter search for a (1422 ± 67) kg d exposure with an atmospheric argon fill. This is the most sensitive dark matter search performed with an argon target, corresponding to a 90% CL upper limit on the WIMP-nucleon spin-independent cross section of 6.1×10^-44 cm^2 for a WIMP mass of 100 GeV/c^2.
Ackermann, M; Ajello, M; Albert, A; Atwood, W B; Baldini, L; Ballet, J; Barbiellini, G; Bastieri, D; Bechtol, K; Bellazzini, R; Berenji, B; Blandford, R D; Bloom, E D; Bonamente, E; Borgland, A W; Bregeon, J; Brigida, M; Bruel, P; Buehler, R; Burnett, T H; Buson, S; Caliandro, G A; Cameron, R A; Cañadas, B; Caraveo, P A; Casandjian, J M; Cecchi, C; Charles, E; Chekhtman, A; Chiang, J; Ciprini, S; Claus, R; Cohen-Tanugi, J; Conrad, J; Cutini, S; de Angelis, A; de Palma, F; Dermer, C D; Digel, S W; do Couto e Silva, E; Drell, P S; Drlica-Wagner, A; Falletti, L; Favuzzi, C; Fegan, S J; Ferrara, E C; Fukazawa, Y; Funk, S; Fusco, P; Gargano, F; Gasparrini, D; Gehrels, N; Germani, S; Giglietto, N; Giordano, F; Giroletti, M; Glanzman, T; Godfrey, G; Grenier, I A; Guiriec, S; Gustafsson, M; Hadasch, D; Hayashida, M; Hays, E; Hughes, R E; Jeltema, T E; Jóhannesson, G; Johnson, R P; Johnson, A S; Kamae, T; Katagiri, H; Kataoka, J; Knödlseder, J; Kuss, M; Lande, J; Latronico, L; Lionetto, A M; Llena Garde, M; Longo, F; Loparco, F; Lott, B; Lovellette, M N; Lubrano, P; Madejski, G M; Mazziotta, M N; McEnery, J E; Mehault, J; Michelson, P F; Mitthumsiri, W; Mizuno, T; Monte, C; Monzani, M E; Morselli, A; Moskalenko, I V; Murgia, S; Naumann-Godo, M; Norris, J P; Nuss, E; Ohsugi, T; Okumura, A; Omodei, N; Orlando, E; Ormes, J F; Ozaki, M; Paneque, D; Parent, D; Pesce-Rollins, M; Pierbattista, M; Piron, F; Pivato, G; Porter, T A; Profumo, S; Rainò, S; Razzano, M; Reimer, A; Reimer, O; Ritz, S; Roth, M; Sadrozinski, H F-W; Sbarra, C; Scargle, J D; Schalk, T L; Sgrò, C; Siskind, E J; Spandre, G; Spinelli, P; Strigari, L; Suson, D J; Tajima, H; Takahashi, H; Tanaka, T; Thayer, J G; Thayer, J B; Thompson, D J; Tibaldo, L; Tinivella, M; Torres, D F; Troja, E; Uchiyama, Y; Vandenbroucke, J; Vasileiou, V; Vianello, G; Vitale, V; Waite, A P; Wang, P; Winer, B L; Wood, K S; Wood, M; Yang, Z; Zimmer, S; Kaplinghat, M; Martinez, G D
2011-12-09
Satellite galaxies of the Milky Way are among the most promising targets for dark matter searches in gamma rays. We present a search for dark matter consisting of weakly interacting massive particles, applying a joint likelihood analysis to 10 satellite galaxies with 24 months of data of the Fermi Large Area Telescope. No dark matter signal is detected. Including the uncertainty in the dark matter distribution, robust upper limits are placed on dark matter annihilation cross sections. The 95% confidence level upper limits range from about 10^-26 cm^3 s^-1 at 5 GeV to about 5×10^-23 cm^3 s^-1 at 1 TeV, depending on the dark matter annihilation final state. For the first time, using gamma rays, we are able to rule out models with the most generic cross section (∼3×10^-26 cm^3 s^-1 for a purely s-wave cross section), without assuming additional boost factors.
Wang, Wensheng; Nie, Ting; Fu, Tianjiao; Ren, Jianyue; Jin, Longxu
2017-05-06
In target detection in optical remote sensing images, two main obstacles for aircraft target detection are how to extract candidates against complex, multi-gray-level backgrounds and how to confirm the targets when the target shapes are deformed, irregular or asymmetric, as caused by natural conditions (low signal-to-noise ratio, illumination conditions or swaying during photography) and occlusion by surrounding objects (boarding bridges, equipment). To solve these issues, an improved active contours algorithm, namely the region-scalable fitting energy based threshold (TRSF), and a corner-convex hull based segmentation algorithm (CCHS) are proposed in this paper. Firstly, the maximal between-cluster variance algorithm (Otsu's algorithm) and the region-scalable fitting energy (RSF) algorithm are combined to address the difficulty of target extraction in complex, multi-gray-level backgrounds. Secondly, based on their inherent shapes and prominent corners, aircraft are divided into five fragments by utilizing convex hulls and Harris corner points. Furthermore, a series of new structural features, which describe the proportion of the target part within each fragment and the proportion of each fragment within the whole hull, are identified to judge whether the targets are true or not. Experimental results show that the TRSF algorithm improves extraction accuracy in complex backgrounds and is faster than some traditional active contour algorithms, and that CCHS is effective in suppressing the detection difficulties caused by irregular shapes.
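The first stage combines Otsu's maximal between-cluster variance criterion with the RSF energy; purely as a reference point for that component (not the paper's combined TRSF method), a minimal stand-alone Otsu threshold on an 8-bit gray-level histogram is shown below.

```python
import numpy as np

def otsu_threshold(gray):
    """Return the 8-bit threshold that maximizes the between-class variance."""
    hist, _ = np.histogram(gray.ravel(), bins=256, range=(0, 256))
    prob = hist / hist.sum()
    best_t, best_var = 0, 0.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()       # class probabilities
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0     # class means
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        between_var = w0 * w1 * (mu0 - mu1) ** 2       # between-class variance
        if between_var > best_var:
            best_var, best_t = between_var, t
    return best_t

# Toy bimodal image: dark background with a brighter "aircraft" region.
img = np.full((100, 100), 60, dtype=np.uint8)
img[40:60, 30:70] = 180
mask = img >= otsu_threshold(img)
```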
NASA Technical Reports Server (NTRS)
Ackermann, M.; Albert, A.; Anderson, B.; Baldini, L.; Ballet, J.; Barbiellini, G.; Bastieri, D.; Bechtol, K.; Bellazzini, R.; Bissaldi, E.;
2013-01-01
The dwarf spheroidal satellite galaxies of the Milky Way are some of the most dark-matter-dominated objects known. Due to their proximity, high dark matter content, and lack of astrophysical backgrounds, dwarf spheroidal galaxies are widely considered to be among the most promising targets for the indirect detection of dark matter via gamma rays. Here we report on gamma ray observations of 25 Milky Way dwarf spheroidal satellite galaxies based on 4 years of Fermi Large Area Telescope (LAT) data. None of the dwarf galaxies are significantly detected in gamma rays, and we present gamma ray flux upper limits between 500 MeV and 500 GeV. We determine the dark matter content of 18 dwarf spheroidal galaxies from stellar kinematic data and combine LAT observations of 15 dwarf galaxies to constrain the dark matter annihilation cross section. We set some of the tightest constraints to date on the annihilation of dark matter particles with masses between 2 GeV and 10 TeV into prototypical standard model channels. We find these results to be robust against systematic uncertainties in the LAT instrument performance, diffuse gamma ray background modeling, and assumed dark matter density profile.
LHC searches for dark sector showers
NASA Astrophysics Data System (ADS)
Cohen, Timothy; Lisanti, Mariangela; Lou, Hou Keong; Mishra-Sharma, Siddharth
2017-11-01
This paper proposes a new search program for dark sector parton showers at the Large Hadron Collider (LHC). These signatures arise in theories characterized by strong dynamics in a hidden sector, such as Hidden Valley models. A dark parton shower can be composed of both invisible dark matter particles as well as dark sector states that decay to Standard Model particles via a portal. The focus here is on the specific case of `semi-visible jets,' jet-like collider objects where the visible states in the shower are Standard Model hadrons. We present a Simplified Model-like parametrization for the LHC observables and propose targeted search strategies for regions of parameter space that are not covered by existing analyses. Following the `mono-X' literature, the portal is modeled using either an effective field theoretic contact operator approach or with one of two ultraviolet completions; sensitivity projections are provided for all three cases. We additionally highlight that the LHC has a unique advantage over direct detection experiments in the search for this class of dark matter theories.
Experimental verification of an interpolation algorithm for improved estimates of animal position
NASA Astrophysics Data System (ADS)
Schell, Chad; Jaffe, Jules S.
2004-07-01
This article presents experimental verification of an interpolation algorithm that was previously proposed in Jaffe [J. Acoust. Soc. Am. 105, 3168-3175 (1999)]. The goal of the algorithm is to improve estimates of both target position and target strength by minimizing a least-squares residual between noise-corrupted target measurement data and the output of a model of the sonar's amplitude response to a target at a set of known locations. Although this positional estimator was shown to be a maximum likelihood estimator, in principle, experimental verification was desired because of interest in understanding its true performance. Here, the accuracy of the algorithm is investigated by analyzing the correspondence between a target's true position and the algorithm's estimate. True target position was measured by precise translation of a small test target (bead) or from the analysis of images of fish from a coregistered optical imaging system. Results with the stationary spherical test bead in a high signal-to-noise environment indicate that a large increase in resolution is possible, while results with commercial aquarium fish indicate a smaller increase is obtainable. However, in both experiments the algorithm provides improved estimates of target position over those obtained by simply accepting the angular positions of the sonar beam with maximum output as target position. In addition, increased accuracy in target strength estimation is possible by considering the effects of the sonar beam patterns relative to the interpolated position. A benefit of the algorithm is that it can be applied ``ex post facto'' to existing data sets from commercial multibeam sonar systems when only the beam intensities have been stored after suitable calibration.
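In spirit, the estimator compares the measured per-beam amplitudes with a model of the sonar's amplitude response on a grid of candidate positions and picks the position and target strength that minimize the squared residual. A toy version of that grid search, with a made-up Gaussian beam model standing in for the calibrated response, is sketched below; the beam spacing, beamwidth, and noise level are assumptions of the sketch.

```python
import numpy as np

def beam_response(angles, beam_centers, beamwidth=2.0):
    """Toy Gaussian beam pattern: response of each beam to a target at `angles` (degrees)."""
    return np.exp(-0.5 * ((angles[..., None] - beam_centers) / beamwidth) ** 2)

def estimate_position(measured, beam_centers, candidates):
    """Least-squares estimate of target angle and strength over candidate angles."""
    model = beam_response(candidates, beam_centers)           # (n_candidates, n_beams)
    # For each candidate angle, the best-fit target strength has a closed form.
    strength = (model * measured).sum(axis=1) / (model ** 2).sum(axis=1)
    residual = ((measured - strength[:, None] * model) ** 2).sum(axis=1)
    i = np.argmin(residual)
    return candidates[i], strength[i]

beam_centers = np.arange(-10.0, 10.1, 2.0)                    # beams every 2 degrees
true_angle, true_strength = 3.3, 1.7
measured = true_strength * beam_response(np.array(true_angle), beam_centers)
measured += np.random.default_rng(0).normal(0, 0.01, measured.shape)
candidates = np.linspace(-10, 10, 2001)                       # 0.01-degree search grid
print(estimate_position(measured, beam_centers, candidates))  # close to (3.3, 1.7)
```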
NASA Astrophysics Data System (ADS)
Akashi-Ronquest, M.; Amaudruz, P.-A.; Batygov, M.; Beltran, B.; Bodmer, M.; Boulay, M. G.; Broerman, B.; Buck, B.; Butcher, A.; Cai, B.; Caldwell, T.; Chen, M.; Chen, Y.; Cleveland, B.; Coakley, K.; Dering, K.; Duncan, F. A.; Formaggio, J. A.; Gagnon, R.; Gastler, D.; Giuliani, F.; Gold, M.; Golovko, V. V.; Gorel, P.; Graham, K.; Grace, E.; Guerrero, N.; Guiseppe, V.; Hallin, A. L.; Harvey, P.; Hearns, C.; Henning, R.; Hime, A.; Hofgartner, J.; Jaditz, S.; Jillings, C. J.; Kachulis, C.; Kearns, E.; Kelsey, J.; Klein, J. R.; Kuźniak, M.; LaTorre, A.; Lawson, I.; Li, O.; Lidgard, J. J.; Liimatainen, P.; Linden, S.; McFarlane, K.; McKinsey, D. N.; MacMullin, S.; Mastbaum, A.; Mathew, R.; McDonald, A. B.; Mei, D.-M.; Monroe, J.; Muir, A.; Nantais, C.; Nicolics, K.; Nikkel, J. A.; Noble, T.; O'Dwyer, E.; Olsen, K.; Orebi Gann, G. D.; Ouellet, C.; Palladino, K.; Pasuthip, P.; Perumpilly, G.; Pollmann, T.; Rau, P.; Retière, F.; Rielage, K.; Schnee, R.; Seibert, S.; Skensved, P.; Sonley, T.; Vázquez-Jáuregui, E.; Veloce, L.; Walding, J.; Wang, B.; Wang, J.; Ward, M.; Zhang, C.
2015-05-01
Many current and future dark matter and neutrino detectors are designed to measure scintillation light with a large array of photomultiplier tubes (PMTs). The energy resolution and particle identification capabilities of these detectors depend in part on the ability to accurately identify individual photoelectrons in PMT waveforms despite large variability in pulse amplitudes and pulse pileup. We describe a Bayesian technique that can identify the times of individual photoelectrons in a sampled PMT waveform without deconvolution, even when pileup is present. To demonstrate the technique, we apply it to the general problem of particle identification in single-phase liquid argon dark matter detectors. Using the output of the Bayesian photoelectron counting algorithm described in this paper, we construct several test statistics for rejection of backgrounds for dark matter searches in argon. Compared to simpler methods based on either observed charge or peak finding, the photoelectron counting technique improves both energy resolution and particle identification of low energy events in calibration data from the DEAP-1 detector and simulation of the larger MiniCLEAN dark matter detector.
Testing light dark matter coannihilation with fixed-target experiments
Izaguirre, Eder; Kahn, Yonatan; Krnjaic, Gordan; ...
2017-09-01
In this paper, we introduce a novel program of fixed-target searches for thermal-origin Dark Matter (DM), which couples inelastically to the Standard Model. Since the DM only interacts by transitioning to a heavier state, freeze-out proceeds via coannihilation and the unstable heavier state is depleted at later times. For sufficiently large mass splittings, direct detection is kinematically forbidden and indirect detection is impossible, so this scenario can only be tested with accelerators. Here we propose new searches at proton and electron beam fixed-target experiments to probe sub-GeV coannihilation, exploiting the distinctive signals of up- and downscattering as well as decay of the excited state inside the detector volume. We focus on a representative model in which DM is a pseudo-Dirac fermion coupled to a hidden gauge field (dark photon), which kinetically mixes with the visible photon. We define theoretical targets in this framework and determine the existing bounds by reanalyzing results from previous experiments. We find that LSND, E137, and BaBar data already place strong constraints on the parameter space consistent with a thermal freeze-out origin, and that future searches at Belle II and MiniBooNE, as well as recently-proposed fixed-target experiments such as LDMX and BDX, can cover nearly all remaining gaps. We also briefly comment on the discovery potential for proposed beam dump and neutrino experiments which operate at much higher beam energies.
The effect of perceptual load on tactile spatial attention: Evidence from event-related potentials.
Gherri, Elena; Berreby, Fiona
2017-10-15
To investigate whether tactile spatial attention is modulated by perceptual load, behavioural and electrophysiological measures were recorded during two spatial cuing tasks in which the difficulty of the target/non-target discrimination was varied (High and Low load tasks). Moreover, to study whether attentional modulations by load are sensitive to the availability of visual information, the High and Low load tasks were carried out under both illuminated and darkness conditions. ERPs to cued and uncued non-targets were compared as a function of task (High vs. Low load) and illumination condition (Light vs. Darkness). Results revealed that the locus of tactile spatial attention was determined by a complex interaction between perceptual load and illumination conditions during sensory-specific stages of processing. In the Darkness, earlier effects of attention were present in the High load than in the Low load task, while no difference between tasks emerged in the Light. By contrast, increased load was associated with stronger attention effects during later post-perceptual processing stages regardless of illumination conditions. These findings demonstrate that ERP correlates of tactile spatial attention are strongly affected by the perceptual load of the target/non-target discrimination. However, differences between illumination conditions show that the impact of load on tactile attention depends on the presence of visual information. Perceptual load is one of the many factors that contribute to determine the effects of spatial selectivity in touch.
Automated detection of open magnetic field regions in EUV images
NASA Astrophysics Data System (ADS)
Krista, Larisza Diana; Reinard, Alysha
2016-05-01
Open magnetic regions on the Sun are either long-lived (coronal holes) or transient (dimmings) in nature, but both appear as dark regions in EUV images. For this reason their detection can be done in a similar way. As coronal holes are often large and long-lived in comparison to dimmings, their detection is more straightforward. The Coronal Hole Automated Recognition and Monitoring (CHARM) algorithm detects coronal holes using EUV images and a magnetogram. The EUV images are used to identify dark regions, and the magnetogram allows us to determine whether the dark region is unipolar - a characteristic of coronal holes. There is no temporal sensitivity in this process, since coronal hole lifetimes span days to months. Dimming regions, however, emerge and disappear within hours. Hence, the time and location of a dimming emergence need to be known to successfully identify them and distinguish them from regular coronal holes. Currently, the Coronal Dimming Tracker (CoDiT) algorithm is semi-automated - it requires the dimming emergence time and location as an input. With those inputs we can identify the dimming and track it through its lifetime. CoDiT has also been developed to allow the tracking of dimmings that split or merge - a typical feature of dimmings. The advantage of these particular algorithms is their ability to adapt to detecting different types of open field regions. For coronal hole detection, each full-disk solar image is processed individually to determine a threshold for the image; hence, we are not limited to a single pre-determined threshold. For dimming regions we also allow individual thresholds for each dimming, as they can differ substantially. This flexibility is necessary for a subjective analysis of the studied regions. These algorithms were developed with the goal of allowing us to better understand the processes that give rise to eruptive and non-eruptive open field regions. We aim to study how these regions evolve over time and what environmental factors influence their growth and decay over short and long time periods (days to solar cycles).
Searching for Dark Photons in the SeaQuest Experiment
NASA Astrophysics Data System (ADS)
Mesquita de Medeiros, Michelle
2017-01-01
The SeaQuest/E906 experiment at Fermilab was designed to study anti-quark distributions in the nucleon and nuclei by using Drell-Yan interactions between the 120 GeV proton beam from the Main Injector and different fixed targets. The front face of an iron magnet placed next to the targets serves as a beam dump while the muon pairs generated from these interactions are detected downstream. In the absorption process in the dump many particles are produced, including, possibly, dark photons through processes such as proton bremsstrahlung and eta decay. The dark photons could escape the dump and then decay into dimuons after travelling a certain distance determined by the coupling to the EM sector. The decay vertex is therefore significantly displaced, allowing for a very low background search. By detecting the dimuons with the SeaQuest spectrometer and analyzing their invariant mass distribution, one can search for signatures of these exotic processes. The present status of the dark photon search analysis will be presented. This work was supported by the U.S. Department of Energy, Office of Science, Office of Nuclear Physics, under Contract No. DE-AC02-06CH11357.
Discovery potential for directional dark matter detection with nuclear emulsions
NASA Astrophysics Data System (ADS)
Guler, A. M.;
2017-06-01
Direct dark matter searches are nowadays one of the most exciting research topics. Several experimental efforts are concentrated on the development, construction, and operation of detectors looking for the scattering of target nuclei with Weakly Interacting Massive Particles (WIMPs). In this field a new frontier can be opened by directional detectors able to reconstruct the direction of the WIMP-recoiled nucleus, thus allowing dark matter searches to be extended beyond the neutrino floor. Exploiting directionality would also give a proof of the galactic origin of dark matter, making it possible to have a clear and unambiguous signal-to-background separation. The angular distribution of WIMP-scattered nuclei is indeed expected to be peaked in the direction of the motion of the Solar System in the Galaxy, i.e. toward the Cygnus constellation, while the background distribution is expected to be isotropic. Current directional experiments are based on the use of gas TPCs, whose sensitivity is limited by the small achievable detector mass. In this paper we show the potential, in terms of exclusion limits, of a directional experiment based on the use of a solid target made of newly developed nuclear emulsions and read-out systems reaching sub-micrometric resolution.
NASA Astrophysics Data System (ADS)
Yadav, Deepti; Arora, M. K.; Tiwari, K. C.; Ghosh, J. K.
2016-04-01
Hyperspectral imaging is a powerful tool in the field of remote sensing and has been used for many applications such as mineral detection, landmine detection and target detection. Major issues in target detection using HSI are spectral variability, noise, the small size of targets, huge data dimensions, high computation cost and complex backgrounds. Many of the popular detection algorithms do not work for difficult targets (e.g., small or camouflaged ones) and may result in high false alarms. Thus, target/background discrimination is a key issue, and analyzing a target's behaviour in realistic environments is therefore crucial for the accurate interpretation of hyperspectral imagery. Using standard libraries to study a target's spectral behaviour has the limitation that the targets are measured under environmental conditions different from those of the application. This study uses spectral data of the same targets that were used during collection of the HSI image. This paper analyzes the spectra of targets in such a way that each target can be spectrally distinguished from a mixture of spectral data. An artificial neural network (ANN) has been used to identify the spectral range for reducing data, and its efficacy for improving target detection is further verified. The ANN results propose discriminating band ranges for the targets; these ranges were then used to perform target detection using four popular spectral-matching target detection algorithms. The results of the algorithms were analyzed using ROC curves to evaluate the effectiveness of the ranges suggested by the ANN over the full spectrum for detection of the desired targets. In addition, a comparative assessment of the algorithms is also performed using ROC.
NASA Astrophysics Data System (ADS)
Inc, Mustafa; Isa Aliyu, Aliyu; Yusuf, Abdullahi; Baleanu, Dumitru
2017-12-01
This paper obtains the dark, bright, dark-bright or combined optical and singular solitons to the nonlinear Schrödinger equation (NLSE) with group velocity dispersion coefficient and second-order spatio-temporal dispersion coefficient, which arises in photonics and waveguide optics and in optical fibers. The integration algorithm is the sine-Gordon equation method (SGEM). Furthermore, the explicit solutions of the equation are derived by considering the power series solutions (PSS) theory and the convergence of the solutions is guaranteed. Lastly, the modulation instability analysis (MI) is studied based on the standard linear-stability analysis and the MI gain spectrum is obtained.
Automated detection of new impact sites on Martian surface from HiRISE images
NASA Astrophysics Data System (ADS)
Xin, Xin; Di, Kaichang; Wang, Yexin; Wan, Wenhui; Yue, Zongyu
2017-10-01
In this study, an automated method for Martian new impact site detection from single images is presented. It first extracts dark areas in the full high-resolution image, then detects new impact craters within the dark areas using a cascade classifier which combines local binary pattern features and Haar-like features trained by an AdaBoost machine learning algorithm. Experimental results using 100 HiRISE images show that the overall detection rate of the proposed method is 84.5%, with a true positive rate of 86.9%. The detection rate and true positive rate in flat regions are 93.0% and 91.5%, respectively.
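As a rough illustration of the two-stage pipeline (dark-area extraction followed by cascade classification), the OpenCV sketch below runs a pre-trained cascade only inside extracted dark regions. The cascade file name, the percentile threshold and the area cut are placeholders; the paper's cascade is trained on combined LBP and Haar-like features with AdaBoost, which OpenCV's training tools also support.

```python
import cv2
import numpy as np

# Hypothetical cascade file trained offline (e.g. with opencv_traincascade).
cascade = cv2.CascadeClassifier("crater_lbp_cascade.xml")

def detect_new_impacts(image_gray, dark_percentile=5):
    """Find dark areas, then run the cascade only inside them."""
    # 1) Dark-area extraction: a simple low-percentile threshold (stand-in for the paper's method).
    thresh = np.percentile(image_gray, dark_percentile)
    dark_mask = (image_gray < thresh).astype(np.uint8) * 255
    dark_mask = cv2.morphologyEx(dark_mask, cv2.MORPH_CLOSE, np.ones((5, 5), np.uint8))

    detections = []
    contours, _ = cv2.findContours(dark_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if w * h < 100:            # skip tiny dark speckles (illustrative cut)
            continue
        roi = image_gray[y:y + h, x:x + w]
        # 2) Cascade detection (the learned features are baked into the trained XML).
        hits = cascade.detectMultiScale(roi, scaleFactor=1.1, minNeighbors=3)
        detections += [(x + hx, y + hy, hw, hh) for (hx, hy, hw, hh) in hits]
    return detections
```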
On Connected Target k-Coverage in Heterogeneous Wireless Sensor Networks.
Yu, Jiguo; Chen, Ying; Ma, Liran; Huang, Baogui; Cheng, Xiuzhen
2016-01-15
Coverage and connectivity are two important performance evaluation indices for wireless sensor networks (WSNs). In this paper, we focus on the connected target k-coverage (CTCk) problem in heterogeneous wireless sensor networks (HWSNs). A centralized connected target k-coverage algorithm (CCTCk) and a distributed connected target k-coverage algorithm (DCTCk) are proposed so as to generate connected cover sets for energy-efficient connectivity and coverage maintenance. To be specific, our proposed algorithms aim at achieving minimum connected target k-coverage, where each target in the monitored region is covered by at least k active sensor nodes. In addition, these two algorithms strive to minimize the total number of active sensor nodes and guarantee that each sensor node is connected to a sink, such that the sensed data can be forwarded to the sink. Our theoretical analysis and simulation results show that our proposed algorithms outperform a state-of-the-art connected k-coverage protocol for HWSNs.
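A minimal greedy sketch of the connected target k-coverage setting is shown below: sensors are activated until every target is covered by at least k of them, and relays on hop-shortest communication paths are then added so each active sensor can reach the sink. This is a generic illustration of the problem, not the CCTCk or DCTCk algorithms themselves; sensing and communication ranges are assumed to be disks.

```python
import math
from collections import deque

def greedy_connected_k_coverage(sensors, targets, sink, k, sense_r, comm_r):
    """Greedy sketch: activate sensors until every target is k-covered, then patch in
    relay sensors along BFS (hop-shortest) communication paths to the sink."""
    def covers(s, t):
        return math.dist(s, t) <= sense_r

    need = {i: k for i, _ in enumerate(targets)}   # remaining coverage demand per target
    active = set()
    while any(v > 0 for v in need.values()):
        candidates = [s for s in range(len(sensors)) if s not in active]
        if not candidates:
            break                                  # not enough sensors for full k-coverage
        # Pick the sensor that reduces the most remaining demand.
        best = max(candidates, key=lambda s: sum(
            1 for i, t in enumerate(targets) if need[i] > 0 and covers(sensors[s], t)))
        active.add(best)
        for i, t in enumerate(targets):
            if need[i] > 0 and covers(sensors[best], t):
                need[i] -= 1

    # Connectivity: BFS in the communication graph toward the sink, adding path nodes as relays.
    nodes = list(range(len(sensors))) + ["sink"]
    pos = {i: sensors[i] for i in range(len(sensors))}
    pos["sink"] = sink
    adj = {u: [v for v in nodes if v != u and math.dist(pos[u], pos[v]) <= comm_r] for u in nodes}

    def bfs_path(src):
        prev, q, seen = {}, deque([src]), {src}
        while q:
            u = q.popleft()
            if u == "sink":
                path, cur = [], u
                while cur != src:
                    path.append(cur)
                    cur = prev[cur]
                return path
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    prev[v] = u
                    q.append(v)
        return []

    for s in list(active):
        active.update(n for n in bfs_path(s) if n != "sink")
    return active
```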
Detecting ultralight bosonic dark matter via absorption in superconductors
Hochberg, Yonit; Lin, Tongyan; Zurek, Kathryn M.
2016-07-18
Superconducting targets have recently been proposed for the direct detection of dark matter as light as a keV, via elastic scattering off conduction electrons in Cooper pairs. Detecting such light dark matter requires sensitivity to energies as small as the superconducting gap of O(meV). Here we show that these same superconducting devices can detect much lighter DM, of meV to eV mass, via dark matter absorption on a conduction electron, followed by emission of an athermal phonon. Lastly, we demonstrate the power of this setup for relic kinetically mixed hidden photons, pseudoscalars, and scalars, showing that the reach can exceed current astrophysical and terrestrial constraints with only a moderate exposure.
Dark Matter Detection Using Helium Evaporation and Field Ionization
NASA Astrophysics Data System (ADS)
Maris, Humphrey J.; Seidel, George M.; Stein, Derek
2017-11-01
We describe a method for dark matter detection based on the evaporation of helium atoms from a cold surface and their subsequent detection using field ionization. When a dark matter particle scatters off a nucleus of the target material, elementary excitations (phonons or rotons) are produced. Excitations which have an energy greater than the binding energy of helium to the surface can result in the evaporation of helium atoms. We propose to detect these atoms by ionizing them in a strong electric field. Because the binding energy of helium to surfaces can be below 1 meV, this detection scheme opens up new possibilities for the detection of dark matter particles in a mass range down to 1 MeV/c².
NASA Astrophysics Data System (ADS)
Wei, Jing; Sun, Lin; Huang, Bo; Bilal, Muhammad; Zhang, Zhaoyang; Wang, Lunche
2018-02-01
The objective of this study is to evaluate typical aerosol optical depth (AOD) products in China, which has experienced serious and increasing atmospheric particulate pollution. For this, the Aqua MODerate resolution Imaging Spectroradiometer (MODIS) AOD products (MYD04) at 10 km spatial resolution and the Visible Infrared Imaging Radiometer Suite (VIIRS) Environmental Data Record (EDR) AOD product at 6 km resolution, for different Quality Flags (QF), are obtained for validation against AErosol RObotic NETwork (AERONET) AOD measurements during 2013-2016. Results show that the VIIRS EDR (which uses a similar Dark Target (DT) approach) and MODIS DT algorithms perform worse, with only 45.36% and 45.59% of the retrievals (QF = 3) falling within the Expected Error (EE, ±(0.05 + 15%)), compared to the Deep Blue (DB) algorithm (69.25%, QF ≥ 2). The DT retrievals perform poorly over the Beijing-Tianjin-Hebei (BTH) and Yangtze-River-Delta (YRD) regions, where they significantly overestimate the AOD observations, but they perform better over the Pearl-River-Delta (PRD) region than the DB retrievals, which seriously underestimate the AOD loadings. It is not surprising that the DT algorithm performs better over vegetated areas while the DB algorithm performs better over bright areas; this mainly depends on the accuracy of the surface reflectance estimation over different land use types. In general, the sensitivity of aerosol to apparent reflectance is reduced by about 34% for an increase in surface reflectance of 0.01. Moreover, the VIIRS EDR and MODIS DT algorithms perform better overall in winter, with 64.53% and 72.22% of the retrievals within the EE, but with fewer retrievals. The DB algorithm performs worst (57.17%) in summer, mainly affected by vegetation growth, but shows overall high accuracy, with more than 62% of the collections falling within the EE, in the other three seasons. Results suggest that the quality assurance process can help improve the overall data quality of the MYD04 DB retrievals, but this is not always true for the VIIRS EDR and MYD04 DT AOD retrievals.
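The validation statistic quoted above, the fraction of retrievals falling within the Expected Error envelope ±(0.05 + 15%), is simple to compute once satellite and AERONET AODs have been collocated; a minimal sketch follows, with toy numbers standing in for real collocations.

```python
import numpy as np

def within_expected_error(aod_satellite, aod_aeronet):
    """Fraction of collocated retrievals inside the EE envelope ±(0.05 + 15%).

    Minimal sketch of the validation metric quoted in the abstract; the
    collocation itself (spatio-temporal matching of satellite pixels with
    AERONET sites) is assumed to have been done already.
    """
    sat = np.asarray(aod_satellite, dtype=float)
    aer = np.asarray(aod_aeronet, dtype=float)
    ee = 0.05 + 0.15 * aer
    inside = np.abs(sat - aer) <= ee
    return inside.mean()

# Toy collocations; prints the fraction of them that fall inside the envelope.
print(within_expected_error([0.30, 0.80, 0.12, 0.55], [0.25, 0.60, 0.10, 0.70]))
```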
The XENON1T dark matter experiment
NASA Astrophysics Data System (ADS)
Aprile, E.; Aalbers, J.; Agostini, F.; Alfonsi, M.; Amaro, F. D.; Anthony, M.; Antunes, B.; Arneodo, F.; Balata, M.; Barrow, P.; Baudis, L.; Bauermeister, B.; Benabderrahmane, M. L.; Berger, T.; Breskin, A.; Breur, P. A.; Brown, A.; Brown, E.; Bruenner, S.; Bruno, G.; Budnik, R.; Bütikofer, L.; Calvén, J.; Cardoso, J. M. R.; Cervantes, M.; Chiarini, A.; Cichon, D.; Coderre, D.; Colijn, A. P.; Conrad, J.; Corrieri, R.; Cussonneau, J. P.; Decowski, M. P.; de Perio, P.; Gangi, P. Di; Giovanni, A. Di; Diglio, S.; Disdier, J.-M.; Doets, M.; Duchovni, E.; Eurin, G.; Fei, J.; Ferella, A. D.; Fieguth, A.; Franco, D.; Front, D.; Fulgione, W.; Rosso, A. Gallo; Galloway, M.; Gao, F.; Garbini, M.; Geis, C.; Giboni, K.-L.; Goetzke, L. W.; Grandi, L.; Greene, Z.; Grignon, C.; Hasterok, C.; Hogenbirk, E.; Huhmann, C.; Itay, R.; James, A.; Kaminsky, B.; Kazama, S.; Kessler, G.; Kish, A.; Landsman, H.; Lang, R. F.; Lellouch, D.; Levinson, L.; Lin, Q.; Lindemann, S.; Lindner, M.; Lombardi, F.; Lopes, J. A. M.; Maier, R.; Manfredini, A.; Maris, I.; Undagoitia, T. Marrodán; Masbou, J.; Massoli, F. V.; Masson, D.; Mayani, D.; Messina, M.; Micheneau, K.; Molinario, A.; Morå, K.; Murra, M.; Naganoma, J.; Ni, K.; Oberlack, U.; Orlandi, D.; Othegraven, R.; Pakarha, P.; Parlati, S.; Pelssers, B.; Persiani, R.; Piastra, F.; Pienaar, J.; Pizzella, V.; Piro, M.-C.; Plante, G.; Priel, N.; García, D. Ramírez; Rauch, L.; Reichard, S.; Reuter, C.; Rizzo, A.; Rosendahl, S.; Rupp, N.; Santos, J. M. F. dos; Saldanha, R.; Sartorelli, G.; Scheibelhut, M.; Schindler, S.; Schreiner, J.; Schumann, M.; Lavina, L. Scotto; Selvi, M.; Shagin, P.; Shockley, E.; Silva, M.; Simgen, H.; Sivers, M. v.; Stern, M.; Stein, A.; Tatananni, D.; Tatananni, L.; Thers, D.; Tiseni, A.; Trinchero, G.; Tunnell, C.; Upole, N.; Vargas, M.; Wack, O.; Walet, R.; Wang, H.; Wang, Z.; Wei, Y.; Weinheimer, C.; Wittweg, C.; Wulf, J.; Ye, J.; Zhang, Y.
2017-12-01
The XENON1T experiment at the Laboratori Nazionali del Gran Sasso (LNGS) is the first WIMP dark matter detector operating with a liquid xenon target mass above the ton-scale. Out of its 3.2 t liquid xenon inventory, 2.0 t constitute the active target of the dual-phase time projection chamber. The scintillation and ionization signals from particle interactions are detected with low-background photomultipliers. This article describes the XENON1T instrument and its subsystems as well as strategies to achieve an unprecedented low background level. First results on the detector response and the performance of the subsystems are also presented.
Decision-level fusion of SAR and IR sensor information for automatic target detection
NASA Astrophysics Data System (ADS)
Cho, Young-Rae; Yim, Sung-Hyuk; Cho, Hyun-Woong; Won, Jin-Ju; Song, Woo-Jin; Kim, So-Hyeon
2017-05-01
We propose a decision-level architecture that combines synthetic aperture radar (SAR) and an infrared (IR) sensor for automatic target detection. We present a new size-based feature, called target-silhouette to reduce the number of false alarms produced by the conventional target-detection algorithm. Boolean Map Visual Theory is used to combine a pair of SAR and IR images to generate the target-enhanced map. Then basic belief assignment is used to transform this map into a belief map. The detection results of sensors are combined to build the target-silhouette map. We integrate the fusion mass and the target-silhouette map on the decision level to exclude false alarms. The proposed algorithm is evaluated using a SAR and IR synthetic database generated by SE-WORKBENCH simulator, and compared with conventional algorithms. The proposed fusion scheme achieves higher detection rate and lower false alarm rate than the conventional algorithms.
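The abstract does not spell out how the basic belief assignments are combined; a common choice for this kind of decision-level fusion is Dempster's rule of combination, sketched below for a two-hypothesis frame (target vs. clutter) with an explicit ignorance mass. The mass values are illustrative, and this is a generic evidence-fusion example rather than the paper's exact scheme.

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination over the frame {'T', 'C'}, with the
    ignorance mass assigned to 'TC'. The per-sensor masses would come from
    SAR and IR detector confidences."""
    frame = ["T", "C", "TC"]

    def intersect(a, b):
        if a == "TC":
            return b
        if b == "TC":
            return a
        return a if a == b else ""          # "" means conflicting (empty) intersection

    combined = {h: 0.0 for h in frame}
    conflict = 0.0
    for a in frame:
        for b in frame:
            inter = intersect(a, b)
            if inter:
                combined[inter] += m1[a] * m2[b]
            else:
                conflict += m1[a] * m2[b]
    # Normalize out the conflicting mass.
    return {h: v / (1.0 - conflict) for h, v in combined.items()}

# Example: SAR fairly confident in a target, IR weakly so.
m_sar = {"T": 0.7, "C": 0.1, "TC": 0.2}
m_ir = {"T": 0.5, "C": 0.2, "TC": 0.3}
print(dempster_combine(m_sar, m_ir))        # fused mass favors 'T' more strongly
```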
Penalty dynamic programming algorithm for dim targets detection in sensor systems.
Huang, Dayu; Xue, Anke; Guo, Yunfei
2012-01-01
In order to detect and track multiple maneuvering dim targets in sensor systems, an improved dynamic programming track-before-detect algorithm (DP-TBD) called penalty DP-TBD (PDP-TBD) is proposed. The performance of the tracking techniques is used as feedback to the detection part. The feedback is constructed by a penalty term in the merit function, and the penalty term is a function of the possible target state estimation, which can be obtained by the tracking methods. With this feedback, the algorithm combines traditional tracking techniques with DP-TBD and it can be applied to simultaneously detect and track maneuvering dim targets. Meanwhile, a reasonable constraint that a sensor measurement can originate from only one target or clutter is proposed to minimize track separation. Thus, the algorithm can be used in multi-target situations with an unknown number of targets. The efficiency and advantages of PDP-TBD compared with two existing methods are demonstrated by several simulations.
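A minimal one-dimensional sketch of dynamic-programming track-before-detect with a penalty term in the merit function is given below. The penalty here is a simple constant applied to cells outside a tracker-supplied prediction mask, which only loosely stands in for the paper's penalty as a function of the estimated target state; transition limits and weights are illustrative.

```python
import numpy as np

def dp_tbd_with_penalty(frames, max_step=2, penalty_weight=0.5, predictions=None):
    """1-D sketch of DP track-before-detect. `frames` is a (K, N) array of per-cell
    measurements; `predictions` (optional, (K, N) boolean mask from a tracker) marks
    cells the tracker expects the target to occupy, and other cells are penalized."""
    K, N = frames.shape
    merit = np.full((K, N), -np.inf)
    backptr = np.zeros((K, N), dtype=int)
    merit[0] = frames[0]

    for k in range(1, K):
        for s in range(N):
            lo, hi = max(0, s - max_step), min(N, s + max_step + 1)
            prev = merit[k - 1, lo:hi]            # reachable previous states
            j = int(np.argmax(prev))
            backptr[k, s] = lo + j
            pen = 0.0
            if predictions is not None and not predictions[k, s]:
                pen = penalty_weight              # feedback from the tracking stage
            merit[k, s] = frames[k, s] + prev[j] - pen

    # Backtrack from the best terminal cell to recover the track.
    s = int(np.argmax(merit[-1]))
    track = [s]
    for k in range(K - 1, 0, -1):
        s = backptr[k, s]
        track.append(s)
    return merit[-1].max(), track[::-1]
```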
2017-03-24
This enhanced-color image of a mysterious dark spot on Jupiter seems to reveal a Jovian "galaxy" of swirling storms. Juno acquired this JunoCam image on Feb. 2, 2017, at 5:13 a.m. PDT (8:13 a.m. EDT), at an altitude of 9,000 miles (14,500 kilometers) above the giant planet's cloud tops. This publicly selected target was simply titled "Dark Spot." In ground-based images it was difficult to tell that it is a dark storm. Citizen scientist Roman Tkachenko enhanced the color to bring out the rich detail in the storm and surrounding clouds. Just south of the dark storm is a bright, oval-shaped storm with high, bright, white clouds, reminiscent of a swirling galaxy. As a final touch, he rotated the image 90 degrees, turning the picture into a work of art. http://photojournal.jpl.nasa.gov/catalog/PIA21386
Dark Matter Limits from Dwarf Spheroidal Galaxies with the HAWC Gamma-Ray Observatory
NASA Astrophysics Data System (ADS)
Albert, A.; Alfaro, R.; Alvarez, C.; Álvarez, J. D.; Arceo, R.; Arteaga-Velázquez, J. C.; Avila Rojas, D.; Ayala Solares, H. A.; Bautista-Elivar, N.; Becerril, A.; Belmont-Moreno, E.; BenZvi, S. Y.; Bernal, A.; Braun, J.; Brisbois, C.; Caballero-Mora, K. S.; Capistrán, T.; Carramiñana, A.; Casanova, S.; Castillo, M.; Cotti, U.; Cotzomi, J.; Coutiño de León, S.; De León, C.; De la Fuente, E.; Diaz Hernandez, R.; Dingus, B. L.; DuVernois, M. A.; Díaz-Vélez, J. C.; Ellsworth, R. W.; Engel, K.; Fiorino, D. W.; Fraija, N.; García-González, J. A.; Garfias, F.; González, M. M.; Goodman, J. A.; Hampel-Arias, Z.; Harding, J. P.; Hernandez, S.; Hernandez-Almada, A.; Hona, B.; Hüntemeyer, P.; Iriarte, A.; Jardin-Blicq, A.; Joshi, V.; Kaufmann, S.; Kieda, D.; Lauer, R. J.; Lennarz, D.; León Vargas, H.; Linnemann, J. T.; Longinotti, A. L.; Longo Proper, M.; Raya, G. Luis; Luna-García, R.; López-Coto, R.; Malone, K.; Marinelli, S. S.; Martinez-Castellanos, I.; Martínez-Castro, J.; Martínez-Huerta, H.; Matthews, J. A.; Miranda-Romagnoli, P.; Moreno, E.; Mostafá, M.; Nellen, L.; Newbold, M.; Nisa, M. U.; Noriega-Papaqui, R.; Pelayo, R.; Pretz, J.; Pérez-Pérez, E. G.; Ren, Z.; Rho, C. D.; Rivière, C.; Rosa-González, D.; Rosenberg, M.; Ruiz-Velasco, E.; Salesa Greus, F.; Sandoval, A.; Schneider, M.; Schoorlemmer, H.; Sinnis, G.; Smith, A. J.; Springer, R. W.; Surajbali, P.; Taboada, I.; Tibolla, O.; Tollefson, K.; Torres, I.; Vianello, G.; Weisgarber, T.; Westerhoff, S.; Wood, J.; Yapici, T.; Younk, P. W.; Zhou, H.
2018-02-01
The High Altitude Water Cherenkov (HAWC) gamma-ray observatory is a wide field of view observatory sensitive to 500 GeV–100 TeV gamma-rays and cosmic rays. It can also perform diverse indirect searches for dark matter annihilation and decay. Among the most promising targets for the indirect detection of dark matter are dwarf spheroidal galaxies. These objects are expected to have few astrophysical sources of gamma-rays but high dark matter content, making them ideal candidates for an indirect dark matter detection with gamma-rays. Here we present individual limits on the annihilation cross section and decay lifetime for 15 dwarf spheroidal galaxies within the field of view, as well as their combined limit. These are the first limits on the annihilation cross section and decay lifetime using data collected with HAWC. We also present the HAWC flux upper limits of the 15 dwarf spheroidal galaxies in half-decade energy bins.
Status and perspective of the DarkSide experiment at LNGS
Agnes, P.
2018-09-01
The DarkSide experiment aims to perform a background-free direct search for dark matter with a dual-phase argon TPC. The current phase of the experiment, DarkSide-50, is acquiring data at Laboratori Nazionali del Gran Sasso and has produced the most sensitive limit on the WIMP-nucleon cross section ever obtained with a liquid argon target (2.0 × 10⁻⁴⁴ cm² for a WIMP mass of 100 GeV/c²). The future phase of the experiment will be a 20 t fiducial mass detector, designed to reach a sensitivity of ~1 × 10⁻⁴⁷ cm² (at 1 TeV/c² WIMP mass) with a background-free exposure of 100 t yr. Here, this work contains a discussion of the current status of the DarkSide-50 WIMP search and of the results which are more relevant for the construction of the future detector.
Dark interactions and cosmological fine-tuning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Quartin, Miguel; Calvao, Mauricio O; Joras, Sergio E
2008-05-15
Cosmological models involving an interaction between dark matter and dark energy have been proposed in order to solve the so-called coincidence problem. Different forms of coupling have been studied, but there have been claims that observational data seem to narrow (some of) them down to something annoyingly close to the ΛCDM (CDM: cold dark matter) model, thus greatly reducing their ability to deal with the problem in the first place. The smallness problem of the initial energy density of dark energy has also been a target of cosmological models in recent years. Making use of a moderately general coupling scheme, this paper aims to unite these different approaches and shed some light on whether this class of models has any true perspective in suppressing the aforementioned issues that plague our current understanding of the universe, in a quantitative and unambiguous way.
Cosmology and accelerator tests of strongly interacting dark matter
NASA Astrophysics Data System (ADS)
Berlin, Asher; Blinov, Nikita; Gori, Stefania; Schuster, Philip; Toro, Natalia
2018-03-01
A natural possibility for dark matter is that it is composed of the stable pions of a QCD-like hidden sector. Existing literature largely assumes that pion self-interactions alone control the early universe cosmology. We point out that processes involving vector mesons typically dominate the physics of dark matter freeze-out and significantly widen the viable mass range for these models. The vector mesons also give rise to striking signals at accelerators. For example, in most of the cosmologically favored parameter space, the vector mesons are naturally long-lived and produce standard model particles in their decays. Electron and proton beam fixed-target experiments such as HPS, SeaQuest, and LDMX can exploit these signals to explore much of the viable parameter space. We also comment on dark matter decay inherent in a large class of previously considered models and explain how to ensure dark matter stability.
Direct search for dark matter with DarkSide
Agnes, P.
2015-11-16
Here, the DarkSide experiment is designed for the direct detection of Dark Matter with a double-phase liquid argon TPC operating underground at Laboratori Nazionali del Gran Sasso. The TPC is placed inside a 30 ton liquid organic scintillator sphere, acting as a neutron veto, which is in turn installed inside a 1 kt water Cherenkov detector. The current detector has been running since November 2013 with a 50 kg atmospheric argon fill, and we report here the first null results of a dark matter search for a (1422 ± 67) kg·d exposure. This result corresponds to a 90% CL upper limit on the WIMP-nucleon cross section of 6.1 × 10⁻⁴⁴ cm² (for a WIMP mass of 100 GeV/c²) and is currently the most sensitive limit obtained with an argon target.
Wang, Xuezhi; Huang, Xiaotao; Suvorova, Sofia; Moran, Bill
2018-01-01
Golay complementary waveforms can, in theory, yield radar returns of high range resolution with essentially zero sidelobes. In practice, when deployed conventionally, while high signal-to-noise ratios can be achieved for static target detection, significant range sidelobes are generated by target returns of nonzero Doppler, causing unreliable detection. We consider signal processing techniques using Golay complementary waveforms to improve radar detection performance in scenarios involving multiple nonzero Doppler targets. A signal processing procedure based on an existing, so-called Binomial Design algorithm that alters the transmission order of Golay complementary waveforms and weights the returns is proposed in an attempt to achieve an enhanced illumination performance. The procedure applies one of three proposed waveform transmission ordering algorithms, followed by a pointwise nonlinear processor combining the outputs of the Binomial Design algorithm and one of the ordering algorithms. The computational complexities of the Binomial Design algorithm and the three ordering algorithms are compared, and a statistical analysis of the performance of the pointwise nonlinear processing is given. Estimation of the areas in the Delay–Doppler map occupied by significant range sidelobes for given targets is also discussed. Numerical simulations for the comparison of the performances of the Binomial Design algorithm and the three ordering algorithms are presented for both fixed and randomized target locations. The simulation results demonstrate that the proposed signal processing procedure has a better detection performance in terms of lower sidelobes and higher Doppler resolution in the presence of multiple nonzero Doppler targets compared to existing methods.
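The zero-sidelobe property underlying all of this is easy to reproduce: for a Golay complementary pair, the autocorrelations of the two sequences sum to a delta function. The sketch below builds a pair by the standard doubling construction and checks the property numerically; it does not implement the Binomial Design or the proposed ordering algorithms, which address how this cancellation degrades for returns with nonzero Doppler.

```python
import numpy as np

def golay_pair(n_doublings=3):
    """Build a Golay complementary pair by the standard doubling construction:
    (a, b) -> (a|b, a|-b). Starting from ([1], [1]) gives length-2**n sequences."""
    a, b = np.array([1.0]), np.array([1.0])
    for _ in range(n_doublings):
        a, b = np.concatenate([a, b]), np.concatenate([a, -b])
    return a, b

def autocorr(x):
    return np.correlate(x, x, mode="full")

a, b = golay_pair(3)                       # length-8 complementary pair
total = autocorr(a) + autocorr(b)
print(total)                               # 2N at zero lag, zero at every other lag
```

When the two waveforms are transmitted on alternating pulses, a target with nonzero Doppler shifts the phase between the two returns so the cancellation is no longer exact, which is precisely what the transmission-ordering and weighting schemes above try to mitigate.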
NASA Astrophysics Data System (ADS)
Yi, Juan; Du, Qingyu; Zhang, Hong jiang; Zhang, Yao lei
2017-11-01
Target recognition is a key technology in intelligent image processing and a leading topic in current application development; with the enhancement of computer processing power, autonomous target recognition algorithms have gradually become more intelligent and have shown good adaptability. Taking airports as the research object, we analyze the layout characteristics of an airport, construct a knowledge model, and independently design a target recognition algorithm based on Gabor filtering and the Radon transform. The algorithm was verified on airport imagery through image processing and feature extraction, and achieved good recognition results.
General Quantum Meet-in-the-Middle Search Algorithm Based on Target Solution of Fixed Weight
NASA Astrophysics Data System (ADS)
Fu, Xiang-Qun; Bao, Wan-Su; Wang, Xiang; Shi, Jian-Hong
2016-10-01
Similar to the classical meet-in-the-middle algorithm, the storage and computation complexity are the key factors that decide the efficiency of the quantum meet-in-the-middle algorithm. Aiming at target vectors of fixed weight, and based on the quantum meet-in-the-middle algorithm, an algorithm for searching all n-product vectors with the same weight is presented, whose complexity is better than that of the exhaustive search algorithm. The algorithm can also reduce the storage complexity of the quantum meet-in-the-middle search algorithm. Then, based on this algorithm and the knapsack vector of the Chor-Rivest public-key cryptosystem of fixed weight d, we present a general quantum meet-in-the-middle search algorithm based on a target solution of fixed weight, whose computational complexity is $\sum_{j=0}^{d}\left(O\left(\sqrt{C_{n-k+1}^{d-j}}\right)+O\left(C_{k}^{j}\log C_{k}^{j}\right)\right)$ with $\sum_{i=0}^{d} C_{k}^{i}$ memory cost, and the optimal value of k is given. Compared to the quantum meet-in-the-middle search algorithm for the knapsack problem and the quantum algorithm for searching a target solution of fixed weight, the computational complexity of our algorithm is lower, and its storage complexity is smaller than that of the quantum meet-in-the-middle algorithm. Supported by the National Basic Research Program of China under Grant No. 2013CB338002 and the National Natural Science Foundation of China under Grant No. 61502526.
Research on infrared small-target tracking technology under complex background
NASA Astrophysics Data System (ADS)
Liu, Lei; Wang, Xin; Chen, Jilu; Pan, Tao
2012-10-01
In this paper, the basic principles and implementation flow charts of a series of target tracking algorithms are described. Building on this work, moving-target tracking software based on OpenCV was developed on the MFC software development platform. Three kinds of tracking algorithms are integrated into this software, including the Kalman filter tracking method and the Camshift tracking method. To describe the software clearly, its framework and functions are presented in this paper. Finally, the implementation processes and results are analyzed, and the target tracking algorithms are evaluated both subjectively and objectively. This work is of significance for the application of infrared target tracking technology.
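Since the software described above is built on OpenCV, a generic Camshift tracking loop of the kind it could contain is sketched below: a hue histogram of an initial target window is back-projected into each new frame and cv2.CamShift updates the search window. The video file name and the initial window are placeholders, and the hue histogram is a simplification for infrared imagery, where an intensity histogram may be more appropriate.

```python
import cv2
import numpy as np

# Minimal CamShift tracking loop, assuming an initial bounding box (x, y, w, h)
# for the target has been provided (e.g. by a detection stage).
cap = cv2.VideoCapture("ir_sequence.avi")        # hypothetical input video
ok, frame = cap.read()
x, y, w, h = 200, 150, 40, 40                    # assumed initial window
roi = frame[y:y + h, x:x + w]
hsv_roi = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
roi_hist = cv2.calcHist([hsv_roi], [0], None, [180], [0, 180])
cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)
term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

track_window = (x, y, w, h)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    back_proj = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
    rot_rect, track_window = cv2.CamShift(back_proj, track_window, term_crit)
    pts = cv2.boxPoints(rot_rect).astype(np.int32)
    cv2.polylines(frame, [pts], True, (0, 255, 0), 2)   # draw the tracked window
    cv2.imshow("tracking", frame)
    if cv2.waitKey(30) & 0xFF == 27:                    # Esc to quit
        break
cap.release()
```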
Chávez-Hernández, Elva C.; Alejandri-Ramírez, Naholi D.; Juárez-González, Vasti T.; Dinkova, Tzvetanka D.
2015-01-01
Maize somatic embryogenesis (SE) is induced from the immature zygotic embryo in darkness and under appropriate hormone levels. Small RNA expression is reprogrammed and certain miRNAs become particularly enriched during induction while others, characteristic of the zygotic embryo, decrease. To explore the impact of different environmental cues on miRNA regulation in maize SE, we tested specific miRNA abundance and their target gene expression in response to photoperiod and hormone depletion for two different maize cultivars (VS-535 and H-565). The expression levels of miR156, miR159, miR164, miR168, miR397, miR398, miR408, miR528, and some predicted targets (SBP23, GA-MYB, CUC2, AGO1c, LAC2, SOD9, GR1, SOD1A, PLC) were examined upon staged hormone depletion in the presence of light photoperiod or darkness. Almost all examined miRNAs, except miR159, increased upon hormone depletion, regardless of photoperiod absence/presence. miR528, miR408, and miR398 changed the most. On the other hand, expression of miRNA target genes was strongly regulated by the photoperiod exposure. Stress-related miRNA targets showed greater differences between cultivars than development-related targets. The miRNA/target inverse relationship was more frequently observed in darkness than in light. Interestingly, miR528, but not miR159, miR168 or miR398, was located on polyribosome fractions, suggesting a role for this miRNA at the level of translation. Overall our results demonstrate that hormone depletion exerts a great influence on specific miRNA expression during plant regeneration independently of light. However, their targets are additionally influenced by the presence of photoperiod. The reproducibility or differences observed for particular miRNA-target regulation between two different highly embryogenic genotypes provide clues for conserved miRNA roles within the SE process.
Glint-induced false alarm reduction in signature adaptive target detection
NASA Astrophysics Data System (ADS)
Crosby, Frank J.
2002-07-01
The signal adaptive target detection algorithm developed by Crosby and Riley uses target geometry to discern anomalies in local backgrounds. Detection is not restricted based on specific target signatures. The robustness of the algorithm is limited by an increased false alarm potential. The base algorithm is extended to eliminate one common source of false alarms in a littoral environment. This common source is glint reflected on the surface of water. The spectral and spatial transience of glint prevent straightforward characterization and complicate exclusion. However, the statistical basis of the detection algorithm and its inherent computations allow for glint discernment and the removal of its influence.
Target-type probability combining algorithms for multisensor tracking
NASA Astrophysics Data System (ADS)
Wigren, Torbjorn
2001-08-01
Algorithms for the handling of target type information in an operational multi-sensor tracking system are presented. The paper discusses recursive target type estimation, computation of crosses from passive data (strobe track triangulation), as well as the computation of the quality of the crosses for deghosting purposes. The focus is on Bayesian algorithms that operate in the discrete target type probability space, and on the approximations introduced for computational complexity reduction. The centralized algorithms are able to fuse discrete data from a variety of sensors and information sources, including IFF equipment, ESMs, IRSTs as well as flight envelopes estimated from track data. All algorithms are asynchronous and can be tuned to handle clutter, erroneous associations as well as missed and erroneous detections. A key to obtaining this ability is the inclusion of data forgetting by a procedure for propagation of target type probability states between measurement time instances. Other important properties of the algorithms are their abilities to handle ambiguous data and scenarios. The above aspects are illustrated in a simulation study. The simulation setup includes 46 air targets of 6 different types that are tracked by 5 airborne sensor platforms using ESMs and IRSTs as data sources.
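A stripped-down version of recursive type estimation with data forgetting reads as follows: between measurements the discrete type probabilities are relaxed toward a uniform prior, and each new measurement multiplies in a per-type likelihood. The exponential-fade propagation used here is an assumed form chosen for illustration, not necessarily the propagation model of the operational system.

```python
import numpy as np

def propagate(p, forgetting=0.05):
    """Move the discrete type probabilities slightly back toward uniform between
    measurements ('data forgetting'), so stale evidence decays over time."""
    uniform = np.full_like(p, 1.0 / p.size)
    return (1.0 - forgetting) * p + forgetting * uniform

def update(p, likelihood):
    """Bayes update with the per-type likelihood of the new measurement
    (e.g. an IFF reply, an ESM emitter match, or a flight-envelope test)."""
    post = p * likelihood
    return post / post.sum()

# Example: 3 target types, two measurements separated by a propagation step.
p = np.array([1 / 3, 1 / 3, 1 / 3])
p = update(p, np.array([0.7, 0.2, 0.1]))     # first sensor favors type 0
p = propagate(p, forgetting=0.1)             # time passes; soften the belief
p = update(p, np.array([0.6, 0.3, 0.1]))     # second sensor also favors type 0
print(p)
```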
Chen, Zhiru; Hong, Wenxue
2016-02-01
Considering the low prediction accuracy for positive samples and the poor overall classification effects caused by unbalanced sample data of MicroRNA (miRNA) targets, we propose a support vector machine-integration of under-sampling and weight (SVM-IUSM) algorithm in this paper, an under-sampling method based on ensemble learning. The algorithm adopts SVM as the learning algorithm and AdaBoost as the integration framework, and embeds clustering-based under-sampling into the iterative process, aiming at reducing the degree of unbalanced distribution of positive and negative samples. Meanwhile, in the process of adaptive weight adjustment of the samples, the SVM-IUSM algorithm eliminates abnormal negative samples with a robust sample-weight smoothing mechanism so as to avoid over-learning. Finally, the prediction of the integrated miRNA target classifier is achieved by combining multiple weak classifiers through a voting mechanism. The experiments revealed that SVM-IUSM, compared with other algorithms on unbalanced dataset collections, could not only improve the accuracy on positive targets and the overall effect of classification, but also enhance the generalization ability of the miRNA target classifier.
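To make the under-sampling-plus-ensemble idea concrete, the sketch below trains several SVMs, each on all positive samples together with a clustering-based under-sample of the negatives, and combines them by majority vote. It is a generic illustration of the ingredients named above; it omits the AdaBoost weighting and the robust weight-smoothing step that define SVM-IUSM.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def svm_undersampling_ensemble(X, y, n_members=5, random_state=0):
    """Train several SVMs, each on all positives plus a cluster-based
    under-sample of the negatives, and predict by majority vote."""
    rng = np.random.default_rng(random_state)
    X_pos, X_neg = X[y == 1], X[y == 0]
    members = []
    for m in range(n_members):
        # Cluster the (majority) negatives and keep one representative per cluster,
        # so the retained negatives stay spread over the feature space.
        km = KMeans(n_clusters=len(X_pos), n_init=5, random_state=m).fit(X_neg)
        keep = np.array([rng.choice(np.where(km.labels_ == c)[0])
                         for c in range(km.n_clusters)])
        X_bal = np.vstack([X_pos, X_neg[keep]])
        y_bal = np.concatenate([np.ones(len(X_pos)), np.zeros(len(keep))])
        members.append(SVC(kernel="rbf", C=1.0).fit(X_bal, y_bal))
    return members

def vote(members, X):
    """Majority vote over the ensemble members."""
    preds = np.stack([m.predict(X) for m in members])
    return (preds.mean(axis=0) >= 0.5).astype(int)
```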
NASA Astrophysics Data System (ADS)
Weber, Bruce A.
2005-07-01
We have performed an experiment that compares the performance of human observers with that of a robust algorithm for the detection of targets in difficult, nonurban forward-looking infrared imagery. Our purpose was to benchmark the comparison and document performance differences for future algorithm improvement. The scale-insensitive detection algorithm, used as a benchmark by the Night Vision Electronic Sensors Directorate for algorithm evaluation, employed a combination of contrastlike features to locate targets. Detection receiver operating characteristic curves and observer-confidence analyses were used to compare human and algorithmic responses and to gain insight into differences. The test database contained ground targets, in natural clutter, whose detectability, as judged by human observers, ranged from easy to very difficult. In general, as compared with human observers, the algorithm detected most of the same targets, but correlated confidence with correct detections poorly and produced many more false alarms at any useful level of performance. Though characterizing human performance was not the intent of this study, results suggest that previous observational experience was not a strong predictor of human performance, and that combining individual human observations by majority vote significantly reduced false-alarm rates.
Multi-modal automatic montaging of adaptive optics retinal images
Chen, Min; Cooper, Robert F.; Han, Grace K.; Gee, James; Brainard, David H.; Morgan, Jessica I. W.
2016-01-01
We present a fully automated adaptive optics (AO) retinal image montaging algorithm using classic scale invariant feature transform with random sample consensus for outlier removal. Our approach is capable of using information from multiple AO modalities (confocal, split detection, and dark field) and can accurately detect discontinuities in the montage. The algorithm output is compared to manual montaging by evaluating the similarity of the overlapping regions after montaging, and calculating the detection rate of discontinuities in the montage. Our results show that the proposed algorithm has high alignment accuracy and a discontinuity detection rate that is comparable (and often superior) to manual montaging. In addition, we analyze and show the benefits of using multiple modalities in the montaging process. We provide the algorithm presented in this paper as open-source and freely available to download.
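A pairwise alignment step of the kind described (SIFT features with RANSAC outlier removal) can be written compactly with OpenCV; the sketch below estimates a homography between two overlapping AO frames and reports the inlier count, which could serve as a crude discontinuity flag. It is a generic SIFT+RANSAC example, not the released montaging tool, and the ratio-test and RANSAC thresholds are assumptions.

```python
import cv2
import numpy as np

def pairwise_homography(img_a, img_b, ratio=0.75, ransac_thresh=3.0):
    """Estimate the transform aligning img_b onto img_a with SIFT features and
    RANSAC outlier rejection; returns the homography and the inlier count."""
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(img_a, None)
    kp_b, des_b = sift.detectAndCompute(img_b, None)

    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des_b, des_a, k=2)
    good = [m for m, n in matches if m.distance < ratio * n.distance]   # Lowe's ratio test
    if len(good) < 4:
        return None, 0                        # too few matches: flag a montage discontinuity

    src = np.float32([kp_b[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_a[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, ransac_thresh)
    return H, (int(inlier_mask.sum()) if inlier_mask is not None else 0)
```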
Jiang, Xiaolei; Zhang, Li; Zhang, Ran; Yin, Hongxia; Wang, Zhenchang
2015-01-01
X-ray grating interferometry offers a novel framework for the study of weakly absorbing samples. Three kinds of information, that is, the attenuation, differential phase contrast (DPC), and dark-field images, can be obtained after a single scanning, providing additional and complementary information to the conventional attenuation image. Phase shifts of X-rays are measured by the DPC method; hence, DPC-CT reconstructs refraction indexes rather than attenuation coefficients. In this work, we propose an explicit filtering based low-dose differential phase reconstruction algorithm, which enables reconstruction from reduced scanning without artifacts. The algorithm adopts a differential algebraic reconstruction technique (DART) with the explicit filtering based sparse regularization rather than the commonly used total variation (TV) method. Both the numerical simulation and the biological sample experiment demonstrate the feasibility of the proposed algorithm.
Wang, Wensheng; Nie, Ting; Fu, Tianjiao; Ren, Jianyue; Jin, Longxu
2017-01-01
In target detection from optical remote sensing images, two main obstacles for aircraft target detection are how to extract candidates from complex backgrounds with multiple gray levels and how to confirm the targets when the target shapes are deformed, irregular or asymmetric, whether caused by natural conditions (low signal-to-noise ratio, illumination conditions or swaying during photographing) or by occlusion from surrounding objects (boarding bridges, equipment). To solve these issues, an improved active contours algorithm, namely region-scalable fitting energy based threshold (TRSF), and a corner-convex hull based segmentation algorithm (CCHS) are proposed in this paper. Firstly, the maximal between-class variance algorithm (Otsu's algorithm) and the region-scalable fitting energy (RSF) algorithm are combined to solve the difficulty of target extraction in complex, multi-gray-level backgrounds. Secondly, based on their inherent shapes and prominent corners, aircraft are divided into five fragments by utilizing convex hulls and Harris corner points. Furthermore, a series of new structural features, which describe the proportion of the target part in each fragment and the proportion of each fragment in the whole hull, are defined to judge whether the targets are true or not. Experimental results show that the TRSF algorithm improves extraction accuracy in complex backgrounds, and that it is faster than some traditional active contours algorithms. CCHS is effective in suppressing the detection difficulties caused by irregular shapes.
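The sketch below gives a rough, simplified rendering of the two stages: an Otsu threshold stands in for the combined Otsu/region-scalable-fitting candidate extraction, and Harris corners plus a convex hull per candidate stand in for the corner-convex-hull segmentation. Thresholds and area cuts are illustrative, and the five-fragment structural features of CCHS are not reproduced here.

```python
import cv2
import numpy as np

def candidate_mask_and_corners(img_gray):
    """Coarse candidate extraction with Otsu, then per-candidate convex hull
    and Harris corner counts for shape-based confirmation features."""
    # Stage 1: global Otsu threshold as the coarse candidate extraction.
    _, mask = cv2.threshold(img_gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Stage 2: per-candidate convex hull and Harris corner points.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    harris = cv2.cornerHarris(np.float32(img_gray), blockSize=2, ksize=3, k=0.04)
    corner_mask = harris > 0.01 * harris.max()

    candidates = []
    for c in contours:
        if cv2.contourArea(c) < 50:          # discard tiny blobs (illustrative cut)
            continue
        hull = cv2.convexHull(c)
        x, y, w, h = cv2.boundingRect(hull)
        n_corners = int(corner_mask[y:y + h, x:x + w].sum())
        candidates.append({"hull": hull, "bbox": (x, y, w, h), "corners": n_corners})
    return mask, candidates
```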
Accelerated Dimension-Independent Adaptive Metropolis
Chen, Yuxin; Keyes, David E.; Law, Kody J.; ...
2016-10-27
This work describes improvements from algorithmic and architectural means to black-box Bayesian inference over high-dimensional parameter spaces. The well-known adaptive Metropolis (AM) algorithm [33] is extended herein to scale asymptotically uniformly with respect to the underlying parameter dimension for Gaussian targets, by respecting the variance of the target. The resulting algorithm, referred to as the dimension-independent adaptive Metropolis (DIAM) algorithm, also shows improved performance with respect to adaptive Metropolis on non-Gaussian targets. This algorithm is further improved, and the possibility of probing high-dimensional (with dimension d ≥ 1000) targets is enabled, via GPU-accelerated numerical libraries and periodically synchronized concurrent chains (justified a posteriori). Asymptotically in dimension, this GPU implementation exhibits a factor of four improvement versus a competitive CPU-based Intel MKL parallel version alone. Strong scaling to concurrent chains is exhibited, through a combination of longer time per sample batch (weak scaling) and yet fewer necessary samples to convergence. The algorithm performance is illustrated on several Gaussian and non-Gaussian target examples, in which the dimension may be in excess of one thousand.
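For orientation, a textbook adaptive Metropolis sampler (the starting point that DIAM extends) is sketched below: the Gaussian proposal covariance tracks the running covariance of the chain, scaled by the usual 2.4²/d factor. The burn-in length, scaling and target are illustrative; the dimension-independent scaling and GPU-accelerated concurrent chains of the paper are not implemented here.

```python
import numpy as np

def adaptive_metropolis(log_target, x0, n_steps=20000, adapt_start=500, eps=1e-8):
    """Plain adaptive Metropolis: the Gaussian proposal covariance is the scaled
    running covariance of the chain (Haario-style), with a small jitter eps."""
    d = len(x0)
    x = np.array(x0, dtype=float)
    lp = log_target(x)
    chain = np.empty((n_steps, d))
    mean, cov = x.copy(), np.eye(d)
    sd = 2.4 ** 2 / d                              # standard AM scaling

    rng = np.random.default_rng(1)
    for t in range(n_steps):
        prop_cov = sd * (cov + eps * np.eye(d)) if t >= adapt_start else 0.1 * np.eye(d) / d
        prop = rng.multivariate_normal(x, prop_cov)
        lp_prop = log_target(prop)
        if np.log(rng.random()) < lp_prop - lp:    # Metropolis accept/reject
            x, lp = prop, lp_prop
        chain[t] = x
        # Rank-one running updates of the sample mean and covariance.
        delta = x - mean
        mean += delta / (t + 2)
        cov += (np.outer(delta, x - mean) - cov) / (t + 2)
    return chain

# Example: a correlated 2-D Gaussian target.
Sigma_inv = np.linalg.inv(np.array([[1.0, 0.9], [0.9, 1.0]]))
samples = adaptive_metropolis(lambda z: -0.5 * z @ Sigma_inv @ z, np.zeros(2), n_steps=5000)
print(samples.mean(axis=0), np.cov(samples.T))
```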
Separating astrophysical sources from indirect dark matter signals
Siegal-Gaskins, Jennifer M.
2015-01-01
Indirect searches for products of dark matter annihilation and decay face the challenge of identifying an uncertain and subdominant signal in the presence of uncertain backgrounds. Two valuable approaches to this problem are (i) using analysis methods which take advantage of different features in the energy spectrum and angular distribution of the signal and backgrounds and (ii) more accurately characterizing backgrounds, which allows for more robust identification of possible signals. These two approaches are complementary and can be significantly strengthened when used together. I review the status of indirect searches with gamma rays using two promising targets, the Inner Galaxy and the isotropic gamma-ray background. For both targets, uncertainties in the properties of backgrounds are a major limitation to the sensitivity of indirect searches. I then highlight approaches which can enhance the sensitivity of indirect searches using these targets. PMID:25304638
The MiniCLEAN Dark Matter Experiment
NASA Astrophysics Data System (ADS)
Schnee, Richard; Deap/Clean Collaboration
2011-10-01
The MiniCLEAN dark matter experiment exploits a single-phase liquid argon (LAr) detector, instrumented with photomultiplier tubes submerged in the cryogen with nearly 4π coverage of a 500 kg target (150 kg fiducial) mass. The high light yield and large difference in singlet/triplet scintillation time profiles in LAr provide an effective defense against radioactive backgrounds through pulse-shape discrimination and event position reconstruction. The detector is also designed for a liquid neon target which, in the event of a positive signal in LAr, will enable an independent verification of backgrounds and provide a unique test of the expected A^2 dependence of the WIMP interaction rate. The conceptually simple design can be scaled to target masses in excess of 10 tons in a relatively straightforward and economical manner. The experimental technique and current status of MiniCLEAN are summarized.
Experiments and Analysis of Close-Shot Identification of On-Branch Citrus Fruit with RealSense
Liu, Jizhan; Yuan, Yan; Zhou, Yao; Zhu, Xinxin
2018-01-01
Fruit recognition based on depth information has been a hot topic owing to its advantages. However, existing equipment and methods cannot meet the requirements for rapid and reliable close-shot recognition and location of fruits for robotic harvesting. To solve this problem, we propose a recognition algorithm for citrus fruit based on RealSense. The method effectively uses depth point-cloud data within a close-shot range of 160 mm and the different geometric features of fruit and leaves, recognizing fruits from the intersection curve cut by a depth sphere. Experiments on close-shot recognition of six fruit varieties under different conditions were carried out. Detection rates for cases with little occlusion and adhesion ranged from 80% to 100%. Severe occlusion and adhesion, however, still strongly affect the overall success rate of on-branch fruit recognition, which was 63.8%. Fruit size has a more noticeable impact on the detection success rate. Moreover, because close-shot detection uses near-infrared light, there was no obvious difference in recognition between bright and dark conditions. The advantages of close-shot, range-limited target detection with RealSense, fast foreground and background removal, and the simplicity and high precision of the algorithm may contribute to highly real-time vision-servo operation of harvesting robots. PMID:29751594
NASA Astrophysics Data System (ADS)
Patadia, Falguni; Levy, Robert C.; Mattoo, Shana
2018-06-01
Retrieving aerosol optical depth (AOD) from top-of-atmosphere (TOA) satellite-measured radiance requires separating the aerosol signal from the total observed signal. Total TOA radiance includes signal from the underlying surface and from atmospheric constituents such as aerosols, clouds and gases. Multispectral retrieval algorithms, such as the dark-target (DT) algorithm that operates upon the Moderate Resolution Imaging Spectroradiometer (MODIS, on board the Terra and Aqua satellites) and Visible Infrared Imaging Radiometer Suite (VIIRS, on board Suomi-NPP) sensors, use wavelength bands in window regions. However, while small, the gas absorptions in these bands are non-negligible and require correction. In this paper, we use the High-resolution TRANsmission (HITRAN) database and the Line-By-Line Radiative Transfer Model (LBLRTM) to derive consistent gas corrections for both MODIS and VIIRS wavelength bands. Absorptions from H2O, CO2 and O3 are considered, as well as other trace gases. Even though the MODIS and VIIRS bands are similar, they are different enough that applying MODIS-specific gas corrections to VIIRS observations results in an underestimate of global mean AOD (by 0.01), with much larger regional AOD biases of up to 0.07. As recent studies have been attempting to create a long-term data record by joining multiple satellite data sets, including MODIS and VIIRS, the consistency of gas corrections has become even more crucial.
Spectral Detection of Human Skin in VIS-SWIR Hyperspectral Imagery without Radiometric Calibration
2012-03-01
range than the high-altitude scenarios for which the remote sensing algorithms were developed. At this close range, there is relatively little ... sequence contain a dismount with arms extended out to the side. The dismount, a Caucasian male with dark brown hair, is wearing a black, cotton, short
NASA Astrophysics Data System (ADS)
Hashimoto, M.; Nakajima, T.; Takenaka, H.; Higurashi, A.
2013-12-01
We develop a new satellite remote sensing algorithm to retrieve the properties of aerosol particles in the atmosphere. In recent years, high-resolution, multi-wavelength and multi-angle observation data have been obtained by ground-based spectral radiometers and imaging sensors on board satellites. With this development, optimized multi-parameter remote sensing methods based on Bayesian theory have become widely used (Turchin and Nozik, 1969; Rodgers, 2000; Dubovik et al., 2000). Additionally, the direct use of radiative transfer calculations has been employed for non-linear remote sensing problems in place of look-up-table methods, supported by the progress of computing technology (Dubovik et al., 2011; Yoshida et al., 2011). We are developing a flexible multi-pixel, multi-parameter remote sensing algorithm for aerosol optical properties. In this algorithm, the inversion method combines the MAP method (maximum a posteriori method; Rodgers, 2000) with the Phillips-Twomey method (Phillips, 1962; Twomey, 1963) as a smoothing constraint on the state vector. Furthermore, we include a radiative transfer code, Rstar (Nakajima and Tanaka, 1986, 1988), which is solved numerically at each iteration of the solution search. The Rstar code has been used directly in the AERONET operational processing system (Dubovik and King, 2000). The retrieved parameters in our algorithm are aerosol optical properties, such as the aerosol optical thickness (AOT) of fine-mode, sea-salt and dust particles, the volume soot fraction in fine-mode particles, and the ground surface albedo at each observed wavelength. We simultaneously retrieve all the parameters that characterize the pixels in each of the horizontal sub-domains constituting the target area, and then successively apply the retrieval to all sub-domains in the target area. We conducted numerical tests of the retrieval of aerosol properties and ground surface albedo for GOSAT/CAI imager data to test the algorithm over land. In this test, we simulated satellite-observed radiances for a sub-domain of 5 by 5 pixels with the Rstar code, assuming wavelengths of 380, 674, 870 and 1600 nm, the US standard atmosphere, and several aerosol and ground surface conditions. The results show that the AOTs of fine-mode and dust particles, the soot fraction and the ground surface albedo at 674 nm are retrieved within absolute differences of 0.04, 0.01, 0.06 and 0.006 from the true values, respectively, for a dark surface, and within 0.06, 0.03, 0.04 and 0.10, respectively, for a bright surface. We will conduct more tests to study the information content of the parameters needed for aerosol and land surface remote sensing with different boundary conditions among sub-domains.
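As a schematic of the inversion described above (MAP estimation plus a Phillips-Twomey smoothness constraint), the sketch below solves a linearized version in closed form. The forward model here is an arbitrary matrix K rather than the Rstar radiative transfer code, and the covariance matrices and regularization weight gamma are illustrative assumptions; the real algorithm iterates a non-linear forward calculation.

```python
import numpy as np

def map_smooth_retrieval(K, y, x_a, S_e, S_a, gamma):
    """MAP estimate with an added smoothness penalty for a linearized forward
    model y ~ K x.  Minimizes
        (y - Kx)^T Se^-1 (y - Kx) + (x - xa)^T Sa^-1 (x - xa) + gamma * ||D x||^2
    where D is a second-difference operator acting on the state vector."""
    n = x_a.size
    D = np.zeros((n - 2, n))                 # second-difference (smoothing) operator
    for i in range(n - 2):
        D[i, i:i + 3] = [1.0, -2.0, 1.0]
    Se_inv = np.linalg.inv(S_e)
    Sa_inv = np.linalg.inv(S_a)
    A = K.T @ Se_inv @ K + Sa_inv + gamma * D.T @ D
    b = K.T @ Se_inv @ y + Sa_inv @ x_a
    return np.linalg.solve(A, b)

# usage with an illustrative 4-wavelength, 6-parameter linear problem
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    K = rng.normal(size=(4, 6))
    x_true = np.linspace(0.1, 0.3, 6)
    y = K @ x_true + 0.01 * rng.normal(size=4)
    x_hat = map_smooth_retrieval(K, y, x_a=np.full(6, 0.2),
                                 S_e=0.01**2 * np.eye(4), S_a=0.1**2 * np.eye(6),
                                 gamma=1.0)
    print(x_hat)
```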
Automatic target detection using binary template matching
NASA Astrophysics Data System (ADS)
Jun, Dong-San; Sun, Sun-Gu; Park, HyunWook
2005-03-01
This paper presents a new automatic target detection (ATD) algorithm to detect targets such as battle tanks and armored personnel carriers in ground-to-ground scenarios. Whereas most ATD algorithms were developed for forward-looking infrared (FLIR) images, we have developed an ATD algorithm for charge-coupled device (CCD) images, which have superior quality to FLIR images in daylight. The proposed algorithm uses fast binary template matching with an adaptive binarization, which is robust to various lighting conditions in CCD images and saves computation time. Experimental results show that the proposed method has good detection performance.
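A minimal sketch of the matching idea described above, assuming the template is already a 0/1 binary mask: the scene is binarized with a local-mean adaptive threshold and the match score is the fraction of agreeing pixels in each window. The brute-force sliding loop, the OpenCV parameters and the score threshold are illustrative choices, not the paper's implementation.

```python
import cv2
import numpy as np

def detect_binary_template(image_gray, template_binary, block_size=31, c=5, score_thresh=0.8):
    """Adaptive (local mean) binarization of the scene followed by a sliding
    binary match score: the fraction of pixels that agree with the template."""
    scene = cv2.adaptiveThreshold(image_gray, 1, cv2.ADAPTIVE_THRESH_MEAN_C,
                                  cv2.THRESH_BINARY, block_size, c)
    th, tw = template_binary.shape
    H, W = scene.shape
    detections = []
    for y in range(H - th + 1):              # brute-force sketch; slow on large images
        for x in range(W - tw + 1):
            window = scene[y:y + th, x:x + tw]
            score = np.mean(window == template_binary)   # Hamming-style similarity
            if score >= score_thresh:
                detections.append((x, y, float(score)))
    return detections
```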
Validation of the Thematic Mapper radiometric and geometric correction algorithms
NASA Technical Reports Server (NTRS)
Fischel, D.
1984-01-01
The radiometric and geometric correction algorithms for the Thematic Mapper are critical to subsequent successful information extraction. The earlier Landsat scanners, the Multispectral Scanners, produce imagery that exhibits striping due to mismatched detector gains and biases. The Thematic Mapper exhibits the same phenomenon at three levels: detector-to-detector, scan-to-scan, and multiscan striping. The cause of these variations has been traced to variations in the dark current of the detectors. An alternative formulation has been tested and shown to be very satisfactory. Unfortunately, the Thematic Mapper detectors also exhibit saturation effects while viewing extensive cloud areas, which are not easily corrected. The geometric correction algorithm has been shown to be remarkably reliable; only minor and modest improvements are indicated, and these have been shown to be effective.
Underwater image enhancement through depth estimation based on random forest
NASA Astrophysics Data System (ADS)
Tai, Shen-Chuan; Tsai, Ting-Chou; Huang, Jyun-Han
2017-11-01
Light absorption and scattering in underwater environments can result in low-contrast images with a distinct color cast. This paper proposes a systematic framework for the enhancement of underwater images. Light transmission is estimated using the random forest algorithm. RGB values, luminance, color difference, blurriness, and the dark channel are treated as features in training and estimation. Transmission is calculated using an ensemble machine learning algorithm to deal with a variety of conditions encountered in underwater environments. A color compensation and contrast enhancement algorithm based on depth information was also developed with the aim of improving the visual quality of underwater images. Experimental results demonstrate that the proposed scheme outperforms existing methods with regard to subjective visual quality as well as objective measurements.
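A minimal sketch of the transmission-estimation step using scikit-learn, with a subset of the per-pixel features named in the abstract (RGB, luminance, a blurriness proxy and the dark channel; the color-difference feature is omitted). Reference transmission maps for training are assumed to be available, and the feature definitions and window sizes are illustrative choices rather than the paper's.

```python
import numpy as np
from scipy.ndimage import minimum_filter, uniform_filter
from sklearn.ensemble import RandomForestRegressor

def pixel_features(rgb):
    """Per-pixel features for an RGB image with values in [0, 1]."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    luminance = 0.299 * r + 0.587 * g + 0.114 * b
    blurriness = np.abs(luminance - uniform_filter(luminance, size=7))  # local-detail proxy
    dark_channel = minimum_filter(rgb.min(axis=-1), size=15)            # windowed min over RGB
    return np.stack([r, g, b, luminance, blurriness, dark_channel], axis=-1).reshape(-1, 6)

def train_transmission_model(images, transmissions, n_trees=100):
    """Fit a random forest regressor on (feature, transmission) pairs."""
    X = np.vstack([pixel_features(im) for im in images])
    y = np.concatenate([t.ravel() for t in transmissions])
    model = RandomForestRegressor(n_estimators=n_trees, n_jobs=-1)
    model.fit(X, y)
    return model

def estimate_transmission(model, rgb):
    return model.predict(pixel_features(rgb)).reshape(rgb.shape[:2])
```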
Research on correction algorithm of laser positioning system based on four quadrant detector
NASA Astrophysics Data System (ADS)
Gao, Qingsong; Meng, Xiangyong; Qian, Weixian; Cai, Guixia
2018-02-01
This paper first introduces the basic principle of the four-quadrant detector, and a laser positioning experiment system based on the four-quadrant detector is built. In practical applications of a four-quadrant laser positioning system, interference from background light and detector dark-current noise is present, and the influence of random noise, system stability and spot-equivalent error cannot be ignored, so system calibration and correction are very important. This paper analyzes the various factors contributing to the system positioning error and then proposes an algorithm for correcting it. Simulation and experimental results show that the correction algorithm reduces the effect of system error on positioning and improves the positioning accuracy.
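For reference, the standard (uncorrected) four-quadrant centroid estimate that such a system starts from is sketched below, with a simple dark-current offset subtraction and a linear calibration gain; the paper's full correction for random noise, stability and spot-equivalent error is not reproduced, and the offset and gain values are assumptions.

```python
import numpy as np

def quadrant_spot_position(i_a, i_b, i_c, i_d, dark=0.0, k=1.0):
    """Standard four-quadrant centroid estimate.
    Quadrants: A top-right, B top-left, C bottom-left, D bottom-right.
    `dark` is a per-quadrant dark-current offset; `k` is a calibration gain."""
    a, b, c, d = (np.asarray(v, dtype=float) - dark for v in (i_a, i_b, i_c, i_d))
    total = a + b + c + d
    x = k * ((a + d) - (b + c)) / total
    y = k * ((a + b) - (c + d)) / total
    return x, y

# usage: a spot shifted toward the right half produces x > 0
print(quadrant_spot_position(1.2, 0.8, 0.8, 1.2, dark=0.05))
```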
Judgments of eye level in light and in darkness
NASA Technical Reports Server (NTRS)
Stoper, Arnold E.; Cohen, Malcolm M.
1986-01-01
Subjects judged eye level in the light and in the dark by raising and lowering themselves in a dental chair until a stationary target appeared to be at the level of their eyes. This method reduced the possibility of subjects' using visible landmarks as reference points for setting eye level during lighted trials, which may have contributed to artificially low estimates of the variability of this judgment in previous studies. Chair settings were 2.5 deg higher in the dark than in the light, and variability was approximately 66 percent greater in the dark than in the light. These results are discussed in terms of possible interactions of two separate systems, one sensitive to the orientations of visible surfaces and the other sensitive to bodily and gravitational information.
Direct dark matter search by annual modulation with 2.7 years of XMASS-I data
NASA Astrophysics Data System (ADS)
Abe, K.; Hiraide, K.; Ichimura, K.; Kishimoto, Y.; Kobayashi, K.; Kobayashi, M.; Moriyama, S.; Nakahata, M.; Norita, T.; Ogawa, H.; Sato, K.; Sekiya, H.; Takachio, O.; Takeda, A.; Tasaka, S.; Yamashita, M.; Yang, B. S.; Kim, N. Y.; Kim, Y. D.; Itow, Y.; Kanzawa, K.; Kegasa, R.; Masuda, K.; Takiya, H.; Fushimi, K.; Kanzaki, G.; Martens, K.; Suzuki, Y.; Xu, B. D.; Fujita, R.; Hosokawa, K.; Miuchi, K.; Oka, N.; Takeuchi, Y.; Kim, Y. H.; Lee, K. B.; Lee, M. K.; Fukuda, Y.; Miyasaka, M.; Nishijima, K.; Nakamura, S.; Xmass Collaboration
2018-05-01
An annual modulation signal due to the Earth orbiting around the Sun would be one of the strongest indications of the direct detection of dark matter. In 2016, we reported a search for dark matter by looking for this annual modulation with our single-phase liquid xenon XMASS-I detector. That analysis resulted in a slightly negative modulation amplitude at low energy. In this work, we included more than one year of additional data, which more than doubles the exposure to 800 live days with the same 832 kg target mass. When we assume weakly interacting massive particle (WIMP) dark matter elastically scattering on the xenon target, the exclusion upper limit for the WIMP-nucleon cross section was improved by a factor of 2, to 1.9 × 10^-41 cm^2 at 8 GeV/c^2 at the 90% confidence level, with our newly implemented data selection through a likelihood method. For the model-independent case, without assuming any specific dark matter model, we obtained more consistency with the null hypothesis than before, with a p-value of 0.11 in the 1-20 keV energy region. This search probed this region with an exposure that was larger than that of DAMA/LIBRA. We also did not find any significant amplitude in the data for periodicity with periods between 50 and 600 days in the energy region between 1 and 6 keV.
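As a generic illustration of how an annual-modulation amplitude can be extracted from a binned rate versus time (not the collaboration's likelihood analysis), a weighted cosine fit might look like the following; the one-year period is fixed and the time bins and errors are assumed inputs.

```python
import numpy as np
from scipy.optimize import curve_fit

def modulation(t, amplitude, t0, constant, period=365.25):
    """Expected annually modulated rate: C + A*cos(2*pi*(t - t0)/T)."""
    return constant + amplitude * np.cos(2.0 * np.pi * (t - t0) / period)

def fit_modulation(t_days, rate, rate_err):
    """Weighted least-squares fit of the modulation amplitude, phase and offset."""
    p0 = [0.0, 152.0, float(np.mean(rate))]   # initial guess; t0 ~ early June
    popt, pcov = curve_fit(modulation, t_days, rate, p0=p0, sigma=rate_err,
                           absolute_sigma=True)
    return popt, np.sqrt(np.diag(pcov))
```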
Ackermann, M.; Albert, A.; Anderson, B.; ...
2014-02-11
The dwarf spheroidal satellite galaxies of the Milky Way are some of the most dark-matter-dominated objects known. Due to their proximity, high dark matter content, and lack of astrophysical backgrounds, dwarf spheroidal galaxies are widely considered to be among the most promising targets for the indirect detection of dark matter via γ rays. We report on γ-ray observations of 25 Milky Way dwarf spheroidal satellite galaxies based on 4 years of Fermi Large Area Telescope (LAT) data. None of the dwarf galaxies are significantly detected in γ rays, and we present γ-ray flux upper limits between 500 MeV and 500 GeV. We determine the dark matter content of 18 dwarf spheroidal galaxies from stellar kinematic data and combine LAT observations of 15 dwarf galaxies to constrain the dark matter annihilation cross section. Furthermore, we set some of the tightest constraints to date on the annihilation of dark matter particles with masses between 2 GeV and 10 TeV into prototypical standard model channels. We also find these results to be robust against systematic uncertainties in the LAT instrument performance, diffuse γ-ray background modeling, and assumed dark matter density profile.
An Improved Aerial Target Localization Method with a Single Vector Sensor
Zhao, Anbang; Bi, Xuejie; Hui, Juan; Zeng, Caigao; Ma, Lin
2017-01-01
This paper focuses on the problems encountered when applying existing aerial target localization methods to real data, analyzes their causes, and proposes an improved algorithm. Processing of sea-trial data shows that the existing algorithms place high demands on the accuracy of the angle estimates. The improved algorithm relaxes these requirements and yields robust estimates. A closest-distance matching estimation algorithm and a horizontal-distance estimation compensation algorithm are proposed. Post-processing the data with a forward-and-backward double-filtering method improves smoothing and allows the initial-stage data to be filtered as well, so that the filtered results retain more useful information. Aerial target height measurement methods are also studied and estimation results are given, realizing three-dimensional localization of the aerial target and improving the underwater platform's awareness of aerial targets, so that the platform gains better mobility and concealment. PMID:29135956
Color Feature-Based Object Tracking through Particle Swarm Optimization with Improved Inertia Weight
Guo, Siqiu; Zhang, Tao; Song, Yulong
2018-01-01
This paper presents a particle swarm tracking algorithm with an improved inertia weight based on color features. A weighted color histogram is used as the target feature to reduce the contribution of target edge pixels, which makes the algorithm insensitive to non-rigid deformation, scale variation and rotation of the target and reduces the influence of partial occlusion on the target description. Particle swarm optimization can perform a multi-peak search, so it copes well with tracking under occlusion: the target can be located precisely even when the similarity function has multiple peaks. When particle swarm optimization is applied to object tracking, however, the usual inertia-weight adjustment mechanism has limitations. This paper presents an improved method: the concept of particle maturity is introduced so that the inertia weight can be adjusted in time according to the state of each particle in each generation. Experimental results show that the algorithm achieves state-of-the-art performance in a wide range of scenarios. PMID:29690610
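A minimal sketch of the tracking search described above: standard particle swarm updates over a similarity map, with "maturity" modeled here as the number of iterations since a particle last improved its personal best and used to shrink that particle's inertia weight. The exact maturity definition and adjustment rule in the paper may differ; the rule below is an assumption for illustration.

```python
import numpy as np

def pso_track(similarity, bounds, n_particles=30, n_iter=50, c1=1.5, c2=1.5,
              w_max=0.9, w_min=0.4, rng=None):
    """Maximize `similarity(position)` within `bounds = (lower, upper)` arrays.
    Particles that have not improved for a while ("mature" particles) are given
    a smaller inertia weight."""
    rng = np.random.default_rng() if rng is None else rng
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    dim = lo.size
    x = rng.uniform(lo, hi, size=(n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([similarity(p) for p in x])
    maturity = np.zeros(n_particles)
    for _ in range(n_iter):
        g = pbest[np.argmax(pbest_val)]                       # global best
        # Inertia weight decreases with particle maturity (assumed rule).
        w = w_max - (w_max - w_min) * np.clip(maturity / n_iter, 0, 1)
        r1, r2 = rng.uniform(size=(2, n_particles, dim))
        v = w[:, None] * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = np.array([similarity(p) for p in x])
        improved = val > pbest_val
        pbest[improved], pbest_val[improved] = x[improved], val[improved]
        maturity = np.where(improved, 0, maturity + 1)        # reset on improvement
    return pbest[np.argmax(pbest_val)]
```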
Ogawa, Takahiro; Haseyama, Miki
2013-03-01
A missing-texture reconstruction method based on an error reduction (ER) algorithm, including a novel estimation scheme for Fourier transform magnitudes, is presented in this brief. In our method, the Fourier transform magnitude is estimated for a target patch that includes missing areas, and the missing intensities are estimated by retrieving the patch's phase with the ER algorithm. Specifically, by monitoring the errors to which the ER algorithm converges, known patches whose Fourier transform magnitudes are similar to that of the target patch are selected from the target image. The Fourier transform magnitude of the target patch is then estimated from those of the selected known patches and their corresponding errors. Consequently, using the ER algorithm, both the Fourier transform magnitudes and phases can be estimated to reconstruct the missing areas.
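A minimal sketch of the ER iteration itself, assuming the target patch's Fourier magnitude has already been estimated (the patch-selection and magnitude-estimation steps described above are not shown): the loop alternates between enforcing the estimated magnitude in the Fourier domain and re-imposing the known pixels in the image domain.

```python
import numpy as np

def er_inpaint(patch, known_mask, target_magnitude, n_iter=200):
    """Error-reduction iteration for missing-area reconstruction.
    `patch` is a 2D array, `known_mask` marks the known pixels, and
    `target_magnitude` is the (pre-estimated) Fourier magnitude."""
    estimate = patch.copy()
    estimate[~known_mask] = patch[known_mask].mean()     # initial fill
    for _ in range(n_iter):
        spectrum = np.fft.fft2(estimate)
        # Keep the phase, replace the magnitude with the target magnitude.
        spectrum = target_magnitude * np.exp(1j * np.angle(spectrum))
        estimate = np.real(np.fft.ifft2(spectrum))
        estimate[known_mask] = patch[known_mask]          # object-domain constraint
    return estimate
```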
Data fusion for target tracking and classification with wireless sensor network
NASA Astrophysics Data System (ADS)
Pannetier, Benjamin; Doumerc, Robin; Moras, Julien; Dezert, Jean; Canevet, Loic
2016-10-01
In this paper, we address the problem of multiple ground target tracking and classification with information obtained from an unattended wireless sensor network. A multiple target tracking (MTT) algorithm, taking into account road and vegetation information, is proposed based on a centralized architecture. One of the key issues is how to adapt the classical MTT approach to the constraints of embedded processing. Based on track statistics, the classification algorithm uses estimated location, velocity and acceleration to help classify targets. The algorithm enables tracking of humans and vehicles driving both on and off road. We integrate road or trail width and vegetation cover as constraints in the target motion models to improve constrained-tracking performance with classification fusion. The algorithm also uses several dynamic models to cope with target maneuvers. The tracking and classification algorithms are integrated into an operational platform (the fusion node). In order to handle realistic ground target tracking scenarios, we use an autonomous smart computer deployed in the surveillance area. After the calibration step of the heterogeneous sensor network, our system is able to handle real data from a wireless ground sensor network. The performance of the system is evaluated in a real intelligence-operation exercise (a "hunter hunt" scenario).
Dark eyes in female sand gobies indicate readiness to spawn.
Olsson, Karin H; Johansson, Sandra; Blom, Eva-Lotta; Lindström, Kai; Svensson, Ola; Nilsson Sköld, Helen; Kvarnemo, Charlotta
2017-01-01
In animals, colorful and conspicuous ornaments enhance individual attractiveness to potential mates, but are typically tempered by natural selection for crypsis and predator protection. In species where males compete for females, this can lead to highly ornamented males competing for mating opportunities with choosy females, and vice versa. However, even where males compete for mating opportunities, females may exhibit conspicuous displays. These female displays are often poorly understood and it may be unclear whether they declare mating intent, signal intrasexual aggression or form a target for male mate preference. We examined the function of the conspicuous dark eyes that female sand gobies temporarily display during courtship by experimentally testing if males preferred to associate with females with artificially darkened eyes and if dark eyes are displayed during female aggression. By observing interactions between a male and two females freely associating in an aquarium we also investigated in which context females naturally displayed dark eyes. We found that dark eyes were more likely to be displayed by more gravid females than less gravid females and possibly ahead of spawning, but that males did not respond behaviorally to dark eyes or prefer dark-eyed females. Females behaving aggressively did not display dark eyes. We suggest that dark eyes are not a signal per se but may be an aspect of female mate choice, possibly related to vision.
Halo mass and weak galaxy-galaxy lensing profiles in rescaled cosmological N-body simulations
NASA Astrophysics Data System (ADS)
Renneby, Malin; Hilbert, Stefan; Angulo, Raúl E.
2018-05-01
We investigate 3D density and weak lensing profiles of dark matter haloes predicted by a cosmology-rescaling algorithm for N-body simulations. We extend the rescaling method of Angulo & White (2010) and Angulo & Hilbert (2015) to improve its performance on intra-halo scales by using models for the concentration-mass-redshift relation based on excursion set theory. The accuracy of the method is tested with numerical simulations carried out with different cosmological parameters. We find that predictions for median density profiles are accurate to better than ~5% for haloes with masses of 10^12.0-10^14.5 h^-1 M⊙ at radii 0.05 < r/r200m < 0.5, and for cosmologies with Ωm ∈ [0.15, 0.40] and σ8 ∈ [0.6, 1.0]. For larger radii, 0.5 < r/r200m < 5, the accuracy degrades to ~20%, due to inaccurate modelling of the cosmological and redshift dependence of the splashback radius. For changes in cosmology allowed by current data, the residuals decrease to ≲2% up to scales twice the virial radius. We illustrate the usefulness of the method by estimating the mean halo mass of a mock galaxy group sample. We find that the algorithm's accuracy is sufficient for current data. Improvements in the algorithm, particularly in the modelling of baryons, are likely required for interpreting future (dark energy task force stage IV) experiments.
NASA Astrophysics Data System (ADS)
Moradi, Saed; Moallem, Payman; Sabahi, Mohamad Farzan
2018-03-01
False alarm rate and detection rate remain two contradictory metrics for infrared small-target detection in infrared search and track (IRST) systems, despite the development of new detection algorithms. In certain circumstances, not detecting true targets is more tolerable than declaring false items as true targets. Hence, considering background clutter and detector noise as the sources of false alarms in an IRST system, this paper presents a false-alarm-aware methodology that reduces the false alarm rate while leaving the detection rate undegraded. To this end, the advantages and disadvantages of each detection algorithm are investigated and the sources of their false alarms are determined. Two target detection algorithms with independent false alarm sources are chosen such that the disadvantages of one algorithm can be compensated by the advantages of the other. In this work, the multi-scale average absolute gray difference (AAGD) and the Laplacian of the point spread function (LoPSF) are utilized as the cornerstones of the desired algorithm of the proposed methodology. After presenting a conceptual model for the desired algorithm, it is implemented through the most straightforward mechanism. The desired algorithm effectively suppresses background clutter and eliminates detector noise. Also, since the input images are processed through just four different scales, the desired algorithm has good potential for real-time implementation. Simulation results in terms of signal-to-clutter ratio and background suppression factor on real and simulated images demonstrate the effectiveness and performance of the proposed methodology. Since the desired algorithm was developed based on independent false alarm sources, the proposed methodology is extensible to any pair of detection algorithms with different false alarm sources.
Projected sensitivity of the SuperCDMS SNOLAB experiment
Agnese, R.; Anderson, A. J.; Aramaki, T.; ...
2017-04-07
SuperCDMS SNOLAB will be a next-generation experiment aimed at directly detecting low-mass particles (with masses ≤ 10 GeV/c^2) that may constitute dark matter by using cryogenic detectors of two types (HV and iZIP) and two target materials (germanium and silicon). The experiment is being designed with an initial sensitivity to nuclear recoil cross sections of ~1 × 10^-43 cm^2 for a dark matter particle mass of 1 GeV/c^2, and with capacity to continue exploration to both smaller masses and better sensitivities. The phonon sensitivity of the HV detectors will be sufficient to detect nuclear recoils from sub-GeV dark matter. A detailed calibration of the detector response to low-energy recoils will be needed to optimize running conditions of the HV detectors and to interpret their data for dark matter searches. Low-activity shielding, and the depth of SNOLAB, will reduce most backgrounds, but cosmogenically produced 3H and naturally occurring 32Si will be present in the detectors at some level. Even if these backgrounds are 10 times higher than expected, the science reach of the HV detectors would be over 3 orders of magnitude beyond current results for a dark matter mass of 1 GeV/c^2. The iZIP detectors are relatively insensitive to variations in detector response and backgrounds, and will provide better sensitivity for dark matter particles with masses ≳5 GeV/c^2. The mix of detector types (HV and iZIP) and targets (germanium and silicon) planned for the experiment, as well as flexibility in how the detectors are operated, will allow us to maximize the low-mass reach and understand the backgrounds that the experiment will encounter. In conclusion, upgrades to the experiment, perhaps with a variety of ultra-low-background cryogenic detectors, will extend dark matter sensitivity down to the "neutrino floor," where coherent scatters of solar neutrinos become a limiting background.
NASA Technical Reports Server (NTRS)
Anderson, J. C.; Wang, J.; Zeng, J.; Petrenko, M.; Leptoukh, G. G.; Ichoku, C.
2012-01-01
Coastal regions around the globe are a major source of anthropogenic aerosols in the atmosphere, but the underlying surface characteristics are not favorable for the Moderate Resolution Imaging Spectroradiometer (MODIS) algorithms designed for retrieval of aerosols over dark land or open-ocean surfaces. Using data collected from 62 coastal stations of the Aerosol Robotic Network (AERONET) worldwide from approximately 2002-2010, accuracy assessments are made for coastal aerosol optical depth (AOD) retrieved from MODIS aboard the Aqua satellite. It is found that coastal AODs (at 550 nm) characterized respectively by the MODIS Dark Land (hereafter Land) surface algorithm, the Open-Ocean (hereafter Ocean) algorithm, and AERONET all exhibit a log-normal distribution. After filtering by quality flags, the MODIS AODs retrieved from the Land and Ocean algorithms are highly correlated with AERONET (R^2 ≈ 0.8), but only the Land-algorithm AODs fall within the expected error envelope more than 66% of the time. Furthermore, the MODIS AODs from the Land algorithm, the Ocean algorithm, and the combined Land and Ocean product show statistically significant discrepancies from their respective AERONET counterparts in terms of mean, probability density function, and cumulative density function, which suggests a need for future improvement of the retrieval algorithms. Without quality-flag filtering, the MODIS Land and Ocean AOD datasets can be degraded by 30-50% in terms of mean bias. Overall, the MODIS Ocean algorithm overestimates the AERONET coastal AOD by 0.021 for AOD less than 0.25 and underestimates it by 0.029 for AOD greater than 0.25. This dichotomy is shown to be related to ocean surface wind speed and cloud-contamination effects on the satellite aerosol retrieval. The Modern Era Retrospective-Analysis for Research and Applications (MERRA) reveals that wind speeds over the global coastal region (with mean and median values of 2.94 and 2.66 meters per second, respectively) are often slower than the 6 meters per second assumed in the MODIS Ocean algorithm. Because of the high correlation (R^2 > 0.98) between the bias in binned MODIS AOD and the corresponding binned wind speed over the coastal sea surface, an empirical scheme for correcting the bias of AOD retrieved from the MODIS Ocean algorithm is formulated; it is shown to be effective over the majority of the coastal AERONET stations and hence can be used in future analyses of AOD trends and in MODIS AOD data assimilation.
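As a schematic of the empirical correction described above, the sketch below fits a linear relation between binned AOD bias (MODIS Ocean minus AERONET) and binned wind speed, then subtracts the predicted bias; the linear form, the binning and the variable names are assumptions for illustration, not the published scheme.

```python
import numpy as np

def fit_wind_bias(wind_speed, aod_bias, n_bins=10):
    """Fit a linear relation between bin-averaged AOD bias and bin-averaged
    coastal surface wind speed."""
    edges = np.linspace(wind_speed.min(), wind_speed.max(), n_bins + 1)
    idx = np.digitize(wind_speed, edges[1:-1])
    w_bin, b_bin = [], []
    for i in range(n_bins):
        mask = idx == i
        if mask.any():                       # skip empty bins
            w_bin.append(wind_speed[mask].mean())
            b_bin.append(aod_bias[mask].mean())
    slope, intercept = np.polyfit(w_bin, b_bin, deg=1)
    return slope, intercept

def correct_aod(aod_ocean, wind_speed, slope, intercept):
    """Remove the wind-speed-dependent bias from Ocean-algorithm AOD."""
    return aod_ocean - (slope * wind_speed + intercept)
```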
Penalty Dynamic Programming Algorithm for Dim Targets Detection in Sensor Systems
Huang, Dayu; Xue, Anke; Guo, Yunfei
2012-01-01
In order to detect and track multiple maneuvering dim targets in sensor systems, an improved dynamic programming track-before-detect algorithm (DP-TBD), called penalty DP-TBD (PDP-TBD), is proposed. The performance of the tracking stage is fed back to the detection stage through a penalty term in the merit function; the penalty term is a function of the possible target state estimate, which can be obtained by the tracking methods. With this feedback, the algorithm combines traditional tracking techniques with DP-TBD and can simultaneously detect and track maneuvering dim targets. Meanwhile, a reasonable constraint that a sensor measurement can originate from only one target or from clutter is proposed to minimize track separation. Thus, the algorithm can be used in multi-target situations with an unknown number of targets. The efficiency and advantages of PDP-TBD compared with two existing methods are demonstrated through several simulations. PMID:22666074
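A minimal sketch of a DP-TBD recursion with a tracker-feedback penalty, under simplifying assumptions: the state is a pixel position, transitions are limited to a small neighborhood, and the penalty grows linearly with the distance from an externally supplied predicted position (a simplified stand-in for the paper's penalty on the possible target state estimate). The array `predicted` of per-frame predicted positions is a hypothetical input.

```python
import numpy as np

def penalty_dp_tbd(frames, predicted, lam=1.0, max_step=3):
    """Accumulate a merit function over a stack of intensity frames.
    frames: array of shape (T, H, W); predicted: array of shape (T, 2) with
    per-frame predicted (row, col) positions from an external tracker."""
    T, H, W = frames.shape
    merit = frames[0].astype(float)
    for t in range(1, T):
        new = np.full((H, W), -np.inf)
        pi, pj = predicted[t]
        for i in range(H):
            for j in range(W):
                # Best predecessor merit within a (2*max_step+1)^2 neighborhood.
                i0, i1 = max(0, i - max_step), min(H, i + max_step + 1)
                j0, j1 = max(0, j - max_step), min(W, j + max_step + 1)
                best_prev = merit[i0:i1, j0:j1].max()
                penalty = lam * np.hypot(i - pi, j - pj)   # distance to predicted state
                new[i, j] = frames[t, i, j] + best_prev - penalty
        merit = new
    peak = np.unravel_index(np.argmax(merit), merit.shape)
    return merit, peak
```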
ATR architecture for multisensor fusion
NASA Astrophysics Data System (ADS)
Hamilton, Mark K.; Kipp, Teresa A.
1996-06-01
The work of the U.S. Army Research Laboratory (ARL) in the area of algorithms for the identification of static military targets in single-frame electro-optical (EO) imagery has demonstrated great potential in platform-based automatic target identification (ATI). In this case, the term identification is used to mean being able to tell the difference between two military vehicles -- e.g., the M60 from the T72. ARL's work includes not only single-sensor forward-looking infrared (FLIR) ATI algorithms, but also multi-sensor ATI algorithms. We briefly discuss ARL's hybrid model-based/data-learning strategy for ATI, which represents a significant step forward in ATI algorithm design. For example, in the case of single sensor FLIR it allows the human algorithm designer to build directly into the algorithm knowledge that can be adequately modeled at this time, such as the target geometry which directly translates into the target silhouette in the FLIR realm. In addition, it allows structure that is not currently well understood (i.e., adequately modeled) to be incorporated through automated data-learning algorithms, which in a FLIR directly translates into an internal thermal target structure signature. This paper shows the direct applicability of this strategy to both the single-sensor FLIR as well as the multi-sensor FLIR and laser radar.
A taste of dark matter: Flavour constraints on pseudoscalar mediators
Dolan, Matthew J.; Kahlhoefer, Felix; McCabe, Christopher; ...
2015-03-31
Dark matter interacting via the exchange of a light pseudoscalar can induce observable signals in indirect detection experiments and experience large self-interactions while evading the strong bounds from direct dark matter searches. The pseudoscalar mediator will however induce flavour-changing interactions in the Standard Model, providing a promising alternative way to test these models. We investigate in detail the constraints arising from rare meson decays and fixed target experiments for different coupling structures between the pseudoscalar and Standard Model fermions. The resulting bounds are highly complementary to the information inferred from the dark matter relic density and the constraints from primordial nucleosynthesis. We discuss the implications of our findings for the dark matter self-interaction cross section and the prospects of probing dark matter coupled to a light pseudoscalar with direct or indirect detection experiments. In particular, we find that a pseudoscalar mediator can only explain the Galactic Centre excess if its mass is above that of the B mesons, and that it is impossible to obtain a sufficiently large direct detection cross section to account for the DAMA modulation.
Exacerbating the Cosmological Constant Problem with Interacting Dark Energy Models.
Marsh, M C David
2017-01-06
Future cosmological surveys will probe the expansion history of the Universe and constrain phenomenological models of dark energy. Such models do not address the fine-tuning problem of the vacuum energy, i.e., the cosmological constant problem (CCP), but can make it spectacularly worse. We show that this is the case for "interacting dark energy" models in which the masses of the dark matter states depend on the dark energy sector. If realized in nature, these models have far-reaching implications for proposed solutions to the CCP that require the number of vacua to exceed the fine-tuning of the vacuum energy density. We show that current estimates of the number of flux vacua in string theory, N_{vac}∼O(10^{272 000}), are far too small to realize certain simple models of interacting dark energy and solve the cosmological constant problem anthropically. These models admit distinctive observational signatures that can be targeted by future gamma-ray observatories, hence making it possible to observationally rule out the anthropic solution to the cosmological constant problem in theories with a finite number of vacua.
Dark chocolate acceptability: influence of cocoa origin and processing conditions.
Torres-Moreno, Miriam; Tarrega, Amparo; Costell, Elvira; Blanch, Consol
2012-01-30
Chocolate properties can vary depending on cocoa origin, composition and manufacturing procedure, which affect consumer acceptability. The aim of this work was to study the effect of two cocoa origins (Ghana and Ecuador) and two processing conditions (roasting time and conching time) on dark chocolate acceptability. Overall acceptability and acceptability for different attributes (colour, flavour, odour and texture) were evaluated by 95 consumers. Differences in acceptability among dark chocolates were mainly related to differences in flavour acceptability. The use of a long roasting time lowered chocolate acceptability in Ghanaian samples while it had no effect on acceptability of Ecuadorian chocolates. This response was observed for most consumers (two subgroups with different frequency consumption of dark chocolate). However, for a third group of consumers identified as distinguishers, the most acceptable dark chocolate samples were those produced with specific combinations of roasting time and conching time for each of the cocoa geographical origin considered. To produce dark chocolates from a single origin it is important to know the target market preferences and to select the appropriate roasting and conching conditions. Copyright © 2011 Society of Chemical Industry.
Quick fuzzy backpropagation algorithm.
Nikov, A; Stoeva, S
2001-03-01
A modification of the fuzzy backpropagation (FBP) algorithm, called the QuickFBP algorithm, is proposed, in which the computation of the net function is significantly quicker. It is proved that the FBP algorithm has exponential time complexity, while the QuickFBP algorithm has polynomial time complexity. Convergence conditions of the QuickFBP and FBP algorithms are defined and proved for: (1) single-output neural networks in the case of training patterns with different targets; and (2) multiple-output neural networks in the case of training patterns with an equal-valued target vector. They support the automation of the weight-training process (quasi-unsupervised learning), establishing the target value(s) depending on the network's input values. In these cases the simulation results confirm the convergence of both algorithms. An example with a large neural network illustrates the significantly greater training speed of the QuickFBP algorithm compared with the FBP algorithm. The adaptation of an interactive web system to its users on the basis of the QuickFBP algorithm is presented. Since the QuickFBP algorithm enables quasi-unsupervised learning, it is broadly applicable in adaptive and adaptable interactive systems, data mining, and similar applications.
Results from the XENON10 and the Race to Detect Dark Matter with Noble Liquids
Shutt, Tom [Case Western Reserve, Cleveland, Ohio, United States]
2017-12-09
Detectors based on liquid noble gases have the potential to revolutionize the direct search for WIMP dark matter. The XENON10 experiment, of which I am a member, has recently announced the results from its first data run and is now the leading WIMP search experiment. This and other experiments using xenon, argon and neon have the potential to rapidly move from the current kg-scale target mass to the ton scale and well beyond. This should allow a (nearly) definitive test or discovery of dark matter if it is in the form of weakly interacting massive particles.
Readout technologies for directional WIMP Dark Matter detection
NASA Astrophysics Data System (ADS)
Battat, J. B. R.; Irastorza, I. G.; Aleksandrov, A.; Asada, T.; Baracchini, E.; Billard, J.; Bosson, G.; Bourrion, O.; Bouvier, J.; Buonaura, A.; Burdge, K.; Cebrián, S.; Colas, P.; Consiglio, L.; Dafni, T.; D'Ambrosio, N.; Deaconu, C.; De Lellis, G.; Descombes, T.; Di Crescenzo, A.; Di Marco, N.; Druitt, G.; Eggleston, R.; Ferrer-Ribas, E.; Fusayasu, T.; Galán, J.; Galati, G.; García, J. A.; Garza, J. G.; Gentile, V.; Garcia-Sciveres, M.; Giomataris, Y.; Guerrero, N.; Guillaudin, O.; Guler, A. M.; Harton, J.; Hashimoto, T.; Hedges, M. T.; Iguaz, F. J.; Ikeda, T.; Jaegle, I.; Kadyk, J. A.; Katsuragawa, T.; Komura, S.; Kubo, H.; Kuge, K.; Lamblin, J.; Lauria, A.; Lee, E. R.; Lewis, P.; Leyton, M.; Loomba, D.; Lopez, J. P.; Luzón, G.; Mayet, F.; Mirallas, H.; Miuchi, K.; Mizumoto, T.; Mizumura, Y.; Monacelli, P.; Monroe, J.; Montesi, M. C.; Naka, T.; Nakamura, K.; Nishimura, H.; Ochi, A.; Papevangelou, T.; Parker, J. D.; Phan, N. S.; Pupilli, F.; Richer, J. P.; Riffard, Q.; Rosa, G.; Santos, D.; Sawano, T.; Sekiya, H.; Seong, I. S.; Snowden-Ifft, D. P.; Spooner, N. J. C.; Sugiyama, A.; Taishaku, R.; Takada, A.; Takeda, A.; Tanaka, M.; Tanimori, T.; Thorpe, T. N.; Tioukov, V.; Tomita, H.; Umemoto, A.; Vahsen, S. E.; Yamaguchi, Y.; Yoshimoto, M.; Zayas, E.
2016-11-01
The measurement of the direction of WIMP-induced nuclear recoils is a compelling but technologically challenging strategy to provide an unambiguous signature of the detection of Galactic dark matter. Most directional detectors aim to reconstruct the dark-matter-induced nuclear recoil tracks, either in gas or solid targets. The main challenge with directional detection is the need for high spatial resolution over large volumes, which puts strong requirements on the readout technologies. In this paper we review the various detector readout technologies used by directional detectors. In particular, we summarize the challenges, advantages and drawbacks of each approach, and discuss future prospects for these technologies.
NASA Astrophysics Data System (ADS)
Li, Shuo; Jin, Weiqi; Li, Li; Li, Yiyang
2018-05-01
Infrared thermal images reflect the thermal-radiation distribution of a scene, but their contrast is usually low, so it is generally necessary to enhance the contrast of infrared images before subsequent recognition and analysis. Based on adaptive double-plateau histogram equalization, this paper presents an improved contrast-enhancement algorithm for infrared thermal images. In the proposed algorithm, the normalized coefficient of variation of the histogram, which characterizes the level of contrast enhancement, is introduced as feedback information to adjust the upper and lower plateau thresholds. Experiments on actual infrared images show that, compared with three typical contrast-enhancement algorithms, the proposed algorithm has better scene adaptability and yields better contrast-enhancement results for infrared images with more dark areas or a higher dynamic range. Hence, it has high application value in contrast enhancement, dynamic range compression, and digital detail enhancement for infrared thermal images.
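A minimal sketch of double-plateau histogram equalization, with the lower and upper plateau thresholds passed in as fixed inputs; the adaptive threshold adjustment driven by the normalized coefficient of variation, which is the paper's contribution, is not shown.

```python
import numpy as np

def double_plateau_equalize(image, t_low, t_up, out_max=255):
    """Histogram equalization with the histogram clipped between a lower and an
    upper plateau threshold before the cumulative mapping is built.
    `image` is assumed to be a non-negative integer (e.g. 8-bit) array."""
    hist = np.bincount(image.ravel(), minlength=out_max + 1).astype(float)
    clipped = hist.copy()
    clipped[hist > t_up] = t_up                       # limit dominant background bins
    clipped[(hist > 0) & (hist < t_low)] = t_low      # boost sparse detail bins
    cdf = np.cumsum(clipped)
    lut = np.round((cdf - cdf[0]) / (cdf[-1] - cdf[0]) * out_max).astype(image.dtype)
    return lut[image]
```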
Top-attack modeling and automatic target detection using synthetic FLIR scenery
NASA Astrophysics Data System (ADS)
Weber, Bruce A.; Penn, Joseph A.
2004-09-01
A series of experiments have been performed to verify the utility of algorithmic tools for the modeling and analysis of cold-target signatures in synthetic, top-attack, FLIR video sequences. The tools include: MuSES/CREATION for the creation of synthetic imagery with targets, an ARL target detection algorithm to detect imbedded synthetic targets in scenes, and an ARL scoring algorithm, using Receiver-Operating-Characteristic (ROC) curve analysis, to evaluate detector performance. Cold-target detection variability was examined as a function of target emissivity, surrounding clutter type, and target placement in non-obscuring clutter locations. Detector metrics were also individually scored so as to characterize the effect of signature/clutter variations. Results show that using these tools, a detailed, physically meaningful, target detection analysis is possible and that scenario specific target detectors may be developed by selective choice and/or weighting of detector metrics. However, developing these tools into a reliable predictive capability will require the extension of these results to the modeling and analysis of a large number of data sets configured for a wide range of target and clutter conditions. Finally, these tools should also be useful for the comparison of competitive detection algorithms by providing well defined, and controllable target detection scenarios, as well as for the training and testing of expert human observers.
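As an illustration of ROC-based scoring of detector output (a generic sketch, not ARL's scoring algorithm), per-candidate confidences and ground-truth labels can be turned into an ROC curve and area under the curve with scikit-learn:

```python
import numpy as np
from sklearn.metrics import roc_curve, auc

def score_detector(confidences, labels):
    """`confidences` are per-candidate detection scores; `labels` mark whether
    each candidate corresponds to a true target (1) or a false alarm (0)."""
    fpr, tpr, thresholds = roc_curve(labels, confidences)
    return fpr, tpr, auc(fpr, tpr)

# usage with illustrative scores
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    labels = rng.integers(0, 2, size=200)
    scores = labels * 0.6 + rng.uniform(size=200) * 0.6   # targets score higher on average
    fpr, tpr, area = score_detector(scores, labels)
    print(f"AUC = {area:.2f}")
```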
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ali, I; Ahmad, S; Alsbou, N
Purpose: A motion algorithm was developed to extract the actual length, CT numbers and motion amplitude of a mobile target imaged with cone-beam CT (CBCT), retrospectively to image reconstruction. Methods: The motion model considered a mobile target moving sinusoidally and employed three measurable parameters obtained from CBCT images: the apparent length, CT-number level and CT-number gradient of the mobile target, from which the actual length and CT number of the stationary target and the motion amplitude are extracted. The algorithm was verified experimentally with a mobile phantom setup containing three targets of different sizes made of homogeneous tissue-equivalent gel embedded in a thorax phantom. The phantom moved sinusoidally in one direction with eight amplitudes (0-20 mm) and a frequency of 15 cycles per minute. The model required imaging parameters such as slice thickness and imaging time. Results: The motion algorithm extracted three unknown parameters retrospectively to CBCT image reconstruction: the length of the target, the CT-number level, and the motion amplitude. The algorithm relates these three unknowns to the measurable apparent length, CT-number level and gradient of well-defined mobile targets in CBCT images. The motion model agreed with the measured apparent lengths, which depended on the actual target length and the motion amplitude. The cumulative CT number of a mobile target depended on the CT-number level of the stationary target and the motion amplitude. The gradient of the CT-number distribution of a mobile target depended on the stationary CT-number level, the actual target length along the direction of motion, and the motion amplitude. Motion frequency and phase did not affect the elongation or CT-number distributions of mobile targets when the imaging time included several motion cycles. Conclusion: The motion algorithm developed in this study has potential applications in diagnostic CT imaging and radiotherapy for extracting actual lengths, sizes and CT numbers distorted by motion in CBCT imaging. The model also provides further information about the motion of the target.
Access control violation prevention by low-cost infrared detection
NASA Astrophysics Data System (ADS)
Rimmer, Andrew N.
2004-09-01
A low cost 16x16 un-cooled pyroelectric detector array, allied with advanced tracking and detection algorithms, has enabled the development of a universal detector with a wide range of applications in people monitoring and homeland security. Violation of access control systems, whether controlled by proximity card, biometrics, swipe card or similar, may occur by 'tailgating' or 'piggybacking' where an 'approved' entrant with a valid entry card is accompanied by a closely spaced 'non-approved' entrant. The violation may be under duress, where the accompanying person is attempting to enter a secure facility by force or threat. Alternatively, the violation may be benign where staff members collude either through habit or lassitude, either with each other or with third parties, without considering the security consequences. Examples of the latter could include schools, hospitals or maternity homes. The 16x16 pyroelectric array is integrated into a detector or imaging system which incorporates data processing, target extraction and decision making algorithms. The algorithms apply interpolation to the array output, allowing a higher level of resolution than might otherwise be expected from such a low resolution array. The pyroelectric detection principle means that the detection will work in variable light conditions and even in complete darkness, if required. The algorithms can monitor the shape, form, temperature and number of persons in the scene and utilise this information to determine whether a violation has occurred or not. As people are seen as 'hot blobs' and are not individually recognisable, civil liberties are not infringed in the detection process. The output from the detector is a simple alarm signal which may act as input to the access control system as an alert or to trigger CCTV image display and storage. The applications for a tailgate detector can be demonstrated across many medium security applications where there are no physical means to prevent this type of security breach.
Zhang, Zhengyan; Zhang, Jianyun; Zhou, Qingsong; Li, Xiaobo
2018-01-01
In this paper, we consider the problem of tracking the directions of arrival (DOA) and directions of departure (DOD) of multiple targets for bistatic multiple-input multiple-output (MIMO) radar, and propose a high-precision target-angle tracking algorithm. First, the linear relationship between the covariance-matrix difference and the angle difference at adjacent moments is obtained through three approximate relations. The proposed algorithm then derives the relationship between the elements of the covariance-matrix difference, and its performance is improved by averaging the covariance-matrix elements. Finally, the least-squares method is used to estimate the DOD and DOA. The algorithm achieves automatic association of the angles and provides better performance than the adaptive asymmetric joint diagonalization (AAJD) algorithm. The simulation results demonstrate the effectiveness of the proposed algorithm, which provides technical support for the practical application of MIMO radar. PMID:29518957
Zhang, Zhengyan; Zhang, Jianyun; Zhou, Qingsong; Li, Xiaobo
2018-03-07
In this paper, we consider the problem of tracking the directions of arrival (DOA) and directions of departure (DOD) of multiple targets for bistatic multiple-input multiple-output (MIMO) radar, and propose a high-precision target-angle tracking algorithm. First, the linear relationship between the covariance-matrix difference and the angle difference at adjacent moments is obtained through three approximate relations. The proposed algorithm then derives the relationship between the elements of the covariance-matrix difference, and its performance is improved by averaging the covariance-matrix elements. Finally, the least-squares method is used to estimate the DOD and DOA. The algorithm achieves automatic association of the angles and provides better performance than the adaptive asymmetric joint diagonalization (AAJD) algorithm. The simulation results demonstrate the effectiveness of the proposed algorithm, which provides technical support for the practical application of MIMO radar.
NASA Astrophysics Data System (ADS)
Kuźniak, M.; Amaudruz, P.-A.; Batygov, M.; Beltran, B.; Bonatt, J.; Boulay, M. G.; Broerman, B.; Bueno, J. F.; Butcher, A.; Cai, B.; Chen, M.; Chouinard, R.; Cleveland, B. T.; Dering, K.; DiGioseffo, J.; Duncan, F.; Flower, T.; Ford, R.; Giampa, P.; Gorel, P.; Graham, K.; Grant, D. R.; Guliyev, E.; Hallin, A. L.; Hamstra, M.; Harvey, P.; Jillings, C. J.; Lawson, I.; Li, O.; Liimatainen, P.; Majewski, P.; McDonald, A. B.; McElroy, T.; McFarlane, K.; Monroe, J.; Muir, A.; Nantais, C.; Ng, C.; Noble, A. J.; Ouellet, C.; Palladino, K.; Pasuthip, P.; Peeters, S. J. M.; Pollmann, T.; Rau, W.; Retière, F.; Seeburn, N.; Singhrao, K.; Skensved, P.; Smith, B.; Sonley, T.; Tang, J.; Vázquez-Jáuregui, E.; Veloce, L.; Walding, J.; Ward, M.; DEAP Collaboration
2016-04-01
The DEAP-3600 experiment is located 2 km underground at SNOLAB, in Sudbury, Ontario. It is a single-phase detector that searches for dark matter particle interactions within a 1000-kg fiducial mass target of liquid argon. A first generation prototype detector (DEAP-1) with a 7-kg liquid argon target mass demonstrated a high level of pulse-shape discrimination (PSD) for reducing β / γ backgrounds and helped to develop low radioactivity techniques to mitigate surface-related α backgrounds. Construction of the DEAP-3600 detector is nearly complete and commissioning is starting in 2014. The target sensitivity to spin-independent scattering of Weakly Interacting Massive Particles (WIMPs) on nucleons of 10-46cm2 will allow one order of magnitude improvement in sensitivity over current searches at 100 GeV WIMP mass. This paper presents an overview and status of the DEAP-3600 project and discusses plans for a future multi-tonne experiment, DEAP-50T.
The new approach for infrared target tracking based on the particle filter algorithm
NASA Astrophysics Data System (ADS)
Sun, Hang; Han, Hong-xia
2011-08-01
Target tracking against complex backgrounds in infrared image sequences is an active research field, providing an important basis for applications such as video monitoring, precision guidance, video compression, and human-computer interaction. As a typical algorithm in the tracking framework based on filtering and data association, the particle filter, with its non-parametric estimation characteristic, can handle nonlinear and non-Gaussian problems and has therefore been widely used. Various forms of proposal density allow the particle filter to remain valid when the target is occluded or to recover tracking after failure, but capturing the changes of the state space requires a sufficient number of particles, and this number grows exponentially with the state dimension, which increases the computational load. In this paper the particle filter is combined with the mean-shift algorithm, addressing the deficiencies of classic mean-shift tracking, which is easily trapped in local minima and cannot reach the global optimum against complex backgrounds. From the two perspectives of adaptive multiple-information fusion and combination with the particle-filter framework, we extend the classic mean-shift tracking framework. Based on the first perspective, we propose an improved mean-shift infrared target tracking algorithm based on multiple-information fusion. After analysing the infrared characteristics of the target, the algorithm first extracts the target's grey-level and edge features and guides both by the target's motion information, yielding a motion-guided grey-level feature and a motion-guided edge feature. A new adaptive fusion mechanism is then proposed to integrate these two features into the mean-shift tracking framework. Finally, an automatic target-model updating strategy is designed to further improve tracking performance. Experimental results show that the algorithm compensates for the heavy computation of the particle filter and effectively overcomes the tendency of mean shift to fall into local extrema instead of the global maximum. Because grey-level and target motion information are fused, the approach also suppresses background interference, ultimately improving the stability and real-time performance of target tracking.
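For context, a minimal bootstrap (SIR) particle filter for a one-dimensional target is sketched below; it illustrates the predict/weight/resample cycle and why the particle count drives the computational load the abstract mentions. It is not the fused particle-filter/mean-shift tracker of the paper, and all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def bootstrap_particle_filter(measurements, n_particles=500,
                              process_std=1.0, meas_std=2.0):
    """Minimal bootstrap (SIR) particle filter for a 1-D random-walk target."""
    particles = rng.normal(measurements[0], meas_std, n_particles)
    weights = np.full(n_particles, 1.0 / n_particles)
    estimates = []
    for z in measurements:
        # Predict: propagate particles through the motion model.
        particles += rng.normal(0.0, process_std, n_particles)
        # Update: weight particles by the measurement likelihood.
        weights *= np.exp(-0.5 * ((z - particles) / meas_std) ** 2)
        weights += 1e-300                        # avoid numerical underflow
        weights /= weights.sum()
        estimates.append(np.sum(particles * weights))
        # Systematic resampling combats weight degeneracy.
        positions = (np.arange(n_particles) + rng.random()) / n_particles
        idx = np.searchsorted(np.cumsum(weights), positions)
        particles = particles[np.minimum(idx, n_particles - 1)]
        weights.fill(1.0 / n_particles)
    return np.array(estimates)

true_track = np.cumsum(rng.normal(0, 1, 100)) + 50
z = true_track + rng.normal(0, 2, 100)
print(np.abs(bootstrap_particle_filter(z) - true_track).mean())
```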
DOE Office of Scientific and Technical Information (OSTI.GOV)
Agnes, P.
We report the first results of DarkSide-50, a direct search for dark matter operating in the underground Laboratori Nazionali del Gran Sasso (LNGS) and searching for the rare nuclear recoils possibly induced by weakly interacting massive particles (WIMPs). The dark matter detector is a liquid argon time projection chamber with a (46.4 ± 0.7) kg active mass, operated inside a 30 t organic liquid scintillator neutron veto, which is in turn installed at the center of a 1 kt water Cherenkov veto for the residual flux of cosmic rays. We report here the null results of a dark matter search for a (1422 ± 67) kg d exposure with an atmospheric argon fill. This is the most sensitive dark matter search performed with an argon target, corresponding to a 90% CL upper limit on the WIMP-nucleon spin-independent cross section of 6.1 × 10^-44 cm^2 for a WIMP mass of 100 GeV/c^2.
Not-so-well-tempered neutralino
NASA Astrophysics Data System (ADS)
Profumo, Stefano; Stefaniak, Tim; Stephenson-Haskins, Laurel
2017-09-01
Light electroweakinos, the neutral and charged fermionic supersymmetric partners of the standard model SU (2 )×U (1 ) gauge bosons and of the two SU(2) Higgs doublets, are an important target for searches for new physics with the Large Hadron Collider (LHC). However, if the lightest neutralino is the dark matter, constraints from direct dark matter detection experiments rule out large swaths of the parameter space accessible to the LHC, including in large part the so-called "well-tempered" neutralinos. We focus on the minimal supersymmetric standard model (MSSM) and explore in detail which regions of parameter space are not excluded by null results from direct dark matter detection, assuming exclusive thermal production of neutralinos in the early universe, and illustrate the complementarity with current and future LHC searches for electroweak gauginos. We consider both bino-Higgsino and bino-wino "not-so-well-tempered" neutralinos, i.e. we include models where the lightest neutralino constitutes only part of the cosmological dark matter, with the consequent suppression of the constraints from direct and indirect dark matter searches.
A review on quantum search algorithms
NASA Astrophysics Data System (ADS)
Giri, Pulak Ranjan; Korepin, Vladimir E.
2017-12-01
The use of superposition of states in quantum computation, known as quantum parallelism, offers a significant speed advantage over classical computation, as is evident from early quantum algorithms such as Deutsch's algorithm, the Deutsch-Jozsa algorithm and its variation the Bernstein-Vazirani algorithm, Simon's algorithm, Shor's algorithms, etc. Quantum parallelism also significantly speeds up database search, which is important in computer science because it appears as a subroutine in many important algorithms. Grover's quantum database search finds the target element in an unsorted database quadratically faster than a classical computer. We review Grover's quantum search algorithms for a single target element and for multiple target elements in a database. The partial search algorithm of Grover and Radhakrishnan and its optimization by Korepin, called the GRK algorithm, are also discussed.
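The quadratic speed-up can be illustrated with a small classical simulation of Grover's iteration (an oracle sign flip followed by inversion about the mean) on a state vector; the optimal number of queries is close to (pi/4)*sqrt(N). The code below is a numerical sketch, not a quantum implementation.

```python
import numpy as np

def grover_success_probability(n_items, n_iterations, target=0):
    """Simulate Grover iterations on a uniform state vector (classical simulation)."""
    state = np.full(n_items, 1.0 / np.sqrt(n_items))
    for _ in range(n_iterations):
        state[target] *= -1.0                  # oracle: flip the target amplitude
        state = 2.0 * state.mean() - state     # diffusion: inversion about the mean
    return state[target] ** 2

N = 1 << 10                                    # database of 1024 items
k_opt = int(round(np.pi / 4 * np.sqrt(N)))     # ~ (pi/4) sqrt(N) oracle queries
print(k_opt, grover_success_probability(N, k_opt))   # ~25 iterations, probability near 1
```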
Analysis of Alpha Backgrounds in DarkSide-50
NASA Astrophysics Data System (ADS)
Monte, Alissa; DarkSide Collaboration
2017-01-01
DarkSide-50 is the current phase of the DarkSide direct dark matter search program, operating underground at the Laboratori Nazionali del Gran Sasso in Italy. The detector is a dual-phase argon Time Projection Chamber (TPC), designed for direct detection of Weakly Interacting Massive Particles, and housed within an active veto system of liquid scintillator and water Cherenkov detectors. Since switching to a target of low radioactivity argon extracted from underground sources in April, 2016, the background is no longer dominated by naturally occurring 39Ar. However, alpha backgrounds from radon and its daughters remain, both from the liquid argon bulk and internal detector surfaces. I will present details of the analysis used to understand and quantify alpha backgrounds, as well as to understand other types of radon contamination that may be present, and our sensitivity to them.
TargetSpy: a supervised machine learning approach for microRNA target prediction.
Sturm, Martin; Hackenberg, Michael; Langenberger, David; Frishman, Dmitrij
2010-05-28
Virtually all currently available microRNA target site prediction algorithms require the presence of a (conserved) seed match to the 5' end of the microRNA. Recently however, it has been shown that this requirement might be too stringent, leading to a substantial number of missed target sites. We developed TargetSpy, a novel computational approach for predicting target sites regardless of the presence of a seed match. It is based on machine learning and automatic feature selection using a wide spectrum of compositional, structural, and base pairing features covering current biological knowledge. Our model does not rely on evolutionary conservation, which allows the detection of species-specific interactions and makes TargetSpy suitable for analyzing unconserved genomic sequences. In order to allow for an unbiased comparison of TargetSpy to other methods, we classified all algorithms into three groups: I) no seed match requirement, II) seed match requirement, and III) conserved seed match requirement. TargetSpy predictions for classes II and III are generated by appropriate postfiltering. On a human dataset revealing fold-change in protein production for five selected microRNAs our method shows superior performance in all classes. In Drosophila melanogaster not only our class II and III predictions are on par with other algorithms, but notably the class I (no-seed) predictions are just marginally less accurate. We estimate that TargetSpy predicts between 26 and 112 functional target sites without a seed match per microRNA that are missed by all other currently available algorithms. Only a few algorithms can predict target sites without demanding a seed match and TargetSpy demonstrates a substantial improvement in prediction accuracy in that class. Furthermore, when conservation and the presence of a seed match are required, the performance is comparable with state-of-the-art algorithms. TargetSpy was trained on mouse and performs well in human and drosophila, suggesting that it may be applicable to a broad range of species. Moreover, we have demonstrated that the application of machine learning techniques in combination with upcoming deep sequencing data results in a powerful microRNA target site prediction tool http://www.targetspy.org.
TargetSpy: a supervised machine learning approach for microRNA target prediction
2010-01-01
Background Virtually all currently available microRNA target site prediction algorithms require the presence of a (conserved) seed match to the 5' end of the microRNA. Recently however, it has been shown that this requirement might be too stringent, leading to a substantial number of missed target sites. Results We developed TargetSpy, a novel computational approach for predicting target sites regardless of the presence of a seed match. It is based on machine learning and automatic feature selection using a wide spectrum of compositional, structural, and base pairing features covering current biological knowledge. Our model does not rely on evolutionary conservation, which allows the detection of species-specific interactions and makes TargetSpy suitable for analyzing unconserved genomic sequences. In order to allow for an unbiased comparison of TargetSpy to other methods, we classified all algorithms into three groups: I) no seed match requirement, II) seed match requirement, and III) conserved seed match requirement. TargetSpy predictions for classes II and III are generated by appropriate postfiltering. On a human dataset revealing fold-change in protein production for five selected microRNAs our method shows superior performance in all classes. In Drosophila melanogaster not only our class II and III predictions are on par with other algorithms, but notably the class I (no-seed) predictions are just marginally less accurate. We estimate that TargetSpy predicts between 26 and 112 functional target sites without a seed match per microRNA that are missed by all other currently available algorithms. Conclusion Only a few algorithms can predict target sites without demanding a seed match and TargetSpy demonstrates a substantial improvement in prediction accuracy in that class. Furthermore, when conservation and the presence of a seed match are required, the performance is comparable with state-of-the-art algorithms. TargetSpy was trained on mouse and performs well in human and drosophila, suggesting that it may be applicable to a broad range of species. Moreover, we have demonstrated that the application of machine learning techniques in combination with upcoming deep sequencing data results in a powerful microRNA target site prediction tool http://www.targetspy.org. PMID:20509939
Applications of independent component analysis in SAR images
NASA Astrophysics Data System (ADS)
Huang, Shiqi; Cai, Xinhua; Hui, Weihua; Xu, Ping
2009-07-01
The detection of faint, small, and hidden targets in synthetic aperture radar (SAR) images is still an open issue for automatic target recognition (ATR) systems. How to effectively separate these targets from the complex background is the aim of this paper. Independent component analysis (ICA) can enhance SAR image targets and improve the signal-to-clutter ratio (SCR), which helps in detecting and recognizing faint targets. This paper therefore proposes a new SAR image target detection algorithm based on ICA. In the experiments, the fast ICA (FICA) algorithm is utilized. Finally, real SAR image data are used to test the method. The experimental results verify that the algorithm is feasible and that it can improve the SCR of SAR images and increase the detection rate for faint small targets.
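A hedged sketch of the general idea follows: synthetic stand-ins for SAR observations are unmixed with scikit-learn's FastICA, and the peak-to-mean ratio of each recovered component serves as a crude proxy for the signal-to-clutter ratio. The data, mixing matrix, and metric are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(2)

# Synthetic stand-in for SAR data: two latent "source" images (clutter + target)
# observed through two linear mixtures, flattened to (n_samples, n_features).
h, w = 64, 64
clutter = rng.gamma(shape=1.0, scale=1.0, size=(h, w))        # speckle-like clutter
target = np.zeros((h, w)); target[28:36, 28:36] = 5.0          # small bright target
sources = np.stack([clutter.ravel(), target.ravel()])
mixing = np.array([[1.0, 0.6], [0.8, 1.0]])
observations = mixing @ sources                                 # two observed images

# FastICA recovers statistically independent components; one of them should
# concentrate the target energy and therefore show an improved signal-to-clutter ratio.
ica = FastICA(n_components=2, random_state=0)
components = ica.fit_transform(observations.T).T               # shape (2, h*w)
for c in components:
    img = c.reshape(h, w)
    print("peak-to-mean ratio:", np.abs(img).max() / np.abs(img).mean())
```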
Implementation and performance evaluation of acoustic denoising algorithms for UAV
NASA Astrophysics Data System (ADS)
Chowdhury, Ahmed Sony Kamal
Unmanned Aerial Vehicles (UAVs) have become a popular alternative for wildlife monitoring and border surveillance applications. Eliminating the UAV's background noise and classifying the target audio signal effectively are still major challenges. The main goal of this thesis is to remove the UAV's background noise by means of acoustic denoising techniques. Existing denoising algorithms, such as Adaptive Least Mean Square (LMS), Wavelet Denoising, Time-Frequency Block Thresholding, and Wiener Filter, were implemented and their performance evaluated. The denoising algorithms were evaluated using average Signal to Noise Ratio (SNR), Segmental SNR (SSNR), Log Likelihood Ratio (LLR), and Log Spectral Distance (LSD) metrics. To evaluate the effectiveness of the denoising algorithms on classification of target audio, we implemented Support Vector Machine (SVM) and Naive Bayes classification algorithms. Simulation results demonstrate that the LMS and Discrete Wavelet Transform (DWT) denoising algorithms offered performance superior to the other algorithms. Finally, we implemented the LMS and DWT algorithms on a DSP board for hardware evaluation. Experimental results showed that the LMS algorithm's performance is more robust than DWT's across various noise types when classifying target audio signals.
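Of the methods listed, the LMS adaptive filter is the simplest to sketch. The example below performs adaptive noise cancellation assuming a separate noise reference channel is available (an assumption not stated in the abstract); the filter length, step size, and synthetic signals are illustrative.

```python
import numpy as np

def lms_noise_canceller(primary, noise_ref, n_taps=32, mu=0.005):
    """Adaptive noise cancellation with the LMS algorithm.

    primary   : target audio corrupted by (filtered) UAV noise.
    noise_ref : reference recording of the noise source alone.
    Returns the error signal, which approximates the cleaned target audio.
    """
    w = np.zeros(n_taps)
    cleaned = np.zeros_like(primary)
    for n in range(n_taps, len(primary)):
        x = noise_ref[n - n_taps:n][::-1]       # most recent reference samples
        y = w @ x                                # filter's estimate of the noise
        e = primary[n] - y                       # error = primary minus noise estimate
        w += 2.0 * mu * e * x                    # LMS weight update
        cleaned[n] = e
    return cleaned

fs = 8000
t = np.arange(0, 1.0, 1 / fs)
target = np.sin(2 * np.pi * 440 * t)                      # stand-in "target audio"
noise = np.random.default_rng(3).normal(0, 1, t.size)     # stand-in rotor noise
primary = target + 0.8 * noise
out = lms_noise_canceller(primary, noise)
print("input SNR ~", 10 * np.log10(np.var(target) / np.var(0.8 * noise)))
print("output residual power:", np.var(out[4000:] - target[4000:]))
```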
Real Time Intelligent Target Detection and Analysis with Machine Vision
NASA Technical Reports Server (NTRS)
Howard, Ayanna; Padgett, Curtis; Brown, Kenneth
2000-01-01
We present an algorithm for detecting a specified set of targets for an Automatic Target Recognition (ATR) application. ATR involves processing images for detecting, classifying, and tracking targets embedded in a background scene. We address the problem of discriminating between targets and nontarget objects in a scene by evaluating 40x40 image blocks belonging to an image. Each image block is first projected onto a set of templates specifically designed to separate images of targets embedded in a typical background scene from those background images without targets. These filters are found using directed principal component analysis which maximally separates the two groups. The projected images are then clustered into one of n classes based on a minimum distance to a set of n cluster prototypes. These cluster prototypes have previously been identified using a modified clustering algorithm based on prior sensed data. Each projected image pattern is then fed into the associated cluster's trained neural network for classification. A detailed description of our algorithm will be given in this paper. We outline our methodology for designing the templates, describe our modified clustering algorithm, and provide details on the neural network classifiers. Evaluation of the overall algorithm demonstrates that our detection rates approach 96% with a false positive rate of less than 0.03%.
Data association approaches in bearings-only multi-target tracking
NASA Astrophysics Data System (ADS)
Xu, Benlian; Wang, Zhiquan
2008-03-01
To meet the requirements on computational complexity and correctness of data association in multi-target tracking, two algorithms are proposed in this paper. Algorithm 1 is developed from a modified version of the dual simplex method and has the advantage of a direct and explicit form of the optimal solution. Algorithm 2 is based on the idea of Algorithm 1 together with a rotational sort method; it not only retains the advantages of Algorithm 1 but also reduces the computational burden, its complexity being only 1/N that of Algorithm 1. Finally, numerical analyses are carried out to evaluate the performance of the two data association algorithms.
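The paper's dual-simplex and rotational-sort methods are not reproduced here, but the underlying assignment problem can be illustrated with the standard Hungarian solver in SciPy, using absolute bearing residuals as the association cost; the numbers below are made up for illustration.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Predicted bearings (rad) of 4 tracked targets and 4 new bearing measurements.
predicted = np.array([0.10, 0.55, 1.20, 2.00])
measured = np.array([0.52, 2.03, 0.12, 1.18])

# Cost matrix: absolute bearing residual between every track/measurement pair.
cost = np.abs(predicted[:, None] - measured[None, :])

# Optimal one-to-one association (Hungarian / Jonker-Volgenant style solver).
track_idx, meas_idx = linear_sum_assignment(cost)
for t, m in zip(track_idx, meas_idx):
    print(f"track {t} <- measurement {m} (residual {cost[t, m]:.3f} rad)")
```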
LETTER TO THE EDITOR: Optimization of partial search
NASA Astrophysics Data System (ADS)
Korepin, Vladimir E.
2005-11-01
A quantum Grover search algorithm can find a target item in a database faster than any classical algorithm. One can trade accuracy for speed and find a part of the database (a block) containing the target item even faster; this is partial search. A partial search algorithm was recently suggested by Grover and Radhakrishnan. Here we optimize it. Efficiency of the search algorithm is measured by the number of queries to the oracle. The author suggests a new version of the Grover-Radhakrishnan algorithm which uses a minimal number of such queries. The algorithm can run on the same hardware that is used for the usual Grover algorithm.
2016-06-01
TECHNICAL REPORT: Algorithm for Automatic Detection, Localization and Characterization of Magnetic Dipole Targets Using the Laser Scalar Gradiometer. Leon Vaizer, Jesse Angle, Neil...
Autonomous proximity operations using machine vision for trajectory control and pose estimation
NASA Technical Reports Server (NTRS)
Cleghorn, Timothy F.; Sternberg, Stanley R.
1991-01-01
A machine vision algorithm was developed which permits guidance control to be maintained during autonomous proximity operations. At present this algorithm exists as a simulation, running upon an 80386 based personal computer, using a ModelMATE CAD package to render the target vehicle. However, the algorithm is sufficiently simple, so that following off-line training on a known target vehicle, it should run in real time with existing vision hardware. The basis of the algorithm is a sequence of single camera images of the target vehicle, upon which radial transforms were performed. Selected points of the resulting radial signatures are fed through a decision tree, to determine whether the signature matches that of the known reference signatures for a particular view of the target. Based upon recognized scenes, the position of the maneuvering vehicle with respect to the target vehicles can be calculated, and adjustments made in the former's trajectory. In addition, the pose and spin rates of the target satellite can be estimated using this method.
NASA Astrophysics Data System (ADS)
Liu, Ming Xiong
2017-03-01
In this review, we present the current status and prospects of the dark sector physics search program of the SeaQuest/E1067 fixed-target dimuon experiment at the Fermilab Main Injector. There has been tremendous excitement and progress in searching for new physics in the dark sector in recent years. The dark sector refers to a collection of currently unknown particles that do not couple directly to the Standard Model (SM) strong and electroweak (EW) interactions but are assumed to feel gravity, and thus could be candidates for the missing dark matter (DM). Such particles may interact with SM particles through "portal" interactions. Two simple possibilities are being investigated in our initial search: (1) the dark photon and (2) the dark Higgs. They could be within immediate reach of current or near-future experimental searches. We show that there is a unique opportunity today at Fermilab to directly search for these particles in a highly motivated but uncharted parameter space in high-energy proton-nucleus collisions, in beam-dump mode, using the 120 GeV proton beam from the Main Injector. Our current search window covers the mass range 0.2-10 GeV/c^2, and in the near future, by adding an electromagnetic calorimeter (EMCal) to the spectrometer, we can further explore the lower mass region down to about 1 MeV/c^2 through the di-electron channel. If dark photons (and/or dark Higgs bosons) were observed, they would revolutionize our understanding of the fundamental structures and interactions of our universe.
Detecting Stealth Dark Matter Directly through Electromagnetic Polarizability.
Appelquist, T; Berkowitz, E; Brower, R C; Buchoff, M I; Fleming, G T; Jin, X-Y; Kiskis, J; Kribs, G D; Neil, E T; Osborn, J C; Rebbi, C; Rinaldi, E; Schaich, D; Schroeder, C; Syritsyn, S; Vranas, P; Weinberg, E; Witzel, O
2015-10-23
We calculate the spin-independent scattering cross section for direct detection that results from the electromagnetic polarizability of a composite scalar "stealth baryon" dark matter candidate, arising from a dark SU(4) confining gauge theory-"stealth dark matter." In the nonrelativistic limit, electromagnetic polarizability proceeds through a dimension-7 interaction leading to a very small scattering cross section for dark matter with weak-scale masses. This represents a lower bound on the scattering cross section for composite dark matter theories with electromagnetically charged constituents. We carry out lattice calculations of the polarizability for the lightest "baryon" states in SU(3) and SU(4) gauge theories using the background field method on quenched configurations. We find the polarizabilities of SU(3) and SU(4) to be comparable (within about 50%) normalized to the stealth baryon mass, which is suggestive for extensions to larger SU(N) groups. The resulting scattering cross sections with a xenon target are shown to be potentially detectable in the dark matter mass range of about 200-700 GeV, where the lower bound is from the existing LUX constraint while the upper bound is the coherent neutrino background. Significant uncertainties in the cross section remain due to the more complicated interaction of the polarizability operator with nuclear structure; however, the steep dependence on the dark matter mass, 1/m_B^6, suggests the observable dark matter mass range is not appreciably modified. We briefly highlight collider searches for the mesons in the theory as well as the indirect astrophysical effects that may also provide excellent probes of stealth dark matter.
Research of maneuvering target prediction and tracking technology based on IMM algorithm
NASA Astrophysics Data System (ADS)
Cao, Zheng; Mao, Yao; Deng, Chao; Liu, Qiong; Chen, Jing
2016-09-01
Maneuvering target prediction and tracking technology is widely used in both military and civilian applications, and its study has long been both a research hotspot and a difficulty. In electro-optical acquisition-tracking-pointing (ATP) systems, the traditional maneuvering targets are mainly ballistic targets, large aircraft, and other large targets. Those targets have fast velocities and strongly regular trajectories, so Kalman filtering and polynomial fitting work well when used to track them. In recent years, small unmanned aerial vehicles have developed rapidly because they are small, nimble, and simple to operate. Although they are close-in, slow, and small targets in the observation system of an ATP, they have strong maneuverability. Moreover, because these vehicles are under manual operation, their acceleration changes greatly and they move erratically, so the prediction and tracking precision of traditional algorithms is low for maneuvers such as speeding up, turning, and climbing. The interacting multiple model (IMM) algorithm uses multiple models to match the target's real movement trajectory, with interactions between the models. The IMM algorithm can switch models based on a Markov chain to adapt to changes in the target's trajectory, so it is well suited to the prediction and tracking of small unmanned aerial vehicles because of its better adaptability to irregular movement. This paper sets up a model set consisting of the constant velocity (CV) model, constant acceleration (CA) model, constant turning (CT) model, and current statistical model. Simulation and analysis of real movement trajectory data of small unmanned aerial vehicles show that prediction and tracking based on the interacting multiple model algorithm achieve lower tracking error and improved tracking precision compared with traditional algorithms.
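The distinctive part of an IMM filter is the interaction (mixing) step, which blends the per-model estimates according to the Markov transition matrix before each model's filter runs. The sketch below implements only that step for a generic two-model case; it is not the paper's CV/CA/CT/current-statistical implementation, and the numerical values are placeholders.

```python
import numpy as np

def imm_mixing(mode_probs, transition, means, covariances):
    """IMM interaction step: mix the per-model estimates before filtering.

    mode_probs  : (r,) prior model probabilities mu_i.
    transition  : (r, r) Markov transition matrix p_ij.
    means       : (r, n) state estimate of each model filter.
    covariances : (r, n, n) covariance of each model filter.
    Returns mixed means/covariances and the predicted mode probabilities.
    """
    r, n = means.shape
    c_j = transition.T @ mode_probs                      # predicted mode probabilities
    mix_w = (transition * mode_probs[:, None]) / c_j     # mixing weights mu_{i|j}
    mixed_means = mix_w.T @ means                        # (r, n)
    mixed_covs = np.zeros((r, n, n))
    for j in range(r):
        for i in range(r):
            d = means[i] - mixed_means[j]
            mixed_covs[j] += mix_w[i, j] * (covariances[i] + np.outer(d, d))
    return mixed_means, mixed_covs, c_j

# Two-model example: a benign model vs. a manoeuvring model (values are placeholders).
mu = np.array([0.7, 0.3])
P_trans = np.array([[0.95, 0.05],
                    [0.10, 0.90]])
x = np.array([[0.0, 10.0], [0.5, 12.0]])                 # [position, velocity] per model
P = np.stack([np.eye(2), 2.0 * np.eye(2)])
print(imm_mixing(mu, P_trans, x, P)[2])                  # predicted mode probabilities
```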
Genome-wide profiling of diel and circadian gene expression in the malaria vector Anopheles gambiae.
Rund, Samuel S C; Hou, Tim Y; Ward, Sarah M; Collins, Frank H; Duffield, Giles E
2011-08-09
Anopheles gambiae, the primary African vector of malaria parasites, exhibits numerous rhythmic behaviors including flight activity, swarming, mating, host seeking, egg laying, and sugar feeding. However, little work has been performed to elucidate the molecular basis for these daily rhythms. To study how gene expression is regulated globally by diel and circadian mechanisms, we have undertaken a DNA microarray analysis of An. gambiae under light/dark cycle (LD) and constant dark (DD) conditions. Adult mated, non-blood-fed female mosquitoes were collected every 4 h for 48 h, and samples were processed with DNA microarrays. Using a cosine wave-fitting algorithm, we identified 1,293 and 600 rhythmic genes with a period length of 20-28 h in the head and body, respectively, under LD conditions, representing 9.7 and 4.5% of the An. gambiae gene set. A majority of these genes was specific to heads or bodies. Examination of mosquitoes under DD conditions revealed that rhythmic programming of the transcriptome is dependent on an interaction between the endogenous clock and extrinsic regulation by the LD cycle. A subset of genes, including the canonical clock components, was expressed rhythmically under both environmental conditions. A majority of genes had peak expression clustered around the day/night transitions, anticipating dawn and dusk. Genes cover diverse biological processes such as transcription/translation, metabolism, detoxification, olfaction, vision, cuticle regulation, and immunity, and include rate-limiting steps in the pathways. This study highlights the fundamental roles that both the circadian clock and light play in the physiology of this important insect vector and suggests targets for intervention.
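A common way to call a transcript "rhythmic" from such time-course data is cosinor regression, i.e. least-squares fitting of a 24-h cosine; the sketch below shows the idea on synthetic data sampled every 4 h for 48 h, matching the sampling scheme described above. It illustrates the general technique, not the specific cosine wave-fitting algorithm used in the study.

```python
import numpy as np

def cosinor_fit(times_h, expression, period_h=24.0):
    """Least-squares fit of y ~ mesor + A*cos(2*pi*t/T) + B*sin(2*pi*t/T).

    Returns amplitude, acrophase (hours of peak expression) and the fraction of
    variance explained, which can be thresholded to call a transcript rhythmic.
    """
    w = 2.0 * np.pi * np.asarray(times_h) / period_h
    X = np.column_stack([np.ones_like(w), np.cos(w), np.sin(w)])
    coef, *_ = np.linalg.lstsq(X, expression, rcond=None)
    mesor, a, b = coef
    amplitude = np.hypot(a, b)
    acrophase_h = (np.arctan2(b, a) % (2 * np.pi)) * period_h / (2 * np.pi)
    fitted = X @ coef
    r2 = 1.0 - np.sum((expression - fitted) ** 2) / np.sum((expression - expression.mean()) ** 2)
    return amplitude, acrophase_h, r2

# Samples every 4 h for 48 h; synthetic transcript peaking at hour 6.
t = np.arange(0, 48, 4.0)
y = 5.0 + 2.0 * np.cos(2 * np.pi * (t - 6.0) / 24.0) \
    + np.random.default_rng(4).normal(0, 0.3, t.size)
print(cosinor_fit(t, y))   # amplitude ~2, acrophase ~6 h, r2 close to 1
```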
NASA Astrophysics Data System (ADS)
Choi, Myungje; Kim, Jhoon; Lee, Jaehwa; Kim, Mijin; Park, Young-Je; Jeong, Ukkyo; Kim, Woogyung; Hong, Hyunkee; Holben, Brent; Eck, Thomas F.; Song, Chul H.; Lim, Jae-Hyun; Song, Chang-Keun
2016-04-01
The Geostationary Ocean Color Imager (GOCI) onboard the Communication, Ocean, and Meteorological Satellite (COMS) is the first multi-channel ocean color imager in geostationary orbit. Hourly GOCI top-of-atmosphere radiance has been available for the retrieval of aerosol optical properties over East Asia since March 2011. This study presents improvements made to the GOCI Yonsei Aerosol Retrieval (YAER) algorithm together with validation results during the Distributed Regional Aerosol Gridded Observation Networks - Northeast Asia 2012 campaign (DRAGON-NE Asia 2012 campaign). The evaluation during the spring season over East Asia is important because of high aerosol concentrations and diverse types of Asian dust and haze. Optical properties of aerosol are retrieved from the GOCI YAER algorithm including aerosol optical depth (AOD) at 550 nm, fine-mode fraction (FMF) at 550 nm, single-scattering albedo (SSA) at 440 nm, Ångström exponent (AE) between 440 and 860 nm, and aerosol type. The aerosol models are created based on a global analysis of the Aerosol Robotic Network (AERONET) inversion data, and cover a broad range of size distribution and absorptivity, including nonspherical dust properties. The Cox-Munk ocean bidirectional reflectance distribution function (BRDF) model is used over ocean, and an improved minimum reflectance technique is used over land. Because turbid water is persistent over the Yellow Sea, the land algorithm is used for such cases. The aerosol products are evaluated against AERONET observations and MODIS Collection 6 aerosol products retrieved from Dark Target (DT) and Deep Blue (DB) algorithms during the DRAGON-NE Asia 2012 campaign conducted from March to May 2012. Comparison of AOD from GOCI and AERONET resulted in a Pearson correlation coefficient of 0.881 and a linear regression equation with GOCI AOD = 1.083 × AERONET AOD - 0.042. The correlation between GOCI and MODIS AODs is higher over ocean than land. GOCI AOD shows better agreement with MODIS DB than MODIS DT. The other GOCI YAER products (AE, FMF, and SSA) show lower correlation with AERONET than AOD, but still show some skill for qualitative use.
NASA Technical Reports Server (NTRS)
Choi, Myungje; Kim, Jhoon; Lee, Jaehwa; Kim, Mijin; Park, Young-Je; Jeong, Ukkyo; Kim, Woogyung; Hong, Hyunkee; Holben, Brent; Eck, Thomas F.;
2016-01-01
The Geostationary Ocean Color Imager (GOCI) onboard the Communication, Ocean, and Meteorological Satellite (COMS) is the first multi-channel ocean color imager in geostationary orbit. Hourly GOCI top-of-atmosphere radiance has been available for the retrieval of aerosol optical properties over East Asia since March 2011. This study presents improvements made to the GOCI Yonsei Aerosol Retrieval (YAER) algorithm together with validation results during the Distributed Regional Aerosol Gridded Observation Networks - Northeast Asia 2012 campaign (DRAGON-NE Asia 2012 campaign). The evaluation during the spring season over East Asia is important because of high aerosol concentrations and diverse types of Asian dust and haze. Optical properties of aerosol are retrieved from the GOCI YAER algorithm including aerosol optical depth (AOD) at 550 nm, fine-mode fraction (FMF) at 550 nm, single-scattering albedo (SSA) at 440 nm, Angstrom exponent (AE) between 440 and 860 nm, and aerosol type. The aerosol models are created based on a global analysis of the Aerosol Robotic Network (AERONET) inversion data, and cover a broad range of size distribution and absorptivity, including nonspherical dust properties. The Cox-Munk ocean bidirectional reflectance distribution function (BRDF) model is used over ocean, and an improved minimum reflectance technique is used over land. Because turbid water is persistent over the Yellow Sea, the land algorithm is used for such cases. The aerosol products are evaluated against AERONET observations and MODIS Collection 6 aerosol products retrieved from Dark Target (DT) and Deep Blue (DB) algorithms during the DRAGON-NE Asia 2012 campaign conducted from March to May 2012. Comparison of AOD from GOCI and AERONET resulted in a Pearson correlation coefficient of 0.881 and a linear regression equation with GOCI AOD = 1.083 x AERONET AOD - 0.042. The correlation between GOCI and MODIS AODs is higher over ocean than land. GOCI AOD shows better agreement with MODIS DB than MODIS DT. The other GOCI YAER products (AE, FMF, and SSA) show lower correlation with AERONET than AOD, but still show some skill for qualitative use.
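Validation statistics of the kind quoted above (regression slope/intercept and Pearson correlation against AERONET) can be computed from collocated AOD pairs as sketched below; the data are synthetic placeholders, and the +/-(0.05 + 0.15*AOD) envelope is shown only as an example of a commonly used expected-error criterion, not one taken from this paper.

```python
import numpy as np
from scipy.stats import linregress, pearsonr

rng = np.random.default_rng(5)

# Collocated AOD pairs (hypothetical values standing in for matched
# AERONET observations and satellite retrievals at 550 nm).
aeronet_aod = rng.gamma(shape=2.0, scale=0.2, size=300)
satellite_aod = 1.08 * aeronet_aod - 0.04 + rng.normal(0, 0.05, aeronet_aod.size)

fit = linregress(aeronet_aod, satellite_aod)
r, _ = pearsonr(aeronet_aod, satellite_aod)
within_ee = np.abs(satellite_aod - aeronet_aod) <= (0.05 + 0.15 * aeronet_aod)

print(f"satellite AOD = {fit.slope:.3f} x AERONET AOD + {fit.intercept:.3f}")
print(f"Pearson R = {r:.3f}, fraction within +/-(0.05 + 15%) envelope = {within_ee.mean():.2f}")
```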
Strong Tracking Spherical Simplex-Radial Cubature Kalman Filter for Maneuvering Target Tracking.
Liu, Hua; Wu, Wen
2017-03-31
Conventional spherical simplex-radial cubature Kalman filter (SSRCKF) for maneuvering target tracking may decline in accuracy and even diverge when a target makes abrupt state changes. To overcome this problem, a novel algorithm named strong tracking spherical simplex-radial cubature Kalman filter (STSSRCKF) is proposed in this paper. The proposed algorithm uses the spherical simplex-radial (SSR) rule to obtain a higher accuracy than cubature Kalman filter (CKF) algorithm. Meanwhile, by introducing strong tracking filter (STF) into SSRCKF and modifying the predicted states' error covariance with a time-varying fading factor, the gain matrix is adjusted on line so that the robustness of the filter and the capability of dealing with uncertainty factors is improved. In this way, the proposed algorithm has the advantages of both STF's strong robustness and SSRCKF's high accuracy. Finally, a maneuvering target tracking problem with abrupt state changes is used to test the performance of the proposed filter. Simulation results show that the STSSRCKF algorithm can get better estimation accuracy and greater robustness for maneuvering target tracking.
Strong Tracking Spherical Simplex-Radial Cubature Kalman Filter for Maneuvering Target Tracking
Liu, Hua; Wu, Wen
2017-01-01
Conventional spherical simplex-radial cubature Kalman filter (SSRCKF) for maneuvering target tracking may decline in accuracy and even diverge when a target makes abrupt state changes. To overcome this problem, a novel algorithm named strong tracking spherical simplex-radial cubature Kalman filter (STSSRCKF) is proposed in this paper. The proposed algorithm uses the spherical simplex-radial (SSR) rule to obtain a higher accuracy than cubature Kalman filter (CKF) algorithm. Meanwhile, by introducing strong tracking filter (STF) into SSRCKF and modifying the predicted states’ error covariance with a time-varying fading factor, the gain matrix is adjusted on line so that the robustness of the filter and the capability of dealing with uncertainty factors is improved. In this way, the proposed algorithm has the advantages of both STF’s strong robustness and SSRCKF’s high accuracy. Finally, a maneuvering target tracking problem with abrupt state changes is used to test the performance of the proposed filter. Simulation results show that the STSSRCKF algorithm can get better estimation accuracy and greater robustness for maneuvering target tracking. PMID:28362347
Infrared small target detection technology based on OpenCV
NASA Astrophysics Data System (ADS)
Liu, Lei; Huang, Zhijian
2013-05-01
Accurate and fast detection of dim infrared (IR) targets is very important for infrared precision guidance, early warning, video surveillance, etc. In this paper, the basic principles and implementation flow charts of a series of target detection algorithms are described: the traditional two-frame difference method, an improved three-frame difference method, a fusion of background estimation and frame differencing, and background construction with a neighborhood-mean method. Building on this work, an infrared target detection software platform developed with OpenCV and MFC is introduced, into which three kinds of tracking algorithms are integrated. To explain the software clearly, its framework and functions are described. Finally, experiments are performed on real-life IR images; the complete processing chain and results are analyzed, and the detection algorithms are evaluated both subjectively and objectively. The results show that the proposed method has satisfying detection effectiveness and robustness, as well as high efficiency suitable for real-time detection.
Infrared small target detection technology based on OpenCV
NASA Astrophysics Data System (ADS)
Liu, Lei; Huang, Zhijian
2013-09-01
Accurate and fast detection of dim infrared (IR) targets is very important for infrared precision guidance, early warning, video surveillance, etc. In this paper, the basic principles and implementation flow charts of a series of target detection algorithms are described: the traditional two-frame difference method, an improved three-frame difference method, a fusion of background estimation and frame differencing, and background construction with a neighborhood-mean method. Building on this work, an infrared target detection software platform developed with OpenCV and MFC is introduced, into which three kinds of tracking algorithms are integrated. To explain the software clearly, its framework and functions are described. Finally, experiments are performed on real-life IR images; the complete processing chain and results are analyzed, and the detection algorithms are evaluated both subjectively and objectively. The results show that the proposed method has satisfying detection effectiveness and robustness, as well as high efficiency suitable for real-time detection.
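A minimal OpenCV sketch of the three-frame difference method mentioned above follows; the threshold, kernel size, and synthetic frames are illustrative assumptions rather than the platform's actual settings.

```python
import cv2
import numpy as np

def three_frame_difference(prev_frame, curr_frame, next_frame, thresh=25):
    """Classic three-frame difference for small moving-target detection.

    The AND of two successive frame differences suppresses the 'ghost'
    left behind by simple two-frame differencing.
    """
    d1 = cv2.absdiff(curr_frame, prev_frame)
    d2 = cv2.absdiff(next_frame, curr_frame)
    _, b1 = cv2.threshold(d1, thresh, 255, cv2.THRESH_BINARY)
    _, b2 = cv2.threshold(d2, thresh, 255, cv2.THRESH_BINARY)
    motion = cv2.bitwise_and(b1, b2)
    # Morphological opening removes isolated single-pixel noise.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
    return cv2.morphologyEx(motion, cv2.MORPH_OPEN, kernel)

# Synthetic 8-bit IR frames with a small target moving one pixel per frame.
frames = [np.full((120, 160), 30, np.uint8) for _ in range(3)]
for i, f in enumerate(frames):
    cv2.rectangle(f, (60 + i, 40), (63 + i, 43), 200, -1)
mask = three_frame_difference(*frames)
print("moving pixels detected:", int(np.count_nonzero(mask)))
```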
Study of image matching algorithm and sub-pixel fitting algorithm in target tracking
NASA Astrophysics Data System (ADS)
Yang, Ming-dong; Jia, Jianjun; Qiang, Jia; Wang, Jian-yu
2015-03-01
Image correlation matching is a tracking method that searches for the region most similar to a target template based on a correlation measure between two images. Because there is no need to segment the image and the computation is small, image correlation matching is a basic method of target tracking. This paper mainly studies a grey-scale image matching algorithm whose precision is at the sub-pixel level. The matching algorithm used is the sum of absolute differences (SAD), which excels in real-time systems because of its low computational complexity. The SAD method is introduced first, together with the most frequently used sub-pixel fitting algorithms; these fitting algorithms are too complex to be used in real-time systems. However, target tracking often requires high real-time performance, so we put forward a simple fitting algorithm, named the paraboloidal fitting algorithm, that is easily realized in a real-time system. Its result is compared with that of a surface fitting algorithm through image matching simulation; the precision difference between the two algorithms is small, less than 0.01 pixel. To research the influence of target rotation on matching precision, a camera rotation experiment was carried out. The detector used in the camera is a CMOS detector fixed to an arc pendulum table, and pictures were taken with the camera rotated to different angles. A subarea of the original picture was chosen as the template, and the best matching spot was searched for using the image matching algorithm mentioned above. The results show that the matching error grows as the target rotation angle increases, in an approximately linear relation. Finally, the influence of noise on matching precision was researched: Gaussian noise and salt-and-pepper noise were added to the image, the image was processed by mean and median filters, respectively, and then image matching was performed. The results show that when the noise is weak, mean filtering and median filtering achieve good results; but when the density of the salt-and-pepper noise exceeds 0.4, or the variance of the Gaussian noise exceeds 0.0015, the image matching result will be wrong.
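The following sketch shows exhaustive SAD matching followed by one-dimensional parabolic interpolation of the cost surface around the best match, which is the simplest form of the sub-pixel refinement discussed above; it is a generic illustration, not the paper's paraboloidal fitting implementation.

```python
import numpy as np

def sad_match_subpixel(image, template):
    """Exhaustive SAD template matching with parabolic sub-pixel refinement."""
    ih, iw = image.shape
    th, tw = template.shape
    sad = np.empty((ih - th + 1, iw - tw + 1))
    for y in range(sad.shape[0]):
        for x in range(sad.shape[1]):
            sad[y, x] = np.abs(image[y:y + th, x:x + tw] - template).sum()
    y0, x0 = np.unravel_index(np.argmin(sad), sad.shape)

    def parabola_offset(c_minus, c_zero, c_plus):
        # Vertex of the parabola through three neighbouring cost samples.
        denom = c_minus - 2.0 * c_zero + c_plus
        return 0.0 if denom == 0 else 0.5 * (c_minus - c_plus) / denom

    dy = dx = 0.0
    if 0 < y0 < sad.shape[0] - 1:
        dy = parabola_offset(sad[y0 - 1, x0], sad[y0, x0], sad[y0 + 1, x0])
    if 0 < x0 < sad.shape[1] - 1:
        dx = parabola_offset(sad[y0, x0 - 1], sad[y0, x0], sad[y0, x0 + 1])
    return y0 + dy, x0 + dx

rng = np.random.default_rng(6)
img = rng.normal(0, 1, (64, 64))
tpl = img[20:36, 24:40].copy()
print(sad_match_subpixel(img, tpl))   # close to (20.0, 24.0)
```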
Direct dark matter detection with the DarkSide-50 experiment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pagani, Luca
2017-01-01
The existence of dark matter is known because of its gravitational effects, and although its nature remains undisclosed, there is a growing indication that the galactic halo could be permeated by weakly interacting massive particles (WIMPs) with mass of the order of 100 GeV/c^2 and coupling with ordinary matter at or below the weak scale. In this context, DarkSide-50 aims to directly observe WIMP-nucleon collisions in a liquid argon dual-phase time projection chamber located deep underground at the Gran Sasso National Laboratory, in Italy. In this work, a re-analysis of the data that led to the best limit on the WIMP-nucleon cross section with an argon target is performed. The starting point of the new approach is the energy reconstruction of events: a new energy variable is developed in which the anti-correlation between the ionization and scintillation produced by an interaction is taken into account. As a first result, a better energy resolution is achieved. In this new energy framework, access is granted to micro-physics parameters fundamental to argon scintillation, such as recombination and quenching as a function of energy. The improved knowledge of recombination and quenching allows a new model to be developed for distinguishing between events possibly due to WIMPs and backgrounds. In light of the new model, the final result of this work is a more stringent limit on the spin-independent WIMP-nucleon cross section with an argon target. This work was supervised by Marco Pallavicini and was completed in collaboration with members of the DarkSide collaboration.
Henry, Clémence; Bledsoe, Samuel W.; Siekman, Allison; Kollman, Alec; Waters, Brian M.; Feil, Regina; Stitt, Mark; Lagrimini, L. Mark
2014-01-01
Energy resources in plants are managed in continuously changing environments, such as changes occurring during the day/night cycle. Shading is an environmental disruption that decreases photosynthesis, compromises energy status, and impacts on crop productivity. The trehalose pathway plays a central but not well-defined role in maintaining energy balance. Here, we characterized the maize trehalose pathway genes and deciphered the impacts of the diurnal cycle and disruption of the day/night cycle on trehalose pathway gene expression and sugar metabolism. The maize genome encodes 14 trehalose-6-phosphate synthase (TPS) genes, 11 trehalose-6-phosphate phosphatase (TPP) genes, and one trehalase gene. Transcript abundance of most of these genes was impacted by the day/night cycle and extended dark stress, as were sucrose, hexose sugars, starch, and trehalose-6-phosphate (T6P) levels. After extended darkness, T6P levels inversely followed class II TPS and sucrose non-fermenting-related protein kinase 1 (SnRK1) target gene expression. Most significantly, T6P no longer tracked sucrose levels after extended darkness. These results showed: (i) conservation of the trehalose pathway in maize; (ii) that sucrose, hexose, starch, T6P, and TPS/TPP transcripts respond to the diurnal cycle; and (iii) that extended darkness disrupts the correlation between T6P and sucrose/hexose pools and affects SnRK1 target gene expression. A model for the role of the trehalose pathway in sensing of sucrose and energy status in maize seedlings is proposed. PMID:25271261
NASA Astrophysics Data System (ADS)
Shuxin, Li; Zhilong, Zhang; Biao, Li
2018-01-01
Aircraft are an important target category in remote sensing, and detecting them automatically is of great value. As remote imaging technology develops continuously, the resolution of remote sensing images has become very high, providing more detailed information for automatic target detection. Deep learning is the most advanced technology for image target detection and recognition and has brought great performance improvements for everyday scenes. We apply this technology to remote sensing target detection and propose an end-to-end deep network algorithm that learns from remote sensing images to detect targets in new images automatically and robustly. Our experiments show that the algorithm captures the feature information of aircraft targets and achieves better detection performance than older methods.
The DataBridge: A System For Optimizing The Use Of Dark Data From The Long Tail Of Science
NASA Astrophysics Data System (ADS)
Lander, H.; Rajasekar, A.
2015-12-01
The DataBridge is a National Science Foundation funded collaborative project (OCI-1247652, OCI-1247602, OCI-1247663) designed to assist in the discovery of dark data sets from the long tail of science. The DataBridge aims to build queryable communities of datasets using sociometric network analysis. This approach is being tested to evaluate the ability to leverage various forms of metadata to facilitate discovery of new knowledge. Each dataset in the DataBridge has an associated name space used as a first level partitioning. In addition to testing known algorithms for SNA community building, the DataBridge project has built a message-based platform that allows users to provide their own algorithms for each of the stages in the community building process. The stages are: Signature Generation (SG): An SG algorithm creates a metadata signature for a dataset. Signature algorithms might use text metadata provided by the dataset creator or derive metadata. Relevance Algorithm (RA): An RA compares a pair of datasets and produces a similarity value between 0 and 1 for the two datasets. Sociometric Network Analysis (SNA): The SNA will operate on a similarity matrix produced by an RA to partition all of the datasets in the name space into a set of clusters. These clusters represent communities of closely related datasets. The DataBridge also includes a web application that produces a visual representation of the clustering. Future work includes a more complete application that will allow different types of searching of the network of datasets. The DataBridge approach is relevant to geoscience research and informatics. In this presentation we will outline the project, illustrate the deployment of the approach, and discuss other potential applications and next steps for the research, such as applying this approach to models. In addition we will explore the relevance of DataBridge to other geoscience projects such as various EarthCube Building Blocks and DIBBS projects.
Antenna Allocation in MIMO Radar with Widely Separated Antennas for Multi-Target Detection
Gao, Hao; Wang, Jian; Jiang, Chunxiao; Zhang, Xudong
2014-01-01
In this paper, we explore a new resource called multi-target diversity to optimize the performance of multiple input multiple output (MIMO) radar with widely separated antennas for detecting multiple targets. In particular, we allocate antennas of the MIMO radar to probe different targets simultaneously in a flexible manner based on the performance metric of relative entropy. Two antenna allocation schemes are proposed. In the first scheme, each antenna is allocated to illuminate a proper target over the entire illumination time, so that the detection performance of each target is guaranteed. The problem is formulated as a minimum makespan scheduling problem in the combinatorial optimization framework. Antenna allocation is implemented through a branch-and-bound algorithm and an enhanced factor 2 algorithm. In the second scheme, called antenna-time allocation, each antenna is allocated to illuminate different targets with different illumination time. Both antenna allocation and time allocation are optimized based on illumination probabilities. Over a large range of transmitted power, target fluctuations and target numbers, both of the proposed antenna allocation schemes outperform the scheme without antenna allocation. Moreover, the antenna-time allocation scheme achieves a more robust detection performance than branch-and-bound algorithm and the enhanced factor 2 algorithm when the target number changes. PMID:25350505
Antenna allocation in MIMO radar with widely separated antennas for multi-target detection.
Gao, Hao; Wang, Jian; Jiang, Chunxiao; Zhang, Xudong
2014-10-27
In this paper, we explore a new resource called multi-target diversity to optimize the performance of multiple input multiple output (MIMO) radar with widely separated antennas for detecting multiple targets. In particular, we allocate antennas of the MIMO radar to probe different targets simultaneously in a flexible manner based on the performance metric of relative entropy. Two antenna allocation schemes are proposed. In the first scheme, each antenna is allocated to illuminate a proper target over the entire illumination time, so that the detection performance of each target is guaranteed. The problem is formulated as a minimum makespan scheduling problem in the combinatorial optimization framework. Antenna allocation is implemented through a branch-and-bound algorithm and an enhanced factor 2 algorithm. In the second scheme, called antenna-time allocation, each antenna is allocated to illuminate different targets with different illumination time. Both antenna allocation and time allocation are optimized based on illumination probabilities. Over a large range of transmitted power, target fluctuations and target numbers, both of the proposed antenna allocation schemes outperform the scheme without antenna allocation. Moreover, the antenna-time allocation scheme achieves a more robust detection performance than branch-and-bound algorithm and the enhanced factor 2 algorithm when the target number changes.
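The minimum-makespan formulation can be illustrated with the classic longest-processing-time (LPT) greedy heuristic, a factor-2-type approximation in the same spirit as (but not identical to) the enhanced factor 2 algorithm used in the paper. The per-target loads below are hypothetical.

```python
import heapq

def lpt_allocation(target_loads, n_antennas):
    """Longest-processing-time greedy scheduling (a classic factor-2-type heuristic).

    target_loads : illumination 'work' required by each target (arbitrary units).
    n_antennas   : number of antennas (machines) available.
    Returns the per-antenna target lists and the resulting makespan.
    """
    heap = [(0.0, a, []) for a in range(n_antennas)]        # (load, antenna id, targets)
    heapq.heapify(heap)
    # Assign targets in decreasing order of load, always to the least-loaded antenna.
    for tgt, load in sorted(enumerate(target_loads), key=lambda kv: -kv[1]):
        total, a, assigned = heapq.heappop(heap)
        heapq.heappush(heap, (total + load, a, assigned + [tgt]))
    makespan = max(total for total, _, _ in heap)
    return {a: assigned for _, a, assigned in heap}, makespan

loads = [4.0, 3.5, 3.0, 2.5, 2.0, 1.5]    # hypothetical per-target requirements
print(lpt_allocation(loads, n_antennas=3))
```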
NASA Astrophysics Data System (ADS)
Senkerik, Roman; Zelinka, Ivan; Davendra, Donald; Oplatkova, Zuzana
2010-06-01
This research deals with the optimization of the control of chaos by means of evolutionary algorithms. The work explains how to use evolutionary algorithms (EAs) and how to properly define an advanced targeting cost function (CF) that secures very fast and precise stabilization of the desired state for any initial conditions. The one-dimensional logistic equation was used as a model of a deterministic chaotic system. The evolutionary algorithm Self-Organizing Migrating Algorithm (SOMA) was used in four versions; for each version, repeated simulations were conducted to outline the effectiveness and robustness of the method and the targeting CF.
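A possible shape of such a targeting cost function for the logistic map is sketched below: the state is steered toward the unstable fixed point by small parameter perturbations, and the cost accumulates the residual error after a settling phase, so fast and precise stabilization yields a low cost. The feedback law, gain ranges, and the random search (standing in for SOMA) are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def targeting_cost(gains, x0, r=3.8, n_steps=150, settle=50):
    """Illustrative targeting cost function for stabilising the logistic map.

    The target is the unstable fixed point x* = 1 - 1/r of x_{n+1} = r x_n (1 - x_n).
    Near x*, a small feedback perturbation of the control parameter,
    r_n = r + k1*(x_n - x*) + k2*(x_{n-1} - x*), is applied; the cost sums
    |x_n - x*| after a settling phase.
    """
    k1, k2 = gains
    x_star = 1.0 - 1.0 / r
    x_prev = x = x0
    cost = 0.0
    for n in range(n_steps):
        e, e_prev = x - x_star, x_prev - x_star
        dr = k1 * e + k2 * e_prev if abs(e) < 0.2 else 0.0   # act only near the target
        r_n = float(np.clip(r + dr, 3.0, 4.0))
        x_prev, x = x, r_n * x * (1.0 - x)
        if n >= settle:
            cost += abs(x - x_star)
    return cost

# A crude random search stands in here for the evolutionary optimiser (SOMA in the paper).
rng = np.random.default_rng(7)
candidates = rng.uniform(-10, 10, size=(3000, 2))
best_cost, best_gains = min((targeting_cost(g, x0=0.2), tuple(g)) for g in candidates)
print(best_cost, best_gains)
```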
Evolutionary Multiobjective Design Targeting a Field Programmable Transistor Array
NASA Technical Reports Server (NTRS)
Aguirre, Arturo Hernandez; Zebulum, Ricardo S.; Coello, Carlos Coello
2004-01-01
This paper introduces the ISPAES algorithm for circuit design targeting a Field Programmable Transistor Array (FPTA). The use of evolutionary algorithms is common in circuit design problems, where a single fitness function drives the evolution process. Frequently, the design problem is subject to several goals or operating constraints, thus, designing a suitable fitness function catching all requirements becomes an issue. Such a problem is amenable for multi-objective optimization, however, evolutionary algorithms lack an inherent mechanism for constraint handling. This paper introduces ISPAES, an evolutionary optimization algorithm enhanced with a constraint handling technique. Several design problems targeting a FPTA show the potential of our approach.
Aircraft target detection algorithm based on high resolution spaceborne SAR imagery
NASA Astrophysics Data System (ADS)
Zhang, Hui; Hao, Mengxi; Zhang, Cong; Su, Xiaojing
2018-03-01
In this paper, an image classification algorithm for airport areas is proposed, based on the statistical features of synthetic aperture radar (SAR) images and the spatial information of pixels. The algorithm combines a Gamma mixture model with a Markov random field (MRF): the Gamma mixture model provides the initial classification result, which is then optimized with the MRF technique using the spatial correlation between pixels. Additionally, morphological methods are employed to extract the airport region of interest (ROI), within which the suspected aircraft target samples are clarified to reduce false alarms and increase detection performance. Finally, the paper presents plane target detection results, which have been verified by simulation tests.
The research of radar target tracking observed information linear filter method
NASA Astrophysics Data System (ADS)
Chen, Zheng; Zhao, Xuanzhi; Zhang, Wen
2018-05-01
To address the low or even divergent precision caused by the nonlinear observation equation in radar target tracking, a new filtering algorithm is proposed in this paper. In this algorithm, local linearization is carried out on the observed range and angle data separately, and a Kalman filter is then applied to the linearized data. After filtering, a mapping operation provides the posterior estimate of the target state. A large number of simulation results show that this algorithm solves the above problems effectively, and its performance is better than that of traditional filtering algorithms for nonlinear dynamic systems.
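One common concrete form of such local linearization is to convert each (range, bearing) measurement to Cartesian coordinates, propagate the measurement noise through the Jacobian of that conversion, and then run a standard linear Kalman filter on the resulting pseudo-measurements. The sketch below follows that recipe under an assumed constant-velocity motion model and assumed noise levels; it is not necessarily the paper's exact method.

```python
import numpy as np

def polar_to_cartesian_with_cov(r, theta, sigma_r, sigma_theta):
    """Convert a (range, bearing) measurement to Cartesian coordinates and
    propagate the measurement noise through the local linearization (Jacobian)."""
    x, y = r * np.cos(theta), r * np.sin(theta)
    J = np.array([[np.cos(theta), -r * np.sin(theta)],
                  [np.sin(theta),  r * np.cos(theta)]])
    R_polar = np.diag([sigma_r ** 2, sigma_theta ** 2])
    return np.array([x, y]), J @ R_polar @ J.T

def kalman_step(x, P, z, R, F, Q, H):
    """One predict/update cycle of a standard linear Kalman filter."""
    x, P = F @ x, F @ P @ F.T + Q                 # predict
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                # Kalman gain
    x = x + K @ (z - H @ x)                       # update with Cartesian pseudo-measurement
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

dt = 1.0
F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1]], float)
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)
Q = 0.01 * np.eye(4)
x, P = np.array([900.0, 120.0, -10.0, 2.0]), 100.0 * np.eye(4)
z, R = polar_to_cartesian_with_cov(r=910.0, theta=0.135, sigma_r=5.0, sigma_theta=0.002)
x, P = kalman_step(x, P, z, R, F, Q, H)
print(x)
```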
Interacting with target tracking algorithms in a gaze-enhanced motion video analysis system
NASA Astrophysics Data System (ADS)
Hild, Jutta; Krüger, Wolfgang; Heinze, Norbert; Peinsipp-Byma, Elisabeth; Beyerer, Jürgen
2016-05-01
Motion video analysis is a challenging task, particularly if real-time analysis is required. How to provide suitable assistance for the human operator is therefore an important issue. Given that the use of customized video analysis systems is more and more established, one supporting measure is to provide system functions which perform subtasks of the analysis. Recent progress in the development of automated image exploitation algorithms allows, e.g., real-time moving target tracking. Another supporting measure is to provide a user interface which strives to reduce the perceptual, cognitive and motor load of the human operator, for example by incorporating the operator's visual focus of attention. A gaze-enhanced user interface is able to help here. This work extends prior work on automated target recognition, segmentation, and tracking algorithms, as well as on the benefits of a gaze-enhanced user interface for interaction with moving targets. We also propose a prototypical system design aiming to combine both the qualities of the human observer's perception and the automated algorithms in order to improve the overall performance of a real-time video analysis system. In this contribution, we address two novel issues analyzing gaze-based interaction with target tracking algorithms. The first issue extends the gaze-based triggering of a target tracking process, e.g., investigating how to best relaunch in the case of track loss. The second issue addresses the initialization of tracking algorithms without motion segmentation, where the operator has to provide the system with the object's image region in order to start the tracking algorithm.
Do Galactic Potential Wells Depend on Their Largescale Environment
NASA Astrophysics Data System (ADS)
Mo, H. J.; Lahav, O.
1993-04-01
We study the dependence of the intrinsic velocities of galaxies on their large-scale environment, using a cross-correlation technique that provides an objective way of defining the local overdensity of 'trace' galaxies around 'target' galaxies. We use galaxies in optical (CfA and SSRS) and IRAS redshift surveys as tracers of the density field, and about 1000 spiral galaxies with measured circular velocities and elliptical galaxies with measured velocity dispersions as 'targets'. We find that the correlation function tends to increase with circular velocity, the trend being weak except in the case of cD-like elliptical galaxies with the highest velocity dispersions (σ ≳ 300 km s⁻¹), where the effect is strong, possibly due to morphological segregation in clusters of galaxies. A fit to the mean overdensity δ(r < r_p) of the trace galaxies (in spheres of radius r_p) around target galaxies as a function of the circular velocity V_c shows a weak increase of δ with V_c, with slope Δδ(r ≲ 3.6 h⁻¹ Mpc)/ΔV_c ≲ 0.02. The observed weak correlation is contrasted with the strong dependence of the correlation functions of dark haloes on their circular velocities predicted in some (e.g. high-biasing cold dark matter) models for galaxy formation. In particular, our results are inconsistent with the prediction of the 'natural' (high) biasing model at a high significance level. Comparison of our results with those of a simple biasing model suggests that either the observed circular velocities of galaxies are not simply related to the circular velocities of dark haloes, or most dark haloes were formed at high redshifts, or the galaxy distribution does not trace the matter distribution in a simple way.
Contrast, size, and orientation-invariant target detection in infrared imagery
NASA Astrophysics Data System (ADS)
Zhou, Yi-Tong; Crawshaw, Richard D.
1991-08-01
Automatic target detection in IR imagery is a very difficult task due to variations in target brightness, shape, size, and orientation. In this paper, the authors present a contrast-, size-, and orientation-invariant algorithm based on Gabor functions for detecting targets from a single IR image frame. The algorithm consists of three steps. First, it locates potential targets by using low-resolution Gabor functions, which resist noise and background clutter effects. Then it removes false targets and eliminates redundant target points based on a similarity measure. These two steps mimic human vision processing but are different from Zeevi's Foveating Vision System. Finally, it uses both low- and high-resolution Gabor functions to verify target existence. This algorithm has been successfully tested on several IR images that contain multiple examples of military vehicles with different sizes and brightness in various background scenes and orientations.
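The first, coarse-detection step can be sketched as convolving the frame with a small bank of zero-mean Gabor kernels at several orientations and keeping the strongest responses. The false-target removal and the low/high-resolution verification steps are omitted, and the kernel size, wavelength, and threshold below are illustrative choices rather than the paper's parameters.

import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(size=21, wavelength=8.0, sigma=4.0, theta=0.0):
    # Real (even) Gabor kernel; parameter values are illustrative only.
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + yr**2) / (2.0 * sigma**2))
    carrier = np.cos(2.0 * np.pi * xr / wavelength)
    k = envelope * carrier
    return k - k.mean()          # zero mean -> insensitive to local brightness

def candidate_targets(image, thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4),
                      rel_thresh=0.6):
    # Step 1 of the three-step scheme: coarse detection with a low-resolution
    # Gabor bank; false-target removal and verification are not shown here.
    responses = [np.abs(fftconvolve(image, gabor_kernel(theta=t), mode="same"))
                 for t in thetas]
    energy = np.maximum.reduce(responses)
    return np.argwhere(energy > rel_thresh * energy.max())

img = np.zeros((128, 128))
img[60:68, 60:75] = 1.0                 # toy bright "vehicle" blob
print(len(candidate_targets(img)), "candidate pixels")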
A range-based predictive localization algorithm for WSID networks
NASA Astrophysics Data System (ADS)
Liu, Yuan; Chen, Junjie; Li, Gang
2017-11-01
Most studies on localization algorithms are conducted on sensor networks with densely distributed nodes. However, non-localizable problems are prone to occur in networks with sparsely distributed sensor nodes. To solve this problem, a range-based predictive localization algorithm (RPLA) is proposed in this paper for wireless sensor networks syncretized with RFID (WSID networks). A Gaussian mixture model is established to predict the trajectory of a mobile target. Then, the received signal strength indication is used to reduce the residence area of the target location based on the approximate point-in-triangulation test (APIT) algorithm. In addition, collaborative localization schemes are introduced to locate the target in non-localizable situations. Simulation results verify that the RPLA achieves accurate localization for networks with sparsely distributed sensor nodes. The localization accuracy of the RPLA is 48.7% higher than that of the APIT algorithm, 16.8% higher than that of the single Gaussian model-based algorithm and 10.5% higher than that of the Kalman filtering-based algorithm.
Aided target recognition processing of MUDSS sonar data
NASA Astrophysics Data System (ADS)
Lau, Brian; Chao, Tien-Hsin
1998-09-01
The Mobile Underwater Debris Survey System (MUDSS) is a collaborative effort by the Navy and the Jet Propulsion Laboratory to demonstrate multi-sensor, real-time survey of underwater sites for ordnance and explosive waste (OEW). We describe the sonar processing algorithm, a novel target recognition algorithm incorporating wavelets, morphological image processing, expansion by Hermite polynomials, and neural networks. This algorithm has found all planted targets in MUDSS tests and has achieved spectacular success on another Coastal Systems Station (CSS) sonar image database.
HACC: Simulating sky surveys on state-of-the-art supercomputing architectures
NASA Astrophysics Data System (ADS)
Habib, Salman; Pope, Adrian; Finkel, Hal; Frontiere, Nicholas; Heitmann, Katrin; Daniel, David; Fasel, Patricia; Morozov, Vitali; Zagaris, George; Peterka, Tom; Vishwanath, Venkatram; Lukić, Zarija; Sehrish, Saba; Liao, Wei-keng
2016-01-01
Current and future surveys of large-scale cosmic structure are associated with a massive and complex datastream to study, characterize, and ultimately understand the physics behind the two major components of the 'Dark Universe', dark energy and dark matter. In addition, the surveys also probe primordial perturbations and carry out fundamental measurements, such as determining the sum of neutrino masses. Large-scale simulations of structure formation in the Universe play a critical role in the interpretation of the data and extraction of the physics of interest. Just as survey instruments continue to grow in size and complexity, so do the supercomputers that enable these simulations. Here we report on HACC (Hardware/Hybrid Accelerated Cosmology Code), a recently developed and evolving cosmology N-body code framework, designed to run efficiently on diverse computing architectures and to scale to millions of cores and beyond. HACC can run on all current supercomputer architectures and supports a variety of programming models and algorithms. It has been demonstrated at scale on Cell- and GPU-accelerated systems, standard multi-core node clusters, and Blue Gene systems. HACC's design allows for ease of portability, and at the same time, high levels of sustained performance on the fastest supercomputers available. We present a description of the design philosophy of HACC, the underlying algorithms and code structure, and outline implementation details for several specific architectures. We show selected accuracy and performance results from some of the largest high resolution cosmological simulations so far performed, including benchmarks evolving more than 3.6 trillion particles.
HACC: Simulating sky surveys on state-of-the-art supercomputing architectures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Habib, Salman; Pope, Adrian; Finkel, Hal
2016-01-01
Current and future surveys of large-scale cosmic structure are associated with a massive and complex datastream to study, characterize, and ultimately understand the physics behind the two major components of the 'Dark Universe', dark energy and dark matter. In addition, the surveys also probe primordial perturbations and carry out fundamental measurements, such as determining the sum of neutrino masses. Large-scale simulations of structure formation in the Universe play a critical role in the interpretation of the data and extraction of the physics of interest. Just as survey instruments continue to grow in size and complexity, so do the supercomputers that enable these simulations. Here we report on HACC (Hardware/Hybrid Accelerated Cosmology Code), a recently developed and evolving cosmology N-body code framework, designed to run efficiently on diverse computing architectures and to scale to millions of cores and beyond. HACC can run on all current supercomputer architectures and supports a variety of programming models and algorithms. It has been demonstrated at scale on Cell- and GPU-accelerated systems, standard multi-core node clusters, and Blue Gene systems. HACC's design allows for ease of portability, and at the same time, high levels of sustained performance on the fastest supercomputers available. We present a description of the design philosophy of HACC, the underlying algorithms and code structure, and outline implementation details for several specific architectures. We show selected accuracy and performance results from some of the largest high resolution cosmological simulations so far performed, including benchmarks evolving more than 3.6 trillion particles.
Indirect dark matter searches in the dwarf satellite galaxy Ursa Major II with the MAGIC telescopes
NASA Astrophysics Data System (ADS)
Ahnen, M. L.; Ansoldi, S.; Antonelli, L. A.; Arcaro, C.; Baack, D.; Babić, A.; Banerjee, B.; Bangale, P.; Barres de Almeida, U.; Barrio, J. A.; Becerra González, J.; Bednarek, W.; Bernardini, E.; Berse, R. Ch.; Berti, A.; Bhattacharyya, W.; Biland, A.; Blanch, O.; Bonnoli, G.; Carosi, R.; Carosi, A.; Ceribella, G.; Chatterjee, A.; Colak, S. M.; Colin, P.; Colombo, E.; Contreras, J. L.; Cortina, J.; Covino, S.; Cumani, P.; Da Vela, P.; Dazzi, F.; De Angelis, A.; De Lotto, B.; Delfino, M.; Delgado, J.; Di Pierro, F.; Domínguez, A.; Dominis Prester, D.; Dorner, D.; Doro, M.; Einecke, S.; Elsaesser, D.; Fallah Ramazani, V.; Fernández-Barral, A.; Fidalgo, D.; Fonseca, M. V.; Font, L.; Fruck, C.; Galindo, D.; García López, R. J.; Garczarczyk, M.; Gaug, M.; Giammaria, P.; Godinović, N.; Gora, D.; Guberman, D.; Hadasch, D.; Hahn, A.; Hassan, T.; Hayashida, M.; Herrera, J.; Hose, J.; Hrupec, D.; Ishio, K.; Konno, Y.; Kubo, H.; Kushida, J.; Kuveždić, D.; Lelas, D.; Lindfors, E.; Lombardi, S.; Longo, F.; López, M.; Maggio, C.; Majumdar, P.; Makariev, M.; Maneva, G.; Manganaro, M.; Mannheim, K.; Maraschi, L.; Mariotti, M.; Martínez, M.; Masuda, S.; Mazin, D.; Mielke, K.; Minev, M.; Miranda, J. M.; Mirzoyan, R.; Moralejo, A.; Moreno, V.; Moretti, E.; Nagayoshi, T.; Neustroev, V.; Niedzwiecki, A.; Nievas Rosillo, M.; Nigro, C.; Nilsson, K.; Ninci, D.; Nishijima, K.; Noda, K.; Nogués, L.; Paiano, S.; Palacio, J.; Paneque, D.; Paoletti, R.; Paredes, J. M.; Pedaletti, G.; Peresano, M.; Persic, M.; Prada Moroni, P. G.; Prandini, E.; Puljak, I.; Garcia, J. R.; Reichardt, I.; Rhode, W.; Ribó, M.; Rico, J.; Righi, C.; Rugliancich, A.; Saito, T.; Satalecka, K.; Schweizer, T.; Sitarek, J.; Šnidarić, I.; Sobczynska, D.; Stamerra, A.; Strzys, M.; Surić, T.; Takahashi, M.; Takalo, L.; Tavecchio, F.; Temnikov, P.; Terzić, T.; Teshima, M.; Torres-Albà, N.; Treves, A.; Tsujimoto, S.; Vanzo, G.; Vazquez Acosta, M.; Vovk, I.; Ward, J. E.; Will, M.; Zarić, D.
2018-03-01
The dwarf spheroidal galaxy Ursa Major II (UMaII) is believed to be one of the most dark-matter dominated systems among the Milky Way satellites and represents a suitable target for indirect dark matter (DM) searches. The MAGIC telescopes carried out a deep observation campaign on UMaII between 2014 and 2016, collecting almost one hundred hours of good-quality data. This campaign enlarges the pool of DM targets observed at very high energy (E ≳ 50 GeV) in search for signatures of DM annihilation in the wide mass range between ~100 GeV and ~100 TeV. To this end, the data are analyzed with the full likelihood analysis, a method based on the exploitation of the spectral information of the recorded events for an optimal sensitivity to the explored DM models. We obtain constraints on the annihilation cross-section for different channels that are among the most robust and stringent achieved so far at the TeV mass scale from observations of dwarf satellite galaxies.
Dailey, George; Aurand, Lisa; Stewart, John; Ameer, Barbara; Zhou, Rong
2014-03-01
Several titration algorithms can be used to adjust insulin dose and attain blood glucose targets. We compared clinical outcomes using three initiation and titration algorithms for insulin glargine in insulin-naive patients with type 2 diabetes mellitus (T2DM); focusing on those receiving both metformin and sulfonylurea (SU) at baseline. This was a pooled analysis of patient-level data from prospective, randomized, controlled 24-week trials. Patients received algorithm 1 (1 IU increase once daily, if fasting plasma glucose [FPG] > target), algorithm 2 (2 IU increase every 3 days, if FPG > target), or algorithm 3 (treat-to-target, generally 2-8 IU increase weekly based on 2-day mean FPG levels). Glycemic control, insulin dose, and hypoglycemic events were compared between algorithms. Overall, 1380 patients were included. In patients receiving metformin and SU at baseline, there were no significant differences in glycemic control between algorithms. Weight-adjusted dose was higher for algorithm 2 vs algorithms 1 and 3 (P = 0.0037 and P < 0.0001, respectively), though results were not significantly different when adjusted for reductions in HbA1c (0.36 IU/kg, 0.43 IU/kg, and 0.31 IU/kg for algorithms 1, 2, and 3, respectively). Yearly hypoglycemic event rates (confirmed blood glucose <56 mg/dL) were higher for algorithm 3 than algorithms 1 (P = 0.0003) and 2 (P < 0.0001). Three algorithms for initiation and titration of insulin glargine in patients with T2DM resulted in similar levels of glycemic control, with lower rates of hypoglycemia for patients treated using simpler algorithms 1 and 2. © 2013 Ruijin Hospital, Shanghai Jiaotong University School of Medicine and Wiley Publishing Asia Pty Ltd.
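Read as pseudocode, the three titration rules compared in the pooled analysis look roughly like the sketch below. The FPG target value and, for the treat-to-target arm, the 2-8 IU dose bands are placeholders chosen for illustration; the trials' actual titration tables are not given in the abstract.

def algorithm_1(dose, fpg, target=100):
    # +1 IU once daily if fasting plasma glucose exceeds the target (mg/dL)
    return dose + 1 if fpg > target else dose

def algorithm_2(dose, fpg, target=100, day=0):
    # +2 IU every 3 days if FPG exceeds the target
    return dose + 2 if (day % 3 == 0 and fpg > target) else dose

def algorithm_3(dose, mean_fpg_2day, target=100):
    # Treat-to-target: weekly 2-8 IU step from the 2-day mean FPG.
    # The dose bands below are hypothetical placeholders.
    if mean_fpg_2day <= target:
        return dose
    if mean_fpg_2day < 120:
        return dose + 2
    if mean_fpg_2day < 140:
        return dose + 4
    if mean_fpg_2day < 180:
        return dose + 6
    return dose + 8

print(algorithm_3(20, 150))   # example weekly adjustment from 20 IU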
Caputo, Regina; Buckley, Matthew R.; Martin, Pierrick; ...
2016-03-22
The Small Magellanic Cloud (SMC) is the second-largest satellite galaxy of the Milky Way and is only 60 kpc away. As a nearby, massive, and dense object with relatively low astrophysical backgrounds, it is a natural target for dark matter indirect detection searches. In this work, we use six years of Pass 8 data from the Fermi Large Area Telescope to search for gamma-ray signals of dark matter annihilation in the SMC. Using data-driven fits to the gamma-ray backgrounds, and a combination of N-body simulations and direct measurements of rotation curves to estimate the SMC DM density profile, we found that the SMC was well described by standard astrophysical sources, and no signal from dark matter annihilation was detected. We set conservative upper limits on the dark matter annihilation cross section. Furthermore, these constraints are in agreement with stronger constraints set by searches in the Large Magellanic Cloud and approach the canonical thermal relic cross section at dark matter masses lower than 10 GeV in the bb̄ and τ⁺τ⁻ channels.
Thermal dark matter through the Dirac neutrino portal
NASA Astrophysics Data System (ADS)
Batell, Brian; Han, Tao; McKeen, David; Haghi, Barmak Shams Es
2018-04-01
We study a simple model of thermal dark matter annihilating to standard model neutrinos via the neutrino portal. A (pseudo-)Dirac sterile neutrino serves as a mediator between the visible and the dark sectors, while an approximate lepton number symmetry allows for a large neutrino Yukawa coupling and, in turn, efficient dark matter annihilation. The dark sector consists of two particles, a Dirac fermion and complex scalar, charged under a symmetry that ensures the stability of the dark matter. A generic prediction of the model is a sterile neutrino with a large active-sterile mixing angle that decays primarily invisibly. We derive existing constraints and future projections from direct detection experiments, colliders, rare meson and tau decays, electroweak precision tests, and small scale structure observations. Along with these phenomenological tests, we investigate the consequences of perturbativity and scalar mass fine tuning on the model parameter space. A simple, conservative scheme to confront the various tests with the thermal relic target is outlined, and we demonstrate that much of the cosmologically-motivated parameter space is already constrained. We also identify new probes of this scenario such as multibody kaon decays and Drell-Yan production of W bosons at the LHC.
NASA Astrophysics Data System (ADS)
Inc, Mustafa; Aliyu, Aliyu Isa; Yusuf, Abdullahi
2017-05-01
This paper studies the dynamics of solitons of the nonlinear Schrödinger's equation (NLSE) with spatio-temporal dispersion (STD). The integration algorithm employed in this paper is the Riccati-Bernoulli sub-ODE method. This leads to dark and singular soliton solutions that are important in the field of optoelectronics and fiber optics. The soliton solutions appear with all the constraint conditions that are necessary for them to exist. Four types of nonlinear media are studied in this paper: Kerr law, power law, parabolic law and dual law. The conservation laws (Cls) for the Kerr law and parabolic law nonlinear media are constructed using the conservation theorem presented by Ibragimov.
Generation of dark hollow beam via coherent combination based on adaptive optics.
Zheng, Yi; Wang, Xiaohua; Shen, Feng; Li, Xinyang
2010-12-20
A novel method for generating a dark hollow beam (DHB) is proposed and studied both theoretically and experimentally. A coherent combination technique for laser arrays is implemented based on adaptive optics (AO). A beam arraying structure and an active segmented mirror are designed and described. Piston errors are extracted by a zero-order interference detection system with the help of a custom-made photo-detector array. An algorithm called the extremum approach is adopted to calculate feedback control signals. A dynamic piston error is introduced via LiNbO3 to test the capability of the AO servo. In closed loop, a stable and clear DHB is obtained. The experimental results confirm the feasibility of the concept.
Private algorithms for the protected in social network search
Kearns, Michael; Roth, Aaron; Wu, Zhiwei Steven; Yaroslavtsev, Grigory
2016-01-01
Motivated by tensions between data privacy for individual citizens and societal priorities such as counterterrorism and the containment of infectious disease, we introduce a computational model that distinguishes between parties for whom privacy is explicitly protected, and those for whom it is not (the targeted subpopulation). The goal is the development of algorithms that can effectively identify and take action upon members of the targeted subpopulation in a way that minimally compromises the privacy of the protected, while simultaneously limiting the expense of distinguishing members of the two groups via costly mechanisms such as surveillance, background checks, or medical testing. Within this framework, we provide provably privacy-preserving algorithms for targeted search in social networks. These algorithms are natural variants of common graph search methods, and ensure privacy for the protected by the careful injection of noise in the prioritization of potential targets. We validate the utility of our algorithms with extensive computational experiments on two large-scale social network datasets. PMID:26755606
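A toy version of the "noise in the prioritization" idea: a best-first search over the network expands the neighbor whose count of targeted contacts, perturbed by Laplace noise, is largest, so that no single protected individual's edges can noticeably change the search order. This is an illustrative variant under assumed graph and parameter choices, not the authors' exact algorithms or privacy analysis.

import heapq
import random

def noisy_targeted_search(graph, targeted, seeds, budget, epsilon=1.0):
    # Prioritized network search in the spirit of the paper: expand the node
    # whose noise-perturbed count of targeted neighbors is largest.
    def noisy_score(v):
        true_score = sum(1 for u in graph[v] if u in targeted)
        # difference of two exponentials = Laplace noise with scale 1/epsilon
        return true_score + random.expovariate(epsilon) - random.expovariate(epsilon)

    frontier = [(-noisy_score(s), s) for s in seeds]
    heapq.heapify(frontier)
    visited, found = set(seeds), []
    while frontier and len(found) < budget:
        _, v = heapq.heappop(frontier)
        if v in targeted:
            found.append(v)            # "action" on a discovered targeted member
        for u in graph[v]:
            if u not in visited:
                visited.add(u)
                heapq.heappush(frontier, (-noisy_score(u), u))
    return found

# toy example graph given as an adjacency-list dictionary
g = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
print(noisy_targeted_search(g, targeted={3, 4}, seeds=[0], budget=2))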
Private algorithms for the protected in social network search.
Kearns, Michael; Roth, Aaron; Wu, Zhiwei Steven; Yaroslavtsev, Grigory
2016-01-26
Motivated by tensions between data privacy for individual citizens and societal priorities such as counterterrorism and the containment of infectious disease, we introduce a computational model that distinguishes between parties for whom privacy is explicitly protected, and those for whom it is not (the targeted subpopulation). The goal is the development of algorithms that can effectively identify and take action upon members of the targeted subpopulation in a way that minimally compromises the privacy of the protected, while simultaneously limiting the expense of distinguishing members of the two groups via costly mechanisms such as surveillance, background checks, or medical testing. Within this framework, we provide provably privacy-preserving algorithms for targeted search in social networks. These algorithms are natural variants of common graph search methods, and ensure privacy for the protected by the careful injection of noise in the prioritization of potential targets. We validate the utility of our algorithms with extensive computational experiments on two large-scale social network datasets.
Van Nuffel, A; Van De Gucht, T; Saeys, W; Sonck, B; Opsomer, G; Vangeyte, J; Mertens, K C; De Ketelaere, B; Van Weyenberg, S
2016-09-01
To tackle the high prevalence of lameness, techniques to monitor cow locomotion are being developed in order to detect changes in cows' locomotion due to lameness. Obviously, in such lameness detection systems, alerts should only respond to locomotion changes that are related to lameness. However, other environmental or cow factors can contribute to locomotion changes not related to lameness and hence might cause false alerts. In this study the effects of wet surfaces, dark environment, age, production level, lactation and gestation stage on cow locomotion were investigated. Data were collected at the Institute for Agricultural and Fisheries Research research farm (Melle, Belgium) during a 5-month period. The gait variables of 30 non-lame and healthy Holstein cows were automatically measured every day. In dark environments and on wet walking surfaces cows took shorter, more asymmetrical strides with less step overlap. In general, older cows had a more asymmetrical gait and they walked slower with more abduction. Lactation stage or gestation stage also showed significant association with asymmetrical and shorter gait and less step overlap, probably due to the heavy calf in the uterus. Next, two lameness detection algorithms were developed to investigate the added value of environmental and cow data in detection models. One algorithm solely used locomotion variables and a second algorithm used the same locomotion variables and additional environmental and cow data. In the latter algorithm, only age and lactation stage together with the locomotion variables were retained during model building. When comparing the sensitivity for the detection of non-lame cows, sensitivity increased by 10% when the cow data were added to the algorithm (sensitivity was 70% and 80% for the first and second algorithm, respectively). Hence, the number of false alerts for lame cows that were actually non-lame decreased. This pilot study shows that using knowledge on influencing factors of cow locomotion will help in reducing the number of false alerts for lameness detection systems under development. However, further research is necessary in order to better understand these and many other possible influencing factors (e.g. trimming, conformation) of non-lame and hence 'normal' locomotion in cows.
The DAMIC Dark Matter Experiment
DOE Office of Scientific and Technical Information (OSTI.GOV)
de Mello Neto, J. R.T.
The DAMIC (DArk Matter In CCDs) experiment uses high-resistivity, scientific-grade CCDs to search for dark matter. The CCD's low electronic noise allows an unprecedentedly low energy threshold of a few tens of eV; this characteristic makes it possible to detect silicon recoils resulting from interactions of low-mass WIMPs. In addition, the CCD's high spatial resolution and excellent energy response result in very effective background identification techniques. The experiment has a unique sensitivity to dark matter particles with masses below 10 GeV/c². Previous results have motivated the construction of DAMIC100, a 100-gram silicon target detector currently being installed at SNOLAB. The mode of operation and unique imaging capabilities of the CCDs, and how they may be exploited to characterize and suppress backgrounds, are discussed, as well as physics results after one year of data taking.
Losing the Dark: Public Outreach about Light Pollution and Its Mitigation
NASA Astrophysics Data System (ADS)
Collins Petersen, Carolyn; Petersen, Mark C.; Walker, Constance E.; Kardel, W. Scott; International Dark Sky Association Education Committee
2015-01-01
Losing the Dark is a PSA video available for public outreach through fulldome theaters as well as conventional venues (classroom, lecture hall, YouTube, Vimeo). It was created by Loch Ness Productions for the International Dark Sky Association. It explains problems caused by light pollution, which affects astronomy, health, and the environment. Losing the Dark also suggests ways people can implement "wise lighting" practices to help mitigate light pollution. The video is available free of charge for outreach professionals in planetarium facilities (both fulldome and classical), science centers, classrooms, and other outreach venues, and has been translated into 13 languages. It is available via download, USB key (at cost), and through online venues. This paper summarizes the program's outreach to more than a thousand fulldome theaters, nearly 100,000 views via four sites on YouTube and Vimeo, and a number of presentations at other museum and classroom facilities, and shares some preliminary metrics and commentary from users.
NASA Astrophysics Data System (ADS)
Cai, Lei; Wang, Lin; Li, Bo; Zhang, Libao; Lv, Wen
2017-06-01
Vehicle tracking technology is currently one of the most active research topics in machine vision and is an important part of intelligent transportation systems. However, in both theory and technology, it still faces many challenges, including real-time performance and robustness. In video surveillance, targets need to be detected in real time and their positions calculated accurately in order to judge their motion. The contents of video sequence images and the target motion are complex, so the objects cannot be expressed by a unified mathematical model. Object tracking is defined as locating the moving target of interest in each frame of a video. Current tracking technology can achieve reliable results in simple environments for targets with easily identified characteristics. However, in more complex environments, it is easy to lose the target because of the mismatch between the target appearance and its dynamic model. Moreover, the target usually has a complex shape, but traditional target tracking algorithms usually represent the tracking result by a simple geometric shape such as a rectangle or circle, so they cannot provide accurate information for subsequent higher-level applications. This paper combines a traditional object-tracking technique, the Mean-Shift algorithm, with an image segmentation algorithm, the Active-Contour model, to obtain the outlines of objects during tracking and automatically handle topology changes. Meanwhile, the outline information is used to aid the tracking algorithm and improve it.
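The Mean-Shift half of this combination is illustrated below with OpenCV: a hue histogram of the initial target box is back-projected onto each frame and cv2.meanShift moves the window to the local density mode. The video file name and the initial box are placeholders, and the Active-Contour refinement described in the paper is not included in this sketch.

import cv2

# hypothetical input file and initial window; the contour-refinement stage
# (Active-Contour model) of the paper is omitted in this sketch
cap = cv2.VideoCapture("video.mp4")
ok, frame = cap.read()
x, y, w, h = 200, 150, 60, 40                      # assumed initial target box

roi = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
hist = cv2.calcHist([roi], [0], None, [180], [0, 180])   # hue histogram model
cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)

criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1.0)
window = (x, y, w, h)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    backproj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
    _, window = cv2.meanShift(backproj, window, criteria)  # shift to density mode
    x, y, w, h = window
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("tracking", frame)
    if cv2.waitKey(30) & 0xFF == 27:
        break
cap.release()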
High-Contrast Coronagraph Performance in the Presence of DM Actuator Defects
NASA Technical Reports Server (NTRS)
Sidick, Erkin; Shaklan, Stuart; Cady, Eric
2015-01-01
Deformable Mirrors (DMs) are critical elements in high contrast coronagraphs, requiring precision and stability measured in picometers to enable detection of Earth-like exoplanets. Occasionally DM actuators or their associated cables or electronics fail, requiring a wavefront control algorithm to compensate for actuators that may be displaced from their neighbors by hundreds of nanometers. We have carried out experiments on our High-Contrast Imaging Testbed (HCIT) to study the impact of failed actuators in partial fulfillment of the Terrestrial Planet Finder Coronagraph optical model validation milestone. We show that the wavefront control algorithm adapts to several broken actuators and maintains dark-hole contrast in broadband light.
High-contrast coronagraph performance in the presence of DM actuator defects
NASA Astrophysics Data System (ADS)
Sidick, Erkin; Shaklan, Stuart; Cady, Eric
2015-09-01
Deformable Mirrors (DMs) are critical elements in high contrast coronagraphs, requiring precision and stability measured in picometers to enable detection of Earth-like exoplanets. Occasionally DM actuators or their associated cables or electronics fail, requiring a wavefront control algorithm to compensate for actuators that may be displaced from their neighbors by hundreds of nanometers. We have carried out experiments on our High-Contrast Imaging Testbed (HCIT) to study the impact of failed actuators in partial fulfilment of the Terrestrial Planet Finder Coronagraph optical model validation milestone. We show that the wavefront control algorithm adapts to several broken actuators and maintains dark-hole contrast in broadband light.
Bio-inspired algorithms applied to molecular docking simulations.
Heberlé, G; de Azevedo, W F
2011-01-01
Nature as a source of inspiration has been shown to have a great beneficial impact on the development of new computational methodologies. In this scenario, analyses of the interactions between a protein target and a ligand can be simulated by biologically inspired algorithms (BIAs). These algorithms mimic biological systems to create new paradigms for computation, such as neural networks, evolutionary computing, and swarm intelligence. This review provides a description of the main concepts behind BIAs applied to molecular docking simulations. Special attention is devoted to evolutionary algorithms, guided-directed evolutionary algorithms, and Lamarckian genetic algorithms. Recent applications of these methodologies to protein targets identified in the Mycobacterium tuberculosis genome are described.
Any Two Learning Algorithms Are (Almost) Exactly Identical
NASA Technical Reports Server (NTRS)
Wolpert, David H.
2000-01-01
This paper shows that if one is provided with a loss function, it can be used in a natural way to specify a distance measure quantifying the similarity of any two supervised learning algorithms, even non-parametric algorithms. Intuitively, this measure gives the fraction of targets and training sets for which the expected performance of the two algorithms differs significantly. Bounds on the value of this distance are calculated for the case of binary outputs and 0-1 loss, indicating that any two learning algorithms are almost exactly identical for such scenarios. As an example, for any two algorithms A and B, even for small input spaces and training sets, for less than 2e(-50) of all targets will the difference between A's and B's generalization performance exceed 1%. In particular, this is true if B is bagging applied to A, or boosting applied to A. These bounds can be viewed alternatively as telling us, for example, that the simple English phrase 'I expect that algorithm A will generalize from the training set with an accuracy of at least 75% on the rest of the target' conveys 20,000 bytes of information concerning the target. The paper ends by discussing some of the subtleties of extending the distance measure to give a full (non-parametric) differential geometry of the manifold of learning algorithms.
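The quantity being bounded can be made concrete on a toy scale: enumerate every binary target on a tiny input space and every small training set, and measure for what fraction of (target, training set) pairs the off-training-set accuracies of two learners differ by more than 1%. On a four-point input space the fraction is of course not astronomically small; the sketch only illustrates the measure, with the two learners and the set sizes chosen arbitrarily.

from itertools import product, combinations

X = range(4)                                    # tiny input space (assumed)
targets = list(product([0, 1], repeat=len(X)))  # all binary target functions

def majority_learner(train):
    vote = sum(y for _, y in train) >= len(train) / 2
    return lambda x: int(vote)

def constant_learner(train):
    return lambda x: 0

def ots_accuracy(learner, train, target):
    # accuracy measured off the training set, as in the NFL-style setting
    h = learner(train)
    test = [x for x in X if x not in {xi for xi, _ in train}]
    return sum(h(x) == target[x] for x in test) / len(test)

def fraction_differing(a, b, threshold=0.01):
    total = differ = 0
    for target in targets:
        for xs in combinations(X, 2):           # all training sets of size 2
            train = [(x, target[x]) for x in xs]
            total += 1
            if abs(ots_accuracy(a, train, target) -
                   ots_accuracy(b, train, target)) > threshold:
                differ += 1
    return differ / total

print(fraction_differing(majority_learner, constant_learner))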
Trajectory Control of Rendezvous with Maneuver Target Spacecraft
NASA Technical Reports Server (NTRS)
Zhou, Zhinqiang
2012-01-01
In this paper, a nonlinear trajectory control algorithm of rendezvous with maneuvering target spacecraft is presented. The disturbance forces on the chaser and target spacecraft and the thrust forces on the chaser spacecraft are considered in the analysis. The control algorithm developed in this paper uses the relative distance and relative velocity between the target and chaser spacecraft as the inputs. A general formula of reference relative trajectory of the chaser spacecraft to the target spacecraft is developed and applied to four different proximity maneuvers, which are in-track circling, cross-track circling, in-track spiral rendezvous and cross-track spiral rendezvous. The closed-loop differential equations of the proximity relative motion with the control algorithm are derived. It is proven in the paper that the tracking errors between the commanded relative trajectory and the actual relative trajectory are bounded within a constant region determined by the control gains. The prediction of the tracking errors is obtained. Design examples are provided to show the implementation of the control algorithm. The simulation results show that the actual relative trajectory tracks the commanded relative trajectory tightly. The predicted tracking errors match those calculated in the simulation results. The control algorithm developed in this paper can also be applied to interception of maneuver target spacecraft and relative trajectory control of spacecraft formation flying.
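As a much-simplified picture of the tracking idea, the sketch below drives double-integrator relative dynamics toward an in-track circling reference trajectory with a PD law on the relative position and velocity errors. The reference radius and rate, the gains, and the neglect of orbital (Clohessy-Wiltshire) coupling terms are all simplifying assumptions; the paper's closed-loop equations and boundedness proof are not reproduced.

import numpy as np

dt, n_steps = 0.1, 2000
radius, omega = 100.0, 0.005        # assumed circling reference parameters
kp, kd = 0.02, 0.3                  # assumed control gains

def reference(t):
    # reference relative trajectory: circling about the target in-plane
    pos = radius * np.array([np.cos(omega * t), np.sin(omega * t)])
    vel = radius * omega * np.array([-np.sin(omega * t), np.cos(omega * t)])
    return pos, vel

r = np.array([300.0, 0.0])          # initial relative position [m]
v = np.array([0.0, 0.0])            # initial relative velocity [m/s]
for k in range(n_steps):
    ref_r, ref_v = reference(k * dt)
    # PD tracking law on relative distance / velocity errors
    # (double-integrator relative dynamics; orbital terms neglected here)
    u = -kp * (r - ref_r) - kd * (v - ref_v)
    v = v + u * dt
    r = r + v * dt

print("final tracking error [m]:", np.linalg.norm(r - reference(n_steps * dt)[0]))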
New prospects in fixed target searches for dark forces with the SeaQuest experiment at Fermilab
Gardner, S.; Holt, R. J.; Tadepalli, A. S.
2016-06-10
An intense 120 GeV proton beam incident on an extremely long iron target generates enormous numbers of light-mass particles that also decay within that target. If one of these particles decays to a final state with a hidden gauge boson, or if such a particle is produced as a result of the initial collision, then that weakly interacting hidden-sector particle may traverse the remainder of the target and be detected downstream through its possible decay to an e⁺e⁻, μ⁺μ⁻, or π⁺π⁻ final state. These conditions can be realized through an extension of the SeaQuest experiment at Fermilab, and in this initial investigation we consider how it can serve as an ultrasensitive probe of hidden vector gauge forces, both Abelian and non-Abelian. Here a light, weakly coupled hidden sector may well explain the dark matter established through astrophysical observations, and the proposed search can provide tangible evidence for its existence or, alternatively, constrain a "sea" of possibilities.
Searching Dynamic Agents with a Team of Mobile Robots
Juliá, Miguel; Gil, Arturo; Reinoso, Oscar
2012-01-01
This paper presents a new algorithm that allows a team of robots to cooperatively search for a set of moving targets. An estimation of the areas of the environment that are more likely to hold a target agent is obtained using a grid-based Bayesian filter. The robot sensor readings and the maximum speed of the moving targets are used in order to update the grid. This representation is used in a search algorithm that commands the robots to those areas that are more likely to present target agents. This algorithm splits the environment in a tree of connected regions using dynamic programming. This tree is used in order to decide the destination for each robot in a coordinated manner. The algorithm has been successfully tested in known and unknown environments showing the validity of the approach. PMID:23012519
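A minimal single-robot version of the probabilistic core: a grid belief over the target's cell is diffused each step according to the target's maximum speed, cells swept by the robot's sensor without a detection are zeroed, and the robot heads for the most probable cell. The grid size, sensor radius, and greedy goal choice are assumptions of this sketch; the paper's region tree and multi-robot coordination via dynamic programming are not shown.

import numpy as np
from scipy.ndimage import uniform_filter

GRID = (40, 40)
belief = np.full(GRID, 1.0 / (GRID[0] * GRID[1]))   # uniform prior over cells

def predict(belief, max_speed_cells=2):
    # diffuse probability mass by the target's maximum speed per time step
    size = 2 * max_speed_cells + 1
    b = uniform_filter(belief, size=size, mode="constant")
    return b / b.sum()

def update_no_detection(belief, robot_cell, sensor_radius=3):
    # zero out the observed disc when the robot senses nothing there
    yy, xx = np.mgrid[0:GRID[0], 0:GRID[1]]
    observed = (yy - robot_cell[0])**2 + (xx - robot_cell[1])**2 <= sensor_radius**2
    b = belief.copy()
    b[observed] = 0.0
    return b / b.sum()

def next_goal(belief):
    # greedy choice: head for the most probable cell (the full algorithm uses
    # a tree of connected regions to coordinate several robots)
    return np.unravel_index(belief.argmax(), belief.shape)

robot = (0, 0)
for _ in range(10):
    belief = predict(belief)
    belief = update_no_detection(belief, robot)
    robot = next_goal(belief)
print("next search goal:", robot)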
Searching dynamic agents with a team of mobile robots.
Juliá, Miguel; Gil, Arturo; Reinoso, Oscar
2012-01-01
This paper presents a new algorithm that allows a team of robots to cooperatively search for a set of moving targets. An estimation of the areas of the environment that are more likely to hold a target agent is obtained using a grid-based Bayesian filter. The robot sensor readings and the maximum speed of the moving targets are used in order to update the grid. This representation is used in a search algorithm that commands the robots to those areas that are more likely to present target agents. This algorithm splits the environment in a tree of connected regions using dynamic programming. This tree is used in order to decide the destination for each robot in a coordinated manner. The algorithm has been successfully tested in known and unknown environments showing the validity of the approach.
Transitioning from Targeted to Comprehensive Mass Spectrometry Using Genetic Algorithms.
Jaffe, Jacob D; Feeney, Caitlin M; Patel, Jinal; Lu, Xiaodong; Mani, D R
2016-11-01
Targeted proteomic assays are becoming increasingly popular because of their robust quantitative applications enabled by internal standardization, and they can be routinely executed on high performance mass spectrometry instrumentation. However, these assays are typically limited to 100s of analytes per experiment. Considerable time and effort are often expended in obtaining and preparing samples prior to targeted analyses. It would be highly desirable to detect and quantify 1000s of analytes in such samples using comprehensive mass spectrometry techniques (e.g., SWATH and DIA) while retaining a high degree of quantitative rigor for analytes with matched internal standards. Experimentally, it is facile to port a targeted assay to a comprehensive data acquisition technique. However, data analysis challenges arise from this strategy concerning agreement of results from the targeted and comprehensive approaches. Here, we present the use of genetic algorithms to overcome these challenges in order to configure hybrid targeted/comprehensive MS assays. The genetic algorithms are used to select precursor-to-fragment transitions that maximize the agreement in quantification between the targeted and the comprehensive methods. We find that the algorithm we used provided across-the-board improvement in the quantitative agreement between the targeted assay data and the hybrid comprehensive/targeted assay that we developed, as measured by parameters of linear models fitted to the results. We also found that the algorithm could perform at least as well as an independently-trained mass spectrometrist in accomplishing this task. We hope that this approach will be a useful tool in the development of quantitative approaches for comprehensive proteomics techniques.
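The selection step can be caricatured with a tiny genetic algorithm over a bitmask of candidate transitions, scoring each mask by how well the DIA quantity built from the selected fragments agrees with the targeted-assay values across samples. The synthetic data, the correlation-based fitness, and the GA operators below are all stand-ins; the paper's fitness is based on parameters of fitted linear models and its transition sets are real.

import numpy as np

rng = np.random.default_rng(0)
n_samples, n_transitions = 24, 10
targeted = rng.lognormal(mean=2.0, sigma=0.5, size=n_samples)     # reference assay
# synthetic DIA fragment traces: some track the targeted values, some are noise
dia = np.outer(targeted, rng.uniform(0.2, 1.0, n_transitions))
dia[:, 6:] = rng.lognormal(2.0, 0.5, (n_samples, n_transitions - 6))

def fitness(mask):
    # agreement between targeted values and the DIA quantity built from the
    # selected transitions (log-intensity correlation as a simple surrogate)
    if mask.sum() == 0:
        return -1.0
    dia_quant = dia[:, mask.astype(bool)].sum(axis=1)
    return np.corrcoef(np.log(targeted), np.log(dia_quant))[0, 1]

pop = rng.integers(0, 2, size=(40, n_transitions))
for _ in range(100):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)][-20:]                 # truncation selection
    cut = rng.integers(1, n_transitions, size=20)
    children = np.array([np.concatenate([parents[i][:c], parents[-1 - i][c:]])
                         for i, c in enumerate(cut)])       # one-point crossover
    children ^= (rng.random(children.shape) < 0.05)         # bit-flip mutation
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(m) for m in pop])]
print("selected transitions:", np.flatnonzero(best))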
Transitioning from Targeted to Comprehensive Mass Spectrometry Using Genetic Algorithms
NASA Astrophysics Data System (ADS)
Jaffe, Jacob D.; Feeney, Caitlin M.; Patel, Jinal; Lu, Xiaodong; Mani, D. R.
2016-11-01
Targeted proteomic assays are becoming increasingly popular because of their robust quantitative applications enabled by internal standardization, and they can be routinely executed on high performance mass spectrometry instrumentation. However, these assays are typically limited to 100s of analytes per experiment. Considerable time and effort are often expended in obtaining and preparing samples prior to targeted analyses. It would be highly desirable to detect and quantify 1000s of analytes in such samples using comprehensive mass spectrometry techniques (e.g., SWATH and DIA) while retaining a high degree of quantitative rigor for analytes with matched internal standards. Experimentally, it is facile to port a targeted assay to a comprehensive data acquisition technique. However, data analysis challenges arise from this strategy concerning agreement of results from the targeted and comprehensive approaches. Here, we present the use of genetic algorithms to overcome these challenges in order to configure hybrid targeted/comprehensive MS assays. The genetic algorithms are used to select precursor-to-fragment transitions that maximize the agreement in quantification between the targeted and the comprehensive methods. We find that the algorithm we used provided across-the-board improvement in the quantitative agreement between the targeted assay data and the hybrid comprehensive/targeted assay that we developed, as measured by parameters of linear models fitted to the results. We also found that the algorithm could perform at least as well as an independently-trained mass spectrometrist in accomplishing this task. We hope that this approach will be a useful tool in the development of quantitative approaches for comprehensive proteomics techniques.
Detecting Stealth Dark Matter Directly through Electromagnetic Polarizability
Appelquist, T.; Berkowitz, E.; Brower, R. C.; ...
2015-10-23
We calculate the spin-independent scattering cross section for direct detection that results from the electromagnetic polarizability of a composite scalar "stealth baryon" dark matter candidate, arising from a dark SU(4) confining gauge theory, "stealth dark matter." In the nonrelativistic limit, electromagnetic polarizability proceeds through a dimension-7 interaction leading to a very small scattering cross section for dark matter with weak-scale masses. This represents a lower bound on the scattering cross section for composite dark matter theories with electromagnetically charged constituents. We carry out lattice calculations of the polarizability for the lightest "baryon" states in SU(3) and SU(4) gauge theories using the background field method on quenched configurations. We find the polarizabilities of SU(3) and SU(4) to be comparable (within about 50%) normalized to the stealth baryon mass, which is suggestive for extensions to larger SU(N) groups. The resulting scattering cross sections with a xenon target are shown to be possibly detectable in the dark matter mass range of about 200–700 GeV, where the lower bound is from the existing LUX constraint while the upper bound is the coherent neutrino background. Significant uncertainties in the cross section remain due to the more complicated interaction of the polarizability operator with nuclear structure; however, the steep dependence on the dark matter mass, 1/m_B⁶, suggests the observable dark matter mass range is not appreciably modified. We highlight collider searches for the mesons in the theory as well as the indirect astrophysical effects that may also provide excellent probes of stealth dark matter.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Ming Xiong
In this study, we present the current status and prospects of the dark sector physics search program of the SeaQuest/E1067 fixed target dimuon experiment at the Fermilab Main Injector. There has been tremendous excitement and progress in searching for new physics in the dark sector in recent years. The dark sector refers to a collection of currently unknown particles that do not directly couple with the Standard Model (SM) strong and electroweak (EW) interactions but are assumed to carry gravitational force, and thus could be candidates for the missing Dark Matter (DM). Such particles may interact with the SM particles through "portal" interactions. Two of the simple possibilities are being investigated in our initial search: (1) dark photon and (2) dark Higgs. They could be within immediate reach of current or near future experimental search. We show there is a unique opportunity today at Fermilab to directly search for these particles in a highly motivated but uncharted parameter space in high-energy proton–nucleus collisions in the beam-dump mode using the 120 GeV proton beam from the Main Injector. Our current search window covers the mass range 0.2–10 GeV/c², and in the near future, by adding an electromagnetic calorimeter (EMCal) to the spectrometer, we can further explore the lower mass region down to about ~1 MeV/c² through the di-electron channel. If dark photons (and/or dark Higgs) were observed, they would revolutionize our understanding of the fundamental structures and interactions of our universe.
Liu, Ming Xiong
2017-03-14
In this study, we present the current status and prospects of the dark sector physics search program of the SeaQuest/E1067 fixed target dimuon experiment at the Fermilab Main Injector. There has been tremendous excitement and progress in searching for new physics in the dark sector in recent years. The dark sector refers to a collection of currently unknown particles that do not directly couple with the Standard Model (SM) strong and electroweak (EW) interactions but are assumed to carry gravitational force, and thus could be candidates for the missing Dark Matter (DM). Such particles may interact with the SM particles through "portal" interactions. Two of the simple possibilities are being investigated in our initial search: (1) dark photon and (2) dark Higgs. They could be within immediate reach of current or near future experimental search. We show there is a unique opportunity today at Fermilab to directly search for these particles in a highly motivated but uncharted parameter space in high-energy proton–nucleus collisions in the beam-dump mode using the 120 GeV proton beam from the Main Injector. Our current search window covers the mass range 0.2–10 GeV/c², and in the near future, by adding an electromagnetic calorimeter (EMCal) to the spectrometer, we can further explore the lower mass region down to about ~1 MeV/c² through the di-electron channel. If dark photons (and/or dark Higgs) were observed, they would revolutionize our understanding of the fundamental structures and interactions of our universe.
Bias Correction of MODIS AOD using DragonNET to obtain improved estimation of PM2.5
NASA Astrophysics Data System (ADS)
Gross, B.; Malakar, N. K.; Atia, A.; Moshary, F.; Ahmed, S. A.; Oo, M. M.
2014-12-01
MODIS AOD retrievals using the Dark Target algorithm are strongly affected by the underlying surface reflection properties. In particular, the operational algorithms make use of surface parameterizations trained on global datasets and therefore do not properly account for urban surface differences. This parameterization continues to show an underestimation of the surface reflection, which results in a general over-biasing in AOD retrievals. Recent results using the Dragon-Network datasets as well as high resolution retrievals in the NYC area illustrate that this is even more significant in the newest C006 3 km retrievals. In the past, we used AERONET observations at the City College site to obtain bias-corrected AOD, but the homogeneity assumption of using only one site for the region is clearly an issue. On the other hand, DragonNET observations provide ample opportunities to better tune the surface corrections while also providing better statistical validation. In this study we present a neural network method to obtain bias correction of the MODIS AOD using multiple factors including surface reflectivity at 2130 nm, sun-view geometrical factors and land-class information. These corrected AODs are then used together with additional WRF meteorological factors to improve estimates of PM2.5. Efforts to explore the portability to other urban areas will be discussed. In addition, annual surface ratio maps will be developed, illustrating that among the land classes, the urban pixels constitute the largest deviations from the operational model.
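A sketch of the regression step: a small multilayer perceptron maps MODIS AOD plus surface, geometry, and land-class predictors to the collocated AERONET AOD, and the network output is taken as the bias-corrected AOD. The synthetic data, feature list, and network size below are placeholders for the actual collocated DragonNET/MODIS table and are not the study's configuration.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# synthetic stand-in for a collocated MODIS/DragonNET table; in practice the
# columns would be MODIS AOD, 2130 nm surface reflectance, solar/view zenith,
# relative azimuth, and a land-class index, with AERONET AOD as the truth
rng = np.random.default_rng(0)
n = 2000
aeronet_aod = rng.gamma(2.0, 0.15, n)
surf_2130 = rng.uniform(0.05, 0.30, n)
geometry = rng.uniform(0.0, 60.0, (n, 3))
land_class = rng.integers(0, 5, n)
modis_aod = aeronet_aod + 0.8 * surf_2130 + rng.normal(0.0, 0.05, n)  # biased

X = np.column_stack([modis_aod, surf_2130, geometry, land_class])
X_train, X_test, y_train, y_test = train_test_split(X, aeronet_aod,
                                                    test_size=0.3, random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(32, 16),
                                   max_iter=2000, random_state=0))
model.fit(X_train, y_train)
aod_corrected = model.predict(X_test)           # bias-corrected AOD
print("RMSE before:", np.sqrt(np.mean((X_test[:, 0] - y_test) ** 2)))
print("RMSE after: ", np.sqrt(np.mean((aod_corrected - y_test) ** 2)))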
Multitarget mixture reduction algorithm with incorporated target existence recursions
NASA Astrophysics Data System (ADS)
Ristic, Branko; Arulampalam, Sanjeev
2000-07-01
The paper derives a deferred logic data association algorithm based on the mixture reduction approach originally due to Salmond [SPIE vol. 1305, 1990]. The novelty of the proposed algorithm is that it provides recursive formulae for both data association and target existence (confidence) estimation, thus allowing automatic track initiation and termination. The track initiation performance of the proposed filter is investigated by computer simulations. It is observed that at moderately high levels of clutter density the proposed filter initiates tracks more reliably than the corresponding PDA filter. An extension of the proposed filter to the multi-target case is also presented. In addition, the paper compares the track maintenance performance of the MR algorithm with an MHT implementation.
OzDES multifibre spectroscopy for the Dark Energy Survey: First-year operation and results
Yuan, Fang
2015-07-29
The Australian Dark Energy Survey (OzDES) is a five-year, 100-night, spectroscopic survey on the Anglo-Australian Telescope, whose primary aim is to measure redshifts of approximately 2500 Type Ia supernovae host galaxies over the redshift range 0.1 < z < 1.2, and derive reverberation-mapped black hole masses for approximately 500 active galactic nuclei and quasars over 0.3 < z < 4.5. This treasure trove of data forms a major part of the spectroscopic follow-up for the Dark Energy Survey for which we are also targeting cluster galaxies, radio galaxies, strong lenses, and unidentified transients, as well as measuring luminous red galaxies and emission line galaxies to help calibrate photometric redshifts. Here, we present an overview of the OzDES programme and our first-year results. Between 2012 December and 2013 December, we observed over 10 000 objects and measured more than 6 000 redshifts. Our strategy of retargeting faint objects across many observing runs has allowed us to measure redshifts for galaxies as faint as m_r = 25 mag. We outline our target selection and observing strategy, quantify the redshift success rate for different types of targets, and discuss the implications for our main science goals. In conclusion, we highlight a few interesting objects as examples of the fortuitous yet not totally unexpected discoveries that can come from such a large spectroscopic survey.
OzDES multifibre spectroscopy for the Dark Energy Survey: first-year operation and results
NASA Astrophysics Data System (ADS)
Yuan, Fang; Lidman, C.; Davis, T. M.; Childress, M.; Abdalla, F. B.; Banerji, M.; Buckley-Geer, E.; Carnero Rosell, A.; Carollo, D.; Castander, F. J.; D'Andrea, C. B.; Diehl, H. T.; Cunha, C. E.; Foley, R. J.; Frieman, J.; Glazebrook, K.; Gschwend, J.; Hinton, S.; Jouvel, S.; Kessler, R.; Kim, A. G.; King, A. L.; Kuehn, K.; Kuhlmann, S.; Lewis, G. F.; Lin, H.; Martini, P.; McMahon, R. G.; Mould, J.; Nichol, R. C.; Norris, R. P.; O'Neill, C. R.; Ostrovski, F.; Papadopoulos, A.; Parkinson, D.; Reed, S.; Romer, A. K.; Rooney, P. J.; Rozo, E.; Rykoff, E. S.; Sako, M.; Scalzo, R.; Schmidt, B. P.; Scolnic, D.; Seymour, N.; Sharp, R.; Sobreira, F.; Sullivan, M.; Thomas, R. C.; Tucker, D.; Uddin, S. A.; Wechsler, R. H.; Wester, W.; Wilcox, H.; Zhang, B.; Abbott, T.; Allam, S.; Bauer, A. H.; Benoit-Lévy, A.; Bertin, E.; Brooks, D.; Burke, D. L.; Carrasco Kind, M.; Covarrubias, R.; Crocce, M.; da Costa, L. N.; DePoy, D. L.; Desai, S.; Doel, P.; Eifler, T. F.; Evrard, A. E.; Fausti Neto, A.; Flaugher, B.; Fosalba, P.; Gaztanaga, E.; Gerdes, D.; Gruen, D.; Gruendl, R. A.; Honscheid, K.; James, D.; Kuropatkin, N.; Lahav, O.; Li, T. S.; Maia, M. A. G.; Makler, M.; Marshall, J.; Miller, C. J.; Miquel, R.; Ogando, R.; Plazas, A. A.; Roodman, A.; Sanchez, E.; Scarpine, V.; Schubnell, M.; Sevilla-Noarbe, I.; Smith, R. C.; Soares-Santos, M.; Suchyta, E.; Swanson, M. E. C.; Tarle, G.; Thaler, J.; Walker, A. R.
2015-09-01
The Australian Dark Energy Survey (OzDES) is a five-year, 100-night, spectroscopic survey on the Anglo-Australian Telescope, whose primary aim is to measure redshifts of approximately 2500 Type Ia supernovae host galaxies over the redshift range 0.1 < z < 1.2, and derive reverberation-mapped black hole masses for approximately 500 active galactic nuclei and quasars over 0.3 < z < 4.5. This treasure trove of data forms a major part of the spectroscopic follow-up for the Dark Energy Survey for which we are also targeting cluster galaxies, radio galaxies, strong lenses, and unidentified transients, as well as measuring luminous red galaxies and emission line galaxies to help calibrate photometric redshifts. Here, we present an overview of the OzDES programme and our first-year results. Between 2012 December and 2013 December, we observed over 10 000 objects and measured more than 6 000 redshifts. Our strategy of retargeting faint objects across many observing runs has allowed us to measure redshifts for galaxies as faint as mr = 25 mag. We outline our target selection and observing strategy, quantify the redshift success rate for different types of targets, and discuss the implications for our main science goals. Finally, we highlight a few interesting objects as examples of the fortuitous yet not totally unexpected discoveries that can come from such a large spectroscopic survey.
Katz, Ben; Minke, Baruch
2012-01-01
Drosophila photoreceptor cells use the ubiquitous G-protein-mediated phospholipase C (PLC) cascade to achieve ultimate single photon sensitivity. This is manifested in the single photon responses (quantum bumps). In photoreceptor cells, dark activation of Gqα molecules occurs spontaneously and produces unitary dark events (dark bumps). A high rate of spontaneous Gqα activation and dark bump production potentially hampers single photon detection. We found that in wild type flies the in vivo rate of spontaneous Gqα activation is very high. Nevertheless, this high rate is not manifested in a substantially high rate of dark bumps. Therefore, it is unclear how phototransduction suppresses dark bump production, arising from spontaneous Gqα activation, while still maintaining high-fidelity representation of single photons. In this study we show that reduced PLC catalytic activity selectively suppressed production of dark bumps but not light-induced bumps. Manipulations of PLC activity using PLC mutant flies and Ca2+ modulations revealed that a critical level of PLC activity is required to induce bump production. The required minimal level of PLC activity, selectively suppressed random production of single Gqα-activated dark bumps despite a high rate of spontaneous Gqα activation. This minimal PLC activity level is reliably obtained by photon induced synchronized activation of several neighboring Gqα molecules activating several PLC molecules, but not by random activation of single Gqα molecules. We thus demonstrate how a G-protein-mediated transduction system, with PLC as its target, selectively suppresses its intrinsic noise while preserving reliable signaling. PMID:22357856
Yock, Adam D; Kim, Gwe-Ya
2017-09-01
To present the k-means clustering algorithm as a tool to address treatment planning considerations characteristic of stereotactic radiosurgery using a single isocenter for multiple targets. For 30 patients treated with stereotactic radiosurgery for multiple brain metastases, the geometric centroids and radii of each metastasis were determined from the treatment planning system. In-house software used this as well as weighted and unweighted versions of the k-means clustering algorithm to group the targets to be treated with a single isocenter, and to position each isocenter. The algorithm results were evaluated using within-cluster sum of squares as well as a minimum target coverage metric that considered the effect of target size. Both versions of the algorithm were applied to an example patient to demonstrate the prospective determination of the appropriate number and location of isocenters. Both weighted and unweighted versions of the k-means algorithm were applied successfully to determine the number and position of isocenters. Comparing the two, both the within-cluster sum of squares metric and the minimum target coverage metric resulting from the unweighted version were less than those from the weighted version. The average magnitudes of the differences were small (-0.2 cm² and 0.1% for the within-cluster sum of squares and minimum target coverage, respectively) but statistically significant (Wilcoxon signed-rank test, P < 0.01). The differences between the versions of the k-means clustering algorithm represented an advantage of the unweighted version for the within-cluster sum of squares metric, and an advantage of the weighted version for the minimum target coverage metric. While additional treatment planning considerations have a large influence on the final treatment plan quality, both versions of the k-means algorithm provide automatic, consistent, quantitative, and objective solutions to the tasks associated with SRS treatment planning using a single isocenter for multiple targets. © 2017 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
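The grouping task itself is easy to reproduce in outline: cluster the metastasis centroids with k-means, take the cluster centers as candidate isocenter positions, and optionally weight each target. The coordinates, radii, number of isocenters, and the use of target volume as the sample weight below are assumptions of this sketch, not values from the study.

import numpy as np
from sklearn.cluster import KMeans

# hypothetical metastasis geometry: centroids (cm) and radii (cm) from the TPS
centroids = np.array([[2.1, 5.0, 3.2], [2.5, 5.4, 3.0], [7.8, 1.2, 6.5],
                      [8.1, 1.0, 6.9], [4.4, 9.3, 2.2]])
radii = np.array([0.4, 0.8, 0.5, 0.3, 1.0])
n_isocenters = 2                               # assumed number of isocenters

# unweighted version: isocenters placed at plain geometric cluster means
km_unweighted = KMeans(n_clusters=n_isocenters, n_init=10, random_state=0)
labels_u = km_unweighted.fit_predict(centroids)

# weighted version: larger targets pull their isocenter toward them
volumes = 4.0 / 3.0 * np.pi * radii ** 3
km_weighted = KMeans(n_clusters=n_isocenters, n_init=10, random_state=0)
labels_w = km_weighted.fit_predict(centroids, sample_weight=volumes)

print("unweighted WCSS:", km_unweighted.inertia_)
print("weighted isocenter positions:\n", km_weighted.cluster_centers_)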
Lifelong-RL: Lifelong Relaxation Labeling for Separating Entities and Aspects in Opinion Targets.
Shu, Lei; Liu, Bing; Xu, Hu; Kim, Annice
2016-11-01
It is well known that opinions have targets. Extracting such targets is an important problem of opinion mining because without knowing the target of an opinion, the opinion is of limited use. So far many algorithms have been proposed to extract opinion targets. However, an opinion target can be an entity or an aspect (part or attribute) of an entity. An opinion about an entity is an opinion about the entity as a whole, while an opinion about an aspect is just an opinion about that specific attribute or aspect of an entity. Thus, opinion targets should be separated into entities and aspects before use because they represent very different things about opinions. This paper proposes a novel algorithm, called Lifelong-RL, to solve the problem based on lifelong machine learning and relaxation labeling. Extensive experiments show that the proposed algorithm Lifelong-RL outperforms baseline methods markedly.
Zhou, Hong; Zhou, Michael; Li, Daisy; Manthey, Joseph; Lioutikova, Ekaterina; Wang, Hong; Zeng, Xiao
2017-11-17
The beauty and power of the genome editing mechanism, the CRISPR Cas9 endonuclease system, lies in the fact that it is RNA-programmable: Cas9 can be guided to any genomic locus complementary to a 20-nt RNA, the single guide RNA (sgRNA), to cleave double-stranded DNA, allowing the introduction of desired mutations. Unfortunately, it has been reported repeatedly that the sgRNA can also guide Cas9 to off-target sites where the DNA sequence is homologous to the sgRNA. Using the human genome and Streptococcus pyogenes Cas9 (SpCas9) as an example, this article mathematically analyzed the probabilities of off-target homologies of sgRNAs and discovered that for a genome as large as the human genome, potential off-target homologies are inevitable in sgRNA selection. A highly efficient computational algorithm was developed for whole-genome sgRNA design and off-target homology searches. By means of a dynamically constructed sequence-indexed database and a simplified sequence alignment method, this algorithm achieves very high efficiency while guaranteeing the identification of all existing potential off-target homologies. Via this algorithm, 1,876,775 sgRNAs were designed for the 19,153 human mRNA genes and only two sgRNAs were found to be free of off-target homology. By means of the novel and efficient sgRNA homology search algorithm introduced in this article, genome-wide sgRNA design and off-target analysis were conducted, and the results confirmed the mathematical analysis that for an sgRNA sequence it is almost impossible to escape potential off-target homologies. Future innovations in CRISPR Cas9 gene editing technology need to focus on how to eliminate Cas9 off-target activity.
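The sketch below illustrates the general idea of a sequence-indexed off-target search (a seed index plus a simplified mismatch count); it is not the authors' algorithm, and the toy genome, seed length, and mismatch threshold are assumptions.

```python
# Illustrative sketch of a sequence-indexed off-target search: index every
# 20-nt window of the genome by an 8-nt seed and compare candidate sites to
# the sgRNA with a simple mismatch count.
from collections import defaultdict

SEED_LEN = 8          # assumed seed length for the index
GUIDE_LEN = 20

def build_index(genome: str):
    """Map each seed (first SEED_LEN bases of a 20-mer) to its start positions."""
    index = defaultdict(list)
    for i in range(len(genome) - GUIDE_LEN + 1):
        index[genome[i:i + SEED_LEN]].append(i)
    return index

def off_target_hits(sgrna: str, genome: str, index, max_mismatch=3):
    """Return genome positions whose 20-mer differs from sgrna by <= max_mismatch."""
    hits = []
    for pos in index.get(sgrna[:SEED_LEN], []):
        site = genome[pos:pos + GUIDE_LEN]
        mismatches = sum(a != b for a, b in zip(sgrna, site))
        if mismatches <= max_mismatch:
            hits.append((pos, mismatches))
    return hits

genome = "ACGT" * 5000                      # toy genome
index = build_index(genome)
guide = genome[100:120]                     # a guide taken from the toy genome
print(off_target_hits(guide, genome, index)[:5])
```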
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mabhouti, H; Sanli, E; Cebe, M
Purpose: Brain stereotactic radiosurgery involves the use of precisely directed, single-session radiation to create a desired radiobiologic response within the brain target with acceptable minimal effects on surrounding structures or tissues. In this study, a dosimetric comparison of Truebeam 2.0 and Cyberknife M6 treatment plans was made. Methods: For the Truebeam 2.0 machine, treatment planning was done using a 2-full-arc VMAT technique with a 6 FFF beam on the CT scan of the Rando phantom, simulating stereotactic treatment of one brain metastasis. The dose distribution was calculated using the Eclipse treatment planning system with the Acuros XB algorithm. Treatment planning for the same target was also done for the Cyberknife M6 machine with the Multiplan treatment planning system using a Monte Carlo algorithm. Using the same film batch, the net OD to dose calibration curve was obtained on both machines by delivering 0-800 cGy. Films were scanned 48 hours after irradiation using an Epson 1000XL flatbed scanner. Dose distributions were measured using EBT3 film dosimeters. The measured and calculated doses were compared. Results: The dose distribution in the target and 2 cm beyond the target edge were calculated on the TPSs and measured using EBT3 film. For the Cyberknife plans, the gamma analysis passing rates between measured and calculated dose distributions were 99.2% and 96.7% for the target and the peripheral region of the target, respectively. For the Truebeam plans, the gamma analysis passing rates were 99.1% and 95.5% for the target and the peripheral region of the target, respectively. Conclusion: Although the target dose distribution is calculated accurately by both the Acuros XB and Monte Carlo algorithms, the Monte Carlo algorithm predicts the dose distribution around the peripheral region of the target more accurately than the Acuros XB algorithm.
Gaussian mixture models-based ship target recognition algorithm in remote sensing infrared images
NASA Astrophysics Data System (ADS)
Yao, Shoukui; Qin, Xiaojuan
2018-02-01
Since the resolution of remote sensing infrared images is low, the features of ship targets become unstable, and how to recognize ships with such fuzzy features remains an open problem. In this paper, we propose a novel ship target recognition algorithm based on Gaussian mixture models (GMMs). The proposed algorithm has two main steps. In the first step, the Hu moments of the ship target images are calculated and the GMMs are trained on the moment features of the ships. In the second step, the moment feature of each ship image is assigned to the trained GMMs for recognition. Because of the scale, rotation, and translation invariance of Hu moments and the powerful feature-space description ability of GMMs, the GMM-based ship target recognition algorithm can recognize ships reliably. Experimental results on a large simulated image set show that our approach is effective in distinguishing different ship types and obtains a satisfactory ship recognition performance.
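A minimal sketch of the two-step GMM recognition scheme follows, with synthetic seven-dimensional feature vectors standing in for real Hu moments; the class names and feature distributions are illustrative assumptions.

```python
# Minimal sketch of the GMM-based recognition scheme described above, with
# synthetic 7-dimensional "Hu moment" feature vectors standing in for real
# ship images (feature extraction itself is not shown).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
ship_classes = ["cargo", "tanker", "frigate"]          # hypothetical classes

# Train one GMM per ship class on that class's moment features.
models = {}
for k, name in enumerate(ship_classes):
    train_feats = rng.normal(loc=k, scale=0.3, size=(200, 7))
    models[name] = GaussianMixture(n_components=2, random_state=0).fit(train_feats)

def classify(feature_vec):
    """Assign the feature vector to the class whose GMM gives the highest likelihood."""
    scores = {name: gmm.score_samples(feature_vec[None, :])[0]
              for name, gmm in models.items()}
    return max(scores, key=scores.get)

test_feat = rng.normal(loc=1, scale=0.3, size=7)       # should look like "tanker"
print(classify(test_feat))
```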
Effect of supersonic relative motion between baryons and dark matter on collapsed objects
NASA Astrophysics Data System (ADS)
Asaba, Shinsuke; Ichiki, Kiyotomo; Tashiro, Hiroyuki
2016-01-01
Great attention is given to first star formation and the epoch of reionization as main targets of planned large radio interferometers (e.g. the Square Kilometre Array). Recently, it has been claimed that the supersonic relative velocity between baryons and cold dark matter can suppress the abundance of first stars and impact the cosmological reionization process. Therefore, in order to compare observed results with theoretical predictions, it is important to examine the effect of the supersonic relative motion on small-scale structure formation. In this paper, we investigate this effect on nonlinear structure formation in the context of the spherical collapse model in order to understand the fundamental physics in a simple configuration. We show the evolution of the dark matter sphere with the relative velocity both by using N-body simulations and by numerically calculating the equation of motion for the dark matter mass shell. The effects of the relative motion in the spherical collapse model appear as a delay of the collapse time of dark matter halos and a decrease of the baryon mass fraction within the dark matter sphere. Based on these results, we provide a fitting formula for the critical density contrast for collapse including the relative motion effect and calculate the mass function of dark matter halos in the Press-Schechter formalism. As a result, the relative velocity decreases the abundance of dark matter halos whose mass is smaller than 10^8 M⊙/h.
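For reference, a toy Press-Schechter mass function evaluation of the kind mentioned above is sketched below; the power-law sigma(M) and the constants are placeholders, and the paper's relative-velocity-corrected critical density contrast is not included.

```python
# Illustrative Press-Schechter mass function, dn/dM, with a toy power-law
# sigma(M); the paper's fitted, streaming-corrected critical density contrast
# is not reproduced here.
import numpy as np

delta_c = 1.686                    # standard critical overdensity (no streaming correction)
rho_mean = 8.5e10                  # mean matter density, Msun/h per (Mpc/h)^3 (approximate)

def sigma(M, M8=6e14, sigma8=0.81, alpha=0.25):
    """Toy power-law rms fluctuation amplitude as a function of halo mass."""
    return sigma8 * (M / M8) ** (-alpha)

def press_schechter_dndM(M, alpha=0.25):
    s = sigma(M, alpha=alpha)
    dlns_dlnM = -alpha             # |d ln sigma / d ln M| for the power law
    nu = delta_c / s
    return (np.sqrt(2.0 / np.pi) * rho_mean / M**2 * nu
            * abs(dlns_dlnM) * np.exp(-0.5 * nu**2))

masses = np.logspace(7, 10, 4)     # Msun/h, around the suppressed 1e8 scale
for M, dndM in zip(masses, press_schechter_dndM(masses)):
    print(f"M = {M:.1e} Msun/h  ->  dn/dM = {dndM:.3e} (h/Mpc)^3 / (Msun/h)")
```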
Model of Image Artifacts from Dust Particles
NASA Technical Reports Server (NTRS)
Willson, Reg
2008-01-01
A mathematical model of image artifacts produced by dust particles on lenses has been derived. Machine-vision systems often have to work with camera lenses that become dusty during use. Dust particles on the front surface of a lens produce image artifacts that can potentially affect the performance of a machine-vision algorithm. The present model satisfies a need for a means of synthesizing dust image artifacts for testing machine-vision algorithms for robustness (or the lack thereof) in the presence of dust on lenses. A dust particle can absorb light or scatter light out of some pixels, thereby giving rise to a dark dust artifact. It can also scatter light into other pixels, thereby giving rise to a bright dust artifact. For the sake of simplicity, this model deals only with dark dust artifacts. The model effectively represents dark dust artifacts as an attenuation image consisting of an array of diffuse darkened spots centered at image locations corresponding to the locations of dust particles. The dust artifacts are computationally incorporated into a given test image by simply multiplying the brightness value of each pixel by a transmission factor that incorporates the factor of attenuation, by dust particles, of the light incident on that pixel. With respect to computation of the attenuation and transmission factors, the model is based on a first-order geometric (ray)-optics treatment of the shadows cast by dust particles on the image detector. In this model, the light collected by a pixel is deemed to be confined to a pair of cones defined by the location of the pixel's image in object space, the entrance pupil of the lens, and the location of the pixel in the image plane (see Figure 1). For simplicity, it is assumed that the size of a dust particle is somewhat less than the diameter, at the front surface of the lens, of any collection cone containing all or part of that dust particle. Under this assumption, the shape of any individual dust particle artifact is the shape (typically, circular) of the aperture, and the contribution of the particle to the attenuation factor for a given pixel is the fraction of the cross-sectional area of the collection cone occupied by the particle. Assuming that dust particles do not overlap, the net transmission factor for a given pixel is calculated as one minus the sum of attenuation factors contributed by all dust particles affecting that pixel. In a test, the model was used to synthesize attenuation images for random distributions of dust particles on the front surface of a lens at various relative aperture (F-number) settings. As shown in Figure 2, the attenuation images resembled dust artifacts in real test images recorded while the lens was aimed at a white target.
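A simplified synthesis of such an attenuation image, in the spirit of the model but not reproducing its ray-optics geometry, might look as follows; spot amplitude and width are free parameters standing in for the aperture and collection-cone calculation.

```python
# Simplified dark-dust-artifact synthesis: each particle contributes an
# attenuation spot, and the net per-pixel transmission is one minus the summed
# attenuation (particles assumed not to overlap).
import numpy as np

def dust_transmission_map(shape, n_particles=20, max_attenuation=0.6,
                          spot_sigma=6.0, rng=None):
    rng = rng or np.random.default_rng(0)
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    attenuation = np.zeros(shape, dtype=float)
    for _ in range(n_particles):
        cy, cx = rng.uniform(0, shape[0]), rng.uniform(0, shape[1])
        amp = rng.uniform(0.1, max_attenuation)
        # diffuse darkened spot approximating the particle's shadow
        attenuation += amp * np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2)
                                    / (2.0 * spot_sigma ** 2))
    return np.clip(1.0 - attenuation, 0.0, 1.0)   # transmission factor per pixel

clean = np.full((240, 320), 200.0)                 # uniform white-target test image
dusty = clean * dust_transmission_map(clean.shape) # multiply brightness by transmission
print(dusty.min(), dusty.max())
```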
The research on the mean shift algorithm for target tracking
NASA Astrophysics Data System (ADS)
CAO, Honghong
2017-06-01
The traditional mean shift algorithm for target tracking is effective and runs in real time, but it still has some shortcomings. It easily falls into a local optimum during tracking, and its effectiveness is weak when the object moves fast. In addition, the size of the tracking window never changes, so the method fails when the size of the moving object changes. We therefore propose a new method: particle swarm optimization is used to optimize the mean shift algorithm for target tracking, while SIFT (scale-invariant feature transform) and an affine transformation make the size of the tracking window adaptive. Finally, we evaluate the method through comparative experiments. The experimental results indicate that the proposed method can effectively track the object while the tracking window adapts to changes in object size.
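For context, a baseline mean-shift tracker of the kind being improved here can be sketched with OpenCV as below; the PSO re-initialization and SIFT/affine window adaptation proposed in the paper are not implemented, and the synthetic frames are illustrative only.

```python
# Baseline mean-shift tracker sketch (OpenCV), illustrating the classical
# algorithm that the paper improves on.
import cv2
import numpy as np

def make_frame(x):
    """Synthetic 200x200 BGR frame with a green 30x30 square at column x."""
    frame = np.zeros((200, 200, 3), np.uint8)
    frame[85:115, x:x + 30] = (0, 255, 0)
    return frame

# Initialise the tracker from the first frame.
frame0 = make_frame(40)
track_window = (40, 85, 30, 30)                       # (x, y, w, h)
hsv0 = cv2.cvtColor(frame0, cv2.COLOR_BGR2HSV)
roi = hsv0[85:115, 40:70]
hist = cv2.calcHist([roi], [0], None, [180], [0, 180])
cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

# Track the square as it drifts to the right.
for x in range(45, 120, 5):
    hsv = cv2.cvtColor(make_frame(x), cv2.COLOR_BGR2HSV)
    back_proj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
    _, track_window = cv2.meanShift(back_proj, track_window, term_crit)
    print("estimated window:", track_window)
```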
Metabolomic Responses of Arabidopsis Suspension Cells to Bicarbonate under Light and Dark Conditions
Misra, Biswapriya B.; Yin, Zepeng; Geng, Sisi; de Armas, Evaldo; Chen, Sixue
2016-01-01
The global CO2 level, presently recorded at 400 ppm, is expected to reach 550 ppm by 2050, an increment likely to impact plant growth and productivity. Using targeted LC-MS and GC-MS platforms, we quantified 229 and 29 metabolites, respectively, in a time-course study to reveal short-term responses to different concentrations (1, 3, and 10 mM) of bicarbonate (HCO3−) under light and dark conditions. Results indicate that HCO3− treatment-responsive metabolomic changes depend on the HCO3− concentration, time of treatment, and light/dark conditions. Interestingly, the 3 mM HCO3− treatment induced more significantly changed metabolites than either the lower or higher concentrations used. Flavonoid biosynthesis and glutathione metabolism were common to both light- and dark-mediated responses, in addition to showing concentration-dependent changes. Our metabolomics results provide insights into short-term plant cellular responses to elevated HCO3− concentrations as a result of ambient increases in CO2 under light and dark conditions. PMID:27762345
SPE analysis of high efficiency PMTs for the DEAP-3600 dark matter detector
NASA Astrophysics Data System (ADS)
Olsen, Kevin; Hallin, Aksel; DEAP/CLEAN Collaboration
2011-09-01
The Dark matter Experiment using Argon Pulse-shape discrimination is a collaborative effort to develop a next-generation, tonne-scale dark matter detector at SNOLAB. The detector will feature a single-phase liquid argon (LAr) target surrounded by an array of 266 photomultiplier tubes (PMTs). A new high-efficiency Hamamatsu R877-100 PMT has been delivered to the University of Alberta for evaluation by the DEAP collaboration. The increase in efficiency could lead to a much greater light yield, but other experiments have reported a slower rise time [1],[2]. We have placed the PMT in a small dark box and had a base and preamplifier designed to be used with either an oscilloscope or a multi-channel analyzer. With this setup we have demonstrated the PMT's ability to distinguish single photo-electrons (SPE) and characterized the PMT by measuring the SPE pulse height spectrum, the peak-to-valley ratio, the dark pulse rate, the baseline, time resolution and SPE efficiency for varying the high voltage supplied to the PMT.
Status and Prospects for Indirect Dark Matter Searches with the Fermi Large Area Telescope
NASA Astrophysics Data System (ADS)
Charles, Eric; Fermi-LAT Collaboration
2014-01-01
During the first five years of operation of the Fermi Large Area Telescope (LAT) the LAT collaboration has performed numerous searches for signatures of Dark Matter interactions in both gamma-ray and cosmic-ray data. These searches feature many different target types, including dwarf spheroidal galaxies, galaxy clusters, the Milky Way halo and inner Galaxy and unassociated LAT sources. They make use of a variety of techniques, and have been performed in both the spatial and spectral domains, as well as via less conventional strategies such as examining the potential Dark Matter contribution to both large scale and small scale anisotropies. To date no clear gamma-ray or cosmic-ray signal from dark matter annihilation or decay has been observed, and the deepest current limits for annihilation exclude many Dark Matter particle models with the canonical thermal relic cross section and masses up to 30 GeV. In this contribution we will briefly review the status of each of the searches by the LAT collaboration. We will also discuss the limiting factors for the various search strategies and examine the prospects for the future.
Optimization of Self-Directed Target Coverage in Wireless Multimedia Sensor Network
Yang, Yang; Wang, Yufei; Pi, Dechang; Wang, Ruchuan
2014-01-01
Video and image sensors in wireless multimedia sensor networks (WMSNs) have a directed view and a limited sensing angle, so methods that solve the target coverage problem for traditional sensor networks, which assume a circular sensing model, are not suitable for WMSNs. Based on the proposed FoV (field of view) sensing model and FoV disk model, how well a multimedia sensor is expected to cover a target is defined by the deflection angle between the target and the sensor's current orientation and by the distance between the target and the sensor. Target coverage optimization algorithms based on this expected coverage value are then presented for the single-sensor single-target, multisensor single-target, and single-sensor multitarget problems, respectively. For the multisensor multitarget problem, which is NP-complete, selecting for each sensor the candidate orientation that covers every target falling in that sensor's FoV disk and applying a genetic algorithm yields an approximately minimum subset of sensors that covers all the targets in the network. Simulation results show the algorithm's performance and the effect of the number of targets on the resulting subset. PMID:25136667
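A hedged illustration of an expected coverage value for a directional sensor is sketched below; the exact weighting of deflection angle and distance used in the paper is not reproduced, and the linear decay terms are assumptions.

```python
# Illustrative "expected coverage value" for a directional multimedia sensor:
# coverage decays linearly with the deflection angle from the sensor's
# orientation and with distance, and is zero outside the FoV disk.
import numpy as np

def expected_coverage(sensor_xy, orientation_rad, target_xy,
                      fov_half_angle=np.deg2rad(30), sensing_range=50.0):
    d = np.asarray(target_xy, float) - np.asarray(sensor_xy, float)
    dist = np.hypot(*d)
    if dist > sensing_range:
        return 0.0
    bearing = np.arctan2(d[1], d[0])
    deflection = abs((bearing - orientation_rad + np.pi) % (2 * np.pi) - np.pi)
    if deflection > fov_half_angle:
        return 0.0
    # larger when the target is close and well centred in the field of view
    return (1.0 - deflection / fov_half_angle) * (1.0 - dist / sensing_range)

# Single-sensor single-target case: pick the candidate orientation with the
# highest expected coverage value.
candidates = np.deg2rad(np.arange(0, 360, 15))
target = (20.0, 10.0)
best = max(candidates, key=lambda a: expected_coverage((0, 0), a, target))
print("best orientation (deg):", np.degrees(best))
```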
On the Evolution of Dark Matter Halo Properties Following Major and Minor Mergers
NASA Astrophysics Data System (ADS)
Wu, Peter; Zhang, Shawn; Lee, Christoph; Primack, Joel
2018-01-01
We conducted an analysis of dark matter halo properties following major and minor mergers to advance our understanding of halo evolution. In this work, we analyzed ~80,000 dark matter halos from the Bolshoi-Planck cosmological simulation and studied halo evolution during relaxation after major mergers. We then applied a Gaussian filter to the property evolution and characterized peak distributions, frequencies, and variabilities for several halo properties, including centering, spin, shape (prolateness), scale radius, and virial ratio. However, there were also halos that experienced relaxation without the presence of major mergers. We hypothesized that this was due to minor mergers unrecorded by the simulation analysis. By using property peaks to create a novel merger detection algorithm, we attempted to find minor mergers and match them to the unaccounted-for relaxed halos. Not only did we find evidence that minor mergers were the cause, but we also found similarities between major and minor merger effects, showing the significance of minor mergers for future studies. Through our dark matter merger statistics, we expect our work to ultimately provide vital parameters toward a better understanding of galaxy formation and evolution. Most of this work was carried out by high school students working under the auspices of the Science Internship Program (SIP) at UC Santa Cruz.
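A simple sketch of the smoothing-and-peak-finding step described above is given below, using SciPy; the property time series, filter width, and prominence threshold are illustrative, not the study's values.

```python
# Smooth a halo property time series with a Gaussian filter and locate peaks
# that may flag (minor) merger events; the thresholds here are illustrative.
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.signal import find_peaks

rng = np.random.default_rng(2)
scale_factor = np.linspace(0.1, 1.0, 180)
spin = 0.035 + 0.01 * np.exp(-((scale_factor - 0.55) / 0.03) ** 2) \
       + rng.normal(0, 0.003, scale_factor.size)     # noisy spin history with one merger bump

smoothed = gaussian_filter1d(spin, sigma=3)
peaks, props = find_peaks(smoothed, prominence=0.005)
print("candidate merger epochs (scale factor):", scale_factor[peaks].round(3))
```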
Henry, Clémence; Bledsoe, Samuel W; Siekman, Allison; Kollman, Alec; Waters, Brian M; Feil, Regina; Stitt, Mark; Lagrimini, L Mark
2014-11-01
Energy resources in plants are managed in continuously changing environments, such as the changes occurring during the day/night cycle. Shading is an environmental disruption that decreases photosynthesis, compromises energy status, and impacts crop productivity. The trehalose pathway plays a central but not well-defined role in maintaining energy balance. Here, we characterized the maize trehalose pathway genes and deciphered the impacts of the diurnal cycle and disruption of the day/night cycle on trehalose pathway gene expression and sugar metabolism. The maize genome encodes 14 trehalose-6-phosphate synthase (TPS) genes, 11 trehalose-6-phosphate phosphatase (TPP) genes, and one trehalase gene. Transcript abundance of most of these genes was impacted by the day/night cycle and extended dark stress, as were sucrose, hexose sugar, starch, and trehalose-6-phosphate (T6P) levels. After extended darkness, T6P levels inversely followed class II TPS and sucrose non-fermenting-related protein kinase 1 (SnRK1) target gene expression. Most significantly, T6P no longer tracked sucrose levels after extended darkness. These results showed: (i) conservation of the trehalose pathway in maize; (ii) that sucrose, hexose, starch, T6P, and TPS/TPP transcripts respond to the diurnal cycle; and (iii) that extended darkness disrupts the correlation between T6P and sucrose/hexose pools and affects SnRK1 target gene expression. A model for the role of the trehalose pathway in sensing of sucrose and energy status in maize seedlings is proposed. © The Author 2014. Published by Oxford University Press on behalf of the Society for Experimental Biology.
Near-Infrared Photon-Counting Camera for High-Sensitivity Observations
NASA Technical Reports Server (NTRS)
Jurkovic, Michael
2012-01-01
The dark current of a transferred-electron photocathode with an InGaAs absorber, responsive over the 0.9-to-1.7-micron range, must be reduced to an ultralow level suitable for low-signal spectral astrophysical measurements by lowering the temperature of the sensor incorporating the cathode. However, photocathode quantum efficiency (QE) is known to drop to zero at such low temperatures. Moreover, it has not been demonstrated that the target dark current can be reached at any temperature using existing photocathodes. Changes in the transferred-electron photocathode epistructure (with an InGaAs absorber lattice-matched to InP and exhibiting responsivity over the 0.9-to-1.7-micron range) and fabrication processes were developed and implemented that resulted in a demonstrated >13x reduction in dark current at -40 °C while retaining >95% of the approximately 25% saturated room-temperature QE. Further testing at lower temperature is needed to confirm a >25 °C predicted reduction in the cooling required to achieve an ultralow dark-current target suitable for faint spectral astronomical observations that are not otherwise possible. This reduction in dark current makes it possible to increase the integration time of the imaging sensor, thus enabling a much higher near-infrared (NIR) sensitivity than is possible with current technology. As a result, extremely faint phenomena and NIR signals emitted from distant celestial objects can now be observed and imaged (such as the dynamics of redshifting galaxies, and spectral measurements on extra-solar planets in search of water and bio-markers) that were not previously possible. In addition, the enhanced NIR sensitivity also directly benefits other NIR imaging applications, including drug and bomb detection, stand-off detection of improvised explosive devices (IEDs), Raman spectroscopy and microscopy for life/physical science applications, and semiconductor product defect detection.
Ramakrishna, Ramnarain; Sarkar, Dipayan; Manduri, Avani; Iyer, Shreyas Ganesan; Shetty, Kalidas
2017-10-01
Sprouts of cereal grains, such as barley (Hordeum vulgare L.), are a good source of beneficial phenolic bioactives. Such health-relevant phenolic bioactives of cereal sprouts can be targeted to manage chronic hyperglycemia and oxidative stress commonly associated with type 2 diabetes (T2D). Therefore, improving phenolic bioactives by stimulating plant endogenous defense responses, such as the protective pentose phosphate pathway (PPP), during sprouting has significant merit. Based on this metabolic rationale, this study aimed to enhance phenolic bioactives and associated antioxidant and anti-hyperglycemic functions in dark-germinated barley sprouts using exogenous elicitor treatments. Dark-germinated sprouts of two malting barley cultivars (Pinnacle and Celebration), treated with chitosan oligosaccharide (COS) and marine protein hydrolysate (GP), were evaluated. Total soluble phenolic content (TSP), phenolic acid profiles, total antioxidant activity (TA), and in vitro inhibitory activities of the hyperglycemia-relevant α-amylase and α-glucosidase enzymes of the dark-germinated barley sprouts were evaluated at days 2, 4, and 6 post elicitor treatment. Overall, TSP content, TA, and α-amylase inhibitory activity of dark-germinated barley sprouts decreased, while α-glucosidase inhibitory activity and gallic acid content increased from day 2 to day 6. Among the barley cultivars, high phenolic antioxidant-linked anti-hyperglycemic bioactives were observed in Celebration. Furthermore, GP and COS seed elicitor treatments at selective doses improved T2D-relevant phenolic-linked anti-hyperglycemic bioactives of barley sprouts at day 6. Therefore, such a seed elicitation approach can be strategically used to develop bioactive-enriched functional food ingredients from cereal sprouts targeting chronic hyperglycemia and oxidative stress linked to T2D.
McMahon, Ryan; Papiez, Lech; Rangaraj, Dharanipathy
2007-08-01
An algorithm is presented that allows for the control of multileaf collimation (MLC) leaves based entirely on real-time calculations of the intensity delivered over the target. The algorithm is capable of efficiently correcting generalized delivery errors without requiring the interruption of delivery (self-correcting trajectories), where a generalized delivery error represents anything that causes a discrepancy between the delivered and intended intensity profiles. The intensity actually delivered over the target is continually compared to its intended value. For each pair of leaves, these comparisons are used to guide the control of the following leaf and keep this discrepancy below a user-specified value. To demonstrate the basic principles of the algorithm, results of corrected delivery are shown for a leading leaf positional error during dynamic-MLC (DMLC) IMRT delivery over a rigid moving target. It is then shown that, with slight modifications, the algorithm can be used to track moving targets in real time. The primary results of this article indicate that the algorithm is capable of accurately delivering DMLC IMRT over a rigid moving target whose motion is (1) completely unknown prior to delivery and (2) not faster than the maximum MLC leaf velocity over extended periods of time. These capabilities are demonstrated for clinically derived intensity profiles and actual tumor motion data, including situations when the target moves in some instances faster than the maximum admissible MLC leaf velocity. The results show that using the algorithm while calculating the delivered intensity every 50 ms will provide a good level of accuracy when delivering IMRT over a rigid moving target translating along the direction of MLC leaf travel. When the maximum velocities of the MLC leaves and target were 4 and 4.2 cm/s, respectively, the resulting error in the two intensity profiles used was 0.1 +/- 3.1% and -0.5 +/- 2.8% relative to the maximum of the intensity profiles. For the same target motion, the error was shown to increase rapidly as (1) the maximum MLC leaf velocity was reduced below 75% of the maximum target velocity and (2) the system response time was increased.
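A toy one-dimensional illustration of this feedback idea is sketched below; it is not the authors' algorithm, and the intensity profile, leaf speeds, and perturbation are assumptions chosen only to show the self-correcting behaviour.

```python
# Toy 1D illustration of feedback-controlled sliding-window delivery: the
# following (trailing) leaf is advanced only once the intensity accumulated at
# its current position reaches the intended value, so a perturbed leading-leaf
# trajectory still approximately yields the planned profile.
import numpy as np

dx, dt = 0.1, 0.05                      # spatial bin (cm), control step (s)
x = np.arange(0.0, 10.0, dx)
intended = 1.0 + 0.5 * np.sin(2 * np.pi * x / 10.0)   # intended beam-on time per bin (s)
delivered = np.zeros_like(x)

lead, trail = 0.0, 0.0                  # leading / following leaf positions (cm)
t, step = 0.0, 0
while trail < 10.0 and step < 20000:
    # leading leaf follows a perturbed trajectory (a generalized delivery error)
    lead = min(10.0, lead + (1.0 + 0.2 * np.sin(0.5 * t)) * dt)
    # unit-rate intensity accumulates over the open aperture between the leaves
    delivered[(x >= trail) & (x < lead)] += dt
    # feedback: advance the following leaf only once the bin at its position
    # has received its intended intensity
    i = min(int(round(trail / dx)), x.size - 1)
    if delivered[i] >= intended[i]:
        trail = min(lead, trail + dx)   # dx/dt = 2 cm/s, below a typical leaf speed limit
    t += dt
    step += 1

print("max |delivered - intended| =", float(np.abs(delivered - intended).max()))
```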
How to understand the results of studies of glutamine supplementation.
Wernerman, Jan
2015-11-03
The lack of understanding of the mechanisms behind possible beneficial and possible harmful effects of glutamine supplementation makes the design of interventional studies of glutamine supplementations difficult, perhaps even hazardous. What is the interventional target, and how might it relate to outcomes? Taking one step further and aggregating results from interventional studies into meta-analyses does not diminish the difficulties. Therefore, conducting basic research seems to be a better idea than groping in the dark and exposing patients to potential harm in this darkness.
Implementation of a sensor guided flight algorithm for target tracking by small UAS
NASA Astrophysics Data System (ADS)
Collins, Gaemus E.; Stankevitz, Chris; Liese, Jeffrey
2011-06-01
Small fixed-wing UAS (SUAS) such as Raven and Unicorn have limited power, speed, and maneuverability. Their missions can be dramatically hindered by environmental conditions (wind, terrain), obstructions (buildings, trees) blocking clear line of sight to a target, and/or sensor hardware limitations (fixed stare, limited gimbal motion, lack of zoom). Toyon's Sensor Guided Flight (SGF) algorithm was designed to account for SUAS hardware shortcomings and enable long-term tracking of maneuvering targets by maintaining persistent eyes-on-target. SGF was successfully tested in simulation with high-fidelity UAS, sensor, and environment models, but real-world flight testing with 60 Unicorn UAS revealed surprising second-order challenges that were not highlighted by the simulations. This paper describes the SGF algorithm, our first-round simulation results, our second-order discoveries from flight testing, and subsequent improvements that were made to the algorithm.
Small target detection using objectness and saliency
NASA Astrophysics Data System (ADS)
Zhang, Naiwen; Xiao, Yang; Fang, Zhiwen; Yang, Jian; Wang, Li; Li, Tao
2017-10-01
We are motivated by the need for a generic object detection algorithm that achieves high recall for small targets in complex scenes with acceptable computational efficiency. We propose a novel object detection algorithm, which has high localization quality with acceptable computational cost. Firstly, we obtain the objectness map as in BING [1] and use NMS to get the top N points. Then, the k-means algorithm is used to cluster them into K classes according to their location, and we set the center points of the K classes as seed points. For each seed point, an object potential region is extracted. Finally, a fast salient object detection algorithm [2] is applied to the object potential regions to highlight object-like pixels, and a series of efficient post-processing operations are proposed to locate the targets. Our method runs at 5 FPS on 1000×1000 images, and significantly outperforms previous methods on small targets in cluttered backgrounds.
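A sketch of the seed-point stage (top-N objectness maxima clustered by k-means) is shown below; a maximum-filter non-maximum suppression on a synthetic score map stands in for BING's objectness output.

```python
# Seed-point stage: take the top-N local maxima of an objectness score map
# (simple maximum-filter NMS) and cluster them with k-means to obtain K seed
# points for the object-potential regions.
import numpy as np
from scipy.ndimage import maximum_filter
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
score = rng.random((200, 200))                    # stand-in objectness map
score[40:60, 40:60] += 2.0                        # two "objecty" regions
score[140:160, 120:140] += 2.0

local_max = (score == maximum_filter(score, size=9))
ys, xs = np.nonzero(local_max)
order = np.argsort(score[ys, xs])[::-1][:10]      # keep the top N = 10 points
points = np.column_stack([xs[order], ys[order]]).astype(float)

K = 2
km = KMeans(n_clusters=K, n_init=10, random_state=0).fit(points)
print("seed points (x, y):\n", km.cluster_centers_.round(1))
```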
A chest-shape target automatic detection method based on Deformable Part Models
NASA Astrophysics Data System (ADS)
Zhang, Mo; Jin, Weiqi; Li, Li
2016-10-01
Automatic weapon platforms are an important research direction both domestically and overseas; they need to accomplish fast search for the object to be engaged against complex backgrounds, so fast detection of a given target is the foundation for further tasks. Considering that the chest-shape target is a common target in shooting practice, this paper takes the chest-shape target as the object of interest and studies an automatic target detection method based on Deformable Part Models. The algorithm computes Histogram of Oriented Gradients (HOG) features of the target and trains a model using a latent-variable Support Vector Machine (SVM); in this model, the target image is divided into several parts so that we obtain a root filter and part filters. Finally, the algorithm detects the target in the HOG feature pyramid with a sliding-window method. The running time of extracting the HOG pyramid can be shortened by 36% with a lookup table. The results indicate that this algorithm can detect the chest-shape target in natural environments indoors or outdoors. The true positive rate of detection reaches 76% with many hard samples, and the false positive rate approaches 0. Running on a PC (Intel(R) Core(TM) i5-4200H CPU) in C++, the detection time for images with a resolution of 640 × 480 is 2.093 s. Given TI's runtime libraries for image pyramids and convolution on the DM642 and other hardware, our detection algorithm is expected to be implementable on a hardware platform, and it has application prospects in actual systems.
NASA Astrophysics Data System (ADS)
Lin, Ye; Zhang, Haijiang; Jia, Xiaofeng
2018-03-01
For microseismic monitoring of hydraulic fracturing, microseismic migration can be used to image the fracture network with scattered microseismic waves. Compared with conventional microseismic location-based fracture characterization methods, microseismic migration can better constrain the stimulated reservoir volume regardless of the completeness of detected and located microseismic sources. However, the imaging results from microseismic migration may suffer from the contamination of other structures and thus the target fracture zones may not be illuminated properly. To solve this issue, in this study we propose a target-oriented staining algorithm for microseismic reverse-time migration. In the staining algorithm, the target area is first stained by constructing an imaginary velocity field and then a synchronized source wavefield only concerning the target structure is produced. As a result, a synchronized image from imaging with the synchronized source wavefield mainly contains the target structures. Synthetic tests based on a downhole microseismic monitoring system show that the target-oriented microseismic reverse-time migration method improves the illumination of target areas.
The small low SNR target tracking using sparse representation information
NASA Astrophysics Data System (ADS)
Yin, Lifan; Zhang, Yiqun; Wang, Shuo; Sun, Chenggang
2017-11-01
Tracking small targets, such as missile warheads, from a remote distance is a difficult task since the targets are "points" that resemble the sensor's noise points. As a result, traditional tracking algorithms only use the information contained in the point measurement, such as position and intensity, as characteristics to identify targets among noise points. In fact, as a result of photon diffusion, a small target is not a point in the focal plane array; it occupies an area larger than one sensor cell. So, if we take this geometric characteristic into account as a new dimension of information, it helps to distinguish targets from noise points. In this paper, we use a method named sparse representation (SR) to describe the geometric information of the target intensity and define it as the SR information of the target. Modeling the intensity spread and solving its SR coefficients, the SR information is represented by establishing its likelihood function. Further, the SR information likelihood is incorporated into the conventional Probability Hypothesis Density (PHD) filter algorithm with point measurements. To illustrate the different performance of the algorithm with and without the SR information, the detection capability and estimation error have been compared through simulation. Results demonstrate that the proposed method has higher estimation accuracy and probability of detecting the target than the conventional algorithm without the SR information.
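The sparse-representation step can be sketched as below with orthogonal matching pursuit over a dictionary of point-spread-function atoms; the PSF model, patch size, and sparsity level are assumptions, and the likelihood construction and PHD filter are not shown.

```python
# Sparse representation of a target's intensity spread over a small pixel
# patch as a combination of Gaussian point-spread atoms (orthogonal matching
# pursuit).
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

P = 5                                               # patch is P x P pixels
yy, xx = np.mgrid[0:P, 0:P]

def psf_atom(cy, cx, sigma=0.8):
    a = np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma ** 2))
    return (a / np.linalg.norm(a)).ravel()

# Dictionary of PSF atoms centred on each pixel of the patch.
D = np.column_stack([psf_atom(cy, cx) for cy in range(P) for cx in range(P)])

# A dim target slightly off-centre plus sensor noise.
rng = np.random.default_rng(4)
patch = 0.6 * psf_atom(2.3, 1.8) + rng.normal(0, 0.05, P * P)

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=3, fit_intercept=False).fit(D, patch)
coeffs = omp.coef_
residual = np.linalg.norm(patch - D @ coeffs)
print("active atoms:", np.flatnonzero(coeffs), "residual:", round(float(residual), 3))
```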
NASA Technical Reports Server (NTRS)
Gillis, J. J.; Jolliff, B. L.; Elphic, R. C.; Maurice, S.; Feldman, W. C.; Lawrence, D. J.
2001-01-01
We present a new algorithm for extracting TiO2 concentrations from Clementine UVVIS data, which accounts for soil darkness and UV/VIS ratio. The accuracy of these TiO2 estimates are examined with Lunar Prospector thermal/epithermal neutron flux data. Additional information is contained in the original extended abstract.
Nagata, Koichi; Pethel, Timothy D
2017-07-01
Although anisotropic analytical algorithm (AAA) and Acuros XB (AXB) are both radiation dose calculation algorithms that take into account the heterogeneity within the radiation field, Acuros XB is inherently more accurate. The purpose of this retrospective method comparison study was to compare them and evaluate the dose discrepancy within the planning target volume (PTV). Radiation therapy (RT) plans of 11 dogs with intranasal tumors treated by radiation therapy at the University of Georgia were evaluated. All dogs were planned for intensity-modulated radiation therapy using nine coplanar X-ray beams that were equally spaced, then dose calculated with anisotropic analytical algorithm. The same plan with the same monitor units was then recalculated using Acuros XB for comparisons. Each dog's planning target volume was separated into air, bone, and tissue and evaluated. The mean dose to the planning target volume estimated by Acuros XB was 1.3% lower. It was 1.4% higher for air, 3.7% lower for bone, and 0.9% lower for tissue. The volume of planning target volume covered by the prescribed dose decreased by 21% when Acuros XB was used due to increased dose heterogeneity within the planning target volume. Anisotropic analytical algorithm relatively underestimates the dose heterogeneity and relatively overestimates the dose to the bone and tissue within the planning target volume for the radiation therapy planning of canine intranasal tumors. This can be clinically significant especially if the tumor cells are present within the bone, because it may result in relative underdosing of the tumor. © 2017 American College of Veterinary Radiology.
A Hybrid Search Algorithm for Swarm Robots Searching in an Unknown Environment
Li, Shoutao; Li, Lina; Lee, Gordon; Zhang, Hao
2014-01-01
This paper proposes a novel method to improve the efficiency of a swarm of robots searching in an unknown environment. The approach focuses on the process of feeding and individual coordination characteristics inspired by the foraging behavior in nature. A predatory strategy was used for searching; hence, this hybrid approach integrated a random search technique with a dynamic particle swarm optimization (DPSO) search algorithm. If a search robot could not find any target information, it used a random search algorithm for a global search. If the robot found any target information in a region, the DPSO search algorithm was used for a local search. This particle swarm optimization search algorithm is dynamic as all the parameters in the algorithm are refreshed synchronously through a communication mechanism until the robots find the target position, after which, the robots fall back to a random searching mode. Thus, in this searching strategy, the robots alternated between two searching algorithms until the whole area was covered. During the searching process, the robots used a local communication mechanism to share map information and DPSO parameters to reduce the communication burden and overcome hardware limitations. If the search area is very large, search efficiency may be greatly reduced if only one robot searches an entire region given the limited resources available and time constraints. In this research we divided the entire search area into several subregions, selected a target utility function to determine which subregion should be initially searched and thereby reduced the residence time of the target to improve search efficiency. PMID:25386855
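A compact sketch of the hybrid random-search/DPSO strategy is given below; the signal model, detection threshold, and PSO parameters are illustrative assumptions, and the subregion division and local communication mechanism are not modelled.

```python
# Hybrid search sketch: each robot does a random walk until any target signal
# is sensed, then the group switches to a PSO update around the shared best
# sensing position.
import numpy as np

rng = np.random.default_rng(5)
TARGET = np.array([70.0, 30.0])
signal = lambda p: np.exp(-np.linalg.norm(p - TARGET) / 20.0)   # sensed target intensity

n_robots = 6
pos = rng.uniform(0, 100, (n_robots, 2))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([signal(p) for p in pos])
w, c1, c2, detect_thresh = 0.7, 1.5, 1.5, 0.2

for step in range(200):
    gbest = pbest[pbest_val.argmax()]
    for i in range(n_robots):
        if pbest_val.max() < detect_thresh:
            # no target information anywhere: random (global) search
            vel[i] = rng.uniform(-3, 3, 2)
        else:
            # target information found: PSO-style (local) search
            r1, r2 = rng.random(2)
            vel[i] = (w * vel[i] + c1 * r1 * (pbest[i] - pos[i])
                      + c2 * r2 * (gbest - pos[i]))
        pos[i] = np.clip(pos[i] + vel[i], 0, 100)
        s = signal(pos[i])
        if s > pbest_val[i]:
            pbest[i], pbest_val[i] = pos[i].copy(), s

print("best position found:", pbest[pbest_val.argmax()].round(1), "target:", TARGET)
```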
UGS video target detection and discrimination
NASA Astrophysics Data System (ADS)
Roberts, G. Marlon; Fitzgerald, James; McCormack, Michael; Steadman, Robert; Vitale, Joseph D.
2007-04-01
This project focuses on developing electro-optic algorithms which rank images by their likelihood of containing vehicles and people. These algorithms have been applied to images obtained from Textron's Terrain Commander 2 (TC2) Unattended Ground Sensor system. The TC2 is a multi-sensor surveillance system used in military applications. It combines infrared, acoustic, seismic, magnetic, and electro-optic sensors to detect nearby targets. When targets are detected by the seismic and acoustic sensors, the system is triggered and images are taken in the visible and infrared spectrum. The original Terrain Commander system occasionally captured and transmitted an excessive number of images, sometimes triggered by undesirable targets such as swaying trees. This wasted communications bandwidth, increased power consumption, and resulted in a large amount of end-user time being spent evaluating unimportant images. The algorithms discussed here help alleviate these problems. These algorithms are currently optimized for infra-red images, which give the best visibility in a wide range of environments, but could be adapted to visible imagery as well. It is important that the algorithms be robust, with minimal dependency on user input. They should be effective when tracking varying numbers of targets of different sizes and orientations, despite the low resolutions of the images used. Most importantly, the algorithms must be appropriate for implementation on a low-power processor in real time. This would enable us to maintain frame rates of 2 Hz for effective surveillance operations. Throughout our project we have implemented several algorithms, and used an appropriate methodology to quantitatively compare their performance. They are discussed in this paper.
She, Ji; Wang, Fei; Zhou, Jianjiang
2016-01-01
Radar networks are proven to have numerous advantages over traditional monostatic and bistatic radar. With recent developments, radar networks have become an attractive platform due to their low probability of intercept (LPI) performance for target tracking. In this paper, a joint sensor selection and power allocation algorithm for multiple-target tracking in a radar network based on LPI is proposed. It is found that this algorithm can minimize the total transmitted power of a radar network on the basis of a predetermined mutual information (MI) threshold between the target impulse response and the reflected signal. The MI is required by the radar network system to estimate target parameters, and it can be calculated predictively with the estimation of target state. The optimization problem of sensor selection and power allocation, which contains two variables, is non-convex and it can be solved by separating power allocation problem from sensor selection problem. To be specific, the optimization problem of power allocation can be solved by using the bisection method for each sensor selection scheme. Also, the optimization problem of sensor selection can be solved by a lower complexity algorithm based on the allocated powers. According to the simulation results, it can be found that the proposed algorithm can effectively reduce the total transmitted power of a radar network, which can be conducive to improving LPI performance. PMID:28009819
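The power-allocation step can be illustrated with a simple bisection as sketched below; a Gaussian-channel-style mutual information model is assumed here in place of the paper's target-state-based MI prediction.

```python
# Bisection for the power-allocation step: find the smallest transmit power
# whose predicted mutual information meets the threshold.
import numpy as np

def mutual_information(power, target_gain, noise_power=1.0, bandwidth_time=1.0):
    """Simplified MI (bits) between target impulse response and echo."""
    return bandwidth_time * np.log2(1.0 + power * target_gain / noise_power)

def min_power_bisection(mi_threshold, target_gain, p_max=1e4, tol=1e-6):
    if mutual_information(p_max, target_gain) < mi_threshold:
        return None                          # even full power cannot meet the threshold
    lo, hi = 0.0, p_max
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mutual_information(mid, target_gain) >= mi_threshold:
            hi = mid
        else:
            lo = mid
    return hi

for gain in (0.05, 0.2, 1.0):                # weaker echoes need more power
    print(f"gain={gain}: P_min = {min_power_bisection(3.0, gain):.2f}")
```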
Image-algebraic design of multispectral target recognition algorithms
NASA Astrophysics Data System (ADS)
Schmalz, Mark S.; Ritter, Gerhard X.
1994-06-01
In this paper, we discuss methods for multispectral ATR (Automated Target Recognition) of small targets that are sensed under suboptimal conditions, such as haze, smoke, and low light levels. In particular, we discuss our ongoing development of algorithms and software that effect intelligent object recognition by selecting ATR filter parameters according to ambient conditions. Our algorithms are expressed in terms of IA (image algebra), a concise, rigorous notation that unifies linear and nonlinear mathematics in the image processing domain. IA has been implemented on a variety of parallel computers, with preprocessors available for the Ada and FORTRAN languages. An image algebra C++ class library has recently been made available. Thus, our algorithms are both feasible implementationally and portable to numerous machines. Analyses emphasize the aspects of image algebra that aid the design of multispectral vision algorithms, such as parameterized templates that facilitate the flexible specification of ATR filters.
McMahon, Ryan; Berbeco, Ross; Nishioka, Seiko; Ishikawa, Masayori; Papiez, Lech
2008-09-01
An MLC control algorithm for delivering intensity-modulated radiation therapy (IMRT) to targets that are undergoing two-dimensional (2D) rigid motion in the beam's eye view (BEV) is presented. The goal of this method is to deliver 3D-derived fluence maps over a moving patient anatomy. Target motion measured prior to delivery is first used to design a set of planned dynamic-MLC (DMLC) sliding-window leaf trajectories. During actual delivery, the algorithm relies on real-time feedback to compensate for target motion that does not agree with the motion measured during planning. The methodology is based on an existing one-dimensional (1D) algorithm that uses on-the-fly intensity calculations to appropriately adjust the DMLC leaf trajectories in real-time during exposure delivery [McMahon et al., Med. Phys. 34, 3211-3223 (2007)]. To extend the 1D algorithm's application to 2D target motion, a real-time leaf-pair shifting mechanism has been developed. Target motion that is orthogonal to leaf travel is tracked by appropriately shifting the positions of all MLC leaves. The performance of the tracking algorithm was tested for a single beam of a fractionated IMRT treatment, using a clinically derived intensity profile and a 2D target trajectory based on measured patient data. Comparisons were made between 2D tracking, 1D tracking, and no tracking. The impact of the tracking lag time and the frequency of real-time imaging were investigated. A study of the dependence of the algorithm's performance on the level of agreement between the motion measured during planning and delivery was also included. Results demonstrated that tracking both components of the 2D motion (i.e., parallel and orthogonal to leaf travel) results in delivered fluence profiles that are superior to those that track the component of motion that is parallel to leaf travel alone. Tracking lag time effects may lead to relatively large intensity delivery errors compared to the other sources of error investigated. However, the algorithm presented is robust in the sense that it does not rely on a high level of agreement between the target motion measured during treatment planning and delivery.
NASA Astrophysics Data System (ADS)
Antuña-Marrero, Juan Carlos; Cachorro Revilla, Victoria; García Parrado, Frank; de Frutos Baraja, Ángel; Rodríguez Vega, Albeth; Mateos, David; Estevan Arredondo, René; Toledano, Carlos
2018-04-01
In the present study, we report the first comparison between the aerosol optical depth (AOD) and Ångström exponent (AE) of the Moderate Resolution Imaging Spectroradiometer (MODIS) instruments on the Terra (AODt) and Aqua (AODa) satellites and those measured using a sun photometer (AODSP) at Camagüey, Cuba, for the period 2008 to 2014. The comparison of Terra and Aqua data includes AOD derived with both deep blue (DB) and dark target (DT) algorithms from MODIS Collection 6. Combined Terra and Aqua (AODta) data were also considered. Assuming an interval of ±30 min around the overpass time and an area of 25 km around the sun photometer site, two coincidence criteria were considered: individual pairs of observations, and both spatial and temporal mean values, which we call collocated daily means. The usual statistics (root mean square error, RMSE; mean absolute error, MAE; median bias, BIAS), together with linear regression analysis, are used for this comparison. Results show very similar values for both coincidence criteria; the DT algorithm generally displays better statistics and higher homogeneity than the DB algorithm in the behaviour of AODt, AODa and AODta compared to AODSP. For collocated daily means, (a) RMSEs of 0.060 and 0.062 were obtained for Terra and Aqua with the DT algorithm and 0.084 and 0.065 for the DB algorithm; (b) MAE follows the same patterns; (c) BIAS for both Terra and Aqua presents positive and negative values, but its absolute values are lower for the DT algorithm; (d) combined AODta data also give lower values of these three statistical indicators for the DT algorithm; (e) both algorithms present good correlations when comparing AODt, AODa and AODta vs. AODSP, with a slight overestimation of satellite data compared to AODSP; and (f) the DT algorithm yields better figures, with slopes of 0.96 (Terra), 0.96 (Aqua) and 0.96 (Terra + Aqua), compared to the DB algorithm (1.07, 0.90, 0.99), which displays greater variability. Multi-annual monthly means of AODta establish a first climatology that is more comparable to that given by the sun photometer, and their statistical evaluation reveals better agreement with AODSP for the DT algorithm. The AE comparison showed results similar to those reported in the literature concerning the two algorithms' retrieval capacity. A comparison between broadband aerosol optical depth (BAOD), derived from broadband pyrheliometer observations at the Camagüey site and three other meteorological stations in Cuba, and AOD observations from MODIS on board Terra and Aqua shows a poor correlation, with slopes below 0.4 for both algorithms. Aqua (Terra) showed RMSE values of 0.073 (0.080) and 0.088 (0.087) for the DB and DT algorithms. As expected, these RMSE values are higher than those from the MODIS-sun photometer comparison, but within the same order of magnitude. Results for the BAOD derived from solar radiation measurements demonstrate its reliability in describing climatological AOD series estimates.
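A minimal sketch of the collocated-daily-mean statistics used in such comparisons is shown below with pandas; the column names and toy data are assumptions, not the study's files.

```python
# Collocated-daily-mean comparison sketch: average satellite and sun-photometer
# AOD per day, then compute RMSE, MAE, median bias and a linear fit.
import numpy as np
import pandas as pd

rng = np.random.default_rng(6)
n = 300
aod_sp = rng.gamma(2.0, 0.08, n)                       # sun-photometer AOD
df = pd.DataFrame({
    "date": pd.date_range("2008-01-01", periods=n, freq="7D"),
    "aod_sunphot": aod_sp,
    "aod_modis_dt": 0.96 * aod_sp + rng.normal(0.01, 0.05, n),   # DT-like retrieval
})

daily = df.groupby(df["date"].dt.date).mean(numeric_only=True)   # collocated daily means
diff = daily["aod_modis_dt"] - daily["aod_sunphot"]

rmse = float(np.sqrt((diff ** 2).mean()))
mae = float(diff.abs().mean())
bias = float(diff.median())                            # median bias, as in the study
slope, intercept = np.polyfit(daily["aod_sunphot"], daily["aod_modis_dt"], 1)
print(f"RMSE={rmse:.3f}  MAE={mae:.3f}  BIAS={bias:.3f}  slope={slope:.2f}")
```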
Synthetic aperture radar target detection, feature extraction, and image formation techniques
NASA Technical Reports Server (NTRS)
Li, Jian
1994-01-01
This report presents new algorithms for target detection, feature extraction, and image formation with the synthetic aperture radar (SAR) technology. For target detection, we consider target detection with SAR and coherent subtraction. We also study how the image false alarm rates are related to the target template false alarm rates when target templates are used for target detection. For feature extraction from SAR images, we present a computationally efficient eigenstructure-based 2D-MODE algorithm for two-dimensional frequency estimation. For SAR image formation, we present a robust parametric data model for estimating high resolution range signatures of radar targets and for forming high resolution SAR images.
Algorithm research on infrared imaging target extraction based on GAC model
NASA Astrophysics Data System (ADS)
Li, Yingchun; Fan, Youchen; Wang, Yanqing
2016-10-01
Good target detection and tracking techniques are important for increasing infrared target detection range and enhancing resolution capacity. For the target detection problem in infrared imaging, the basic principles of the level set method and the GAC model are first analyzed in detail. Secondly, a "convergent force" is added to address the defect that the GAC model stagnates outside deep concave regions and cannot reach deep concave edges, yielding an improved GAC model. Lastly, a self-adaptive detection method combining the Sobel operator and the GAC model is put forward, exploiting the fact that the main position of the target can be detected with the Sobel operator while the continuous edge of the target can be obtained through the GAC model. In order to verify the effectiveness of the model, two groups of experiments are carried out on images with different levels of noise, and a comparative analysis is conducted with the LBF and LIF models. The experimental results show that under slight noise the target can be locked well by the LIF and LBF algorithms, with a segmentation accuracy above 0.8. However, under strong noise the GAC, LIF and LBF algorithms cannot distinguish the target from the noise, so many non-target parts are extracted during the iterative process and the segmentation accuracy falls below 0.8. The algorithm proposed in this paper extracts the accurate target position, with a segmentation accuracy above 0.8.
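An illustrative geodesic active contour segmentation with a balloon force is sketched below using scikit-image; the balloon term is only loosely analogous to the paper's added "convergent force", and the synthetic cross-shaped image stands in for an infrared frame.

```python
# Geodesic active contour with a balloon force (scikit-image's morphological
# GAC); not the authors' formulation.
import numpy as np
from skimage.segmentation import (morphological_geodesic_active_contour,
                                  inverse_gaussian_gradient)

# Synthetic "infrared" frame: a warm cross-shaped target on a noisy background.
rng = np.random.default_rng(7)
img = rng.normal(0.1, 0.05, (128, 128))
img[54:74, 30:98] += 0.8            # horizontal bar
img[30:98, 54:74] += 0.8            # vertical bar (concave corners at the junction)

gimage = inverse_gaussian_gradient(img, alpha=100.0, sigma=2.0)

# Initial level set: a large disk enclosing the target.
yy, xx = np.mgrid[0:128, 0:128]
init_ls = ((yy - 64) ** 2 + (xx - 64) ** 2 < 60 ** 2).astype(np.int8)

segmentation = morphological_geodesic_active_contour(
    gimage, 150, init_level_set=init_ls,
    smoothing=1, balloon=-1, threshold='auto')  # negative balloon: contour converges inward

print("segmented pixels:", int(segmentation.sum()))
```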
The rise and fall of a challenger: the Bullet Cluster in Λ cold dark matter simulations
NASA Astrophysics Data System (ADS)
Thompson, Robert; Davé, Romeel; Nagamine, Kentaro
2015-09-01
The Bullet Cluster has provided some of the best evidence for the Λ cold dark matter (ΛCDM) model via direct empirical proof of the existence of collisionless dark matter, while posing a serious challenge owing to the unusually high inferred pairwise velocities of its progenitor clusters. Here, we investigate the probability of finding such a high-velocity pair in large-volume N-body simulations, particularly focusing on differences between halo-finding algorithms. We find that algorithms that do not account for the kinematics of infalling groups yield vastly different statistics and probabilities. When employing the ROCKSTAR halo finder that considers particle velocities, we find numerous Bullet-like pair candidates that closely match not only the high pairwise velocity, but also the mass, mass ratio, separation distance, and collision angle of the initial conditions that have been shown to produce the Bullet Cluster in non-cosmological hydrodynamic simulations. The probability of finding a high pairwise velocity pair among haloes with Mhalo ≥ 10^14 M⊙ is 4.6 × 10^-4 using ROCKSTAR, while it is ≈34× lower using a friends-of-friends (FoF)-based approach as in previous studies. This is because the typical spatial extent of Bullet progenitors is such that FoF tends to group them into a single halo despite clearly distinct kinematics. Further requiring an appropriately high average mass among the two progenitors, we find the comoving number density of potential Bullet-like candidates to be of the order of ≈10^-10 Mpc^-3. Our findings suggest that ΛCDM straightforwardly produces massive, high relative velocity halo pairs analogous to Bullet Cluster progenitors, and hence the Bullet Cluster does not present a challenge to the ΛCDM model.
Automatic detection of diabetic retinopathy features in ultra-wide field retinal images
NASA Astrophysics Data System (ADS)
Levenkova, Anastasia; Sowmya, Arcot; Kalloniatis, Michael; Ly, Angelica; Ho, Arthur
2017-03-01
Diabetic retinopathy (DR) is a major cause of irreversible vision loss. DR screening relies on retinal clinical signs (features). Opportunities for computer-aided DR feature detection have emerged with the development of Ultra-WideField (UWF) digital scanning laser technology. UWF imaging covers 82% greater retinal area (200°), against 45° in conventional cameras [3], allowing more clinically relevant retinopathy to be detected [4]. UWF images also provide a high resolution of 3078 x 2702 pixels. Currently DR screening uses 7 overlapping conventional fundus images, and the UWF images provide similar results [1,4]. However, in 40% of cases, more retinopathy was found outside the 7 standard (ETDRS) fields by UWF, and in 10% of cases, retinopathy was reclassified as more severe [4]. This is because UWF imaging allows examination of both the central retina and more peripheral regions, with the latter implicated in DR [6]. We have developed an algorithm for automatic recognition of DR features, including bright lesions (cotton wool spots and exudates) and dark lesions (microaneurysms and blot, dot and flame haemorrhages) in UWF images. The algorithm extracts features from grayscale (green "red-free" laser light) and colour-composite UWF images, including intensity, Histogram-of-Gradient and Local Binary Pattern features. Pixel-based classification is performed with three different classifiers. The main contribution is the automatic detection of DR features in the peripheral retina. The method is evaluated by leave-one-out cross-validation on 25 UWF retinal images with 167 bright lesions, and 61 other images with 1089 dark lesions. The SVM classifier performs best, with an AUC of 94.4% / 95.31% for bright / dark lesions.
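A sketch of patch-based lesion classification in this spirit (HOG and local-binary-pattern features feeding an SVM) is given below; the synthetic patches and feature settings are assumptions, and the real pipeline's intensity features and UWF channels are not reproduced.

```python
# Patch-based lesion classification sketch: HOG + LBP features from small
# synthetic patches feed an SVM.
import numpy as np
from skimage.feature import hog, local_binary_pattern
from sklearn.svm import SVC

rng = np.random.default_rng(8)

def make_patch(lesion):
    patch = rng.normal(0.5, 0.05, (32, 32))
    if lesion:                                   # dark blob mimicking a haemorrhage
        yy, xx = np.mgrid[0:32, 0:32]
        patch -= 0.4 * np.exp(-((yy - 16) ** 2 + (xx - 16) ** 2) / 30.0)
    return np.clip(patch, 0, 1)

def features(patch):
    h = hog(patch, orientations=8, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
    lbp = local_binary_pattern((patch * 255).astype(np.uint8), P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    return np.concatenate([h, lbp_hist])

X = np.array([features(make_patch(lesion=i % 2 == 1)) for i in range(200)])
y = np.arange(200) % 2                           # 0 = background, 1 = lesion

clf = SVC(kernel="rbf").fit(X[:150], y[:150])
print("held-out accuracy:", clf.score(X[150:], y[150:]))
```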
NASA Technical Reports Server (NTRS)
Kaufman, Yoram J.; Gobron, Nadine; Pinty, Bernard; Widlowski, Jean-Luc; Verstraete, Michel M.; Lau, William K. M. (Technical Monitor)
2002-01-01
Data from the Moderate Resolution Imaging Spectroradiometer (MODIS) instrument that flies in polar orbit on the Terra platform are used to derive the aerosol optical thickness and properties over land and ocean. The relationships between visible reflectance (at blue, ρ_blue, and red, ρ_red) and mid-infrared reflectance (at 2.1 microns, ρ_2.1) are used in the MODIS aerosol retrieval algorithm to derive the global distribution of aerosols over land. These relations have been established from a series of measurements indicating that ρ_blue ≈ 0.5 ρ_red ≈ 0.25 ρ_2.1. Here we use a model describing the transfer of radiation through a vegetation canopy composed of randomly oriented leaves to assess the theoretical foundations for these relationships. Calculations for a wide range of leaf area indices and vegetation fractions show that ρ_blue is consistently about 1/4 of ρ_2.1, as used by MODIS, for the whole range of analyzed cases, except for very dark soils such as those found in burn scars. For its part, the ratio ρ_red/ρ_2.1 varies from less than the empirically derived value of 1/2 for dense and dark vegetation to more than 1/2 for a bright mixture of soil and vegetation. This is in agreement with measurements over uniform dense vegetation, but not with measurements over mixed dark scenes. In the latter case the discrepancy is probably mitigated by shadows due to uneven canopy and terrain on a large scale. It is concluded that the value of this ratio should ideally be made dependent on the land cover type in the operational processing of MODIS data, especially over dense forests.
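As a tiny numerical illustration of these empirical relations (not the operational MODIS code):

```python
# Estimate visible surface reflectances from the 2.1-micron reflectance using
# the empirical dark-target ratios discussed above; the input value is an
# example only.
rho_21 = 0.12                    # observed 2.1-micron surface reflectance (example value)
rho_red_surf = 0.50 * rho_21     # empirical red/2.1 ratio used over dark targets
rho_blue_surf = 0.25 * rho_21    # empirical blue/2.1 ratio
print(rho_blue_surf, rho_red_surf)   # 0.03, 0.06
```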
NASA Astrophysics Data System (ADS)
Michel, Patrick; Jutzi, Martin; Richardson, Derek C.
2014-11-01
In recent years, we have shown by numerical impact simulations that collisions and gravitational reaccumulation together can explain the formation of asteroid families and satellites (e.g. [1]). We also found that the presence of microporosity influences the outcome of a catastrophic disruption ([2], [3]). The size-frequency distributions (SFDs) resulting from the disruption of 100 km-diameter targets consisting of either monolithic non-porous basalt or non-porous basalt blocks held together by gravity (termed rubble piles by the investigators) has already been determined ([4], [5]). Using the same wide range of collision speeds, impact angles, and impactor sizes, we extended those studies to targets consisting of porous material represented by parameters for pumice. Dark-type asteroid families, such as C-type, are often considered to contain a high fraction of porosity (including microporosity). To determine the impact conditions for dark-type asteroid family formation, a comparison is needed between the actual family SFD and that of impact disruptions of porous bodies. Moreover, the comparison between the disruptions of non-porous, rubble-pile, and porous targets is important to assess the influence of various internal structures on the outcome. Our results show that in terms of largest remnants, in general, the outcomes for porous bodies are more similar to the ones for non-porous targets ([4]) than for rubble-pile targets ([5]). In particular, the latter targets are much weaker (the largest remnants are much smaller). We suspect that this is because the pressure-dependent shear strength between the individual components of the rubble pile is not properly modeled, which makes the body behave more like a fluid than an actual rubble pile. We will present our results and implications in terms of SFDs as well as ejection velocities over the entire considered parameter space. We will also check whether we find good agreement with existing dark-type asteroid families, allowing us to say something about their history. [1] Michel et al. 2001. Science 294, 1696.[2] Jutzi et al. 2008. Icarus 198, 242.[3] Jutzi et al. 2010. Icarus 207, 54.[4] Durda et al. 2007, Icarus 186, 498.[5] Benavidez et al. 2012. Icarus 219, 57.
Cosmogenic production of tritium in dark matter detectors
NASA Astrophysics Data System (ADS)
Amaré, J.; Castel, J.; Cebrián, S.; Coarasa, I.; Cuesta, C.; Dafni, T.; Galán, J.; García, E.; Garza, J. G.; Iguaz, F. J.; Irastorza, I. G.; Luzón, G.; Martínez, M.; Mirallas, H.; Oliván, M. A.; Ortigoza, Y.; Ortiz de Solórzano, A.; Puimedón, J.; Ruiz-Chóliz, E.; Sarsa, M. L.; Villar, J. A.; Villar, P.
2018-01-01
The direct detection of dark matter particles requires ultra-low background conditions at energies below a few tens of keV. Radioactive isotopes are produced via cosmogenic activation in detectors and other materials and those isotopes constitute a background source which has to be under control. In particular, tritium is specially relevant due to its decay properties (very low endpoint energy and long half-life) when induced in the detector medium, and because it can be generated in any material as a spallation product. Quantification of cosmogenic production of tritium is not straightforward, neither experimentally nor by calculations. In this work, a method for the calculation of production rates at sea level has been developed and applied to some of the materials typically used as targets in dark matter detectors (germanium, sodium iodide, argon and neon); it is based on a selected description of tritium production cross sections over the entire energy range of cosmic nucleons. Results have been compared to available data in the literature, either based on other calculations or from measurements. The obtained tritium production rates, ranging from a few tens to a few hundreds of nuclei per kg and per day at sea level, point to a significant contribution to the background in dark matter experiments, requiring the application of specific protocols for target material purification, material storing underground and limiting the time the detector is on surface during the building process in order to minimize the exposure to the most dangerous cosmic ray components.
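A minimal sketch of the production-rate integral described above: the rate per target atom is the convolution of the tritium production cross section with the cosmic-nucleon flux. The cross-section shape, the flux normalization and the germanium target below are placeholder assumptions, not the paper's selected excitation functions.

```python
# Illustrative sketch (not the paper's calculation):
#   R = (N_A / A) * integral( sigma(E) * phi(E) dE )
# with toy shapes for the cross section and the sea-level nucleon flux.
import numpy as np

N_A = 6.022e23                               # Avogadro's number, atoms/mol
A = 73.0                                     # g/mol, e.g. natural germanium (assumed target)
E = np.logspace(1, 4, 400)                   # nucleon kinetic energy grid, MeV
sigma = 30e-27 * (1.0 - np.exp(-E / 200.0))  # toy cross section, cm^2 (tens of mb at high E)
phi = 5e-3 * E ** -1.7                       # toy differential nucleon flux, cm^-2 s^-1 MeV^-1

integrand = sigma * phi
rate_per_gram = (N_A / A) * np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(E))
print(f"toy production rate: {rate_per_gram * 1e3 * 86400:.1f} nuclei per kg per day")
```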
Expanded Processing Techniques for EMI Systems
2012-07-01
...possible to perform better target detection using physics-based algorithms and the entire data set, rather than simulating a simpler data set and mapping... (Figure 4.25: Plots of simulated MetalMapper data for two oblate spheroidal targets.)
Space moving target detection and tracking method in complex background
NASA Astrophysics Data System (ADS)
Lv, Ping-Yue; Sun, Sheng-Li; Lin, Chang-Qing; Liu, Gao-Rui
2018-06-01
The background seen by space-borne detectors in a real space-based environment is extremely complex and the signal-to-clutter ratio is very low (SCR ≈ 1), which increases the difficulty of detecting space moving targets. In order to solve this problem, an algorithm combining background suppression based on a two-dimensional least mean square filter (TDLMS) and target enhancement based on neighborhood gray-scale difference (GSD) is proposed in this paper. The latter can filter out most of the residual background clutter, such as cloud edges, left after the former. Through this procedure, both global and local SCR are substantially improved, indicating that the target has been greatly enhanced. After removing the detector's inherent clutter region through connected-domain processing, the image contains only the target point and isolated noise, and the isolated noise can be filtered out effectively through multi-frame association. The proposed algorithm has been compared with some state-of-the-art algorithms for moving target detection and tracking tasks. The experimental results show that the performance of this algorithm is the best in terms of SCR gain, background suppression factor (BSF) and detection results.
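A minimal sketch of TDLMS-style background suppression, assuming a 5x5 support and a fixed normalized step size: an adaptive filter predicts each pixel from its neighbourhood, so smooth background cancels in the prediction error while a small target survives. This is an illustration, not the paper's implementation (which adds GSD enhancement and multi-frame association).

```python
# TDLMS-style background suppression sketch; support size and step size are assumptions.
import numpy as np

def tdlms_residual(image, k=2, mu=5e-3):
    """Background-suppressed residual from a 2-D (normalized) LMS predictor."""
    h, w = image.shape
    n = (2 * k + 1) ** 2 - 1                  # neighbourhood size, centre excluded
    weights = np.full(n, 1.0 / n)             # start as an averaging filter
    centre = ((2 * k + 1) ** 2) // 2
    residual = np.zeros_like(image, dtype=float)
    for i in range(k, h - k):
        for j in range(k, w - k):
            win = image[i - k:i + k + 1, j - k:j + k + 1].astype(float).ravel()
            win = np.delete(win, centre)
            err = float(image[i, j]) - weights @ win          # prediction error
            residual[i, j] = err
            weights += mu * err * win / (win @ win + 1e-9)    # normalized LMS update
    return residual

frame = np.random.default_rng(1).normal(100.0, 2.0, (64, 64))
frame[32, 32] += 15.0                                         # faint point target
res = tdlms_residual(frame)
print("residual at target vs. background std:", round(res[32, 32], 1), round(res.std(), 1))
```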
A Max-Flow Based Algorithm for Connected Target Coverage with Probabilistic Sensors
Shan, Anxing; Xu, Xianghua; Cheng, Zongmao; Wang, Wensheng
2017-01-01
Coverage is a fundamental issue in the research field of wireless sensor networks (WSNs). Connected target coverage addresses sensor placement that guarantees the needs of both coverage and connectivity. Existing works largely rely on the Boolean disk model, which is only a coarse approximation of the practical sensing model. In this paper, we focus on the connected target coverage issue based on the probabilistic sensing model, which can characterize the quality of coverage more accurately. In the probabilistic sensing model, sensors are only able to detect a target with a certain probability. We study the collaborative detection probability of a target under multiple sensors. Armed with the analysis of collaborative detection probability, we further formulate the minimum ϵ-connected target coverage problem, aiming to minimize the number of sensors satisfying the requirements of both coverage and connectivity. We map it into a flow graph and present an approximation algorithm called the minimum vertices maximum flow algorithm (MVMFA) with provable time complexity and approximation ratios. To evaluate our design, we analyze the performance of MVMFA theoretically and also conduct extensive simulation studies to demonstrate the effectiveness of our proposed algorithm. PMID:28587084
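One common form of the collaborative detection probability under a probabilistic sensing model is 1 - prod(1 - p_i) over the contributing sensors. The sketch below uses an exponentially decaying per-sensor detection probability, which is an assumed model for illustration and not necessarily the paper's exact sensing function.

```python
# Collaborative detection probability sketch; the distance-decay model and its
# parameters are illustrative assumptions.
import numpy as np

def single_sensor_prob(d, r_certain=5.0, lam=0.4):
    """Detection probability: 1 inside r_certain, exponentially decaying beyond."""
    return np.where(d <= r_certain, 1.0, np.exp(-lam * (d - r_certain)))

def collaborative_prob(sensor_xy, target_xy):
    d = np.linalg.norm(sensor_xy - target_xy, axis=1)
    p = single_sensor_prob(d)
    return 1.0 - np.prod(1.0 - p)             # at least one sensor detects the target

sensors = np.array([[0.0, 0.0], [8.0, 1.0], [3.0, 9.0]])
target = np.array([12.0, 8.0])
print(f"collaborative detection probability: {collaborative_prob(sensors, target):.3f}")
```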
Interactive target tracking for persistent wide-area surveillance
NASA Astrophysics Data System (ADS)
Ersoy, Ilker; Palaniappan, Kannappan; Seetharaman, Guna S.; Rao, Raghuveer M.
2012-06-01
Persistent aerial surveillance is an emerging technology that can provide continuous, wide-area coverage from an aircraft-based multiple-camera system. Tracking targets in these data sets is challenging for vision algorithms due to large data (several terabytes), very low frame rate, changing viewpoint, strong parallax and other imperfections due to registration and projection. Providing an interactive system for automated target tracking also has additional challenges that require online algorithms that are seamlessly integrated with interactive visualization tools to assist the user. We developed an algorithm that overcomes these challenges and demonstrated it on data obtained from a wide-area imaging platform.
UAVs Task and Motion Planning in the Presence of Obstacles and Prioritized Targets
Gottlieb, Yoav; Shima, Tal
2015-01-01
The intertwined task assignment and motion planning problem of assigning a team of fixed-winged unmanned aerial vehicles to a set of prioritized targets in an environment with obstacles is addressed. It is assumed that the targets’ locations and initial priorities are determined using a network of unattended ground sensors used to detect potential threats at restricted zones. The targets are characterized by a time-varying level of importance, and timing constraints must be fulfilled before a vehicle is allowed to visit a specific target. It is assumed that the vehicles are carrying body-fixed sensors and, thus, are required to approach a designated target while flying straight and level. The fixed-winged aerial vehicles are modeled as Dubins vehicles, i.e., having a constant speed and a minimum turning radius constraint. The investigated integrated problem of task assignment and motion planning is posed in the form of a decision tree, and two search algorithms are proposed: an exhaustive algorithm that improves over run time and provides the minimum cost solution, encoded in the tree, and a greedy algorithm that provides a quick feasible solution. To satisfy the target’s visitation timing constraint, a path elongation motion planning algorithm amidst obstacles is provided. Using simulations, the performance of the algorithms is compared, evaluated and exemplified. PMID:26610522
Finding viable models in SUSY parameter spaces with signal specific discovery potential
NASA Astrophysics Data System (ADS)
Burgess, Thomas; Lindroos, Jan Øye; Lipniacka, Anna; Sandaker, Heidi
2013-08-01
Recent results from ATLAS giving a Higgs mass of 125.5 GeV, further constrain already highly constrained supersymmetric models such as pMSSM or CMSSM/mSUGRA. As a consequence, finding potentially discoverable and non-excluded regions of model parameter space is becoming increasingly difficult. Several groups have invested large effort in studying the consequences of Higgs mass bounds, upper limits on rare B-meson decays, and limits on relic dark matter density on constrained models, aiming at predicting superpartner masses, and establishing likelihood of SUSY models compared to that of the Standard Model vis-á-vis experimental data. In this paper a framework for efficient search for discoverable, non-excluded regions of different SUSY spaces giving specific experimental signature of interest is presented. The method employs an improved Markov Chain Monte Carlo (MCMC) scheme exploiting an iteratively updated likelihood function to guide search for viable models. Existing experimental and theoretical bounds as well as the LHC discovery potential are taken into account. This includes recent bounds on relic dark matter density, the Higgs sector and rare B-mesons decays. A clustering algorithm is applied to classify selected models according to expected phenomenology enabling automated choice of experimental benchmarks and regions to be used for optimizing searches. The aim is to provide experimentalist with a viable tool helping to target experimental signatures to search for, once a class of models of interest is established. As an example a search for viable CMSSM models with τ-lepton signatures observable with the 2012 LHC data set is presented. In the search 105209 unique models were probed. From these, ten reference benchmark points covering different ranges of phenomenological observables at the LHC were selected.
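A generic Metropolis-Hastings sketch of a likelihood-guided scan over a toy two-parameter model space. The toy "observable" constraint and the proposal widths are assumptions; the actual framework iterates the likelihood with many experimental bounds and LHC reach estimates.

```python
# Generic MCMC parameter-space scan sketch (not the authors' framework).
import numpy as np

rng = np.random.default_rng(42)

def log_likelihood(theta):
    # Toy constraint: an "observable" m(theta) measured as 125.5 +/- 1.0 (assumption).
    m = 100.0 + 0.2 * theta[0] + 0.05 * theta[1]
    return -0.5 * ((m - 125.5) / 1.0) ** 2

theta = np.array([100.0, 500.0])
chain, logl = [theta], log_likelihood(theta)
for _ in range(5000):
    proposal = theta + rng.normal(0.0, [5.0, 25.0])       # random-walk proposal
    logl_new = log_likelihood(proposal)
    if np.log(rng.random()) < logl_new - logl:            # Metropolis acceptance rule
        theta, logl = proposal, logl_new
    chain.append(theta)
print("fraction of chain near the toy constraint:",
      np.mean([log_likelihood(t) > -2 for t in chain]))
```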
Rosenwasser, Shilo; Rot, Ilona; Sollner, Evelyn; Meyer, Andreas J.; Smith, Yoav; Leviatan, Noam; Fluhr, Robert; Friedman, Haya
2011-01-01
Treatment of Arabidopsis (Arabidopsis thaliana) leaves by extended darkness generates a genetically activated senescence program that culminates in cell death. The transcriptome of leaves subjected to extended darkness was found to contain a variety of reactive oxygen species (ROS)-specific signatures. The levels of transcripts constituting the transcriptome footprints of chloroplasts and cytoplasm ROS stresses decreased in leaves, as early as the second day of darkness. In contrast, an increase was detected in transcripts associated with mitochondrial and peroxisomal ROS stresses. The sequential changes in the redox state of the organelles during darkness were examined by redox-sensitive green fluorescent protein probes (roGFP) that were targeted to specific organelles. In plastids, roGFP showed a decreased level of oxidation as early as the first day of darkness, followed by a gradual increase to starting levels. However, in mitochondria, the level of oxidation of roGFP rapidly increased as early as the first day of darkness, followed by an increase in the peroxisomal level of oxidation of roGFP on the second day. No changes in the probe oxidation were observed in the cytoplasm until the third day. The increase in mitochondrial roGFP degree of oxidation was abolished by sucrose treatment, implying that oxidation is caused by energy deprivation. The dynamic redox state visualized by roGFP probes and the analysis of microarray results are consistent with a scenario in which ROS stresses emanating from the mitochondria and peroxisomes occur early during darkness at a presymptomatic stage and jointly contribute to the senescence program. PMID:21372201
ACS Internal CTE Monitor and Short Darks
NASA Astrophysics Data System (ADS)
Ogaz, Sara
2013-10-01
This is a continuation of Program 13156 and is to be executed once a cycle for internal CTE and short darks, respectively. INTERNAL CTE MONITOR: The charge transfer efficiency {CTE} of the ACS CCD detectors will decline as damage due to on-orbit radiation exposure accumulates. This degradation will be monitored once a cycle to determine the useful lifetime of the CCDs. All the data for this program is acquired using internal targets {lamps} only, so all of the exposures should be taken during Earth occultation time {but not during SAA passages}. This program emulates the ACS pre-flight ground calibration and post-launch SMOV testing {program 8948}, so that results from each epoch can be directly compared. Extended Pixel Edge Response {EPER} data will be obtained over a range of signal levels for the Wide Field Channel {WFC}. The signal levels are 125, 500, 1620, 5000, 10000, and 60000 electrons at gain 2. Since Cycle 18, this monitoring program was reduced {compared to 11881} considering that there is also an external CTE monitoring program. SHORT DARKS: To improve the pixel-based CTE model at signals below 10 DN, short dark frames are needed to obtain a statistically useful sample of clean, warm pixel trails. This program obtains a set of dark frames for each of the following exposure times: 66 s {60 s for some subarrays} and 339 s. These short darks and the 1040 s darks obtained from the CCD Daily Monitor will sample warm and hot pixels over logarithmically increasing brightness. Subarray short darks were added in Cycle 19 to study CTE tails in different subarray readout modes.
NASA Astrophysics Data System (ADS)
Hild, Jutta; Krüger, Wolfgang; Brüstle, Stefan; Trantelle, Patrick; Unmüßig, Gabriel; Voit, Michael; Heinze, Norbert; Peinsipp-Byma, Elisabeth; Beyerer, Jürgen
2017-05-01
Real-time motion video analysis is a challenging and exhausting task for the human observer, particularly in safety and security critical domains. Hence, customized video analysis systems providing functions for the analysis of subtasks like motion detection or target tracking are welcome. While such automated algorithms relieve the human operators from performing basic subtasks, they impose additional interaction duties on them. Prior work shows that, e.g., for interaction with target tracking algorithms, a gaze-enhanced user interface is beneficial. In this contribution, we present an investigation on interaction with an independent motion detection (IDM) algorithm. Besides identifying an appropriate interaction technique for the user interface - again, we compare gaze-based and traditional mouse-based interaction - we focus on the benefit an IDM algorithm might provide for an UAS video analyst. In a pilot study, we exposed ten subjects to the task of moving target detection in UAS video data twice, once performing with automatic support, once performing without it. We compare the two conditions considering performance in terms of effectiveness (correct target selections). Additionally, we report perceived workload (measured using the NASA-TLX questionnaire) and user satisfaction (measured using the ISO 9241-411 questionnaire). The results show that a combination of gaze input and automated IDM algorithm provides valuable support for the human observer, increasing the number of correct target selections up to 62% and reducing workload at the same time.
Tensor Fukunaga-Koontz transform for small target detection in infrared images
NASA Astrophysics Data System (ADS)
Liu, Ruiming; Wang, Jingzhuo; Yang, Huizhen; Gong, Chenglong; Zhou, Yuanshen; Liu, Lipeng; Zhang, Zhen; Shen, Shuli
2016-09-01
Infrared small target detection plays a crucial role in warning and tracking systems. Some novel methods based on pattern recognition technology have attracted much attention from researchers. However, those classic methods must reshape images into vectors of high dimensionality. Moreover, vectorizing breaks the natural structure and correlations in the image data. Image representation based on tensors treats images as matrices and can preserve this natural structure and correlation information, so tensor algorithms have better classification performance than vector algorithms. The Fukunaga-Koontz transform is one such classification algorithm, but it is a vector method and shares the disadvantage of all vector algorithms. In this paper, we first extend the Fukunaga-Koontz transform into its tensor version, the tensor Fukunaga-Koontz transform. We then design a method based on the tensor Fukunaga-Koontz transform for detecting targets and use it to detect small targets in infrared images. The experimental results, compared in terms of signal-to-clutter ratio, signal-to-clutter gain and background suppression factor, validate the advantage of target detection based on the tensor Fukunaga-Koontz transform over that based on the Fukunaga-Koontz transform.
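For reference, a sketch of the classical (vector) Fukunaga-Koontz transform that the paper generalizes to tensors: after whitening the sum of the two class correlation matrices, the whitened class matrices share eigenvectors whose eigenvalues sum to one, so directions dominant for one class are weakest for the other. The synthetic two-class data are illustrative assumptions.

```python
# Vector Fukunaga-Koontz transform sketch; synthetic classes are assumptions.
import numpy as np

rng = np.random.default_rng(0)
targets = rng.normal(size=(200, 16)) @ np.diag(np.linspace(2.0, 0.2, 16))   # class 1
clutter = rng.normal(size=(200, 16)) @ np.diag(np.linspace(0.2, 2.0, 16))   # class 2

S1 = targets.T @ targets / len(targets)
S2 = clutter.T @ clutter / len(clutter)

evals, evecs = np.linalg.eigh(S1 + S2)
W = evecs @ np.diag(evals ** -0.5)            # whitens S1 + S2 to the identity

lam1, U = np.linalg.eigh(W.T @ S1 @ W)        # shared eigenvectors in whitened space
lam2 = np.diag(U.T @ (W.T @ S2 @ W) @ U)
print("class-1 + class-2 eigenvalues (should all be ~1):", np.round(lam1 + lam2, 3))
```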
NASA Astrophysics Data System (ADS)
Meng, Siqi; Ren, Kan; Lu, Dongming; Gu, Guohua; Chen, Qian; Lu, Guojun
2018-03-01
Synthetic aperture radar (SAR) is an indispensable and useful method for marine monitoring. With the increase of SAR sensors, high resolution images can be acquired that contain more target structure information, such as more spatial details. This paper presents a novel adaptive parameter transform (APT) domain constant false alarm rate (CFAR) detector to highlight targets; the whole method operates on the APT domain values. Firstly, the image is mapped to the new transform domain by the algorithm. Secondly, false candidate target pixels are screened out by the CFAR detector to highlight the target ships. Thirdly, the ship pixels are replaced by homogeneous sea pixels, and the enhanced image is then processed with the Niblack algorithm to obtain a binary wake image. Finally, the normalized Hough transform (NHT) is used to detect wakes in the binary image as a verification of the presence of the ships. Experiments on real SAR images validate that the proposed transform does enhance the target structure and improve the contrast of the image. The algorithm shows good performance in ship and ship wake detection.
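A generic cell-averaging CFAR sketch (not the paper's APT-domain detector): each pixel is compared with a threshold scaled from the mean of a surrounding training ring, excluding a guard area. The window sizes and scale factor are assumptions.

```python
# Cell-averaging CFAR sketch; guard/training sizes and alpha are assumptions.
import numpy as np

def ca_cfar(image, guard=2, train=6, alpha=8.0):
    h, w = image.shape
    k = guard + train
    detections = np.zeros_like(image, dtype=bool)
    for i in range(k, h - k):
        for j in range(k, w - k):
            outer = image[i - k:i + k + 1, j - k:j + k + 1]
            inner = image[i - guard:i + guard + 1, j - guard:j + guard + 1]
            clutter = (outer.sum() - inner.sum()) / (outer.size - inner.size)
            detections[i, j] = image[i, j] > alpha * clutter   # adaptive threshold
    return detections

sea = np.random.default_rng(2).exponential(1.0, (80, 80))      # speckle-like clutter
sea[40, 40] = 25.0                                             # bright ship pixel
print("detections at:", np.argwhere(ca_cfar(sea)))
```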
Virtual local target method for avoiding local minimum in potential field based robot navigation.
Zou, Xi-Yong; Zhu, Jing
2003-01-01
A novel robot navigation algorithm with global path generation capability is presented. The local minimum is one of the most intractable yet frequently encountered problems in potential field based robot navigation. By appointing appropriate virtual local targets along the journey, it can be solved effectively. The key concept employed in this algorithm is the set of rules that govern when and how to appoint these virtual local targets. When the robot finds itself in danger of a local minimum, a virtual local target is appointed to replace the global goal temporarily according to the rules. After the virtual target is reached, the robot continues on its journey by heading towards the global goal. The algorithm prevents the robot from becoming trapped in local minima. Simulation results showed that it is very effective in complex obstacle environments.
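A minimal potential-field sketch with a virtual local target, under assumed stall-detection and target-placement rules (the paper's specific rules are not reproduced): when progress toward the goal stalls, a temporary virtual target is appointed, and the global goal is restored once it is reached.

```python
# Potential-field navigation with a virtual local target; the stall test and the
# virtual-target placement rule below are illustrative assumptions.
import numpy as np

def force(pos, target, obstacles, k_att=1.0, k_rep=4.0, rho0=2.0):
    f = k_att * (target - pos)                                # attractive term
    for ob in obstacles:
        d = np.linalg.norm(pos - ob)
        if d < rho0:                                          # repulsive term
            f += k_rep * (1.0 / d - 1.0 / rho0) / d**2 * (pos - ob) / d
    return f

pos, goal = np.array([0.0, 0.0]), np.array([10.0, 0.0])
obstacles = [np.array([5.0, 0.6]), np.array([5.0, -0.6])]     # symmetric trap on the way
target, step = goal, 0.05
best, stall = np.inf, 0
for _ in range(4000):
    pos = pos + step * force(pos, target, obstacles)
    d_goal = np.linalg.norm(pos - goal)
    best, stall = (d_goal, 0) if d_goal < best - 1e-3 else (best, stall + 1)
    if stall > 60 and target is goal:                         # trapped: appoint virtual target
        target, best, stall = pos + np.array([0.0, 3.0]), np.inf, 0
    elif target is not goal and np.linalg.norm(pos - target) < 0.5:
        target, best, stall = goal, np.inf, 0                 # resume heading to the goal
    if d_goal < 0.2:
        break
print("final distance to goal:", round(float(np.linalg.norm(pos - goal)), 2))
```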
Lim, Hansaim; Gray, Paul; Xie, Lei; Poleksic, Aleksandar
2016-01-01
Conventional one-drug-one-gene approach has been of limited success in modern drug discovery. Polypharmacology, which focuses on searching for multi-targeted drugs to perturb disease-causing networks instead of designing selective ligands to target individual proteins, has emerged as a new drug discovery paradigm. Although many methods for single-target virtual screening have been developed to improve the efficiency of drug discovery, few of these algorithms are designed for polypharmacology. Here, we present a novel theoretical framework and a corresponding algorithm for genome-scale multi-target virtual screening based on the one-class collaborative filtering technique. Our method overcomes the sparseness of the protein-chemical interaction data by means of interaction matrix weighting and dual regularization from both chemicals and proteins. While the statistical foundation behind our method is general enough to encompass genome-wide drug off-target prediction, the program is specifically tailored to find protein targets for new chemicals with little to no available interaction data. We extensively evaluate our method using a number of the most widely accepted gene-specific and cross-gene family benchmarks and demonstrate that our method outperforms other state-of-the-art algorithms for predicting the interaction of new chemicals with multiple proteins. Thus, the proposed algorithm may provide a powerful tool for multi-target drug design. PMID:27958331
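A minimal one-class (implicit-feedback) collaborative-filtering sketch using weighted alternating least squares on a toy chemical-protein interaction matrix. The paper's dual regularization from chemical and protein similarity is omitted here, and all data and parameters are illustrative assumptions.

```python
# Weighted ALS sketch for one-class collaborative filtering; the similarity-based
# dual regularization of the paper is not included.
import numpy as np

rng = np.random.default_rng(0)
R = (rng.random((30, 20)) < 0.08).astype(float)    # sparse chemical x protein interactions
alpha, lam, k = 20.0, 0.1, 5
U = 0.1 * rng.standard_normal((R.shape[0], k))     # chemical factors
V = 0.1 * rng.standard_normal((R.shape[1], k))     # protein factors

def als_step(X, Y, interactions):
    """Solve each row of X given fixed Y under confidence weights 1 + alpha*r."""
    for i in range(X.shape[0]):
        w = 1.0 + alpha * interactions[i]
        A = (Y * w[:, None]).T @ Y + lam * np.eye(k)
        X[i] = np.linalg.solve(A, (Y * w[:, None]).T @ interactions[i])

for _ in range(20):
    als_step(U, V, R)        # update chemical factors
    als_step(V, U, R.T)      # update protein factors

scores = U @ V.T             # predicted interaction scores for all pairs
print("top score among unobserved chemical-protein pairs:", round(scores[R == 0].max(), 3))
```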
Enhancement of tracking performance in electro-optical system based on servo control algorithm
NASA Astrophysics Data System (ADS)
Choi, WooJin; Kim, SungSu; Jung, DaeYoon; Seo, HyoungKyu
2017-10-01
Modern electro-optical surveillance and reconnaissance systems require a tracking capability to get exact images of the target or to accurately direct the line of sight to a target that is moving or still. This leads to a tracking system composed of an image-based tracking algorithm and a servo control algorithm. In this study, we focus on the servo control function to minimize overshoot in the tracking motion and avoid missing the target. The scheme is to limit acceleration and velocity parameters in the tracking controller, depending on the target state information in the image. We implement the proposed techniques by creating a system model of a DIRCM, simulate the same environment, and validate the performance on the actual equipment.
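A simple sketch of the command-shaping idea: a proportional rate command toward the target is clamped by velocity and acceleration limits, which suppresses overshoot. The gains and limits are illustrative assumptions, not the authors' DIRCM controller parameters.

```python
# Velocity/acceleration-limited tracking command sketch; all parameters are assumptions.
import numpy as np

def servo_step(angle, v_cmd, target, dt=0.01, kp=8.0, v_max=0.5, a_max=2.0):
    v_des = kp * (target - angle)                            # proportional rate command
    dv = np.clip(v_des - v_cmd, -a_max * dt, a_max * dt)     # acceleration limit
    v_cmd = float(np.clip(v_cmd + dv, -v_max, v_max))        # velocity limit
    return angle + v_cmd * dt, v_cmd

angle, v = 0.0, 0.0
for _ in range(300):                                         # 3 s of tracking a 0.2 rad step
    angle, v = servo_step(angle, v, target=0.2)
print(f"final pointing error: {0.2 - angle:+.5f} rad")
```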
An Efficient Moving Target Detection Algorithm Based on Sparsity-Aware Spectrum Estimation
Shen, Mingwei; Wang, Jie; Wu, Di; Zhu, Daiyin
2014-01-01
In this paper, an efficient direct data domain space-time adaptive processing (STAP) algorithm for moving targets detection is proposed, which is achieved based on the distinct spectrum features of clutter and target signals in the angle-Doppler domain. To reduce the computational complexity, the high-resolution angle-Doppler spectrum is obtained by finding the sparsest coefficients in the angle domain using the reduced-dimension data within each Doppler bin. Moreover, we will then present a knowledge-aided block-size detection algorithm that can discriminate between the moving targets and the clutter based on the extracted spectrum features. The feasibility and effectiveness of the proposed method are validated through both numerical simulations and raw data processing results. PMID:25222035
An improved algorithm of mask image dodging for aerial image
NASA Astrophysics Data System (ADS)
Zhang, Zuxun; Zou, Songbai; Zuo, Zhiqi
2011-12-01
The Mask image dodging technique based on the Fourier transform is a good algorithm for removing uneven luminance within a single image. At present, the difference method and the ratio method are the methods in common use, but both have their own defects. For example, the difference method can keep the brightness uniformity of the whole image, but it is deficient in local contrast; meanwhile, the ratio method works better for local contrast, but it sometimes makes the dark areas of the original image too bright. In order to remove the defects of the two methods effectively, this paper, on the basis of a study of the two methods, proposes a balanced solution. Experiments show that the scheme not only combines the advantages of the difference method and the ratio method, but also avoids the deficiencies of the two algorithms.
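A sketch of the difference and ratio corrections, with a Gaussian low-pass standing in for the paper's Fourier-domain background mask and a naive 50/50 blend standing in for the proposed balanced scheme; these choices are illustrative assumptions only.

```python
# Mask-dodging sketch: low-pass "background", difference and ratio corrections,
# and a naive blend (not the authors' balanced scheme).
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
image = rng.uniform(80, 160, (256, 256))
image *= np.linspace(0.6, 1.2, 256)[None, :]                 # simulate uneven illumination

background = gaussian_filter(image, sigma=40)                # low-frequency luminance estimate
mean_level = background.mean()

difference = image - background + mean_level                 # preserves global brightness
ratio = image * mean_level / np.maximum(background, 1e-6)    # preserves local contrast
balanced = 0.5 * difference + 0.5 * ratio                    # naive blend of the two

for name, img in [("original", image), ("difference", difference),
                  ("ratio", ratio), ("balanced", balanced)]:
    print(f"{name:10s} column-mean spread: {np.ptp(img.mean(axis=0)):7.2f}")
```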
The effect of visual context on manual localization of remembered targets
NASA Technical Reports Server (NTRS)
Barry, S. R.; Bloomberg, J. J.; Huebner, W. P.
1997-01-01
This paper examines the contribution of egocentric cues and visual context to manual localization of remembered targets. Subjects pointed in the dark to the remembered position of a target previously viewed without or within a structured visual scene. Without a remembered visual context, subjects pointed to within 2 degrees of the target. The presence of a visual context with cues of straight ahead enhanced pointing performance to the remembered location of central but not off-center targets. Thus, visual context provides strong visual cues of target position and the relationship of body position to target location. Without a visual context, egocentric cues provide sufficient input for accurate pointing to remembered targets.
NASA Astrophysics Data System (ADS)
Plante, Guillaume
An impressive array of astrophysical observations suggests that 83% of the matter in the universe is in a form of non-luminous, cold, collisionless, non-baryonic dark matter. Several extensions of the Standard Model of particle physics aimed at solving the hierarchy problem predict stable weakly interacting massive particles (WIMPs) that could naturally have the right cosmological relic abundance today to compose most of the dark matter if their interactions with normal matter are on the order of a weak scale cross section. These candidates also have the added benefit that their properties and interaction rates can be computed in a well defined particle physics model. A considerable experimental effort is currently under way to uncover the nature of dark matter. One method of detecting WIMP dark matter is to look for its interactions in terrestrial detectors where it is expected to scatter off nuclei. In 2007, the XENON10 experiment took the lead over the most sensitive direct detection dark matter search in operation, the CDMS II experiment, by probing spin-independent WIMP-nucleon interaction cross sections down to σ_χN ≈ 5 x 10^-44 cm^2 at 30 GeV/c^2. Liquefied noble gas detectors are now among the technologies at the forefront of direct detection experiments. Liquid xenon (LXe), in particular, is a well suited target for WIMP direct detection. It is easily scalable to larger target masses, allows discrimination between nuclear recoils and electronic recoils, and has an excellent stopping power to shield against external backgrounds. A particle losing energy in LXe creates both ionization electrons and scintillation light. In a dual-phase LXe time projection chamber (TPC) the ionization electrons are drifted and extracted into the gas phase where they are accelerated to amplify the charge signal into a proportional scintillation signal. These two signals allow the three-dimensional localization of events with millimeter precision and the ability to fiducialize the target volume, yielding an inner core with a very low background. Additionally, the ratio of ionization and scintillation can be used to discriminate between nuclear recoils, from neutrons or WIMPs, and electronic recoils, from gamma or beta backgrounds. In these detectors, the energy scale is based on the scintillation signal of nuclear recoils and consequently the precise knowledge of the scintillation efficiency of nuclear recoils in LXe is of prime importance. Inspired by the success of the XENON10 experiment, the XENON collaboration designed and built a new LXe TPC, ten times larger and with a one hundred times lower background, to search for dark matter. It is currently the most sensitive direct detection experiment in operation. In order to shed light on the response of LXe to low energy nuclear recoils, a new single phase detector designed specifically for the measurement of the scintillation efficiency of nuclear recoils was also built. In 2011, the XENON100 dark matter results from 100 live days set the most stringent limit on the spin-independent WIMP-nucleon interaction cross section over a wide range of masses, down to σ_χN ≈ 7 x 10^-45 cm^2 at 50 GeV/c^2, almost an order of magnitude improvement over XENON10 in less than four years. This thesis describes the research conducted in the context of the XENON100 dark matter search experiment.
I describe the initial simulation results and ideas that influenced the design of the XENON100 detector, the construction and assembly steps that led to its concrete realization, the detector and its subsystems, a subset of the calibration results of the detector, and finally the dark matter exclusion limits. I also describe in detail a new, improved measurement of a quantity important for the interpretation of results from LXe dark matter searches: the scintillation efficiency of low-energy nuclear recoils in LXe.
Dark Matter Limits From a 2L C3F8 Filled Bubble Chamber
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robinson, Alan Edward
2015-12-01
The PICO-2L C3F8 bubble chamber search for Weakly Interacting Massive Particle (WIMP) dark matter was operated in the SNOLAB underground laboratory at the same location as the previous CF3I-filled COUPP-4kg detector. Neutron calibrations using photoneutron sources in C3F8- and CF3I-filled calibration bubble chambers were performed to verify the sensitivity of these target fluids to dark matter scattering. These data were combined with similar measurements using a low-energy neutron beam at the University of Montreal and in situ calibrations of the PICO-2L and COUPP-4kg detectors. C3F8 provides much greater sensitivity to WIMP-proton scattering than CF3I in bubble chamber detectors. PICO-2L searched for dark matter recoils with energy thresholds below 10 keV. Radiopurity assays of detector materials were performed and the expected neutron recoil background was evaluated to be 1.6+0.3 ...
Scattering of dark particles with light mediators
NASA Astrophysics Data System (ADS)
Soper, Davison E.; Spannowsky, Michael; Wallace, Chris J.; Tait, Tim M. P.
2014-12-01
We present a treatment of the high energy scattering of dark Dirac fermions from nuclei, mediated by the exchange of a light vector boson. The dark fermions are produced by proton-nucleus interactions in a fixed target and, after traversing shielding that screens out strongly interacting products, appear similarly to neutrino neutral current scattering in a detector. Using the Fermilab experiment E613 as an example, we place limits on a secluded dark matter scenario. Visible scattering in the detector includes both the familiar regime of large momentum transfer to the nucleus (Q2) described by deeply inelastic scattering, as well as small Q2 kinematics described by the exchanged vector mediator fluctuating into a quark-antiquark pair whose interaction with the nucleus is described by a saturation model. We find that the improved description of the low Q2 scattering leads to important corrections, resulting in more robust constraints in a regime where a description entirely in terms of deeply inelastic scattering cannot be trusted.
Studying generalised dark matter interactions with extended halo-independent methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kahlhoefer, Felix; Wild, Sebastian
2016-10-20
The interpretation of dark matter direct detection experiments is complicated by the fact that neither the astrophysical distribution of dark matter nor the properties of its particle physics interactions with nuclei are known in detail. To address both of these issues in a very general way we develop a new framework that combines the full formalism of non-relativistic effective interactions with state-of-the-art halo-independent methods. This approach makes it possible to analyse direct detection experiments for arbitrary dark matter interactions and quantify the goodness-of-fit independent of astrophysical uncertainties. We employ this method in order to demonstrate that the degeneracy between astrophysical uncertainties and particle physics unknowns is not complete. Certain models can be distinguished in a halo-independent way using a single ton-scale experiment based on liquid xenon, while other models are indistinguishable with a single experiment but can be separated using combined information from several target elements.
Extended maximum likelihood halo-independent analysis of dark matter direct detection data
Gelmini, Graciela B.; Georgescu, Andreea; Gondolo, Paolo; ...
2015-11-24
We extend and correct a recently proposed maximum-likelihood halo-independent method to analyze unbinned direct dark matter detection data. Instead of the recoil energy as independent variable we use the minimum speed a dark matter particle must have to impart a given recoil energy to a nucleus. This has the advantage of allowing us to apply the method to any type of target composition and interaction, e.g. with general momentum and velocity dependence, and with elastic or inelastic scattering. We prove the method and provide a rigorous statistical interpretation of the results. As first applications, we find that for dark matter particles with elastic spin-independent interactions and neutron to proton coupling ratio f_n/f_p = -0.7, the WIMP interpretation of the signal observed by CDMS-II-Si is compatible with the constraints imposed by all other experiments with null results. We also find a similar compatibility for exothermic inelastic spin-independent interactions with f_n/f_p = -0.8.
Improved Limits for Higgs-Portal Dark Matter from LHC Searches.
Hoferichter, Martin; Klos, Philipp; Menéndez, Javier; Schwenk, Achim
2017-11-03
Searches for invisible Higgs decays at the Large Hadron Collider constrain dark matter Higgs-portal models, where dark matter interacts with the standard model fields via the Higgs boson. While these searches complement dark matter direct-detection experiments, a comparison of the two limits depends on the coupling of the Higgs boson to the nucleons forming the direct-detection nuclear target, typically parametrized in a single quantity f_{N}. We evaluate f_{N} using recent phenomenological and lattice-QCD calculations, and include for the first time the coupling of the Higgs boson to two nucleons via pion-exchange currents. We observe a partial cancellation for Higgs-portal models that makes the two-nucleon contribution anomalously small. Our results, summarized as f_{N}=0.308(18), show that the uncertainty of the Higgs-nucleon coupling has been vastly overestimated in the past. The improved limits highlight that state-of-the-art nuclear physics input is key to fully exploiting experimental searches.
Z boson mediated dark matter beyond the effective theory
Kearney, John; Orlofsky, Nicholas; Pierce, Aaron
2017-02-17
Here, direct detection bounds are beginning to constrain a very simple model of weakly interacting dark matter—a Majorana fermion with a coupling to the Z boson. In a particularly straightforward gauge-invariant realization, this coupling is introduced via a higher-dimensional operator. While attractive in its simplicity, this model generically induces a large ρ parameter. An ultraviolet completion that avoids an overly large contribution to ρ is the singlet-doublet model. We revisit this model, focusing on the Higgs blind spot region of parameter space where spin-independent interactions are absent. This model successfully reproduces dark matter with direct detection mediated by the Z boson but whose cosmology may depend on additional couplings and states. Future direct detection experiments should effectively probe a significant portion of this parameter space, aside from a small coannihilating region. As such, Z-mediated thermal dark matter as realized in the singlet-doublet model represents an interesting target for future searches.
2012-08-06
This image taken by NASA's Curiosity rover shows what lies ahead for the rover -- its main science target, informally called Mount Sharp. The rover's shadow can be seen in the foreground, and the dark bands beyond are dunes.
Aircraft Detection in High-Resolution SAR Images Based on a Gradient Textural Saliency Map.
Tan, Yihua; Li, Qingyun; Li, Yansheng; Tian, Jinwen
2015-09-11
This paper proposes a new automatic and adaptive aircraft target detection algorithm for high-resolution synthetic aperture radar (SAR) images of airports. The proposed method is based on a gradient textural saliency map under the contextual cues of the apron area. Firstly, candidate regions that may contain aircraft are detected within the apron area. Secondly, a directional local gradient distribution detector is used to obtain a gradient textural saliency map over the candidate regions. Finally, the targets are detected by segmenting the saliency map using a CFAR-type algorithm. Real high-resolution airborne SAR image data are used to verify the proposed algorithm. The results demonstrate that this algorithm can detect aircraft targets quickly and accurately, and decrease the false alarm rate.
Electro-optic tracking R&D for defense surveillance
NASA Astrophysics Data System (ADS)
Sutherland, Stuart; Woodruff, Chris J.
1995-09-01
Two aspects of work on automatic target detection and tracking for electro-optic (EO) surveillance are described. Firstly, a detection and tracking algorithm test-bed developed by DSTO and running on a PC under Windows NT is being used to assess candidate algorithms for unresolved and minimally resolved target detection. The structure of this test-bed is described and examples are given of its user interfaces and outputs. Secondly, a development by Australian industry under a Defence-funded contract, of a reconfigurable generic track processor (GTP) is outlined. The GTP will include reconfigurable image processing stages and target tracking algorithms. It will be used to demonstrate to the Australian Defence Force automatic detection and tracking capabilities, and to serve as a hardware base for real time algorithm refinement.
NASA Technical Reports Server (NTRS)
Limbacher, James A.; Kahn, Ralph A.
2017-01-01
As aerosol amount and type are key factors in the 'atmospheric correction' required for remote-sensing chlorophyll alpha concentration (Chl) retrievals, the Multi-angle Imaging SpectroRadiometer (MISR) can contribute to ocean color analysis despite a lack of spectral channels optimized for this application. Conversely, an improved ocean surface constraint should also improve MISR aerosol-type products, especially spectral single-scattering albedo (SSA) retrievals. We introduce a coupled, self-consistent retrieval of Chl together with aerosol over dark water. There are time-varying MISR radiometric calibration errors that significantly affect key spectral reflectance ratios used in the retrievals. Therefore, we also develop and apply new calibration corrections to the MISR top-of-atmosphere (TOA) reflectance data, based on comparisons with coincident MODIS (Moderate Resolution Imaging Spectroradiometer) observations and trend analysis of the MISR TOA bidirectional reflectance factors (BRFs) over three pseudo-invariant desert sites. We run the MISR research retrieval algorithm (RA) with the corrected MISR reflectances to generate MISR-retrieved Chl and compare the MISR Chl values to a set of 49 coincident SeaBASS (SeaWiFS Bio-optical Archive and Storage System) in situ observations. Where Chl(sub in situ) less than 1.5 mg m(exp -3), the results from our Chl model are expected to be of highest quality, due to algorithmic assumption validity. Comparing MISR RA Chl to the 49 coincident SeaBASS observations, we report a correlation coefficient (r) of 0.86, a root-mean-square error (RMSE) of 0.25, and a median absolute error (MAE) of 0.10. Statistically, a two-sample Kolmogorov- Smirnov test indicates that it is not possible to distinguish between MISR Chl and available SeaBASS in situ Chl values (p greater than 0.1). We also compare MODIS-Terra and MISR RA Chl statistically, over much broader regions. With about 1.5 million MISR-MODIS collocations having MODIS Chl less than 1.5 mg m(exp -3), MISR and MODIS show very good agreement: r = 0.96, MAE = 0.09, and RMSE = 0.15. The new dark water aerosol/Chl RA can retrieve Chl in low-Chl, case I waters, independent of other imagers such as MODIS, via a largely physical algorithm, compared to the commonly applied statistical ones. At a minimum, MISR's multi-angle data should help reduce uncertainties in the MODIS-Terra ocean color retrieval where coincident measurements are made, while also allowing for a more robust retrieval of particle properties such as spectral single-scattering albedo.
Clever eye algorithm for target detection of remote sensing imagery
NASA Astrophysics Data System (ADS)
Geng, Xiurui; Ji, Luyan; Sun, Kang
2016-04-01
Target detection algorithms for hyperspectral remote sensing imagery, such as the two most commonly used remote sensing detection algorithms, the constrained energy minimization (CEM) and the matched filter (MF), can usually be attributed to the inner product between a weight filter (or detector) and a pixel vector. CEM and MF have the same expression except that MF requires data centralization first. However, this difference leads to a difference in the target detection results; that is to say, the selection of the data origin directly affects the performance of the detector. Does there, then, exist another data origin, other than the zero and mean-vector points, that gives better target detection performance? This is a very meaningful issue in the field of target detection, but it has not received enough attention yet. In this study, we propose a novel objective function by introducing the data origin as another variable, and the solution of the function corresponds to the data origin with the minimal output energy. The process of finding the optimal solution can be vividly regarded as a clever eye automatically searching for the best observing position and direction in the feature space, which corresponds to the largest separation between the target and the background. Therefore, this new algorithm is referred to as the clever eye algorithm (CE). Based on the Sherman-Morrison formula and the gradient ascent method, CE derives the optimal target detection result in terms of energy. Experiments with both synthetic and real hyperspectral data have verified the effectiveness of our method.
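For reference, a sketch of the CEM and MF detectors mentioned above, which share the inner-product form y = w^T x with w = R^-1 d / (d^T R^-1 d); MF centralizes the data (covariance and mean-removed pixels) while CEM uses the raw correlation matrix. The synthetic scene and target signature are assumptions, and the clever eye optimization of the data origin itself is not reproduced here.

```python
# CEM and MF detector sketch on a synthetic hyperspectral scene (assumed data).
import numpy as np

rng = np.random.default_rng(0)
bands, n_pixels = 20, 2000
scene = rng.normal(0.3, 0.05, (n_pixels, bands))
d = np.linspace(0.2, 0.8, bands)                 # assumed target spectral signature
scene[:10] += 0.3 * d                            # implant 10 sub-pixel targets

def cem(X, d):
    R = X.T @ X / len(X)                         # sample correlation matrix (origin at zero)
    w = np.linalg.solve(R, d)
    return X @ (w / (d @ w))                     # w = R^-1 d / (d^T R^-1 d)

def mf(X, d):
    mu = X.mean(axis=0)
    K = np.cov(X, rowvar=False)                  # covariance of centralized data
    w = np.linalg.solve(K, d - mu)
    return (X - mu) @ (w / ((d - mu) @ w))

for name, score in [("CEM", cem(scene, d)), ("MF", mf(scene, d))]:
    print(f"{name}: mean target score {score[:10].mean():.2f}, "
          f"mean background score {score[10:].mean():.2f}")
```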
NASA Technical Reports Server (NTRS)
Wood, S. J.; Campbell, D. J.; Reschke, M. F.; Prather, L.; Clement, G.
2016-01-01
The translational Vestibulo-Ocular Reflex (tVOR) is an important otolith-mediated response to stabilize gaze during natural locomotion. One goal of this study was to develop a measure of the tVOR using a simple hand-operated chair that provided passive vertical motion. Binocular eye movements were recorded with a tight-fitting video mask in ten healthy subjects. Vertical motion was provided by a modified spring-powered chair (swopper.com) at approximately 2 Hz (+/- 2 cm displacement) to approximate the head motion during walking. Linear acceleration was measured with wireless inertial sensors (Xsens) mounted on the head and torso. Eye movements were recorded while subjects viewed near (0.5m) and far (approximately 4m) targets, and then imagined these targets in darkness. Subjects also provided perceptual estimates of target distances. Consistent with the kinematic properties shown in previous studies, the tVOR gain was greater with near targets, and greater with vision than in darkness. We conclude that this portable chair system can provide a field measure of otolith-ocular function at frequencies sufficient to elicit a robust tVOR.
Static and dynamic structural characterization of nanomaterial catalysts
NASA Astrophysics Data System (ADS)
Masiel, Daniel Joseph
Heterogeneous catalysts systems are pervasive in industry, technology and academia. These systems often involve nanostructured transition metal particles that have crucial interfaces with either their supports or solid products. Understanding the nature of these interfaces as well as the structure of the catalysts and support materials themselves is crucial for the advancement of catalysis in general. Recent developments in the field of transmission electron microscopy (TEM) including dynamic transmission electron microscopy (DTEM), electron tomography, and in situ techniques stand poised to provide fresh insight into nanostructured catalyst systems. Several electron microscopy techniques are applied in this study to elucidate the mechanism of silica nanocoil growth and to discern the role of the support material and catalyst size in carbon dioxide and steam reforming of methane. The growth of silica nanocoils by faceted cobalt nanoparticles is a process that was initially believed to take place via a vapor-liquid-solid growth mechanism similar to other nanowire growth techniques. The extensive TEM work described here suggests that the process may instead occur via transport of silicate and silica species over the nanoparticle surface. Electron tomography studies of the interface between the catalyst particles and the wire indicate that they grow from edges between facets. Studies on reduction of the Co 3O4 nanoparticle precursors to the faceted pure cobalt catalysts were carried out using DTEM and in situ heating. Supported catalyst systems for methane reforming were studied using dark field scanning TEM to better understand sintering effects and the increased activity of Ni/Co catalysts supported by carbon nanotubes. Several novel electron microscopy techniques are described including annular dark field DTEM and a metaheuristic algorithm for solving the phase problem of coherent diffractive imaging. By inserting an annular dark field aperture into the back focal plane of the objective lens in a DTEM, time-resolved dark field images can be produced that have vastly improved contrast for supported catalyst materials compared to bright field DTEM imaging. A new algorithm called swarm optimized phase retrieval is described that uses a population-based approach to solve for the missing phases of diffraction data from discrete particles.
Joerger, Markus; Ferreri, Andrés J M; Krähenbühl, Stephan; Schellens, Jan H M; Cerny, Thomas; Zucca, Emanuele; Huitema, Alwin D R
2012-02-01
There is no consensus regarding optimal dosing of high dose methotrexate (HDMTX) in patients with primary CNS lymphoma. Our aim was to develop a convenient dosing algorithm to target AUC(MTX) in the range between 1000 and 1100 µmol l(-1) h. A population covariate model from a pooled dataset of 131 patients receiving HDMTX was used to simulate concentration-time curves of 10,000 patients and test the efficacy of a dosing algorithm based on 24 h MTX plasma concentrations to target the prespecified AUC(MTX) . These data simulations included interindividual, interoccasion and residual unidentified variability. Patients received a total of four simulated cycles of HDMTX and adjusted MTX dosages were given for cycles two to four. The dosing algorithm proposes MTX dose adaptations ranging from +75% in patients with MTX C(24) < 0.5 µmol l(-1) up to -35% in patients with MTX C(24) > 12 µmol l(-1). The proposed dosing algorithm resulted in a marked improvement of the proportion of patients within the AUC(MTX) target between 1000 and 1100 µmol l(-1) h (11% with standard MTX dose, 35% with the adjusted dose) and a marked reduction of the interindividual variability of MTX exposure. A simple and practical dosing algorithm for HDMTX has been developed based on MTX 24 h plasma concentrations, and its potential efficacy in improving the proportion of patients within a prespecified target AUC(MTX) and reducing the interindividual variability of MTX exposure has been shown by data simulations. The clinical benefit of this dosing algorithm should be assessed in patients with primary central nervous system lymphoma (PCNSL). © 2011 The Authors. British Journal of Clinical Pharmacology © 2011 The British Pharmacological Society.
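A heavily hedged sketch of a C24-based dose-adjustment lookup: only the two endpoint adjustments quoted above (+75% for C24 below 0.5 µmol/l, -35% for C24 above 12 µmol/l) come from the abstract; the intermediate breakpoints and interpolation below are purely hypothetical placeholders and must not be read as the published, clinically validated algorithm.

```python
# Hypothetical C24-based dose adjustment; only the two endpoint rules are from the
# abstract, everything in between is a placeholder for illustration only.
import math

def dose_adjustment(c24_umol_per_l):
    """Return the fractional dose change for the next HDMTX cycle."""
    if c24_umol_per_l < 0.5:
        return +0.75                       # quoted in the abstract
    if c24_umol_per_l > 12.0:
        return -0.35                       # quoted in the abstract
    # Hypothetical log-linear interpolation between the two endpoints,
    # NOT the validated dosing table from the paper.
    frac = (math.log(c24_umol_per_l) - math.log(0.5)) / (math.log(12.0) - math.log(0.5))
    return 0.75 - frac * (0.75 + 0.35)

for c24 in (0.3, 1.0, 4.0, 15.0):
    next_dose = 3500 * (1 + dose_adjustment(c24))      # assuming a 3500 mg/m^2 base dose
    print(f"C24 = {c24:5.1f} umol/l -> adjusted dose ~ {next_dose:.0f} mg/m^2")
```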
NASA Astrophysics Data System (ADS)
Nishiura, Takanobu; Nakamura, Satoshi
2002-11-01
Capturing distant-talking speech with high quality is very important for a hands-free speech interface. A microphone array is an ideal candidate for this purpose. However, this approach requires localizing the target talker. Conventional talker localization algorithms in multiple sound source environments not only have difficulty localizing the multiple sound sources accurately, but also have difficulty localizing the target talker among known multiple sound source positions. To cope with these problems, we propose a new talker localization algorithm consisting of two algorithms. One is a DOA (direction of arrival) estimation algorithm for multiple sound source localization based on the CSP (cross-power spectrum phase) coefficient addition method. The other is a statistical sound source identification algorithm based on a GMM (Gaussian mixture model) for localizing the target talker position among the localized multiple sound sources. In this paper, we particularly focus on the talker localization performance based on the combination of these two algorithms with a microphone array. We conducted evaluation experiments in real noisy reverberant environments. As a result, we confirmed that multiple sound signals can be accurately identified as "speech" or "non-speech" by the proposed algorithm. [Work supported by ATR and MEXT of Japan.]
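A minimal CSP (cross-power spectrum phase, i.e. GCC-PHAT) sketch for a single microphone pair: the phase-normalized cross spectrum is inverse-transformed and the lag of its peak estimates the time difference of arrival used for DOA estimation. The synthetic signal, sampling rate and delay are assumptions, and the paper's addition of CSP coefficients across pairs is not reproduced.

```python
# CSP / GCC-PHAT time-delay estimation sketch for one microphone pair.
import numpy as np

rng = np.random.default_rng(0)
fs, delay = 16000, 12                                 # sampling rate and true delay (samples)
src = rng.standard_normal(4096)
mic1 = src + 0.05 * rng.standard_normal(src.size)
mic2 = np.roll(src, delay) + 0.05 * rng.standard_normal(src.size)

X1, X2 = np.fft.rfft(mic1), np.fft.rfft(mic2)
cross = np.conj(X1) * X2
csp = np.fft.irfft(cross / (np.abs(cross) + 1e-12))   # phase transform (PHAT) weighting

lags = np.arange(csp.size)
lags[lags > csp.size // 2] -= csp.size                # wrap to signed lags
est = lags[np.argmax(csp)]
print(f"estimated inter-microphone delay: {est} samples (true delay: {delay} samples)")
```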
Optimized programming algorithm for cylindrical and directional deep brain stimulation electrodes.
Anderson, Daria Nesterovich; Osting, Braxton; Vorwerk, Johannes; Dorval, Alan D; Butson, Christopher R
2018-04-01
Deep brain stimulation (DBS) is a growing treatment option for movement and psychiatric disorders. As DBS technology moves toward directional leads with increased numbers of smaller electrode contacts, trial-and-error methods of manual DBS programming are becoming too time-consuming for clinical feasibility. We propose an algorithm to automate DBS programming in near real time for a wide range of DBS lead designs. Magnetic resonance imaging and diffusion tensor imaging are used to build finite element models that include anisotropic conductivity. The algorithm maximizes activation of target tissue and utilizes the Hessian matrix of the electric potential to approximate activation of neurons in all directions. We demonstrate our algorithm's ability in an example programming case that targets the subthalamic nucleus (STN) for the treatment of Parkinson's disease for three lead designs: the Medtronic 3389 (four cylindrical contacts), the direct STNAcute (two cylindrical contacts, six directional contacts), and the Medtronic-Sapiens lead (40 directional contacts). The optimization algorithm returns patient-specific contact configurations in near real time (less than 10 s even for the most complex leads). When the lead was placed centrally in the target STN, the directional leads were able to activate over 50% of the region, whereas the Medtronic 3389 could activate only 40%. When the lead was placed 2 mm lateral to the target, the directional leads performed as well as they did in the central position, but the Medtronic 3389 activated only 2.9% of the STN. This DBS programming algorithm can be applied to cylindrical electrodes as well as novel directional leads that are too complex to be programmed manually with current approaches. This algorithm may reduce clinical programming time and encourage the use of directional leads, since they activate a larger volume of the target area than cylindrical electrodes in both central and off-target lead placements.
Thundat, Thomas G.; Oden, Patrick I.; Datskos, Panagiotis G.
2000-01-01
A non-contact infrared thermometer measures target temperatures remotely without requiring knowledge of the ratio of the target size to the target distance from the thermometer. A collection means collects and focuses target IR radiation on an IR detector. The detector measures thermal energy of the target over a spectrum using micromechanical sensors. A processor means calculates the collected thermal energy in at least two different spectral regions using a first algorithm in program form and further calculates the ratio of the thermal energy in the at least two different spectral regions to obtain the target temperature independent of the target size, distance to the target and emissivity using a second algorithm in program form.
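The two-band ratio principle stated in the claim can be illustrated with an idealized grey-body calculation. The Python sketch below uses the Wien approximation to show how the ratio of signals in two spectral regions yields a temperature independent of target size, distance and emissivity; the band choices and closed-form inversion are illustrative assumptions, not the patented algorithms.

```python
import numpy as np

C2 = 1.4388e-2  # second radiation constant, m*K

def band_signal(wavelength_m, temperature_k):
    """Wien-approximation spectral radiance (the constant c1 cancels in the ratio)."""
    return wavelength_m ** -5 * np.exp(-C2 / (wavelength_m * temperature_k))

def ratio_temperature(s1, s2, lam1, lam2):
    """Recover temperature from the ratio of two narrow-band signals.

    Because the ratio removes the common geometric and (grey-body) emissivity
    factor, the result is independent of target size and distance.
    """
    r = s1 / s2
    return C2 * (1.0 / lam1 - 1.0 / lam2) / (5.0 * np.log(lam2 / lam1) - np.log(r))

lam1, lam2, true_t = 4e-6, 10e-6, 500.0            # illustrative bands and temperature
s1, s2 = band_signal(lam1, true_t), band_signal(lam2, true_t)
print(ratio_temperature(s1, s2, lam1, lam2))        # ~500 K, independent of scaling of s1, s2
```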
Automatic detection of Martian dark slope streaks by machine learning using HiRISE images
NASA Astrophysics Data System (ADS)
Wang, Yexin; Di, Kaichang; Xin, Xin; Wan, Wenhui
2017-07-01
Dark slope streaks (DSSs) on the Martian surface are one of the active geologic features that can be observed on Mars nowadays. The detection of DSS is a prerequisite for studying its appearance, morphology, and distribution to reveal its underlying geological mechanisms. In addition, increasingly massive amounts of Mars high resolution data are now available. Hence, an automatic detection method for locating DSSs is highly desirable. In this research, we present an automatic DSS detection method by combining interest region extraction and machine learning techniques. The interest region extraction combines gradient and regional grayscale information. Moreover, a novel recognition strategy is proposed that takes the normalized minimum bounding rectangles (MBRs) of the extracted regions to calculate the Local Binary Pattern (LBP) feature and train a DSS classifier using the Adaboost machine learning algorithm. Comparative experiments using five different feature descriptors and three different machine learning algorithms show the superiority of the proposed method. Experimental results utilizing 888 extracted region samples from 28 HiRISE images show that the overall detection accuracy of our proposed method is 92.4%, with a true positive rate of 79.1% and false positive rate of 3.7%, which in particular indicates great performance of the method at eliminating non-DSS regions.
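A minimal sketch of the LBP-plus-AdaBoost recognition stage is given below, assuming the candidate minimum bounding rectangles have already been extracted and resampled to a fixed size; the LBP parameters and the number of boosting rounds are placeholders rather than the values used in the paper.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.ensemble import AdaBoostClassifier

def lbp_histogram(patch, p=8, r=1):
    """Uniform LBP histogram of a normalized minimum-bounding-rectangle patch."""
    lbp = local_binary_pattern(patch, p, r, method="uniform")
    hist, _ = np.histogram(lbp, bins=p + 2, range=(0, p + 2), density=True)
    return hist

def train_dss_classifier(patches, labels):
    """patches: list of 2-D grayscale candidate regions; labels: 1 = DSS, 0 = non-DSS."""
    features = np.array([lbp_histogram(p) for p in patches])
    clf = AdaBoostClassifier(n_estimators=200)   # boosting rounds are a placeholder
    clf.fit(features, labels)
    return clf
```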
NASA Astrophysics Data System (ADS)
Forte, Paulo M. F.; Felgueiras, P. E. R.; Ferreira, Flávio P.; Sousa, M. A.; Nunes-Pereira, Eduardo J.; Bret, Boris P. J.; Belsley, Michael S.
2017-01-01
An automatic optical inspection system for detecting local defects on specular surfaces is presented. The system uses an image display to produce a sequence of structured diffuse illumination patterns and a digital camera to acquire the corresponding sequence of images. An image enhancement algorithm, which measures the local intensity variations between bright- and dark-field illumination conditions, yields a final image in which the defects are revealed with high contrast. Subsequently, an image segmentation algorithm, which statistically compares the enhanced image of the inspected surface with the corresponding image of a defect-free template, separates defects from non-defects with an adjustable decision threshold. The method can be applied to shiny surfaces of any material including metal, plastic and glass. The described method was tested on the plastic surface of a car dashboard system. We were able to detect not only scratches but also dust and fingerprints. In our experiment we observed a detection contrast increase from about 40%, when using an extended light source, to more than 90% when using a structured light source. The presented method is simple, robust and can be carried out with short cycle times, making it appropriate for applications in industrial environments.
Wang, Zhirui; Xu, Jia; Huang, Zuzhen; Zhang, Xudong; Xia, Xiang-Gen; Long, Teng; Bao, Qian
2016-03-16
To detect and estimate ground slowly moving targets in airborne single-channel synthetic aperture radar (SAR), a road-aided ground moving target indication (GMTI) algorithm is proposed in this paper. First, the road area is extracted from a focused SAR image based on radar vision. Second, after stationary clutter suppression in the range-Doppler domain, a moving target is detected and located in the image domain via the watershed method. The target's position on the road as well as its radial velocity can be determined according to the target's offset distance and traffic rules. Furthermore, the target's azimuth velocity is estimated based on the road slope obtained via polynomial fitting. Compared with the traditional algorithms, the proposed method can effectively cope with slowly moving targets partly submerged in a stationary clutter spectrum. In addition, the proposed method can be easily extended to a multi-channel system to further improve the performance of clutter suppression and motion estimation. Finally, the results of numerical experiments are provided to demonstrate the effectiveness of the proposed algorithm.
The Dark Energy Spectroscopic Instrument (DESI)
NASA Astrophysics Data System (ADS)
Flaugher, Brenna; Bebek, Chris
2014-07-01
The Dark Energy Spectroscopic Instrument (DESI) is a Stage IV ground-based dark energy experiment that will study baryon acoustic oscillations (BAO) and the growth of structure through redshift-space distortions with a wide-area galaxy and quasar spectroscopic redshift survey. The DESI instrument consists of a new wide-field (3.2 deg. linear field of view) corrector plus a multi-object spectrometer with up to 5000 robotically positioned optical fibers and will be installed at prime focus on the Mayall 4m telescope at Kitt Peak, Arizona. The fibers feed 10 three-arm spectrographs producing spectra that cover a wavelength range from 360-980 nm and have resolution of 2000-5500 depending on the wavelength. The DESI instrument is designed for a 14,000 sq. deg. multi-year survey of targets that trace the evolution of dark energy out to redshift 3.5 using the redshifts of luminous red galaxies (LRGs), emission line galaxies (ELGs) and quasars. DESI is the successor to the successful Stage-III BOSS spectroscopic redshift survey and complements imaging surveys such as the Stage-III Dark Energy Survey (DES, currently operating) and the Stage-IV Large Synoptic Survey Telescope (LSST, planned start early in the next decade).
A Three-Dimensional Target Depth-Resolution Method with a Single-Vector Sensor
Zhao, Anbang; Bi, Xuejie; Hui, Juan; Zeng, Caigao; Ma, Lin
2018-01-01
This paper mainly studies and verifies the target number category-resolution method in multi-target cases and the target depth-resolution method of aerial targets. Firstly, target depth resolution is performed by using the sign distribution of the reactive component of the vertical complex acoustic intensity; the target category and the number resolution in multi-target cases is realized with a combination of the bearing-time recording information; and the corresponding simulation verification is carried out. The algorithm proposed in this paper can distinguish between the single-target multi-line spectrum case and the multi-target multi-line spectrum case. This paper presents an improved azimuth-estimation method for multi-target cases, which makes the estimation results more accurate. Using the Monte Carlo simulation, the feasibility of the proposed target number and category-resolution algorithm in multi-target cases is verified. In addition, by studying the field characteristics of the aerial and surface targets, the simulation results verify that there is only amplitude difference between the aerial target field and the surface target field under the same environmental parameters, and an aerial target can be treated as a special case of a surface target; the aerial target category resolution can then be realized based on the sign distribution of the reactive component of the vertical acoustic intensity so as to realize three-dimensional target depth resolution. By processing data from a sea experiment, the feasibility of the proposed aerial target three-dimensional depth-resolution algorithm is verified. PMID:29649173
Respiration-rate estimation of a moving target using impulse-based ultra wideband radars.
Sharafi, Azadeh; Baboli, Mehran; Eshghi, Mohammad; Ahmadian, Alireza
2012-03-01
Recently, ultra-wideband (UWB) signals have become attractive for their particular advantages of high spatial resolution and good penetration ability, which make them suitable for medical applications. One of these applications is wireless detection of heart rate and respiration rate. Two assumptions, a static environment and a fixed patient, are made in the methods presented in previous literature, and they are not valid for long-term monitoring of ambulant patients. In this article, a new method to detect the respiration rate of a moving target is presented. The first algorithm is applied to simulated and experimental data for detecting the respiration rate of a fixed target. Then, the second algorithm is developed to detect the respiration rate of a moving target. The proposed algorithm uses correlation for body movement cancellation and then detects the respiration rate based on energy in the frequency domain. The results show an accuracy of 98.4% and 97% on simulated and experimental data, respectively.
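A minimal Python sketch of the second algorithm's two stages, correlation-based movement cancellation followed by a spectral-energy rate estimate, is shown below; the frame alignment strategy, the respiration band and the fast-time collapse are simplifying assumptions, not the authors' exact processing chain.

```python
import numpy as np

def respiration_rate(frames, frame_rate_hz, band=(0.1, 0.7)):
    """Estimate respiration rate from UWB radar frames (slow time x fast time).

    Body movement is roughly cancelled by cross-correlating every fast-time
    frame with the first one and undoing the estimated range shift; the rate
    is then the dominant spectral peak inside an assumed respiration band.
    """
    ref = frames[0]
    aligned = []
    for frame in frames:
        corr = np.correlate(frame, ref, mode="full")
        shift = np.argmax(corr) - (len(ref) - 1)
        aligned.append(np.roll(frame, -shift))
    slow_time = np.array(aligned).sum(axis=1)        # collapse fast time (assumption)
    slow_time -= slow_time.mean()

    spectrum = np.abs(np.fft.rfft(slow_time)) ** 2   # energy in the frequency domain
    freqs = np.fft.rfftfreq(len(slow_time), d=1.0 / frame_rate_hz)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    peak_hz = freqs[mask][np.argmax(spectrum[mask])]
    return peak_hz * 60.0                            # breaths per minute
```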
FPGA based hardware optimized implementation of signal processing system for LFM pulsed radar
NASA Astrophysics Data System (ADS)
Azim, Noor ul; Jun, Wang
2016-11-01
Signal processing is one of the main parts of any radar system. Different signal processing algorithms are used to extract information about target parameters such as range, speed and direction in the field of radar communication. This paper presents LFM (Linear Frequency Modulation) pulsed radar signal processing algorithms which are used to improve target detection, improve range resolution and estimate the speed of a target. Firstly, these algorithms are simulated in MATLAB to verify the concept and theory. After the conceptual verification in MATLAB, the simulation is converted into a hardware implementation on a Xilinx FPGA. The chosen FPGA is a Xilinx Virtex-6 (XC6LVX75T). For the hardware implementation, pipelining and other resource optimizations are adopted. The algorithms addressed in this work for improving target detection, range resolution and speed estimation are hardware-optimized, fast-convolution-based pulse compression and pulse-Doppler processing.
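Before committing such algorithms to FPGA fabric, the fast-convolution pulse compression step is easily prototyped in software. The following Python sketch shows the matched-filter operation (and, in a comment, the subsequent pulse-Doppler FFT); waveform parameters are illustrative and not taken from the paper.

```python
import numpy as np

def lfm_chirp(bandwidth_hz, pulse_width_s, fs_hz):
    """Reference LFM (chirp) pulse used as the matched-filter template."""
    t = np.arange(0, pulse_width_s, 1.0 / fs_hz)
    k = bandwidth_hz / pulse_width_s                 # chirp rate
    return np.exp(1j * np.pi * k * t ** 2)

def pulse_compress(echo, reference):
    """Fast-convolution matched filtering (pulse compression) via FFTs,
    the operation a pipelined FPGA implementation realizes with streaming
    FFT/IFFT cores."""
    n = len(echo) + len(reference) - 1
    nfft = 1 << (n - 1).bit_length()                 # next power of two
    return np.fft.ifft(np.fft.fft(echo, nfft) *
                       np.conj(np.fft.fft(reference, nfft)))[:n]

# A pulse-Doppler map is then an FFT across pulses of the compressed returns:
# range_doppler = np.fft.fftshift(np.fft.fft(compressed_pulses, axis=0), axes=0)
```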
Circadian oscillations of cytosolic and chloroplastic free calcium in plants
NASA Technical Reports Server (NTRS)
Johnson, C. H.; Knight, M. R.; Kondo, T.; Masson, P.; Sedbrook, J.; Haley, A.; Trewavas, A.
1995-01-01
Tobacco and Arabidopsis plants, expressing a transgene for the calcium-sensitive luminescent protein apoaequorin, revealed circadian oscillations in free cytosolic calcium that can be phase-shifted by light-dark signals. When apoaequorin was targeted to the chloroplast, circadian chloroplast calcium rhythms were likewise observed after transfer of the seedlings to constant darkness. Circadian oscillations in free calcium concentrations can be expected to control many calcium-dependent enzymes and processes accounting for circadian outputs. Regulation of calcium flux is therefore fundamental to the organization of circadian systems.
Research on Aircraft Target Detection Algorithm Based on Improved Radial Gradient Transformation
NASA Astrophysics Data System (ADS)
Zhao, Z. M.; Gao, X. M.; Jiang, D. N.; Zhang, Y. Q.
2018-04-01
To address the problem that targets may appear at different orientations in unmanned aerial vehicle (UAV) images, target detection based on rotation-invariant features is studied, and this paper proposes an accelerated RIFF (Rotation-Invariant Fast Features) method, based on look-up tables and polar coordinates, for aircraft target detection. Experiments show that the detection performance of this method is essentially equal to that of the original RIFF, while the computational efficiency is greatly improved.
NASA Astrophysics Data System (ADS)
Ayyad, Yassid; Mittig, Wolfgang; Bazin, Daniel; Beceiro-Novo, Saul; Cortesi, Marco
2018-02-01
The three-dimensional reconstruction of particle tracks in a time projection chamber is a challenging task that requires advanced classification and fitting algorithms. In this work, we have developed and implemented a novel algorithm based on the Random Sample Consensus Model (RANSAC). The RANSAC is used to classify tracks including pile-up, to remove uncorrelated noise hits, as well as to reconstruct the vertex of the reaction. The algorithm, developed within the Active Target Time Projection Chamber (AT-TPC) framework, was tested and validated by analyzing the 4He+4He reaction. Results, performance and quality of the proposed algorithm are presented and discussed in detail.
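A minimal Python sketch of the RANSAC idea for a single straight track is shown below; the full AT-TPC implementation additionally iterates to handle pile-up (removing the inliers of each accepted track and refitting), reconstructs the reaction vertex, and operates on time-projection-chamber hits, none of which is shown. The tolerance and iteration count are placeholders.

```python
import numpy as np

def ransac_line_3d(points, n_iter=500, inlier_tol=2.0, rng=None):
    """Fit a 3-D line to hit points with RANSAC: repeatedly pick two hits,
    form the candidate line, and keep the model with the most inliers."""
    rng = np.random.default_rng(rng)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_model = None
    for _ in range(n_iter):
        i, j = rng.choice(len(points), size=2, replace=False)
        p0, p1 = points[i], points[j]
        direction = p1 - p0
        norm = np.linalg.norm(direction)
        if norm == 0:
            continue
        direction /= norm
        # perpendicular distance of every point to the candidate line
        diff = points - p0
        dist = np.linalg.norm(np.cross(diff, direction), axis=1)
        inliers = dist < inlier_tol
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (p0, direction)
    return best_model, best_inliers
```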
New Models and Methods for the Electroweak Scale
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carpenter, Linda
2017-09-26
This is the Final Technical Report to the US Department of Energy for grant DE-SC0013529, New Models and Methods for the Electroweak Scale, covering the time period April 1, 2015 to March 31, 2017. The goal of this project was to maximize the understanding of fundamental weak scale physics in light of current experiments, mainly the ongoing run of the Large Hadron Collider and the space-based satellite experiments searching for signals of Dark Matter annihilation or decay. This research program focused on the phenomenology of supersymmetry, Higgs physics, and Dark Matter. The properties of the Higgs boson are currently being measured by the Large Hadron Collider, and could be a sensitive window into new physics at the weak scale. Supersymmetry is the leading theoretical candidate to explain the naturalness of the electroweak theory; however, new model space must be explored as the Large Hadron Collider has disfavored much of the minimal model parameter space. In addition, the nature of Dark Matter, the mysterious particle that makes up 25% of the mass of the universe, is still unknown. This project sought to address measurements of the Higgs boson couplings to the Standard Model particles, new LHC discovery scenarios for supersymmetric particles, and new measurements of Dark Matter interactions with the Standard Model both in collider production and annihilation in space. Accomplishments include creating new tools for analyses of Dark Matter models in which Dark Matter annihilates into multiple Standard Model particles, including new visualizations of bounds for models with various Dark Matter branching ratios; benchmark studies for new discovery scenarios of Dark Matter at the Large Hadron Collider for Higgs-Dark Matter and gauge boson-Dark Matter interactions; new analyses targeting direct decays of the Higgs boson into challenging final states such as pairs of light jets; and new phenomenological analysis of non-minimal supersymmetric models, namely the set of Dirac Gaugino models.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ritz, Steve; Jeltema, Tesla
One of the greatest mysteries in modern cosmology is the fact that the expansion of the universe is observed to be accelerating. This acceleration may stem from dark energy, an additional energy component of the universe, or may indicate that the theory of general relativity is incomplete on cosmological scales. The growth rate of large-scale structure in the universe and particularly the largest collapsed structures, clusters of galaxies, is highly sensitive to the underlying cosmology. Clusters will provide one of the single most precise methods of constraining dark energy with the ongoing Dark Energy Survey (DES). The accuracy of the cosmological constraints derived from DES clusters necessarily depends on having an optimized and well-calibrated algorithm for selecting clusters as well as an optical richness estimator whose mean relation and scatter compared to cluster mass are precisely known. Calibrating the galaxy cluster richness-mass relation and its scatter was the focus of the funded work. Specifically, we employ X-ray observations and optical spectroscopy with the Keck telescopes of optically-selected clusters to calibrate the relationship between optical richness (the number of galaxies in a cluster) and underlying mass. This work also probes aspects of cluster selection like the accuracy of cluster centering which are critical to weak lensing cluster studies.
Agent Collaborative Target Localization and Classification in Wireless Sensor Networks
Wang, Xue; Bi, Dao-wei; Ding, Liang; Wang, Sheng
2007-01-01
Wireless sensor networks (WSNs) are autonomous networks that have been frequently deployed to collaboratively perform target localization and classification tasks. Their autonomous and collaborative features resemble the characteristics of agents. Such similarities inspire the development of heterogeneous agent architecture for WSN in this paper. The proposed agent architecture views WSN as multi-agent systems and mobile agents are employed to reduce in-network communication. According to the architecture, an energy based acoustic localization algorithm is proposed. In localization, estimate of target location is obtained by steepest descent search. The search algorithm adapts to measurement environments by dynamically adjusting its termination condition. With the agent architecture, target classification is accomplished by distributed support vector machine (SVM). Mobile agents are employed for feature extraction and distributed SVM learning to reduce communication load. Desirable learning performance is guaranteed by combining support vectors and convex hull vectors. Fusion algorithms are designed to merge SVM classification decisions made from various modalities. Real world experiments with MICAz sensor nodes are conducted for vehicle localization and classification. Experimental results show the proposed agent architecture remarkably facilitates WSN designs and algorithm implementation. The localization and classification algorithms also prove to be accurate and energy efficient.
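The energy-based localization step can be sketched as a least-squares fit of an inverse-square propagation model, minimized by steepest descent with a simple termination test in the spirit of the adaptive condition described above. The propagation model, numerical gradient, step size and stopping rule below are illustrative assumptions rather than the paper's exact formulation, and the step size may need tuning to the energy scale.

```python
import numpy as np

def energy_residual(xy, sensors, energies):
    """Least-squares mismatch between measured acoustic energies and an
    inverse-square propagation model (source power fitted in closed form)."""
    d2 = np.sum((sensors - xy) ** 2, axis=1) + 1e-9
    model = 1.0 / d2
    s_hat = energies @ model / (model @ model)       # best-fit source power
    return np.sum((energies - s_hat * model) ** 2)

def locate_source(sensors, energies, x0, step=0.05, tol=1e-6, max_iter=5000):
    """Steepest-descent search for the 2-D source position with a numerical
    gradient and a relative-improvement termination test."""
    x = np.asarray(x0, dtype=float)
    prev = energy_residual(x, sensors, energies)
    for _ in range(max_iter):
        grad = np.zeros(2)
        for k in range(2):
            e = np.zeros(2); e[k] = 1e-4
            grad[k] = (energy_residual(x + e, sensors, energies) -
                       energy_residual(x - e, sensors, energies)) / 2e-4
        x -= step * grad
        cur = energy_residual(x, sensors, energies)
        if abs(prev - cur) < tol * max(prev, 1e-12):
            break
        prev = cur
    return x
```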
Optical Guidance for a Robotic Submarine
NASA Astrophysics Data System (ADS)
Schulze, Karl R.; LaFlash, Chris
2002-11-01
There is a need for autonomous submarines that can quickly and safely complete jobs, such as the recovery of a downed aircraft's black box recorder. In order to complete this feat, it is necessary to use an optical processing algorithm that distinguishes a desired target and uses the feedback from the algorithm to retrieve the target. The algorithm itself uses many bit mask filters for particle information, and then uses a unique reconstruction method in order to resolve complete objects. The algorithm has been extensively tested on an AUV platform, and proven to succeed repeatedly in approximately five or more feet of water clarity.
A Circuit-Based Quantum Algorithm Driven by Transverse Fields for Grover's Problem
NASA Technical Reports Server (NTRS)
Jiang, Zhang; Rieffel, Eleanor G.; Wang, Zhihui
2017-01-01
We designed a quantum search algorithm, giving the same quadratic speedup achieved by Grover's original algorithm; we replace Grover's diffusion operator (hard to implement) with a product diffusion operator generated by transverse fields (easy to implement). In our algorithm, the problem Hamiltonian (oracle) and the transverse fields are applied to the system alternatively. We construct such a sequence that the corresponding unitary generates a closed transition between the initial state (even superposition of all states) and a modified target state, which has a high degree of overlap with the original target state.
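The alternating structure described above, oracle phase flips interleaved with a product of single-qubit transverse-field rotations, can be simulated directly on a small state vector. The Python sketch below shows only this structure; the angle schedule is an arbitrary placeholder and does not reproduce the specific sequence the paper constructs to obtain the quadratic speedup.

```python
import numpy as np
from functools import reduce

def kron_all(mats):
    return reduce(np.kron, mats)

n = 3                                    # qubits (8 basis states)
target = 5                               # index of the marked state
dim = 2 ** n

# Oracle: phase flip on the marked state (the problem Hamiltonian step).
oracle = np.eye(dim, dtype=complex)
oracle[target, target] = -1.0

def mixer(beta):
    """Product of single-qubit transverse-field rotations exp(-i*beta*X),
    the 'easy to implement' replacement for Grover's diffusion operator."""
    x = np.array([[0, 1], [1, 0]], dtype=complex)
    rx = np.cos(beta) * np.eye(2) - 1j * np.sin(beta) * x
    return kron_all([rx] * n)

state = np.full(dim, 1.0 / np.sqrt(dim), dtype=complex)   # even superposition
betas = np.linspace(0.3, 0.9, 6)          # placeholder schedule, not the paper's sequence
for beta in betas:
    state = mixer(beta) @ (oracle @ state)
print(abs(state[target]) ** 2)            # probability of the marked state after the sweep
```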
NASA Astrophysics Data System (ADS)
Gedalin, Daniel; Oiknine, Yaniv; August, Isaac; Blumberg, Dan G.; Rotman, Stanley R.; Stern, Adrian
2017-04-01
Compressive sensing theory was proposed to deal with the high quantity of measurements demanded by traditional hyperspectral systems. Recently, a compressive spectral imaging technique dubbed compressive sensing miniature ultraspectral imaging (CS-MUSI) was presented. This system uses a voltage-controlled liquid crystal device to create multiplexed hyperspectral cubes. We evaluate the utility of the data captured using the CS-MUSI system for the task of target detection. Specifically, we compare the performance of the matched filter target detection algorithm on traditional hyperspectral data and on CS-MUSI multiplexed hyperspectral cubes. We found that the target detection algorithm performs similarly in both cases, despite the fact that the amount of CS-MUSI data is up to an order of magnitude smaller than that of conventional hyperspectral cubes. Moreover, the target detection is approximately an order of magnitude faster on CS-MUSI data.
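For reference, a minimal Python sketch of the classical spectral matched filter used in the comparison is given below, applied pixel-wise to a hyperspectral cube; the regularization and normalization choices are assumptions, and the CS-MUSI-specific multiplexing model is not shown.

```python
import numpy as np

def matched_filter_scores(cube, target_spectrum):
    """Classical spectral matched filter, score proportional to
    (t - m)^T C^-1 (x - m), applied pixel-wise to a cube of shape
    (rows, cols, bands) and normalized so the target spectrum scores 1."""
    rows, cols, bands = cube.shape
    pixels = cube.reshape(-1, bands).astype(float)
    mean = pixels.mean(axis=0)
    cov = np.cov(pixels, rowvar=False) + 1e-6 * np.eye(bands)  # regularized background covariance
    w = np.linalg.solve(cov, target_spectrum - mean)
    w /= (target_spectrum - mean) @ w
    scores = (pixels - mean) @ w
    return scores.reshape(rows, cols)
```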
NASA Astrophysics Data System (ADS)
Shi, Yi Fang; Park, Seung Hyo; Song, Taek Lyul
2017-12-01
Target tracking using multistatic passive radar in a digital audio/video broadcast (DAB/DVB) network with illuminators of opportunity faces two main challenges. The first is that the measurement-to-illuminator association ambiguity must be resolved in addition to the conventional association ambiguity between measurements and targets, which introduces a significantly more complex three-dimensional (3-D) data association problem among targets, measurements and illuminators; this arises because all the illuminators transmit signals at the same carrier frequency, so signals transmitted by different illuminators but reflected by the same target become indistinguishable. The second is that only bistatic range and range-rate measurements are available, while angle information is unavailable or of very poor quality. In this paper, the authors propose a new target tracking algorithm that operates directly in 3-D Cartesian coordinates, with track management based on the probability of target existence as a track quality measure. The proposed algorithm, termed sequential processing-joint integrated probabilistic data association (SP-JIPDA), applies a modified sequential processing technique to resolve the additional association ambiguity between measurements and illuminators: the JIPDA tracker is operated sequentially to update each track for each illuminator with all the measurements in the common measurement set at each time. For a fair comparison, the existing modified joint probabilistic data association (MJPDA) algorithm, which addresses the 3-D data association problem via "supertargets" using gate grouping and provides tracks directly in 3-D Cartesian coordinates, is enhanced by incorporating the probability of target existence as an effective track quality measure for track management. Both algorithms handle nonlinear observations using extended Kalman filtering. A simulation study is performed to verify the superiority of the proposed SP-JIPDA algorithm over the MJIPDA in this multistatic passive radar system.
Effects of Cerebellar Disease on Sequences of Rapid Eye Movements
King, Susan; Chen, Athena L.; Joshi, Anand; Serra, Alessandro; Leigh, R. John
2011-01-01
Studying saccades can illuminate the more complex decision-making processes required for everyday movements. The double-step task, in which a target jumps to two successive locations before the subject has time to react, has proven a powerful research tool to investigate the brain’s ability to program sequential responses. We asked how patients with a range of cerebellar disorders responded to the double-step task, specifically, whether the initial saccadic response made to a target is affected by the appearance of a second target jump. We also sought to determine whether cerebellar patients were able to make corrective saccades towards the remembered second target location, if it were turned off soon after presentation. We tested saccades to randomly interleaved single- and double-step target jumps to eight locations on a circle. Patients’ initial responses to double-step stimuli showed 50% more error than saccades to single target jumps, and often, they failed to make a saccade to the first target jump. The presence of a second target jump had similar, but smaller effects in control subjects (error increased by 18%). During memory-guided double-step trials, both patients and controls made corrective saccades in darkness to the remembered location of the second jump. We conclude that in cerebellar patients, the second target jump interferes with programming of the saccade to the first target jump of a double-step stimulus; this defect highlights patients’ impaired ability to respond appropriately to sudden, conflicting changes in their environment. Conversely, since cerebellar patients can make corrective memory-guided saccades in darkness, they retain the ability to remember spatial locations, possibly due to non-retinal neural signals (corollary discharge) from cerebral hemispheric areas concerned with spatial localization. PMID:21385592
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feng, Z; Yu, G; Qin, S
Purpose: This study investigated how the quality of the adapted plan was affected by inter-fractional anatomy deformation when using one-step and two-step optimization for an online adaptive radiotherapy (ART) procedure. Methods: 10 lung carcinoma patients were chosen randomly to produce IMRT plans with one-step and two-step algorithms, respectively, and the prescribed dose was set as 60 Gy to the planning target volume (PTV) for all patients. To simulate inter-fractional target deformation, four specific cases were created by systematic anatomy variation, including a 0.5 cm superior target shift, 0.3 cm contraction, 0.3 cm expansion and 45-degree rotation. Based on these four anatomy deformations, adapted, regenerated and non-adapted plans were created to evaluate the quality of adaptation. Adapted plans were generated automatically by using the one-step and two-step algorithms, respectively, to optimize the original plans, and regenerated plans were created manually by experienced physicists. Non-adapted plans were produced by recalculating the dose distribution based on the corresponding original plans. The deviations among these three plans were statistically analyzed by paired t-test. Results: In the PTV superior shift case, adapted plans had significantly better PTV coverage with the two-step algorithm than with the one-step algorithm, and there was a significant difference in V95 between adapted and non-adapted plans (p=0.0025). In the target contraction case, with almost the same PTV coverage, the total lung received a lower dose with the one-step algorithm than with the two-step algorithm (p=0.0143 and 0.0126 for V20 and Dmean, respectively). In the other two deformation cases, no significant differences were observed between the two optimization algorithms. Conclusion: For geometric deformations such as target contraction, with comparable PTV coverage, the one-step algorithm gave better OAR sparing than the two-step algorithm. Conversely, adaptation using the two-step algorithm had higher efficiency and accuracy when the target underwent positional displacement. We want to thank Dr. Lei Xing and Dr. Yong Yang in the Stanford University School of Medicine for this work. This work was jointly supported by NSFC (61471226), Natural Science Foundation for Distinguished Young Scholars of Shandong Province (JQ201516), and China Postdoctoral Science Foundation (2015T80739, 2014M551949).
Zhu, Wei; Wang, Wei; Yuan, Gannan
2016-06-01
In order to improve the tracking accuracy, model estimation accuracy and response speed of multiple-model maneuvering target tracking, the interacting multiple model five-degree cubature Kalman filter (IMM5CKF) is proposed in this paper. In the proposed algorithm, the interacting multiple model (IMM) algorithm processes all the models through a Markov chain to enhance the model tracking accuracy of target tracking. Then a five-degree cubature Kalman filter (5CKF) evaluates the surface integral by a higher-degree but deterministic odd-ordered spherical cubature rule to improve the tracking accuracy and the model switch sensitivity of the IMM algorithm. Finally, the simulation results demonstrate that the proposed algorithm exhibits quick and smooth switching when handling different maneuver models, and it also performs better than the interacting multiple model cubature Kalman filter (IMMCKF), the interacting multiple model unscented Kalman filter (IMMUKF), the 5CKF and the optimal mode transition matrix IMM (OMTM-IMM).
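The IMM interaction step that blends the models through the Markov chain can be written compactly. The Python sketch below shows the standard mixing and model-probability-update equations, with the per-model filters (the 5CKF in the paper) abstracted away as externally supplied means, covariances and measurement likelihoods.

```python
import numpy as np

def imm_mix(means, covs, mu, trans):
    """One IMM interaction (mixing) step: combine each model's estimate
    according to the Markov transition matrix before running the filters.

    means, covs: per-model state estimates and covariances (lists of arrays)
    mu:          current model probabilities, shape (M,)
    trans:       Markov transition matrix, trans[i, j] = P(model j | model i)
    """
    c = trans.T @ mu                              # predicted model probabilities c_j
    mix = (trans * mu[:, None]) / c[None, :]      # mixing probabilities mu_{i|j}
    mixed_means, mixed_covs = [], []
    for j in range(len(mu)):
        m = sum(mix[i, j] * means[i] for i in range(len(mu)))
        p = sum(mix[i, j] * (covs[i] + np.outer(means[i] - m, means[i] - m))
                for i in range(len(mu)))
        mixed_means.append(m); mixed_covs.append(p)
    return mixed_means, mixed_covs, c

def imm_update_probs(c, likelihoods):
    """Update model probabilities from each filter's measurement likelihood."""
    mu = c * likelihoods
    return mu / mu.sum()
```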
An improved KCF tracking algorithm based on multi-feature and multi-scale
NASA Astrophysics Data System (ADS)
Wu, Wei; Wang, Ding; Luo, Xin; Su, Yang; Tian, Weiye
2018-02-01
The purpose of visual tracking is to associate the target object across continuous video frames. In recent years, methods based on the kernelized correlation filter have become a research hotspot. However, such algorithms still have problems with fast jitter of the video capture equipment and with scale changes of the tracked target. In order to improve the handling of scale changes and the feature description, this paper presents an improved algorithm based on multi-feature fusion and multi-scale transformation. The experimental results show that our method addresses model updating when the target is occluded or changes scale. Under the one-pass evaluation (OPE), the precision is 77.0% and 75.4% and the success rate is 69.7% and 66.4% on the VOT and OTB datasets, respectively. Compared with the best of the existing tracking algorithms, the precision of the algorithm is improved by 6.7% and 6.3%, and the success rate by 13.7% and 14.2%, respectively.
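For context, the kernelized correlation filter core that such trackers build on reduces to a closed-form ridge regression in the Fourier domain. The single-channel Python sketch below shows training and detection; the paper's contributions, multi-feature (HOG plus Color Names) fusion and multi-scale handling, sit on top of this and are not shown. Kernel width and regularization are placeholders.

```python
import numpy as np

def gaussian_correlation(x, z, sigma):
    """Kernel correlation of two single-channel patches over all cyclic shifts,
    computed in the Fourier domain as in KCF-style trackers."""
    xf, zf = np.fft.fft2(x), np.fft.fft2(z)
    cross = np.real(np.fft.ifft2(np.conj(xf) * zf))
    d2 = (np.sum(x ** 2) + np.sum(z ** 2) - 2.0 * cross) / x.size
    return np.exp(-np.maximum(d2, 0) / (sigma ** 2))

def kcf_train(x, y, sigma=0.5, lam=1e-4):
    """Closed-form ridge regression: alpha_f = y_f / (k_xx_f + lambda)."""
    kf = np.fft.fft2(gaussian_correlation(x, x, sigma))
    return np.fft.fft2(y) / (kf + lam)

def kcf_detect(alpha_f, x, z, sigma=0.5):
    """Response map over all cyclic shifts of the search patch z; the peak
    gives the estimated translation."""
    kf = np.fft.fft2(gaussian_correlation(x, z, sigma))
    return np.real(np.fft.ifft2(alpha_f * kf))
```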
Maximum angular accuracy of pulsed laser radar in photocounting limit.
Elbaum, M; Diament, P; King, M; Edelson, W
1977-07-01
To estimate the angular position of targets with pulsed laser radars, their images may be sensed with a four-quadrant noncoherent detector and the image photocounting distribution processed to obtain the angular estimates. The limits imposed on the accuracy of angular estimation by signal and background radiation shot noise, dark current noise, and target cross-section fluctuations are calculated. Maximum likelihood estimates of angular positions are derived for optically rough and specular targets and their performances compared with theoretical lower bounds.
GOCI Yonsei Aerosol Retrieval (YAER) algorithm and validation during DRAGON-NE Asia 2012 campaign
NASA Astrophysics Data System (ADS)
Choi, M.; Kim, J.; Lee, J.; Kim, M.; Park, Y. Je; Jeong, U.; Kim, W.; Holben, B.; Eck, T. F.; Lim, J. H.; Song, C. K.
2015-09-01
The Geostationary Ocean Color Imager (GOCI) onboard the Communication, Ocean, and Meteorology Satellites (COMS) is the first multi-channel ocean color imager in geostationary orbit. Hourly GOCI top-of-atmosphere radiance has been available for the retrieval of aerosol optical properties over East Asia since March 2011. This study presents improvements to the GOCI Yonsei Aerosol Retrieval (YAER) algorithm over ocean and land together with validation results during the DRAGON-NE Asia 2012 campaign. Optical properties of aerosol are retrieved from the GOCI YAER algorithm including aerosol optical depth (AOD) at 550 nm, fine-mode fraction (FMF) at 550 nm, single scattering albedo (SSA) at 440 nm, Angstrom exponent (AE) between 440 and 860 nm, and aerosol type from selected aerosol models in calculating AOD. Assumed aerosol models are compiled from global Aerosol Robotic Networks (AERONET) inversion data, and categorized according to AOD, FMF, and SSA. Nonsphericity is considered, and unified aerosol models are used over land and ocean. Different assumptions for surface reflectance are applied over ocean and land. Surface reflectance over the ocean varies with geometry and wind speed, while surface reflectance over land is obtained from the 1-3 % darkest pixels in a 6 km × 6 km area during 30 days. In the East China Sea and Yellow Sea, significant area is covered persistently by turbid waters, for which the land algorithm is used for aerosol retrieval. To detect turbid water pixels, TOA reflectance difference at 660 nm is used. GOCI YAER products are validated using other aerosol products from AERONET and the MODIS Collection 6 aerosol data from "Dark Target (DT)" and "Deep Blue (DB)" algorithms during the DRAGON-NE Asia 2012 campaign from March to May 2012. Comparison of AOD from GOCI and AERONET gives a Pearson correlation coefficient of 0.885 and a linear regression equation with GOCI AOD =1.086 × AERONET AOD - 0.041. GOCI and MODIS AODs are more highly correlated over ocean than land. Over land, especially, GOCI AOD shows better agreement with MODIS DB than MODIS DT because of the choice of surface reflectance assumptions. Other GOCI YAER products show lower correlation with AERONET than AOD, but are still qualitatively useful.
A novel rotational invariants target recognition method for rotating motion blurred images
NASA Astrophysics Data System (ADS)
Lan, Jinhui; Gong, Meiling; Dong, Mingwei; Zeng, Yiliang; Zhang, Yuzhen
2017-11-01
The image formed by the image sensor is blurred by the rotational motion of the carrier, which greatly reduces the target recognition rate. Although the traditional approach of restoring the image first and then identifying the target can improve the recognition rate, it is time-consuming. In order to solve this problem, a model that extracts rotational-blur invariants and recognizes the target directly was constructed. The model includes three metric layers. The object description capabilities of the metric algorithms in the three layers, a gray-value statistical algorithm, an improved circular projection transformation algorithm and rotation-convolution moment invariants, range from low to high, and the metric layer with the lowest description capability is used as the input, so that non-target pixels are gradually eliminated from the degraded image. Experimental results show that the proposed model can improve the correct target recognition rate for blurred images while achieving a good trade-off between computational complexity and recognition capability over the target region.
A Target Coverage Scheduling Scheme Based on Genetic Algorithms in Directional Sensor Networks
Gil, Joon-Min; Han, Youn-Hee
2011-01-01
As a promising tool for monitoring the physical world, directional sensor networks (DSNs) consisting of a large number of directional sensors are attracting increasing attention. As directional sensors in DSNs have limited battery power and restricted angles of sensing range, maximizing the network lifetime while monitoring all the targets in a given area remains a challenge. A major technique to conserve the energy of directional sensors is to use a node wake-up scheduling protocol by which some sensors remain active to provide sensing services, while the others are inactive to conserve their energy. In this paper, we first address a Maximum Set Covers for DSNs (MSCD) problem, which is known to be NP-complete, and present a greedy algorithm-based target coverage scheduling scheme that can solve this problem by heuristics. This scheme is used as a baseline for comparison. We then propose a target coverage scheduling scheme based on a genetic algorithm that can find the optimal cover sets to extend the network lifetime while monitoring all targets by the evolutionary global search technique. To verify and evaluate these schemes, we conducted simulations and showed that the schemes can contribute to extending the network lifetime. Simulation results indicated that the genetic algorithm-based scheduling scheme had better performance than the greedy algorithm-based scheme in terms of maximizing network lifetime. PMID:22319387
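The greedy baseline can be sketched in a few lines: repeatedly activate the sensor/direction pair that covers the most still-uncovered targets. This builds one cover set; the scheduling scheme rotates through many such cover sets to extend network lifetime, and the genetic-algorithm variant instead evolves a population of candidate cover sets, neither of which is shown. The data structures and the toy example are illustrative.

```python
def greedy_cover_set(sensor_targets, all_targets):
    """Greedy construction of one cover set: keep activating the directional
    sensor (with a chosen working direction) that covers the most uncovered targets."""
    uncovered = set(all_targets)
    active = []
    while uncovered:
        best = max(sensor_targets, key=lambda s: len(sensor_targets[s] & uncovered))
        gain = sensor_targets[best] & uncovered
        if not gain:
            return None                   # remaining targets cannot be covered
        active.append(best)
        uncovered -= gain
    return active

# Hypothetical input: (sensor id, direction) -> set of targets covered in that direction
sensor_targets = {("s1", 0): {"t1", "t2"}, ("s1", 1): {"t3"},
                  ("s2", 0): {"t2", "t3"}, ("s3", 0): {"t1"}}
print(greedy_cover_set(sensor_targets, {"t1", "t2", "t3"}))
```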
Identification of Upward-going Muons for Dark Matter Searches at the NOvA Experiment
NASA Astrophysics Data System (ADS)
Xiao, Liting
2014-03-01
We search for energetic neutrinos that could originate from dark matter particles annihilating in the core of the Sun using the newly built NOvA Far Detector at Fermilab. Only upward-going muons produced via charged-current interactions are selected as signal in order to eliminate backgrounds from cosmic ray muons, which dominate the downward-going flux. We investigate several algorithms so as to develop an effective way of reconstructing the directionality of cosmic tracks at the trigger level. These studies are a crucial part of understanding how NOvA may compete with other experiments that are performing similar searches. In order to be competitive NOvA must be capable of rejecting backgrounds from downward-going cosmic rays with very high efficiency while accepting most upward-going muons. Acknowledgements: The Jefferson Trust, Fermilab, UVA Department of Physics.
The DES Science Verification Weak Lensing Shear Catalogs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jarvis, M.
We present weak lensing shear catalogs for 139 square degrees of data taken during the Science Verification (SV) time for the new Dark Energy Camera (DECam) being used for the Dark Energy Survey (DES). We describe our object selection, point spread function estimation and shear measurement procedures using two independent shear pipelines, IM3SHAPE and NGMIX, which produce catalogs of 2.12 million and 3.44 million galaxies respectively. We also detail a set of null tests for the shear measurements and find that they pass the requirements for systematic errors at the level necessary for weak lensing science applications using the SV data. Furthermore, we discuss some of the planned algorithmic improvements that will be necessary to produce sufficiently accurate shear catalogs for the full 5-year DES, which is expected to cover 5000 square degrees.
The Observatory for Multi-Epoch Gravitational Lens Astrophysics (OMEGA)
NASA Astrophysics Data System (ADS)
Moustakas, Leonidas A.; Bolton, Adam J.; Booth, Jeffrey T.; Bullock, James S.; Cheng, Edward; Coe, Dan; Fassnacht, Christopher D.; Gorjian, Varoujan; Heneghan, Cate; Keeton, Charles R.; Kochanek, Christopher S.; Lawrence, Charles R.; Marshall, Philip J.; Metcalf, R. Benton; Natarajan, Priyamvada; Nikzad, Shouleh; Peterson, Bradley M.; Wambsganss, Joachim
2008-07-01
Dark matter in a universe dominated by a cosmological constant seeds the formation of structure and is the scaffolding for galaxy formation. The nature of dark matter remains one of the fundamental unsolved problems in astrophysics and physics even though it represents 85% of the mass in the universe, and nearly one quarter of its total mass-energy budget. The mass function of dark matter "substructure" on sub-galactic scales may be enormously sensitive to the mass and properties of the dark matter particle. On astrophysical scales, especially at cosmological distances, dark matter substructure may only be detected through its gravitational influence on light from distant varying sources. Specifically, these are largely active galactic nuclei (AGN), which are accreting super-massive black holes in the centers of galaxies, some of the most extreme objects ever found. With enough measurements of the flux from AGN at different wavelengths, and their variability over time, the detailed structure around AGN, and even the mass of the super-massive black hole can be measured. The Observatory for Multi-Epoch Gravitational Lens Astrophysics (OMEGA) is a mission concept for a 1.5-m near-UV through near-IR space observatory that will be dedicated to frequent imaging and spectroscopic monitoring of ~100 multiply-imaged active galactic nuclei over the whole sky. Using wavelength-tailored dichroics with extremely high transmittance, efficient imaging in six channels will be done simultaneously during each visit to each target. The separate spectroscopic mode, engaged through a flip-in mirror, uses an image slicer spectrograph. After a period of many visits to all targets, the resulting multidimensional movies can then be analyzed to a) measure the mass function of dark matter substructure; b) measure precise masses of the accreting black holes as well as the structure of their accretion disks and their environments over several decades of physical scale; and c) measure a combination of Hubble's local expansion constant and cosmological distances to unprecedented precision. We present the novel OMEGA instrumentation suite, and how its integrated design is ideal for opening the time domain of known cosmologically-distant variable sources, to achieve the stated scientific goals.
An algorithm of adaptive scale object tracking in occlusion
NASA Astrophysics Data System (ADS)
Zhao, Congmei
2017-05-01
Although correlation filter-based trackers achieve competitive results in both accuracy and robustness, they still have problems handling scale variation, object occlusion and fast motion. In this paper, a multi-scale kernelized correlation filter algorithm based on a random fern detector is proposed. The tracking task is decomposed into target scale estimation and translation estimation. At the same time, Color Names features and HOG features are fused at the response level to further improve the overall tracking performance of the algorithm. In addition, an online random fern classifier is trained to recover the target after it is lost. Comparisons with algorithms such as KCF, DSST, TLD, MIL, CT and CSK show that the proposed approach estimates the object state accurately and handles object occlusion effectively.
Aircraft Detection in High-Resolution SAR Images Based on a Gradient Textural Saliency Map
Tan, Yihua; Li, Qingyun; Li, Yansheng; Tian, Jinwen
2015-01-01
This paper proposes a new automatic and adaptive aircraft target detection algorithm for high-resolution synthetic aperture radar (SAR) images of airports. The proposed method is based on a gradient textural saliency map under the contextual cues of the apron area. Firstly, candidate regions that may contain aircraft are detected within the apron area. Secondly, a directional local gradient distribution detector is used to obtain a gradient textural saliency map within the candidate regions. Finally, the targets are detected by segmenting the saliency map using a CFAR-type algorithm. Real high-resolution airborne SAR image data are used to verify the proposed algorithm. The results demonstrate that this algorithm can detect aircraft targets quickly and accurately, and decrease the false alarm rate. PMID:26378543
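A minimal sketch of a CFAR-type segmentation of the saliency map is shown below, using a 2-D cell-averaging CFAR; the scaling factor assumes exponentially distributed background statistics, and the guard/training sizes and false-alarm rate are placeholders rather than the paper's settings.

```python
import numpy as np

def ca_cfar(saliency, guard=2, train=8, pfa=1e-3):
    """2-D cell-averaging CFAR on a saliency map: each cell is compared with a
    threshold scaled from the mean of the surrounding training ring."""
    rows, cols = saliency.shape
    n_train = (2 * (guard + train) + 1) ** 2 - (2 * guard + 1) ** 2
    alpha = n_train * (pfa ** (-1.0 / n_train) - 1.0)   # CA-CFAR scaling (exponential clutter assumption)
    detections = np.zeros_like(saliency, dtype=bool)
    r = guard + train
    for i in range(r, rows - r):
        for j in range(r, cols - r):
            window = saliency[i - r:i + r + 1, j - r:j + r + 1].sum()
            guard_sum = saliency[i - guard:i + guard + 1,
                                 j - guard:j + guard + 1].sum()
            noise = (window - guard_sum) / n_train       # local background estimate
            detections[i, j] = saliency[i, j] > alpha * noise
    return detections
```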
Rapid Mapping Of Floods Using SAR Data: Opportunities And Critical Aspects
NASA Astrophysics Data System (ADS)
Pulvirenti, Luca; Pierdicca, Nazzareno; Chini, Marco
2013-04-01
The potentiality of spaceborne Synthetic Aperture Radar (SAR) for flood mapping was demonstrated by several past investigations. The synoptic view, the capability to operate in almost all-weather conditions and during both day time and night time and the sensitivity of the microwave band to water are the key features that make SAR data useful for monitoring inundation events. In addition, their high spatial resolution, which can reach 1m with the new generation of X-band instruments such as TerraSAR-X and COSMO-SkyMed (CSK), allows emergency managers to use flood maps at very high spatial resolution. CSK gives also the possibility of performing frequent observations of regions hit by floods, thanks to the four-satellite constellation. Current research on flood mapping using SAR is focused on the development of automatic algorithms to be used in near real time applications. The approaches are generally based on the low radar return from smooth open water bodies that behave as specular reflectors and appear dark in SAR images. The major advantage of automatic algorithms is the computational efficiency that makes them suitable for rapid mapping purposes. The choice of the threshold value that, in this kind of algorithms, separates flooded from non-flooded areas is a critical aspect because it depends on the characteristics of the observed scenario and on system parameters. To deal with this aspect an algorithm for automatic detection of the regions of low backscatter has been developed. It basically accomplishes three steps: 1) division of the SAR image in a set of non-overlapping sub-images or splits; 2) selection of inhomogeneous sub-images that contain (at least) two populations of pixels, one of which is formed by dark pixels; 3) the application in sequence of an automatic thresholding algorithm and a region growing algorithm in order to produce a homogeneous map of flooded areas. Besides the aforementioned choice of the threshold, rapid mapping of floods may present other critical aspects. Searching for low SAR backscatter areas only may cause inaccuracies because flooded soils do not always act as smooth open water bodies. The presence of wind or of vegetation emerging above the water surface may give rise to an increase of the radar backscatter. In particular, mapping flooded vegetation using SAR data may represent a difficult task since backscattering phenomena in the volume between canopy, trunks and floodwater are quite complex in the presence of vegetation. A typical phenomenon is the double-bounce effect involving soil and stems or trunks, which is generally enhanced by the floodwater, so that flooded vegetation may appear very bright in a SAR image. Even in the absence of dense vegetation or wind, some regions may appear dark because of artefacts due to topography (shadowing), absorption caused by wet snow, and attenuation caused by heavy precipitating clouds (X-band SARs). Examples of the aforementioned effects that may limit the reliability of flood maps will be presented at the conference and some indications to deal with these effects (e.g. presence of vegetation and of artefacts) will be provided.
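A simplified Python sketch of the three-step scheme (tiling, selection of bimodal tiles, thresholding plus region growing) is given below; the tile size, the bimodality test, the way the global threshold is derived and the region-growing tolerance are all illustrative assumptions, not the operational algorithm.

```python
import numpy as np
from skimage.filters import threshold_otsu
from scipy import ndimage

def flood_map(sar_db, split=256, bimodal_gap=3.0, seed_margin=1.0):
    """Tile the image, keep tiles whose histogram contains a clearly darker
    population, derive a global threshold from those tiles, then grow
    connected dark regions from confidently dark seed pixels."""
    candidate_thresholds = []
    for i in range(0, sar_db.shape[0] - split + 1, split):
        for j in range(0, sar_db.shape[1] - split + 1, split):
            tile = sar_db[i:i + split, j:j + split]
            if tile.min() == tile.max():
                continue                                  # skip constant tiles
            t = threshold_otsu(tile)
            dark, bright = tile[tile <= t], tile[tile > t]
            if dark.size and bright.size and bright.mean() - dark.mean() > bimodal_gap:
                candidate_thresholds.append(t)            # tile holds two populations
    if not candidate_thresholds:
        return np.zeros_like(sar_db, dtype=bool)
    thr = float(np.median(candidate_thresholds))

    seeds = sar_db < thr - seed_margin                    # confidently dark pixels
    grown = sar_db < thr                                  # region-growing tolerance
    labels, _ = ndimage.label(grown)
    keep = np.unique(labels[seeds & (labels > 0)])        # keep regions touching a seed
    return np.isin(labels, keep)
```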
NASA Astrophysics Data System (ADS)
Moriya, Gentaro; Chikatsu, Hirofumi
2011-07-01
Recently, the pixel counts and functions of consumer grade digital cameras have increased dramatically thanks to modern semiconductor and digital technology, and many low-priced consumer grade digital cameras with more than 10 megapixels are on the market in Japan. In these circumstances, digital photogrammetry using consumer grade cameras is in great demand in various application fields. There is a large body of literature on calibration of consumer grade digital cameras and circular target location. Target location with subpixel accuracy has been investigated as a star tracker issue, and many target location algorithms have been proposed. It is widely accepted that the least squares model with ellipse fitting is the most accurate algorithm. However, issues remain for efficient digital close range photogrammetry: reconfirming the subpixel target location algorithms for consumer grade digital cameras, the relationship between the number of edge points along the target boundary and accuracy, and an indicator for estimating the accuracy of routine digital close range photogrammetry using consumer grade cameras. With this motivation, several subpixel target location algorithms and an indicator for estimating the accuracy are evaluated empirically in this paper using real data which were acquired indoors using 7 consumer grade digital cameras with 7.2 to 14.7 megapixels.
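As an example of the kind of subpixel target-location algorithm compared in such studies, the grey-value weighted centroid is sketched below for a bright circular target on a darker background; the least-squares ellipse fit that the literature identifies as most accurate works on the extracted edge points instead and is not shown. The background estimate is an assumption.

```python
import numpy as np

def weighted_centroid(window, background=None):
    """Grey-value weighted centroid of a target window, one of the classic
    subpixel target-location algorithms (simpler than, and typically slightly
    less accurate than, a least-squares ellipse fit to the target edge)."""
    w = window.astype(float)
    if background is None:
        background = np.median(w)                 # assumed local background level
    w = np.clip(w - background, 0, None)          # assumes a bright target on a dark background
    total = w.sum()
    if total == 0:
        return None
    rows, cols = np.indices(w.shape)
    return (np.sum(rows * w) / total, np.sum(cols * w) / total)
```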
Real-time target tracking and locating system for UAV
NASA Astrophysics Data System (ADS)
Zhang, Chao; Tang, Linbo; Fu, Huiquan; Li, Maowen
2017-07-01
In order to achieve real-time target tracking and locating for a UAV, a reliable processing system is built on an embedded platform. Firstly, video images are acquired in real time by the electro-optical system on the UAV. When the target information is known, the KCF tracking algorithm is adopted to track the target. The servo is then controlled to follow the target, and when the target is at the center of the image, the laser ranging module is activated to obtain the distance between the UAV and the target. Finally, the target location algorithm combines this distance with the UAV flight parameters obtained from the BeiDou navigation system to calculate the geodetic coordinates of the target. The results show that the system tracks and locates targets stably in real time.