Sample records for detection efficiency calculation

  1. A method to calculate the gamma ray detection efficiency of a cylindrical NaI (Tl) crystal

    NASA Astrophysics Data System (ADS)

    Ahmadi, S.; Ashrafi, S.; Yazdansetad, F.

    2018-05-01

    Given the wide range of applications of NaI(Tl) detectors in the industrial and medical sectors, computation of the detection efficiency at different distances from a radioactive source, especially for calibration purposes, is a recurring subject of radiation detection studies. In this work, a cylindrical NaI(Tl) scintillator 2 in. in both radius and height was used, and by changing the radial, axial, and diagonal positions of an isotropic 137Cs point source relative to the detector, the solid angles and the interaction probabilities of gamma photons with the detector's sensitive area were calculated. The calculations give the geometric and intrinsic efficiency as functions of the detector's dimensions and the position of the source. The calculation model is in good agreement with experiment and with MCNPX simulation.
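
    The paper's own solid-angle expressions are not reproduced here, but for the simplest case (an isotropic point source on the detector axis) the geometric efficiency is just the fractional solid angle subtended by the crystal's front face. A minimal illustrative sketch, with radius R and source-to-face distance d as assumed inputs rather than values from the paper:

      import math

      def geometric_efficiency_on_axis(R, d):
          """Fractional solid angle (Omega / 4*pi) subtended by a disk of radius R,
          seen from an on-axis point source a distance d from its face."""
          omega = 2.0 * math.pi * (1.0 - d / math.sqrt(d * d + R * R))
          return omega / (4.0 * math.pi)

      # Example: 5.08 cm (2 in.) radius crystal face, point source 10 cm away on axis
      print(geometric_efficiency_on_axis(R=5.08, d=10.0))  # ~0.054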

  2. Measurements of the response function and the detection efficiency of an NE213 scintillator for neutrons between 20 and 65 MeV

    NASA Astrophysics Data System (ADS)

    Meigo, S.

    1997-02-01

    For neutrons of 25, 30 and 65 MeV, the response functions and detection efficiencies of an NE213 liquid scintillator were measured. Quasi-monoenergetic neutrons produced by the 7Li(p,n0,1) reaction were employed, and the absolute flux of incident neutrons was determined to within 4% accuracy using a proton recoil telescope. Response functions and detection efficiencies calculated with the Monte Carlo codes CECIL and SCINFUL were compared with the measured data. The response functions calculated with SCINFUL agreed better with the experimental ones than those calculated with CECIL; however, the deuteron light output used in SCINFUL was too low. The response functions calculated with a revised SCINFUL agreed quite well with the experimental ones, even for the deuteron bump and the peak due to the C(n,d0) reaction. The detection efficiencies calculated with the original and the revised SCINFUL agreed with the experimental data within the experimental error, while those calculated with CECIL were about 20% higher in the energy region above 30 MeV.

  3. Efficiency transfer using the GEANT4 code of CERN for HPGe gamma spectrometry.

    PubMed

    Chagren, S; Tekaya, M Ben; Reguigui, N; Gharbi, F

    2016-01-01

    In this work we apply the GEANT4 code of CERN to calculate the peak efficiency in high-purity germanium (HPGe) gamma spectrometry using three different procedures. The first is a direct calculation. The second corresponds to the usual case of efficiency transfer between two different configurations at constant emission energy, assuming a reference point-source detection configuration. The third, a new procedure, consists in transferring the peak efficiency between two detection configurations emitting the gamma ray at different energies, assuming a "virtual" reference point-source detection configuration. No pre-optimization of the detector's geometrical characteristics was performed before the transfer, in order to test the ability of the efficiency transfer to reduce the effect of ignorance of their real values on the quality of the transferred efficiency. The calculated and measured efficiencies were found to be in good agreement for the two investigated methods of efficiency transfer. This agreement shows that the Monte Carlo method, and in particular the GEANT4 code, is an efficient tool for obtaining accurate detection efficiency values. The second efficiency transfer procedure is useful for calibrating an HPGe gamma detector at any emission energy for a voluminous source, using the detection efficiency of one point source emitting at a different energy as the reference efficiency. The calculations performed in this work were applied to the measurement exercise of the EUROMET428 project, in which the full-energy peak efficiencies in the energy range 60-2000 keV were evaluated for a typical coaxial p-type HPGe detector and several source configurations: point sources located at various distances from the detector and a cylindrical box containing three matrices. Copyright © 2015 Elsevier Ltd. All rights reserved.

  4. QUENCH: A software package for the determination of quenching curves in Liquid Scintillation counting.

    PubMed

    Cassette, Philippe

    2016-03-01

    In Liquid Scintillation Counting (LSC), the scintillating source is part of the measurement system, and its detection efficiency varies with the scintillator used, the vial, and the volume and chemistry of the sample. The detection efficiency is generally determined using a quenching curve, which describes, for a specific radionuclide, the relationship between a quenching index given by the counter and the detection efficiency. A set of quenched LS standard sources is prepared by adding a quenching agent, and the quenching index and detection efficiency are determined for each source. A simple formula is then fitted to the experimental points to define the quenching-curve function. This paper describes a software package specifically devoted to the determination of quenching curves with uncertainties. The experimental measurements are described by their quenching index and detection efficiency, with uncertainties on both quantities. Random Gaussian fluctuations of these experimental measurements are sampled, and a polynomial or logarithmic function is fitted to each fluctuation by χ² minimization. This Monte Carlo procedure is repeated many times, and finally the arithmetic mean and the experimental standard deviation of each parameter are calculated, together with the covariances between these parameters. Using these parameters, the detection efficiency corresponding to an arbitrary quenching index within the measured range can be calculated. The associated uncertainty is calculated with the law of propagation of variances, including the covariance terms. Copyright © 2015 Elsevier Ltd. All rights reserved.
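
    A minimal Python sketch of the Monte Carlo procedure described above (Gaussian resampling of each calibration point, a polynomial fit to each resampled set, then parameter means, standard deviations and covariances); the calibration data, polynomial degree and number of trials are illustrative assumptions, not values used by QUENCH:

      import numpy as np

      rng = np.random.default_rng(0)

      # Illustrative calibration data: quenching index q and efficiency e, with 1-sigma uncertainties
      q, u_q = np.array([400., 450., 500., 550., 600.]), np.full(5, 2.0)
      e, u_e = np.array([0.30, 0.38, 0.45, 0.50, 0.53]), np.full(5, 0.005)

      deg, n_trials = 2, 10000
      params = np.empty((n_trials, deg + 1))
      for i in range(n_trials):
          # Sample Gaussian fluctuations of both coordinates, then fit a polynomial to them
          params[i] = np.polyfit(rng.normal(q, u_q), rng.normal(e, u_e), deg)

      p_mean = params.mean(axis=0)           # mean fitted parameters
      p_cov = np.cov(params, rowvar=False)   # parameter covariance matrix

      # Efficiency and its uncertainty at an arbitrary quenching index within the measured range
      q0 = 520.0
      J = np.array([q0 ** k for k in range(deg, -1, -1)])  # d(efficiency)/d(parameters)
      print(J @ p_mean, np.sqrt(J @ p_cov @ J))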

  5. Waveguide-integrated single- and multi-photon detection at telecom wavelengths using superconducting nanowires

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ferrari, Simone; Kahl, Oliver; Kovalyuk, Vadim

    We investigate single- and multi-photon detection regimes of superconducting nanowire detectors embedded in silicon nitride nanophotonic circuits. At near-infrared wavelengths, simultaneous detection of up to three photons is observed for 120 nm wide nanowires biased far from the critical current, while narrow nanowires below 100 nm provide efficient single photon detection. A theoretical model is proposed to determine the different detection regimes and to calculate the corresponding internal quantum efficiency. The predicted saturation of the internal quantum efficiency in the single photon regime agrees well with plateau behavior observed at high bias currents.

  6. Design and spectrum calculation of 4H-SiC thermal neutron detectors using FLUKA and TCAD

    NASA Astrophysics Data System (ADS)

    Huang, Haili; Tang, Xiaoyan; Guo, Hui; Zhang, Yimen; Zhang, Yimeng; Zhang, Yuming

    2016-10-01

    SiC is a promising material for neutron detection in harsh environments due to its wide band gap, high displacement threshold energy and high thermal conductivity. To increase the detection efficiency of SiC, a converter such as 6LiF or 10B is introduced. In this paper, pulse-height spectra of a PIN diode with a 6LiF conversion layer exposed to thermal neutrons (0.026 eV) are calculated using TCAD and Monte Carlo simulations. First, the thermal-neutron conversion efficiency as a function of 6LiF thickness was calculated with the FLUKA code, and a maximal efficiency of approximately 5% was obtained. Next, the energy distributions of the 3H and α reaction products emitted from the 6LiF layer are analyzed for different ranges of emission angle. Subsequently, transient pulses generated by the impact of single 3H or α particles are calculated. Finally, pulse-height spectra are obtained, yielding a detector efficiency of 4.53%. Comparisons of the simulated results with experimental data are also presented, and the calculated spectrum shows acceptable similarity to the experimental data. This work should be useful for radiation-sensing applications, especially for SiC detector design.

  7. detectIR: a novel program for detecting perfect and imperfect inverted repeats using complex numbers and vector calculation.

    PubMed

    Ye, Congting; Ji, Guoli; Li, Lei; Liang, Chun

    2014-01-01

    Inverted repeats are present in abundance in both prokaryotic and eukaryotic genomes and can form DNA secondary structures--hairpins and cruciforms that are involved in many important biological processes. Bioinformatics tools for efficient and accurate detection of inverted repeats are desirable, because existing tools are often less accurate and time consuming, sometimes incapable of dealing with genome-scale input data. Here, we present a MATLAB-based program called detectIR for the perfect and imperfect inverted repeat detection that utilizes complex numbers and vector calculation and allows genome-scale data inputs. A novel algorithm is adopted in detectIR to convert the conventional sequence string comparison in inverted repeat detection into vector calculation of complex numbers, allowing non-complementary pairs (mismatches) in the pairing stem and a non-palindromic spacer (loop or gaps) in the middle of inverted repeats. Compared with existing popular tools, our program performs with significantly higher accuracy and efficiency. Using genome sequence data from HIV-1, Arabidopsis thaliana, Homo sapiens and Zea mays for comparison, detectIR can find lots of inverted repeats missed by existing tools whose outputs often contain many invalid cases. detectIR is open source and its source code is freely available at: https://sourceforge.net/projects/detectir.

  8. Measurement of absolute response functions and detection efficiencies of an NE213 scintillator up to 600 MeV

    NASA Astrophysics Data System (ADS)

    Kajimoto, Tsuyoshi; Shigyo, Nobuhiro; Sanami, Toshiya; Ishibashi, Kenji; Haight, Robert C.; Fotiades, Nikolaos

    2011-02-01

    Absolute neutron response functions and detection efficiencies of an NE213 liquid scintillator, 12.7 cm in diameter and 12.7 cm in thickness, were measured for neutron energies between 15 and 600 MeV at the Weapons Neutron Research facility of the Los Alamos Neutron Science Center. The experiment was performed with continuous-energy neutrons from a spallation source driven by 800-MeV protons. The incident neutron flux was measured using a 238U fission ionization chamber. Measured response functions and detection efficiencies were compared with corresponding calculations using the SCINFUL-QMD code. The calculated and experimental values were in good agreement below 70 MeV, but discrepancies remained in the energy region between 70 and 150 MeV. The code was therefore partly modified, and the revised code provided better agreement with the experimental data.

  9. Evaluation of species-dependent detection efficiencies in the aerosol mass spectrometer

    USDA-ARS?s Scientific Manuscript database

    Mass concentrations of chemical species calculated from the aerosol mass spectrometer (AMS) depend on two factors: particle collection efficiency (CE) and relative ionization efficiency (RIE, relative to the primary calibrant ammonium nitrate). While previous studies have characterized CE, RIE is re...

  10. Characteristic evaluation of a Lithium-6 loaded neutron coincidence spectrometer.

    PubMed

    Hayashi, M; Kaku, D; Watanabe, Y; Sagara, K

    2007-01-01

    Characteristics of a (6)Li-loaded neutron coincidence spectrometer were investigated by both measurements and Monte Carlo simulations. The spectrometer consists of three (6)Li-glass scintillators embedded in a liquid organic scintillator (BC-501A); it can selectively detect neutrons that deposit their total energy in the BC-501A by using a coincidence signal generated by the capture of thermalised neutrons in the (6)Li-glass scintillators. The relative efficiency and the energy response were measured using 4.7, 7.2 and 9.0 MeV monoenergetic neutrons. The measurements were compared with Monte Carlo calculations performed by combining the neutron transport code PHITS with the scintillator response code SCINFUL. The shapes of the experimental light-output spectra were in good agreement with the calculated ones, and the energy dependence of the detection efficiency was reproduced by the calculation. The response matrices for 1-10 MeV neutrons were finally obtained.

  11. Detection efficiency calculation for photons, electrons and positrons in a well detector. Part I: Analytical model

    NASA Astrophysics Data System (ADS)

    Pommé, S.

    2009-06-01

    An analytical model is presented to calculate the total detection efficiency of a well-type radiation detector for photons, electrons and positrons emitted from a radioactive source at an arbitrary position inside the well. The model is well suited to treat a typical set-up with a point source or cylindrical source and vial inside a NaI well detector, with or without lead shield surrounding it. It allows for fast absolute or relative total efficiency calibrations for a wide variety of geometrical configurations and also provides accurate input for the calculation of coincidence summing effects. Depending on its accuracy, it may even be applied in 4π-γ counting, a primary standardisation method for activity. Besides an accurate account of photon interactions, precautions are taken to simulate the special case of 511 keV annihilation quanta and to include realistic approximations for the range of (conversion) electrons and β−- and β+-particles.

  12. Guidelines for calculating and enhancing detection efficiency of PIT tag interrogation systems

    USGS Publications Warehouse

    Connolly, Patrick J.

    2010-01-01

    With increasing use of passive integrated transponder (PIT) tags and reliance on stationary PIT tag interrogation systems to monitor fish populations, guidelines are offered to inform users how best to use limited funding and human resources to create functional systems that maximize a desired level of detection and precision. The estimators of detection efficiency and their variability as described by Connolly et al. (2008) are explored over a span of likely performance metrics. These estimators were developed to estimate detection efficiency without relying on a known number of fish passing the system. I present graphical displays of the results derived from these estimators to show the potential efficiency and precision to be gained by adding an array or by increasing the number of PIT-tagged fish expected to move past an interrogation system.
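
    The estimators themselves are given in Connolly et al. (2008); as a rough illustration of the underlying idea, a two-array system lets each array's efficiency be estimated from fish that were detected on the other array and are therefore known to have passed, without knowing the total number of fish. A minimal sketch with hypothetical counts (not the exact estimator or variance formulas of the cited paper):

      def array_efficiency(n_both, n_other_only):
          """Efficiency of one array, estimated from fish known to have passed
          because the other array detected them: p = n_both / (n_both + n_other_only)."""
          known_passed = n_both + n_other_only
          p = n_both / known_passed
          se = (p * (1.0 - p) / known_passed) ** 0.5  # simple binomial standard error
          return p, se

      # Hypothetical counts: 90 fish seen on both arrays, 10 seen only on the other array
      print(array_efficiency(n_both=90, n_other_only=10))  # (0.9, 0.03)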

  13. Radon detection in conical diffusion chambers: Monte Carlo calculations and experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rickards, J.; Golzarri, J. I.; Espinosa, G., E-mail: espinosa@fisica.unam.mx

    2015-07-23

    The operation of radon detection diffusion chambers of truncated conical shape was studied using Monte Carlo calculations. The efficiency was studied for alpha particles generated randomly in the volume of the chamber, and progeny generated randomly on the interior surface, which reach track detectors placed in different positions within the chamber. Incidence angular distributions, incidence energy spectra and path length distributions are calculated. Cases studied include different positions of the detector within the chamber, varying atmospheric pressure, and introducing a cutoff incidence angle and energy.

  14. Attachment of chloride anion to sugars: mechanistic investigation and discovery of a new dopant for efficient sugar ionization/detection in mass spectrometers.

    PubMed

    Boutegrabet, Lemia; Kanawati, Basem; Gebefügi, Istvan; Peyron, Dominique; Cayot, Philippe; Gougeon, Régis D; Schmitt-Kopplin, Philippe

    2012-10-08

    A new method for efficient ionization of sugars in the negative-ion mode of electrospray mass spectrometry is presented. Instead of using strongly hydrophobic dopants such as dichloromethane or chloroform, efficient ionization of sugars has been achieved by using aqueous HCl solution for the first time. This methodology makes it possible to use hydrophilic dopants, which are more appropriate for chromatographic separation techniques with efficient sugar ionization and detection in mass spectrometry. The interaction between chloride anions and monosaccharides (glucose and galactose) was studied by DFT in the gas phase and by implementing the polarizable continuum model (PCM) for calculations in solution at the high B3LYP/6-31+G(d,p)//B3LYP/6-311+G(2d,p) level of theory. In all optimized geometries of identified [M+Cl](-) anions, a non-covalent interaction exists. Differences were revealed between monodentate and bidentate complex anions, with the latter having noticeably higher binding energies. The calculated affinity of glucose and galactose toward the chloride anion in the gas phase and their chloride anion binding energies in solution are in excellent agreement with glucose and galactose [M+Cl](-) experimental intensity profiles that are represented as a function of the chloride ion concentration. Density functional calculations of gas-phase affinities toward chloride anion were also performed for the studied disaccharides sucrose and gentiobiose. All calculations are in excellent agreement with the experimental data. An example is introduced wherein HCl was used to effectively ionize sugars and form chlorinated adduct anions to detect sugars and glycosylated metabolites (anthocyanins) in real biological systems (Vitis vinifera grape extracts and wines), whereas they would not have been easily detectable under standard infusion electrospray mass spectrometry conditions as deprotonated species. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  15. Preliminary Assessment of Detection Efficiency for the Geostationary Lightning Mapper Using Intercomparisons with Ground-Based Systems

    NASA Technical Reports Server (NTRS)

    Bateman, Monte; Mach, Douglas; Blakeslee, Richard J.; Koshak, William

    2018-01-01

    As part of the calibration/validation (cal/val) effort for the Geostationary Lightning Mapper (GLM) on GOES-16, we need to assess instrument performance (detection efficiency and accuracy). One major effort is to calculate the detection efficiency of GLM by comparing to multiple ground-based systems. These comparisons will be done pair-wise between GLM and each other source. A complication in this process is that the ground-based systems sense different properties of the lightning signal than does GLM (e.g., RF vs. optical). Also, each system has a different time and space resolution and accuracy. Preliminary results indicate that GLM is performing at or above its specification.

  16. A new method for evaluating radon and thoron alpha-activities per unit volume inside and outside various natural material samples by calculating SSNTD detection efficiencies for the emitted alpha-particles and measuring the resulting track densities.

    PubMed

    Misdaq, M A; Aitnouh, F; Khajmi, H; Ezzahery, H; Berrazzouk, S

    2001-08-01

    A Monte Carlo computer code for determining detection efficiencies of the CR-39 and LR-115 II solid-state nuclear track detectors (SSNTD) for alpha-particles emitted by the uranium and thorium series inside different natural material samples was developed. The influence of the alpha-particle initial energy on the SSNTD detection efficiencies was investigated. Radon (222Rn) and thoron (220Rn) alpha-activities per unit volume were evaluated inside and outside the natural material samples by exploiting data obtained for the detection efficiencies of the SSNTD utilized for the emitted alpha-particles, and measuring the resulting track densities. Results obtained were compared to those obtained by other methods. Radon emanation coefficients have been determined for some of the considered material samples.

  17. Perfluorotributylamine: A novel long-lived greenhouse gas

    NASA Astrophysics Data System (ADS)

    Hong, Angela C.; Young, Cora J.; Hurley, Michael D.; Wallington, Timothy J.; Mabury, Scott A.

    2013-11-01

    Perfluorinated compounds impact the Earth's radiative balance. Perfluorotributylamine (PFTBA) belongs to the perfluoroalkyl amine class of compounds; these have not yet been investigated as long-lived greenhouse gases (LLGHGs). Atmospheric measurements of PFTBA made in Toronto, ON, detected a mixing ratio of 0.18 parts per trillion by volume. An instantaneous radiative efficiency of 0.86 W m-2 ppb-1 was calculated from its IR absorption spectra, and a lower limit of 500 years was estimated for its atmospheric lifetime. PFTBA has the highest radiative efficiency of any compound detected in the atmosphere. If the concentration in Toronto is representative of the change in global background concentration since the preindustrial period, then the radiative forcing of PFTBA is 1.5 × 10-4 W m-2. We calculate the global warming potential of PFTBA over a 100 year time horizon to be 7100. Detection of PFTBA demonstrates that perfluoroalkyl amines are a class of LLGHGs worthy of future study.
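
    As a rough consistency check (ignoring lifetime and vertical-profile corrections), the quoted forcing follows directly from the radiative efficiency and the observed mixing ratio: RF ≈ 0.86 W m-2 ppb-1 × 0.18 × 10-3 ppb ≈ 1.5 × 10-4 W m-2.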

  18. Optimizing a three-stage Compton camera for measuring prompt gamma rays emitted during proton radiotherapy

    PubMed Central

    Peterson, S W; Robertson, D; Polf, J

    2011-01-01

    In this work, we investigate the use of a three-stage Compton camera to measure secondary prompt gamma rays emitted from patients treated with proton beam radiotherapy. The purpose of this study was (1) to develop an optimal three-stage Compton camera specifically designed to measure prompt gamma rays emitted from tissue and (2) to determine the feasibility of using this optimized Compton camera design to measure and image prompt gamma rays emitted during proton beam irradiation. The three-stage Compton camera was modeled in Geant4 as three high-purity germanium detector stages arranged in parallel-plane geometry. Initially, an isotropic gamma source ranging from 0 to 15 MeV was used to determine lateral width and thickness of the detector stages that provided the optimal detection efficiency. Then, the gamma source was replaced by a proton beam irradiating a tissue phantom to calculate the overall efficiency of the optimized camera for detecting emitted prompt gammas. The overall calculated efficiencies varied from ~10−6 to 10−3 prompt gammas detected per proton incident on the tissue phantom for several variations of the optimal camera design studied. Based on the overall efficiency results, we believe it feasible that a three-stage Compton camera could detect a sufficient number of prompt gammas to allow measurement and imaging of prompt gamma emission during proton radiotherapy. PMID:21048295

  19. Augmenting groundwater monitoring networks near landfills with slurry cutoff walls.

    PubMed

    Hudak, Paul F

    2004-01-01

    This study investigated the use of slurry cutoff walls in conjunction with monitoring wells to detect contaminant releases from a solid waste landfill. The 50 m wide by 75 m long landfill was oriented oblique to regional groundwater flow in a shallow sand aquifer. Computer models calculated flow fields and the detection capability of six monitoring networks, four including a 1 m wide by 50 m long cutoff wall at various positions along the landfill's downgradient boundaries and upgradient of the landfill. Wells were positioned to take advantage of convergent flow induced downgradient of the cutoff walls. A five-well network with no cutoff wall detected 81% of contaminant plumes originating within the landfill's footprint before they reached a buffer zone boundary located 50 m from the landfill's downgradient corner. By comparison, detection efficiencies of networks augmented with cutoff walls ranged from 81 to 100%. The most efficient network detected 100% of contaminant releases with four wells, with a centrally located, downgradient cutoff wall. In general, cutoff walls increased detection efficiency by delaying transport of contaminant plumes to the buffer zone boundary, thereby allowing them to increase in size, and by inducing convergent flow at downgradient areas, thereby funneling contaminant plumes toward monitoring wells. However, increases in detection efficiency were too small to offset construction costs for cutoff walls. A 100% detection efficiency was also attained by an eight-well network with no cutoff wall, at approximately one-third the cost of the most efficient wall-augmented network.

  20. Crack Damage Detection Method via Multiple Visual Features and Efficient Multi-Task Learning Model.

    PubMed

    Wang, Baoxian; Zhao, Weigang; Gao, Po; Zhang, Yufeng; Wang, Zhe

    2018-06-02

    This paper proposes an effective and efficient model for concrete crack detection. The presented work consists of two modules: multi-view image feature extraction and multi-task crack region detection. Specifically, multiple visual features (such as texture, edge, etc.) of image regions are calculated, which can suppress various background noises (such as illumination, pockmark, stripe, blurring, etc.). With the computed multiple visual features, a novel crack region detector is advocated using a multi-task learning framework, which involves restraining the variability for different crack region features and emphasizing the separability between crack region features and complex background ones. Furthermore, the extreme learning machine is utilized to construct this multi-task learning model, thereby leading to high computing efficiency and good generalization. Experimental results of the practical concrete images demonstrate that the developed algorithm can achieve favorable crack detection performance compared with traditional crack detectors.

  1. Note: Fast neutron efficiency in CR-39 nuclear track detectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cavallaro, S.

    2015-03-15

    CR-39 samples are commonly employed for fast neutron detection in fusion reactors and in inertial confinement fusion experiments. The efficiencies reported in the literature depend strongly on the experimental conditions and are, in some cases, highly dispersed. The present note analyses the dependence of the efficiency on various parameters and experimental conditions in both the radiator-assisted and the stand-alone CR-39 configurations. Comparisons of literature experimental data with Monte Carlo calculations and optimized efficiency values are shown and discussed.

  2. Monte Carlo calculation of the sensitivity of a commercial dose calibrator to gamma and beta radiation.

    PubMed

    Laedermann, Jean-Pascal; Valley, Jean-François; Bulling, Shelley; Bochud, François O

    2004-06-01

    The detection process used in a commercial dose calibrator was modeled using the GEANT 3 Monte Carlo code. Dose calibrator efficiency for gamma and beta emitters, and the response to monoenergetic photons and electrons was calculated. The model shows that beta emitters below 2.5 MeV deposit energy indirectly in the detector through bremsstrahlung produced in the chamber wall or in the source itself. Higher energy beta emitters (E > 2.5 MeV) deposit energy directly in the chamber sensitive volume, and dose calibrator sensitivity increases abruptly for these radionuclides. The Monte Carlo calculations were compared with gamma and beta emitter measurements. The calculations show that the variation in dose calibrator efficiency with measuring conditions (source volume, container diameter, container wall thickness and material, position of the source within the calibrator) is relatively small and can be considered insignificant for routine measurement applications. However, dose calibrator efficiency depends strongly on the inner-wall thickness of the detector.

  3. A modified indirect mathematical model for evaluation of ethanol production efficiency in industrial-scale continuous fermentation processes.

    PubMed

    Canseco Grellet, M A; Castagnaro, A; Dantur, K I; De Boeck, G; Ahmed, P M; Cárdenas, G J; Welin, B; Ruiz, R M

    2016-10-01

    To calculate fermentation efficiency in a continuous ethanol production process, we aimed to develop a robust mathematical method based on the analysis of metabolic by-product formation. This method is in contrast to the traditional way of calculating ethanol fermentation efficiency, where the ratio between the ethanol produced and the sugar consumed is expressed as a percentage of the theoretical conversion yield. Comparison between the two methods, at industrial scale and in sensitivity studies, showed that the indirect method was more robust and gave slightly higher fermentation efficiency values, although fermentation efficiency of the industrial process was found to be low (~75%). The traditional calculation method is simpler than the indirect method as it only requires a few chemical determinations in samples collected. However, a minor error in any measured parameter will have an important impact on the calculated efficiency. In contrast, the indirect method of calculation requires a greater number of determinations but is much more robust since an error in any parameter will only have a minor effect on the fermentation efficiency value. The application of the indirect calculation methodology in order to evaluate the real situation of the process and to reach an optimum fermentation yield for an industrial-scale ethanol production is recommended. Once a high fermentation yield has been reached the traditional method should be used to maintain the control of the process. Upon detection of lower yields in an optimized process the indirect method should be employed as it permits a more accurate diagnosis of causes of yield losses in order to correct the problem rapidly. The low fermentation efficiency obtained in this study shows an urgent need for industrial process optimization where the indirect calculation methodology will be an important tool to determine process losses. © 2016 The Society for Applied Microbiology.
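
    As a rough illustration of the traditional (direct) calculation mentioned above, assuming the standard stoichiometric yield of 0.511 g ethanol per g of hexose sugar (a textbook value, not a figure from this study) and hypothetical plant data:

      THEORETICAL_YIELD = 0.511  # g ethanol per g hexose sugar (stoichiometric value, assumed)

      def direct_fermentation_efficiency(ethanol_produced_g, sugar_consumed_g):
          """Traditional method: ethanol produced over sugar consumed,
          expressed as a percentage of the theoretical conversion yield."""
          return 100.0 * ethanol_produced_g / (sugar_consumed_g * THEORETICAL_YIELD)

      # Hypothetical example: 38 g ethanol obtained from 100 g of consumed sugar
      print(direct_fermentation_efficiency(38.0, 100.0))  # ~74%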

  4. Aerosol detection efficiency in inductively coupled plasma mass spectrometry

    NASA Astrophysics Data System (ADS)

    Hubbard, Joshua A.; Zigmond, Joseph A.

    2016-05-01

    An electrostatic size classification technique was used to segregate particles of known composition prior to being injected into an inductively coupled plasma mass spectrometer (ICP-MS). Size-segregated particles were counted with a condensation nuclei counter as well as sampled with an ICP-MS. By injecting particles of known size, composition, and aerosol concentration into the ICP-MS, order-of-magnitude aerosol detection efficiencies were calculated, and the particle size dependencies for volatile and refractory species were quantified. Similar to laser ablation ICP-MS, aerosol detection efficiency was defined as the rate at which atoms were detected in the ICP-MS normalized by the rate at which atoms were injected in the form of particles. This method adds valuable insight into the development of technologies like laser ablation ICP-MS where aerosol particles (of relatively unknown size and gas concentration) are generated during ablation and then transported into the plasma of an ICP-MS. In this study, we characterized aerosol detection efficiencies of volatile species gold and silver along with refractory species aluminum oxide, cerium oxide, and yttrium oxide. Aerosols were generated with electrical mobility diameters ranging from 100 to 1000 nm. In general, it was observed that refractory species had lower aerosol detection efficiencies than volatile species, and there were strong dependencies on particle size and plasma torch residence time. Volatile species showed a distinct transition point at which aerosol detection efficiency began decreasing with increasing particle size. This critical diameter indicated the largest particle size for which complete particle detection should be expected and agreed with theories published in other works. Aerosol detection efficiencies also displayed power law dependencies on particle size. Aerosol detection efficiencies ranged from 10−5 to 10−11. Free molecular heat and mass transfer theory was applied, but evaporative phenomena were not sufficient to explain the dependence of aerosol detection on particle diameter. Additional work is needed to correlate experimental data with theory for metal-oxides where thermodynamic property data are sparse relative to pure elements. Lastly, when matrix effects and the diffusion of ions inside the plasma were considered, mass loading was concluded to have had an effect on the dependence of detection efficiency on particle diameter.
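
    A minimal sketch of how the detection efficiency defined above can be evaluated from measured quantities; the particle material properties and count rates below are hypothetical placeholders, not data from the study:

      import math

      N_A = 6.022e23  # Avogadro's number, atoms per mol

      def atoms_per_particle(diameter_m, density_kg_m3, molar_mass_kg_mol):
          """Number of analyte atoms in one spherical particle."""
          volume = math.pi / 6.0 * diameter_m ** 3
          return volume * density_kg_m3 / molar_mass_kg_mol * N_A

      def aerosol_detection_efficiency(ion_count_rate, particle_rate, diameter_m,
                                       density_kg_m3, molar_mass_kg_mol):
          """Atoms detected per second, divided by atoms injected per second as particles."""
          injected = particle_rate * atoms_per_particle(diameter_m, density_kg_m3, molar_mass_kg_mol)
          return ion_count_rate / injected

      # Hypothetical example: 300 nm gold particles at 100 particles/s, 5e4 ion counts/s
      print(aerosol_detection_efficiency(5e4, 100.0, 300e-9, 19300.0, 0.19697))  # ~6e-7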

  5. Li2Se as a Neutron Scintillator

    DOE PAGES

    Du, Mao-Hua; Shi, Hongliang; Singh, David J.

    2015-06-23

    We show that Li2Se:Te is a potential neutron scintillator material based on density functional calculations. Li2Se exhibits a number of properties favorable for efficient neutron detection, such as a high Li concentration for neutron absorption, a small effective atomic mass and a low density for reduced sensitivity to background gamma rays, and a small band gap for a high light yield. Our calculations show that Te doping should lead to the formation of the deep acceptor complex VLi-TeSe, which can facilitate efficient light emission, similar to the emission activation in Te-doped ZnSe.

  6. Optimization of Collision Detection in Surgical Simulations

    NASA Astrophysics Data System (ADS)

    Custură-Crăciun, Dan; Cochior, Daniel; Neagu, Corneliu

    2014-11-01

    Just as flight and spacecraft simulators already represent a standard, we expect that surgical simulators will soon become a standard in medical applications. A simulation's quality is strongly related to its image quality as well as to its degree of realism. Increased quality requires increased resolution and increased rendering speed but, more importantly, a larger number of mathematical equations. Achieving this requires not only more efficient computers but, above all, more optimization of the calculation process. A simulator executes one of its most complex sets of calculations each time it detects a contact between virtual objects; therefore, optimization of collision detection is critical to the speed of a simulator and hence to its quality.

  7. DETECTORS AND EXPERIMENTAL METHODS: Measurement of the response function and the detection efficiency of an organic liquid scintillator for neutrons between 1 and 30 MeV

    NASA Astrophysics Data System (ADS)

    Huang, Han-Xiong; Ruan, Xi-Chao; Chen, Guo-Chang; Zhou, Zu-Ying; Li, Xia; Bao, Jie; Nie, Yang-Bo; Zhong, Qi-Ping

    2009-08-01

    The light output function of a φ50.8 mm × 50.8 mm BC501A scintillation detector was measured in the neutron energy region of 1 to 30 MeV by fitting the recoil-edge region of the measured pulse-height (PH) spectra with simulations from the NRESP code. Using the new light output function, the neutron detection efficiency was determined with two Monte Carlo codes, NEFF and SCINFUL. The calculated efficiency was corrected by comparing the simulated PH spectra with the measured ones. The determined efficiency was verified in the near-threshold region and normalized with a proton recoil telescope (PRT) in the 8-14 MeV energy region.

  8. Energy efficiency in cognitive radio network: Study of cooperative sensing using different channel sensing methods

    NASA Astrophysics Data System (ADS)

    Cui, Chenxuan

    When a cognitive radio (CR) operates, it starts by sensing the spectrum and looking for idle bandwidth. There are several methods by which a CR can decide whether a channel is occupied or idle, for example, energy detection, cyclostationary detection and matched filtering detection [1]. Among them, the most common method is energy detection because of its algorithmic and implementation simplicity [2]. There are two major sensing approaches: the first is to sense a single channel slot with varying bandwidth, whereas the second is to sense multiple channels, each with the same bandwidth. After the sensing period, the samples are compared with a preset detection threshold and a decision is made on whether the primary user (PU) is transmitting or not. Sometimes the sensing and decision results can be erroneous; for example, false alarm errors and misdetection errors may occur. In order to better control error probabilities and improve CR network performance (i.e. energy efficiency), we introduce cooperative sensing, in which several CRs within a certain range jointly detect and decide on channel availability. The decisions are transmitted to and analyzed by a data fusion center (DFC), which makes a final decision on channel availability. After the final decision has been made, the DFC sends the decision back to the CRs to tell them to stay idle or to start transmitting data to the secondary receiver (SR) within a preset transmission time. After the transmission, a new cycle starts again with sensing. This thesis is organized as follows: Chapter II reviews some of the papers on optimizing CR energy efficiency. In Chapter III, we study how to achieve maximal energy efficiency when the CR senses a single channel with changing bandwidth, with a constraint on the misdetection threshold in order to protect the PU; furthermore, a case study is given and the energy efficiency is calculated. In Chapter IV, we study how to achieve maximal energy efficiency when the CR senses multiple channels, each with the same bandwidth; again, a misdetection threshold is preset and the energy efficiency is calculated. A comparison between the two sensing methods is shown at the end of the chapter. Finally, Chapter V concludes the thesis.

  9. Measurement of tritium with high efficiency by using liquid scintillation counter with plastic scintillator.

    PubMed

    Furuta, Etsuko; Ohyama, Ryu-ichiro; Yokota, Shigeaki; Nakajo, Toshiya; Yamada, Yuka; Kawano, Takao; Uda, Tatsuhiko; Watanabe, Yasuo

    2014-11-01

    The detection efficiency for tritium samples measured with a liquid scintillation counter using a hydrophilic plastic scintillator (PS) was approximately 48% when a 20 μL sample was held between two plasma-treated PS sheets. The activity and count rates showed a good relationship between 400 Bq mL(-1) and 410 kBq mL(-1). The calculated detection limit for a 2 min measurement with the PS was 13 Bq mL(-1) at a 95% confidence level. The plasma treatment of the PS produces no radioactive waste. Copyright © 2014 Elsevier Ltd. All rights reserved.

  10. Improved Signal Control: An Analysis of the Effects of Automatic Gain Control for Optical Signal Detection.

    DTIC Science & Technology

    1982-12-01

    ... T - switching period; a - AGC control parameter; q - quantum efficiency of photon-to-electron conversions; "1" - binary "one" given in terms of the ... of the photons striking the surface of the detector. This rate is defined as λ(t) = (n p(t) A) / (h f0), where n is the quantum efficiency of the photon ... mW to 10 mW [Ref 5, Table 1] for infrared wavelengths. Assuming all of the source's output power is detected, the rate is calculated to be an order ...

  11. Secure detection in quantum key distribution by real-time calibration of receiver

    NASA Astrophysics Data System (ADS)

    Marøy, Øystein; Makarov, Vadim; Skaar, Johannes

    2017-12-01

    The single-photon detection efficiency of the detector unit is crucial for the security of common quantum key distribution protocols like Bennett-Brassard 1984 (BB84). A low value of the efficiency indicates a possible eavesdropping attack that exploits the photon receiver's imperfections. We present a method for estimating the detection efficiency, and calculate the corresponding secure key generation rate. The estimation is done by testing gated detectors using a randomly activated photon source inside the receiver unit. This estimate gives a secure rate for any detector with non-unity single-photon detection efficiency, whether inherent or due to blinding. By adding extra optical components to the receiver, we make sure that the key is extracted from photon states for which our estimate is valid. The result is a quantum key distribution scheme that is secure against any attack that exploits detector imperfections.

  12. Measurements of electron detection efficiencies in solid state detectors.

    NASA Technical Reports Server (NTRS)

    Lupton, J. E.; Stone, E. C.

    1972-01-01

    The electron response of solid-state detectors was measured in detail in the laboratory as a function of incident electron energy, detector depletion depth, and energy-loss discriminator threshold. These response functions were determined by exposing totally depleted silicon surface-barrier detectors with depletion depths between 50 and 1000 microns to the beam from a magnetic beta-ray spectrometer. The data were extended to 5000 microns depletion depth using the results of previously published Monte Carlo electron calculations. When the electron counting efficiency of a given detector is plotted as a function of energy-loss threshold for various incident energies, the efficiency curves are bounded by a smooth envelope which represents the upper limit to the detection efficiency. These upper-limit curves, which scale in a simple way, make it possible to easily estimate the electron sensitivity of solid-state detector systems.

  13. Aerosol detection efficiency in inductively coupled plasma mass spectrometry

    DOE PAGES

    Hubbard, Joshua A.; Zigmond, Joseph A.

    2016-03-02

    We used an electrostatic size classification technique to segregate particles of known composition prior to injection into an inductively coupled plasma mass spectrometer (ICP-MS). The size-segregated particles were counted with a condensation nuclei counter and also sampled with the ICP-MS. By injecting particles of known size, composition, and aerosol concentration into the ICP-MS, order-of-magnitude aerosol detection efficiencies were calculated, and the particle size dependencies for volatile and refractory species were quantified. Similar to laser ablation ICP-MS, aerosol detection efficiency was defined as the rate at which atoms were detected in the ICP-MS normalized by the rate at which atoms were injected in the form of particles. This method adds valuable insight into the development of technologies like laser ablation ICP-MS where aerosol particles (of relatively unknown size and gas concentration) are generated during ablation and then transported into the plasma of an ICP-MS. In this study, we characterized aerosol detection efficiencies of the volatile species gold and silver along with the refractory species aluminum oxide, cerium oxide, and yttrium oxide. Aerosols were generated with electrical mobility diameters ranging from 100 to 1000 nm. In general, it was observed that refractory species had lower aerosol detection efficiencies than volatile species, and there were strong dependencies on particle size and plasma torch residence time. Volatile species showed a distinct transition point at which aerosol detection efficiency began decreasing with increasing particle size. This critical diameter indicated the largest particle size for which complete particle detection should be expected and agreed with theories published in other works. Aerosol detection efficiencies also displayed power law dependencies on particle size. Aerosol detection efficiencies ranged from 10−5 to 10−11. Free molecular heat and mass transfer theory was applied, but evaporative phenomena were not sufficient to explain the dependence of aerosol detection on particle diameter. Additional work is needed to correlate experimental data with theory for metal-oxides where thermodynamic property data are sparse relative to pure elements. Finally, when matrix effects and the diffusion of ions inside the plasma were considered, mass loading was concluded to have had an effect on the dependence of detection efficiency on particle diameter.

  14. Measurement of X-ray emission efficiency for K-lines.

    PubMed

    Procop, M

    2004-08-01

    Results for the X-ray emission efficiency (counts per C per sr) of K-lines for selected elements (C, Al, Si, Ti, Cu, Ge) and, for the first time, also for compounds and alloys (SiC, GaP, AlCu, TiAlC) are presented. An energy-dispersive X-ray spectrometer (EDS) of known detection efficiency (counts per photon) was used to record the spectra at a take-off angle of 25 degrees, determined by the geometry of the scanning electron microscope's specimen chamber. The overall measurement uncertainty could be reduced to 5 to 10%, depending on the line intensity and energy. Measured emission efficiencies were compared with calculated efficiencies based on models applied in standardless analysis. The widespread XPP and PROZA models give somewhat too low emission efficiencies. The best agreement between measured and calculated efficiencies was achieved by replacing, in the modular PROZA96 model, the original expression for the ionization cross section with the formula given by Casnati et al. (1982). A discrepancy remains for carbon, probably due to the high overvoltage ratio.

  15. Applying ISO 11929:2010 Standard to detection limit calculation in least-squares based multi-nuclide gamma-ray spectrum evaluation

    NASA Astrophysics Data System (ADS)

    Kanisch, G.

    2017-05-01

    The concepts of ISO 11929 (2010) are applied to the evaluation of radionuclide activities from more complex multi-nuclide gamma-ray spectra. From net peak areas estimated by peak fitting, activities and their standard uncertainties are calculated by a weighted linear least-squares method with an additional step in which the uncertainties of the design matrix elements are taken into account. A numerical treatment of the standard's uncertainty function, based on ISO 11929 Annex C.5, leads to a procedure for deriving decision threshold and detection limit values. The methods shown allow interferences between radionuclide activities to be resolved, also when calculating detection limits, where they can improve the latter by including more than one gamma line per radionuclide. The common single-nuclide weighted mean is extended to an interference-corrected (generalized) weighted mean, which, combined with the least-squares method, allows faster detection limit calculations. In addition, a new grouped uncertainty budget is introduced, which for each radionuclide gives uncertainty budgets for seven main variables, such as net count rates, peak efficiencies, gamma emission intensities and others; grouping refers to summation over the lists of peaks per radionuclide.
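
    A minimal sketch of the weighted linear least-squares step described above, without the additional propagation of design-matrix uncertainties or the ISO 11929 decision-threshold machinery; the design matrix and count rates are hypothetical:

      import numpy as np

      def wls_activities(X, y, u_y):
          """Weighted linear least squares y = X @ a with weights 1/u_y**2.
          Returns activity estimates and their covariance matrix."""
          W = np.diag(1.0 / u_y ** 2)
          cov = np.linalg.inv(X.T @ W @ X)  # covariance of the fitted activities
          a = cov @ X.T @ W @ y             # fitted activities
          return a, cov

      # Hypothetical example: 3 gamma peaks, 2 radionuclides. X[i, j] is the expected net
      # count rate of peak i per unit activity of nuclide j (efficiency x emission intensity),
      # with the middle peak shared by both nuclides (an interference).
      X = np.array([[0.020, 0.000],
                    [0.004, 0.015],
                    [0.000, 0.030]])
      y = np.array([2.1, 1.9, 3.2])       # measured net count rates (1/s)
      u_y = np.array([0.10, 0.12, 0.15])  # their standard uncertainties

      a, cov = wls_activities(X, y, u_y)
      print(a, np.sqrt(np.diag(cov)))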

  16. System and method for automated object detection in an image

    DOEpatents

    Kenyon, Garrett T.; Brumby, Steven P.; George, John S.; Paiton, Dylan M.; Schultz, Peter F.

    2015-10-06

    A contour/shape detection model may use relatively simple and efficient kernels to detect target edges in an object within an image or video. A co-occurrence probability may be calculated for two or more edge features in an image or video using an object definition. Edge features may be differentiated between in response to measured contextual support, and prominent edge features may be extracted based on the measured contextual support. The object may then be identified based on the extracted prominent edge features.

  17. A suspended boron foil multi-wire proportional counter neutron detector

    NASA Astrophysics Data System (ADS)

    Nelson, Kyle A.; Edwards, Nathaniel S.; Hinson, Niklas J.; Wayant, Clayton D.; McGregor, Douglas S.

    2014-12-01

    Three natural boron foils, approximately 1.0 cm in diameter and 1.0 μm thick, were obtained from The Lebow Company and suspended in a multi-wire proportional counter. Suspending the B foils allowed the alpha particle and Li ion reaction products to escape simultaneously, one on each side of the foil, and be measured concurrently in the gas volume. The thermal neutron response pulse-height spectrum was obtained and two obvious peaks appear from the 94% and 6% branches of the 10B(n,α)7Li neutron reaction. Scanning electron microscope images were collected to obtain the exact B foil thicknesses and MCNP6 simulations were completed for those same B thicknesses. Pulse-height spectra obtained from the simulations were compared to experimental data and matched well. The theoretical intrinsic thermal-neutron detection efficiency for enriched 10B foils was calculated and is presented. Additionally, the intrinsic thermal neutron detection efficiency of the three natural B foils was calculated to be 3.2±0.2%.

  18. Model-based fault detection and isolation for intermittently active faults with application to motion-based thruster fault detection and isolation for spacecraft

    NASA Technical Reports Server (NTRS)

    Wilson, Edward (Inventor)

    2008-01-01

    The present invention is a method for detecting and isolating fault modes in a system having a model describing its behavior and regularly sampled measurements. The models are used to calculate past and present deviations from measurements that would result with no faults present, as well as with one or more potential fault modes present. Algorithms that calculate and store these deviations, along with memory of when said faults, if present, would have an effect on the said actual measurements, are used to detect when a fault is present. Related algorithms are used to exonerate false fault modes and finally to isolate the true fault mode. This invention is presented with application to detection and isolation of thruster faults for a thruster-controlled spacecraft. As a supporting aspect of the invention, a novel, effective, and efficient filtering method for estimating the derivative of a noisy signal is presented.

  19. Nonparametric method for failures detection and localization in the actuating subsystem of aircraft control system

    NASA Astrophysics Data System (ADS)

    Karpenko, S. S.; Zybin, E. Yu; Kosyanchuk, V. V.

    2018-02-01

    In this paper we design a nonparametric method for failure detection and localization in the aircraft control system that uses only measurements of the control signals and the aircraft states. It does not require a priori information about the aircraft model parameters, training, or statistical calculations, and is based on algebraic solvability conditions for the aircraft model identification problem. This makes it possible to significantly increase the efficiency of the detection and localization problem solution by completely eliminating errors associated with aircraft model uncertainties.

  20. Analysis of area-time efficiency for an integrated focal plane architecture

    NASA Astrophysics Data System (ADS)

    Robinson, William H.; Wills, D. Scott

    2003-05-01

    Monolithic integration of photodetectors, analog-to-digital converters, digital processing, and data storage can improve the performance and efficiency of next-generation portable image products. Our approach combines these components into a single processing element, which is tiled to form a SIMD focal plane processor array with the capability to execute early image applications such as median filtering (noise removal), convolution (smoothing), and inside edge detection (segmentation). Digitizing and processing a pixel at the detection site presents new design challenges, including the allocation of silicon resources. This research investigates the area-time (AT²) efficiency by adjusting the number of Pixels-per-Processing Element (PPE). Area calculations are based upon hardware implementations of components scaled for 250 nm or 120 nm technology. The total execution time is calculated from the sequential execution of each application on a generic focal plane architectural simulator. For a Quad-CIF system resolution (176×144), results show that 1 PPE provides the optimal area-time efficiency (5.7 μs²·mm² for 250 nm, 1.7 μs²·mm² for 120 nm) but requires a large silicon chip (2072 mm² for 250 nm, 614 mm² for 120 nm). Increasing the PPE to 4 or 16 can reduce silicon area by 48% and 60% respectively (120 nm technology) while maintaining performance within real-time constraints.

  1. Oligothiophene-based colorimetric and ratiometric fluorescence dual-channel cyanide chemosensor: Sensing ability, TD-DFT calculations and its application as an efficient solid state sensor

    NASA Astrophysics Data System (ADS)

    Lan, Linxin; Li, Tianduo; Wei, Tao; Pang, He; Sun, Tao; Wang, Enhua; Liu, Haixia; Niu, Qingfen

    2018-03-01

    An oligothiophene-based colorimetric and ratiometric fluorescence dual-channel cyanide chemosensor, 3T-2CN, is reported. Sensor 3T-2CN showed both naked-eye recognition and a ratiometric fluorescence response to CN- with excellent selectivity and high sensitivity. The sensing mechanism, based on the nucleophilic attack of CN- on the vinyl C=C bond, was confirmed by optical measurements, 1H NMR titration, FT-IR spectra as well as DFT/TD-DFT calculations. Moreover, the detection limit was calculated to be 0.19 μM, which is much lower than the maximum permitted concentration in drinking water (1.9 μM). Importantly, test strips (filter paper and TLC plates) containing 3T-2CN were fabricated, which can act as a practical and efficient solid-state optical sensor for CN- in field measurements.

  2. Improvement of antigen detection efficiency with the use of two-dimensional photonic crystal as a substrate

    NASA Astrophysics Data System (ADS)

    Dovzhenko, Dmitriy; Terekhin, Vladimir; Vokhmincev, Kirill; Sukhanova, Alyona; Nabiev, Igor

    2017-01-01

    Multiplex detection of different antigens in human serum, aimed at revealing diseases at an early stage, is currently of great interest. Many biosensors use fluorescent labels for specific detection of analytes; for instance, a common method for detecting antigens in human serum samples is the enzyme-linked immunosorbent assay (ELISA). One of the most effective ways to improve the sensitivity of this detection method is to use a substrate that enhances the fluorescent signal and makes it easier to collect. Two-dimensional (2D) photonic crystals are well suited for this purpose because of their ability to enhance the luminescent signal, control light propagation, and allow the analysis to be performed directly on their surface. In our study we calculated the optimal parameters for a 2D photonic crystal consisting of an array of silicon nano-rods, fabricated such a photonic crystal on a silicon substrate using reactive ion etching, and demonstrated the possibility of its efficient application as a substrate for ELISA detection of human cancer antigens.

  3. Constrained motion model of mobile robots and its applications.

    PubMed

    Zhang, Fei; Xi, Yugeng; Lin, Zongli; Chen, Weidong

    2009-06-01

    Target detecting and dynamic coverage are fundamental tasks in mobile robotics and represent two important features of mobile robots: mobility and perceptivity. This paper establishes the constrained motion model and sensor model of a mobile robot to represent these two features and defines the k-step reachable region to describe the states that the robot may reach. We show that the calculation of the k-step reachable region can be reduced from that of 2^k reachable regions with the fixed motion styles to k + 1 such regions and provide an algorithm for its calculation. Based on the constrained motion model and the k-step reachable region, the problems associated with target detecting and dynamic coverage are formulated and solved. For target detecting, the k-step detectable region is used to describe the area that the robot may detect, and an algorithm for detecting a target and planning the optimal path is proposed. For dynamic coverage, the k-step detected region is used to represent the area that the robot has detected during its motion, and the dynamic-coverage strategy and algorithm are proposed. Simulation results demonstrate the efficiency of the coverage algorithm in both convex and concave environments.

  4. Efficient Detection of Carbapenemase Activity in Enterobacteriaceae by Matrix-Assisted Laser Desorption Ionization-Time of Flight Mass Spectrometry in Less Than 30 Minutes.

    PubMed

    Lasserre, Camille; De Saint Martin, Luc; Cuzon, Gaelle; Bogaerts, Pierre; Lamar, Estelle; Glupczynski, Youri; Naas, Thierry; Tandé, Didier

    2015-07-01

    The recognition of carbapenemase-producing Enterobacteriaceae (CPE) isolates is a major laboratory challenge, and their inappropriate or delayed detection may have negative impacts on patient management and on the implementation of infection control measures. We describe here a matrix-assisted laser desorption ionization-time of flight (MALDI-TOF)-based method to detect carbapenemase activity in Enterobacteriaceae. After a 20-min incubation of the isolate with 0.5 mg/ml imipenem at 37°C, supernatants were analyzed by MALDI-TOF in order to identify peaks corresponding to imipenem (300 Da) and an imipenem metabolite (254 Da). A total of 223 strains, 77 CPE (OXA-48 variants, KPC, NDM, VIM, IMI, IMP, and NMC-A) and 146 non-CPE (cephalosporinases, extended-spectrum β-lactamases [ESBLs], and porin defects), were tested and used to calculate a ratio of imipenem hydrolysis: mass spectrometry [MS] ratio = metabolite/(imipenem + metabolite). An MS ratio cutoff was statistically determined to classify strains as carbapenemase producers (MS ratio of ≥0.82). We validated this method first by testing 30 of our 223 isolates (15 CPE and 15 non-CPE) 10 times to calculate an intraclass correlation coefficient (ICC of 0.98), showing the excellent repeatability of the method. Second, 43 strains (25 CPE and 18 non-CPE) different from the 223 strains used to calculate the ratio cutoff were used as external controls and blind tested. They yielded sensitivity and specificity of 100%. The total cost per test is <0.10 U.S. dollars (USD). This easy-to-perform assay is time-saving, cost-efficient, and highly reliable and might be used in any routine laboratory, given the availability of mass spectrometry, to detect CPE. Copyright © 2015, American Society for Microbiology. All Rights Reserved.
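
    The classification step described above reduces to a simple ratio and cutoff. A minimal sketch, using the metabolite/(imipenem + metabolite) ratio and the 0.82 cutoff quoted in the abstract, is shown below; the peak intensities and the function name are hypothetical.

```python
def classify_carbapenemase(imipenem_intensity, metabolite_intensity, cutoff=0.82):
    """Compute the MS hydrolysis ratio and flag carbapenemase producers.

    ratio = metabolite / (imipenem + metabolite); ratios >= cutoff are
    classified as carbapenemase positive (0.82 is the cutoff quoted above).
    """
    total = imipenem_intensity + metabolite_intensity
    if total == 0:
        raise ValueError("no imipenem or metabolite signal detected")
    ratio = metabolite_intensity / total
    return ratio, ratio >= cutoff

# Hypothetical peak intensities for the 300 Da (imipenem) and 254 Da
# (metabolite) peaks of a hydrolysing isolate.
ratio, is_cpe = classify_carbapenemase(1200.0, 9800.0)
print(f"MS ratio = {ratio:.2f}, carbapenemase producer: {is_cpe}")
```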

  5. A combined experimental-modelling method for the detection and analysis of pollution in coastal zones

    NASA Astrophysics Data System (ADS)

    Limić, Nedzad; Valković, Vladivoj

    1996-04-01

    Pollution of coastal seas with toxic substances can be efficiently detected by examining toxic materials in sediment samples. These samples contain information on the overall pollution from surrounding sources such as yacht anchorages, nearby industries, sewage systems, etc. In an efficient analysis of pollution one must determine the contribution from each individual source. In this work it is demonstrated that a modelling method can be utilized for solving this latter problem. The modelling method is based on a unique interpretation of concentrations in sediments from all sampling stations. The proposed method is a synthesis consisting of the utilization of PIXE as an efficient method of pollution concentration determination and the code ANCOPOL (N. Limic and R. Benis, The computer code ANCOPOL, SimTel/msdos/geology, 1994 [1]) for the calculation of contributions from the main polluters. The efficiency and limits of the proposed method are demonstrated by discussing trace element concentrations in sediments of Punat Bay on the island of Krk in Croatia.

  6. Study of solid-conversion gaseous detector based on GEM for high energy X-ray industrial CT.

    PubMed

    Zhou, Rifeng; Zhou, Yaling

    2014-01-01

    General gaseous ionization detectors are not suitable for high-energy X-ray industrial computed tomography (HEICT) because of their inherent limitations, especially low detection efficiency and large volume. The goal of this study was to investigate a new type of gaseous detector to solve these problems. The novel detector uses a metal foil as the X-ray converter to improve the conversion efficiency, and a Gas Electron Multiplier (hereinafter "GEM") as the electron amplifier to reduce its volume. The detection mechanism and signal formation of the detector are discussed in detail. The conversion efficiency was calculated using the EGSnrc Monte Carlo code, and the transport of photons and the secondary electron avalanche in the detector were simulated with the Maxwell and Garfield codes. The results indicate that this detector has a higher conversion efficiency as well as a smaller volume. Theoretically, this kind of detector could be a strong candidate for replacing conventional detectors in HEICT.

  7. First-principles study of defects in TlBr

    NASA Astrophysics Data System (ADS)

    Du, Mao-Hua

    2010-03-01

    TlBr is a promising radiation detection material due to its high gamma-ray stopping efficiency, high resistivity (that reduces dark current and noise), large enough band gap of 2.68 eV (suitable for room temperature applications), and long electron carrier lifetime (for efficient collection of the radiation-generated carriers). The defect properties obtained from density functional calculations will be presented to discuss their roles in carrier trapping and recombination (which affects the carrier lifetime) and carrier compensation (which affects the resistivity).

  8. Electrophoretically mediated microanalysis of leucine aminopeptidase using two-photon excited fluorescence detection on a microchip.

    PubMed

    Zugel, S A; Burke, B J; Regnier, F E; Lytle, F E

    2000-11-15

    Two-photon excited fluorescence detection was performed on a microfabricated electrophoresis chip. A calibration curve of the fluorescent tag beta-naphthylamine was constructed, yielding a sensitivity of 2.5 × 10^9 counts M^-1, corresponding to a detection limit of 60 nM. Additionally, leucine aminopeptidase was assayed on the chip using electrophoretically mediated microanalysis. The differential electroosmotic mobilities of the enzyme and substrate, L-leucine beta-naphthylamide, allowed for efficient mixing in an open channel, resulting in the detection of a 30 nM enzyme solution under constant potential. A zero potential incubation for 1 min yielded a calculated detection limit of 4 nM enzyme.
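
    The reported detection limit follows from the calibration sensitivity once a noise criterion is chosen. A hedged sketch, assuming the common 3σ convention and a hypothetical blank standard deviation (neither is stated in the abstract), is:

```python
def detection_limit(sensitivity_counts_per_M, blank_sd_counts, k=3.0):
    """Estimate a limit of detection as k * sigma_blank / sensitivity.

    The k = 3 convention and the blank standard deviation are assumptions;
    the abstract only reports the calibration sensitivity and the final LOD.
    """
    return k * blank_sd_counts / sensitivity_counts_per_M

# With a hypothetical blank noise of ~50 counts, the reported 2.5e9 counts/M
# sensitivity gives an LOD on the order of the 60 nM quoted in the abstract.
lod_M = detection_limit(sensitivity_counts_per_M=2.5e9, blank_sd_counts=50.0)
print(f"LOD is roughly {lod_M * 1e9:.0f} nM")
```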

  9. A hybrid fuzzy logic and extreme learning machine for improving efficiency of circulating water systems in power generation plant

    NASA Astrophysics Data System (ADS)

    Aziz, Nur Liyana Afiqah Abdul; Siah Yap, Keem; Afif Bunyamin, Muhammad

    2013-06-01

    This paper presents a new approach to fault detection for improving the efficiency of the circulating water system (CWS) in a power generation plant using a hybrid Fuzzy Logic System (FLS) and Extreme Learning Machine (ELM) neural network. The FLS is a mathematical tool for handling uncertainty where precision and significance apply in the real world; it is based on natural language and has the ability of "computing with words". The ELM is an extremely fast learning algorithm for neural networks that can complete the training cycle in a very short time. By combining the FLS and ELM, a new hybrid model, i.e., FLS-ELM, is developed. The applicability of this proposed hybrid model is validated on fault detection in the CWS, which may help to improve the overall efficiency of the power generation plant, hence consuming fewer natural resources and producing less pollution.

  10. Design of a sedimentation hole in a microfluidic channel to remove blood cells from diluted whole blood

    NASA Astrophysics Data System (ADS)

    Kuroda, Chiaki; Ohki, Yoshimichi; Ashiba, Hiroki; Fujimaki, Makoto; Awazu, Koichi; Makishima, Makoto

    2017-03-01

    With the aim of developing a sensor for rapidly detecting viruses in a drop of blood, in this study, we analyze the shape of a hole in a microfluidic channel in relation to the efficiency of sedimentation of blood cells. The efficiency of sedimentation is examined on the basis of our calculation and experimental results for two types of sedimentation hole, cylindrical and truncated conical holes, focusing on the Boycott effect, which can promote the sedimentation of blood cells from a downward-facing wall. As a result, we demonstrated that blood cells can be eliminated with an efficiency of 99% or higher by retaining a diluted blood sample of about 30 µL in the conical hole for only 2 min. Moreover, we succeeded in detecting the anti-hepatitis B surface antigen antibody in blood using a waveguide-mode sensor equipped with a microfluidic channel having the conical sedimentation hole.

  11. Whole-rock uranium analysis by fission track activation

    NASA Technical Reports Server (NTRS)

    Weiss, J. R.; Haines, E. L.

    1974-01-01

    We report a whole-rock uranium method in which the polished sample and track detector are separated in a vacuum chamber. Irradiation with thermal neutrons induces uranium fission in the sample, and the detector records the integrated fission track density. Detection efficiency and geometric factors are calculated and compared with calibration experiments.

  12. Detection and quantification of solute clusters in a nanostructured ferritic alloy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, Michael K.; Larson, David J.; Reinhard, D. A.

    2014-12-26

    A series of simulated atom probe datasets were examined with a friends-of-friends method to establish the detection efficiency required to resolve solute clusters in the ferrite phase of a 14YWT nanostructured ferritic alloy. The size and number densities of solute clusters in the ferrite of the as-milled mechanically-alloyed condition and the stir zone of a friction stir weld were estimated with a prototype high-detection-efficiency (~80%) local electrode atom probe. High number densities, 1.8 × 10^24 m^-3 and 1.2 × 10^24 m^-3, respectively, of solute clusters containing between 2 and 9 solute atoms of Ti, Y and O were detected for these two conditions. Furthermore, these results support first-principles calculations that predicted that vacancies stabilize these Ti–Y–O clusters, which retard diffusion and contribute to the excellent high-temperature stability of the microstructure and radiation tolerance of nanostructured ferritic alloys.

  13. Thin Cloud Detection Method by Linear Combination Model of Cloud Image

    NASA Astrophysics Data System (ADS)

    Liu, L.; Li, J.; Wang, Y.; Xiao, Y.; Zhang, W.; Zhang, S.

    2018-04-01

    Existing cloud detection methods in photogrammetry often extract image features from remote sensing images directly and then use them to classify images as cloud or other cover. When the cloud is thin and small, however, these methods become inaccurate. In this paper, a linear combination model of cloud images is proposed; using this model, the underlying surface information of remote sensing images can be removed, so the cloud detection result becomes more accurate. First, the automatic cloud detection program in this paper uses the linear combination model to split the cloud information and surface information in transparent cloud images, then uses different image features to recognize the cloud parts. For computational efficiency, an AdaBoost classifier was introduced to combine the different features into a cloud classifier. The AdaBoost classifier can select the most effective features from many candidate features, so the calculation time is largely reduced. Finally, we selected a tree-structure-based cloud detection method and a multiple-feature detection method using an SVM classifier for comparison with the proposed method; the experimental data show that the proposed cloud detection program has high accuracy and fast calculation speed.
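
    As a rough illustration of the feature-combination step, a boosted classifier can be trained on per-pixel features once the surface contribution has been removed. The features, labels and parameters below are placeholders rather than the paper's data; only the use of AdaBoost to combine features is taken from the abstract.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

# Hypothetical per-pixel feature vectors (e.g., brightness, texture, residual
# from the linear combination model) and cloud / non-cloud labels.
X_train = np.random.rand(500, 6)
y_train = np.random.randint(0, 2, 500)

# AdaBoost combines many weak learners and effectively emphasises the most
# discriminative features, which is the speed argument made in the abstract.
clf = AdaBoostClassifier(n_estimators=50).fit(X_train, y_train)
cloud_mask = clf.predict(np.random.rand(100, 6))
```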

  14. A new method for detecting small and dim targets in starry background

    NASA Astrophysics Data System (ADS)

    Yao, Rui; Zhang, Yanning; Jiang, Lei

    2011-08-01

    Detection of small visible optical space targets is one of the key issues in research on long-range early warning and space debris surveillance. The SNR (signal-to-noise ratio) of the target is very low because of the influence of the imaging device itself. Random noise and background movement also increase the difficulty of target detection. In order to detect small visible optical space targets effectively and rapidly, we propose a novel detection method based on statistical theory. Firstly, we build a reasonable statistical model of the visible optical space image. Secondly, we extract SIFT (Scale-Invariant Feature Transform) features of the image frames, calculate the transform relationship, and use it to compensate for the whole visual field's movement. Thirdly, the influence of stars is removed using an interframe difference method, and a segmentation threshold separating candidate targets from noise is found using the OTSU method. Finally, we calculate a statistical quantity to judge whether a target is present at every pixel position in the image. Theoretical analysis shows the relationship between false alarm probability and detection probability at different SNRs. The experimental results show that this method can detect targets efficiently, even when the target passes in front of stars.
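
    A compact sketch of the registration and differencing stages is given below. It assumes 8-bit grayscale frames and uses SIFT matching, a RANSAC homography, frame warping and Otsu thresholding; the star-removal details and the per-pixel statistical test of the paper are not reproduced.

```python
import cv2
import numpy as np

def compensate_and_difference(prev_gray, curr_gray):
    """Register two uint8 grayscale frames and threshold their difference."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(prev_gray, None)
    kp2, des2 = sift.detectAndCompute(curr_gray, None)
    matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(des1, des2)
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    # Homography compensates whole-field motion between the two frames.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    warped = cv2.warpPerspective(prev_gray, H, prev_gray.shape[::-1])
    # Otsu threshold separates candidate targets from residual noise.
    diff = cv2.absdiff(curr_gray, warped)
    _, mask = cv2.threshold(diff, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return mask
```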

  15. Vision-Based Obstacle Detection in UAV Imaging

    NASA Astrophysics Data System (ADS)

    Badrloo, S.; Varshosaz, M.

    2017-08-01

    Detecting and preventing collisions with obstacles is crucial in UAV navigation and control. Most common obstacle detection techniques are currently sensor-based. Small UAVs are not able to carry obstacle detection sensors such as radar; therefore, vision-based methods are considered, which can be divided into stereo-based and mono-based techniques. Mono-based methods are classified into two groups: foreground-background separation and brain-inspired methods. Brain-inspired methods are highly efficient in obstacle detection; hence, this research aims to detect obstacles using brain-inspired techniques, which rely on the apparent enlargement of an obstacle as it is approached. Recent research in this field has concentrated on matching SIFT points along with the SIFT size-ratio factor and the area ratio of convex hulls in two consecutive frames to detect obstacles. This method cannot distinguish between near and far obstacles or handle obstacles in complex environments, and it is sensitive to wrongly matched points. In order to solve the above-mentioned problems, this research calculates the dist-ratio of matched points, and then each point is examined to distinguish between far and close obstacles. The results demonstrate the high efficiency of the proposed method in complex environments.

  16. Material efficiency studies for a Compton camera designed to measure characteristic prompt gamma rays emitted during proton beam radiotherapy

    PubMed Central

    Robertson, Daniel; Polf, Jerimy C; Peterson, Steve W; Gillin, Michael T; Beddar, Sam

    2011-01-01

    Prompt gamma rays emitted from biological tissues during proton irradiation carry dosimetric and spectroscopic information that can assist with treatment verification and provide an indication of the biological response of the irradiated tissues. Compton cameras are capable of determining the origin and energy of gamma rays. However, prompt gamma monitoring during proton therapy requires new Compton camera designs that perform well at the high gamma energies produced when tissues are bombarded with therapeutic protons. In this study we optimize the materials and geometry of a three-stage Compton camera for prompt gamma detection and calculate the theoretical efficiency of such a detector. The materials evaluated in this study include germanium, bismuth germanate (BGO), NaI, xenon, silicon and lanthanum bromide (LaBr3). For each material, the dimensions of each detector stage were optimized to produce the maximum number of relevant interactions. These results were used to predict the efficiency of various multi-material cameras. The theoretical detection efficiencies of the most promising multi-material cameras were then calculated for the photons emitted from a tissue-equivalent phantom irradiated by therapeutic proton beams ranging from 50 to 250 MeV. The optimized detector stages had a lateral extent of 10 × 10 cm2 with the thickness of the initial two stages dependent on the detector material. The thickness of the third stage was fixed at 10 cm regardless of material. The most efficient single-material cameras were composed of germanium (3 cm) and BGO (2.5 cm). These cameras exhibited efficiencies of 1.15 × 10−4 and 9.58 × 10−5 per incident proton, respectively. The most efficient multi-material camera design consisted of two initial stages of germanium (3 cm) and a final stage of BGO, resulting in a theoretical efficiency of 1.26 × 10−4 per incident proton. PMID:21508442

  17. Four pi calibration and modeling of a bare germanium detector in a cylindrical field source

    NASA Astrophysics Data System (ADS)

    Dewberry, R. A.; Young, J. E.

    2012-05-01

    In this paper we describe a 4π cylindrical field acquisition configuration surrounding a bare (unshielded, uncollimated) high purity germanium detector. We perform an efficiency calibration with a flexible planar source and model the configuration in the 4π cylindrical field. We then use exact calculus to model the flux on the cylindrical sides and end faces of the detector. We demonstrate that the model accurately represents the experimental detection efficiency compared to that of a point source and to Monte Carlo N-particle (MCNP) calculations of the flux. The model sums over the entire source surface area and the entire detector surface area including both faces and the detector's cylindrical sides. Agreement between the model and both experiment and the MCNP calculation is within 8%.

  18. Automatic Detection of Storm Damages Using High-Altitude Photogrammetric Imaging

    NASA Astrophysics Data System (ADS)

    Litkey, P.; Nurminen, K.; Honkavaara, E.

    2013-05-01

    The risk of storms that cause damage in forests is increasing due to climate change. Quickly detecting fallen trees, assessing their amount and efficiently collecting them are of great importance for economic and environmental reasons. Visually detecting and delineating storm damage is a laborious and error-prone process; thus, it is important to develop cost-efficient and highly automated methods. The objective of our research project is to investigate and develop a reliable and efficient method for automatic storm damage detection based on airborne imagery collected after a storm. The method requires before-storm and after-storm surface models. A difference surface is calculated from the two DSMs, and the locations where significant changes have appeared are automatically detected. In our previous research we used a four-year-old airborne laser scanning surface model as the before-storm surface. The after-storm DSM was produced from the photogrammetric images using the Next Generation Automatic Terrain Extraction (NGATE) algorithm of the Socet Set software. We obtained 100% accuracy in the detection of major storm damage. In this investigation we will further evaluate the sensitivity of the storm-damage detection process. We will investigate the potential of national airborne photography, collected in the leaf-off season, to automatically produce a before-storm DSM using image matching. We will also compare the impact of the terrain extraction algorithm on the results. Our results will also promote the potential of national open-source data sets in the management of natural disasters.

  19. Modeling the focusing efficiency of lobster-eye optics for image shifting depending on the soft x-ray wavelength.

    PubMed

    Su, Luning; Li, Wei; Wu, Mingxuan; Su, Yun; Guo, Chongling; Ruan, Ningjuan; Yang, Bingxin; Yan, Feng

    2017-08-01

    Lobster-eye optics is widely applied to space x-ray detection missions and x-ray security checks for its wide field of view and low weight. This paper presents a theoretical model to obtain spatial distribution of focusing efficiency based on lobster-eye optics in a soft x-ray wavelength. The calculations reveal the competition mechanism of contributions to the focusing efficiency between the geometrical parameters of lobster-eye optics and the reflectivity of the iridium film. In addition, the focusing efficiency image depending on x-ray wavelengths further explains the influence of different geometrical parameters of lobster-eye optics and different soft x-ray wavelengths on focusing efficiency. These results could be beneficial to optimize parameters of lobster-eye optics in order to realize maximum focusing efficiency.

  20. HADES RV Programme with HARPS-N at TNG. II. Data treatment and simulations

    NASA Astrophysics Data System (ADS)

    Perger, M.; García-Piquer, A.; Ribas, I.; Morales, J. C.; Affer, L.; Micela, G.; Damasso, M.; Suárez-Mascareño, A.; González-Hernández, J. I.; Rebolo, R.; Herrero, E.; Rosich, A.; Lafarga, M.; Bignamini, A.; Sozzetti, A.; Claudi, R.; Cosentino, R.; Molinari, E.; Maldonado, J.; Maggio, A.; Lanza, A. F.; Poretti, E.; Pagano, I.; Desidera, S.; Gratton, R.; Piotto, G.; Bonomo, A. S.; Martinez Fiorenzano, A. F.; Giacobbe, P.; Malavolta, L.; Nascimbeni, V.; Rainer, M.; Scandariato, G.

    2017-02-01

    Context. The distribution of exoplanets around low-mass stars is still not well understood. Such stars, however, present an excellent opportunity for reaching down to the rocky and habitable planet domains. The number of current detections used for statistical purposes remains relatively modest and different surveys, using both photometry and precise radial velocities, are searching for planets around M dwarfs. Aims: Our HARPS-N red dwarf exoplanet survey is aimed at the detection of new planets around a sample of 78 selected stars, together with the subsequent characterization of their activity properties. Here we investigate the survey performance and strategy. Methods: From 2700 observed spectra, we compare the radial velocity determinations of the HARPS-N DRS pipeline and the HARPS-TERRA code, calculate the mean activity jitter level, evaluate the planet detection expectations, and address the general question of how to define the strategy of spectroscopic surveys in order to be most efficient in the detection of planets. Results: We find that the HARPS-TERRA radial velocities show less scatter and we calculate a mean activity jitter of 2.3 m s-1 for our sample. For a general radial velocity survey with limited observing time, the number of observations per star is key for the detection efficiency. In the case of an early M-type target sample, we conclude that approximately 50 observations per star with exposure times of 900 s and precisions of approximately 1 m s-1 maximizes the number of planet detections. Based on observations made with the Italian Telescopio Nazionale Galileo (TNG), operated on the island of La Palma by the Fundación Galileo Galilei of the INAF (Istituto Nazionale di Astrofisica) at the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrofísica de Canarias (IAC).

  1. Turn-off fluorescence sensor for the detection of ferric ion in water using green synthesized N-doped carbon dots and its bio-imaging.

    PubMed

    Edison, Thomas Nesakumar Jebakumar Immanuel; Atchudan, Raji; Shim, Jae-Jin; Kalimuthu, Senthilkumar; Ahn, Byeong-Cheol; Lee, Yong Rok

    2016-05-01

    This paper reports a turn-off fluorescence sensor for the Fe(3+) ion in water using fluorescent N-doped carbon dots as a probe. A simple and efficient hydrothermal carbonization of Prunus avium fruit extract for the synthesis of fluorescent nitrogen-doped carbon dots (N-CDs) is described. This green approach proceeds quickly and provides good-quality N-CDs. The mean size of the synthesized N-CDs was approximately 7 nm, as calculated from high-resolution transmission electron microscopy images. X-ray photoelectron spectroscopy and Fourier transform infrared spectroscopy revealed the presence of -OH, -NH2, -COOH, and -CO functional groups on the surface of the CDs. The N-CDs showed excellent fluorescent properties and emitted blue fluorescence at 411 nm upon excitation at 310 nm. The calculated quantum yield of the synthesized N-CDs is 13% against quinine sulfate as a reference fluorophore. The synthesized N-CDs were used as a fluorescent probe for the selective and sensitive detection of biologically important Fe(3+) ions in water by fluorescence spectroscopy and for bio-imaging of MDA-MB-231 cells. The limit of detection (LOD) and the Stern-Volmer quenching constant of the synthesized N-CDs for Fe(3+) ions were 0.96 μM and 2.0958 × 10^3 M^-1, respectively. The green-synthesized N-CDs are thus a promising candidate for the detection of Fe(3+) ions and for bio-imaging. Copyright © 2016 Elsevier B.V. All rights reserved.
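
    The quoted quenching constant is obtained from a Stern-Volmer plot, F0/F = 1 + Ksv[Q]. A minimal sketch with hypothetical quenching data (only the order of magnitude of Ksv is taken from the abstract) is:

```python
import numpy as np

# Hypothetical quenching data: Fe3+ concentrations (M) and measured F0/F ratios.
fe_conc = np.array([0.0, 2e-5, 4e-5, 6e-5, 8e-5, 1e-4])
f0_over_f = 1.0 + 2.1e3 * fe_conc + np.random.normal(0, 0.005, fe_conc.size)

# Stern-Volmer: F0/F = 1 + Ksv * [Q]; a linear fit of F0/F against [Q]
# gives Ksv as the slope (the abstract reports roughly 2.1e3 for these N-CDs).
ksv, intercept = np.polyfit(fe_conc, f0_over_f, 1)
print(f"Ksv is about {ksv:.3g} M^-1 (intercept {intercept:.3f})")
```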

  2. X-ray detectability of accreting isolated black holes in our Galaxy

    NASA Astrophysics Data System (ADS)

    Tsuna, Daichi; Kawanaka, Norita; Totani, Tomonori

    2018-06-01

    Detectability of isolated black holes (IBHs) without a companion star but emitting X-rays by accretion from dense interstellar medium (ISM) or molecular cloud gas is investigated. We calculate orbits of IBHs in the Galaxy to derive a realistic spatial distribution of IBHs for various mean values of kick velocity at their birth υavg. X-ray luminosities of these IBHs are then calculated considering various phases of ISM and molecular clouds for a wide range of the accretion efficiency λ (a ratio of the actual accretion rate to the Bondi rate) that is rather uncertain. It is found that detectable IBHs mostly reside near the Galactic Centre (GC), and hence taking the Galactic structure into account is essential. In the hard X-ray band, where identification of IBHs from other contaminating X-ray sources may be easier, the expected number of IBHs detectable by the past survey by NuSTAR towards GC is at most order unity. However, 30-100 IBHs may be detected by the future survey by FORCE with an optimistic parameter set of υavg = 50 km s-1 and λ = 0.1, implying that it may be possible to detect IBHs or constrain the model parameters.

  3. Oligothiophene-based colorimetric and ratiometric fluorescence dual-channel cyanide chemosensor: Sensing ability, TD-DFT calculations and its application as an efficient solid state sensor.

    PubMed

    Lan, Linxin; Li, Tianduo; Wei, Tao; Pang, He; Sun, Tao; Wang, Enhua; Liu, Haixia; Niu, Qingfen

    2018-03-15

    An oligothiophene-based colorimetric and ratiometric fluorescence dual-channel cyanide chemosensor 3 T-2CN was reported. Sensor 3 T-2CN showed both naked-eye recognition and ratiometric fluorescence response for CN- with an excellent selectivity and high sensitivity. The sensing mechanism based on the nucleophilic attack of CN- on the vinyl C=C bond has been successfully confirmed by the optical measurements, 1H NMR titration, FT-IR spectra as well as the DFT/TD-DFT calculations. Moreover, the detection limit was calculated to be 0.19 μM, which is much lower than the maximum permissible concentration in drinking water (1.9 μM). Importantly, test strips (filter paper and TLC plates) containing 3 T-2CN were fabricated, which could act as a practical and efficient solid state optical sensor for CN- in field measurements. Copyright © 2017 Elsevier B.V. All rights reserved.

  4. Automated object detection and tracking with a flash LiDAR system

    NASA Astrophysics Data System (ADS)

    Hammer, Marcus; Hebel, Marcus; Arens, Michael

    2016-10-01

    The detection of objects, or persons, is a common task in the fields of environmental surveillance, object observation or danger defense. There are several approaches for automated detection with conventional imaging sensors as well as with LiDAR sensors, but for the latter the real-time detection is hampered by the scanning character and therefore by the data distortion of most LiDAR systems. The paper presents a solution for real-time data acquisition of a flash LiDAR sensor with synchronous raw data analysis, point cloud calculation, object detection, calculation of the next best view and steering of the pan-tilt head of the sensor. As a result the attention is always focused on the object, independent of the behavior of the object. Even for highly volatile and rapid changes in the direction of motion the object is kept in the field of view. The experimental setup used in this paper is realized with an elementary person detection algorithm at medium distances (20 m to 60 m) to show the efficiency of the system for objects with a high angular speed. It is easy to replace the detection part by any other object detection algorithm and thus it is easy to track nearly any object, for example a car, a boat or a UAV at various distances.

  5. Implementation of a channelized Hotelling observer model to assess image quality of x-ray angiography systems.

    PubMed

    Favazza, Christopher P; Fetterly, Kenneth A; Hangiandreou, Nicholas J; Leng, Shuai; Schueler, Beth A

    2015-01-01

    Evaluation of flat-panel angiography equipment through conventional image quality metrics is limited by the scope of standard spatial-domain image quality metric(s), such as contrast-to-noise ratio and spatial resolution, or by restricted access to appropriate data to calculate Fourier domain measurements, such as modulation transfer function, noise power spectrum, and detective quantum efficiency. Observer models have been shown capable of overcoming these limitations and are able to comprehensively evaluate medical-imaging systems. We present a spatial domain-based channelized Hotelling observer model to calculate the detectability index (DI) of our different sized disks and compare the performance of different imaging conditions and angiography systems. When appropriate, changes in DIs were compared to expectations based on the classical Rose model of signal detection to assess linearity of the model with quantum signal-to-noise ratio (SNR) theory. For these experiments, the estimated uncertainty of the DIs was less than 3%, allowing for precise comparison of imaging systems or conditions. For most experimental variables, DI changes were linear with expectations based on quantum SNR theory. DIs calculated for the smallest objects demonstrated nonlinearity with quantum SNR theory due to system blur. Two angiography systems with different detector element sizes were shown to perform similarly across the majority of the detection tasks.

  6. A Decision Mixture Model-Based Method for Inshore Ship Detection Using High-Resolution Remote Sensing Images

    PubMed Central

    Bi, Fukun; Chen, Jing; Zhuang, Yin; Bian, Mingming; Zhang, Qingjun

    2017-01-01

    With the rapid development of optical remote sensing satellites, ship detection and identification based on large-scale remote sensing images has become a significant maritime research topic. Compared with traditional ocean-going vessel detection, inshore ship detection has received increasing attention in harbor dynamic surveillance and maritime management. However, because the harbor environment is complex and the gray-level information and texture features of docked ships and their connected dock regions are indistinguishable, most popular detection methods are limited by their calculation efficiency and detection accuracy. In this paper, a novel hierarchical method that combines an efficient candidate scanning strategy and an accurate candidate identification mixture model is presented for inshore ship detection in complex harbor areas. First, in the candidate region extraction phase, an omnidirectional intersected two-dimension scanning (OITDS) strategy is designed to rapidly extract candidate regions from the land-water segmented images. In the candidate region identification phase, a decision mixture model (DMM) is proposed to identify real ships from candidate objects. Specifically, to improve the robustness regarding the diversity of ships, a deformable part model (DPM) was employed to train a key part sub-model and a whole ship sub-model. Furthermore, to improve the identification accuracy, a surrounding correlation context sub-model is built. Finally, to increase the accuracy of candidate region identification, these three sub-models are integrated into the proposed DMM. Experiments were performed on numerous large-scale harbor remote sensing images, and the results showed that the proposed method has high detection accuracy and rapid computational efficiency. PMID:28640236

  7. A Decision Mixture Model-Based Method for Inshore Ship Detection Using High-Resolution Remote Sensing Images.

    PubMed

    Bi, Fukun; Chen, Jing; Zhuang, Yin; Bian, Mingming; Zhang, Qingjun

    2017-06-22

    With the rapid development of optical remote sensing satellites, ship detection and identification based on large-scale remote sensing images has become a significant maritime research topic. Compared with traditional ocean-going vessel detection, inshore ship detection has received increasing attention in harbor dynamic surveillance and maritime management. However, because the harbor environment is complex and the gray-level information and texture features of docked ships and their connected dock regions are indistinguishable, most popular detection methods are limited by their calculation efficiency and detection accuracy. In this paper, a novel hierarchical method that combines an efficient candidate scanning strategy and an accurate candidate identification mixture model is presented for inshore ship detection in complex harbor areas. First, in the candidate region extraction phase, an omnidirectional intersected two-dimension scanning (OITDS) strategy is designed to rapidly extract candidate regions from the land-water segmented images. In the candidate region identification phase, a decision mixture model (DMM) is proposed to identify real ships from candidate objects. Specifically, to improve the robustness regarding the diversity of ships, a deformable part model (DPM) was employed to train a key part sub-model and a whole ship sub-model. Furthermore, to improve the identification accuracy, a surrounding correlation context sub-model is built. Finally, to increase the accuracy of candidate region identification, these three sub-models are integrated into the proposed DMM. Experiments were performed on numerous large-scale harbor remote sensing images, and the results showed that the proposed method has high detection accuracy and rapid computational efficiency.

  8. Kepler Planet Detection Metrics: Pixel-Level Transit Injection Tests of Pipeline Detection Efficiency for Data Release 25

    NASA Technical Reports Server (NTRS)

    Christiansen, Jessie L.

    2017-01-01

    This document describes the results of the fourth pixel-level transit injection experiment, which was designed to measure the detection efficiency of both the Kepler pipeline (Jenkins 2002, 2010; Jenkins et al. 2017) and the Robovetter (Coughlin 2017). Previous transit injection experiments are described in Christiansen et al. (2013, 2015a,b, 2016). In order to calculate planet occurrence rates using a given Kepler planet catalogue, produced with a given version of the Kepler pipeline, we need to know the detection efficiency of that pipeline. This can be empirically determined by injecting a suite of simulated transit signals into the Kepler data, processing the data through the pipeline, and examining the distribution of successfully recovered transits. This document describes the results for the pixel-level transit injection experiment performed to accompany the final Q1-Q17 Data Release 25 (DR25) catalogue (Thompson et al. 2017) of the Kepler Objects of Interest. The catalogue was generated using the SOC pipeline version 9.3 and the DR25 Robovetter acting on the uniformly processed Q1-Q17 DR25 light curves (Thompson et al. 2016a) and assuming the Q1-Q17 DR25 Kepler stellar properties (Mathur et al. 2017).

  9. Using Decision-Analytic Modeling to Isolate Interventions That Are Feasible, Efficient and Optimal: An Application from the Norwegian Cervical Cancer Screening Program.

    PubMed

    Pedersen, Kine; Sørbye, Sveinung Wergeland; Burger, Emily Annika; Lönnberg, Stefan; Kristiansen, Ivar Sønbø

    2015-12-01

    Decision makers often need to simultaneously consider multiple criteria or outcomes when deciding whether to adopt new health interventions. Using decision analysis within the context of cervical cancer screening in Norway, we aimed to aid decision makers in identifying a subset of relevant strategies that are simultaneously efficient, feasible, and optimal. We developed an age-stratified probabilistic decision tree model following a cohort of women attending primary screening through one screening round. We enumerated detected precancers (i.e., cervical intraepithelial neoplasia of grade 2 or more severe (CIN2+)), colposcopies performed, and monetary costs associated with 10 alternative triage algorithms for women with abnormal cytology results. As efficiency metrics, we calculated incremental cost-effectiveness, and harm-benefit, ratios, defined as the additional costs, or the additional number of colposcopies, per additional CIN2+ detected. We estimated capacity requirements and uncertainty surrounding which strategy is optimal according to the decision rule, involving willingness to pay (monetary or resources consumed per added benefit). For ages 25 to 33 years, we eliminated four strategies that did not fall on either efficiency frontier, while one strategy was efficient with respect to both efficiency metrics. Compared with current practice in Norway, two strategies detected more precancers at lower monetary costs, but some required more colposcopies. Similar results were found for women aged 34 to 69 years. Improving the effectiveness and efficiency of cervical cancer screening may necessitate additional resources. Although efficient and feasible, both society and individuals must specify their willingness to accept the additional resources and perceived harms required to increase effectiveness before a strategy can be considered optimal. Copyright © 2015. Published by Elsevier Inc.
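
    The efficiency metrics described above are incremental ratios computed along a frontier of strategies ordered by effectiveness. A generic textbook-style sketch, with hypothetical strategy names, costs and CIN2+ counts (not the study's data), is:

```python
def incremental_ratios(strategies):
    """Incremental cost-effectiveness (or harm-benefit) ratios along a frontier.

    strategies: list of (name, cost, cin2_detected) tuples sorted by
    increasing effectiveness. Each ratio is additional cost (or colposcopies)
    per additional CIN2+ detected relative to the previous strategy.
    """
    ratios = []
    for (_, c0, e0), (name1, c1, e1) in zip(strategies, strategies[1:]):
        ratios.append((name1, (c1 - c0) / (e1 - e0)))
    return ratios

# Hypothetical frontier of triage strategies.
frontier = [("triage A", 100_000, 120),
            ("triage B", 140_000, 150),
            ("triage C", 210_000, 165)]
print(incremental_ratios(frontier))
```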

  10. An ab initio molecular orbital study of the mechanism for the gas-phase water-mediated decomposition and the formation of hydrates of peroxyacetyl nitrate (PAN).

    PubMed

    Li, Yumin; Francisco, Joseph S

    2005-08-31

    There is uncertainty in the mechanism for the hydrolysis of peroxyacetyl nitrate (PAN), and experimental attempts to detect products of the direct reaction have been unsuccessful. Ab initio calculations are used to examine the energetics of water-mediated decomposition of gas-phase PAN into acetic acid and peroxynitric acid. On the basis of ab initio calculations, an alternative reaction mechanism for the decomposition of PAN is proposed. The calculations indicate that the barrier for one water addition to PAN is large. However, including additional water molecules reveals a substantially lower energy route. The calculations suggest that the formation of PAN hydrate complexes are energetically favorable and stable. Additional waters are increasingly efficient at stabilizing hydrated PAN.

  11. Characterizing the digital radiography system in terms of effective detective quantum efficiency and CDRAD measurement

    NASA Astrophysics Data System (ADS)

    Yalcin, A.; Olgar, T.

    2018-07-01

    The aim of this study was to assess the performance of a digital radiography system in terms of effective detective quantum efficiency (eDQE) for different tube voltages, polymethyl methacrylate (PMMA) phantom thicknesses and different grid types. The image performance of the digital radiography system was also evaluated by using CDRAD measurements at the same conditions and the correlation of CDRAD results with eDQE was compared. The eDQE was calculated via measurement of effective modulation transfer function (eMTF), effective normalized noise power spectra (eNNPS), scatter fraction (SF) and transmission factors (TF). SFs and TFs were also calculated for different beam qualities by using MCNP4C Monte Carlo simulation code. The integrated eDQE (IeDQE) over the frequency range was used to find the correlation with the inverse image quality figure (IQFinv) obtained from CDRAD measurements. The highest eDQE was obtained with 60 lp/cm grid frequency and 10:1 grid ratio. No remarkable effect was observed on eDQE with different grid frequency, but eDQE decreased with increasing grid ratio. A significant correlation was found between IeDQE and IQFinv.

  12. An Adaptive and Time-Efficient ECG R-Peak Detection Algorithm.

    PubMed

    Qin, Qin; Li, Jianqing; Yue, Yinggao; Liu, Chengyu

    2017-01-01

    R-peak detection is crucial in electrocardiogram (ECG) signal analysis. This study proposed an adaptive and time-efficient R-peak detection algorithm for ECG processing. First, wavelet multiresolution analysis was applied to enhance the ECG signal representation. Then, ECG was mirrored to convert large negative R-peaks to positive ones. After that, local maximums were calculated by the first-order forward differential approach and were truncated by the amplitude and time interval thresholds to locate the R-peaks. The algorithm performances, including detection accuracy and time consumption, were tested on the MIT-BIH arrhythmia database and the QT database. Experimental results showed that the proposed algorithm achieved mean sensitivity of 99.39%, positive predictivity of 99.49%, and accuracy of 98.89% on the MIT-BIH arrhythmia database and 99.83%, 99.90%, and 99.73%, respectively, on the QT database. By processing one ECG record, the mean time consumptions were 0.872 s and 0.763 s for the MIT-BIH arrhythmia database and QT database, respectively, yielding 30.6% and 32.9% of time reduction compared to the traditional Pan-Tompkins method.
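
    A simplified sketch of the peak-location stages (mirroring, forward-difference local maxima, amplitude and RR-interval thresholds) is shown below. The wavelet multiresolution enhancement step is omitted and the threshold values are illustrative, not the published parameters.

```python
import numpy as np

def detect_r_peaks(ecg, fs, amp_frac=0.6, min_rr_s=0.25):
    """Locate R-peaks in a 1-D ECG segment sampled at fs Hz (sketch only)."""
    ecg = np.asarray(ecg, dtype=float)
    # Mirror the signal so predominantly negative R-peaks become positive.
    if abs(ecg.min()) > abs(ecg.max()):
        ecg = -ecg
    # Local maxima: first-order forward difference changes sign from + to -.
    d = np.diff(ecg)
    candidates = np.where((d[:-1] > 0) & (d[1:] <= 0))[0] + 1
    if candidates.size == 0:
        return candidates
    # Amplitude threshold relative to the strongest candidate.
    candidates = candidates[ecg[candidates] >= amp_frac * ecg[candidates].max()]
    # Minimum RR interval: of two close candidates keep the larger one.
    peaks = []
    for idx in candidates:
        if peaks and (idx - peaks[-1]) < min_rr_s * fs:
            if ecg[idx] > ecg[peaks[-1]]:
                peaks[-1] = int(idx)
        else:
            peaks.append(int(idx))
    return np.array(peaks)
```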

  13. An Adaptive and Time-Efficient ECG R-Peak Detection Algorithm

    PubMed Central

    Qin, Qin

    2017-01-01

    R-peak detection is crucial in electrocardiogram (ECG) signal analysis. This study proposed an adaptive and time-efficient R-peak detection algorithm for ECG processing. First, wavelet multiresolution analysis was applied to enhance the ECG signal representation. Then, ECG was mirrored to convert large negative R-peaks to positive ones. After that, local maximums were calculated by the first-order forward differential approach and were truncated by the amplitude and time interval thresholds to locate the R-peaks. The algorithm performances, including detection accuracy and time consumption, were tested on the MIT-BIH arrhythmia database and the QT database. Experimental results showed that the proposed algorithm achieved mean sensitivity of 99.39%, positive predictivity of 99.49%, and accuracy of 98.89% on the MIT-BIH arrhythmia database and 99.83%, 99.90%, and 99.73%, respectively, on the QT database. By processing one ECG record, the mean time consumptions were 0.872 s and 0.763 s for the MIT-BIH arrhythmia database and QT database, respectively, yielding 30.6% and 32.9% of time reduction compared to the traditional Pan-Tompkins method. PMID:29104745

  14. Parameter selection with the Hotelling observer in linear iterative image reconstruction for breast tomosynthesis

    NASA Astrophysics Data System (ADS)

    Rose, Sean D.; Roth, Jacob; Zimmerman, Cole; Reiser, Ingrid; Sidky, Emil Y.; Pan, Xiaochuan

    2018-03-01

    In this work we investigate an efficient implementation of a region-of-interest (ROI) based Hotelling observer (HO) in the context of parameter optimization for detection of a rod signal at two orientations in linear iterative image reconstruction for DBT. Our preliminary results suggest that ROI-HO performance trends may be efficiently estimated by modeling only the 2D plane perpendicular to the detector and containing the X-ray source trajectory. In addition, the ROI-HO is seen to exhibit orientation dependent trends in detectability as a function of the regularization strength employed in reconstruction. To further investigate the ROI-HO performance in larger 3D system models, we present and validate an iterative methodology for calculating the ROI-HO. Lastly, we present a real data study investigating the correspondence between ROI-HO performance trends and signal conspicuity. Conspicuity of signals in real data reconstructions is seen to track well with trends in ROI-HO detectability. In particular, we observe orientation dependent conspicuity matching the orientation dependent detectability of the ROI-HO.

  15. Medium-based noninvasive preimplantation genetic diagnosis for human α-thalassemias-SEA.

    PubMed

    Wu, Haitao; Ding, Chenhui; Shen, Xiaoting; Wang, Jing; Li, Rong; Cai, Bing; Xu, Yanwen; Zhong, Yiping; Zhou, Canquan

    2015-03-01

    To develop a noninvasive medium-based preimplantation genetic diagnosis (PGD) test for α-thalassemias-SEA. The embryos of α-thalassemia-SEA carriers undergoing in vitro fertilization (IVF) were cultured. Single cells were biopsied from blastomeres and subjected to fluorescent gap polymerase chain reaction (PCR) analysis; the spent culture media that contained embryo genomic DNA and corresponding blastocysts as verification were subjected to quantitative-PCR (Q-PCR) detection of α-thalassemia-SEA. The diagnosis efficiency and allele dropout (ADO) ratio were calculated, and the cell-free DNA concentration was quantitatively assessed in the culture medium. The diagnosis efficiency of medium-based α-thalassemias-SEA detection significantly increased compared with that of biopsy-based fluorescent gap PCR analysis (88.6% vs 82.1%, P < 0.05). There is no significant difference regarding ADO ratio between them. The optimal time for medium-based α-thalassemias-SEA detection is Day 5 (D5) following IVF. Medium-based α-thalassemias-SEA detection could represent a novel, quick, and noninvasive approach for carriers to undergo IVF and PGD.

  16. Automatic Seizure Detection in Rats Using Laplacian EEG and Verification with Human Seizure Signals

    PubMed Central

    Feltane, Amal; Boudreaux-Bartels, G. Faye; Besio, Walter

    2012-01-01

    Automated detection of seizures is still a challenging problem. This study presents an approach to detect seizure segments in Laplacian electroencephalography (tEEG) recorded from rats using the tripolar concentric ring electrode (TCRE) configuration. Three features, namely, median absolute deviation, approximate entropy, and maximum singular value were calculated and used as inputs into two different classifiers: support vector machines and adaptive boosting. The relative performance of the extracted features on TCRE tEEG was examined. Results are obtained with an overall accuracy between 84.81 and 96.51%. In addition to using TCRE tEEG data, the seizure detection algorithm was also applied to the recorded EEG signals from Andrzejak et al. database to show the efficiency of the proposed method for seizure detection. PMID:23073989
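
    Two of the three features named above are straightforward to compute per EEG segment; a hedged sketch follows (approximate entropy is omitted, and the delay-embedding dimension used for the singular value is an assumption, since the abstract does not specify how the matrix is formed).

```python
import numpy as np

def mad(x):
    """Median absolute deviation of an EEG segment."""
    return np.median(np.abs(x - np.median(x)))

def max_singular_value(x, embed_dim=10):
    """Largest singular value of a delay-embedded trajectory matrix."""
    traj = np.lib.stride_tricks.sliding_window_view(x, embed_dim)
    return np.linalg.svd(traj, compute_uv=False)[0]

# Features such as these (plus approximate entropy, not shown) would be
# stacked per segment and fed to an SVM or AdaBoost classifier.
segment = np.random.randn(1000)
print(mad(segment), max_singular_value(segment))
```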

  17. A Theoretical Calculation of Microlensing Signatures Caused by Free-Floating Planets Towards the Galactic Bulge

    NASA Astrophysics Data System (ADS)

    Hamolli, L.; Hafizi, M.; Nucita, A. A.

    2013-08-01

    Free-floating planets (FFPs) have recently drawn special interest from the scientific community. Gravitational microlensing is so far the only method for the investigation of FFPs, including their spatial distribution function and mass function. In this paper, we examine the possibility that the future Euclid space-based observatory may allow the discovery of a substantial number of microlensing events caused by FFPs. Based on the latest results on the free-floating planet (FFP) mass function in the mass range [10^-5, 10^-2] M⊙, we calculate the optical depth towards the Galactic bulge as well as the expected microlensing rate and find that Euclid may be able to detect hundreds to thousands of these events per month. Making use of a synthetic population, we also investigate the possibility of detecting the parallax effect in simulated microlensing events due to FFPs and find a significant efficiency for parallax detection, of around 30%.

  18. Fluorescent microplate assay method for high-throughput detection of lipase transesterification activity.

    PubMed

    Zheng, Jianyong; Wei, Wei; Lan, Xing; Zhang, Yinjun; Wang, Zhao

    2018-05-15

    This study describes a sensitive and fluorescent microplate assay method to detect lipase transesterification activity. Lipase-catalyzed transesterification between butyryl 4-methyl umbelliferone (Bu-4-Mu) and methanol in tert-butanol was selected as the model reaction. The release of 4-methylumbelliferone (4-Mu) in the reaction was determined by detecting the fluorescence intensity at λex = 330 nm and λem = 390 nm. Several lipases were used to investigate the accuracy and efficiency of the proposed method. The apparent Michaelis constant (Km) was calculated for transesterification between Bu-4-Mu and methanol by the lipases. The main advantages of the assay method include high sensitivity, inexpensive reagents, and a simple detection process. Copyright © 2018 Elsevier Inc. All rights reserved.

  19. Implementing a combined polar-geostationary algorithm for smoke emissions estimation in near real time

    NASA Astrophysics Data System (ADS)

    Hyer, E. J.; Schmidt, C. C.; Hoffman, J.; Giglio, L.; Peterson, D. A.

    2013-12-01

    Polar and geostationary satellites are used operationally for fire detection and smoke source estimation by many near-real-time operational users, including operational forecast centers around the globe. The input satellite radiance data are processed by data providers to produce Level-2 and Level-3 fire detection products, but processing these data into spatially and temporally consistent estimates of fire activity requires a substantial amount of additional processing. The most significant processing steps are correction for variable coverage of the satellite observations, and correction for conditions that affect the detection efficiency of the satellite sensors. We describe a system developed by the Naval Research Laboratory (NRL) that uses the full raster information from the entire constellation to diagnose detection opportunities, calculate corrections for factors such as angular dependence of detection efficiency, and generate global estimates of fire activity at spatial and temporal scales suitable for atmospheric modeling. By incorporating these improved fire observations, smoke emissions products, such as NRL's FLAMBE, are able to produce improved estimates of global emissions. This talk provides an overview of the system, demonstrates the achievable improvement over older methods, and describes challenges for near-real-time implementation.

  20. Geometric identification and damage detection of structural elements by terrestrial laser scanner

    NASA Astrophysics Data System (ADS)

    Hou, Tsung-Chin; Liu, Yu-Wei; Su, Yu-Min

    2016-04-01

    In recent years, three-dimensional (3D) terrestrial laser scanning technologies with higher precision and higher capability have been developing rapidly. The growing maturity of laser scanning has gradually approached the precision provided by traditional structural monitoring technologies. Together with widely available fast computation for massive point cloud data processing, 3D laser scanning can serve as an efficient structural monitoring alternative for the civil engineering community. Currently, most research efforts have focused on integrating and processing the measured multi-station point cloud data, as well as modeling and establishing 3D meshes of the scanned objects. Very little attention has been paid to extracting information related to the health conditions and mechanical states of structures. In this study, an automated numerical approach that integrates various existing algorithms for geometric identification and damage detection of structural elements was established. Specifically, adaptive meshes were employed to classify the point cloud data of the structural elements and to detect the associated damage from the eigenvalues calculated in each area of the structural element. Furthermore, a kd-tree was used to enhance the search efficiency of the plane fitting that was later used to identify the boundaries of structural elements. The results of geometric identification were compared with the M3C2 algorithm provided by CloudCompare and validated by LVDT measurements of full-scale reinforced concrete beams tested in the laboratory. The results show that 3D laser scanning, through the established point cloud processing approaches, can offer a rapid, nondestructive, remote, and accurate solution for geometric identification and damage detection of structural elements.
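
    The eigenvalue-based damage cue can be sketched as follows: for each point, the covariance of its kd-tree neighbourhood is decomposed, and a large smallest-eigenvalue fraction flags departure from a plane. The neighbourhood size and the specific roughness measure are illustrative choices, not the paper's exact algorithm.

```python
import numpy as np
from scipy.spatial import cKDTree

def surface_roughness(points, k_neighbors=20):
    """Eigenvalue-based roughness per point of an (N, 3) point cloud.

    The smallest covariance eigenvalue relative to the eigenvalue sum is
    small on planar patches and grows where the local surface deviates
    from a plane, which can flag candidate damage regions.
    """
    points = np.asarray(points, dtype=float)
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k_neighbors)
    roughness = np.empty(len(points))
    for i, neighbors in enumerate(idx):
        cov = np.cov(points[neighbors].T)
        eigvals = np.linalg.eigvalsh(cov)  # ascending order
        roughness[i] = eigvals[0] / eigvals.sum()
    return roughness
```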

  1. Quantifying radionuclide signatures from a γ-γ coincidence system.

    PubMed

    Britton, Richard; Jackson, Mark J; Davies, Ashley V

    2015-11-01

    A method for quantifying gamma coincidence signatures has been developed and tested in conjunction with a high-efficiency multi-detector system to quickly identify trace amounts of radioactive material. The γ-γ system utilises fully digital electronics and list-mode acquisition to time-stamp each event, allowing coincidence matrices to be easily produced alongside typical 'singles' spectra. To quantify the coincidence signatures, a software package has been developed to calculate efficiency- and cascade-summing-corrected branching ratios. This utilises ENSDF records as an input and can be fully automated, allowing the user to quickly and easily create/update a coincidence library that contains all possible γ and conversion electron cascades, associated cascade emission probabilities, and true-coincidence-summing-corrected γ cascade detection probabilities. It is also fully searchable by energy, nuclide, coincidence pair, γ multiplicity, cascade probability and half-life of the cascade. The probabilities calculated were tested using measurements performed on the γ-γ system and found to provide accurate results for the nuclides investigated. Given the flexibility of the method (it relies only on evaluated nuclear data and accurate efficiency characterisations), the software can now be utilised for a variety of systems, quickly and easily calculating coincidence signature probabilities. Crown Copyright © 2015. Published by Elsevier Ltd. All rights reserved.
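
    At its simplest, the coincidence signature probability for a cascade is the product of the cascade emission probability and the full-energy-peak efficiency at each gamma energy. The sketch below ignores angular correlations and the true-coincidence summing corrections handled by the authors' software, and all numbers are hypothetical.

```python
def coincidence_detection_probability(cascade_prob, peak_efficiencies):
    """Probability of detecting every gamma of a cascade in coincidence.

    Multiplies the cascade emission probability by the full-energy-peak
    efficiency of each gamma in the cascade (simplified treatment).
    """
    p = cascade_prob
    for eff in peak_efficiencies:
        p *= eff
    return p

# Hypothetical two-gamma cascade emitted in 85% of decays, with 3% and 2.5%
# full-energy-peak efficiencies at the two gamma energies.
print(coincidence_detection_probability(0.85, [0.03, 0.025]))
```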

  2. Cis- and trans-perfluorodecalin: Infrared spectra, radiative efficiency and global warming potential

    NASA Astrophysics Data System (ADS)

    Le Bris, Karine; DeZeeuw, Jasmine; Godin, Paul J.; Strong, Kimberly

    2017-12-01

    Perfluorodecalin (PFD) is a molecule used in various medical applications for its capacity to dissolve gases. This potent greenhouse gas was detected for the first time in the atmosphere in 2005. We present infrared absorption cross-section spectra of a pure vapour of cis- and trans-perfluorodecalin at a resolution of 0.1 cm-1. Measurements were performed in the 560-3000 cm-1 spectral range using Fourier transform spectroscopy. The spectra have been compared with previous experimental data and theoretical calculations by density functional theory. The new experimental absorption cross-sections have been used to calculate a lifetime-corrected radiative efficiency at 300 K of 0.62 W m-2 ppb-1 and 0.57 W m-2 ppb-1 for the cis and trans isomers, respectively. This leads to a 100-year time horizon global warming potential of 8030 for cis-PFD and 7440 for trans-PFD.
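
    For reference, the radiative efficiency enters the standard global warming potential definition as follows, assuming a single exponential atmospheric decay with lifetime τ_x and a radiative efficiency RE_x expressed per unit mass (converting from W m-2 ppb-1 requires the molar mass and total atmospheric mass, not repeated here):

```latex
\mathrm{AGWP}_x(H) = \int_0^H \mathrm{RE}_x\, e^{-t/\tau_x}\, \mathrm{d}t
                   = \mathrm{RE}_x\, \tau_x \left(1 - e^{-H/\tau_x}\right),
\qquad
\mathrm{GWP}_x(H) = \frac{\mathrm{AGWP}_x(H)}{\mathrm{AGWP}_{\mathrm{CO_2}}(H)}
```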

  3. Fast and efficient indexing approach for object recognition

    NASA Astrophysics Data System (ADS)

    Hefnawy, Alaa; Mashali, Samia A.; Rashwan, Mohsen; Fikri, Magdi

    1999-08-01

    This paper introduces a fast and efficient indexing approach for both 2D and 3D model-based object recognition in the presence of rotation, translation, and scale variations of objects. The indexing entries are computed after preprocessing the data by Haar wavelet decomposition. The scheme is based on a unified image feature detection approach built on Zernike moments. A set of low-level features, e.g., high-precision edges and gray-level corners, is estimated by a set of orthogonal Zernike moments calculated locally around every image point. High-dimensional, highly descriptive indexing entries are then calculated from the correlation of these local features and employed for fast access to the model database to generate hypotheses. A list of the most likely candidate models is then produced by evaluating the hypotheses. Experimental results are included to demonstrate the effectiveness of the proposed indexing approach.

  4. DQE and system optimization for indirect-detection flat-panel imagers in diagnostic radiology

    NASA Astrophysics Data System (ADS)

    Siewerdsen, Jeffrey H.; Antonuk, Larry E.

    1998-07-01

    The performance of indirect-detection flat-panel imagers incorporating CsI:Tl x-ray converters is examined through calculation of the detective quantum efficiency (DQE) under conditions of chest radiography, fluoroscopy, and mammography. Calculations are based upon a cascaded systems model which has demonstrated excellent agreement with empirical signal, noise-power spectra, and DQE results. For each application, the DQE is calculated as a function of spatial frequency and CsI:Tl thickness. A preliminary investigation into the optimization of flat-panel imaging systems is described, wherein the x-ray converter thickness which provides optimal DQE for a given imaging task is estimated. For each application, a number of example tasks involving detection of an object of variable size and contrast against a noisy background are considered. The method described is fairly general and can be extended to account for a variety of imaging tasks. For the specific examples considered, the preliminary results estimate optimal CsI:Tl thicknesses of approximately 450 micrometers (approximately 200 mg/cm2), approximately 320 micrometers (approximately 140 mg/cm2), and approximately 200 micrometers (approximately 90 mg/cm2) for chest radiography, fluoroscopy, and mammography, respectively. These results are expected to depend upon the imaging task as well as upon the quality of available CsI:Tl, and future improvements in scintillator fabrication could result in increased optimal thickness and DQE.

  5. Implementation of a channelized Hotelling observer model to assess image quality of x-ray angiography systems

    PubMed Central

    Favazza, Christopher P.; Fetterly, Kenneth A.; Hangiandreou, Nicholas J.; Leng, Shuai; Schueler, Beth A.

    2015-01-01

    Evaluation of flat-panel angiography equipment through conventional image quality metrics is limited by the scope of standard spatial-domain image quality metrics, such as contrast-to-noise ratio and spatial resolution, or by restricted access to the data needed to calculate Fourier-domain measurements, such as the modulation transfer function, noise power spectrum, and detective quantum efficiency. Observer models have been shown capable of overcoming these limitations and are able to comprehensively evaluate medical-imaging systems. We present a spatial-domain channelized Hotelling observer model to calculate the detectability index (DI) for disks of different sizes and to compare the performance of different imaging conditions and angiography systems. When appropriate, changes in DIs were compared to expectations based on the classical Rose model of signal detection to assess linearity of the model with quantum signal-to-noise ratio (SNR) theory. For these experiments, the estimated uncertainty of the DIs was less than 3%, allowing for precise comparison of imaging systems or conditions. For most experimental variables, DI changes were linear with expectations based on quantum SNR theory. DIs calculated for the smallest objects demonstrated nonlinearity with quantum SNR theory due to system blur. Two angiography systems with different detector element sizes were shown to perform similarly across the majority of the detection tasks. PMID:26158086

  6. Improved determination of the neutron lifetime.

    PubMed

    Yue, A T; Dewey, M S; Gilliam, D M; Greene, G L; Laptev, A B; Nico, J S; Snow, W M; Wietfeldt, F E

    2013-11-27

    The most precise determination of the neutron lifetime using the beam method was completed in 2005 and reported a result of τ(n)=(886.3±1.2[stat]±3.2[syst]) s. The dominant uncertainties were attributed to the absolute determination of the fluence of the neutron beam (2.7 s). The fluence was measured with a neutron monitor that counted the neutron-induced charged particles from absorption in a thin, well-characterized 6Li deposit. The detection efficiency of the monitor was calculated from the areal density of the deposit, the detector solid angle, and the evaluated nuclear data file, ENDF/B-VI 6Li(n,t)4He thermal neutron cross section. In the current work, we measure the detection efficiency of the same monitor used in the neutron lifetime measurement with a second, totally absorbing neutron detector. This direct approach does not rely on the 6Li(n,t)4He cross section or any other nuclear data. The detection efficiency is consistent with the value used in 2005 but is measured with a precision of 0.057%, which represents a fivefold improvement in the uncertainty. We verify the temporal stability of the neutron monitor through ancillary measurements, allowing us to apply the measured neutron monitor efficiency to the lifetime result from the 2005 experiment. The updated lifetime is τ(n)=(887.7±1.2[stat]±1.9[syst]) s.
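
    As a rough illustration of the "calculated" monitor efficiency that the 2005 experiment relied on, the thin-deposit estimate below multiplies the 6Li areal number density, the (n,t) cross section and a geometric acceptance. All numerical inputs are placeholders, not the values of the actual monitor.

      # A minimal sketch: probability that a neutron crossing a thin 6Li deposit
      # produces a reaction product that reaches the charged-particle detector.
      N_A = 6.022e23

      def monitor_efficiency(areal_density_g_cm2, sigma_barn, solid_angle_fraction):
          atoms_per_cm2 = areal_density_g_cm2 * N_A / 6.015    # 6Li molar mass ~6.015 g/mol
          sigma_cm2 = sigma_barn * 1e-24
          # thin-deposit approximation: reaction probability << 1
          return atoms_per_cm2 * sigma_cm2 * solid_angle_fraction

      # e.g. ~40 ug/cm2 deposit, 940 b thermal 6Li(n,t) cross section, ~0.4% acceptance
      print(monitor_efficiency(40e-6, 940.0, 0.004))   # illustrative numbers only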

  7. Platinum nanoparticles encapsulated metal-organic frameworks for the electrochemical detection of telomerase activity.

    PubMed

    Ling, Pinghua; Lei, Jianping; Jia, Li; Ju, Huangxian

    2016-01-21

    A simple and rapid electrochemical sensor is constructed for the detection of telomerase activity based on the electrocatalysis of platinum nanoparticle (Pt NP) encapsulated metal-organic frameworks (MOFs), which are synthesized by one-pot encapsulation of Pt NPs into prototypal MOFs, UiO-66-NH2. Exploiting the efficient electrocatalysis of Pt@MOFs towards NaBH4 oxidation, this biosensor shows a wide dynamic range of telomerase activity from 5 × 10(2) to 10(7) HeLa cells mL(-1); the telomerase activity in a single HeLa cell was calculated to be 2.0 × 10(-11) IU, providing a powerful platform for detecting telomerase activity.

  8. Multi-Complementary Model for Long-Term Tracking

    PubMed Central

    Zhang, Deng; Zhang, Junchang; Xia, Chenyang

    2018-01-01

    In recent years, video target tracking algorithms have been widely used. However, many tracking algorithms do not achieve satisfactory performance, especially when dealing with problems such as object occlusions, background clutters, motion blur, low illumination color images, and sudden illumination changes in real scenes. In this paper, we incorporate an object model based on contour information into a Staple tracker that combines the correlation filter model and color model to greatly improve the tracking robustness. Since each model is responsible for tracking specific features, the three complementary models combine for more robust tracking. In addition, we propose an efficient object detection model with contour and color histogram features, which has good detection performance and better detection efficiency compared to the traditional target detection algorithm. Finally, we optimize the traditional scale calculation, which greatly improves the tracking execution speed. We evaluate our tracker on the Object Tracking Benchmarks 2013 (OTB-13) and Object Tracking Benchmarks 2015 (OTB-15) benchmark datasets. With the OTB-13 benchmark datasets, our algorithm is improved by 4.8%, 9.6%, and 10.9% on the success plots of OPE, TRE and SRE, respectively, in contrast to another classic LCT (Long-term Correlation Tracking) algorithm. On the OTB-15 benchmark datasets, when compared with the LCT algorithm, our algorithm achieves 10.4%, 12.5%, and 16.1% improvement on the success plots of OPE, TRE, and SRE, respectively. At the same time, it needs to be emphasized that, due to the high computational efficiency of the color model and the object detection model using efficient data structures, and the speed advantage of the correlation filters, our tracking algorithm could still achieve good tracking speed. PMID:29425170

  9. Luminous Efficiency of Hypervelocity Meteoroid Impacts on the Moon Derived from the 2015 Geminid Meteor Shower

    NASA Technical Reports Server (NTRS)

    Moser, D. E.; Suggs, R. M.; Ehlert, S. R.

    2017-01-01

    Meteoroids cannot be observed directly because of their small size. In-situ measurements of the meteoroid environment are rare and have very small collecting areas. The Moon, in contrast, has a large collecting area and therefore can be used as a large meteoroid detector for gram- to kilogram-sized particles. Meteoroids striking the Moon create an impact flash observable by Earth-based telescopes. Their kinetic energy is converted to luminous energy with some unknown luminous efficiency η(v), which is likely a function of meteoroid velocity (among other factors). This luminous efficiency is essential for calculating the kinetic energy and mass of the meteoroid, as well as meteoroid fluxes, and it cannot be determined in the laboratory at meteoroid speeds and sizes due to mechanical constraints. Since laboratory simulations fail to resolve the luminous efficiency problem, observations of the impact flash itself must be utilized. Meteoroids associated with specific meteor showers have known speed and direction, which simplifies the determination of the luminous efficiency. NASA has routinely monitored the Moon for impact flashes since early 2006 [1]. During this time, several meteor showers have produced multiple impact flashes on the Moon, yielding a sufficient sample of impact flashes with which to perform a luminous efficiency analysis similar to that outlined in Bellot Rubio et al. [2, 3] and further described by Moser et al. [4], utilizing Earth-based measurements of the shower flux and mass index. The Geminid meteor shower has produced the most impact flashes in the NASA dataset to date with over 80 detections. More than half of these Geminids were recorded in 2015 (locations pictured in Fig. 1), and may represent the largest single-shower impact flash sample known. This work analyzes the 2015 Geminid lunar impacts and calculates their luminous efficiency. The luminous efficiency is then applied to calculate the kinetic energies and masses of these shower meteoroids.

  10. Novel high-efficiency visible-light responsive Ag4(GeO4) photocatalyst

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, Xianglin; Wang, Peng; Li, Mengmeng

    A novel high-efficiency visible-light responsive Ag4(GeO4) photocatalyst was prepared by a facile hydrothermal method. The photocatalytic activity of as-prepared Ag4(GeO4) was evaluated by photodegradation of methylene blue (MB) dye and water splitting experiments. The photodegradation efficiency and oxygen production efficiency of Ag4(GeO4) were detected to be 2.9 and 1.9 times higher than those of Ag2O. UV-vis diffuse reflectance spectroscopy (DRS), photoluminescence experiments and photoelectric effect experiments prove that the good light response and high carrier separation efficiency facilitated by the internal electric field are the main reasons for Ag4(GeO4)'s excellent catalytic activity. Radical-trapping experiments reveal that the photogenerated holes are the main active species. Lastly, first-principles theoretical calculations provide more insight into understanding the photocatalytic mechanism of the Ag4(GeO4) catalyst.

  11. Novel high-efficiency visible-light responsive Ag4(GeO4) photocatalyst

    DOE PAGES

    Zhu, Xianglin; Wang, Peng; Li, Mengmeng; ...

    2017-04-25

    A novel high-efficiency visible-light responsive Ag4(GeO4) photocatalyst was prepared by a facile hydrothermal method. The photocatalytic activity of as-prepared Ag4(GeO4) was evaluated by photodegradation of methylene blue (MB) dye and water splitting experiments. The photodegradation efficiency and oxygen production efficiency of Ag4(GeO4) were detected to be 2.9 and 1.9 times higher than those of Ag2O. UV-vis diffuse reflectance spectroscopy (DRS), photoluminescence experiments and photoelectric effect experiments prove that the good light response and high carrier separation efficiency facilitated by the internal electric field are the main reasons for Ag4(GeO4)'s excellent catalytic activity. Radical-trapping experiments reveal that the photogenerated holes are the main active species. Lastly, first-principles theoretical calculations provide more insight into understanding the photocatalytic mechanism of the Ag4(GeO4) catalyst.

  12. Performance calculation and simulation system of high energy laser weapon

    NASA Astrophysics Data System (ADS)

    Wang, Pei; Liu, Min; Su, Yu; Zhang, Ke

    2014-12-01

    High energy laser weapons are ready for some of today's most challenging military applications. Based on an analysis of the main tactical/technical indexes and the combat process of a high energy laser weapon, a performance calculation and simulation system for high energy laser weapons was established. Firstly, the index decomposition and workflow of the high energy laser weapon were presented. The entire system is composed of six parts: a classical target, the laser weapon platform, the detection sensor, tracking and pointing control, laser atmospheric propagation, and a damage assessment module. Then, the index calculation modules were designed. Finally, an anti-missile interception simulation was performed. The system can provide a reference and basis for the analysis and evaluation of high energy laser weapon efficiency.

  13. Environmental DNA as a Tool for Inventory and Monitoring of Aquatic Vertebrates

    DTIC Science & Technology

    2017-07-01

    geomorphic calculations and description of each reach. Methods: Channel Surveys. We initially selected reaches based on access and visual indicators ... Environmental DNA lab protocol: designing species-specific qPCR assays. Species-specific surveys should use quantitative polymerase ... to traditional field sampling with respect to sensitivity, detection probabilities, and cost efficiency. Compared to field surveys, eDNA sampling

  14. Tunable generation and adsorption of energetic compounds in the vapor phase at trace levels: a tool for testing and developing sensitive and selective substrates for explosive detection.

    PubMed

    Bonnot, Karine; Bernhardt, Pierre; Hassler, Dominique; Baras, Christian; Comet, Marc; Keller, Valérie; Spitzer, Denis

    2010-04-15

    Among various methods for landmine detection, as well as soil and water pollution monitoring, the detection of explosive compounds in air is becoming an important and inevitable challenge for homeland security applications, due to the threatening increase in terrorist explosive bombs used against civil populations. However, in the last case, there is a crucial need for the detection of vapor phase traces or sub-traces (in the ppt range or even lower). A novel and innovative generator for explosive trace vapors was designed and developed. It allowed the generation of theoretical concentrations as low as 0.24 ppq for hexahydro-1,3,5-trinitro-1,3,5-triazine (RDX) in air according to Clapeyron equations. The accurate generation of explosive concentrations at sub-ppt levels was verified for RDX and 2,4,6-trinitrotoluene (TNT) using a gas chromatograph coupled to an electron capture detector (GC-ECD). First, sensing material experiments were conducted on a nanostructured tungsten oxide. The sensing efficiency of this material, defined as its adsorption capacity, was calculated to be five times higher toward a 54 ppb RDX vapor than toward a 54 ppb TNT vapor. The material sensing efficiency showed no dependence on the mass of material used. The results showed that the device allowed the calibration and discrimination between materials for highly sensitive and accurate sensing detection in air of low vapor pressure explosives such as TNT or RDX at sub-ppb levels. The designed device and method showed promising features for nanosensing applications in the field of ultratrace explosive detection. The current perspectives are to decrease the testing scale and the detection levels to ppt or sub-ppt concentrations of explosives in air.
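
    A minimal sketch of the kind of Clapeyron-type estimate mentioned above: the saturation vapour pressure is extrapolated from a reference point with a constant sublimation enthalpy and converted to a mixing ratio. The reference pressure and enthalpy below are assumed, illustrative values for RDX, not figures from the paper.

      import math

      R = 8.314  # J mol-1 K-1

      def saturation_mixing_ratio(T, T_ref, p_ref_pa, dH_sub):
          """Clausius-Clapeyron estimate of the saturation vapour pressure at T,
          from a reference point (T_ref, p_ref), returned as a mixing ratio in ppq."""
          p_sat = p_ref_pa * math.exp(-dH_sub / R * (1.0 / T - 1.0 / T_ref))
          return p_sat / 101325.0 * 1e15   # fraction of 1 atm, expressed in ppq

      # illustrative placeholder values for RDX (reference pressure and sublimation
      # enthalpy are assumptions, not taken from the paper)
      print(saturation_mixing_ratio(T=263.0, T_ref=298.0, p_ref_pa=6e-7, dH_sub=130e3))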

  15. Predicting the difficulty of pure, strict, epistatic models: metrics for simulated model selection.

    PubMed

    Urbanowicz, Ryan J; Kiralis, Jeff; Fisher, Jonathan M; Moore, Jason H

    2012-09-26

    Algorithms designed to detect complex genetic disease associations are initially evaluated using simulated datasets. Typical evaluations vary constraints that influence the correct detection of underlying models (i.e. number of loci, heritability, and minor allele frequency). Such studies neglect to account for model architecture (i.e. the unique specification and arrangement of penetrance values comprising the genetic model), which alone can influence the detectability of a model. In order to design a simulation study which efficiently takes architecture into account, a reliable metric is needed for model selection. We evaluate three metrics as predictors of relative model detection difficulty derived from previous works: (1) Penetrance table variance (PTV), (2) customized odds ratio (COR), and (3) our own Ease of Detection Measure (EDM), calculated from the penetrance values and respective genotype frequencies of each simulated genetic model. We evaluate the reliability of these metrics across three very different data search algorithms, each with the capacity to detect epistatic interactions. We find that a model's EDM and COR are each stronger predictors of model detection success than heritability. This study formally identifies and evaluates metrics which quantify model detection difficulty. We utilize these metrics to intelligently select models from a population of potential architectures. This allows for an improved simulation study design which accounts for differences in detection difficulty attributed to model architecture. We implement the calculation and utilization of EDM and COR into GAMETES, an algorithm which rapidly and precisely generates pure, strict, n-locus epistatic models.
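
    For orientation, a frequency-weighted penetrance-table variance can be computed as sketched below. This is only one plausible reading of a PTV-style metric; the exact definitions of PTV, COR and EDM used by the authors may differ.

      import numpy as np

      def penetrance_table_variance(penetrances, genotype_freqs):
          """Frequency-weighted variance of a penetrance table -- a simple stand-in
          for the PTV-style difficulty metrics discussed above (the published
          PTV/EDM definitions may differ in detail)."""
          p = np.asarray(penetrances, dtype=float).ravel()
          w = np.asarray(genotype_freqs, dtype=float).ravel()
          w = w / w.sum()
          mean = np.sum(w * p)                # population prevalence of the model
          return np.sum(w * (p - mean) ** 2)  # larger variance -> easier to detect

      # toy 2-locus table: 3x3 penetrance values and Hardy-Weinberg genotype frequencies
      pen = [[0.1, 0.3, 0.1], [0.3, 0.1, 0.3], [0.1, 0.3, 0.1]]
      maf = 0.25
      f = [(1 - maf) ** 2, 2 * maf * (1 - maf), maf ** 2]
      freqs = np.outer(f, f)
      print(penetrance_table_variance(pen, freqs))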

  16. Measurement of gamma-ray production from thermal neutron capture on gadolinium for neutrino experiments

    NASA Astrophysics Data System (ADS)

    Yano, Takatomi; 2012B0025 Collaboration; 2014B0126 Collaboration

    2017-02-01

    Recently, several scientific applications of gadolinium have been found in neutrino physics experiments. Gadolinium-157 is the stable nucleus with the largest thermal neutron capture cross-section, and gadolinium-155 also has a large cross-section. These neutron capture reactions produce a gamma-ray cascade with a total energy of about 8 MeV. This reaction is applied in several neutrino experiments, e.g. reactor neutrino experiments and Gd-doped large water Cherenkov detector experiments, to tag the inverse-beta-decay (IBD) reaction. A good Gd(n,γ) simulation model is needed to evaluate the detection efficiency of the neutron capture reaction, i.e. the efficiency of IBD detection. In this presentation, we report the development and study status of a Gd(n,γ) calculation model and a comparison with our experimental data taken at the ANNRI/MLF beam line, J-PARC.

  17. Pyridine-ring containing twisttetraazaacene: Synthesis, physical properties, crystal structure and picric acid sensing.

    PubMed

    Yu, Xianglin; Wan, Jiaqi; Chen, Shao; Li, Miao; Gao, Junkuo; Yang, Li; Wang, Huisheng; Chen, Dugang; Pan, Zhiquan; Li, Junbo

    2017-11-01

    Novel pyridine-ring containing twisttetraazaacene 9,14-diphenylpyreno[4,5-g]isoquinoline (1) and its full-carbon derivative 9,14-diphenyldibenzo[de,qr]tetracene (2) have been synthesized and fully characterized. Studies showed that compound 1 could identify picric acid (PA) over other common nitro compounds with high selectivity and sensitivity. Upon the addition of PA, the emission peak of compound 1 in CH3CN was red shifted from 447 to 555 nm with a fluorescence quenching efficiency as high as 95%, and the detection limit was calculated to be 2.42 μM, while its full-carbon derivative (2) could not exhibit this kind of performance. The possible mechanism behind the enhanced PA detection efficiency of the pyridine-ring containing twisttetraazaacene (1) compared with its full-carbon derivative (2) was also investigated. Copyright © 2017 Elsevier B.V. All rights reserved.

  18. 4H-SiC UV Photo Detector with Large Area and Very High Specific Detectivity

    NASA Technical Reports Server (NTRS)

    Yan, Feng; Shahid, Aslam; Franz, David; Xin, Xiaobin; Zhao, Jian H.; Zhao, Yuegang; Winer, Maurice

    2004-01-01

    Pt/4H-SiC Schottky photodiodes have been fabricated with device areas up to 1 sq cm. The I-V characteristics and photo-response spectra have been measured and analyzed. For a 5 mm x 5 mm area device, leakage currents of 1 x 10(exp -15) A at zero bias and 1.2 x 10(exp -14) A at -1 V have been established. The quantum efficiency is over 30% from 240 nm to 320 nm. The specific detectivity, D(sup *), has been calculated from the directly measured leakage current and quantum efficiency data and is shown to be higher than 10(exp 15) cmHz(sup 1/2)/W from 210 nm to 350 nm, with a peak D(sup *) of 3.6 x 10(exp 15) cmHz(sup 1/2)/W at 300 nm.
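
    One common way to combine the two measured quantities (quantum efficiency and leakage current) into a specific detectivity is the shot-noise-limited estimate sketched below; whether the authors used exactly this noise model is an assumption, and the numerical inputs are illustrative.

      import math

      Q = 1.602e-19      # C
      H = 6.626e-34      # J s
      C = 2.998e8        # m/s

      def specific_detectivity(qe, wavelength_m, area_cm2, dark_current_a):
          """Shot-noise-limited D* (cm Hz^1/2 / W) from quantum efficiency and
          leakage current."""
          responsivity = qe * Q * wavelength_m / (H * C)          # A/W
          noise_per_rt_hz = math.sqrt(2.0 * Q * dark_current_a)   # A / Hz^1/2
          return responsivity * math.sqrt(area_cm2) / noise_per_rt_hz

      # 0.25 cm2 device, 30% QE at 300 nm, ~1e-14 A leakage (illustrative numbers)
      print(f"{specific_detectivity(0.30, 300e-9, 0.25, 1e-14):.2e}")  # ~1e14-1e15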

  19. Using airborne LiDAR in geoarchaeological contexts: Assessment of an automatic tool for the detection and the morphometric analysis of grazing archaeological structures (French Massif Central).

    NASA Astrophysics Data System (ADS)

    Roussel, Erwan; Toumazet, Jean-Pierre; Florez, Marta; Vautier, Franck; Dousteyssier, Bertrand

    2014-05-01

    Airborne laser scanning (ALS) of archaeological regions of interest is nowadays a widely used and established method for accurate topographic and microtopographic survey. The penetration of the vegetation cover by the laser beam allows the reconstruction of reliable digital terrain models (DTM) of forested areas where traditional prospection methods are inefficient, time-consuming and non-exhaustive. The ALS technology provides the opportunity to discover new archaeological features hidden by vegetation and provides a comprehensive survey of cultural heritage sites within their environmental context. However, the post-processing of LiDAR point clouds produces a huge quantity of data in which relevant archaeological features are not easily detectable with common visualizing and analysing tools. Undoubtedly, there is an urgent need for automation of structure detection and morphometric extraction techniques, especially for the "archaeological desert" in densely forested areas. This presentation deals with the development of automatic detection procedures applied to archaeological structures located in the French Massif Central, in the western forested part of the Puy-de-Dôme volcano between 950 and 1100 m a.s.l. These unknown archaeological sites were discovered by the March 2011 ALS mission and display a high density of subcircular depressions with corridor access. The spatial organization of these depressions varies from isolated to aggregated or aligned features. Functionally, they appear to be former grazing constructions built from the medieval to the modern period. Similar grazing structures are known in other locations of the French Massif Central (Sancy, Artense, Cézallier) where the ground is vegetation-free. In order to develop a reliable process of automatic detection and mapping of these archaeological structures, a learning zone has been delineated within the ALS surveyed area. The grazing features were mapped and typical morphometric attributes were calculated based on two methods: (i) the mapping of the archaeological structures by a human operator using common visualisation tools (DTM, multi-direction hillshading & local relief models) within a GIS environment; (ii) the automatic detection and mapping performed by a recognition algorithm based on a user-defined geometric pattern of the grazing structures. The efficiency of the automatic tool has been assessed by comparing the number of structures detected and the morphometric attributes calculated by the two methods. Our results indicate that the algorithm is efficient for the detection and the location of grazing structures. Concerning the morphometric results, there is still a discrepancy between automatic and expert calculations, due to both the expert mapping choices and the algorithm calibration.

  20. [Impact of the funding reform of teaching hospitals in Brazil].

    PubMed

    Lobo, M S C; Silva, A C M; Lins, M P E; Fiszman, R

    2009-06-01

    To assess the impact of funding reform on the productivity of teaching hospitals. Based on the Information System of Federal University Hospitals of Brazil, efficiency and productivity in 2003 and 2006 were measured using frontier methods with a linear programming technique, data envelopment analysis, under an input-oriented variable-returns-to-scale model. The Malmquist index was calculated to detect changes during the study period: 'technical efficiency change,' the relative variation of the efficiency of each unit; and 'technological change,' the shift of the frontier itself. There was a 51% mean budget increase and an improvement in the technical efficiency of teaching hospitals (17 hospitals reached the empirical efficiency frontier, compared with 11 previously), but the same was not seen for the technology frontier. Data envelopment analysis set benchmark scores for each inefficient unit (before and after the reform), and there was a positive correlation between technical efficiency and teaching intensity and dedication. The reform promoted management improvements, but further follow-up is needed to assess the effectiveness of the funding changes.
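
    The input-oriented, variable-returns-to-scale DEA efficiency mentioned above is the optimum of a small linear programme solved once per hospital. A minimal sketch using scipy is given below; the toy inputs and outputs are invented and do not correspond to the Brazilian hospital data.

      import numpy as np
      from scipy.optimize import linprog

      def dea_vrs_input(X, Y, o):
          """Input-oriented, variable-returns-to-scale DEA efficiency of unit `o`.
          X: (n_units, n_inputs), Y: (n_units, n_outputs). Returns theta in (0, 1]."""
          n, m = X.shape
          s = Y.shape[1]
          # decision variables: [theta, lambda_1 ... lambda_n]
          c = np.r_[1.0, np.zeros(n)]
          # inputs:  sum_j lambda_j * x_ij - theta * x_io <= 0
          A_in = np.hstack([-X[o].reshape(m, 1), X.T])
          # outputs: -sum_j lambda_j * y_rj <= -y_ro
          A_out = np.hstack([np.zeros((s, 1)), -Y.T])
          A_ub = np.vstack([A_in, A_out])
          b_ub = np.r_[np.zeros(m), -Y[o]]
          # VRS convexity constraint: sum_j lambda_j = 1
          A_eq = np.r_[0.0, np.ones(n)].reshape(1, -1)
          res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                        bounds=[(None, None)] + [(0, None)] * n)
          return res.fun

      # toy example: 4 hospitals, 2 inputs (beds, staff), 1 output (admissions)
      X = np.array([[100.0, 300], [120, 350], [80, 250], [150, 500]])
      Y = np.array([[5000.0], [5200], [4800], [5100]])
      for h in range(4):
          print(h, round(dea_vrs_input(X, Y, h), 3))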

  1. An Energy-Efficient Cluster-Based Vehicle Detection on Road Network Using Intention Numeration Method

    PubMed Central

    Devasenapathy, Deepa; Kannan, Kathiravan

    2015-01-01

    Traffic in road networks is increasing progressively and to a considerable extent. Good knowledge of network traffic can minimize congestion using information pertaining to the road network obtained with the aid of communal callers, pavement detectors, and so on. Using these methods, low-featured information is generated with respect to the user in the road network. Although existing schemes obtain urban traffic information, they fail to calculate the energy drain rate of nodes and to locate an equilibrium between the overhead and the quality of the routing protocol, which poses a great challenge. Thus, an energy-efficient cluster-based vehicle detection in road networks using the intention numeration method (CVDRN-IN) is developed. Initially, sensor nodes that detect a vehicle are grouped into separate clusters. Further, we approximate the strength of the node drain rate for a cluster using a polynomial regression function. In addition, the total node energy is estimated by taking the integral over the area. Finally, enhanced data aggregation is performed to reduce the amount of data transmission using a digital signature tree. The experimental performance is evaluated with the Dodgers loop sensor data set from the UCI repository, and the performance evaluation outperforms existing work on energy consumption, clustering efficiency, and node drain rate. PMID:25793221

  2. An energy-efficient cluster-based vehicle detection on road network using intention numeration method.

    PubMed

    Devasenapathy, Deepa; Kannan, Kathiravan

    2015-01-01

    Traffic in road networks is increasing progressively and to a considerable extent. Good knowledge of network traffic can minimize congestion using information pertaining to the road network obtained with the aid of communal callers, pavement detectors, and so on. Using these methods, low-featured information is generated with respect to the user in the road network. Although existing schemes obtain urban traffic information, they fail to calculate the energy drain rate of nodes and to locate an equilibrium between the overhead and the quality of the routing protocol, which poses a great challenge. Thus, an energy-efficient cluster-based vehicle detection in road networks using the intention numeration method (CVDRN-IN) is developed. Initially, sensor nodes that detect a vehicle are grouped into separate clusters. Further, we approximate the strength of the node drain rate for a cluster using a polynomial regression function. In addition, the total node energy is estimated by taking the integral over the area. Finally, enhanced data aggregation is performed to reduce the amount of data transmission using a digital signature tree. The experimental performance is evaluated with the Dodgers loop sensor data set from the UCI repository, and the performance evaluation outperforms existing work on energy consumption, clustering efficiency, and node drain rate.

  3. Performance of SEM scintillation detector evaluated by modulation transfer function and detective quantum efficiency function.

    PubMed

    Bok, Jan; Schauer, Petr

    2014-01-01

    In the paper, the SEM detector is evaluated by the modulation transfer function (MTF), which expresses the detector's influence on the SEM image contrast. This is a novel approach, since the MTF was previously used to describe only area imaging detectors or whole imaging systems. The measurement technique and calculation of the MTF for the SEM detector are presented. In addition, the measurement and calculation of the detective quantum efficiency (DQE) as a function of spatial frequency for the SEM detector are described. In this technique, a time-modulated e-beam is used in order to create a well-defined input signal for the detector. The MTF and DQE measurements are demonstrated on the Everhart-Thornley scintillation detector. The detector was fitted in turn with YAG:Ce, YAP:Ce, and CRY18 single-crystal scintillators. The presented MTF and DQE characteristics show good imaging properties of the detectors with the YAP:Ce or CRY18 scintillator, especially for a specific type of e-beam scan. The results demonstrate the great benefit of describing SEM detectors using the MTF and DQE. In addition, point-by-point and continual-sweep e-beam scans in SEM were discussed, and their influence on the image quality was revealed using the MTF. © 2013 Wiley Periodicals, Inc.

  4. Reevaluation of secondary neutron spectra from thick targets upon heavy-ion bombardment

    NASA Astrophysics Data System (ADS)

    Satoh, D.; Kurosawa, T.; Sato, T.; Endo, A.; Takada, M.; Iwase, H.; Nakamura, T.; Niita, K.

    2007-12-01

    Previously published data of secondary neutron spectra from thick targets of C, Al, Cu and Pb bombarded with heavy ions from He to Xe are revised by using a new set of neutron-detection efficiency values for a liquid organic scintillator calculated with SCINFUL-QMD. Additional data have been measured for bombardment of C target by 400-MeV/nucleon C ions and 800-MeV/nucleon Si ions. The set of spectra are compared with the calculation results using a Monte-Carlo heavy-ion transport code, PHITS. It was found that PHITS is able to reproduce the secondary neutron spectra in a wide neutron-energy regime.

  5. Scintillation detector efficiencies for neutrons in the energy region above 20 MeV

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dickens, J.K.

    1991-01-01

    The computer program SCINFUL (for SCINtillator FULl response) is a program designed to provide a calculated complete pulse-height response anticipated for neutrons being detected by either an NE-213 (liquid) scintillator or an NE-110 (solid) scintillator in the shape of a right circular cylinder. The point neutron source may be placed at any location with respect to the detector, even inside of it. The neutron source may be monoenergetic, or Maxwellian distributed, or distributed between chosen lower and upper bounds. The calculational method uses Monte Carlo techniques, and it is relativistically correct. Extensive comparisons with a variety of experimental data have been made. There is generally overall good agreement (less than 10% differences) of results for SCINFUL calculations with measured integral detector efficiencies for the design incident neutron energy range of 0.1 to 80 MeV. Calculations of differential detector responses, i.e. yield versus response pulse height, are generally within about 5% on the average for incident neutron energies between 16 and 50 MeV and for the upper 70% of the response pulse height. For incident neutron energies between 50 and 80 MeV, the calculated shape of the response agrees with measurements, but the calculations tend to underpredict the absolute values of the measured responses. Extension of the program to compute responses for incident neutron energies greater than 80 MeV will require new experimental data on neutron interactions with carbon. 32 refs., 6 figs., 2 tabs.

  6. Scintillation detector efficiencies for neutrons in the energy region above 20 MeV

    NASA Astrophysics Data System (ADS)

    Dickens, J. K.

    The computer program SCINFUL (for SCINtillator FULl response) is a program designed to provide a calculated complete pulse-height response anticipated for neutrons being detected by either an NE-213 (liquid) scintillator or an NE-110 (solid) scintillator in the shape of a right circular cylinder. The point neutron source may be placed at any location with respect to the detector, even inside of it. The neutron source may be monoenergetic, or Maxwellian distributed, or distributed between chosen lower and upper bounds. The calculational method uses Monte Carlo techniques, and it is relativistically correct. Extensive comparisons with a variety of experimental data were made. There is generally overall good agreement (less than 10 pct. differences) of results for SCINFUL calculations with measured integral detector efficiencies for the design incident neutron energy range of 0.1 to 80 MeV. Calculations of differential detector responses, i.e., yield versus response pulse height, are generally within about 5 pct. on the average for incident neutron energies between 16 and 50 MeV and for the upper 70 pct. of the response pulse height. For incident neutron energies between 50 and 80 MeV, the calculated shape of the response agrees with measurements, but the calculations tend to underpredict the absolute values of the measured responses. Extension of the program to compute responses for incident neutron energies greater than 80 MeV will require new experimental data on neutron interactions with carbon.

  7. Hybrid Monte Carlo/Deterministic Methods for Accelerating Active Interrogation Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peplow, Douglas E.; Miller, Thomas Martin; Patton, Bruce W

    2013-01-01

    The potential for smuggling special nuclear material (SNM) into the United States is a major concern to homeland security, so federal agencies are investigating a variety of preventive measures, including detection and interdiction of SNM during transport. One approach for SNM detection, called active interrogation, uses a radiation source, such as a beam of neutrons or photons, to scan cargo containers and detect the products of induced fissions. In realistic cargo transport scenarios, the process of inducing and detecting fissions in SNM is difficult due to the presence of various and potentially thick materials between the radiation source and the SNM, and the practical limitations on radiation source strength and detection capabilities. Therefore, computer simulations are being used, along with experimental measurements, in efforts to design effective active interrogation detection systems. The computer simulations mostly consist of simulating radiation transport from the source to the detector region(s). Although the Monte Carlo method is predominantly used for these simulations, difficulties persist related to calculating statistically meaningful detector responses in practical computing times, thereby limiting their usefulness for design and evaluation of practical active interrogation systems. In previous work, the benefits of hybrid methods that use the results of approximate deterministic transport calculations to accelerate high-fidelity Monte Carlo simulations have been demonstrated for source-detector type problems. In this work, the hybrid methods are applied and evaluated for three example active interrogation problems. Additionally, a new approach is presented that uses multiple goal-based importance functions depending on a particle's relevance to the ultimate goal of the simulation. Results from the examples demonstrate that the application of hybrid methods to active interrogation problems dramatically increases their calculational efficiency.

  8. ProBiS-2012: web server and web services for detection of structurally similar binding sites in proteins.

    PubMed

    Konc, Janez; Janezic, Dusanka

    2012-07-01

    The ProBiS web server is a web server for detection of structurally similar binding sites in the PDB and for local pairwise alignment of protein structures. In this article, we present a new version of the ProBiS web server that is 10 times faster than earlier versions, due to the efficient parallelization of the ProBiS algorithm, which now allows significantly faster comparison of a protein query against the PDB and reduces the calculation time for scanning the entire PDB from hours to minutes. It also features new web services, and an improved user interface. In addition, the new web server is united with the ProBiS-Database and thus provides instant access to pre-calculated protein similarity profiles for over 29 000 non-redundant protein structures. The ProBiS web server is particularly adept at detection of secondary binding sites in proteins. It is freely available at http://probis.cmm.ki.si/old-version, and the new ProBiS web server is at http://probis.cmm.ki.si.

  9. ProBiS-2012: web server and web services for detection of structurally similar binding sites in proteins

    PubMed Central

    Konc, Janez; Janežič, Dušanka

    2012-01-01

    The ProBiS web server is a web server for detection of structurally similar binding sites in the PDB and for local pairwise alignment of protein structures. In this article, we present a new version of the ProBiS web server that is 10 times faster than earlier versions, due to the efficient parallelization of the ProBiS algorithm, which now allows significantly faster comparison of a protein query against the PDB and reduces the calculation time for scanning the entire PDB from hours to minutes. It also features new web services, and an improved user interface. In addition, the new web server is united with the ProBiS-Database and thus provides instant access to pre-calculated protein similarity profiles for over 29 000 non-redundant protein structures. The ProBiS web server is particularly adept at detection of secondary binding sites in proteins. It is freely available at http://probis.cmm.ki.si/old-version, and the new ProBiS web server is at http://probis.cmm.ki.si. PMID:22600737

  10. A hierarchical detection method in external communication for self-driving vehicles based on TDMA.

    PubMed

    Alheeti, Khattab M Ali; Al-Ani, Muzhir Shaban; McDonald-Maier, Klaus

    2018-01-01

    Security is considered a major challenge for self-driving and semi-self-driving vehicles. These vehicles depend heavily on communications to predict and sense the external environment used in their motion. They use a type of ad hoc network termed vehicular ad hoc networks (VANETs). Unfortunately, VANETs are potentially exposed to many attacks at the network and application levels. This paper proposes a new intrusion detection system to protect the communication system of self-driving cars, utilising a combination of hierarchical models based on clusters and log parameters. The security system is designed to detect Sybil and Wormhole attacks in highway usage scenarios. It is based on clusters, utilising Time Division Multiple Access (TDMA) to overcome some of the obstacles of VANETs, such as high density, high mobility and bandwidth limitations in exchanging messages. This makes the security system more efficient, accurate and capable of real-time detection, and quick in identifying malicious behaviour in VANETs. In this scheme, each vehicle calculates and stores different parameter values in a log after receiving the cooperative awareness messages from nearby vehicles. The vehicles exchange their log data and determine the differences between the parameters, which are utilised to detect Sybil attacks and Wormhole attacks. In order to realise an efficient and effective intrusion detection system, we use the well-known network simulator (ns-2) to verify the performance of the security system. Simulation results indicate that the security system can achieve high detection rates and effectively detect anomalies with a low rate of false alarms.

  11. Probability of detecting nematode infestations for quarantine sampling with imperfect extraction efficacy

    PubMed Central

    Chen, Peichen; Liu, Shih-Chia; Liu, Hung-I; Chen, Tse-Wei

    2011-01-01

    For quarantine sampling, it is of fundamental importance to determine the probability of finding an infestation when a specified number of units are inspected. In general, current sampling procedures assume a 100% (perfect) probability of detecting a pest if it is present within a unit. Ideally, a nematode extraction method should remove all stages of all species with 100% efficiency regardless of season, temperature, or other environmental conditions; in practice, however, no method approaches these criteria. In this study we determined the probability of detecting nematode infestations for quarantine sampling with imperfect extraction efficacy. The required sample size and the risk involved in detecting nematode infestations with imperfect extraction efficacy are also presented. Moreover, we developed a computer program to calculate confidence levels for different scenarios with varying proportions of infestation and efficacy of detection. In addition, a case study, presenting the extraction efficacy of the modified Baermann's Funnel method on Aphelenchoides besseyi, is used to exemplify the use of our program to calculate the probability of detecting nematode infestations in quarantine sampling with imperfect extraction efficacy. The result has important implications for quarantine programs and highlights the need for a very large number of samples if perfect extraction efficacy is not achieved in such programs. We believe that the results of the study will be useful for the determination of realistic goals in the implementation of quarantine sampling. PMID:22791911
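
    Under the usual binomial assumptions, the detection probability and required sample size with imperfect extraction efficacy follow directly, as sketched below; the infestation rate, efficacy and confidence level are illustrative, not the paper's case-study values.

      import math

      def detection_probability(n, infestation_rate, efficacy):
          """Probability that at least one infested unit is found among n inspected
          units when each infested unit is detected with probability `efficacy`."""
          return 1.0 - (1.0 - infestation_rate * efficacy) ** n

      def required_sample_size(confidence, infestation_rate, efficacy):
          """Smallest n giving at least the requested detection confidence."""
          return math.ceil(math.log(1.0 - confidence) /
                           math.log(1.0 - infestation_rate * efficacy))

      # 2% infestation: perfect vs. 60% extraction efficacy at 95% confidence
      print(required_sample_size(0.95, 0.02, 1.0))   # ~149 units
      print(required_sample_size(0.95, 0.02, 0.6))   # ~249 units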

  12. Calculated Coupling Efficiency Between an Elliptical-Core Optical Fiber and a Silicon Oxynitride Rib Waveguide [Corrected Copy]

    NASA Technical Reports Server (NTRS)

    Tuma, Margaret L.; Beheim, Glenn

    1995-01-01

    The effective-index method and Marcatili's technique were used independently to calculate the electric field profile of a rib channel waveguide. Using the electric field profile calculated with each method, the theoretical coupling efficiency between a single-mode optical fiber and the rib waveguide was computed from the overlap integral. The coupling efficiency was first calculated assuming perfect alignment and then recomputed for a range of transverse offsets.
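
    The overlap-integral step can be sketched numerically as below, here with elliptical-Gaussian stand-ins for the fiber and rib-waveguide mode profiles; the mode widths and offsets are assumed values, not those of the paper.

      import numpy as np

      def coupling_efficiency(E_fiber, E_wg, dx, dy):
          """Overlap-integral coupling efficiency between two (real or complex)
          transverse field profiles sampled on the same grid."""
          overlap = np.sum(E_fiber * np.conj(E_wg)) * dx * dy
          norm = (np.sum(np.abs(E_fiber) ** 2) * dx * dy *
                  np.sum(np.abs(E_wg) ** 2) * dx * dy)
          return np.abs(overlap) ** 2 / norm

      # illustrative Gaussian approximations of the fiber and rib-waveguide modes
      x = np.linspace(-10e-6, 10e-6, 401)
      y = np.linspace(-10e-6, 10e-6, 401)
      X, Y = np.meshgrid(x, y)
      dx, dy = x[1] - x[0], y[1] - y[0]

      def elliptical_gaussian(wx, wy, x0=0.0, y0=0.0):
          return np.exp(-((X - x0) / wx) ** 2 - ((Y - y0) / wy) ** 2)

      fiber = elliptical_gaussian(3.5e-6, 2.5e-6)             # elliptical-core fiber mode
      for offset in (0.0, 0.5e-6, 1.0e-6, 2.0e-6):            # transverse misalignment
          wg = elliptical_gaussian(3.0e-6, 1.5e-6, x0=offset) # rib-waveguide mode
          print(f"{offset*1e6:3.1f} um offset -> {coupling_efficiency(fiber, wg):.3f}")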

  13. Improved Determination of the Neutron Lifetime

    NASA Astrophysics Data System (ADS)

    Yue, A.

    2013-10-01

    The most precise determination of the neutron lifetime using the beam method reported a result of τn = (886.3 +/- 3.4) s. The dominant uncertainties were attributed to the absolute determination of the fluence of the neutron beam (2.7 s). The fluence was determined with a monitor that counted the neutron-induced charged particles from absorption in a thin, well-characterized 6Li deposit. The detection efficiency of the monitor was calculated from the areal density of the deposit, the detector solid angle, and the ENDF/B-VI 6Li(n,t)4He thermal neutron cross section. We have used a second, totally-absorbing neutron detector to directly measure the detection efficiency of the monitor on a monochromatic neutron beam of precisely known wavelength. This method does not rely on the 6Li(n,t)4He cross section or any other nuclear data. The monitor detection efficiency was measured to an uncertainty of 0.06%, which represents a five-fold improvement in uncertainty. We have verified the temporal stability of the monitor with ancillary measurements, and the measured neutron monitor efficiency has been used to improve the fluence determination in the past lifetime experiment. An updated neutron lifetime based on the improved fluence determination will be presented. Work done in collaboration with M. Dewey, D. Gilliam, J. Nico, National Institute of Standards and Technology; G. Greene, University of Tennessee / Oak Ridge National Laboratory; A. Laptev, Los Alamos National Laboratory; W. Snow, Indiana University; and F. Wietfeldt, Tulane University.

  14. Radon measurement of natural gas using alpha scintillation cells.

    PubMed

    Kitto, Michael E; Torres, Miguel A; Haines, Douglas K; Semkow, Thomas M

    2014-12-01

    Due to their sensitivity and ease of use, alpha-scintillation cells are being increasingly utilized for measurements of radon ((222)Rn) in natural gas. Laboratory studies showed an average increase of 7.3% in the measurement efficiency of alpha-scintillation cells when filled with less-dense natural gas rather than regular air. A theoretical calculation comparing the atomic weight and density of air to that of natural gas suggests a 6-7% increase in the detection efficiency when measuring radon in the cells. A correction is also applicable when the sampling location and measurement laboratory are at different elevations. These corrections to the measurement efficiency need to be considered in order to derive accurate concentrations of radon in natural gas. Copyright © 2014 Elsevier Ltd. All rights reserved.

  15. Time Series Discord Detection in Medical Data using a Parallel Relational Database

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Woodbridge, Diane; Rintoul, Mark Daniel; Wilson, Andrew T.

    Recent advances in sensor technology have made continuous real-time health monitoring available in both hospital and non-hospital settings. Since data collected from high-frequency medical sensors amount to a huge volume, storing and processing continuous medical data is an emerging big data area. In particular, detecting anomalies in real time is important for detecting and preventing patient emergencies. A time series discord indicates a subsequence that has the maximum difference to the rest of the time series subsequences, meaning that it has abnormal or unusual data trends. In this study, we implemented two versions of time series discord detection algorithms on a high performance parallel database management system (DBMS) and applied them to 240 Hz waveform data collected from 9,723 patients. The initial brute force version of the discord detection algorithm takes each possible subsequence and calculates a distance to the nearest non-self match to find the biggest discords in the time series. For the heuristic version of the algorithm, a combination of an array and a trie structure was applied to order the time series data for better time efficiency. The study results showed efficient data loading, decoding and discord searches in a large amount of data, benefiting from the time series discord detection algorithm and the architectural characteristics of the parallel DBMS, including data compression, data pipelining, and task scheduling.
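
    The brute-force discord search described above can be sketched in a few lines (here in Python rather than SQL on a parallel DBMS): every subsequence's distance to its nearest non-self match is computed, and the subsequence with the largest such distance is the top discord.

      import numpy as np

      def brute_force_discord(series, m):
          """Return (index, distance) of the top discord of length m: the subsequence
          whose Euclidean distance to its nearest non-self match is largest."""
          n = len(series) - m + 1
          subs = np.array([series[i:i + m] for i in range(n)])
          best_idx, best_dist = -1, -np.inf
          for i in range(n):
              # exclude trivial (overlapping) matches within m samples of i
              mask = np.abs(np.arange(n) - i) >= m
              if not mask.any():
                  continue
              d = np.sqrt(((subs[mask] - subs[i]) ** 2).sum(axis=1))
              nearest = d.min()
              if nearest > best_dist:
                  best_idx, best_dist = i, nearest
          return best_idx, best_dist

      # toy signal: a sine wave with one anomalous beat around sample 1000
      t = np.linspace(0, 20 * np.pi, 2000)
      x = np.sin(t)
      x[1000:1050] += 0.8
      print(brute_force_discord(x, m=50))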

  16. Time Series Discord Detection in Medical Data using a Parallel Relational Database [PowerPoint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Woodbridge, Diane; Wilson, Andrew T.; Rintoul, Mark Daniel

    Recent advances in sensor technology have made continuous real-time health monitoring available in both hospital and non-hospital settings. Since data collected from high-frequency medical sensors amount to a huge volume, storing and processing continuous medical data is an emerging big data area. In particular, detecting anomalies in real time is important for detecting and preventing patient emergencies. A time series discord indicates a subsequence that has the maximum difference to the rest of the time series subsequences, meaning that it has abnormal or unusual data trends. In this study, we implemented two versions of time series discord detection algorithms on a high performance parallel database management system (DBMS) and applied them to 240 Hz waveform data collected from 9,723 patients. The initial brute force version of the discord detection algorithm takes each possible subsequence and calculates a distance to the nearest non-self match to find the biggest discords in the time series. For the heuristic version of the algorithm, a combination of an array and a trie structure was applied to order the time series data for better time efficiency. The study results showed efficient data loading, decoding and discord searches in a large amount of data, benefiting from the time series discord detection algorithm and the architectural characteristics of the parallel DBMS, including data compression, data pipelining, and task scheduling.

  17. Aerial Measuring System Sensor Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    R. S. Detwiler

    2002-04-01

    This project deals with modeling the Aerial Measuring System (AMS) fixed-wing and rotary-wing sensor systems, which are critical U.S. Department of Energy National Nuclear Security Administration (NNSA) Consequence Management assets. The fixed-wing system is critical in detecting lost or stolen radiography or medical sources, or mixed fission products as from a commercial power plant release, at high flying altitudes. The helicopter is typically used at lower altitudes to determine ground contamination, such as in measuring americium from a plutonium ground dispersal during a cleanup. Since the sensitivity of these instruments as a function of altitude is crucial in estimating detection limits for various ground contaminations and the necessary count times, a characterization of their sensitivity as a function of altitude and energy is needed. Experimental data at altitude as well as laboratory benchmarks are important to ensure that the strong effects of air attenuation are modeled correctly. The modeling presented here is the first attempt at such a characterization of the equipment for flying altitudes. The sodium iodide (NaI) sensors utilized with these systems were characterized using the Monte Carlo N-Particle code (MCNP) developed at Los Alamos National Laboratory. For the fixed-wing system, calculations modeled the spectral response for the 3-element NaI detector pod and the High-Purity Germanium (HPGe) detector in the relevant energy range of 50 keV to 3 MeV. NaI detector responses were simulated for both point and distributed surface sources as a function of gamma energy and flying altitude. For point sources, photopeak efficiencies were calculated for a zero radial distance and an offset equal to the altitude. For distributed sources approximating an infinite plane, gross count efficiencies were calculated and normalized to a uniform surface deposition of 1 µCi/m2. The helicopter calculations modeled the transport of americium-241 (241Am), as this is the 'marker' isotope utilized by the system for Pu detection. The helicopter sensor array consists of 2 six-element NaI detector pods, and the NaI pod detector response was simulated for a distributed surface source of 241Am as a function of altitude.

  18. Highly selective and sensitive determination of Cu2+ in drink and water samples based on a 1,8-diaminonaphthalene derived fluorescent sensor

    NASA Astrophysics Data System (ADS)

    Sun, Tao; Li, Yang; Niu, Qingfen; Li, Tianduo; Liu, Yan

    2018-04-01

    A new, simple and efficient fluorescent sensor L based on a 1,8-diaminonaphthalene Schiff base has been developed for the highly sensitive and selective determination of Cu2+ in drinks and water. Detection of Cu2+ over other tested metal ions produced an obvious color change from blue to colorless, easily observed by the naked eye. The detection limit is determined to be as low as 13.2 nM and the response time is very fast, within 30 s. The 1:1 binding mechanism was confirmed by fluorescence measurements, IR analysis and DFT calculations. Importantly, sensor L was employed for the quick detection of Cu2+ in drink and environmental water samples with satisfactory results, providing a simple, rapid, reliable and feasible Cu2+-sensing method.
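
    Detection limits such as the 13.2 nM figure are conventionally taken as three times the standard deviation of blank measurements divided by the calibration slope; a minimal sketch with invented numbers follows (whether the authors used exactly this convention is an assumption).

      import numpy as np

      def detection_limit(blank_signals, slope):
          """3-sigma detection limit, in the same concentration units as the slope."""
          return 3.0 * np.std(blank_signals, ddof=1) / slope

      # illustrative numbers: fluorescence of 10 blank readings and a calibration slope
      blanks = np.array([101.2, 99.8, 100.5, 100.9, 99.5,
                         100.1, 100.7, 99.9, 100.3, 100.6])
      print(detection_limit(blanks, slope=110.0))   # signal units per uM -> LOD ~0.014 uM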

  19. False-Negative Rate and Recovery Efficiency Performance of a Validated Sponge Wipe Sampling Method

    PubMed Central

    Piepel, Greg F.; Boucher, Raymond; Tezak, Matt; Amidan, Brett G.; Einfeld, Wayne

    2012-01-01

    Recovery of spores from environmental surfaces varies due to sampling and analysis methods, spore size and characteristics, surface materials, and environmental conditions. Tests were performed to evaluate a new, validated sponge wipe method using Bacillus atrophaeus spores. Testing evaluated the effects of spore concentration and surface material on recovery efficiency (RE), false-negative rate (FNR), limit of detection (LOD), and their uncertainties. Ceramic tile and stainless steel had the highest mean RE values (48.9 and 48.1%, respectively). Faux leather, vinyl tile, and painted wood had mean RE values of 30.3%, 25.6%, and 25.5%, respectively, while plastic had the lowest mean RE (9.8%). Results show roughly linear dependences of RE and FNR on surface roughness, with smoother surfaces resulting in higher mean REs and lower FNRs. REs were not influenced by the low spore concentrations tested (3.10 × 10−3 to 1.86 CFU/cm2). Stainless steel had the lowest mean FNR (0.123), and plastic had the highest mean FNR (0.479). The LOD90 (≥1 CFU detected 90% of the time) varied with surface material, from 0.015 CFU/cm2 on stainless steel up to 0.039 CFU/cm2 on plastic. It may be possible to improve sampling results by considering surface roughness in selecting sampling locations and interpreting spore recovery data. Further, FNR values (calculated as a function of concentration and surface material) can be used presampling to calculate the numbers of samples for statistical sampling plans with desired performance and postsampling to calculate the confidence in characterization and clearance decisions. PMID:22138998
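
    The pre-sampling use of FNR values mentioned above can be illustrated with the simple independence-based calculation below; it assumes each sample fails independently with the quoted FNR at the concentration of interest, which is a simplification of the authors' statistical sampling plans.

      import math

      def samples_for_confidence(fnr, confidence):
          """Number of independent sponge-wipe samples needed so that the chance of
          missing contamination present at the tested level is at most 1-confidence."""
          return math.ceil(math.log(1.0 - confidence) / math.log(fnr))

      # e.g. stainless steel (FNR ~0.12) vs. plastic (FNR ~0.48) at 95% confidence
      print(samples_for_confidence(0.123, 0.95))   # -> 2
      print(samples_for_confidence(0.479, 0.95))   # -> 5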

  20. A simple system for detection of EEG artifacts in polysomnographic recordings.

    PubMed

    Durka, P J; Klekowicz, H; Blinowska, K J; Szelenberger, W; Niemcewicz, Sz

    2003-04-01

    We present an efficient parametric system for automatic detection of electroencephalogram (EEG) artifacts in polysomnographic recordings. For each of the selected types of artifacts, a relevant parameter was calculated for a given epoch. If any of these parameters exceeded a threshold, the epoch was marked as an artifact. Performance of the system, evaluated on 18 overnight polysomnographic recordings, revealed concordance with decisions of human experts close to the interexpert agreement and the repeatability of expert's decisions, assessed via a double-blind test. Complete software (Matlab source code) for the presented system is freely available from the Internet at http://brain.fuw.edu.pl/artifacts.
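
    The per-epoch thresholding logic described above reduces to a few lines; the sketch below uses two invented parameters (peak amplitude and maximum sample-to-sample slope) with illustrative thresholds, not the actual parameter set or thresholds of the published system.

      import numpy as np

      def mark_artifact_epochs(eeg, fs, epoch_s=4.0, amp_uv=200.0, slope_uv=150.0):
          """Minimal thresholding scheme in the spirit of the system described above:
          one parameter per artifact type is computed for each epoch, and the epoch
          is flagged if any parameter exceeds its threshold."""
          n = int(epoch_s * fs)
          flags = []
          for start in range(0, len(eeg) - n + 1, n):
              epoch = eeg[start:start + n]
              too_large = np.max(np.abs(epoch)) > amp_uv             # amplitude artifact
              too_steep = np.max(np.abs(np.diff(epoch))) > slope_uv  # steep slope / muscle
              flags.append(too_large or too_steep)
          return np.array(flags)

      # toy signal: 60 s of noise at 128 Hz with one artifactual burst near t = 10 s
      fs = 128
      eeg = np.random.default_rng(0).normal(0, 20, 60 * fs)
      eeg[10 * fs:11 * fs] += 500.0
      print(np.where(mark_artifact_epochs(eeg, fs))[0])   # flags the epoch containing the burst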

  1. Visual verification and analysis of cluster detection for molecular dynamics.

    PubMed

    Grottel, Sebastian; Reina, Guido; Vrabec, Jadran; Ertl, Thomas

    2007-01-01

    A current research topic in molecular thermodynamics is the condensation of vapor to liquid and the investigation of this process at the molecular level. Condensation is found in many physical phenomena, e.g. the formation of atmospheric clouds or the processes inside steam turbines, where a detailed knowledge of the dynamics of condensation processes will help to optimize energy efficiency and avoid problems with droplets of macroscopic size. The key properties of these processes are the nucleation rate and the critical cluster size. For the calculation of these properties it is essential to make use of a meaningful definition of molecular clusters, which currently is a not completely resolved issue. In this paper a framework capable of interactively visualizing molecular datasets of such nucleation simulations is presented, with an emphasis on the detected molecular clusters. To check the quality of the results of the cluster detection, our framework introduces the concept of flow groups to highlight potential cluster evolution over time which is not detected by the employed algorithm. To confirm the findings of the visual analysis, we coupled the rendering view with a schematic view of the clusters' evolution. This makes it possible to rapidly assess the quality of the molecular cluster detection algorithm and to identify locations in the simulation data, in space as well as in time, where the cluster detection fails. Thus, thermodynamics researchers can eliminate weaknesses in their cluster detection algorithms. Several examples for the effective and efficient usage of our tool are presented.
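
    A common geometric (Stillinger-type) cluster definition, in which molecules within a cutoff distance belong to the same cluster, can be sketched as below; this illustrates the kind of detection algorithm being verified, not the authors' implementation.

      import numpy as np

      def stillinger_clusters(positions, r_cut):
          """Molecules closer than r_cut are connected; clusters are the connected
          components (simple union-find sketch, O(n^2) pair checks)."""
          n = len(positions)
          parent = list(range(n))

          def find(i):
              while parent[i] != i:
                  parent[i] = parent[parent[i]]
                  i = parent[i]
              return i

          for i in range(n):
              for j in range(i + 1, n):
                  if np.linalg.norm(positions[i] - positions[j]) < r_cut:
                      parent[find(i)] = find(j)

          labels = np.array([find(i) for i in range(n)])
          return [np.where(labels == root)[0] for root in np.unique(labels)]

      # toy configuration: a dense droplet of 30 molecules in a dilute vapour
      rng = np.random.default_rng(1)
      pos = np.vstack([rng.normal(0, 0.5, (30, 3)), rng.uniform(-10, 10, (200, 3))])
      sizes = sorted(len(c) for c in stillinger_clusters(pos, r_cut=1.0))
      print(sizes[-3:])   # the largest detected cluster should be the droplet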

  2. Java web tools for PCR, in silico PCR, and oligonucleotide assembly and analysis.

    PubMed

    Kalendar, Ruslan; Lee, David; Schulman, Alan H

    2011-08-01

    The polymerase chain reaction is fundamental to molecular biology and is the most important practical molecular technique for the research laboratory. We have developed and tested efficient tools for PCR primer and probe design, which also predict oligonucleotide properties based on experimental studies of PCR efficiency. The tools provide comprehensive facilities for designing primers for most PCR applications and their combinations, including standard, multiplex, long-distance, inverse, real-time, unique, group-specific, bisulphite modification assays, Overlap-Extension PCR Multi-Fragment Assembly, as well as a programme to design oligonucleotide sets for long sequence assembly by ligase chain reaction. The in silico PCR primer or probe search includes comprehensive analyses of individual primers and primer pairs. It calculates the melting temperature for standard and degenerate oligonucleotides including LNA and other modifications, provides analyses for a set of primers with prediction of oligonucleotide properties, dimer and G-quadruplex detection, linguistic complexity, and provides a dilution and resuspension calculator. Copyright © 2011 Elsevier Inc. All rights reserved.

  3. Development of an Itemwise Efficiency Scoring Method: Concurrent, Convergent, Discriminant, and Neuroimaging-Based Predictive Validity Assessed in a Large Community Sample

    PubMed Central

    Moore, Tyler M.; Reise, Steven P.; Roalf, David R.; Satterthwaite, Theodore D.; Davatzikos, Christos; Bilker, Warren B.; Port, Allison M.; Jackson, Chad T.; Ruparel, Kosha; Savitt, Adam P.; Baron, Robert B.; Gur, Raquel E.; Gur, Ruben C.

    2016-01-01

    Traditional “paper-and-pencil” testing is imprecise in measuring speed and hence limited in assessing performance efficiency, but computerized testing permits precision in measuring itemwise response time. We present a method of scoring performance efficiency (combining information from accuracy and speed) at the item level. Using a community sample of 9,498 youths age 8-21, we calculated item-level efficiency scores on four neurocognitive tests, and compared the concurrent, convergent, discriminant, and predictive validity of these scores to simple averaging of standardized speed and accuracy-summed scores. Concurrent validity was measured by the scores' abilities to distinguish men from women and their correlations with age; convergent and discriminant validity were measured by correlations with other scores inside and outside of their neurocognitive domains; predictive validity was measured by correlations with brain volume in regions associated with the specific neurocognitive abilities. Results provide support for the ability of itemwise efficiency scoring to detect signals as strong as those detected by standard efficiency scoring methods. We find no evidence of superior validity of the itemwise scores over traditional scores, but point out several advantages of the former. The itemwise efficiency scoring method shows promise as an alternative to standard efficiency scoring methods, with overall moderate support from tests of four different types of validity. This method allows the use of existing item analysis methods and provides the convenient ability to adjust the overall emphasis of accuracy versus speed in the efficiency score, thus adjusting the scoring to the real-world demands the test is aiming to fulfill. PMID:26866796

  4. Research on conflict detection algorithm in 3D visualization environment of urban rail transit line

    NASA Astrophysics Data System (ADS)

    Wang, Li; Xiong, Jing; You, Kuokuo

    2017-03-01

    In this paper, a collision detection method is introduced for rapidly extracting, in a 3D visualization environment, the buildings that conflict with the track area, building on three-dimensional modeling of underground buildings and urban rail lines. According to their characteristics, the buildings are modeled with CSG and B-rep representations. Based on these modeling characteristics, this paper proposes to use an AABB hierarchical bounding volume method as a first, broad-phase conflict test to improve detection efficiency, then to apply a fast triangle-triangle intersection algorithm to confirm the conflict, and finally to determine whether a building collides with the track area. With this algorithm, buildings colliding with the influence area of the track line can be extracted quickly (see the sketch below), which helps in line design, choosing the best route, and estimating land acquisition costs in the three-dimensional visualization environment.
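
    A minimal sketch of the broad-phase test described above, assuming axis-aligned bounding boxes stored as min/max corners; the narrow-phase triangle-triangle test is only named here, not implemented.

    ```python
    from dataclasses import dataclass

    @dataclass
    class AABB:
        """Axis-aligned bounding box given by its minimum and maximum corners."""
        xmin: float
        ymin: float
        zmin: float
        xmax: float
        ymax: float
        zmax: float

    def aabb_overlap(a: AABB, b: AABB) -> bool:
        """Broad-phase test: two boxes overlap only if their extents overlap on every axis."""
        return (a.xmin <= b.xmax and b.xmin <= a.xmax and
                a.ymin <= b.ymax and b.ymin <= a.ymax and
                a.zmin <= b.zmax and b.zmin <= a.zmax)

    # Only building/track pairs that pass this cheap test would be handed to the
    # expensive triangle-triangle intersection test of the narrow phase.
    ```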

  5. An Improved Harmonic Current Detection Method Based on Parallel Active Power Filter

    NASA Astrophysics Data System (ADS)

    Zeng, Zhiwu; Xie, Yunxiang; Wang, Yingpin; Guan, Yuanpeng; Li, Lanfang; Zhang, Xiaoyu

    2017-05-01

    Harmonic detection technology plays an important role in active power filter applications. The accuracy and real-time performance of harmonic detection are preconditions for the compensation performance of the Active Power Filter (APF). This paper proposes an improved instantaneous reactive power harmonic current detection algorithm: an improved ip-iq algorithm combined with a moving average filter. The improved ip-iq algorithm eliminates the αβ and dq coordinate transformations, reducing the computational cost, simplifying the extraction of the fundamental components of the load currents, and improving the detection speed. The traditional low-pass filter is replaced by a moving average filter, which detects the harmonic currents more precisely and quickly. Compared with the traditional algorithm, the THD (Total Harmonic Distortion) of the grid currents is reduced from 4.41% to 3.89% in simulations and from 8.50% to 4.37% in experiments after the improvement. The results show the proposed algorithm is more accurate and efficient.
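
    The moving-average idea can be illustrated with a short sketch (hypothetical sampling parameters, not the authors' implementation): averaging the transformed current over exactly one fundamental period cancels whole cycles of every harmonic, leaving the DC term that represents the fundamental.

    ```python
    import numpy as np

    def fundamental_by_moving_average(i_rot: np.ndarray, fs: float, f0: float = 50.0) -> np.ndarray:
        """Moving average over one fundamental period (fs / f0 samples).
        In the rotating (ip-iq) frame the fundamental appears as a DC offset,
        while harmonics average to zero over a whole period, so the output
        approximates the fundamental component of the load current."""
        n = int(round(fs / f0))            # samples per fundamental period
        kernel = np.ones(n) / n
        return np.convolve(i_rot, kernel, mode="same")

    # The harmonic reference for compensation is then the measured current
    # minus the reconstructed fundamental component.
    ```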

  6. Calibration of 4π NaI(Tl) detectors with coincidence summing correction using new numerical procedure and ANGLE4 software

    NASA Astrophysics Data System (ADS)

    Badawi, Mohamed S.; Jovanovic, Slobodan I.; Thabet, Abouzeid A.; El-Khatib, Ahmed M.; Dlabac, Aleksandar D.; Salem, Bohaysa A.; Gouda, Mona M.; Mihaljevic, Nikola N.; Almugren, Kholud S.; Abbas, Mahmoud I.

    2017-03-01

    The 4π NaI(Tl) γ-ray detectors consist of a well cavity with a cylindrical cross section, giving an enclosing measurement geometry with a large detection angle. This leads to an exceptionally high efficiency and a significant coincidence summing effect, much larger than for a single cylindrical or coaxial detector, especially in very low activity measurements. In the present work, the effective detection solid angle, as well as the full-energy peak and total efficiencies of well-type detectors, were calculated by a new numerical simulation method (NSM) and by the ANGLE4 software. To obtain the coincidence summing correction factors with these methods, the coincident emission of photons was modeled mathematically, based on analytical equations and integrations over the radioactive volumetric sources including the self-attenuation factor. The full-energy peak efficiencies and correction factors were also measured using 152Eu; an exact adjustment of the detector efficiency curve is required, because neglecting the coincidence summing effect makes the results inconsistent, since the efficiency calibration and the coincidence summing corrections appear jointly. The full-energy peak and total efficiencies from the two methods agree within a discrepancy of about 10%. The discrepancy between the simulated (NSM and ANGLE4) and measured full-energy peak efficiencies after correction for the coincidence summing effect did not, on average, exceed 14%. Therefore, this technique can easily be applied to establish efficiency calibration curves for well-type detectors.
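
    As a simple point of reference for the solid-angle part of such calculations, the sketch below computes the geometric efficiency for the elementary case of an isotropic point source on the axis of a flat circular detector face; it is only an illustration of the quantity involved, not the well-type, volume-source integration performed in the paper.

    ```python
    import math

    def geometric_efficiency_on_axis(d: float, r: float) -> float:
        """Fraction of 4*pi subtended by a flat circular detector face of radius r,
        seen from an isotropic point source on the axis at distance d:
        Omega = 2*pi*(1 - d / sqrt(d**2 + r**2)), efficiency = Omega / (4*pi)."""
        omega = 2.0 * math.pi * (1.0 - d / math.sqrt(d * d + r * r))
        return omega / (4.0 * math.pi)

    # Example: a 2.54 cm (1 inch) radius face viewed from 5 cm on axis -> about 5.4%
    print(geometric_efficiency_on_axis(d=5.0, r=2.54))
    ```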

  7. Optimization and Characterization of a Novel Self Powered Solid State Neutron Detector

    NASA Astrophysics Data System (ADS)

    Clinton, Justin

    There is a strong interest in detecting both the diversion of special nuclear material (SNM) from legitimate, peaceful purposes and the transport of illicit SNM across domestic and international borders and ports. A simple solid-state detector employs a planar solar-cell type p-n junction and a thin conversion layer that converts incident neutrons into detectable charged particles, such as protons, alpha-particles, and heavier ions. Although simple planar devices can act as highly portable, low-cost detectors, they have historically been limited to relatively low detection efficiencies; ~10% and ~0.2% for thermal and fast detectors, respectively. To increase intrinsic detection efficiency, the incorporation of 3D microstructures into p-i-n silicon devices was proposed. In this research, a combination of existing and new types of detector microstructures was investigated; Monte Carlo models, based on analytical calculations, were constructed and characterized using the GEANT4 simulation toolkit. The simulation output revealed that an array of etched hexagonal holes arranged in a honeycomb pattern and filled with either enriched (99% 10B) boron or parylene resulted in the highest intrinsic detection efficiencies of 48% and 0.88% for thermal and fast neutrons, respectively. The optimal parameters corresponding to each model were utilized as the basis for the fabrication of several prototype detectors. A calibrated 252Cf spontaneous fission source was utilized to generate fast neutrons, while thermal neutrons were created by placing the 252Cf in an HDPE housing designed and optimized using the MCNP simulation software. Upon construction, thermal neutron calibration was performed via activation analysis of gold foils and measurements from a 6Li loaded glass scintillator. Experimental testing of the prototype detectors resulted in maximum intrinsic efficiencies of 4.5% and 0.12% for the thermal and fast devices, respectively. The prototype thermal device was filled with natural (19% 10B) boron; scaling the response to 99% 10B enriched boron resulted in an intrinsic efficiency of 22.5%, one of the highest results in the literature. A comparison of simulated and experimental detector responses demonstrated a high degree of correlation, validating the conceptual models.

  8. A hierarchical detection method in external communication for self-driving vehicles based on TDMA

    PubMed Central

    Al-ani, Muzhir Shaban; McDonald-Maier, Klaus

    2018-01-01

    Security is considered a major challenge for self-driving and semi self-driving vehicles. These vehicles depend heavily on communications to predict and sense the external environment used in their motion. They use a type of ad hoc network termed Vehicular ad hoc networks (VANETs). Unfortunately, VANETs are potentially exposed to many attacks at the network and application levels. This paper proposes a new intrusion detection system to protect the communication system of self-driving cars, utilising a combination of hierarchical models based on clusters and log parameters. The security system is designed to detect Sybil and Wormhole attacks in highway usage scenarios. It is based on clusters, utilising Time Division Multiple Access (TDMA) to overcome some of the obstacles of VANETs such as high density, high mobility and bandwidth limitations in exchanging messages. This makes the security system more efficient, accurate, capable of real-time detection and quick to identify malicious behaviour in VANETs. In this scheme, each vehicle calculates and stores different parameter values in its log after receiving cooperative awareness messages from nearby vehicles. The vehicles exchange their log data and determine the differences between the parameters, which are utilised to detect Sybil and Wormhole attacks. To verify that the intrusion detection system is efficient and effective, we use the well-known network simulator ns-2 to evaluate the performance of the security system. Simulation results indicate that the security system can achieve high detection rates and effectively detect anomalies with a low rate of false alarms. PMID:29315302

  9. Preliminary Monte Carlo calculations for the UNCOSS neutron-based explosive detector

    NASA Astrophysics Data System (ADS)

    Eleon, C.; Perot, B.; Carasco, C.

    2010-07-01

    The goal of the FP7 UNCOSS project (Underwater Coastal Sea Surveyor) is to develop a non-destructive explosive detection system based on the associated particle technique, with a view to improving the security of coastal areas and naval infrastructures where violent conflicts took place. The end product of the project will be a prototype of a complete coastal survey system, including a neutron-based sensor capable of confirming the presence of explosives on the sea bottom. A 3D analysis of prompt gamma rays induced by 14 MeV neutrons will be performed to identify elements constituting common military explosives, such as C, N and O. This paper presents calculations performed with the MCNPX computer code to support the ongoing design studies performed by the UNCOSS collaboration. Detection efficiencies and time and energy resolutions of the candidate gamma-ray detectors are compared, which show that NaI(Tl) or LaBr3(Ce) scintillators will be suitable for this application. The effect of neutron attenuation and scattering in the seawater, which influences the counting statistics and signal-to-noise ratio, is also studied with calculated neutron time-of-flight and gamma-ray spectra for an underwater TNT target.

  10. Heart rate calculation from ensemble brain wave using wavelet and Teager-Kaiser energy operator.

    PubMed

    Srinivasan, Jayaraman; Adithya, V

    2015-01-01

    Electroencephalogram (EEG) signal artifacts are caused by various factors, such as the electro-oculogram (EOG), electromyogram (EMG), electrocardiogram (ECG), movement artifacts and line interference. The relatively high electrical energy of cardiac activity causes artifacts in the EEG. In EEG signal processing the general approach is to remove the ECG signal. In this paper, we introduce an automated method to extract the ECG signal from the EEG using a wavelet transform and the Teager-Kaiser energy operator for R-peak enhancement and detection. From the detected R-peaks the heart rate (HR) is calculated for clinical diagnosis. To check the efficiency of our method, we compare the HR with that calculated from an ECG signal recorded synchronously with the EEG. The proposed method yields a mean error of 1.4% for the heart rate and 1.7% for the mean R-R interval. The results illustrate that the proposed method can be used for ECG extraction from a single-channel EEG and in clinical diagnosis, such as stress analysis, fatigue estimation, and sleep stage classification studies, as a multi-modal system. In addition, this method eliminates the dependence on an additional synchronous ECG when extracting the ECG from the EEG signal.
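
    A minimal sketch of the Teager-Kaiser energy operator used for R-peak enhancement (only the operator itself; the wavelet stage and the actual peak picking are omitted, and any preprocessing is assumed to have been done):

    ```python
    import numpy as np

    def teager_kaiser(x):
        """Discrete Teager-Kaiser energy operator:
        psi[n] = x[n]**2 - x[n-1]*x[n+1].
        It accentuates sharp, high-frequency transients such as R-peaks, which
        makes subsequent peak picking easier; endpoints are left at zero."""
        x = np.asarray(x, dtype=float)
        psi = np.zeros_like(x)
        psi[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
        return psi
    ```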

  11. The research and development of the non-contact detection of the tubing internal thread with a line structured light

    NASA Astrophysics Data System (ADS)

    Hu, Yuanyuan; Xu, Yingying; Hao, Qun; Hu, Yao

    2013-12-01

    The tubing internal thread plays an irreplaceable role in petroleum equipment. Unqualified tubing can directly lead to leakage and slippage and bring huge losses to the oil industry. To improve the efficiency and precision of tubing internal thread inspection, we developed a new non-contact tubing internal thread measurement system based on the laser triangulation principle. Firstly, considering that the tubing thread has a small diameter and a relatively smooth surface, we built an optical system with a line structured light to irradiate the internal thread surface and obtain, through a photoelectric sensor, an image that contains the internal thread profile information. Secondly, image processing techniques were used to detect the edges of the internal thread in the obtained image. One key method was a sub-pixel technique, which greatly improved the detection accuracy under the same hardware conditions. Finally, we reconstructed the real internal thread contour on the basis of the laser triangulation method and calculated thread parameters such as the pitch, taper and tooth profile angle. In this system, the profiles of several thread teeth can be obtained at the same time. Compared with other existing scanning methods using a point light source and a stepper motor, this system greatly improves the detection efficiency. Experimental results indicate that this system achieves high-precision, non-contact measurement of the tubing internal thread.

  12. Detection of Landmines by Neutron Backscattering: Effects of Soil Moisture on the Detection System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baysoy, D. Y.; Subasi, M.

    2010-01-21

    Detection of buried landmines using the neutron backscattering (NBS) technique is a well-established method. It depends on detecting a hydrogen anomaly in dry soil. Since a landmine and its plastic casing contain many more hydrogen atoms than the dry soil, this anomaly can be detected by observing a rise in the number of neutrons moderated to thermal or epithermal energy. However, the presence of moisture in the soil limits the effectiveness of the measurements. In this work, a landmine detection system using the NBS technique was designed. A series of Monte Carlo calculations was carried out to determine the limits of the system due to the moisture content of the soil. In the simulations, an isotropic fast neutron source (252Cf, 100 μg) and a neutron detection system consisting of five 3He detectors were used in a practicable geometry. In order to see the effects of soil moisture on the efficiency of the detection system, soils with different water contents were tested.

  13. [Cost-effectiveness analysis on colorectal cancer screening program].

    PubMed

    Huang, Q C; Ye, D; Jiang, X Y; Li, Q L; Yao, K Y; Wang, J B; Jin, M J; Chen, K

    2017-01-10

    Objective: To evaluate the cost-effectiveness of a colorectal cancer screening program in different age groups from the viewpoint of health economics. Methods: The screening compliance rates and detection rates in different age groups were calculated using data from the colorectal cancer screening program in Jiashan county, Zhejiang province. The differences in indicators among age groups were analyzed with the χ2 test or the trend χ2 test. The ratios of cost to the number of cases detected were calculated from cost statistics. Results: The detection rates of immunochemical fecal occult blood test (iFOBT) positivity, advanced adenoma, colorectal cancer and early-stage cancer increased with age, while the early diagnosis rates were negatively associated with age. After excluding the younger age groups, the cost-effectiveness ratio for individuals aged >50 years could be reduced by 15%-30%. Conclusion: From a health economics perspective, it is beneficial to start colorectal cancer screening at the age of 50 years to improve the efficiency of the screening.

  14. WDR-PK-AK-018

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hollister, R

    2009-08-26

    Method - CES SOP-HW-P556 'Field and Bulk Gamma Analysis'. Detector - High-purity germanium, 40% relative efficiency. Calibration - The detector was calibrated on February 8, 2006 using a NIST-traceable sealed source, and the calibration was verified using an independent sealed source. Count Time and Geometry - The sample was counted for 20 minutes at 72 inches from the detector. A lead collimator was used to limit the field-of-view to the region of the sample. The drum was rotated 180 degrees halfway through the count time. Date and Location of Scans - June 1, 2006 in Building 235 Room 1136. Spectral Analysis - Spectra were analyzed with ORTEC GammaVision software. Matrix and geometry corrections were calculated using ORTEC Isotopic software. A background spectrum was measured at the counting location. No man-made radioactivity was observed in the background. Results were determined from the sample spectra without background subtraction. Minimum detectable activities were calculated by the NUREG 4.16 method. Results - Detected Pu-238, Pu-239, Am-241 and Am-243.

  15. a Novel Approach to Camera Calibration Method for Smart Phones Under Road Environment

    NASA Astrophysics Data System (ADS)

    Lee, Bijun; Zhou, Jian; Ye, Maosheng; Guo, Yuan

    2016-06-01

    Monocular vision-based lane departure warning systems have been increasingly used in advanced driver assistance systems (ADAS). Using lane mark detection and identification, we propose an automatic and efficient camera calibration method for smart phones. First, we detect the lane marker features in perspective space and calculate the edges of lane markers in image sequences. Second, because the widths of the lane markers and of the road lane are fixed in a standard structured road environment, we can automatically build a transformation matrix between perspective space and 3D space and obtain a local map in the vehicle coordinate system. In order to verify the validity of this method, we installed a smart phone in the 'Tuzhi' self-driving car of Wuhan University and recorded more than 100 km of image data on the road in Wuhan. According to the results, we can calculate positions of lane markers that are accurate enough for the self-driving car to run smoothly on the road.

  16. A High Resolution Liquid Xenon Imaging Telescope for 0.3-10 MeV Gamma Ray Astrophysics: Construction and Initial Balloon Flights

    NASA Technical Reports Server (NTRS)

    Aprile, Elena

    1993-01-01

    The results achieved with a 3.5 liter liquid xenon time projection chamber (LXe-TPC) prototype during the first year include: the efficiency of detecting the primary scintillation light for event triggering has been measured to be higher than 85%; the charge response has been measured to be stable to within 0.1% for a period of time of about 30 hours; the electron lifetime has been measured to be in excess of 1.3 ms; the energy resolution has been measured to be consistent with previous results obtained with small volume chambers; X-Y gamma ray imaging has been demonstrated with a nondestructive orthogonal wires readout; Monte Carlo simulation results on detection efficiency, expected background count rate at balloon altitude, background reduction algorithms, telescope response to point-like and diffuse sources, and polarization sensitivity calculations; and work on a 10 liter LXe-TPC prototype and gas purification/recovery system.

  17. Exploration of sensing of nitrogen dioxide and ozone molecules using novel TiO2/Stanene heterostructures employing DFT calculations

    NASA Astrophysics Data System (ADS)

    Abbasi, Amirali; Sardroodi, Jaber Jahanbin

    2018-06-01

    Based on density functional theory (DFT) calculations, we explored the sensing capabilities and electronic structures of TiO2/Stanene heterostructures as novel and highly efficient materials for detection of toxic NO2 and O3 molecules in the environment. The studied gas molecules were positioned at different sites and orientations towards the nanocomposite, and the adsorption process was examined based on the most stable structures. We found that both of these molecules are chemically adsorbed on the TiO2/Stanene heterostructures. The calculations of the adsorption energy indicate that the fivefold coordinated titanium sites of the TiO2/Stanene are the most stable sites for the adsorption of NO2 and O3 molecules. The side oxygen atoms of the gas molecules were found to be chemically bonded to these titanium atoms. The adsorption of gas molecules is an exothermic process, and the adsorption on the pristine nanocomposite is more favorable in energy than that on the nitrogen-doped nanocomposite. The effects of van der Waals interactions were taken into account, which indicates that the adsorption energies increased for the most stable configurations. The gas sensing response and charge transfers were analyzed in detail. The pristine nanocomposites have a better sensing response than the doped ones. The spin density distribution plots indicate that the magnetization was mainly located over the adsorbed gas molecules. Mulliken charge analysis reveals that both NO2 and O3 molecules behave as charge acceptors, as evidenced by the accumulation of electronic charges on the adsorbed molecules predicted by charge density difference calculations. Our DFT results provide a theoretical basis for an innovative gas sensor system designed from sensitive TiO2/Stanene heterostructures for efficient detection of harmful air pollutants such as NO2 and O3.

  18. Study of the interaction of 6-mercaptopurine with protein by microdialysis coupled with LC and electrochemical detection based on functionalized multi-wall carbon nanotubes modified electrode.

    PubMed

    Cao, Xu-Ni; Lin, Li; Zhou, Yu-Yan; Zhang, Wen; Shi, Guo-Yue; Yamamoto, Katsunobu; Jin, Li-Tong

    2003-07-14

    Microdialysis sampling coupled with liquid chromatography and electrochemical detection (LC-ECD) was developed and applied to study the interaction of 6-mercaptopurine (6-MP) with bovine serum albumin (BSA). In the LC-ECD, an electrode modified with multi-wall carbon nanotubes functionalized with carboxylic groups (MWNT-COOH CME) was used as the working electrode for the determination of 6-MP. The results indicated that this chemically modified electrode (CME) exhibited efficient electrocatalytic oxidation of 6-MP with relatively high sensitivity, stability and long life. The peak currents of 6-MP were linear with concentration over the range 4.0 x 10(-7) to 1.0 x 10(-4) mol l(-1), with a calculated detection limit (S/N = 3) of 2.0 x 10(-7) mol l(-1). The method was successfully applied to assess the association constant (K) and the number of binding sites (n) on a BSA molecule, which, calculated by the Scatchard equation, were 3.97 x 10(3) mol(-1) l and 1.51, respectively. This method provides a fast, sensitive and simple technique for the study of drug-protein interactions.

  19. Skin Lesion Analysis towards Melanoma Detection Using Deep Learning Network.

    PubMed

    Li, Yuexiang; Shen, Linlin

    2018-02-11

    Skin lesions constitute a severe disease burden globally. Early detection of melanoma in dermoscopy images significantly increases the survival rate. However, the accurate recognition of melanoma is extremely challenging due to the following reasons: low contrast between lesions and skin, visual similarity between melanoma and non-melanoma lesions, etc. Hence, reliable automatic detection of skin tumors is very useful for increasing the accuracy and efficiency of pathologists. In this paper, we propose two deep learning methods to address three main tasks emerging in the area of skin lesion image processing, i.e., lesion segmentation (task 1), lesion dermoscopic feature extraction (task 2) and lesion classification (task 3). A deep learning framework consisting of two fully convolutional residual networks (FCRN) is proposed to simultaneously produce the segmentation result and the coarse classification result. A lesion index calculation unit (LICU) is developed to refine the coarse classification results by calculating the distance heat-map. A straightforward CNN is proposed for the dermoscopic feature extraction task. The proposed deep learning frameworks were evaluated on the ISIC 2017 dataset. Experimental results show the promising accuracy of our frameworks: 0.753 for task 1, 0.848 for task 2 and 0.912 for task 3.

  20. Deriving the Contribution of Blazars to the Fermi-LAT Extragalactic γ-ray Background at E > 10 GeV with Efficiency Corrections and Photon Statistics

    NASA Astrophysics Data System (ADS)

    Di Mauro, M.; Manconi, S.; Zechlin, H.-S.; Ajello, M.; Charles, E.; Donato, F.

    2018-04-01

    The Fermi Large Area Telescope (LAT) Collaboration has recently released the Third Catalog of Hard Fermi-LAT Sources (3FHL), which contains 1556 sources detected above 10 GeV with seven years of Pass 8 data. Building upon the 3FHL results, we investigate the flux distribution of sources at high Galactic latitudes (|b| > 20°), which are mostly blazars. We use two complementary techniques: (1) a source-detection efficiency correction method and (2) an analysis of pixel photon count statistics with the one-point probability distribution function (1pPDF). With the first method, using realistic Monte Carlo simulations of the γ-ray sky, we calculate the efficiency of the LAT to detect point sources. This enables us to find the intrinsic source-count distribution at photon fluxes down to 7.5 × 10^-12 ph cm^-2 s^-1. With this method, we detect a flux break at (3.5 ± 0.4) × 10^-11 ph cm^-2 s^-1 with a significance of at least 5.4σ. The power-law indexes of the source-count distribution above and below the break are 2.09 ± 0.04 and 1.07 ± 0.27, respectively. This result is confirmed with the 1pPDF method, which has a sensitivity reach of ∼10^-11 ph cm^-2 s^-1. Integrating the derived source-count distribution above the sensitivity of our analysis, we find that (42 ± 8)% of the extragalactic γ-ray background originates from blazars.

  1. Efficient Geometric Probabilities of Multi-transiting Systems, Circumbinary Planets, and Exoplanet Mutual Events

    NASA Astrophysics Data System (ADS)

    Brakensiek, Joshua; Ragozzine, D.

    2012-10-01

    The transit method for discovering extra-solar planets relies on detecting regular diminutions of light from stars due to the shadows of planets passing in between the star and the observer. NASA's Kepler Mission has successfully discovered thousands of exoplanet candidates using this technique, including hundreds of stars with multiple transiting planets. In order to estimate the frequency of these valuable systems, our research concerns the efficient calculation of geometric probabilities for detecting multiple transiting extrasolar planets around the same parent star. In order to improve on previous studies that used numerical methods (e.g., Ragozzine & Holman 2010, Tremaine & Dong 2011), we have constructed an efficient, analytical algorithm which, given a collection of conjectured exoplanets orbiting a star, computes the probability that any particular group of exoplanets are transiting. The algorithm applies theorems of elementary differential geometry to compute the areas bounded by circular curves on the surface of a sphere (see Ragozzine & Holman 2010). The implemented algorithm is more accurate and orders of magnitude faster than previous algorithms, based on comparison with Monte Carlo simulations. Expanding this work, we have also developed semi-analytical methods for determining the frequency of exoplanet mutual events, i.e., the geometric probability two planets will transit each other (Planet-Planet Occultation) and the probability that this transit occurs simultaneously as they transit their star (Overlapping Double Transits; see Ragozzine & Holman 2010). The latter algorithm can also be applied to calculating the probability of observing transiting circumbinary planets (Doyle et al. 2011, Welsh et al. 2012). All of these algorithms have been coded in C and will be made publicly available. We will present and advertise these codes and illustrate their value for studying exoplanetary systems.
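
    The core geometric quantity behind such calculations can be illustrated simply: for a single planet on a circular orbit viewed from a random direction, the transit probability is roughly the stellar radius divided by the orbital radius. The sketch below shows only this single-planet approximation; the joint, multi-planet spherical-geometry calculation described above is considerably more involved.

    ```python
    def transit_probability(r_star_au: float, a_au: float) -> float:
        """Geometric probability that a planet on a circular orbit of semi-major
        axis a transits a star of radius R_star, for an isotropically placed
        observer: p ~ R_star / a (planet radius neglected, grazing events included)."""
        return r_star_au / a_au

    R_SUN_AU = 0.00465  # solar radius expressed in astronomical units
    print(transit_probability(R_SUN_AU, 1.0))   # Earth-like orbit: ~0.5%
    print(transit_probability(R_SUN_AU, 0.05))  # close-in orbit: ~9%
    ```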

  2. A New Parameter for Cardiac Efficiency Analysis

    NASA Astrophysics Data System (ADS)

    Borazjani, Iman; Rajan, Navaneetha Krishnan; Song, Zeying; Hoffmann, Kenneth; MacMahon, Eileen; Belohlavek, Marek

    2014-11-01

    Detecting and evaluating a heart with suboptimal pumping efficiency is a significant clinical goal. However, routine parameters such as ejection fraction, quantified with current non-invasive techniques, are not predictive of heart disease prognosis. Furthermore, they only represent left-ventricular (LV) ejection function and not the efficiency, which might be affected before apparent changes in the function. We propose a new parameter, called the hemodynamic efficiency (H-efficiency) and defined as the ratio of the useful to total power, for cardiac efficiency analysis. Our results indicate that a change in the shape/motion of the LV will change the pumping efficiency of the LV even if the ejection fraction is kept constant at 55% (normal value), i.e., H-efficiency can be used for diagnosing suboptimal cardiac performance. To apply H-efficiency on a patient-specific basis, we are developing a system that combines echocardiography (echo) and computational fluid dynamics (CFD) to provide the 3D pressure and velocity field to directly calculate the H-efficiency parameter. Because the method is based on clinically used 2D echo, which has a faster acquisition time and lower cost relative to other imaging techniques, it can have a significant impact on a large number of patients. This work is partly supported by the American Heart Association.

  3. Automatic detection of sleep apnea based on EEG detrended fluctuation analysis and support vector machine.

    PubMed

    Zhou, Jing; Wu, Xiao-ming; Zeng, Wei-jie

    2015-12-01

    Sleep apnea syndrome (SAS) is prevalent, and recently many studies have focused on simple and efficient methods for SAS detection instead of polysomnography. However, not much work has been done on using the nonlinear behavior of electroencephalogram (EEG) signals. The purpose of this study is to find a novel and simpler method for detecting apnea patients and to quantify nonlinear characteristics of sleep apnea. Scaling exponents quantifying power-law correlations in 30 min EEG recordings were computed using detrended fluctuation analysis (DFA) and compared between six SAS patients and six healthy subjects during sleep. The mean scaling exponents were calculated every 30 s, and 360 control values and 360 apnea values were obtained. These values were compared between the two groups, and a support vector machine (SVM) was used to classify apnea patients. A significant difference was found between the EEG scaling exponents of the two groups (p < 0.001). The SVM obtained a high and consistent recognition rate: the average classification accuracy reached 95.1%, with a sensitivity of 93.2% and a specificity of 98.6%. DFA of the EEG is an efficient and practicable method and is clinically helpful in the diagnosis of sleep apnea.
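
    For readers unfamiliar with DFA, a minimal sketch is given below; the window sizes, the first-order detrending and the use of non-overlapping segments are illustrative choices, not the study's exact settings.

    ```python
    import numpy as np

    def dfa_exponent(x, scales=(16, 32, 64, 128, 256)):
        """Detrended fluctuation analysis: return the scaling exponent alpha,
        i.e. the slope of log F(n) versus log n (alpha ~ 0.5 for white noise,
        ~1.0 for 1/f-like signals). The signal must be longer than max(scales)."""
        y = np.cumsum(np.asarray(x, dtype=float) - np.mean(x))   # integrated profile
        fluctuations = []
        for n in scales:
            n_seg = len(y) // n
            sq_residuals = []
            for i in range(n_seg):
                seg = y[i * n:(i + 1) * n]
                t = np.arange(n)
                coef = np.polyfit(t, seg, 1)                     # local linear trend
                sq_residuals.append(np.mean((seg - np.polyval(coef, t)) ** 2))
            fluctuations.append(np.sqrt(np.mean(sq_residuals)))
        alpha, _ = np.polyfit(np.log(scales), np.log(fluctuations), 1)
        return alpha
    ```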

  4. Monte Carlo simulation of a NaI(Tl) detector for in situ radioactivity measurements in the marine environment.

    PubMed

    Zhang, Yingying; Li, Changkai; Liu, Dongyan; Zhang, Ying; Liu, Yan

    2015-04-01

    To develop an in situ NaI(Tl) detector for radioactivity measurement in the marine environment, the Monte Carlo N-Particle (MCNP) Transport Code was used to simulate the measurement of a NaI(Tl) detector immersed in seawater, taking into account the material and geometry of the detector and the interactions of photons with the atoms of the seawater and the detector. The simulated marine detection efficiency and detection distance were deduced and analyzed. To test their reliability, a field measurement was made in the open sea; the experimental value of the marine detection efficiency was deduced and is in good agreement with the simulated one. Finally, the minimum detectable activity for (137)Cs in seawater for the developed NaI(Tl) detector was determined mathematically. The simulation method and results in this paper can be used for the better design and quantitative calculation of in situ NaI(Tl) detectors for radioactivity measurement in the marine environment, and also for applications such as installation on marine monitoring platforms and the quantitative analysis of radionuclides. Copyright © 2015 Elsevier Ltd. All rights reserved.
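
    The abstract does not spell out how the minimum detectable activity was obtained; as a hedged illustration of the kind of calculation usually involved, the sketch below applies the standard Currie expression with placeholder values for the background counts, detection efficiency, gamma emission probability and counting time.

    ```python
    import math

    def mda_currie(background_counts: float, efficiency: float,
                   emission_prob: float, live_time_s: float) -> float:
        """Minimum detectable activity (Bq) from the Currie expression
        L_D = 2.71 + 4.65*sqrt(B), converted to activity with the full-energy
        peak efficiency, gamma emission probability and counting live time."""
        l_d = 2.71 + 4.65 * math.sqrt(background_counts)
        return l_d / (efficiency * emission_prob * live_time_s)

    # Placeholder values: 200 background counts under the 661.7 keV region,
    # 1% detection efficiency, 0.851 emission probability, 1 h live time -> ~2.2 Bq
    print(mda_currie(200, 0.01, 0.851, 3600))
    ```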

  5. Geothermal area detection using Landsat ETM+ thermal infrared data and its mechanistic analysis—A case study in Tengchong, China

    NASA Astrophysics Data System (ADS)

    Qin, Qiming; Zhang, Ning; Nan, Peng; Chai, Leilei

    2011-08-01

    Thermal infrared (TIR) remote sensing is an important technique in the exploration of geothermal resources. In this study, a geothermal survey is conducted in the Tengchong area of Yunnan province, China, using TIR data from the Landsat-7 Enhanced Thematic Mapper Plus (ETM+) sensor. Based on radiometric calibration, atmospheric correction and emissivity calculation, a simple but efficient single-channel algorithm with acceptable precision is applied to retrieve the land surface temperature (LST) of the study area. Anomalous LST areas with temperatures about 4-10 K higher than the background are discovered. Four geothermal areas are identified through discussion of the geothermal mechanism and further analysis of the regional geologic structure. The research reveals that the distribution of geothermal areas is consistent with the fault development in the study area. Magmatism contributes an abundant thermal source to the study area, and the faults provide channels for heat transfer from the Earth's interior to the land surface and facilitate the presence of geothermal anomalies. Finally, we conclude that TIR remote sensing is a cost-effective technique to detect LST anomalies. Combining TIR remote sensing with geological analysis and an understanding of the geothermal mechanism is an accurate and efficient approach to geothermal area detection.

  6. Algorithm architecture co-design for ultra low-power image sensor

    NASA Astrophysics Data System (ADS)

    Laforest, T.; Dupret, A.; Verdant, A.; Lattard, D.; Villard, P.

    2012-03-01

    In the context of embedded video surveillance, stand-alone, left-behind image sensors are used to detect events with a high level of confidence, but also with very low power consumption. With a steady camera, motion detection algorithms based on background estimation to find regions in movement are simple to implement and computationally efficient. To reduce power consumption, the background is estimated using a down-sampled image formed of macropixels. In order to extend the class of moving objects that can be detected, we propose an original mixed-mode architecture developed using an algorithm-architecture co-design methodology. This programmable architecture is composed of a vector of SIMD processors. A basic RISC architecture was optimized to implement motion detection algorithms with a dedicated set of 42 instructions. Defining delta modulation as a calculation primitive allowed the algorithms to be implemented very compactly. As a result, a 1920x1080@25fps CMOS image sensor performing integrated motion detection is proposed, with an estimated power consumption of 1.8 mW.

  7. A comprehensive model for x-ray projection imaging system efficiency and image quality characterization in the presence of scattered radiation

    NASA Astrophysics Data System (ADS)

    Monnin, P.; Verdun, F. R.; Bosmans, H.; Rodríguez Pérez, S.; Marshall, N. W.

    2017-07-01

    This work proposes a method for assessing the detective quantum efficiency (DQE) of radiographic imaging systems that include both the x-ray detector and the antiscatter device. Cascaded linear analysis of the antiscatter device efficiency (DQEASD) with the x-ray detector DQE is used to develop a metric of system efficiency (DQEsys); the new metric is then related to the existing system efficiency parameters of effective DQE (eDQE) and generalized DQE (gDQE). The effect of scatter on signal transfer was modelled through its point spread function (PSF), leading to an x-ray beam transfer function (BTF) that multiplies with the classical presampling modulation transfer function (MTF) to give the system MTF. Expressions are then derived for the influence of scattered radiation on signal-difference to noise ratio (SDNR) and contrast-detail (c-d) detectability. The DQEsys metric was tested using two digital mammography systems, for eight x-ray beams (four with and four without scatter), matched in terms of effective energy. The model was validated through measurements of contrast, SDNR and MTF for poly(methyl)methacrylate thicknesses covering the range of scatter fractions expected in mammography. The metric also successfully predicted changes in c-d detectability for different scatter conditions. Scatter fractions for the four beams with scatter were established with the beam stop method using an extrapolation function derived from the scatter PSF, and validated through Monte Carlo (MC) simulations. Low-frequency drop of the MTF from scatter was compared to both theory and MC calculations. DQEsys successfully quantified the influence of the grid on SDNR and accurately gave the break-even object thickness at which system efficiency was improved by the grid. The DQEsys metric is proposed as an extension of current detector characterization methods to include a performance evaluation in the presence of scattered radiation, with an antiscatter device in place.
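
    The signal-transfer cascade described above can be written schematically. The first relation restates the abstract (system MTF = presampling MTF × BTF); the second is one common way to model a BTF from the scatter fraction and the scatter PSF, given here only as a hedged illustration since the paper's exact definitions of DQEsys and BTF are not reproduced in this record.

    ```latex
    % System MTF as the product of the detector presampling MTF and the beam
    % transfer function (BTF) produced by the scatter point spread function.
    \[
      \mathrm{MTF}_{\mathrm{sys}}(f) = \mathrm{MTF}_{\mathrm{pre}}(f)\,\mathrm{BTF}(f),
      \qquad
      \mathrm{BTF}(f) = (1-\mathrm{SF}) + \mathrm{SF}\,T_{s}(f),
    \]
    % with SF the scatter fraction and T_s(f) the normalized Fourier transform
    % of the scatter PSF.
    ```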

  8. Flux transformers made of commercial high critical temperature superconducting wires.

    PubMed

    Dyvorne, H; Scola, J; Fermon, C; Jacquinot, J F; Pannetier-Lecoeur, M

    2008-02-01

    We have designed flux transformers made of commercial BiSCCO tapes closed by soldering with normal metal. The magnetic field transfer function of the flux transformer was calculated as a function of the resistance of the soldered contacts. The performances of different kinds of wires were investigated for signal delocalization and gradiometry. We also estimated the noise introduced by the resistance and showed that the flux transformer can be used efficiently for weak magnetic field detection down to 1 Hz.

  9. MCNP HPGe detector benchmark with previously validated Cyltran model.

    PubMed

    Hau, I D; Russ, W R; Bronson, F

    2009-05-01

    An exact copy of the detector model generated for Cyltran was reproduced as an MCNP input file, and the detection efficiency was calculated following the methodology used in previous experimental measurements and simulations of a 280 cm3 HPGe detector. Below 1000 keV the MCNP data agreed with the Cyltran results within 0.5%, while above this energy the difference between MCNP and Cyltran increased to about 6% at 4800 keV, depending on the electron cut-off energy.

  10. Analysis of sewage sludge using an experimental prompt gamma neutron activation analysis (pgnaa) set-up with an am-be source

    NASA Astrophysics Data System (ADS)

    Idiri, Z.; Redjem, F.; Beloudah, N.

    2016-09-01

    An experimental PGNAA set-up using a 1 Ci Am-Be source has been developed and used for the analysis of bulk sewage sludge samples taken from a wastewater treatment plant situated in an industrial area of Algiers. The sample dimensions were optimized using thermal neutron flux calculations carried out with the MCNP5 Monte Carlo code. A methodology is then proposed to perform quantitative analysis using the absolute method. For this, the average thermal neutron flux inside the sludge samples is deduced from the average thermal neutron flux in reference water samples and from thermal flux measurements made with a 3He neutron detector. The average absolute gamma detection efficiency is determined using the prompt gammas emitted by chlorine dissolved in a water sample. The gamma detection efficiency is normalized for sludge samples using gamma attenuation factors calculated with the MCNP5 code for water and sludge. Wet and dehydrated sludge samples were analyzed. Nutritive elements (Ca, N, P, K) and heavy metals such as Cr and Mn were determined. For some elements, the PGNAA values were compared to those obtained using Atomic Absorption Spectroscopy (AAS) and Inductively Coupled Plasma (ICP) methods, and good agreement is observed between the different values. Heavy element concentrations are very high compared to normal values; this is related to the fact that the wastewater treatment plant treats not only domestic but also industrial wastewater, which is probably discharged by industries without removal of pollutant elements. The detection limits for almost all elements of interest are sufficiently low for the method to be well suited to such analysis.

  11. Development and Application of Quantitative Detection Method for Viral Hemorrhagic Septicemia Virus (VHSV) Genogroup IVa

    PubMed Central

    Kim, Jong-Oh; Kim, Wi-Sik; Kim, Si-Woo; Han, Hyun-Ja; Kim, Jin Woo; Park, Myoung Ae; Oh, Myung-Joo

    2014-01-01

    Viral hemorrhagic septicemia virus (VHSV) is a problematic pathogen in olive flounder (Paralichthys olivaceus) aquaculture farms in Korea. Thus, it is necessary to develop a rapid and accurate diagnostic method to detect this virus. We developed a quantitative RT-PCR (qRT-PCR) method based on the nucleocapsid (N) gene sequence of a Korean VHSV isolate (Genogroup IVa). The slope and R2 values of the primer set developed in this study were −0.2928 (96% efficiency) and 0.9979, respectively. Its comparison with viral infectivity calculated by the traditional quantification method (TCID50) showed a similar pattern of kinetic changes in vitro and in vivo. The qRT-PCR method reduced detection time compared to that of TCID50, making it a very useful tool for VHSV diagnosis. PMID:24859343

  12. Vacuum ultraviolet photofragmentation of octadecane: photoionization mass spectrometric and theoretical investigation.

    PubMed

    Xu, Jing; Sang, Pengpeng; Zhao, Lianming; Guo, Wenyue; Qi, Fei; Xing, Wei; Yan, Zifeng

    The photoionization and fragmentation of octadecane were investigated with infrared laser desorption/tunable synchrotron vacuum ultraviolet (VUV) photoionization mass spectrometry (IRLD/VUV PIMS) and theoretical calculations. Mass spectra of octadecane were measured at various photon energies. The fragment ions were gradually detected with the increase of photon energy. The main fragment ions were assigned to radical ions (C n H 2 n +1 + , n  = 4-11) and alkene ions (C n H 2 n + , n  = 5-10). The ionization energy of the precursor and appearance energy of ionic fragments were obtained by measuring the photoionization efficiency spectrum. Possible formation pathways of the fragment ions were discussed with the help of density functional theory calculations.

  13. Index to Estimate the Efficiency of an Ophthalmic Practice.

    PubMed

    Chen, Andrew; Kim, Eun Ah; Aigner, Dennis J; Afifi, Abdelmonem; Caprioli, Joseph

    2015-08-01

    A metric of efficiency, a function of the ratio of quality to cost per patient, will allow the health care system to better measure the impact of specific reforms and compare the effectiveness of each. To develop and evaluate an efficiency index that estimates the performance of an ophthalmologist's practice as a function of cost, number of patients receiving care, and quality of care. Retrospective review of 36 ophthalmology subspecialty practices from October 2011 to September 2012 at a university-based eye institute. The efficiency index (E) was defined as a function of adjusted number of patients (Na), total practice adjusted costs (Ca), and a preliminary measure of quality (Q). Constant b limits E between 0 and 1. Constant y modifies the influence of Q on E. Relative value units and geographic cost indices determined by the Centers for Medicare and Medicaid for 2012 were used to calculate adjusted costs. The efficiency index is expressed as the following: E = b(Na/Ca)Q^y. Independent, masked auditors reviewed 20 random patient medical records for each practice and filled out 3 questionnaires to obtain a process-based quality measure. The adjusted number of patients, adjusted costs, quality, and efficiency index were calculated for 36 ophthalmology subspecialties. The median adjusted number of patients was 5516 (interquartile range, 3450-11,863), the median adjusted cost was 1.34 (interquartile range, 0.99-1.96), the median quality was 0.89 (interquartile range, 0.79-0.91), and the median value of the efficiency index was 0.26 (interquartile range, 0.08-0.42). The described efficiency index is a metric that provides a broad overview of performance for a variety of ophthalmology specialties as estimated by resources used and a preliminary measure of quality of care provided. The results of the efficiency index could be used in future investigations to determine its sensitivity to detect the impact of interventions on a practice such as training modules or practice restructuring.
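
    That formula translates directly into code; the constants b and y are calibration choices not reported in this record, so the values used in the example call below are placeholders, while Na, Ca and Q are the median values quoted above.

    ```python
    def efficiency_index(n_adj: float, c_adj: float, quality: float,
                         b: float, y: float) -> float:
        """Efficiency index E = b * (Na / Ca) * Q**y, with Na the adjusted number
        of patients, Ca the adjusted cost, Q a quality score in [0, 1], b a
        normalizing constant chosen to keep E between 0 and 1, and y a weight
        controlling how strongly quality influences the index."""
        return b * (n_adj / c_adj) * quality ** y

    # Placeholder b and y; Na, Ca, Q taken from the medians reported above.
    print(efficiency_index(n_adj=5516, c_adj=1.34, quality=0.89, b=1e-4, y=1.0))
    ```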

  14. Occurrence, removal, and risk assessment of antibiotics in 12 wastewater treatment plants from Dalian, China.

    PubMed

    Zhang, Xin; Zhao, Hongxia; Du, Juan; Qu, Yixuan; Shen, Chen; Tan, Feng; Chen, Jingwen; Quan, Xie

    2017-07-01

    In this study, the occurrence and removal efficiencies of 31 antibiotics, including 11 sulfonamides (SAs), five fluoroquinolones (FQs), four macrolides (MLs), four tetracyclines (TCs), three chloramphenicols (CAPs), and four other antibiotics (Others), were investigated in 12 municipal wastewater treatment plants (WWTPs) in Dalian, China. A total of 29 antibiotics were detected in wastewater samples, with concentrations ranging from 63.6 to 5404.6 ng/L. FQs and SAs were the most abundant antibiotic classes in most wastewater samples, accounting for 42.2 and 23.9% of total antibiotic concentrations, respectively, followed by TCs (16.0%) and MLs (14.8%). Sulfamethoxazole, erythromycin, clarithromycin, azithromycin, ofloxacin, and norfloxacin were the most frequently detected antibiotics; of these, the concentration of ofloxacin was the highest in most influent (average concentration = 609.8 ng/L) and effluent (average concentration = 253.4 ng/L) samples. The removal efficiencies varied among WWTPs in the range of -189.9% (clarithromycin) to 100% (enoxacin, doxycycline, etc.), and more than 50% of the antibiotics could not be efficiently removed, with removal efficiencies below 65%. An environmental risk assessment was also performed on the WWTP effluents by calculating the risk quotient (RQ); high RQ values (>1) indicated that erythromycin and clarithromycin might pose an ecological risk to organisms in the water surrounding the WWTP discharge points in this area, which warrants further attention.
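
    The risk quotient itself is a simple ratio of the measured environmental concentration (MEC) to the predicted no-effect concentration (PNEC); the sketch below is generic, and the example numbers are placeholders rather than values from the study.

    ```python
    def risk_quotient(mec_ng_per_l: float, pnec_ng_per_l: float) -> float:
        """Risk quotient RQ = MEC / PNEC (same units); RQ > 1 is commonly read
        as indicating a potential ecological risk."""
        return mec_ng_per_l / pnec_ng_per_l

    # Placeholder numbers only: an effluent concentration of 250 ng/L against a
    # hypothetical PNEC of 100 ng/L gives RQ = 2.5 (> 1, potential risk).
    print(risk_quotient(250.0, 100.0))
    ```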

  15. Efficient Terahertz Wide-Angle NUFFT-Based Inverse Synthetic Aperture Imaging Considering Spherical Wavefront.

    PubMed

    Gao, Jingkun; Deng, Bin; Qin, Yuliang; Wang, Hongqiang; Li, Xiang

    2016-12-14

    An efficient wide-angle inverse synthetic aperture imaging method considering the spherical wavefront effects and suitable for the terahertz band is presented. Firstly, the echo signal model under spherical wave assumption is established, and the detailed wavefront curvature compensation method accelerated by 1D fast Fourier transform (FFT) is discussed. Then, to speed up the reconstruction procedure, the fast Gaussian gridding (FGG)-based nonuniform FFT (NUFFT) is employed to focus the image. Finally, proof-of-principle experiments are carried out and the results are compared with the ones obtained by the convolution back-projection (CBP) algorithm. The results demonstrate the effectiveness and the efficiency of the presented method. This imaging method can be directly used in the field of nondestructive detection and can also be used to provide a solution for the calculation of the far-field RCSs (Radar Cross Section) of targets in the terahertz regime.

  16. Luminous Efficiency of Hypervelocity Meteoroid Impacts on the Moon Derived from the 2015 Geminid Meteor Shower

    NASA Technical Reports Server (NTRS)

    Moser, D. E.; Suggs, R. M.; Ehlert, S. R.

    2017-01-01

    Since early 2006 the Meteoroid Environment Office (MEO) at NASA's Marshall Space Flight Center has routinely monitored the Moon for impact flashes produced by meteoroids striking the lunar surface. Activity from the Geminid meteor shower (GEM) was observed in 2015, resulting in the detection of 45 lunar impact flashes (roughly 10% of the NASA dataset) in about 10 hours of observation, with peak R magnitudes ranging from 6.5 to 11. A subset of 30 of these flashes, observed 14-15 December, was analyzed in order to determine the luminous efficiency, the ratio of emitted luminous energy to the meteoroid's kinetic energy. The resulting luminous efficiency, found to range between η = 1.8 x 10^-4 and 3.3 x 10^-3 depending on the assumed mass index and flux, was then applied to calculate the masses of Geminid meteoroids striking the Moon in 2015.
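
    Given a luminous efficiency, the mass calculation mentioned in the last sentence reduces to inverting the kinetic-energy relation; the sketch below assumes a representative Geminid impact speed and is an illustration only, not the authors' full treatment (which folds in the assumed mass index and flux).

    ```python
    def impactor_mass(luminous_energy_j: float, luminous_efficiency: float,
                      speed_m_per_s: float = 35e3) -> float:
        """Mass (kg) implied by an observed flash, from eta = E_lum / E_kin,
        i.e. m = 2 * E_lum / (eta * v**2). The default 35 km/s is a
        representative Geminid impact speed used here only for illustration."""
        return 2.0 * luminous_energy_j / (luminous_efficiency * speed_m_per_s ** 2)

    # e.g. a flash radiating 1e6 J with eta = 1.8e-4 implies roughly a 9 kg impactor
    print(impactor_mass(1e6, 1.8e-4))
    ```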

  17. Measurements of induced radioactivity in some LDEF samples

    NASA Technical Reports Server (NTRS)

    Moss, C. E.; Reedy, R. C.

    1992-01-01

    Twenty-six stainless steel trunnion samples, five aluminum end support retainer plate samples, two aluminum keel plate samples, and two titanium clips were analyzed. The shielded high purity germanium detectors used had efficiencies of 33, 54, and 80 percent at 1332 keV. Detector efficiencies as a function of energy and corrections for self-absorption in the samples were determined with calibrated sources and unactivated control samples. Several measurements were made on most samples. In the trunnion samples, Mn-54 and Co-57 were seen and limits were obtained for other isotopes. The results agree well with 1-D activation calculations for an anisotropic trapped proton model. In the aluminum and titanium samples, Na-22 was detected. Other results are presented.

  18. Parallax-Robust Surveillance Video Stitching

    PubMed Central

    He, Botao; Yu, Shaohua

    2015-01-01

    This paper presents a parallax-robust video stitching technique for time-synchronized surveillance video. An efficient two-stage video stitching procedure is proposed in this paper to build wide Field-of-View (FOV) videos for surveillance applications. In the stitching model calculation stage, we develop a layered warping algorithm to align the background scenes, which is location-dependent and turned out to be more robust to parallax than the traditional global projective warping methods. In the selective seam updating stage, we propose a change-detection based optimal seam selection approach to avert ghosting and artifacts caused by moving foregrounds. Experimental results demonstrate that our procedure can efficiently stitch multi-view videos into a wide FOV video output without ghosting and noticeable seams. PMID:26712756

  19. The structural and functional correlates of the efficiency in fearful face detection.

    PubMed

    Wang, Yongchao; Guo, Nana; Zhao, Li; Huang, Hui; Yao, Xiaonan; Sang, Na; Hou, Xin; Mao, Yu; Bi, Taiyong; Qiu, Jiang

    2017-06-01

    The human visual system is highly efficient in searching for a fearful face. Some individuals are more sensitive to this threat-related stimulus. However, we still know little about the neural correlates of such variability. In the current study, we employed a visual search paradigm and asked the subjects to search for a fearful face or a target gender. Every subject showed a shallower search function for fearful face search than for face gender search, indicating a stable fearful face advantage. We then used voxel-based morphometry (VBM) analysis and correlated this advantage with the gray matter volume (GMV) of several presumably face-related cortical areas. The results revealed that only the left fusiform gyrus showed a significant positive correlation. Next, we defined the left fusiform gyrus as the seed region and calculated its resting-state functional connectivity to the whole brain. Correlations were also calculated between the fearful face advantage and these connectivities. In this analysis, we found positive correlations in the inferior parietal lobe and the ventral medial prefrontal cortex. These results suggest that the anatomical structure of the left fusiform gyrus might determine the search efficiency for fearful faces, and that the frontoparietal attention network is involved in this process through top-down attentional modulation. Copyright © 2017. Published by Elsevier Ltd.

  20. VarDetect: a nucleotide sequence variation exploratory tool

    PubMed Central

    Ngamphiw, Chumpol; Kulawonganunchai, Supasak; Assawamakin, Anunchai; Jenwitheesuk, Ekachai; Tongsima, Sissades

    2008-01-01

    Background Single nucleotide polymorphisms (SNPs) are the most commonly studied units of genetic variation. The discovery of such variation may help to identify causative gene mutations in monogenic diseases and SNPs associated with predisposing genes in complex diseases. Accurate detection of SNPs requires software that can correctly interpret chromatogram signals to nucleotides. Results We present VarDetect, a stand-alone nucleotide variation exploratory tool that automatically detects nucleotide variation from fluorescence based chromatogram traces. Accurate SNP base-calling is achieved using pre-calculated peak content ratios, and is enhanced by rules which account for common sequence reading artifacts. The proposed software tool is benchmarked against four other well-known SNP discovery software tools (PolyPhred, novoSNP, Genalys and Mutation Surveyor) using fluorescence based chromatograms from 15 human genes. These chromatograms were obtained from sequencing 16 two-pooled DNA samples; a total of 32 individual DNA samples. In this comparison of automatic SNP detection tools, VarDetect achieved the highest detection efficiency. Availability VarDetect is compatible with most major operating systems such as Microsoft Windows, Linux, and Mac OSX. The current version of VarDetect is freely available at . PMID:19091032

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Imazono, Takashi, E-mail: imazono.takashi@jaea.go.jp; Koike, Masato; Nagano, Tetsuya

    Efficiently detecting the B-K emission band around 6.76 nm from a trace concentration of boron in steel compounds has motivated a theoretical exploration of means of increasing the diffraction efficiency of a laminar grating with carbon overcoating. To experimentally evaluate this enhancement, a Ni grating was coated with a high-density carbon film, i.e., diamond-like carbon (DLC). The first order diffraction efficiencies of the Ni gratings with and without DLC coating were measured to be 25.8% and 16.9%, respectively, at a wavelength of 6.76 nm and an angle of incidence of 87.07°. The ratio of the diffraction efficiency obtained experimentally to that calculated by numerical simulation is 0.87 for the DLC-coated Ni grating. The diffraction efficiency of a Ni grating coated with a low-density carbon film, amorphous carbon (a-C), was also slightly improved, to 19.6%. Furthermore, a distinct minimum of the zeroth order light of the two carbon-coated Ni gratings was observed at around 6.76 nm, which is coincident with the maximum of the first order light.

  2. THE DIFFERENCE IMAGING PIPELINE FOR THE TRANSIENT SEARCH IN THE DARK ENERGY SURVEY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kessler, R.; Scolnic, D.; Marriner, J.

    2015-12-15

    We describe the operation and performance of the difference imaging pipeline (DiffImg) used to detect transients in deep images from the Dark Energy Survey Supernova program (DES-SN) in its first observing season from 2013 August through 2014 February. DES-SN is a search for transients in which ten 3 deg² fields are repeatedly observed in the g, r, i, z passbands with a cadence of about 1 week. The observing strategy has been optimized to measure high-quality light curves and redshifts for thousands of Type Ia supernovae (SNe Ia) with the goal of measuring dark energy parameters. The essential DiffImg functions are to align each search image to a deep reference image, do a pixel-by-pixel subtraction, and then examine the subtracted image for significant positive detections of point-source objects. The vast majority of detections are subtraction artifacts, but after selection requirements and image filtering with an automated scanning program, there are ∼130 detections per deg² per observation in each band, of which only ∼25% are artifacts. Of the ∼7500 transients discovered by DES-SN in its first observing season, each requiring a detection on at least two separate nights, Monte Carlo (MC) simulations predict that 27% are expected to be SNe Ia or core-collapse SNe. Another ∼30% of the transients are artifacts in which a small number of observations satisfy the selection criteria for a single-epoch detection. Spectroscopic analysis shows that most of the remaining transients are AGNs and variable stars. Fake SNe Ia are overlaid onto the images to rigorously evaluate detection efficiencies and to understand the DiffImg performance. The DiffImg efficiency measured with fake SNe agrees well with expectations from a MC simulation that uses analytical calculations of the fluxes and their uncertainties. In our 8 “shallow” fields with single-epoch 50% completeness depth ∼23.5, the SN Ia efficiency falls to 1/2 at redshift z ≈ 0.7; in our 2 “deep” fields with mag-depth ∼24.5, the efficiency falls to 1/2 at z ≈ 1.1. A remaining performance issue is that the measured fluxes have additional scatter (beyond Poisson fluctuations) that increases with the host galaxy surface brightness at the transient location. This bright-galaxy issue has minimal impact on the SNe Ia program, but it may lower the efficiency for finding fainter transients on bright galaxies.

  3. The Difference Imaging Pipeline for the Transient Search in the Dark Energy Survey

    NASA Astrophysics Data System (ADS)

    Kessler, R.; Marriner, J.; Childress, M.; Covarrubias, R.; D'Andrea, C. B.; Finley, D. A.; Fischer, J.; Foley, R. J.; Goldstein, D.; Gupta, R. R.; Kuehn, K.; Marcha, M.; Nichol, R. C.; Papadopoulos, A.; Sako, M.; Scolnic, D.; Smith, M.; Sullivan, M.; Wester, W.; Yuan, F.; Abbott, T.; Abdalla, F. B.; Allam, S.; Benoit-Lévy, A.; Bernstein, G. M.; Bertin, E.; Brooks, D.; Carnero Rosell, A.; Carrasco Kind, M.; Castander, F. J.; Crocce, M.; da Costa, L. N.; Desai, S.; Diehl, H. T.; Eifler, T. F.; Fausti Neto, A.; Flaugher, B.; Frieman, J.; Gerdes, D. W.; Gruen, D.; Gruendl, R. A.; Honscheid, K.; James, D. J.; Kuropatkin, N.; Li, T. S.; Maia, M. A. G.; Marshall, J. L.; Martini, P.; Miller, C. J.; Miquel, R.; Nord, B.; Ogando, R.; Plazas, A. A.; Reil, K.; Romer, A. K.; Roodman, A.; Sanchez, E.; Sevilla-Noarbe, I.; Smith, R. C.; Soares-Santos, M.; Sobreira, F.; Tarle, G.; Thaler, J.; Thomas, R. C.; Tucker, D.; Walker, A. R.; DES Collaboration

    2015-12-01

    We describe the operation and performance of the difference imaging pipeline (DiffImg) used to detect transients in deep images from the Dark Energy Survey Supernova program (DES-SN) in its first observing season from 2013 August through 2014 February. DES-SN is a search for transients in which ten 3 deg2 fields are repeatedly observed in the g, r, i, z passbands with a cadence of about 1 week. The observing strategy has been optimized to measure high-quality light curves and redshifts for thousands of Type Ia supernovae (SNe Ia) with the goal of measuring dark energy parameters. The essential DiffImg functions are to align each search image to a deep reference image, do a pixel-by-pixel subtraction, and then examine the subtracted image for significant positive detections of point-source objects. The vast majority of detections are subtraction artifacts, but after selection requirements and image filtering with an automated scanning program, there are ˜130 detections per deg2 per observation in each band, of which only ˜25% are artifacts. Of the ˜7500 transients discovered by DES-SN in its first observing season, each requiring a detection on at least two separate nights, Monte Carlo (MC) simulations predict that 27% are expected to be SNe Ia or core-collapse SNe. Another ˜30% of the transients are artifacts in which a small number of observations satisfy the selection criteria for a single-epoch detection. Spectroscopic analysis shows that most of the remaining transients are AGNs and variable stars. Fake SNe Ia are overlaid onto the images to rigorously evaluate detection efficiencies and to understand the DiffImg performance. The DiffImg efficiency measured with fake SNe agrees well with expectations from a MC simulation that uses analytical calculations of the fluxes and their uncertainties. In our 8 “shallow” fields with single-epoch 50% completeness depth ˜23.5, the SN Ia efficiency falls to 1/2 at redshift z ≈ 0.7; in our 2 “deep” fields with mag-depth ˜24.5, the efficiency falls to 1/2 at z ≈ 1.1. A remaining performance issue is that the measured fluxes have additional scatter (beyond Poisson fluctuations) that increases with the host galaxy surface brightness at the transient location. This bright-galaxy issue has minimal impact on the SNe Ia program, but it may lower the efficiency for finding fainter transients on bright galaxies.

  4. THE DIFFERENCE IMAGING PIPELINE FOR THE TRANSIENT SEARCH IN THE DARK ENERGY SURVEY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kessler, R.; Marriner, J.; Childress, M.

    2015-11-06

    We describe the operation and performance of the difference imaging pipeline (DiffImg) used to detect transients in deep images from the Dark Energy Survey Supernova program (DES-SN) in its first observing season from 2013 August through 2014 February. DES-SN is a search for transients in which ten 3 deg² fields are repeatedly observed in the g, r, i, z passbands with a cadence of about 1 week. The observing strategy has been optimized to measure high-quality light curves and redshifts for thousands of Type Ia supernovae (SNe Ia) with the goal of measuring dark energy parameters. The essential DiffImg functions are to align each search image to a deep reference image, do a pixel-by-pixel subtraction, and then examine the subtracted image for significant positive detections of point-source objects. The vast majority of detections are subtraction artifacts, but after selection requirements and image filtering with an automated scanning program, there are ~130 detections per deg² per observation in each band, of which only ~25% are artifacts. Of the ~7500 transients discovered by DES-SN in its first observing season, each requiring a detection on at least two separate nights, Monte Carlo (MC) simulations predict that 27% are expected to be SNe Ia or core-collapse SNe. Another ~30% of the transients are artifacts in which a small number of observations satisfy the selection criteria for a single-epoch detection. Spectroscopic analysis shows that most of the remaining transients are AGNs and variable stars. Fake SNe Ia are overlaid onto the images to rigorously evaluate detection efficiencies and to understand the DiffImg performance. The DiffImg efficiency measured with fake SNe agrees well with expectations from a MC simulation that uses analytical calculations of the fluxes and their uncertainties. In our 8 "shallow" fields with single-epoch 50% completeness depth ~23.5, the SN Ia efficiency falls to 1/2 at redshift z ≈ 0.7; in our 2 "deep" fields with mag-depth ~24.5, the efficiency falls to 1/2 at z ≈ 1.1. A remaining performance issue is that the measured fluxes have additional scatter (beyond Poisson fluctuations) that increases with the host galaxy surface brightness at the transient location. This bright-galaxy issue has minimal impact on the SNe Ia program, but it may lower the efficiency for finding fainter transients on bright galaxies.

  5. The Difference Imaging Pipeline for the Transient Search in the Dark Energy Survey

    DOE PAGES

    Kessler, R.

    2015-09-09

    We describe the operation and performance of the difference imaging pipeline (DiffImg) used to detect transients in deep images from the Dark Energy Survey Supernova program (DES-SN) in its first observing season from 2013 August through 2014 February. DES-SN is a search for transients in which ten 3 deg² fields are repeatedly observed in the g, r, i, z passbands with a cadence of about 1 week. Our observing strategy has been optimized to measure high-quality light curves and redshifts for thousands of Type Ia supernovae (SNe Ia) with the goal of measuring dark energy parameters. The essential DiffImg functions are to align each search image to a deep reference image, do a pixel-by-pixel subtraction, and then examine the subtracted image for significant positive detections of point-source objects. The vast majority of detections are subtraction artifacts, but after selection requirements and image filtering with an automated scanning program, there are ~130 detections per deg² per observation in each band, of which only ~25% are artifacts. Of the ~7500 transients discovered by DES-SN in its first observing season, each requiring a detection on at least two separate nights, Monte Carlo (MC) simulations predict that 27% are expected to be SNe Ia or core-collapse SNe. Another ~30% of the transients are artifacts in which a small number of observations satisfy the selection criteria for a single-epoch detection. Spectroscopic analysis shows that most of the remaining transients are AGNs and variable stars. Fake SNe Ia are overlaid onto the images to rigorously evaluate detection efficiencies and to understand the DiffImg performance. Furthermore, the DiffImg efficiency measured with fake SNe agrees well with expectations from a MC simulation that uses analytical calculations of the fluxes and their uncertainties. In our 8 "shallow" fields with single-epoch 50% completeness depth ~23.5, the SN Ia efficiency falls to 1/2 at redshift z ≈ 0.7; in our 2 "deep" fields with mag-depth ~24.5, the efficiency falls to 1/2 at z ≈ 1.1. A remaining performance issue is that the measured fluxes have additional scatter (beyond Poisson fluctuations) that increases with the host galaxy surface brightness at the transient location. This bright-galaxy issue has minimal impact on the SNe Ia program, but it may lower the efficiency for finding fainter transients on bright galaxies.

  6. [Full Sibling Identification by IBS Scoring Method and Establishment of the Query Table of Its Critical Value].

    PubMed

    Li, R; Li, C T; Zhao, S M; Li, H X; Li, L; Wu, R G; Zhang, C C; Sun, H Y

    2017-04-01

    To establish a query table of IBS critical values and identification power for detection systems with different numbers of STR loci under different false judgment standards. Samples of 267 pairs of full siblings and 360 pairs of unrelated individuals were collected and 19 autosomal STR loci were genotyped by the Goldeneye™ 20A system. The full siblings were determined using the IBS scoring method according to the 'Regulation for biological full sibling testing'. The critical values and identification power for the detection systems with different numbers of STR loci under different false judgment standards were calculated by theoretical methods. According to the formal IBS scoring criteria, the identification power for full siblings and unrelated individuals was 0.7640 and the rate of false judgment was 0. The results of the theoretical calculation were consistent with those of the sample observation. The query table of IBS critical values for identification of full siblings with detection systems of different numbers of STR loci was successfully established. The IBS scoring method defined by the regulation has high detection efficiency and a low false judgment rate, which provides a relatively conservative result. The query table of IBS critical values for identification of full siblings with detection systems of different numbers of STR loci provides important reference data for the result judgment of full sibling testing and has considerable practical value. Copyright© by the Editorial Department of Journal of Forensic Medicine

  7. Bismuth chalcohalides and oxyhalides as optoelectronic materials

    DOE PAGES

    Du, Mao -Hua; Shi, Hongliang; Ming, Wenmei

    2016-03-29

    Several Tl and Pb based halides and chalcohalides have recently been discovered as promising optoelectronic materials [i.e., photovoltaic (PV) and gamma-ray detection materials]. Efficient carrier transport in these materials is attributed partly to the special chemistry of ns² ions (e.g., Tl⁺, Pb²⁺, and Bi³⁺). However, the toxicity of Tl and Pb is challenging to the development and the wide use of Tl and Pb based materials. In this paper, we investigate materials that contain Bi³⁺, which is also an ns² ion. By combining Bi halides with Bi chalcogenides or oxides, the resulting ternary compounds exhibit a wide range of band gaps, offering opportunities in various optoelectronic applications. Density functional calculations of electronic structure, dielectric properties, optical properties, and defect properties are performed on selected Bi³⁺ based chalcohalides and oxyhalides, i.e., BiSeBr, BiSI, BiSeI, and BiOBr. We propose different applications for these Bi compounds based on calculated properties, i.e., n-BiSeBr, p-BiSI, and p-BiSeI as PV materials, BiSeBr and BiSI as room-temperature radiation detection materials, and BiOBr as a p-type transparent conducting material. BiSeBr, BiSI, and BiSeI have chain structures while BiOBr has a layered structure. However, in BiSI, BiSeI, and BiOBr, significant valence-band dispersion is found in the directions perpendicular to the atomic chain or layer because the valence-band edge states are dominated by the halogen states that have strong interchain or interlayer coupling. We find significantly enhanced Born effective charges and anomalously large static dielectric constants of the Bi compounds, which should reduce carrier scattering and trapping and promote efficient carrier transport in these materials. The strong screening and the small anion coordination numbers in Bi chalcohalides should lead to weak potentials for electron localization at anion vacancies. As a result, defect calculations indeed show that the anion vacancies (Se and Br vacancies) in BiSeBr are shallow, which is beneficial to efficient electron transport.

  8. Large Footprint LiDAR Data Processing for Ground Detection and Biomass Estimation

    NASA Astrophysics Data System (ADS)

    Zhuang, Wei

    Ground detection in large footprint waveform Light Detection And Ranging (LiDAR) data is important in calculating and estimating downstream products, especially in forestry applications. For example, tree heights are calculated as the difference between the ground peak and the first returned signal in a waveform. Forest attributes, such as aboveground biomass, are estimated based on the tree heights. This dissertation investigated new metrics and algorithms for estimating aboveground biomass and extracting the ground peak location in large footprint waveform LiDAR data. In the first manuscript, an accurate and computationally efficient algorithm, named the Filtering and Clustering Algorithm (FICA), was developed based on a set of multiscale second derivative filters for automatically detecting the ground peak in a waveform from the Land, Vegetation and Ice Sensor. Compared to existing ground peak identification algorithms, FICA was tested on plots of different land cover types and showed improved accuracy in ground detection for the vegetation plots and similar accuracy for the developed-area plots. Also, FICA adopted a peak identification strategy rather than following a curve-fitting process, and therefore exhibited improved efficiency. In the second manuscript, an algorithm was developed specifically for shrub waveforms. The algorithm only partially fitted the shrub canopy reflection and detected the ground peak by investigating the residual signal, which was generated by subtracting a Gaussian fitting function from the raw waveform. After the subtraction, the overlapping ground peak was identified as the local maximum of the residual signal. In addition, an applicability model was built for determining waveforms where the proposed PCF algorithm should be applied. In the third manuscript, a new set of metrics was developed to increase accuracy in biomass estimation models. The metrics were based on the results of Gaussian decomposition. They incorporated both the waveform intensity, represented by the area covered by a Gaussian function, and its associated height, which was the centroid of the Gaussian function. By considering the signal reflection of different vegetation layers, the developed metrics obtained better estimation accuracy for aboveground biomass when compared to existing metrics. In addition, the newly developed metrics showed strong correlation with other forest structural attributes, such as mean Diameter at Breast Height (DBH) and stem density. In sum, the dissertation investigated various techniques for large footprint waveform LiDAR processing to detect the ground peak and estimate biomass. The novel techniques developed in this dissertation showed better performance than existing methods or metrics.
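    The following is a minimal, hedged sketch of multiscale second-derivative peak finding on a large footprint waveform, loosely in the spirit of the filtering-then-peak-identification idea described above. The scales, threshold, and the convention that the ground peak is the last significant peak in the waveform are illustrative assumptions, not the dissertation's exact parameters.

```python
# Multiscale second-derivative ground-peak finder (illustrative sketch only).
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.signal import find_peaks

def ground_peak_index(waveform, scales=(2, 4, 8), rel_height=0.1):
    waveform = np.asarray(waveform, dtype=float)
    responses = []
    for s in scales:
        # Second derivative of a Gaussian-smoothed waveform; real peaks give
        # strong negative responses, so negate to turn them into maxima.
        responses.append(-gaussian_filter1d(waveform, sigma=s, order=2))
    response = np.mean(responses, axis=0)
    peaks, _ = find_peaks(response, height=rel_height * response.max())
    if len(peaks) == 0:
        return None
    # In a return waveform ordered from canopy top to ground, the ground peak
    # is assumed here to be the last detected peak.
    return peaks[-1]

# Toy waveform: canopy return near bin 40, ground return near bin 120.
x = np.arange(200)
wf = 1.0 * np.exp(-0.5 * ((x - 40) / 6) ** 2) + 0.8 * np.exp(-0.5 * ((x - 120) / 3) ** 2)
print(ground_peak_index(wf))  # approximately 120
```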

  9. Research on Abnormal Detection Based on Improved Combination of K - means and SVDD

    NASA Astrophysics Data System (ADS)

    Hao, Xiaohong; Zhang, Xiaofeng

    2018-01-01

    In order to improve the efficiency of network intrusion detection and reduce the false alarm rate, this paper proposes an anomaly detection algorithm based on improved K-means and SVDD. The algorithm first uses the improved K-means algorithm to cluster the training samples of each class, so that each class is compact and well separated from the others. Then, for each class, the SVDD algorithm is used to construct a minimum hypersphere from the training samples. The class membership of a sample is determined by calculating its distance to the center of each minimum hypersphere constructed by SVDD: if the distance from the test sample to the center of a hypersphere is smaller than the hypersphere's radius, the test sample belongs to that class; otherwise it does not. After comparing the test sample against all classes, its final classification, and thus the detection decision, is obtained. In this paper, we use the KDD CUP99 data set to evaluate the proposed anomaly detection algorithm by simulation. The results show that the algorithm has a high detection rate and a low false alarm rate, and is an effective network security protection method.
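    Below is a hedged sketch of the two-stage idea: compact each class's training data with K-means, then learn a per-class boundary and assign test samples to the class whose boundary encloses them. scikit-learn's OneClassSVM is used here as a stand-in for SVDD (the two are closely related with an RBF kernel); the cluster count, the 95th-percentile pruning, and all parameter values are assumptions for illustration, not the paper's implementation.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import OneClassSVM

def fit_per_class(train_by_class, n_clusters=3, nu=0.05, gamma="scale"):
    models = {}
    for label, X in train_by_class.items():
        X = np.asarray(X, dtype=float)
        km = KMeans(n_clusters=min(n_clusters, len(X)), n_init=10).fit(X)
        d = np.linalg.norm(X - km.cluster_centers_[km.labels_], axis=1)
        # Keep samples close to their cluster centre so each class is compact
        # before the boundary is learned (a simple stand-in for the improved
        # K-means step).
        X_kept = X[d <= np.percentile(d, 95)]
        models[label] = OneClassSVM(kernel="rbf", nu=nu, gamma=gamma).fit(X_kept)
    return models

def classify(models, x):
    x = np.asarray(x, dtype=float).reshape(1, -1)
    # decision_function > 0 means the sample lies inside the learned boundary.
    scores = {label: float(m.decision_function(x)[0]) for label, m in models.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "anomaly"

rng = np.random.default_rng(0)
train = {"normal": rng.normal(0, 1, (200, 2)), "probe": rng.normal(5, 1, (200, 2))}
models = fit_per_class(train)
print(classify(models, [0.2, -0.1]))   # likely "normal"
print(classify(models, [20.0, 20.0]))  # likely "anomaly"
```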

  10. Vehicle detection from very-high-resolution (VHR) aerial imagery using attribute belief propagation (ABP)

    NASA Astrophysics Data System (ADS)

    Wang, Yanli; Li, Ying; Zhang, Li; Huang, Yuchun

    2016-10-01

    With the popularity of very-high-resolution (VHR) aerial imagery, the shape, color, and context attributes of vehicles are better characterized. Due to the various road surroundings and imaging conditions, vehicle attributes can be adversely affected, so that vehicles are mistakenly detected or missed. This paper is motivated to robustly extract rich attribute features for detecting vehicles in VHR imagery under different scenarios. Based on the hierarchical component tree of vehicle context, attribute belief propagation (ABP) is proposed to detect salient vehicles from the statistical perspective. With the Max-tree data structure, the multi-level component tree around the road network is efficiently created. The spatial relationship between a vehicle and its surrounding context is established with the belief definition of the vehicle attribute. To effectively correct single-level belief errors, the inter-level belief linkages enforce consistency of belief assignment between corresponding components at different levels. ABP starts from an initial set of vehicle beliefs calculated from the vehicle attributes, and then iterates through each component by applying inter-level belief passing until convergence. The optimal value of the vehicle belief of each component is obtained by minimizing its belief function iteratively. The proposed algorithm is tested on a diverse set of VHR imagery acquired in the city and inter-city areas of West and South China. Experimental results show that the proposed algorithm can detect vehicles efficiently and suppress erroneous detections effectively. The proposed ABP framework is promising for robustly classifying vehicles from VHR aerial imagery.

  11. CHERENCUBE: concept definition and implementation challenges of a Cherenkov-based detector block for PET.

    PubMed

    Somlai-Schweiger, I; Ziegler, S I

    2015-04-01

    A new concept for a depth-of-interaction (DOI) capable time-of-flight (TOF) PET detector is defined, based only on the detection of Cherenkov photons. The proposed "CHERENCUBE" consists of a cubic Cherenkov radiator with position-sensitive photodetectors covering each crystal face. By means of the spatial distribution of the detected photons and their time of arrival, the point of interaction of the gamma-ray in the crystal can be determined. This study analyzes through theoretical calculations and Monte Carlo simulations the potential advantages of the concept toward reaching a Cherenkov-only detector for TOF-PET with DOI capability. Furthermore, an algorithm for the DOI estimation is presented and the requirements for a practical implementation of the proposed concept are defined. The Monte Carlo simulations consisted of a cubic crystal with one photodetector coupled to each one of the faces of the cube. The sensitive area of the detector matched exactly the crystal size, which was varied in 1 mm steps between 1 × 1 × 1 mm(3) and 10 × 10 × 10 mm(3). For each size, five independent simulations of ten thousand 511 keV gamma-rays were triggered at a fixed distance of 10 mm. The crystal chosen was PbWO4. Its scintillation properties were simulated, but only Cherenkov photons were analyzed. Photodetectors were simulated having perfect photodetection efficiency and infinite time resolution. For every generated particle, the analysis considered its creation process, parent and daughter particles, energy, origin coordinates, trajectory, and time and position of detection. The DOI determination is based on the distribution of the emission time of all photons per event. These values are calculated as a function of the coordinates of detection and origin for every photon. The common origin is estimated by finding the distribution with the most similar emission time-points. Detection efficiency increases with crystal size from 8.2% (1 × 1 × 1 mm(3)) to 58.6% (10 × 10 × 10 mm(3)) and decreases applying a photon detection threshold of 5/10/20 photons to 6.3%/4.3%/0.7% and 49.3%/30.4%/2.8%, respectively. The detection rate in the six photodetectors is uniform due to the nearly isotropic cone emission. Most cones originated after a photoelectric effect interaction, with two dominating peaks for the kinetic energy of the electron at 422.99 and 441.47 keV. The detection distance between same-event photons defines the spatial resolution of the detector required for individual photon recognition, with 20% of the detected photons having their closest neighbor within a distance of 5% of the length of the cube. Same-event photons are detected within a time window whose width is determined by the crystal size, with values of 30 and 150 ps for a 1 × 1 × 1 mm(3) and a 10 × 10 × 10 mm(3) cube, respectively. The DOI reconstruction has an accuracy of approximately 23% of the length of the cube, with an average value of 2.2 mm for a 10 × 10 × 10 mm(3) CHERENCUBE. The proposed concept requires a detector with high photodetection efficiency. The structure of the sensitive surface of the detector should be a two dimensional array of microcells, able to provide individual detection coordinates and time stamps. The microcell size determines the ability to recognize individual photons, influencing detection efficiency. The 3D DOI recognition relies on the accuracy of the time stamps and detection coordinates, without the need for a recognition of the projected patterns of photons. 
The refractive index of the material defines a detector intrinsic energy-based rejection of scattered PET events at the cost of reduced sensitivity.
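    The following is a hedged numerical sketch of the emission-time idea described above: for candidate interaction points along the depth axis, each detected photon's arrival time is back-projected to an emission time, and the depth that makes the emission times most consistent (smallest spread) is selected. The crystal size, the assumed refractive index, and the straight-line optical path are simplifying assumptions, not the authors' full model.

```python
import numpy as np

C_MM_PER_PS = 0.2998     # speed of light in mm/ps
N_PBWO4 = 2.2            # assumed optical refractive index of PbWO4

def estimate_doi(det_xyz, det_t, cube=10.0, step=0.1):
    det_xyz = np.asarray(det_xyz, dtype=float)   # (n, 3) detection positions, mm
    det_t = np.asarray(det_t, dtype=float)       # (n,) detection times, ps
    best_depth, best_spread = None, np.inf
    for z in np.arange(0.0, cube + step, step):
        origin = np.array([cube / 2, cube / 2, z])      # gamma assumed on the cube axis
        path = np.linalg.norm(det_xyz - origin, axis=1)
        t_emit = det_t - path * N_PBWO4 / C_MM_PER_PS   # back-projected emission times
        spread = t_emit.std()
        if spread < best_spread:
            best_depth, best_spread = z, spread
    return best_depth

# Toy event: photons emitted at depth 3 mm and detected on four faces of the cube.
true = np.array([5.0, 5.0, 3.0])
dets = np.array([[5.0, 5.0, 0.0], [5.0, 5.0, 10.0], [0.0, 5.0, 3.0], [10.0, 5.0, 3.0]])
times = np.linalg.norm(dets - true, axis=1) * N_PBWO4 / C_MM_PER_PS
print(estimate_doi(dets, times))  # approximately 3.0
```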

  12. CHERENCUBE: Concept definition and implementation challenges of a Cherenkov-based detector block for PET

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Somlai-Schweiger, I., E-mail: ian.somlai@tum.de; Ziegler, S. I.

    Purpose: A new concept for a depth-of-interaction (DOI) capable time-of-flight (TOF) PET detector is defined, based only on the detection of Cherenkov photons. The proposed “CHERENCUBE” consists of a cubic Cherenkov radiator with position-sensitive photodetectors covering each crystal face. By means of the spatial distribution of the detected photons and their time of arrival, the point of interaction of the gamma-ray in the crystal can be determined. This study analyzes through theoretical calculations and Monte Carlo simulations the potential advantages of the concept toward reaching a Cherenkov-only detector for TOF-PET with DOI capability. Furthermore, an algorithm for the DOI estimation is presented and the requirements for a practical implementation of the proposed concept are defined. Methods: The Monte Carlo simulations consisted of a cubic crystal with one photodetector coupled to each one of the faces of the cube. The sensitive area of the detector matched exactly the crystal size, which was varied in 1 mm steps between 1 × 1 × 1 mm³ and 10 × 10 × 10 mm³. For each size, five independent simulations of ten thousand 511 keV gamma-rays were triggered at a fixed distance of 10 mm. The crystal chosen was PbWO4. Its scintillation properties were simulated, but only Cherenkov photons were analyzed. Photodetectors were simulated having perfect photodetection efficiency and infinite time resolution. For every generated particle, the analysis considered its creation process, parent and daughter particles, energy, origin coordinates, trajectory, and time and position of detection. The DOI determination is based on the distribution of the emission time of all photons per event. These values are calculated as a function of the coordinates of detection and origin for every photon. The common origin is estimated by finding the distribution with the most similar emission time-points. Results: Detection efficiency increases with crystal size from 8.2% (1 × 1 × 1 mm³) to 58.6% (10 × 10 × 10 mm³) and decreases applying a photon detection threshold of 5/10/20 photons to 6.3%/4.3%/0.7% and 49.3%/30.4%/2.8%, respectively. The detection rate in the six photodetectors is uniform due to the nearly isotropic cone emission. Most cones originated after a photoelectric effect interaction, with two dominating peaks for the kinetic energy of the electron at 422.99 and 441.47 keV. The detection distance between same-event photons defines the spatial resolution of the detector required for individual photon recognition, with 20% of the detected photons having their closest neighbor within a distance of 5% of the length of the cube. Same-event photons are detected within a time window whose width is determined by the crystal size, with values of 30 and 150 ps for a 1 × 1 × 1 mm³ and a 10 × 10 × 10 mm³ cube, respectively. The DOI reconstruction has an accuracy of approximately 23% of the length of the cube, with an average value of 2.2 mm for a 10 × 10 × 10 mm³ CHERENCUBE. Conclusions: The proposed concept requires a detector with high photodetection efficiency. The structure of the sensitive surface of the detector should be a two dimensional array of microcells, able to provide individual detection coordinates and time stamps. The microcell size determines the ability to recognize individual photons, influencing detection efficiency. The 3D DOI recognition relies on the accuracy of the time stamps and detection coordinates, without the need for a recognition of the projected patterns of photons. The refractive index of the material defines a detector intrinsic energy-based rejection of scattered PET events at the cost of reduced sensitivity.

  13. Optical Observation, Image-processing, and Detection of Space Debris in Geosynchronous Earth Orbit

    NASA Astrophysics Data System (ADS)

    Oda, H.; Yanagisawa, T.; Kurosaki, H.; Tagawa, M.

    2014-09-01

    We report on optical observations and an efficient detection method for space debris in the geosynchronous Earth orbit (GEO). We operate our new Australia Remote Observatory (ARO), where an 18 cm optical telescope with a charge-coupled device (CCD) camera covering a 3.14-degree field of view is used for GEO debris survey, and analyse datasets of successive CCD images using the line detection method (Yanagisawa and Nakajima 2005). In our operation, the exposure time of each CCD image is set to 3 seconds (or 5 seconds), and the time interval between CCD shutter openings is about 4.7 seconds (or 6.7 seconds). In the line detection method, a sufficient number of sample objects are taken from each image based on their shape and intensity, which includes not only faint signals but also background noise (we take 500 sample objects from each image in this paper). Then we search for a sequence of sample objects aligning in a straight line in the successive images to exclude the noise samples. We succeed in detecting faint signals (down to about 1.8 sigma of the background noise) by applying the line detection method to 18 CCD images. As a result, we detected about 300 GEO objects up to a magnitude of 15.5 among 5 nights of data. We also calculate the orbits of the detected objects using the Simplified General Perturbations Satellite Orbit Model 4 (SGP4), and identify the objects listed in the two-line-element (TLE) data catalogue publicly provided by the U.S. Strategic Command (USSTRATCOM). We found that a certain fraction of our detections are new objects that are not contained in the catalogue. We conclude that our ARO and detection method possess high detection efficiency for GEO objects despite the use of a comparatively inexpensive observation and analysis system. We also describe the image processing specialized for the detection of GEO objects (rather than usual astronomical objects like stars) in this paper.

  14. Automatic Recognition of Road Signs

    NASA Astrophysics Data System (ADS)

    Inoue, Yasuo; Kohashi, Yuuichirou; Ishikawa, Naoto; Nakajima, Masato

    2002-11-01

    The increase in traffic accidents is becoming a serious social problem with the recent rapid growth in traffic. In many cases, the driver's carelessness is the primary factor in traffic accidents, and driver assistance systems are in demand for supporting driver safety. In this research, we propose a new method for the automatic detection and recognition of road signs by image processing. The purpose of this research is to prevent accidents caused by the driver's carelessness and to call attention to the driver when a traffic regulation is violated. In this research, a highly accurate and efficient sign detection method is realized by removing unnecessary information except for road signs from an image and detecting a road sign using shape features. At first, color information that is not used in road signs is removed from the image. Next, edges except for circular and triangular ones are removed to select sign shapes. In the recognition process, a normalized cross correlation operation is carried out on the two-dimensional differentiation pattern of a sign, and an accurate and efficient method for recognizing the road sign is realized. Moreover, real-time operation in software was realized by holding down the calculation cost while maintaining highly precise sign detection and recognition. Specifically, it is possible to process one frame in 0.1 s using a general-purpose PC (CPU: Pentium4 1.7GHz). As a result of in-vehicle experiments, our system processed in real time, and it was confirmed that detection and recognition of signs were performed correctly.
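    The following is a minimal sketch of the recognition step: normalized cross-correlation between the gradient (two-dimensional differentiation) pattern of a sign template and a candidate image region. The Sobel-based gradient, the 0.7 threshold, and the toy data are generic stand-ins, not the paper's exact operators or parameters.

```python
import numpy as np
from scipy.ndimage import sobel

def gradient_pattern(img):
    gx = sobel(img.astype(float), axis=1)
    gy = sobel(img.astype(float), axis=0)
    return np.hypot(gx, gy)          # 2-D differentiation (gradient magnitude)

def ncc(a, b):
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def matches_template(candidate, template, threshold=0.7):
    score = ncc(gradient_pattern(candidate), gradient_pattern(template))
    return score >= threshold, score

rng = np.random.default_rng(0)
template = np.zeros((32, 32)); template[8:24, 8:24] = 1.0        # toy "sign"
candidate = template + 0.05 * rng.standard_normal((32, 32))       # noisy detection
print(matches_template(candidate, template))                      # (True, high score)
```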

  15. The detective quantum efficiency of photon-counting x-ray detectors using cascaded-systems analyses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tanguay, Jesse; Yun, Seungman; School of Mechanical Engineering, Pusan National University, Jangjeon-dong, Geumjeong-gu, Busan 609-735

    Purpose: Single-photon counting (SPC) x-ray imaging has the potential to improve image quality and enable new advanced energy-dependent methods. The purpose of this study is to extend cascaded-systems analyses (CSA) to the description of image quality and the detective quantum efficiency (DQE) of SPC systems. Methods: Point-process theory is used to develop a method of propagating the mean signal and Wiener noise-power spectrum through a thresholding stage (required to identify x-ray interaction events). The new transfer relationships are used to describe the zero-frequency DQE of a hypothetical SPC detector including the effects of stochastic conversion of incident photons to secondary quanta, secondary quantum sinks, additive noise, and threshold level. Theoretical results are compared with Monte Carlo calculations assuming the same detector model. Results: Under certain conditions, the CSA approach can be applied to SPC systems with the additional requirement of propagating the probability density function describing the total number of image-forming quanta through each stage of a cascaded model. Theoretical results including DQE show excellent agreement with Monte Carlo calculations under all conditions considered. Conclusions: Application of the CSA method shows that false counts due to additive electronic noise result in both a nonlinear image signal and increased image noise. There is a window of allowable threshold values to achieve a high DQE that depends on conversion gain, secondary quantum sinks, and additive noise.
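    The following is a generic Monte Carlo sketch (not the paper's cascaded model) of how a counting threshold interacts with conversion gain and additive electronic noise: true counts require a photon's secondary-quantum pulse to clear the threshold, electronic noise alone can create false counts, and the zero-frequency DQE is estimated from the slope of the mean count with exposure and the count variance. All parameter values are assumptions chosen only to illustrate the "window of allowable thresholds" behaviour.

```python
import numpy as np

rng = np.random.default_rng(0)

def frame_counts(q_mean, gain, noise_sigma, threshold, n_noise_samples, n_frames=10000):
    counts = np.empty(n_frames)
    for i in range(n_frames):
        q = rng.poisson(q_mean)
        pulses = rng.poisson(gain, q) + rng.normal(0, noise_sigma, q)  # photon pulses
        false = rng.normal(0, noise_sigma, n_noise_samples)            # empty intervals
        counts[i] = (pulses > threshold).sum() + (false > threshold).sum()
    return counts

def dqe0(q_mean, **kw):
    dq = 0.05 * q_mean
    n_lo, n_hi = frame_counts(q_mean - dq, **kw), frame_counts(q_mean + dq, **kw)
    slope = (n_hi.mean() - n_lo.mean()) / (2 * dq)        # d(mean counts)/d(exposure)
    var = 0.5 * (n_lo.var() + n_hi.var())
    return slope ** 2 * q_mean / var                      # generic DQE(0) estimate

kw = dict(gain=50, noise_sigma=10, n_noise_samples=100)
for thr in (15, 25, 40):
    print(f"threshold {thr}: DQE(0) ~ {dqe0(100, threshold=thr, **kw):.2f}")
```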

  16. Detection and tracking of a moving target using SAR images with the particle filter-based track-before-detect algorithm.

    PubMed

    Gao, Han; Li, Jingwen

    2014-06-19

    A novel approach to detecting and tracking a moving target using synthetic aperture radar (SAR) images is proposed in this paper. Achieved with the particle filter (PF) based track-before-detect (TBD) algorithm, the approach is capable of detecting and tracking the low signal-to-noise ratio (SNR) moving target with SAR systems, which the traditional track-after-detect (TAD) approach is inadequate for. By incorporating the signal model of the SAR moving target into the algorithm, the ambiguity in target azimuth position and radial velocity is resolved while tracking, which leads directly to the true estimation. With the sub-area substituted for the whole area to calculate the likelihood ratio and a pertinent choice of the number of particles, the computational efficiency is improved with little loss in the detection and tracking performance. The feasibility of the approach is validated and the performance is evaluated with Monte Carlo trials. It is demonstrated that the proposed approach is capable of detecting and tracking a moving target with SNR as low as 7 dB, and outperforms the traditional TAD approach when the SNR is below 14 dB.
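    Below is a compact, generic particle-filter track-before-detect sketch in one dimension, illustrating the idea of weighting particles by a likelihood ratio computed only over a small sub-area around each particle instead of the whole frame. The amplitude, noise level, and motion model are illustrative assumptions, not the SAR-specific signal model of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N_PIX, N_FRAMES, N_PART = 200, 20, 500
AMP, SIGMA = 1.2, 1.0                       # assumed target amplitude, noise std
true_x, true_v = 40.0, 3.0

def frame(x):
    z = rng.normal(0.0, SIGMA, N_PIX)
    idx = np.arange(N_PIX)
    return z + AMP * np.exp(-0.5 * ((idx - x) / 1.5) ** 2)   # spread target response

# Particles: columns are position and velocity.
part = np.column_stack([rng.uniform(0, N_PIX, N_PART), rng.uniform(-5, 5, N_PART)])
w = np.full(N_PART, 1.0 / N_PART)

for t in range(N_FRAMES):
    z = frame(true_x + true_v * t)
    # Predict with a near-constant-velocity model plus process noise.
    part[:, 0] += part[:, 1] + rng.normal(0, 0.5, N_PART)
    part[:, 1] += rng.normal(0, 0.1, N_PART)
    # Likelihood ratio over a +/-3 pixel sub-area around each particle.
    lr = np.ones(N_PART)
    for i, (x, _) in enumerate(part):
        lo, hi = int(max(0, x - 3)), int(min(N_PIX, x + 4))
        idx = np.arange(lo, hi)
        s = AMP * np.exp(-0.5 * ((idx - x) / 1.5) ** 2)
        lr[i] = np.exp(((z[lo:hi] * s).sum() - 0.5 * (s ** 2).sum()) / SIGMA ** 2)
    w = w * lr
    w /= w.sum()
    # Resample when the effective sample size collapses.
    if 1.0 / (w ** 2).sum() < N_PART / 2:
        part = part[rng.choice(N_PART, N_PART, p=w)]
        w = np.full(N_PART, 1.0 / N_PART)

print("estimated position:", (w * part[:, 0]).sum(),
      "true:", true_x + true_v * (N_FRAMES - 1))
```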

  17. Detection and Tracking of a Moving Target Using SAR Images with the Particle Filter-Based Track-Before-Detect Algorithm

    PubMed Central

    Gao, Han; Li, Jingwen

    2014-01-01

    A novel approach to detecting and tracking a moving target using synthetic aperture radar (SAR) images is proposed in this paper. Achieved with the particle filter (PF) based track-before-detect (TBD) algorithm, the approach is capable of detecting and tracking the low signal-to-noise ratio (SNR) moving target with SAR systems, which the traditional track-after-detect (TAD) approach is inadequate for. By incorporating the signal model of the SAR moving target into the algorithm, the ambiguity in target azimuth position and radial velocity is resolved while tracking, which leads directly to the true estimation. With the sub-area substituted for the whole area to calculate the likelihood ratio and a pertinent choice of the number of particles, the computational efficiency is improved with little loss in the detection and tracking performance. The feasibility of the approach is validated and the performance is evaluated with Monte Carlo trials. It is demonstrated that the proposed approach is capable of detecting and tracking a moving target with SNR as low as 7 dB, and outperforms the traditional TAD approach when the SNR is below 14 dB. PMID:24949640

  18. [Study of automatic marine oil spills detection using imaging spectroscopy].

    PubMed

    Liu, De-Lian; Han, Liang; Zhang, Jian-Qi

    2013-11-01

    To reduce artificial auxiliary work in the oil spill detection process, an automatic oil spill detection method based on an adaptive matched filter is presented. Firstly, the characteristics of the reflectance spectral signature of the C-H bond in oil spills are analyzed, and an oil spill spectral signature extraction model is designed using the spectral feature of the C-H bond. It is then used to obtain the reference spectral signature for the following oil spill detection step. Secondly, the characteristics of the reflectance spectral signatures of sea water, clouds, and oil spills are compared. The bands which show large differences in the reflectance spectral signatures of sea water, clouds, and oil spills are selected. Using these bands, the sea water pixels are segmented, and the background parameters are then calculated. Finally, the classical adaptive matched filter from target detection algorithms is improved and introduced for oil spill detection. The proposed method is applied to a real airborne visible infrared imaging spectrometer (AVIRIS) hyperspectral image captured during the Deepwater Horizon oil spill in the Gulf of Mexico. The results show that the proposed method has high efficiency, does not need artificial auxiliary work, and can be used for automatic detection of marine oil spills.
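    The following is a hedged sketch of a standard adaptive matched filter (AMF) detector used in the spirit of the final step above: the reference spectrum stands in for the C-H-derived signature, and the background mean and covariance are estimated from pixels segmented as sea water. The threshold and all toy spectra are illustrative assumptions, not the paper's improved filter.

```python
import numpy as np

def amf_scores(pixels, target, bg_pixels):
    """pixels: (n, bands); target: (bands,) reference spectrum; bg_pixels: background pixels."""
    mu = bg_pixels.mean(axis=0)
    cov = np.cov(bg_pixels, rowvar=False) + 1e-6 * np.eye(bg_pixels.shape[1])
    cov_inv = np.linalg.inv(cov)
    s = target - mu
    x = pixels - mu
    # Classical AMF statistic: (s' C^-1 x)^2 / (s' C^-1 s)
    return (x @ cov_inv @ s) ** 2 / (s @ cov_inv @ s)

rng = np.random.default_rng(0)
bands = 20
water = rng.normal(0.2, 0.02, (500, bands))                 # toy "sea water" spectra
oil_sig = np.linspace(0.2, 0.5, bands)                      # toy "oil" reference spectrum
scene = np.vstack([water[:50], oil_sig + rng.normal(0, 0.02, (10, bands))])
scores = amf_scores(scene, oil_sig, water)
print((scores > 50).nonzero()[0])                           # indices flagged as oil
```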

  19. Process Analyzer

    NASA Technical Reports Server (NTRS)

    1994-01-01

    The ChemScan UV-6100 is a spectrometry system originally developed by Biotronics Technologies, Inc. under a Small Business Innovation Research (SBIR) contract. It is marketed to the water and wastewater treatment industries, replacing "grab sampling" with on-line data collection. It analyzes the light absorbance characteristics of a water sample, simultaneously detects hundreds of individual wavelengths absorbed by chemical substances in a process solution, and quantifies the information. Spectral data are then processed by the ChemScan analyzer and compared with calibration files in the system's memory in order to calculate concentrations of chemical substances that cause UV light absorbance in specific patterns. Monitored substances can be analyzed for quality and quantity. Applications include detection of a variety of substances, and the information provided enables an operator to control a process more efficiently.

  20. Detection of single electron spin resonance in a double quantum dot

    NASA Astrophysics Data System (ADS)

    Koppens, F. H. L.; Buizert, C.; Vink, I. T.; Nowack, K. C.; Meunier, T.; Kouwenhoven, L. P.; Vandersypen, L. M. K.

    2007-04-01

    Spin-dependent transport measurements through a double quantum dot are a valuable tool for detecting both the coherent evolution of the spin state of a single electron, as well as the hybridization of two-electron spin states. In this article, we discuss a model that describes the transport cycle in this regime, including the effects of an oscillating magnetic field (causing electron spin resonance) and the effective nuclear fields on the spin states in the two dots. We numerically calculate the current flow due to the induced spin flips via electron spin resonance, and we study the detector efficiency for a range of parameters. The experimental data are compared with the model and we find a reasonable agreement.

  1. Absolute ion detection efficiencies of microchannel plates and funnel microchannel plates for multi-coincidence detection

    NASA Astrophysics Data System (ADS)

    Fehre, K.; Trojanowskaja, D.; Gatzke, J.; Kunitski, M.; Trinter, F.; Zeller, S.; Schmidt, L. Ph. H.; Stohner, J.; Berger, R.; Czasch, A.; Jagutzki, O.; Jahnke, T.; Dörner, R.; Schöffler, M. S.

    2018-04-01

    Modern momentum imaging techniques allow for the investigation of complex molecules in the gas phase by detection of several fragment ions in coincidence. For these studies, it is of great importance that the single-particle detection efficiency ε is as high as possible, as the overall efficiency scales with εⁿ, where n is the number of detected particles. Here we present measured absolute detection efficiencies for protons of several micro-channel plates (MCPs), including efficiency-enhanced "funnel MCPs." Furthermore, the relative detection efficiency for two-, three-, four-, and five-body fragmentation of CHBrClF has been examined. The "funnel" MCPs exhibit an efficiency of approximately 90%, gaining a factor of 24 (as compared to "normal" MCPs) in the case of a five-fold ion coincidence detection.
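    A back-of-the-envelope illustration of the εⁿ scaling quoted above: for a five-fold ion coincidence, a ~90% funnel-MCP efficiency is compared with an assumed ~47.5% conventional-MCP efficiency (the conventional value is our assumption, chosen so that the ratio reproduces the reported factor of about 24).

```python
# Coincidence efficiency scales as eps**n; small per-particle gains compound rapidly.
eff_funnel, eff_normal, n_ions = 0.90, 0.475, 5
gain = (eff_funnel / eff_normal) ** n_ions
print(f"five-fold coincidence efficiency: funnel {eff_funnel**n_ions:.3f}, "
      f"normal {eff_normal**n_ions:.4f}, gain ~{gain:.0f}x")
```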

  2. Deriving the Contribution of Blazars to the Fermi-LAT Extragalactic γ-ray Background at E > 10 GeV with Efficiency Corrections and Photon Statistics

    DOE PAGES

    Di Mauro, M.; Manconi, S.; Zechlin, H. -S.; ...

    2018-03-29

    Here, the Fermi Large Area Telescope (LAT) Collaboration has recently released the Third Catalog of Hard Fermi-LAT Sources (3FHL), which contains 1556 sources detected above 10 GeV with seven years of Pass 8 data. Building upon the 3FHL results, we investigate the flux distribution of sources at high Galactic latitudes (|b| > 20°), which are mostly blazars. We use two complementary techniques: (1) a source-detection efficiency correction method and (2) an analysis of pixel photon count statistics with the one-point probability distribution function (1pPDF). With the first method, using realistic Monte Carlo simulations of the γ-ray sky, we calculate the efficiency of the LAT to detect point sources. This enables us to find the intrinsic source-count distribution at photon fluxes down to 7.5 × 10⁻¹² ph cm⁻² s⁻¹. With this method, we detect a flux break at (3.5 ± 0.4) × 10⁻¹¹ ph cm⁻² s⁻¹ with a significance of at least 5.4σ. The power-law indexes of the source-count distribution above and below the break are 2.09 ± 0.04 and 1.07 ± 0.27, respectively. This result is confirmed with the 1pPDF method, which has a sensitivity reach of ~10⁻¹¹ ph cm⁻² s⁻¹. Integrating the derived source-count distribution above the sensitivity of our analysis, we find that (42 ± 8)% of the extragalactic γ-ray background originates from blazars.

  3. The effectiveness of detection of splashed particles using a system of three integrated high-speed cameras

    NASA Astrophysics Data System (ADS)

    Ryżak, Magdalena; Beczek, Michał; Mazur, Rafał; Sochan, Agata; Bieganowski, Andrzej

    2017-04-01

    The phenomenon of splash, which is one of the factors causing erosion of the soil surface, is the subject of research by various scientific teams. One efficient method of observing and analysing this phenomenon is the use of high-speed cameras that record particles at 2000 frames per second or higher. Analysis of the splash phenomenon with the use of high-speed cameras and specialized software can reveal, among other things, the number of ejected particles, their speeds, trajectories, and the distances over which they were transferred. The paper presents an attempt at evaluating the efficiency of detection of splashed particles with the use of a set of 3 cameras (Vision Research MIRO 310) and the Dantec Dynamics Studio software, using its 3D module (Volumetric PTV). In order to assess the effectiveness of estimating the number of particles, the experiment was performed on glass beads with a diameter of 0.5 mm (corresponding to the sand fraction). Water droplets with a diameter of 4.2 mm fell on a sample from a height of 1.5 m. Two types of splashed particles were observed: particles with a low range (up to 18 mm) splashed at larger angles and particles with a high range (up to 118 mm) splashed at smaller angles. The detection efficiency for the number of splashed particles estimated by the software was 45-65% for particles with a large range. The effectiveness of the particle detection by the software was calculated on the basis of a comparison with the number of beads that fell on the adhesive surface around the sample. This work was partly financed by the National Science Centre, Poland; project no. 2014/14/E/ST10/00851.

  4. Deriving the Contribution of Blazars to the Fermi-LAT Extragalactic γ-ray Background at E > 10 GeV with Efficiency Corrections and Photon Statistics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Di Mauro, M.; Manconi, S.; Zechlin, H. -S.

    Here, the Fermi Large Area Telescope (LAT) Collaboration has recently released the Third Catalog of Hard Fermi-LAT Sources (3FHL), which contains 1556 sources detected above 10 GeV with seven years of Pass 8 data. Building upon the 3FHL results, we investigate the flux distribution of sources at high Galactic latitudes (|b| > 20°), which are mostly blazars. We use two complementary techniques: (1) a source-detection efficiency correction method and (2) an analysis of pixel photon count statistics with the one-point probability distribution function (1pPDF). With the first method, using realistic Monte Carlo simulations of the γ-ray sky, we calculate the efficiency of the LAT to detect point sources. This enables us to find the intrinsic source-count distribution at photon fluxes down to 7.5 × 10⁻¹² ph cm⁻² s⁻¹. With this method, we detect a flux break at (3.5 ± 0.4) × 10⁻¹¹ ph cm⁻² s⁻¹ with a significance of at least 5.4σ. The power-law indexes of the source-count distribution above and below the break are 2.09 ± 0.04 and 1.07 ± 0.27, respectively. This result is confirmed with the 1pPDF method, which has a sensitivity reach of ~10⁻¹¹ ph cm⁻² s⁻¹. Integrating the derived source-count distribution above the sensitivity of our analysis, we find that (42 ± 8)% of the extragalactic γ-ray background originates from blazars.

  5. Sensitivity, specificity, and efficiency in detecting opiates in oral fluid with the Cozart Opiate Microplate EIA and GC-MS following controlled codeine administration.

    PubMed

    Barnes, Allan J; Kim, Insook; Schepers, Raf; Moolchan, Eric T; Wilson, Lisa; Cooper, Gail; Reid, Claire; Hand, Chris; Huestis, Marilyn A

    2003-10-01

    Oral fluid specimens (N = 1406) were collected from 19 subjects prior to and up to 72 h following controlled administration of oral codeine. Volunteers provided informed consent to participate in this National Institute on Drug Abuse Institutional Review Board-approved protocol. A modification of the Cozart Microplate Opiate EIA Oral Fluid Kit (Opiate ELISA), employing codeine calibrators, was used for semiquantitative analysis of opiates, followed by gas chromatography-mass spectrometry (GC-MS) for the confirmation and quantitation of codeine, norcodeine, morphine, and normorphine in oral fluid. GC-MS limits of detection and quantitation were 2.5 microg/L for all analytes. The Substance Abuse and Mental Health Services Administration (SAMHSA) has proposed a 40-microg/L opiate screening and a 40-microg/L morphine or codeine confirmation cutoff for the detection of opiate use. Oral fluid opiate screening and confirmation cutoffs of 30 microg/L are in use in the U.K. Utilizing 2.5-, 20-, 30-, and 40-microg/L GC-MS cutoffs, 26%, 20%, 19%, and 18% of the oral fluid specimens were positive for codeine or one of its metabolites. Six Opiate ELISA/confirmation cutoff criteria (2.5/2.5, 10/2.5, 20/20, 30/20, 30/30, and 40/40 microg/L) were evaluated. Opiate ELISA sensitivity, specificity, and efficiency were calculated from the number of true-positive, true-negative, false-positive, and false-negative results at each screening/confirmation cutoff. Sensitivity, specificity, and efficiency for the lowest cutoff were 91.5%, 88.6%, and 89.3%. Application of the cutoff currently used in the U.K. yielded sensitivity, specificity, and efficiency results of 79.7%, 99.0%, and 95.4%, and similar results of 76.7%, 99.1%, and 95.1% when applying the SAMHSA criteria. These data indicate that the Opiate ELISA efficiently detects oral codeine use. In addition, the data, collected following controlled oral codeine administration, may aid in the interpretation of opiate oral fluid test results and in the selection of appropriate oral fluid screening and confirmation cutoffs.
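    The small helper below shows how sensitivity, specificity, and efficiency figures of the kind quoted above are computed from true/false positives and negatives at a given screening/confirmation cutoff pair. The counts in the example call are invented for illustration, not the study's data.

```python
def screening_metrics(tp, tn, fp, fn):
    sensitivity = tp / (tp + fn)                    # positives correctly screened positive
    specificity = tn / (tn + fp)                    # negatives correctly screened negative
    efficiency = (tp + tn) / (tp + tn + fp + fn)    # overall agreement with confirmation
    return sensitivity, specificity, efficiency

print(screening_metrics(tp=330, tn=930, fp=12, fn=24))
```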

  6. A program for the calculation of paraboloidal-dish solar thermal power plant performance

    NASA Technical Reports Server (NTRS)

    Bowyer, J. M., Jr.

    1985-01-01

    A program capable of calculating the design-point and quasi-steady-state annual performance of a paraboloidal-concentrator solar thermal power plant without energy storage was written for a programmable calculator equipped with a suitable printer. The power plant may be located at any site for which a histogram of annual direct normal insolation is available. Inputs required by the program are the aperture area and the design and annual efficiencies of the concentrator; the intercept factor and apparent efficiency of the power conversion subsystem and a polynomial representation of its normalized part-load efficiency; the efficiency of the electrical generator or alternator; the efficiency of the electric power conditioning and transport subsystem; and the fractional parasitic losses for the plant. Losses to auxiliaries associated with each individual module are to be deducted when the power conversion subsystem efficiencies are calculated. Outputs provided by the program are the system design efficiency, the annualized receiver efficiency, the annualized power conversion subsystem efficiency, the total annual direct normal insolation received per unit area of concentrator aperture, and the system annual efficiency.
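    The following is a hedged sketch of the kind of annual-performance calculation described above: annual electrical energy is accumulated over the bins of a direct normal insolation histogram, applying the concentrator, receiver, power conversion (with a normalized part-load polynomial), generator, and transport efficiencies, and then subtracting parasitic losses. All numerical values are illustrative placeholders, not the program's inputs or defaults.

```python
import numpy as np

def annual_energy_kwh(dni_bins_kw_m2, hours, aperture_m2, eta_conc, eta_rcv,
                      eta_pc_rated, partload_poly, eta_gen, eta_transport,
                      parasitic_frac, rated_kw_m2=1.0):
    total = 0.0
    for dni, h in zip(dni_bins_kw_m2, hours):
        load = min(dni / rated_kw_m2, 1.0)                       # normalized part load
        eta_pc = eta_pc_rated * np.polyval(partload_poly, load)  # part-load conversion eff.
        p_elec = dni * aperture_m2 * eta_conc * eta_rcv * eta_pc * eta_gen * eta_transport
        total += p_elec * h                                      # kW * h in this bin
    return total * (1.0 - parasitic_frac)

dni_bins = [0.3, 0.5, 0.7, 0.9]     # kW/m^2 bin centres of the insolation histogram
hours = [400, 700, 900, 600]        # annual hours spent in each bin
poly = [-0.25, 1.25, 0.0]           # toy normalized part-load efficiency curve (=1 at full load)
print(annual_energy_kwh(dni_bins, hours, aperture_m2=88, eta_conc=0.9, eta_rcv=0.85,
                        eta_pc_rated=0.30, partload_poly=poly, eta_gen=0.93,
                        eta_transport=0.97, parasitic_frac=0.05))
```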

  7. Infrared small target detection based on multiscale center-surround contrast measure

    NASA Astrophysics Data System (ADS)

    Fu, Hao; Long, Yunli; Zhu, Ran; An, Wei

    2018-04-01

    Infrared (IR) small target detection plays a critical role in the Infrared Search And Track (IRST) system. Although it has been studied for years, some difficulties remain in cluttered environments. Following the principle by which humans discriminate small targets in a natural scene, namely that there is a signature of discontinuity between the object and its neighboring regions, we develop an efficient method for infrared small target detection called the multiscale center-surround contrast measure (MCSCM). First, to determine the maximum neighboring window size, an entropy-based window selection technique is used. Then, we construct a novel multiscale center-surround contrast measure to calculate the saliency map. Compared with the original image, the MCSCM map contains less background clutter and residual noise. Subsequently, a simple threshold is used to segment the target. Experimental results show our method achieves better performance.
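    The following is a hedged sketch of a generic multiscale center-surround contrast measure: at each pixel and scale, the mean of a small center window is compared with the mean of the surrounding ring, and the maximum response over scales forms the saliency map. The window sizes, the difference-based contrast, and the segmentation threshold are illustrative choices, not necessarily the exact MCSCM definition.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def center_surround_saliency(img, scales=(3, 5, 9)):
    img = img.astype(float)
    sal = np.zeros_like(img)
    for k in scales:
        center = uniform_filter(img, size=k)
        outer = uniform_filter(img, size=3 * k)
        # Surround mean = mean over the ring between the k and 3k windows.
        surround = (outer * (3 * k) ** 2 - center * k ** 2) / ((3 * k) ** 2 - k ** 2)
        sal = np.maximum(sal, center - surround)   # bright small targets stand out
    return np.clip(sal, 0, None)

rng = np.random.default_rng(0)
img = rng.normal(0.0, 1.0, (128, 128))
img[60:63, 60:63] += 6.0                           # 3x3 "small target"
sal = center_surround_saliency(img)
target = sal > 0.5 * sal.max()                     # simple threshold segmentation
print(np.argwhere(target).mean(axis=0))            # near (61, 61)
```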

  8. Coincidence probabilities for spacecraft gravitational wave experiments - Massive coalescing binaries

    NASA Technical Reports Server (NTRS)

    Tinto, Massimo; Armstrong, J. W.

    1991-01-01

    Massive coalescing binary systems are candidate sources of gravitational radiation in the millihertz frequency band accessible to spacecraft Doppler tracking experiments. This paper discusses signal processing and detection probability for waves from coalescing binaries in the regime where the signal frequency increases linearly with time, i.e., 'chirp' signals. Using known noise statistics, thresholds with given false alarm probabilities are established for one- and two-spacecraft experiments. Given the threshold, the detection probability is calculated as a function of gravitational wave amplitude for both one- and two-spacecraft experiments, assuming random polarization states and under various assumptions about wave directions. This allows quantitative statements about the detection efficiency of these experiments and the utility of coincidence experiments. In particular, coincidence probabilities for two-spacecraft experiments are insensitive to the angle between the directions to the two spacecraft, indicating that near-optimal experiments can be done without constraints on spacecraft trajectories.

  9. Use of LANDSAT-1 data for the detection and mapping of saline seeps in Montana

    NASA Technical Reports Server (NTRS)

    May, G. A. (Principal Investigator); Petersen, G. W.

    1976-01-01

    The author has identified the following significant results. April, May, and August are the best times to detect saline seeps. Specific times within these months would be dependent upon weather, phenology, and growth conditions. Saline seeps can be efficiently and accurately mapped, within resolution capabilities, from merged May and August LANDSAT 1 data. Seeps were mapped by detecting salt crusts in the spring and indicator plants in the fall. These indicator plants were kochia, inkweed, and foxtail barley. The total hectares of the mapped saline seeps were calculated and tabulated. Saline seeps less than two hectares in size or that have linear configurations less than 200 meters in width were not mapped using the LANDSAT 1 data. Saline seep signatures developed in the Coffee Creek test site were extended to map saline seeps located outside this area.

  10. Whole exome sequencing is an efficient, sensitive and specific method of mutation detection in osteogenesis imperfecta and Marfan syndrome

    PubMed Central

    McInerney-Leo, Aideen M; Marshall, Mhairi S; Gardiner, Brooke; Coucke, Paul J; Van Laer, Lut; Loeys, Bart L; Summers, Kim M; Symoens, Sofie; West, Jennifer A; West, Malcolm J; Paul Wordsworth, B; Zankl, Andreas; Leo, Paul J; Brown, Matthew A; Duncan, Emma L

    2013-01-01

    Osteogenesis imperfecta (OI) and Marfan syndrome (MFS) are common Mendelian disorders. Both conditions are usually diagnosed clinically, as genetic testing is expensive due to the size and number of potentially causative genes and mutations. However, genetic testing may benefit patients, at-risk family members and individuals with borderline phenotypes, as well as improving genetic counseling and allowing critical differential diagnoses. We assessed whether whole exome sequencing (WES) is a sensitive method for mutation detection in OI and MFS. WES was performed on genomic DNA from 13 participants with OI and 10 participants with MFS who had known mutations, with exome capture followed by massive parallel sequencing of multiplexed samples. Single nucleotide polymorphisms (SNPs) and small indels were called using Genome Analysis Toolkit (GATK) and annotated with ANNOVAR. CREST, exomeCopy and exomeDepth were used for large deletion detection. Results were compared with the previous data. Specificity was calculated by screening WES data from a control population of 487 individuals for mutations in COL1A1, COL1A2 and FBN1. The target capture of five exome capture platforms was compared. All 13 mutations in the OI cohort and 9/10 in the MFS cohort were detected (sensitivity=95.6%) including non-synonymous SNPs, small indels (<10 bp), and a large UTR5/exon 1 deletion. One mutation was not detected by GATK due to strand bias. Specificity was 99.5%. Capture platforms and analysis programs differed considerably in their ability to detect mutations. Consumable costs for WES were low. WES is an efficient, sensitive, specific and cost-effective method for mutation detection in patients with OI and MFS. Careful selection of platform and analysis programs is necessary to maximize success. PMID:24501682

  11. Automated grain extraction and classification by combining improved region growing segmentation and shape descriptors in electromagnetic mill classification system

    NASA Astrophysics Data System (ADS)

    Budzan, Sebastian

    2018-04-01

    In this paper, an automatic method for grain detection and classification is presented. As input, it uses a single digital image obtained from the milling process of copper ore with a high-quality digital camera. Grinding is an extremely energy- and cost-consuming process, so granularity evaluation should be performed with high efficiency and low time consumption. The method proposed in this paper is based on three-stage image processing. First, all grains are detected using Seeded Region Growing (SRG) segmentation with the proposed adaptive thresholding based on the calculation of the Relative Standard Deviation (RSD). In the next step, the detection results are refined using shape information about the detected grains derived from a distance map. Finally, each grain in the sample is classified into one of the predefined granularity classes. The quality of the proposed method has been evaluated using samples of nominal granularity and compared with other methods.
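
    A minimal seeded-region-growing sketch follows, with an adaptive acceptance tolerance derived from the relative standard deviation (RSD = standard deviation / mean) of the region grown so far; the paper's exact thresholding rule, the shape refinement via distance maps, and the constant k are assumptions.

    ```python
    import numpy as np
    from collections import deque

    def region_grow(image, seed, k=1.5):
        img = image.astype(np.float64)
        mask = np.zeros(img.shape, dtype=bool)
        queue = deque([seed])          # seed is a (row, col) tuple
        values = [img[seed]]
        mask[seed] = True
        while queue:
            y, x = queue.popleft()
            mean, std = np.mean(values), np.std(values)
            rsd = std / mean if mean else 0.0
            tol = k * rsd * mean + 1e-6            # adaptive tolerance from the RSD
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < img.shape[0] and 0 <= nx < img.shape[1] and not mask[ny, nx]:
                    if abs(img[ny, nx] - mean) <= tol:
                        mask[ny, nx] = True
                        values.append(img[ny, nx])
                        queue.append((ny, nx))
        return mask
    ```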

  12. Novel algorithm by low complexity filter on retinal vessel segmentation

    NASA Astrophysics Data System (ADS)

    Rostampour, Samad

    2011-10-01

    This article presents a new method to detect blood vessels of the retina in digital images. Retinal vessel segmentation is important for detecting side effects of diabetes, because diabetes can form new capillaries which are very brittle. The research has been done in two phases: preprocessing and processing. The preprocessing phase consists of applying a new filter that produces a suitable output: it renders vessels in dark color on a white background, giving good contrast between vessels and background. Its complexity is very low and extraneous image content is eliminated. The second phase is processing, which uses a Bayesian method, a supervised classification approach that uses the mean and variance of pixel intensities to calculate class probabilities. Finally, the pixels of the image are divided into two classes: vessels and background. The images used are from the DRIVE database. The method achieves an average efficiency of 95 percent. The method was also applied to a retinopathy sample from outside the DRIVE database, and a perfect result was obtained.
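
    As a hedged illustration of such a Bayesian pixel classifier (not the paper's implementation), a two-class Gaussian model can label each pixel from class means and variances estimated on training pixels; the prior and helper names below are assumptions.

    ```python
    import numpy as np

    def fit_class_stats(intensities):
        # Estimate mean and variance of pixel intensities for one class.
        return float(np.mean(intensities)), float(np.var(intensities) + 1e-9)

    def classify_pixels(image, vessel_stats, background_stats, prior_vessel=0.1):
        img = image.astype(np.float64)

        def log_likelihood(x, stats):
            mean, var = stats
            return -0.5 * np.log(2 * np.pi * var) - (x - mean) ** 2 / (2 * var)

        log_post_vessel = log_likelihood(img, vessel_stats) + np.log(prior_vessel)
        log_post_back = log_likelihood(img, background_stats) + np.log(1 - prior_vessel)
        return log_post_vessel > log_post_back   # True where a pixel is labelled "vessel"
    ```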

  13. 43 CFR Appendix A to Part 418 - Calculation of Efficiency Equation

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 43 Public Lands: Interior 1 2011-10-01 2011-10-01 false Calculation of Efficiency Equation A Appendix A to Part 418 Public Lands: Interior Regulations Relating to Public Lands BUREAU OF RECLAMATION.... 418, App. A Appendix A to Part 418—Calculation of Efficiency Equation ER18DE97.008 ER18DE97.009 ...

  14. 43 CFR Appendix A to Part 418 - Calculation of Efficiency Equation

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 43 Public Lands: Interior 1 2010-10-01 2010-10-01 false Calculation of Efficiency Equation A Appendix A to Part 418 Public Lands: Interior Regulations Relating to Public Lands BUREAU OF RECLAMATION.... 418, App. A Appendix A to Part 418—Calculation of Efficiency Equation ER18DE97.008 ER18DE97.009 ...

  15. Skin Lesion Analysis towards Melanoma Detection Using Deep Learning Network

    PubMed Central

    2018-01-01

    Skin lesions are a severe disease burden globally. Early detection of melanoma in dermoscopy images significantly increases the survival rate. However, accurate recognition of melanoma is extremely challenging due to the following reasons: low contrast between lesions and skin, visual similarity between melanoma and non-melanoma lesions, etc. Hence, reliable automatic detection of skin tumors is very useful for increasing the accuracy and efficiency of pathologists. In this paper, we propose two deep learning methods to address three main tasks emerging in the area of skin lesion image processing, i.e., lesion segmentation (task 1), lesion dermoscopic feature extraction (task 2) and lesion classification (task 3). A deep learning framework consisting of two fully convolutional residual networks (FCRN) is proposed to simultaneously produce the segmentation result and the coarse classification result. A lesion index calculation unit (LICU) is developed to refine the coarse classification results by calculating a distance heat-map. A straightforward CNN is proposed for the dermoscopic feature extraction task. The proposed deep learning frameworks were evaluated on the ISIC 2017 dataset. Experimental results show the promising accuracy of our frameworks: 0.753 for task 1, 0.848 for task 2 and 0.912 for task 3. PMID:29439500

  16. Texture functions in image analysis: A computationally efficient solution

    NASA Technical Reports Server (NTRS)

    Cox, S. C.; Rose, J. F.

    1983-01-01

    A computationally efficient means for calculating texture measurements from digital images by use of the co-occurrence technique is presented. The calculation of the statistical descriptors of image texture and a solution that circumvents the need for calculating and storing a co-occurrence matrix are discussed. The results show that existing efficient algorithms for calculating sums, sums of squares, and cross products can be used to compute complex co-occurrence relationships directly from the digital image input.
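
    A hedged sketch of the idea: second-order texture statistics for one pixel displacement can be computed from running sums, sums of squares, and cross products of co-occurring pixel pairs, without ever building the co-occurrence matrix; the specific statistics and displacement shown are illustrative, not the paper's full descriptor set.

    ```python
    import numpy as np

    def cooccurrence_stats(image, dy=0, dx=1):
        img = image.astype(np.float64)
        a = img[: img.shape[0] - dy, : img.shape[1] - dx]   # reference pixels
        b = img[dy:, dx:]                                   # co-occurring neighbours
        n = a.size
        sa, sb = a.sum(), b.sum()
        saa, sbb, sab = (a * a).sum(), (b * b).sum(), (a * b).sum()
        contrast = ((a - b) ** 2).sum() / n                 # mean squared difference
        var_a = saa / n - (sa / n) ** 2
        var_b = sbb / n - (sb / n) ** 2
        cov = sab / n - (sa / n) * (sb / n)
        correlation = cov / np.sqrt(var_a * var_b + 1e-12)
        return contrast, correlation
    ```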

  17. Determination of antidepressants in human urine extracted by magnetic multiwalled carbon nanotube poly(styrene-co-divinylbenzene) composites and separation by capillary electrophoresis.

    PubMed

    Murtada, Khaled; de Andrés, Fernando; Ríos, Angel; Zougagh, Mohammed

    2018-04-20

    Poly(styrene-co-divinylbenzene)-coated magnetic multiwalled carbon nanotube composite synthesized by in-situ high temperature combination and precipitation polymerization of styrene-co-divinylbenzene has been employed as a magnetic sorbent for the solid phase extraction of antidepressants in human urine samples. Fluoxetine, venlafaxine, citalopram and sertraline were, afterwards, separated and determined by capillary electrophoresis with diode array detection. The presence of magnetic multiwalled carbon nanotubes in native poly(styrene-co-divinylbenzene) not only simplified sample treatment but also enhanced the adsorption efficiencies, obtaining extraction recoveries higher than 89.5% for all analytes. Moreover, this composite can be re-used at least 10 times without loss of efficiency and limits of detection ranging from 0.014 to 0.041 μg mL -1 were calculated. Additionally, precision values ranging from 0.08 to 7.50% and from 0.21 to 3.05% were obtained for the responses and for the migration times of the analytes, respectively. This article is protected by copyright. All rights reserved.

  18. 40 CFR 63.753 - Reporting requirements.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... the control system is calculated to be less than 81%, the initial material balance calculation, and... used, (A) each rolling period when the overall control efficiency of the control system is calculated... the overall control efficiency of the control system is calculated to be less than 81%, the initial...

  19. 40 CFR 63.753 - Reporting requirements.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... the control system is calculated to be less than 81%, the initial material balance calculation, and... used, (A) each rolling period when the overall control efficiency of the control system is calculated... the overall control efficiency of the control system is calculated to be less than 81%, the initial...

  20. 40 CFR 63.753 - Reporting requirements.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... the control system is calculated to be less than 81%, the initial material balance calculation, and... used, (A) each rolling period when the overall control efficiency of the control system is calculated... the overall control efficiency of the control system is calculated to be less than 81%, the initial...

  1. 40 CFR 63.753 - Reporting requirements.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... the control system is calculated to be less than 81%, the initial material balance calculation, and... used, (A) each rolling period when the overall control efficiency of the control system is calculated... the overall control efficiency of the control system is calculated to be less than 81%, the initial...

  2. 40 CFR 63.753 - Reporting requirements.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... the control system is calculated to be less than 81%, the initial material balance calculation, and... used, (A) each rolling period when the overall control efficiency of the control system is calculated... the overall control efficiency of the control system is calculated to be less than 81%, the initial...

  3. Applications of Nanomaterials Based on Magnetite and Mesoporous Silica on the Selective Detection of Zinc Ion in Live Cell Imaging.

    PubMed

    Erami, Roghayeh Sadeghi; Ovejero, Karina; Meghdadi, Soraia; Filice, Marco; Amirnasr, Mehdi; Rodríguez-Diéguez, Antonio; De La Orden, María Ulagares; Gómez-Ruiz, Santiago

    2018-06-14

    Functionalized magnetite nanoparticles (FMNPs) and functionalized mesoporous silica nanoparticles (FMSNs) were synthesized by the conjugation of magnetite and mesoporous silica with the small and fluorogenic benzothiazole ligand, that is, 2-(2-hydroxyphenyl)benzothiazole (hpbtz). The synthesized fluorescent nanoparticles were characterized by FTIR, XRD, XRF, 13C CP MAS NMR, BET, and TEM. The photophysical behavior of FMNPs and FMSNs in ethanol was studied using fluorescence spectroscopy. The modification of magnetite and silica scaffolds with the highly fluorescent benzothiazole ligand enabled the nanoparticles to be used as selective and sensitive optical probes for zinc ion detection. Moreover, the presence of hpbtz in FMNPs and FMSNs induced efficient cell viability and zinc ion uptake, with desirable signaling in the normal human kidney epithelial (Hek293) cell line. The significant viability of FMNPs and FMSNs (80% and 92%, respectively) indicates a potential applicability of these nanoparticles as in vitro imaging agents. The calculated limits of detection (LODs) were found to be 2.53 × 10⁻⁶ and 2.55 × 10⁻⁶ M for Fe₃O₄-H@hpbtz and MSN-Et₃N-IPTMS-hpbtz-f1, respectively. FMSNs showed more pronounced zinc signaling relative to FMNPs, as a result of the more efficient penetration into the cells.

  4. A study on efficient detection of network-based IP spoofing DDoS and malware-infected Systems.

    PubMed

    Seo, Jung Woo; Lee, Sang Jin

    2016-01-01

    Large-scale network environments require effective detection and response methods against DDoS attacks. With the advancement of IT infrastructure such as servers and network equipment, DDoS attack traffic arising from a few malware-infected systems, capable of crippling the organization's internal network, has become a significant threat. This study calculates the frequency of network-based packet attributes and analyzes the anomalies of these attributes in order to detect IP-spoofed DDoS attacks. A method is also proposed for the effective detection of malware-infected systems triggering IP-spoofed DDoS attacks on an edge network. Detection accuracy and performance on real-time traffic collected from a core network are analyzed through the use of the proposed algorithm, and a prototype was developed to evaluate the algorithm's performance. As a result, DDoS attacks on the internal network were detected in real time and whether or not IP addresses were spoofed was confirmed. Detecting hosts infected by malware in real time allowed intrusion responses to be executed before stoppage of the internal network caused by large-scale attack traffic.

  5. Brownian rotational relaxation and power absorption in magnetite nanoparticles

    NASA Astrophysics Data System (ADS)

    Goya, G. F.; Fernandez-Pacheco, R.; Arruebo, M.; Cassinelli, N.; Ibarra, M. R.

    2007-09-01

    We present a study of the power absorption efficiency of several magnetite-based colloids, to assess their potential as magnetic inductive hyperthermia (MIH) agents. Relaxation times τ were measured through the imaginary susceptibility component χ″(T), and analyzed within Debye's theory of dipolar fluids. The results indicated Brownian rotational relaxation and allowed the hydrodynamic radius to be calculated, close to the values obtained from photon correlation. The study of the colloids' performance as power absorbers showed no detectable temperature increase for dextran-coated Fe3O4 nanoparticles, whereas a second Fe3O4-based dispersion of similar concentration could be heated by up to 12 K after 30 min under similar experimental conditions. The different power absorption efficiencies are discussed in terms of the magnetic structure of the nanoparticles.
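
    For reference, the standard Brownian rotational relaxation time used in this context is the textbook relation below (not a value quoted from the paper), with η the carrier-fluid viscosity, V_H the hydrodynamic volume of the particle, and k_B T the thermal energy:

    ```latex
    \tau_B = \frac{3\,\eta\,V_H}{k_B T}
    ```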

  6. Quantum cryptography with perfect multiphoton entanglement.

    PubMed

    Luo, Yuhui; Chan, Kam Tai

    2005-05-01

    Multiphoton entanglement in the same polarization has been shown theoretically to be obtainable by type-I spontaneous parametric downconversion (SPDC), which can generate bright pulses more easily than type-II SPDC. A new quantum cryptographic protocol utilizing polarization pairs with the detected type-I entangled multiphotons is proposed as a quantum key distribution scheme. We calculate the information capacity versus the photon number corresponding to polarization, after considering the transmission loss inside the optical fiber, the detector efficiency, and intercept-resend attacks at the level of the channel error. The result compares favorably with all other schemes employing entanglement.

  7. Developing the Model of Fuel Injection Process Efficiency Analysis for Injector for Diesel Engines

    NASA Astrophysics Data System (ADS)

    Anisimov, M. Yu; Kayukov, S. S.; Gorshkalev, A. A.; Belousov, A. V.; Gallyamov, R. E.; Lysenko, Yu D.

    2018-01-01

    The article proposes an approach for assessing the quality of fuel injection by an injector, consisting of the development of calculation blocks in a common injector model within LMS Imagine.Lab AMESim. The parameters of the injector model in the article correspond to a series-production Common Rail-type injector with a solenoid. The possibilities of this approach are demonstrated by presenting results for the example of modelling a modified injector. The research results reveal the advantages of the proposed approach to assessing fuel injection quality.

  8. An Efficient Energy Management Strategy, Unique Power Split & Energy Distribution, Based on Calculated Vehicle Road Loads

    DTIC Science & Technology

    2012-08-01

    HMMWV for the current given inputs based on the current vehicle speed, acceleration pedal, and brake pedal. From this driver requested power at the...HMMWV engine, b) base HMMWV gear ratios of the 4-speed transmission, c) acceleration and brake pedal pressed for the hybrid vehicle and d) torque...coefficient. µb: threshold for detecting brake pedal pressed. ρ: air mass density, ρ = ma/Va, where ma is the mass of air

  9. A highly selective and simple fluorescent sensor for mercury (II) ion detection based on cysteamine-capped CdTe quantum dots synthesized by the reflux method.

    PubMed

    Ding, Xiaojie; Qu, Lingbo; Yang, Ran; Zhou, Yuchen; Li, Jianjun

    2015-06-01

    Cysteamine (CA)-capped CdTe quantum dots (QDs) (CA-CdTe QDs) were prepared by the reflux method and utilized as an efficient nano-sized fluorescent sensor to detect mercury (II) ions (Hg(2+)). Under optimum conditions, the fluorescence quenching effect of CA-CdTe QDs was linear at Hg(2+) concentrations in the range of 6.0-450 nmol/L. The detection limit was calculated to be 4.0 nmol/L according to the 3σ IUPAC criteria. The influence of 10-fold Pb(2+), Cu(2+) and Ag(+) on the determination of Hg(2+) was < 7% (superior to other reports based on crude QDs). Furthermore, the detection sensitivity and selectivity were much improved relative to a sensor based on the CA-CdTe QDs probe, which was prepared using a one-pot synthetic method. This CA-CdTe QDs sensor system represents a new possibility for improving the detection performance of a QDs sensor by changing the synthesis method. Copyright © 2014 John Wiley & Sons, Ltd.
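
    For reference, the 3σ IUPAC criterion mentioned above estimates the limit of detection from the standard deviation of the blank signal σ_b and the calibration slope m (a standard relation, not a formula quoted from the paper):

    ```latex
    \mathrm{LOD} = \frac{3\,\sigma_b}{m}
    ```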

  10. SU-F-T-368: Improved HPGe Detector Precise Efficiency Calibration with Monte Carlo Simulations and Radioactive Sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhai, Y. John

    2016-06-15

    Purpose: To obtain an improved, precise gamma efficiency calibration curve of an HPGe (High Purity Germanium) detector with a new comprehensive approach. Methods: Both radioactive sources and Monte Carlo simulation (CYLTRAN) are used to determine the HPGe gamma efficiency for the energy range 0–8 MeV. The HPGe is a GMX coaxial 280 cm³ N-type 70% gamma detector. Using the Momentum Achromat Recoil Spectrometer (MARS) at the K500 superconducting cyclotron of Texas A&M University, the radioactive nucleus 24Al was produced and separated. This nucleus has positron decays followed by gamma transitions up to 8 MeV from 24Mg excited states, which are used for the HPGe efficiency calibration. Results: With the 24Al gamma energy spectrum up to 8 MeV, the efficiency for the 7.07 MeV γ ray at 4.9 cm distance from the radioactive source 24Al was obtained at a value of 0.194(4)%, by carefully considering various factors such as positron annihilation, the peak summing effect, beta detector efficiency and the internal conversion effect. The Monte Carlo simulation (CYLTRAN) gave a value of 0.189%, in agreement with the experimental measurements. Applying the same approach to different energy points, a precise efficiency calibration curve of the HPGe detector up to 7.07 MeV at 4.9 cm from the source 24Al was obtained. Using the same data analysis procedure, the efficiency for the 7.07 MeV gamma ray at 15.1 cm from the source 24Al was obtained at a value of 0.0387(6)%. MC simulation gave a similar value of 0.0395%. This discrepancy led us to assign an uncertainty of 3% to the efficiency at 15.1 cm up to 7.07 MeV. The MC calculations also reproduced the intensity of the observed single- and double-escape peaks, provided that the effects of positron annihilation in flight were incorporated. Conclusion: The precision-improved gamma efficiency calibration curve provides more accurate radiation detection and dose calculation for cancer radiotherapy treatment.

  11. Inhibition of the electron cyclotron maser instability in the dense magnetosphere of a hot Jupiter

    NASA Astrophysics Data System (ADS)

    Daley-Yates, S.; Stevens, I. R.

    2018-06-01

    Hot Jupiter (HJ) type exoplanets are expected to produce strong radio emission in the MHz range via the Electron Cyclotron Maser Instability (ECMI). To date, no repeatable detections have been made. To explain the absence of observational results, we conduct 3D adaptive mesh refinement (AMR) magnetohydrodynamic (MHD) simulations of the magnetic interactions between a solar-type star and an HJ using the publicly available code PLUTO. The results are used to calculate the efficiency of the ECMI at producing detectable radio emission from the planet's magnetosphere. We also calculate the frequency of the ECMI emission, providing upper and lower bounds, placing it at the limit of detectability due to Earth's ionospheric cutoff of ˜10 MHz. The incident kinetic and magnetic power available to the ECMI is also determined and a flux of 0.075 mJy for an observer at 10 pc is calculated. The magnetosphere is also characterized and an analysis of the bow shock which forms upstream of the planet is conducted. This shock corresponds to the thin-shell model for a colliding wind system. The simulation results show that the ECMI process is completely inhibited by the planet's expanding atmosphere, due to absorption of UV radiation from the host star. The density, velocity, temperature and magnetic field of the planetary wind are found to result in a magnetosphere where the plasma frequency is raised above that of the ECMI emission, making the planet undetectable at MHz radio frequencies.

  12. Integral Images: Efficient Algorithms for Their Computation and Storage in Resource-Constrained Embedded Vision Systems

    PubMed Central

    Ehsan, Shoaib; Clark, Adrian F.; ur Rehman, Naveed; McDonald-Maier, Klaus D.

    2015-01-01

    The integral image, an intermediate image representation, has found extensive use in multi-scale local feature detection algorithms, such as Speeded-Up Robust Features (SURF), allowing fast computation of rectangular features at constant speed, independent of filter size. For resource-constrained real-time embedded vision systems, computation and storage of integral image presents several design challenges due to strict timing and hardware limitations. Although calculation of the integral image only consists of simple addition operations, the total number of operations is large owing to the generally large size of image data. Recursive equations allow substantial decrease in the number of operations but require calculation in a serial fashion. This paper presents two new hardware algorithms that are based on the decomposition of these recursive equations, allowing calculation of up to four integral image values in a row-parallel way without significantly increasing the number of operations. An efficient design strategy is also proposed for a parallel integral image computation unit to reduce the size of the required internal memory (nearly 35% for common HD video). Addressing the storage problem of integral image in embedded vision systems, the paper presents two algorithms which allow substantial decrease (at least 44.44%) in the memory requirements. Finally, the paper provides a case study that highlights the utility of the proposed architectures in embedded vision systems. PMID:26184211
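
    As a hedged illustration of the recursive relations the paper builds on (not the row-parallel hardware decomposition itself), the standard integral-image recurrence and a constant-time box sum can be written as:

    ```python
    import numpy as np

    def integral_image(img):
        ii = np.zeros_like(img, dtype=np.float64)
        h, w = img.shape
        for y in range(h):
            row_sum = 0.0
            for x in range(w):
                row_sum += img[y, x]                       # serial recurrence along the row
                ii[y, x] = row_sum + (ii[y - 1, x] if y > 0 else 0.0)
        return ii

    def box_sum(ii, top, left, bottom, right):
        # Sum over the inclusive rectangle [top..bottom] x [left..right] with four lookups.
        total = ii[bottom, right]
        if top > 0:
            total -= ii[top - 1, right]
        if left > 0:
            total -= ii[bottom, left - 1]
        if top > 0 and left > 0:
            total += ii[top - 1, left - 1]
        return total
    ```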

  13. Integral Images: Efficient Algorithms for Their Computation and Storage in Resource-Constrained Embedded Vision Systems.

    PubMed

    Ehsan, Shoaib; Clark, Adrian F; Naveed ur Rehman; McDonald-Maier, Klaus D

    2015-07-10

    The integral image, an intermediate image representation, has found extensive use in multi-scale local feature detection algorithms, such as Speeded-Up Robust Features (SURF), allowing fast computation of rectangular features at constant speed, independent of filter size. For resource-constrained real-time embedded vision systems, computation and storage of integral image presents several design challenges due to strict timing and hardware limitations. Although calculation of the integral image only consists of simple addition operations, the total number of operations is large owing to the generally large size of image data. Recursive equations allow substantial decrease in the number of operations but require calculation in a serial fashion. This paper presents two new hardware algorithms that are based on the decomposition of these recursive equations, allowing calculation of up to four integral image values in a row-parallel way without significantly increasing the number of operations. An efficient design strategy is also proposed for a parallel integral image computation unit to reduce the size of the required internal memory (nearly 35% for common HD video). Addressing the storage problem of integral image in embedded vision systems, the paper presents two algorithms which allow substantial decrease (at least 44.44%) in the memory requirements. Finally, the paper provides a case study that highlights the utility of the proposed architectures in embedded vision systems.

  14. Molecular Hydrogen Formation : Effect of Dust Grain Temperature Fluctuations

    NASA Astrophysics Data System (ADS)

    Bron, Emeric; Le Bourlot, Jacques; Le Petit, Franck

    2013-06-01

    H2 formation is a hot topic in astrochemistry. Thanks to the Copernicus and FUSE satellites, its formation rate on dust grains in the diffuse interstellar gas has been inferred (Jura 1974, Gry et al. 2002). Nevertheless, detection of H2 emission in PDRs by ISO and Spitzer (Habart et al., 2004, 2005, 2011) showed that its formation mechanism can be efficient on warm grains (warmer than 30 K), whereas experimental studies showed that the Langmuir-Hinshelwood mechanism is only efficient in a narrow window of grain temperatures (typically between 10 and 20 K). The Eley-Rideal mechanism, in which H atoms are chemically bound to grain surfaces, could explain such a formation rate in PDRs (Le Bourlot et al. 2012). Usual dust size distributions (e.g. Mathis et al. 1977) favor smaller grains in a way that makes most of the available grain surface belong to small grains. As small grains are subject to large temperature fluctuations due to UV-photon absorption, calculations at a fixed temperature give incorrect results under strong UV fields. Here, we present a comprehensive study of the influence of this stochastic effect on H2 formation by the Langmuir-Hinshelwood and Eley-Rideal mechanisms. We use a master equation approach to calculate the statistics of the coupled fluctuations of the temperature and adsorbed H population of a grain. Doing so, we are able to calculate the formation rate on a grain under a given radiation field and given gas conditions. We find that the Eley-Rideal mechanism remains an efficient mechanism in PDRs, and that the Langmuir-Hinshelwood mechanism is more efficient than expected on warm grains. This procedure is then coupled to full cloud simulations with the Meudon PDR code. We compare the new results with more classical evaluations of the formation rate, and present the differences in terms of the chemical structure of the cloud and observable line intensities. We also highlight the influence of some microphysical parameters on the results.

  15. Highly Sensitive and Selective Uranium Detection in Natural Water Systems Using a Luminescent Mesoporous Metal-Organic Framework Equipped with Abundant Lewis Basic Sites: A Combined Batch, X-ray Absorption Spectroscopy, and First Principles Simulation Investigation.

    PubMed

    Liu, Wei; Dai, Xing; Bai, Zhuanling; Wang, Yanlong; Yang, Zaixing; Zhang, Linjuan; Xu, Lin; Chen, Lanhua; Li, Yuxiang; Gui, Daxiang; Diwu, Juan; Wang, Jianqiang; Zhou, Ruhong; Chai, Zhifang; Wang, Shuao

    2017-04-04

    Uranium is not only a strategic resource for the nuclear industry but also a global contaminant with high toxicity. Although several strategies have been established for detecting uranyl ions in water, searching for new uranium sensor materials with great sensitivity, selectivity, and stability remains a challenge. We introduce here a hydrolytically stable mesoporous terbium(III)-based MOF material, compound 1, whose channels are as large as 27 Å × 23 Å and are equipped with abundant exposed Lewis basic sites, the luminescence intensity of which can be efficiently and selectively quenched by uranyl ions. The detection limit in deionized water reaches 0.9 μg/L, far below the maximum contamination standard of 30 μg/L in drinking water defined by the United States Environmental Protection Agency, making compound 1 currently the only MOF material that can achieve this goal. More importantly, this material exhibits great capability in detecting uranyl ions in natural water systems such as lake water and seawater with pH adjusted to 4, where huge excesses of competing ions are present. The uranyl detection limits in Dushu Lake water and in seawater were calculated to be 14.0 and 3.5 μg/L, respectively. This great detection capability originates from the selective binding of uranyl ions onto the Lewis basic sites of the MOF material, as demonstrated by synchrotron radiation extended X-ray absorption fine structure, X-ray absorption near edge structure, and first-principles calculations, further leading to an effective energy transfer between the uranyl ions and the MOF skeleton.

  16. Design Studies of a CZT-based Detector Combined with a Pixel-Geometry-Matching Collimator for SPECT Imaging.

    PubMed

    Weng, Fenghua; Bagchi, Srijeeta; Huang, Qiu; Seo, Youngho

    2013-10-01

    Single Photon Emission Computed Tomography (SPECT) suffers from limited efficiency due to the need for collimators. Collimator properties largely determine the data statistics and image quality. Various materials and configurations of collimators have been investigated over many years. The main thrust of our study is to evaluate the design of pixel-geometry-matching collimators and to investigate their potential performance using Geant4 Monte Carlo simulations. Here, a pixel-geometry-matching collimator is defined as a collimator which is divided into the same number of pixels as the detector and in which the center of each pixel in the collimator is in one-to-one correspondence with that in the detector. The detector is made of Cadmium Zinc Telluride (CZT), which is one of the most promising materials for detecting hard X-rays and γ-rays due to its ability to obtain good energy resolution and high light output at room temperature. For our current project, we have designed a large-area, CZT-based gamma camera (20.192 cm × 20.192 cm) with a small pixel pitch (1.60 mm). The detector is pixelated and hence the intrinsic resolution can be as small as the size of the pixel. Collimator materials, collimator hole geometry, detection efficiency, and spatial resolution of the CZT detector combined with the pixel-matching collimator were calculated and analyzed under different conditions. From the simulation studies, we found that such a camera using rectangular holes has promising imaging characteristics in terms of spatial resolution, detection efficiency, and energy resolution.

  17. Automated Weight-Window Generation for Threat Detection Applications Using ADVANTG

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mosher, Scott W; Miller, Thomas Martin; Evans, Thomas M

    2009-01-01

    Deterministic transport codes have been used for some time to generate weight-window parameters that can improve the efficiency of Monte Carlo simulations. As the use of this hybrid computational technique is becoming more widespread, the scope of applications in which it is being applied is expanding. An active source of new applications is the field of homeland security--particularly the detection of nuclear material threats. For these problems, automated hybrid methods offer an efficient alternative to trial-and-error variance reduction techniques (e.g., geometry splitting or the stochastic weight window generator). The ADVANTG code has been developed to automate the generation of weight-window parameters for MCNP using the Consistent Adjoint Driven Importance Sampling method and employs the TORT or Denovo 3-D discrete ordinates codes to generate importance maps. In this paper, we describe the application of ADVANTG to a set of threat-detection simulations. We present numerical results for an 'active-interrogation' problem in which a standard cargo container is irradiated by a deuterium-tritium fusion neutron generator. We also present results for two passive detection problems in which a cargo container holding a shielded neutron or gamma source is placed near a portal monitor. For the passive detection problems, ADVANTG obtains an O(10⁴) speedup and, for a detailed gamma spectrum tally, an average O(10²) speedup relative to implicit-capture-only simulations, including the deterministic calculation time. For the active-interrogation problem, an O(10⁴) speedup is obtained when compared to a simulation with angular source biasing and crude geometry splitting.

  18. Porphyrin-based magnetic nanocomposites for efficient extraction of polycyclic aromatic hydrocarbons from water samples.

    PubMed

    Yu, Jing; Zhu, Shukui; Pang, Liling; Chen, Pin; Zhu, Gang-Tian

    2018-03-09

    Stable and reusable porphyrin-based magnetic nanocomposites were successfully synthesized for efficient extraction of polycyclic aromatic hydrocarbons (PAHs) from environmental water samples. Meso-tetra(4-carboxyphenyl)porphyrin (TCPP), a kind of porphyrin, can connect to the copolymer after amidation and was linked to Fe3O4@SiO2 magnetic nanospheres via cross-coupling. Several characterization techniques such as field emission scanning electron microscopy, transmission electron microscopy, X-ray diffraction, Fourier transform infrared spectrometry, vibrating sample magnetometry and a tensiometer were used to characterize the as-synthesized materials. The structure of the copolymer was similar to that of graphene, possessing sp2-conjugated carbon rings, but with an appropriate amount of delocalized π-electrons giving rise to higher extraction efficiency for heavy PAHs without sacrificing the performance in the extraction of light PAHs. Six extraction parameters, including the TCPP:Fe3O4@SiO2 (m:m) ratio, the amount of adsorbent, the type of desorption solvent, the desorption solvent volume, the adsorption time and the desorption time, were investigated. After the optimization of extraction conditions, a comparison of the extraction efficiency of Fe3O4@SiO2-TCPP and Fe3O4@SiO2@GO was carried out. The adsorption mechanism of TCPP for PAHs was studied by first-principles density functional theory (DFT) calculations. Combining experimental and calculated results, it was shown that the π-π stacking interaction was the main adsorption mechanism of TCPP for PAHs and that the amount of delocalized π-electrons plays an important role in the elution process. Under the optimal conditions, Fe3O4@SiO2-porphyrin showed good precision in intra-day (<8.9%) and inter-day (<13.0%) detection, low method detection limits (2-10 ng L⁻¹), and wide linearity (10-10000 ng L⁻¹). The method was applied to simultaneous analysis of 15 PAHs with acceptable recoveries, which were 71.1%-106.0% for ground water samples and 73.7%-107.1% for Yangtze River water samples, respectively. Copyright © 2018 Elsevier B.V. All rights reserved.

  19. Investigations of fluid-strain interaction using Plate Boundary Observatory borehole data

    NASA Astrophysics Data System (ADS)

    Boyd, Jeffrey Michael

    Software has a great impact on the energy efficiency of any computing system--it can manage the components of a system efficiently or inefficiently. The impact of software is amplified in the context of a wearable computing system used for activity recognition. The design space this platform opens up is immense and encompasses sensors, feature calculations, activity classification algorithms, sleep schedules, and transmission protocols. Design choices in each of these areas impact energy use, overall accuracy, and usefulness of the system. This thesis explores how software can influence the trade-off between energy consumption and system accuracy. In general, the more energy a system consumes, the more accurate it will be. We explore how finding the transitions between human activities can reduce the energy consumption of such systems without greatly reducing accuracy. We introduce the Log-likelihood Ratio Test as a method to detect transitions, and explore how choices of sensor, feature calculations, and parameters concerning time segmentation affect the accuracy of this method. We discovered that an approximately 5X increase in energy efficiency could be achieved with only a 5% decrease in accuracy. We also address how a system's sleep mode, in which the processor enters a low-power state and sensors are turned off, affects a wearable computing platform that does activity recognition. We discuss the energy trade-offs in each stage of the activity recognition process. We find that careful analysis of these parameters can result in great increases in energy efficiency if small compromises in overall accuracy can be tolerated. We call this the "Great Compromise." We found a 6X increase in efficiency with a 7% decrease in accuracy. We then consider how wireless transmission of data affects the overall energy efficiency of a wearable computing platform. We find that design decisions such as feature calculations and grouping size have a great impact on the energy consumption of the system because of the amount of data that is stored and transmitted. For example, storing and transmitting vector-based features such as FFT or DCT does not compress the signal and would use more energy than storing and transmitting the raw signal. The effect of grouping size on energy consumption depends on the feature. For scalar features, energy consumption is proportional to the inverse of grouping size, so it is reduced as grouping size goes up. For features that depend on the grouping size, such as FFT, energy increases with the logarithm of grouping size, so energy consumption increases slowly as grouping size increases. We find that compressing data through activity classification and transition detection significantly reduces energy consumption and that the energy consumed for the classification overhead is negligible compared to the energy savings from data compression. We provide mathematical models of energy usage and data generation, and test our ideas using a mobile computing platform, the Texas Instruments Chronos watch.
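
    As an illustrative sketch only (the actual features, window sizes, and threshold of the described system are not given here), a log-likelihood ratio test for a transition within a window of sensor samples can compare a single Gaussian fit against separate fits before and after a candidate change point:

    ```python
    import numpy as np

    def gaussian_loglik(x):
        x = np.asarray(x, dtype=np.float64)
        mean, var = np.mean(x), np.var(x) + 1e-9
        return np.sum(-0.5 * np.log(2 * np.pi * var) - (x - mean) ** 2 / (2 * var))

    def transition_llr(window, split):
        # H0: one Gaussian fits the whole window; H1: separate fits before/after the split.
        h0 = gaussian_loglik(window)
        h1 = gaussian_loglik(window[:split]) + gaussian_loglik(window[split:])
        return h1 - h0            # large values suggest an activity transition at `split`

    def detect_transition(window, threshold=10.0):
        if len(window) < 5:
            return None
        splits = list(range(2, len(window) - 2))
        llrs = [transition_llr(window, s) for s in splits]
        best = int(np.argmax(llrs))
        return (splits[best], llrs[best]) if llrs[best] > threshold else None
    ```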

  20. A semi-automatic traffic sign detection, classification, and positioning system

    NASA Astrophysics Data System (ADS)

    Creusen, I. M.; Hazelhoff, L.; de With, P. H. N.

    2012-01-01

    The availability of large-scale databases containing street-level panoramic images offers the possibility to perform semi-automatic surveying of real-world objects such as traffic signs. These inventories can be performed significantly more efficiently than using conventional methods. Governmental agencies are interested in these inventories for maintenance and safety reasons. This paper introduces a complete semi-automatic traffic sign inventory system. The system consists of several components. First, a detection algorithm locates the 2D position of the traffic signs in the panoramic images. Second, a classification algorithm is used to identify the traffic sign. Third, the 3D position of the traffic sign is calculated using the GPS position of the photographs. Finally, the results are listed in a table for quick inspection and are also visualized in a web browser.

  1. Designing an anion-functionalized fluorescent ionic liquid as an efficient and reversible turn-off sensor for detecting SO2.

    PubMed

    Che, Siying; Dao, Rina; Zhang, Weidong; Lv, Xiaoyu; Li, Haoran; Wang, Congmin

    2017-03-30

    A novel anion-functionalized fluorescent ionic liquid was designed and prepared, which was capable of capturing sulphur dioxide with high capacity and could also be used as a good colorimetric and fluorescent SO2 sensor. Compared to conventional fluorescent sensors, this fluorescent ionic liquid did not undergo aggregation-caused quenching or aggregation-induced emission, and its fluorescence was quenched when exposed to SO2. Experimental absorption and spectroscopic investigations, together with quantum chemical calculations, indicated that the quenching of the fluorescence originated from SO2 physical absorption, not chemical absorption. Furthermore, this fluorescent ionic liquid exhibited high selectivity, good quantification, and excellent reversibility for SO2 detection, and showed potential as an excellent liquid sensor.

  2. Critical Current Statistics of a Graphene-Based Josephson Junction Infrared Single Photon Detector

    NASA Astrophysics Data System (ADS)

    Walsh, Evan D.; Lee, Gil-Ho; Efetov, Dmitri K.; Heuck, Mikkel; Crossno, Jesse; Taniguchi, Takashi; Watanabe, Kenji; Ohki, Thomas A.; Kim, Philip; Englund, Dirk; Fong, Kin Chung

    Graphene is a promising material for single photon detection due to its broadband absorption and exceptionally low specific heat. We present a photon detector using a graphene sheet as the weak link in a Josephson junction (JJ) to form a threshold detector for single infrared photons. Calculations show that such a device could experience temperature changes of a few hundred percent leading to sub-Hz dark count rates and internal efficiencies approaching unity. We have fabricated the graphene-based JJ (gJJ) detector and measure switching events that are consistent with single photon detection under illumination by an attenuated laser. We study the physical mechanism for these events through the critical current behavior of the gJJ as a function of incident photon flux.

  3. Semi-automatic mapping of geological Structures using UAV-based photogrammetric data: An image analysis approach

    NASA Astrophysics Data System (ADS)

    Vasuki, Yathunanthan; Holden, Eun-Jung; Kovesi, Peter; Micklethwaite, Steven

    2014-08-01

    Recent advances in data acquisition technologies, such as Unmanned Aerial Vehicles (UAVs), have led to a growing interest in capturing high-resolution rock surface images. However, due to the large volumes of data that can be captured in a short flight, efficient analysis of this data brings new challenges, especially the time it takes to digitise maps and extract orientation data. We outline a semi-automated method that allows efficient mapping of geological faults using photogrammetric data of rock surfaces, which was generated from aerial photographs collected by a UAV. Our method harnesses advanced automated image analysis techniques and human data interaction to rapidly map structures and then calculate their dip and dip directions. Geological structures (faults, joints and fractures) are first detected from the primary photographic dataset and the equivalent three dimensional (3D) structures are then identified within a 3D surface model generated by structure from motion (SfM). From this information the location, dip and dip direction of the geological structures are calculated. A structure map generated by our semi-automated method obtained a recall rate of 79.8% when compared against a fault map produced using expert manual digitising and interpretation methods. The semi-automated structure map was produced in 10 min whereas the manual method took approximately 7 h. In addition, the dip and dip direction calculation, using our automated method, shows a mean±standard error of 1.9°±2.2° and 4.4°±2.6° respectively with field measurements. This shows the potential of using our semi-automated method for accurate and efficient mapping of geological structures, particularly from remote, inaccessible or hazardous sites.

  4. The methodology of the gas turbine efficiency calculation

    NASA Astrophysics Data System (ADS)

    Kotowicz, Janusz; Job, Marcin; Brzęczek, Mateusz; Nawrat, Krzysztof; Mędrych, Janusz

    2016-12-01

    In this paper, a methodology for calculating the isentropic efficiency of the compressor and turbine in a gas turbine installation on the basis of polytropic efficiency characteristics is presented. A gas turbine model is implemented in software for power plant simulation. Calculation algorithms based on an iterative model are presented for the isentropic efficiency of the compressor and for the isentropic efficiency of the turbine as a function of the turbine inlet temperature. The isentropic efficiency characteristics of the compressor and the turbine are developed by means of the above-mentioned algorithms. The development of gas turbines with high pressure ratios was the main driving force for this analysis. The obtained gas turbine electric efficiency characteristics show that increasing the pressure ratio above 50 is not justified, because the efficiency increases only slightly while the combustor outlet (turbine inlet) temperature increases significantly.
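
    For orientation, the standard textbook relation between the isentropic efficiency of a compressor, its polytropic efficiency η_p, and the pressure ratio π for an ideal gas with heat-capacity ratio γ is shown below; this particular closed form is a common reference expression, not a formula quoted from the paper:

    ```latex
    \eta_{s,\mathrm{compr}} =
      \frac{\pi^{(\gamma-1)/\gamma} - 1}
           {\pi^{(\gamma-1)/(\gamma\,\eta_p)} - 1}
    ```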

  5. A large 2D PSD for thermal neutron detection

    NASA Astrophysics Data System (ADS)

    Knott, R. B.; Smith, G. C.; Watt, G.; Boldeman, J. W.

    1997-02-01

    A 2D PSD based on a MWPC has been constructed for a small angle neutron scattering instrument. The active area of the detector was 640 × 640 mm². To meet the specifications for neutron detection efficiency and spatial resolution, and to minimise parallax, the gas mixture was 190 kPa 3He plus 100 kPa CF4, and the active volume had a thickness of 30 mm. The design maximum neutron count rate of the detector was 10⁵ events per second. The (calculated) neutron detection efficiency was 60% for 2 Å neutrons and the (measured) neutron energy resolution on the anode grid was typically 20% (fwhm). The location of a neutron detection event within the active area was determined using the wire-by-wire method: the spatial resolution (5 × 5 mm²) was thereby defined by the wire geometry. A 16-channel charge-sensitive preamplifier/amplifier/comparator module has been developed with a channel sensitivity of 0.1 V/fC, a noise line width of 0.4 fC (fwhm) and channel-to-channel cross-talk of less than 5%. The Proportional Counter Operating System (PCOS III) (LeCroy Corp, USA) was used for event encoding. The ECL signals produced by the 16-channel modules were latched in PCOS III by a trigger pulse from the anode and the fast encoders produced a position and width for each event. The information was transferred to a UNIX workstation for accumulation and online display.
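
    For context, a calculated efficiency of this kind is typically estimated from the standard absorption relation below, with n the 3He number density, σ(λ) the wavelength-dependent capture cross-section, and d the active gas thickness; this is a textbook estimate, not a formula reproduced from the paper:

    ```latex
    \varepsilon(\lambda) = 1 - e^{-\,n\,\sigma(\lambda)\,d}
    ```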

  6. Evaluation of collimation and imaging configuration in scintimammography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tsui, B.M.W.; Frey, E.C.; Wessell, D.E.

    1996-12-31

    Conventional scintimammography (SM) with 99mTc sestamibi has been limited to taking a single lateral view of the breast using a parallel-hole high resolution (LEHR) collimator. The collimator is placed close to the breast for best possible spatial resolution. However, the collimator geometry precludes imaging the breast from other views. We evaluated using a pinhole collimator instead of a LEHR collimator in SM for improved spatial resolution and detection efficiency, and to allow additional imaging views. Results from theoretical calculations indicated that pinhole collimators could be designed with higher spatial resolution and detection efficiency than LEHR when imaging small to medium size breasts. The geometrical shape of the pinhole collimator allows imaging of the breasts from both the lateral and craniocaudal views. The dual-view images allow better determination of the location of the tumors within the breast and improved detection of tumors located in the medial region of the breast. A breast model that simulates the shape and composition of the breast and breast tumors with different sizes and locations was added to an existing 3D mathematical cardiac-torso (MCAT) phantom. A cylindrically shaped phantom with 10 cm diameter and spherical inserts with different sizes and 99mTc sestamibi uptakes with respect to the background provide physical models of breast with tumors. Simulation studies using the breast and MCAT phantoms and experimental studies using the cylindrical phantom confirmed the utility of the pinhole collimator in SM for improved breast tumor detection.

  7. Efficient and Scalable Graph Similarity Joins in MapReduce

    PubMed Central

    Chen, Yifan; Zhang, Weiming; Tang, Jiuyang

    2014-01-01

    Along with the emergence of massive graph-modeled data, it is of great importance to investigate graph similarity joins due to their wide applications for multiple purposes, including data cleaning, and near duplicate detection. This paper considers graph similarity joins with edit distance constraints, which return pairs of graphs such that their edit distances are no larger than a given threshold. Leveraging the MapReduce programming model, we propose MGSJoin, a scalable algorithm following the filtering-verification framework for efficient graph similarity joins. It relies on counting overlapping graph signatures for filtering out nonpromising candidates. With the potential issue of too many key-value pairs in the filtering phase, spectral Bloom filters are introduced to reduce the number of key-value pairs. Furthermore, we integrate the multiway join strategy to boost the verification, where a MapReduce-based method is proposed for GED calculation. The superior efficiency and scalability of the proposed algorithms are demonstrated by extensive experimental results. PMID:25121135

  8. Efficient and scalable graph similarity joins in MapReduce.

    PubMed

    Chen, Yifan; Zhao, Xiang; Xiao, Chuan; Zhang, Weiming; Tang, Jiuyang

    2014-01-01

    Along with the emergence of massive graph-modeled data, it is of great importance to investigate graph similarity joins due to their wide applications for multiple purposes, including data cleaning, and near duplicate detection. This paper considers graph similarity joins with edit distance constraints, which return pairs of graphs such that their edit distances are no larger than a given threshold. Leveraging the MapReduce programming model, we propose MGSJoin, a scalable algorithm following the filtering-verification framework for efficient graph similarity joins. It relies on counting overlapping graph signatures for filtering out nonpromising candidates. With the potential issue of too many key-value pairs in the filtering phase, spectral Bloom filters are introduced to reduce the number of key-value pairs. Furthermore, we integrate the multiway join strategy to boost the verification, where a MapReduce-based method is proposed for GED calculation. The superior efficiency and scalability of the proposed algorithms are demonstrated by extensive experimental results.

  9. An information-theoretic approach to the gravitational-wave burst detection problem

    NASA Astrophysics Data System (ADS)

    Katsavounidis, E.; Lynch, R.; Vitale, S.; Essick, R.; Robinet, F.

    2016-03-01

    The advanced era of gravitational-wave astronomy, with data collected in part by the LIGO gravitational-wave interferometers, has begun as of fall 2015. One potential type of detectable gravitational waves is short-duration gravitational-wave bursts, whose waveforms can be difficult to predict. We present the framework for a new detection algorithm, called oLIB, that can be used in relatively low latency to turn calibrated strain data into a detection significance statement. This pipeline consists of 1) a sine-Gaussian matched-filter trigger generator based on the Q-transform, known as Omicron, 2) incoherent down-selection of these triggers to the most signal-like set, and 3) a fully coherent analysis of this signal-like set using the Markov chain Monte Carlo (MCMC) Bayesian evidence calculator LALInferenceBurst (LIB). We optimally extract this information by using a likelihood-ratio test (LRT) to map these search statistics into a significance statement. Using representative archival LIGO data, we show that the algorithm can detect gravitational-wave burst events of realistic strength in realistic instrumental noise with good detection efficiencies across different burst waveform morphologies. With support from the National Science Foundation under Grant PHY-0757058.

  10. Detection of abnormal item based on time intervals for recommender systems.

    PubMed

    Gao, Min; Yuan, Quan; Ling, Bin; Xiong, Qingyu

    2014-01-01

    With the rapid development of e-business, personalized recommendation has become a core competence for enterprises to gain profits and improve customer satisfaction. Although collaborative filtering is the most successful approach for building a recommender system, it suffers from "shilling" attacks. In recent years, research on shilling attacks has greatly improved. However, existing approaches suffer from serious problems of attack-model dependency and high computational cost. To solve these problems, an approach for the detection of abnormal items is proposed in this paper. First, two features common to all attack models are analyzed. A revised bottom-up discretized approach based on time intervals and these features is then proposed for the detection. The distributions of ratings in different time intervals are compared to detect anomalies based on the calculation of the chi-square statistic (χ²). We evaluated our approach on four types of items which are defined according to the life cycles of these items. The experimental results show that the proposed approach achieves a high detection rate with low computational cost when the number of attack profiles is more than 15. It improves the efficiency of shilling attack detection by narrowing down the suspicious users.
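
    A hedged sketch of the interval comparison idea: an interval's rating histogram can be tested against the item's overall rating distribution with a chi-square statistic; the binning, the null model, and the significance level here are assumptions rather than the paper's exact procedure.

    ```python
    import numpy as np
    from scipy.stats import chi2

    def interval_is_anomalous(interval_counts, overall_counts, alpha=0.01):
        interval = np.asarray(interval_counts, dtype=np.float64)   # e.g. counts of 1..5 star ratings
        overall = np.asarray(overall_counts, dtype=np.float64)
        # Expected counts in the interval if ratings followed the item's overall distribution.
        expected = overall / overall.sum() * interval.sum()
        valid = expected > 0
        stat = np.sum((interval[valid] - expected[valid]) ** 2 / expected[valid])
        dof = int(valid.sum()) - 1
        return stat > chi2.ppf(1 - alpha, dof), stat
    ```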

  11. Automatic detection system of shaft part surface defect based on machine vision

    NASA Astrophysics Data System (ADS)

    Jiang, Lixing; Sun, Kuoyuan; Zhao, Fulai; Hao, Xiangyang

    2015-05-01

    Detection of physical surface damage is an important part of shaft part quality inspection, and the traditional detection methods rely mostly on human visual identification, which has many disadvantages such as low efficiency and poor reliability. In order to improve the automation level of the quality inspection of shaft parts and establish the relevant industry quality standard, a machine vision inspection system connected to an MCU was designed to realize the surface inspection of shaft parts. The system adopts a monochrome line-scan digital camera and uses dark-field, forward illumination to acquire images with high contrast; the images are segmented into binary images using the maximum between-cluster variance method after image filtering and image enhancement; then the main contours are extracted based on aspect ratio and area criteria; the coordinates of the center of gravity of each defect area, i.e., the locating point coordinates, are then calculated. Finally, the locations of the defect areas are marked by a coding pen communicating with the MCU. Experiments show that no defect was missed and the false alarm rate was lower than 5%, showing that the designed system meets the demands of on-line, real-time detection of shaft parts.
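
    As an illustration of the segmentation and locating steps described above (a sketch assuming 8-bit grayscale input, not the deployed inspection code), the maximum between-cluster variance threshold and a defect centroid can be computed as follows:

    ```python
    import numpy as np

    def otsu_threshold(gray):
        # Maximum between-class variance threshold over a 256-bin histogram.
        hist, _ = np.histogram(gray.ravel(), bins=256, range=(0, 256))
        total = hist.sum()
        sum_all = np.dot(np.arange(256), hist)
        best_t, best_var = 0, 0.0
        w0, sum0 = 0.0, 0.0
        for t in range(256):
            w0 += hist[t]
            if w0 == 0 or w0 == total:
                continue
            sum0 += t * hist[t]
            w1 = total - w0
            mu0, mu1 = sum0 / w0, (sum_all - sum0) / w1
            var_between = w0 * w1 * (mu0 - mu1) ** 2
            if var_between > best_var:
                best_var, best_t = var_between, t
        return best_t

    def defect_centroid(mask):
        # Center of gravity of the defect pixels (the locating point), or None if empty.
        ys, xs = np.nonzero(mask)
        return (float(ys.mean()), float(xs.mean())) if ys.size else None
    ```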

  12. A simplified implementation of edge detection in MATLAB is faster and more sensitive than fast fourier transform for actin fiber alignment quantification.

    PubMed

    Kemeny, Steven Frank; Clyne, Alisa Morss

    2011-04-01

    Fiber alignment plays a critical role in the structure and function of cells and tissues. While fiber alignment quantification is important to experimental analysis and several different methods for quantifying fiber alignment exist, many studies focus on qualitative rather than quantitative analysis perhaps due to the complexity of current fiber alignment methods. Speed and sensitivity were compared in edge detection and fast Fourier transform (FFT) for measuring actin fiber alignment in cells exposed to shear stress. While edge detection using matrix multiplication was consistently more sensitive than FFT, image processing time was significantly longer. However, when MATLAB functions were used to implement edge detection, MATLAB's efficient element-by-element calculations and fast filtering techniques reduced computation cost 100 times compared to the matrix multiplication edge detection method. The new computation time was comparable to the FFT method, and MATLAB edge detection produced well-distributed fiber angle distributions that statistically distinguished aligned and unaligned fibers in half as many sample images. When the FFT sensitivity was improved by dividing images into smaller subsections, processing time grew larger than the time required for MATLAB edge detection. Implementation of edge detection in MATLAB is simpler, faster, and more sensitive than FFT for fiber alignment quantification.
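
    A minimal numpy/scipy sketch of quantifying fiber alignment from edge orientations (Sobel gradients, a magnitude-weighted orientation histogram, and a simple alignment index). This is a Python stand-in for the idea, not the authors' MATLAB code; the edge-pixel selection by percentile and the angular window are illustrative choices.

```python
import numpy as np
from scipy import ndimage

def fiber_angle_distribution(image, n_bins=36):
    """Histogram of fiber orientations (0-180 deg) weighted by edge strength."""
    img = image.astype(float)
    gx = ndimage.sobel(img, axis=1)        # horizontal intensity gradient
    gy = ndimage.sobel(img, axis=0)        # vertical intensity gradient
    magnitude = np.hypot(gx, gy)
    # Fiber direction is perpendicular to the intensity gradient
    angles = (np.rad2deg(np.arctan2(gy, gx)) + 90.0) % 180.0
    strong = magnitude > np.percentile(magnitude, 75)   # keep the strongest edge pixels
    hist, edges = np.histogram(angles[strong], bins=n_bins, range=(0, 180),
                               weights=magnitude[strong])
    return hist / hist.sum(), edges

def alignment_index(hist):
    """Fraction of edge energy within roughly +/-22 deg of the modal fiber angle."""
    n_bins = len(hist)
    peak = np.argmax(hist)
    window = [(peak + k) % n_bins for k in range(-4, 5)]   # 9 bins of 5 deg
    return hist[window].sum()

# Toy usage: vertical stripes give a strongly aligned orientation histogram
stripes = np.tile(np.sin(np.linspace(0, 20 * np.pi, 256)), (256, 1))
h, _ = fiber_angle_distribution(stripes)
print(round(alignment_index(h), 2))
```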

  13. Symmetry compression method for discovering network motifs.

    PubMed

    Wang, Jianxin; Huang, Yuannan; Wu, Fang-Xiang; Pan, Yi

    2012-01-01

    Discovering network motifs could provide a significant insight into systems biology. Interestingly, many biological networks have been found to have a high degree of symmetry (automorphism), which is inherent in biological network topologies. The symmetry due to the large number of basic symmetric subgraphs (BSSs) causes a certain redundant calculation in discovering network motifs. Therefore, we compress all basic symmetric subgraphs before extracting compressed subgraphs and propose an efficient decompression algorithm to decompress all compressed subgraphs without loss of any information. In contrast to previous approaches, the novel Symmetry Compression method for Motif Detection, named SCMD, eliminates most redundant calculations caused by the widespread symmetry of biological networks. We use SCMD to improve three notable exact algorithms and two efficient sampling algorithms. The results of all exact algorithms with SCMD are the same as those of the original algorithms, since SCMD is a lossless method. The sampling results show that the use of SCMD has almost no effect on the quality of the sampling results. For highly symmetric networks, we find that SCMD yields a remarkable speedup in both exact and sampling algorithms. Furthermore, SCMD enables us to find larger motifs in biological networks with notable symmetry than previously possible.

  14. DASS: efficient discovery and p-value calculation of substructures in unordered data.

    PubMed

    Hollunder, Jens; Friedel, Maik; Beyer, Andreas; Workman, Christopher T; Wilhelm, Thomas

    2007-01-01

    Pattern identification in biological sequence data is one of the main objectives of bioinformatics research. However, few methods are available for detecting patterns (substructures) in unordered datasets. Data mining algorithms mainly developed outside the realm of bioinformatics have been adapted for that purpose, but typically do not determine the statistical significance of the identified patterns. Moreover, these algorithms do not exploit the often modular structure of biological data. We present the algorithm DASS (Discovery of All Significant Substructures) that first identifies all substructures in unordered data (DASS(Sub)) in a manner that is especially efficient for modular data. In addition, DASS calculates the statistical significance of the identified substructures, for sets with at most one element of each type (DASS(P(set))), or for sets with multiple occurrences of elements (DASS(P(mset))). The power and versatility of DASS are demonstrated by four examples: combinations of protein domains in multi-domain proteins, combinations of proteins in protein complexes (protein subcomplexes), combinations of transcription factor target sites in promoter regions, and evolutionarily conserved protein interaction subnetworks. The program code and additional data are available at http://www.fli-leibniz.de/tsb/DASS

  15. EGRET High Energy Capability and Multiwavelength Flare Studies and Solar Flare Proton Spectra

    NASA Technical Reports Server (NTRS)

    Chupp, Edward L.

    1997-01-01

    UNH was assigned the responsibility to use their accelerator neutron measurements to verify the TASC response function and to modify the TASC fitting program to include a high energy neutron contribution. Direct accelerator-based measurements by UNH of the energy-dependent efficiencies for detecting neutrons with energies from 36 to 720 MeV in NaI were compared with Monte Carlo TASC calculations. The calculated TASC efficiencies are somewhat lower (by about 20%) than the accelerator results in the energy range 70-300 MeV. The measured energy-loss spectrum for 207 MeV neutron interactions in NaI was compared with the Monte Carlo response for 200 MeV neutrons in the TASC, indicating good agreement. Based on this agreement, the simulation was considered to be sufficiently accurate to generate a neutron response library to be used by UNH in modifying the TASC fitting program to include a neutron component in the flare spectrum modeling. TASC energy-loss data on the 1991 June 11 flare were transferred to UNH. Also included as an appendix: Gamma-rays and neutrons as a probe of flare proton spectra: the solar flare of 11 June 1991.

  16. Applicability of Monte-Carlo Simulation to Equipment Design of Radioactive Noble Gas Monitor

    NASA Astrophysics Data System (ADS)

    Sakai, Hirotaka; Hattori, Kanako; Umemura, Norihiro

    In nuclear facilities, radioactive noble gas is continuously monitored using a radioactive noble gas monitor with a beta-sensitive plastic scintillation radiation detector. The detection efficiency of the monitor is generally calibrated using a calibration loop and standard radioactive noble gases such as 85Kr. In this study, the applicability of PHITS to the equipment design of the radioactive noble gas monitor was evaluated by comparing the calculated results with the results of actual calibration loop tests, in order to simplify the radiation monitor design evaluation. It was confirmed that the calculated results agreed well with the test results of the monitor after modeling. In addition, key parameters for equipment design, such as the thickness of the detector window and the depth of the sampler, were also specified and evaluated.

  17. Using a Calculated Pulse Rate with an Artificial Neural Network to Detect Irregular Interbeats.

    PubMed

    Yeh, Bih-Chyun; Lin, Wen-Piao

    2016-03-01

    Heart rate is an important clinical measure that is often used in pathological diagnosis and prognosis. Valid detection of irregular heartbeats is crucial in clinical practice. We propose an artificial neural network that uses the calculated pulse rate to detect irregular interbeats. The proposed system measures the calculated pulse rate to determine an "irregular interbeat on" or "irregular interbeat off" event. If an irregular interbeat is detected, the proposed system produces a danger warning, which is helpful for clinicians. If a non-irregular interbeat is detected, the proposed system displays the calculated pulse rate. We include a flow chart of the proposed software. In an experiment, we measured the calculated pulse rates and achieved an error percentage of < 3% in 20 participants with a wide age range. When we used the calculated pulse rates to detect irregular interbeats, we found such irregular interbeats in eight participants.

  18. Calculation of the detection limits for radionuclides identified in gamma-ray spectra based on post-processing peak analysis results.

    PubMed

    Korun, M; Vodenik, B; Zorko, B

    2018-03-01

    A new method for calculating the detection limits of gamma-ray spectrometry measurements is presented. The method is applicable to gamma-ray emitters irrespective of the influence of the peaked background, the origin of the background, and the overlap with other peaks. For multi-gamma-ray emitters, it offers the opportunity to calculate a common detection limit corresponding to several peaks. The detection limit is calculated by approximating the dependence of the uncertainty in the indication on its value with a second-order polynomial. In this approach the relation between the input quantities and the detection limit is described by an explicit expression and can be easily investigated. The detection limit is calculated from the data usually provided in the reports of peak-analyzing programs: the peak areas and their uncertainties. As a result, the need to use individual channel contents for calculating the detection limit is bypassed. Copyright © 2017 Elsevier Ltd. All rights reserved.
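
    The abstract does not give the explicit expression, so the following is only a generic sketch of the ISO 11929-style construction it alludes to: the detection limit is the indication value x# satisfying x# = k_alpha*u(0) + k_beta*u(x#), with the uncertainty u(x) approximated by a second-order polynomial whose coefficients would be derived from the reported peak areas and uncertainties. The polynomial coefficients and the fixed-point solver are illustrative, not the paper's formulation.

```python
import numpy as np

def detection_limit(u_of_x, k_alpha=1.645, k_beta=1.645, tol=1e-6, max_iter=100):
    """Solve x# = k_alpha*u(0) + k_beta*u(x#) by fixed-point iteration.

    u_of_x : callable returning the standard uncertainty of the indication
             as a function of its (hypothetical true) value.
    """
    decision_threshold = k_alpha * u_of_x(0.0)
    x = 2.0 * decision_threshold                  # starting guess
    for _ in range(max_iter):
        x_new = decision_threshold + k_beta * u_of_x(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Second-order polynomial approximation u(x)^2 = a + b*x + c*x^2; the
# coefficients here are illustrative placeholders, not fitted values
a, b, c = 120.0, 1.0, 0.0004
u_poly = lambda x: np.sqrt(a + b * x + c * x * x)
print(round(detection_limit(u_poly), 1))   # detection limit in net peak counts
```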

  19. An 81.6 μW FastICA processor for epileptic seizure detection.

    PubMed

    Yang, Chia-Hsiang; Shih, Yi-Hsin; Chiueh, Herming

    2015-02-01

    To improve the performance of epileptic seizure detection, independent component analysis (ICA) is applied to multi-channel signals to separate artifacts and signals of interest. FastICA is an efficient algorithm to compute ICA. To reduce the energy dissipation, eigenvalue decomposition (EVD) is utilized in the preprocessing stage to reduce the convergence time of the iterative calculation of ICA components. EVD is computed efficiently through an array structure of processing elements running in parallel. An area-efficient EVD architecture is realized by leveraging the approximate Jacobi algorithm, leading to a 77.2% area reduction. By choosing a proper memory element and reduced wordlength, the power and area of the storage memory are reduced by 95.6% and 51.7%, respectively. The chip area is minimized through fixed-point implementation and architectural transformations. Given a latency constraint of 0.1 s, an 86.5% area reduction is achieved compared to the direct-mapped architecture. Fabricated in 90 nm CMOS, the core area of the chip is 0.40 mm(2). The FastICA processor, part of an integrated epileptic control SoC, dissipates 81.6 μW at 0.32 V. The computation delay of a frame of 256 samples for 8 channels is 84.2 ms. Compared to prior work, 0.5% power dissipation, 26.7% silicon area, and a 3.4× computation speedup are achieved. The performance of the chip was verified with a human dataset.

  20. Ice Accretion Calculations for a Commercial Transport Using the LEWICE3D, ICEGRID3D and CMARC Programs

    NASA Technical Reports Server (NTRS)

    Bidwell, Colin S.; Pinella, David; Garrison, Peter

    1999-01-01

    Collection efficiency and ice accretion calculations were made for a commercial transport using the NASA Lewis LEWICE3D ice accretion code, the ICEGRID3D grid code and the CMARC panel code. All of the calculations were made on a Windows 95 based personal computer. The ice accretion calculations were made for the nose, wing, horizontal tail and vertical tail surfaces. Ice shapes typifying those of a 30 minute hold were generated. Collection efficiencies were also generated for the entire aircraft using the newly developed unstructured collection efficiency method. The calculations highlight the flexibility and cost effectiveness of the LEWICE3D, ICEGRID3D, CMARC combination.

  1. Detection of abnormal living patterns for elderly living alone using support vector data description.

    PubMed

    Shin, Jae Hyuk; Lee, Boreom; Park, Kwang Suk

    2011-05-01

    In this study, we developed an automated behavior analysis system using infrared (IR) motion sensors to assist the independent living of the elderly who live alone and to improve the efficiency of their healthcare. An IR motion-sensor-based activity-monitoring system was installed in the houses of the elderly subjects to collect motion signals, from which three feature values were calculated: activity level, mobility level, and nonresponse interval (NRI). The support vector data description (SVDD) method was used to classify normal behavior patterns and to detect abnormal behavioral patterns based on these three feature values. Simulation data and real data were used to verify the proposed method in the individual analysis. A robust scheme is presented for optimally selecting the values of the different parameters, especially the scale parameter of the Gaussian kernel function involved in training the SVDD and the window length T of the circadian rhythmic approach, with the aim of applying the SVDD to daily behavior patterns calculated over 24 h. Accuracies measured by positive predictive value (PPV) were 95.8% and 90.5% for the simulation and real data, respectively. The results suggest that a monitoring system utilizing IR motion sensors together with abnormal-behavior-pattern detection with SVDD is an effective method for home healthcare of elderly people living alone.

  2. LH launcher Arcs Formation and Detection on JET

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baranov, Yu. F.; Challis, C. D.; Kirov, K.

    2011-12-23

    Mechanisms of arc formation have been analyzed, and the critical electric fields for the multipactor effect were calculated, compared to the experimental values, and found to be within the normal operational space of the LH system on JET. It has been shown that the characteristic electron energies (20-1000) eV for the highest multipactor resonances (N = 4-9) are within the limits of secondary electron yield above 1 required for multipactoring. Electrons with these energies provide the highest gas desorption efficiency when hitting the waveguide walls. The effect of higher waveguide modes and magnetic field on the multipactor was also considered. The distribution function for electrons accelerated by LH waves in front of the launcher has been calculated. The field emission currents have been estimated and found to be small. It is proposed that emission of Fe15, 16 lines, which can be obtained with improved diagnostics, could be used to detect arcs that are missed by a protection system based on the reflected power. The reliability and time response of these signals are discussed. A similar technique based on the observation of the emission of low ionized atoms can be used for fast detection of other undesirable events, to avoid sputtering or melting of plasma facing components such as the RF antenna. These techniques are especially powerful if they are based on emission uniquely associated with specific locations and components.

  3. Factorizing the motion sensitivity function into equivalent input noise and calculation efficiency.

    PubMed

    Allard, Rémy; Arleo, Angelo

    2017-01-01

    The photopic motion sensitivity function of the energy-based motion system is band-pass peaking around 8 Hz. Using an external noise paradigm to factorize the sensitivity into equivalent input noise and calculation efficiency, the present study investigated if the variation in photopic motion sensitivity as a function of the temporal frequency is due to a variation of equivalent input noise (e.g., early temporal filtering) or calculation efficiency (ability to select and integrate motion). For various temporal frequencies, contrast thresholds for a direction discrimination task were measured in presence and absence of noise. Up to 15 Hz, the sensitivity variation was mainly due to a variation of equivalent input noise and little variation in calculation efficiency was observed. The sensitivity fall-off at very high temporal frequencies (from 15 to 30 Hz) was due to a combination of a drop of calculation efficiency and a rise of equivalent input noise. A control experiment in which an artificial temporal integration was applied to the stimulus showed that an early temporal filter (generally assumed to affect equivalent input noise, not calculation efficiency) could impair both the calculation efficiency and equivalent input noise at very high temporal frequencies. We conclude that at the photopic luminance intensity tested, the variation of motion sensitivity as a function of the temporal frequency was mainly due to early temporal filtering, not to the ability to select and integrate motion. More specifically, we conclude that photopic motion sensitivity at high temporal frequencies is limited by internal noise occurring after the transduction process (i.e., neural noise), not by quantal noise resulting from the probabilistic absorption of photons by the photoreceptors as previously suggested.

  4. Laser-Induced Photofragmentation Fluorescence Imaging of Alkali Compounds in Flames.

    PubMed

    Leffler, Tomas; Brackmann, Christian; Aldén, Marcus; Li, Zhongshan

    2017-06-01

    Laser-induced photofragmentation fluorescence has been investigated for the imaging of alkali compounds in premixed laminar methane-air flames. An ArF excimer laser, providing pulses at a wavelength of 193 nm, was used to photodissociate KCl, KOH, and NaCl molecules in the post-flame region, and fluorescence from the excited atomic alkali fragment was detected. Fluorescence emission spectra showed distinct lines of the alkali atoms, allowing for efficient background filtering. Temperature data from Rayleigh scattering measurements together with simulations of potassium chemistry presented in the literature allowed for conclusions on the relative contributions of the potassium species KOH and KCl to the detected signal. Experimental approaches for separate measurements of these components are discussed. Signal power dependence and calculated fractions of dissociated molecules indicate saturation of the photolysis process, independent of absorption cross-section, under the experimental conditions. Quantitative KCl concentrations up to 30 parts per million (ppm) were evaluated from the fluorescence data and showed good agreement with results from ultraviolet absorption measurements. Detection limits for KCl photofragmentation fluorescence imaging of 0.5 and 1.0 ppm were determined for averaged and single-shot data, respectively. Moreover, simultaneous imaging of KCl and NaCl was demonstrated using a stereoscope with filters. The results indicate that the photofragmentation method can be employed for detailed studies of alkali chemistry in laboratory flames for validation of chemical kinetic mechanisms crucial for efficient biomass fuel utilization.

  5. Experimental Verification of Bayesian Planet Detection Algorithms with a Shaped Pupil Coronagraph

    NASA Astrophysics Data System (ADS)

    Savransky, D.; Groff, T. D.; Kasdin, N. J.

    2010-10-01

    We evaluate the feasibility of applying Bayesian detection techniques to discovering exoplanets using high contrast laboratory data with simulated planetary signals. Background images are generated at the Princeton High Contrast Imaging Lab (HCIL), with a coronagraphic system utilizing a shaped pupil and two deformable mirrors (DMs) in series. Estimates of the electric field at the science camera are used to correct for quasi-static speckle and produce symmetric high contrast dark regions in the image plane. Planetary signals are added in software, or via a physical star-planet simulator which adds a second off-axis point source before the coronagraph with a beam recombiner, calibrated to a fixed contrast level relative to the source. We produce a variety of images, with varying integration times and simulated planetary brightness. We then apply automated detection algorithms such as matched filtering to attempt to extract the planetary signals. This allows us to evaluate the efficiency of these techniques in detecting planets in a high noise regime and eliminating false positives, as well as to test existing algorithms for calculating the required integration times for these techniques to be applicable.

  6. The practical application of signal detection theory to image quality assessment in x-ray image intensifier-TV fluoroscopy.

    PubMed

    Marshall, N W

    2001-06-01

    This paper applies a published version of signal detection theory to x-ray image intensifier fluoroscopy data and compares the results with more conventional subjective image quality measures. An eight-bit digital framestore was used to acquire temporally contiguous frames of fluoroscopy data from which the modulation transfer function (MTF(u)) and noise power spectrum were established. These parameters were then combined to give detective quantum efficiency (DQE(u)) and used in conjunction with signal detection theory to calculate contrast-detail performance. DQE(u) was found to lie between 0.1 and 0.5 for a range of fluoroscopy systems. Two separate image quality experiments were then performed in order to assess the correspondence between the objective and subjective methods. First, image quality for a given fluoroscopy system was studied as a function of doserate using objective parameters and a standard subjective contrast-detail method. Following this, the two approaches were used to assess three different fluoroscopy units. Agreement between objective and subjective methods was good; doserate changes were modelled correctly while both methods ranked the three systems consistently.

  7. Rapid optimization method of the strong stray light elimination for extremely weak light signal detection.

    PubMed

    Wang, Geng; Xing, Fei; Wei, Minsong; You, Zheng

    2017-10-16

    Strong stray light severely interferes with the detection of weak and small optical signals and is difficult to suppress. In this paper, a miniaturized baffle with angled vanes is proposed and a rapid optimization model for strong-light elimination is built, which suppresses stray light better than conventional vanes and can optimize the positions of the vanes efficiently and accurately. Furthermore, a light energy distribution model was built based on the light projection at a specific angle, and light propagation models of the vanes and sidewalls were built based on Lambertian scattering, both of which serve as the basis of a stray-light calculation method. Moreover, the Monte-Carlo method was employed to realize the Point Source Transmittance (PST) simulation; the simulation result was consistent with the calculation result based on our models, and the PST could be improved by 2-3 times at small incident angles for a baffle designed by the new method. Meanwhile, the simulation result was verified by laboratory tests, and the new model, with derived analytical expressions, can reduce the simulation time significantly.

  8. Monte Carlo simulation of β γ coincidence system using plastic scintillators in 4π geometry

    NASA Astrophysics Data System (ADS)

    Dias, M. S.; Piuvezam-Filho, H.; Baccarelli, A. M.; Takeda, M. N.; Koskinas, M. F.

    2007-09-01

    A modified version of a Monte Carlo code called Esquema, developed at the Nuclear Metrology Laboratory at IPEN, São Paulo, Brazil, has been applied to simulate a 4πβ(PS)-γ coincidence system designed for primary radionuclide standardisation. This system consists of a plastic scintillator in 4π geometry, for alpha or electron detection, coupled to a NaI(Tl) counter for gamma-ray detection. The response curves for monoenergetic electrons and photons had been calculated previously with the Penelope code and were applied as input data to the Esquema code. The latter code simulates all the disintegration processes, from the precursor nucleus to the ground state of the daughter radionuclide. As a result, the curve of the observed disintegration rate as a function of the beta efficiency parameter can be simulated. A least-squares fit between the experimental activity values and the Monte Carlo calculation provided the actual radioactive source activity, without the need for conventional extrapolation procedures. Application of this methodology to 60Co and 133Ba radioactive sources is presented and showed results in good agreement with a conventional proportional counter 4πβ(PC)-γ coincidence system.

  9. Study on the early warning mechanism for the security of blast furnace hearths

    NASA Astrophysics Data System (ADS)

    Zhao, Hong-bo; Huo, Shou-feng; Cheng, Shu-sen

    2013-04-01

    The campaign life of blast furnace (BF) hearths has become the limiting factor for the safe and highly efficient production of modern BFs. However, the early warning mechanism for hearth security has not been well defined. In this article, based on heat transfer calculations and heat flux and erosion monitoring, the features of heat flux and erosion were analyzed and compared among different types of hearths. The primary detecting elements, mathematical models, evaluation standards, and warning methods are discussed. A novel early warning mechanism with three-level quantitative standards is proposed for BF hearth security.

  10. Room temperature operation of InxGa1-xSb/InAs type-II quantum well infrared photodetectors grown by MOCVD

    NASA Astrophysics Data System (ADS)

    Wu, D. H.; Zhang, Y. Y.; Razeghi, M.

    2018-03-01

    We demonstrate room temperature operation of In0.5Ga0.5Sb/InAs type-II quantum well photodetectors on an InAs substrate grown by metal-organic chemical vapor deposition. At 300 K, the detector exhibits a dark current density of 0.12 A/cm^2 and a peak responsivity of 0.72 A/W corresponding to a quantum efficiency of 23.3%, with a calculated specific detectivity of 2.4 × 10^9 cm Hz^(1/2)/W at 3.81 μm.
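
    The quoted quantum efficiency can be cross-checked from the responsivity with QE = R·h·c/(q·λ). The shot-noise-limited detectivity D* = R/sqrt(2·q·J_dark) computed below is only an upper-bound sanity check, so it comes out somewhat higher than the reported 2.4 × 10^9, which presumably includes additional noise contributions.

```python
# Cross-check of the quoted detector figures (input values taken from the abstract)
h = 6.626e-34       # Planck constant, J s
c = 2.998e8         # speed of light, m/s
q = 1.602e-19       # elementary charge, C

wavelength = 3.81e-6         # m
responsivity = 0.72          # A/W
dark_current_density = 0.12  # A/cm^2

# External quantum efficiency: QE = R * h * c / (q * lambda)
qe = responsivity * h * c / (q * wavelength)
print(f"quantum efficiency ~ {qe:.1%}")               # ~23%, matching the abstract

# Shot-noise-limited specific detectivity D* = R / sqrt(2 q J_dark), in cm Hz^1/2 / W
d_star_shot = responsivity / (2 * q * dark_current_density) ** 0.5
print(f"shot-noise-limited D* ~ {d_star_shot:.1e} cm Hz^0.5/W")   # upper bound, ~4e9
```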

  11. Nature of gamma rays background radiation in new and old buildings of Qatar University

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Al-Houty, L.; Abou-Leila, H.; El-Kameesy, S.

    Measurements and analysis of the gamma-background radiation spectrum in four different places on the Qatar University campus were performed in the energy range 10 keV-3 MeV using a hyper-pure Ge detector. The dependence of the detector's absolute photopeak efficiency on gamma-ray energy was determined, and the data were corrected accordingly. The absorbed dose for each gamma line was calculated, and an estimate of the total absorbed dose for the detected gamma lines in the four different places was obtained. Comparison with other results was also performed.

  12. Calibration and performance of a real-time gamma-ray spectrometry water monitor using a LaBr3(Ce) detector

    NASA Astrophysics Data System (ADS)

    Prieto, E.; Casanovas, R.; Salvadó, M.

    2018-03-01

    A scintillation gamma-ray spectrometry water monitor with a 2″ × 2″ LaBr3(Ce) detector was characterized in this study. This monitor measures gamma-ray spectra of river water. Energy and resolution calibrations were performed experimentally, whereas the detector efficiency was determined using Monte Carlo simulations with the EGS5 code system. Values of the minimum detectable activity concentrations for 131I and 137Cs were calculated for different integration times. As an example of the monitor's performance after calibration, a radiological increment during a rainfall episode was studied.
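
    A minimal sketch of a Currie-type minimum detectable activity concentration, the kind of quantity reported here for 131I and 137Cs. The background counts, efficiency, emission probability, live time, and water volume below are placeholders, not the monitor's calibration data.

```python
import math

def mda_concentration(background_counts, live_time_s, efficiency, emission_prob, volume_l):
    """Currie-type minimum detectable activity concentration (Bq/L).

    MDA = (2.71 + 4.65*sqrt(B)) / (t * eff * p_gamma * V)
    """
    ld_counts = 2.71 + 4.65 * math.sqrt(background_counts)
    return ld_counts / (live_time_s * efficiency * emission_prob * volume_l)

# Illustrative numbers only
print(mda_concentration(background_counts=400,   # counts in the 662 keV region
                        live_time_s=3600,        # 1 h integration
                        efficiency=0.02,         # full-energy peak efficiency
                        emission_prob=0.85,      # 137Cs 662 keV emission probability
                        volume_l=5.0))           # measured water volume, in Bq/L
```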

  13. CZT drift strip detectors for high energy astrophysics

    NASA Astrophysics Data System (ADS)

    Kuvvetli, I.; Budtz-Jørgensen, C.; Caroli, E.; Auricchio, N.

    2010-12-01

    Requirements for X- and gamma-ray detectors for future High Energy Astrophysics missions include high detection efficiency and good energy resolution as well as fine position sensitivity, even in three dimensions. We report on experimental investigations of the CZT drift detector developed at DTU Space. It is operated in the planar transverse field (PTF) mode, with the purpose of demonstrating that the good energy resolution of the CZT drift detector can be combined with the high efficiency of the PTF configuration. Furthermore, we demonstrated and characterized the 3D sensing capabilities of this detector configuration. The CZT drift strip detector (10 mm × 10 mm × 2.5 mm) was characterized both in the standard illumination geometry, the Photon Parallel Field (PPF) configuration, and in the PTF configuration. The detection efficiency and energy resolution are compared for both configurations. The PTF configuration provided a higher efficiency, in agreement with calculations. The detector energy resolution was found to be the same (3 keV FWHM at 122 keV) in both PPF and PTF. The depth sensing capabilities offered by drift strip detectors were investigated by illuminating the detector with a collimated photon beam of 57Co radiation in the PTF configuration. The width (300 μm FWHM at 122 keV) of the measured depth distributions was almost equal to the finite beam size. However, the data indicate that the best achievable depth resolution for the CZT drift detector is 90 μm FWHM at 122 keV and that it is determined by the electronic noise from the setup.

  14. Quantitative evaluation of patient-specific quality assurance using online dosimetry system

    NASA Astrophysics Data System (ADS)

    Jung, Jae-Yong; Shin, Young-Ju; Sohn, Seung-Chang; Min, Jung-Whan; Kim, Yon-Lae; Kim, Dong-Su; Choe, Bo-Young; Suh, Tae-Suk

    2018-01-01

    In this study, we investigated the clinical performance of an online dosimetry system (Mobius FX system, MFX) by 1) dosimetric plan verification using gamma passing rates and dose volume metrics and 2) evaluation of error-detection capability using deliberately introduced machine errors. Eighteen volumetric modulated arc therapy (VMAT) plans were studied. To evaluate the clinical performance of the MFX, we used gamma analysis and dose volume histogram (DVH) analysis. In addition, to evaluate the error-detection capability, we used gamma analysis and DVH analysis utilizing three types of deliberately introduced errors (Type 1: gantry angle-independent multi-leaf collimator (MLC) error, Type 2: gantry angle-dependent MLC error, and Type 3: gantry angle error). In the dosimetric verification comparison of the physical dosimetry system (Delta4PT) and the online dosimetry system (MFX), the gamma passing rates of the two dosimetry systems showed very good agreement with the treatment planning system (TPS) calculation. For the average dose difference between the TPS calculation and the MFX measurement, most of the dose metrics showed good agreement within a tolerance of 3%. In the error-detection comparison of Delta4PT and MFX, the gamma passing rates of the two dosimetry systems did not meet the 90% acceptance criterion when the magnitude of error exceeded 2 mm and 1.5°, respectively, for error plans of Types 1, 2, and 3. For delivery with all error types, the average dose difference of the PTV due to error magnitude showed agreement between the TPS calculation and the MFX measurement within 1%. Overall, the results of the online dosimetry system showed very good agreement with those of the physical dosimetry system. Our results suggest that a log file-based online dosimetry system is a very suitable verification tool for accurate and efficient clinical routines for patient-specific quality assurance (QA).
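
    For readers unfamiliar with the gamma passing rates used above, the sketch below shows a simplified global 1-D gamma-index calculation. Clinical implementations work in 3-D with dose interpolation and tighter search strategies; the 3%/3 mm criteria and the 10% low-dose threshold are common conventions, chosen here only for illustration.

```python
import numpy as np

def gamma_pass_rate(dose_ref, dose_eval, positions_mm, dd=0.03, dta_mm=3.0, low_dose_cut=0.1):
    """Global 1-D gamma analysis: percentage of reference points with gamma <= 1."""
    d_max = dose_ref.max()
    gammas = []
    for x_r, d_r in zip(positions_mm, dose_ref):
        if d_r < low_dose_cut * d_max:                     # skip the low-dose region
            continue
        dose_term = (dose_eval - d_r) / (dd * d_max)       # global dose-difference criterion
        dist_term = (positions_mm - x_r) / dta_mm          # distance-to-agreement criterion
        gammas.append(np.sqrt(dose_term**2 + dist_term**2).min())
    gammas = np.array(gammas)
    return 100.0 * np.mean(gammas <= 1.0)

# Toy usage: evaluated profile shifted by 1 mm relative to the reference
x = np.arange(0.0, 100.0, 1.0)
ref = np.exp(-((x - 50.0) / 15.0) ** 2)
ev = np.exp(-((x - 51.0) / 15.0) ** 2)
print(gamma_pass_rate(ref, ev, x))                         # high passing rate for a small shift
```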

  15. Automated detection of Martian water ice clouds: the Valles Marineris

    NASA Astrophysics Data System (ADS)

    Ogohara, Kazunori; Munetomo, Takafumi; Hatanaka, Yuji; Okumura, Susumu

    2016-10-01

    We need to extract water ice clouds from the large number of Mars images in order to reveal spatial and temporal variations in water ice cloud occurrence and to understand the climatology of water ice clouds meteorologically. However, the visible images observed by Mars orbiters over several years are too numerous to inspect visually, even when the inspection is limited to one region. Therefore, an automated detection algorithm for Martian water ice clouds is necessary for collecting ice cloud images efficiently. In addition, it may reveal new aspects of the spatial and temporal variations of water ice clouds of which we have not previously been aware. We present a method for automatically evaluating the presence of Martian water ice clouds using difference images and cross-correlation distributions calculated from blue band images of the Valles Marineris obtained by the Mars Orbiter Camera onboard the Mars Global Surveyor (MGS/MOC). We derived one subtracted image and one cross-correlation distribution from two reflectance images. The difference between the maximum and the average, the variance, the kurtosis, and the skewness of the subtracted image were calculated. Those of the cross-correlation distribution were also calculated. These eight statistics were used as feature vectors for training a Support Vector Machine, and its generalization ability was tested using 10-fold cross-validation. The F-measure and accuracy tended to be approximately 0.8 when the maximum of the normalized reflectance and the difference between the maximum and the average of the cross-correlation were chosen as features. In the process of developing the detection algorithm, we found many cases where the Valles Marineris became clearly brighter than adjacent areas in the blue band. It is at present unclear whether the bright Valles Marineris indicates the occurrence of water ice clouds inside the Valles Marineris. Therefore, subtracted images showing the bright Valles Marineris were excluded from the detection of water ice clouds.

  16. Knock detection system to improve petrol engine performance, using microphone sensor

    NASA Astrophysics Data System (ADS)

    Sujono, Agus; Santoso, Budi; Juwana, Wibawa Endra

    2017-01-01

    Increases in the power and efficiency of spark-ignition (petrol) engines are always confronted by the problem of knock. Indeed, the characteristics of the engine itself are largely determined by the occurrence of knock. To date, this knocking problem has not been solved completely. Knock is influenced principally by engine speed, load (throttle opening), and spark advance (ignition timing). In this research, the engine was mounted on an engine test bed (ETB) equipped with the necessary sensors. Knock is detected using a new method based on pattern recognition: the knock sound is picked up by a microphone sensor and an active filter, and knock is identified through regression of the normalized envelope function and calculation of the Euclidean distance. The system is implemented on a microcontroller with a fuzzy logic ignition controller (FLIC), which aims to set the proper spark advance in accordance with operating conditions. This system can improve engine performance by approximately 15%.

  17. Ion mobility spectrometry as a detector for molecular imprinted polymer separation and metronidazole determination in pharmaceutical and human serum samples.

    PubMed

    Jafari, M T; Rezaei, B; Zaker, B

    2009-05-01

    The application of ion mobility spectrometry (IMS) as the detection technique for a separation method based on a molecularly imprinted polymer (MIP) was investigated and evaluated for the first time. On the basis of the results obtained in this work, the MIP-IMS system can be used as a powerful technique for the separation, preconcentration, and detection of the drug metronidazole in pharmaceutical and human serum samples. The method was exhaustively validated in terms of sensitivity, selectivity, recovery, reproducibility, and column capacity. A linear dynamic range of 0.05-70.00 microg/mL was obtained for the determination of metronidazole with IMS. The recovery of the analyzed drug was calculated to be above 89%, and the relative standard deviation (RSD) was lower than 6% for all experiments. Various real samples were analyzed with the coupled techniques, and the results obtained revealed the efficient cleanup of the samples by MIP separation before analysis by IMS as the detection technique.

  18. Multi-thresholds for fault isolation in the presence of uncertainties.

    PubMed

    Touati, Youcef; Mellal, Mohamed Arezki; Benazzouz, Djamel

    2016-05-01

    Monitoring of faults is an important task in mechatronics. It involves the detection and isolation of faults, which are performed by using residuals. These residuals are numerical values that define certain intervals called thresholds; a fault is detected if the residuals exceed the thresholds. In addition, each considered fault must activate a unique set of residuals in order to be isolated. However, in the presence of uncertainties, false decisions can occur due to the low sensitivity of certain residuals towards faults. In this paper, an efficient approach for making decisions on fault isolation in the presence of uncertainties is proposed. Based on the bond graph tool, the approach is developed to generate systematically the relations between residuals and faults. The generated relations allow the estimation of the minimum detectable and isolable fault values, which are used to calculate the isolation thresholds for each residual. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  19. NodePM: A Remote Monitoring Alert System for Energy Consumption Using Probabilistic Techniques

    PubMed Central

    Filho, Geraldo P. R.; Ueyama, Jó; Villas, Leandro A.; Pinto, Alex R.; Gonçalves, Vinícius P.; Pessin, Gustavo; Pazzi, Richard W.; Braun, Torsten

    2014-01-01

    In this paper, we propose an intelligent method, named the Novelty Detection Power Meter (NodePM), to detect novelties in electronic equipment monitored by a smart grid. Considering the entropy of each monitored device, which is calculated based on a Markov chain model, the proposed method identifies novelties through a machine learning algorithm. To this end, the NodePM is integrated into a platform for the remote monitoring of energy consumption, which consists of a wireless sensor network (WSN). It should be stressed that the experiments were conducted in real environments, unlike many related works that are evaluated in simulated environments. The results show that the NodePM reduces the power consumption of the monitored equipment by 13.7%. In addition, the NodePM detects novelties more efficiently than an approach from the literature, surpassing it in different scenarios in all the evaluations that were carried out. PMID:24399157
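
    The abstract does not define the entropy measure precisely, so the snippet below is only a generic sketch of a Markov-chain-based entropy for a monitored device: estimate a first-order transition matrix from the device's discretized state sequence and compute the entropy rate. The state discretization and the use of empirical state frequencies as the stationary distribution are illustrative simplifications.

```python
import numpy as np

def markov_entropy_rate(states, n_states):
    """Entropy rate (bits/step) of a first-order Markov chain estimated from a state sequence."""
    counts = np.zeros((n_states, n_states))
    for a, b in zip(states[:-1], states[1:]):
        counts[a, b] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    trans = np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)
    # Stationary distribution approximated by the empirical state frequencies
    pi = np.bincount(states, minlength=n_states) / len(states)
    with np.errstate(divide="ignore", invalid="ignore"):
        logp = np.where(trans > 0, np.log2(trans), 0.0)
    return float(-np.sum(pi[:, None] * trans * logp))

# Toy usage: a nearly periodic ON/OFF appliance has low entropy; erratic behaviour raises it
regular = np.array([0, 1] * 200)
erratic = np.random.default_rng(2).integers(0, 2, 400)
print(markov_entropy_rate(regular, 2), markov_entropy_rate(erratic, 2))
```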

  20. Optical correlation based pose estimation using bipolar phase grayscale amplitude spatial light modulators

    NASA Astrophysics Data System (ADS)

    Outerbridge, Gregory John, II

    Pose estimation techniques have been developed on both optical and digital correlator platforms to aid in the autonomous rendezvous and docking of spacecraft. This research has focused on the optical architecture, which utilizes high-speed bipolar-phase grayscale-amplitude spatial light modulators as the image and correlation filter devices. The optical approach has the primary advantage of optical parallel processing: an extremely fast and efficient way of performing complex correlation calculations. However, the constraints imposed on optically implementable filters make optical correlator based pose estimation technically incompatible with the popular weighted composite filter designs successfully used on the digital platform. This research employs a much simpler "bank of filters" approach to optical pose estimation that exploits the inherent efficiency of optical correlation devices. A novel logarithmically mapped optically implementable matched filter combined with a pose search algorithm resulted in sub-degree standard deviations in angular pose estimation error. These filters were extremely simple to generate, requiring no complicated training sets, and resulted in excellent performance even in the presence of significant background noise. Common edge detection and scaling of the input image was the only image pre-processing necessary for accurate pose detection at all alignment distances of interest.

  1. A method based on Monte Carlo simulations and voxelized anatomical atlases to evaluate and correct uncertainties on radiotracer accumulation quantitation in beta microprobe studies in the rat brain

    NASA Astrophysics Data System (ADS)

    Pain, F.; Dhenain, M.; Gurden, H.; Routier, A. L.; Lefebvre, F.; Mastrippolito, R.; Lanièce, P.

    2008-10-01

    The β-microprobe is a simple and versatile technique complementary to small animal positron emission tomography (PET). It relies on local measurements of the concentration of positron-labeled molecules. So far, it has been successfully used in anesthetized rats for pharmacokinetics experiments and for the study of brain energetic metabolism. However, the ability of the technique to provide accurate quantitative measurements using 18F, 11C and 15O tracers is likely to suffer from the contribution of 511 keV gamma rays background to the signal and from the contribution of positrons from brain loci surrounding the locus of interest. The aim of the present paper is to provide a method of evaluating several parameters, which are supposed to affect the quantification of recordings performed in vivo with this methodology. We have developed realistic voxelized phantoms of the rat whole body and brain, and used them as input geometries for Monte Carlo simulations of previous β-microprobe reports. In the context of realistic experiments (binding of 11C-Raclopride to D2 dopaminergic receptors in the striatum; local glucose metabolic rate measurement with 18F-FDG and H2O15 blood flow measurements in the somatosensory cortex), we have calculated the detection efficiencies and corresponding contribution of 511 keV gammas from peripheral organs accumulation. We confirmed that the 511 keV gammas background does not impair quantification. To evaluate the contribution of positrons from adjacent structures, we have developed β-Assistant, a program based on a rat brain voxelized atlas and matrices of local detection efficiencies calculated by Monte Carlo simulations for several probe geometries. This program was used to calculate the 'apparent sensitivity' of the probe for each brain structure included in the detection volume. For a given localization of a probe within the brain, this allows us to quantify the different sources of beta signal. Finally, since stereotaxic accuracy is crucial for quantification in most microprobe studies, the influence of stereotaxic positioning error was studied for several realistic experiments in favorable and unfavorable experimental situations (binding of 11C-Raclopride to D2 dopaminergic receptors in the striatum; binding of 18F-MPPF to 5HT1A receptors in the dorsal raphe nucleus).

  2. [Early detection on the onset of scarlet fever epidemics in Beijing, using the Cumulative Sum].

    PubMed

    Li, Jing; Yang, Peng; Wu, Shuang-sheng; Wang, Xiao-li; Liu, Shuang; Wang, Quan-yi

    2013-05-01

    Based on scarlet fever data collected from the Disease Surveillance Information Reporting System in Beijing from 2005 to 2011, we explored the efficiency of the Cumulative Sum (CUSUM) method in detecting the onset of scarlet fever epidemics. The models C1-MILD (C1), C2-MEDIUM (C2) and C3-ULTRA (C3) were used. Evaluation metrics such as Youden's index and detection time were calculated to optimize the parameters and select the optimal model. Data from the 2011 scarlet fever surveillance were used to verify the efficacy of these models. C1 (k = 0.5, H = 2σ), C2 (k = 0.7, H = 2σ) and C3 (k = 1.1, H = 2σ) appeared to be the optimal parameters for these models. The Youden's index of C1 was 83.0% with a detection time of 0.64 weeks, that of C2 was 85.4% with a detection time of 1.27 weeks, and that of C3 was 85.1% with a detection time of 1.36 weeks. Among the three early warning detection models, C1 had the highest efficacy. All three models triggered signals within 4 weeks of the onset of scarlet fever epidemics. The CUSUM early warning detection model can be used to detect the onset of scarlet fever epidemics with good efficacy.
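
    A minimal sketch of a one-sided CUSUM detector parameterized, as above, by a reference value k (in units of the baseline standard deviation) and a decision threshold H = 2σ. The 7-week baseline window and the reset-after-alarm behaviour are illustrative choices, not necessarily those of the C1-C3 implementations evaluated in the paper.

```python
import numpy as np

def cusum_alarms(counts, baseline_window=7, k=0.5, h=2.0):
    """Weekly-count CUSUM: alarm when the standardized cumulative sum exceeds h (sigma units)."""
    counts = np.asarray(counts, dtype=float)
    alarms, s = [], 0.0
    for t in range(baseline_window, len(counts)):
        base = counts[t - baseline_window:t]
        mu, sigma = base.mean(), max(base.std(ddof=1), 1e-6)
        z = (counts[t] - mu) / sigma
        s = max(0.0, s + z - k)            # one-sided CUSUM with reference value k
        if s > h:
            alarms.append(t)
            s = 0.0                        # reset after signalling
    return alarms

# Toy usage: a simulated outbreak starting at week 30
rng = np.random.default_rng(3)
weekly = rng.poisson(20, 40)
weekly[30:] += np.arange(10) * 4           # rising epidemic signal
print(cusum_alarms(weekly))                # alarms appear a few weeks after the onset
```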

  3. Modelling of celestial backgrounds

    NASA Astrophysics Data System (ADS)

    Hickman, Duncan L.; Smith, Moira I.; Lim, Jae-Wan; Jeon, Yun-Ho

    2018-05-01

    For applications where a sensor's image includes the celestial background, stars and Solar System Bodies compromise the ability of the sensor system to correctly classify a target. Such false targets are particularly significant for the detection of weak target signatures which only have a small relative angular motion. The detection of celestial features is well established in the visible spectral band. However, given the increasing sensitivity and low noise afforded by emergent infrared focal plane array technology together with larger and more efficient optics, the signatures of celestial features can also impact performance at infrared wavelengths. A methodology has been developed which allows the rapid generation of celestial signatures in any required spectral band using star data from star catalogues and other open-source information. Within this paper, the radiometric calculations are presented to determine the irradiance values of stars and planets in any spectral band.

  4. Differential phase-shift keying and channel equalization in free space optical communication system

    NASA Astrophysics Data System (ADS)

    Zhang, Dai; Hao, Shiqi; Zhao, Qingsong; Wan, Xiongfeng; Xu, Chenlu

    2018-01-01

    We present the performance benefits of differential phase-shift keying (DPSK) modulation in eliminating the influence of atmospheric turbulence, especially for coherent free space optical (FSO) communication at a high communication rate. An analytic expression for the detected signal is derived, based on which the homodyne detection efficiency is calculated to indicate the performance of wavefront compensation. Considering that laser pulses always suffer from atmospheric scattering by clouds, intersymbol interference (ISI) in a high-speed FSO communication link is analyzed. Correspondingly, a channel equalization method based on a binormalized modified constant modulus algorithm with set-membership filtering (SM-BNMCMA) is proposed to solve the ISI problem. Finally, through comparison with existing channel equalization methods, its performance benefits in both ISI elimination and convergence speed are verified. The research findings have theoretical significance for a high-speed FSO communication system.

  5. Factors affecting calculation of L

    NASA Astrophysics Data System (ADS)

    Ciotola, Mark P.

    2001-08-01

    A detectable extraterrestrial civilization can be modeled as a series of successive regimes over time, each of which is detectable for a certain proportion of its lifecycle. This methodology can be utilized to produce an estimate for L. Potential components of L include the quantity of fossil fuel reserves, solar energy potential, the number of regimes over time, the lifecycle patterns of regimes, the proportion of its lifecycle during which a regime is actually detectable, and the downtime between regimes. Relationships between these components provide a means of calculating the lifetime of communicative species in a detectable state, L. An example of how these factors interact is provided, utilizing values that are reasonable given known astronomical data for components such as solar energy potential, while existing knowledge of the terrestrial case is used as a baseline for other components, including fossil fuel reserves, the number of regimes over time, the lifecycle patterns of regimes, the proportion of its lifecycle during which a regime is actually detectable, and the gaps of time between regimes due to recovery from catastrophic war or resource exhaustion. A range of values for L is calculated once parameters are established for each component, so as to determine the lowest and highest values of L.

  6. Improved detection probability of low level light and infrared image fusion system

    NASA Astrophysics Data System (ADS)

    Luo, Yuxiang; Fu, Rongguo; Zhang, Junju; Wang, Wencong; Chang, Benkang

    2018-02-01

    A low-level-light (LLL) image contains rich information on environmental details but is easily affected by the weather; in the case of smoke, rain, cloud or fog, much target information will be lost. An infrared image, which derives from the radiation produced by the object itself, can "actively" obtain target information in the scene. However, its contrast and resolution are poor, its ability to capture target details is very limited, and the imaging mode does not conform to human visual habits. The fusion of LLL and infrared images can compensate for the deficiencies of each sensor and exploit the advantages of each single sensor. We first show the hardware design of the fusion circuit. Then, through recognition probability calculations for a target (one person) and a background image (trees), we find that the detection probability for trees in the LLL image is higher than in the infrared image, while the detection probability for a person in the infrared image is clearly higher than in the LLL image. The detection probability of the fusion image for both the person and the trees is higher than that of either single detector. Therefore, image fusion can significantly increase recognition probability and improve detection efficiency.

  7. Highly selective and sensitive method for Cu2+ detection based on chiroptical activity of L-Cysteine mediated Au nanorod assemblies

    NASA Astrophysics Data System (ADS)

    Abbasi, Shahryar; Khani, Hamzeh

    2017-11-01

    Herein, we demonstrate a simple and efficient method to detect Cu2+ based on the amplified optical activity of chiral nanoassemblies of gold nanorods (Au NRs). L-Cysteine can induce side-by-side or end-to-end assembly of Au NRs with an evident plasmonic circular dichroism (PCD) response due to coupling between the surface plasmon resonances (SPR) of the Au NRs and the chiral signal of L-Cys. Because the plasmonic CD response of the side-by-side (SS) assembly is clearly stronger than that of the end-to-end assembly, SS-assembled Au NRs were selected as the sensitive platform and used for Cu2+ detection. In the presence of Cu2+, Cu2+ can catalyze the O2 oxidation of cysteine to cystine. With an increase in Cu2+ concentration, the L-Cysteine-mediated assembly of Au NRs decreased because of the decrease in free cysteine thiol groups, and the PCD signal decreased. Taking advantage of this effect, Cu2+ could be detected in the concentration range of 20 pM-5 nM. Under optimal conditions, the calculated detection limit was found to be 7 pM.

  8. SymDex: increasing the efficiency of chemical fingerprint similarity searches for comparing large chemical libraries by using query set indexing.

    PubMed

    Tai, David; Fang, Jianwen

    2012-08-27

    The large sizes of today's chemical databases require efficient algorithms to perform similarity searches, and comparing two large chemical databases can be very time consuming. This paper seeks to build upon existing research efforts by describing a novel strategy for accelerating existing search algorithms for comparing large chemical collections. The quest for efficiency has focused on developing better indexing algorithms that create heuristics for searching an individual chemical against a chemical library by detecting and eliminating needless similarity calculations. For comparing two chemical collections, these algorithms simply execute searches for each chemical in the query set sequentially. The strategy presented in this paper achieves a speedup over these algorithms by indexing the set of all query chemicals, so that redundant calculations arising in sequential searches are eliminated. We implement this novel algorithm in a similarity search program called Symmetric inDexing, or SymDex. SymDex shows over a 232% maximum speedup compared to the state-of-the-art single-query search algorithm on real data for various fingerprint lengths. Considerable speedup is seen even for batch searches, where query set sizes are relatively small compared to typical database sizes. To the best of our knowledge, SymDex is the first search algorithm designed specifically for comparing chemical libraries. It can be adapted to most, if not all, existing indexing algorithms and shows potential for accelerating future similarity search algorithms for comparing chemical databases.
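
    To illustrate the kind of pruning that fingerprint indexing enables (though not SymDex's actual data structures), the sketch below uses the standard bit-count bound on the Tanimoto coefficient, T(a,b) ≤ min(|a|,|b|)/max(|a|,|b|), to skip the full similarity calculation for candidates that cannot reach the threshold.

```python
def tanimoto(a: set, b: set) -> float:
    """Tanimoto coefficient of two fingerprints represented as sets of on-bit indices."""
    inter = len(a & b)
    return inter / (len(a) + len(b) - inter) if (a or b) else 0.0

def search(query: set, library: list, threshold: float = 0.7):
    """Return indices of library fingerprints whose similarity to the query reaches the threshold."""
    hits = []
    nq = len(query)
    for idx, fp in enumerate(library):
        nf = len(fp)
        # Bit-count bound: T(a,b) <= min(|a|,|b|) / max(|a|,|b|)
        if min(nq, nf) / max(nq, nf) < threshold:
            continue                      # cannot possibly pass; skip the full calculation
        if tanimoto(query, fp) >= threshold:
            hits.append(idx)
    return hits

# Toy usage with tiny fingerprints (sets of on-bit indices)
library = [{1, 2, 3, 4}, {1, 2, 3, 4, 5, 6, 7, 8}, {2, 3, 4, 9}]
print(search({1, 2, 3, 4, 5}, library))   # -> [0]; the second entry is pruned by the bound
```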

  9. Influence of extraction methodologies on the analysis of five major volatile aromatic compounds of citronella grass (Cymbopogon nardus) and lemongrass (Cymbopogon citratus) grown in Thailand.

    PubMed

    Chanthai, Saksit; Prachakoll, Sujitra; Ruangviriyachai, Chalerm; Luthria, Devanand L

    2012-01-01

    This paper deals with the systematic comparison of extraction of major volatile aromatic compounds (VACs) of citronella grass and lemongrass by classical microhydrodistillation (MHD), as well as modern accelerated solvent extraction (ASE). Sixteen VACs were identified by GC/MS. GC-flame ionization detection was used for the quantification of five VACs (citronellal, citronellol, geraniol, citral, and eugenol) to compare the extraction efficiency of the two different methods. Linear range, LOD, and LOQ were calculated for the five VACs. Intraday and interday precisions for the analysis of VACs were determined for each sample. The extraction recovery, as calculated by a spiking experiment with known standards of VACs, by ASE and MHD ranged from 64.9 to 91.2% and 74.3 to 95.2%, respectively. The extraction efficiency of the VACs was compared for three solvents of varying polarities (hexane, dichloromethane, and methanol), seven different temperatures (ranging from 40 to 160 degrees C, with a gradual increment of 20 degrees C), five time periods (from 1 to 10 min), and three cycles (1, 2, and 3 repeated extractions). Optimum extraction yields of VACs were obtained when extractions were carried out for 7 min with dichloromethane and two extraction cycles at 120 degrees C. The results showed that the ASE technique is more efficient than MHD, as it results in improved yields and significant reduction in extraction time with automated extraction capabilities.

  10. Collection Efficiency and Ice Accretion Characteristics of Two Full Scale and One 1/4 Scale Business Jet Horizontal Tails

    NASA Technical Reports Server (NTRS)

    Bidwell, Colin S.; Papadakis, Michael

    2005-01-01

    Collection efficiency and ice accretion calculations have been made for a series of business jet horizontal tail configurations using a three-dimensional panel code, an adaptive grid code, and the NASA Glenn LEWICE3D grid based ice accretion code. The horizontal tail models included two full scale wing tips and a 25 percent scale model. Flow solutions for the horizontal tails were generated using the PMARC panel code. Grids used in the ice accretion calculations were generated using the adaptive grid code ICEGRID. The LEWICE3D grid based ice accretion program was used to calculate impingement efficiency and ice shapes. Ice shapes typifying rime and mixed icing conditions were generated for a 30 minute hold condition. All calculations were performed on an SGI Octane computer. The results have been compared to experimental flow and impingement data. In general, the calculated flow and collection efficiencies compared well with experiment, and the ice shapes appeared representative of the rime and mixed icing conditions for which they were calculated.

  11. [The effects of instruction about strategies for efficient calculation].

    PubMed

    Suzuki, Masayuki; Ichikawa, Shin'ichi

    2016-06-01

    Calculation problems such as "12x7÷3" can be solved rapidly and easily by using certain techniques; we call these problems "efficient calculation problems." However, it has been pointed out that many students do not always solve them efficiently. In the present study, we examined the effects of an intervention on 35 seventh grade students (23 males, 12 females). The students were instructed to use an overview strategy that stated, "Think carefully about the whole expression", and were then taught three sub-strategies. The results showed that students solved similar problems efficiently after the intervention and the effects were preserved for five months.

  12. Interaction Entropy: A New Paradigm for Highly Efficient and Reliable Computation of Protein-Ligand Binding Free Energy.

    PubMed

    Duan, Lili; Liu, Xiao; Zhang, John Z H

    2016-05-04

    Efficient and reliable calculation of protein-ligand binding free energy is a grand challenge in computational biology and is of critical importance in drug design and many other molecular recognition problems. The main challenge lies in the calculation of entropic contribution to protein-ligand binding or interaction systems. In this report, we present a new interaction entropy method which is theoretically rigorous, computationally efficient, and numerically reliable for calculating entropic contribution to free energy in protein-ligand binding and other interaction processes. Drastically different from the widely employed but extremely expensive normal mode method for calculating entropy change in protein-ligand binding, the new method calculates the entropic component (interaction entropy or -TΔS) of the binding free energy directly from molecular dynamics simulation without any extra computational cost. Extensive study of over a dozen randomly selected protein-ligand binding systems demonstrated that this interaction entropy method is both computationally efficient and numerically reliable and is vastly superior to the standard normal mode approach. This interaction entropy paradigm introduces a novel and intuitive conceptual understanding of the entropic effect in protein-ligand binding and other general interaction systems as well as a practical method for highly efficient calculation of this effect.
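
    As a numerical illustration only (not the authors' code), the interaction entropy estimator commonly quoted for this method, -TΔS = kT ln⟨exp(β ΔE_int)⟩ with ΔE_int the fluctuation of the interaction energy about its trajectory mean, can be evaluated directly from sampled interaction energies:

      # Hedged sketch of the interaction entropy estimator, evaluated from a
      # trajectory of protein-ligand interaction energies (kcal/mol).
      import numpy as np

      def interaction_entropy(e_int, T=300.0):
          """Return -T*dS (kcal/mol) from a numpy array of interaction energies."""
          kT = 0.0019872041 * T                 # k_B in kcal/(mol K), times T
          fluct = e_int - e_int.mean()          # fluctuation about the mean
          return kT * np.log(np.mean(np.exp(fluct / kT)))

      # Synthetic example: Gaussian fluctuations of 2 kcal/mol about -45 kcal/mol
      samples = np.random.normal(-45.0, 2.0, size=10000)
      print(interaction_entropy(samples))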

  13. Medicinal Cannabis: In Vitro Validation of Vaporizers for the Smoke-Free Inhalation of Cannabis.

    PubMed

    Lanz, Christian; Mattsson, Johan; Soydaner, Umut; Brenneisen, Rudolf

    2016-01-01

    Inhalation by vaporization is a promising application mode for cannabis in medicine. An in vitro validation of 5 commercial vaporizers was performed with THC-type and CBD-type cannabis. Gas chromatography/mass spectrometry was used to determine recoveries of total THC (THCtot) and total CBD (CBDtot) in the vapor. High-performance liquid chromatography with photodiode array detection was used for the quantitation of acidic cannabinoids in the residue and to calculate decarboxylation efficiencies. Recoveries of THCtot and CBDtot in the vapor of 4 electrically-driven vaporizers were 58.4 and 51.4%, 66.8 and 56.1%, 82.7 and 70.0% and 54.6 and 56.7% for Volcano Medic®, Plenty Vaporizer®, Arizer Solo® and DaVinci Vaporizer®, respectively. Decarboxylation efficiency was excellent for THC (≥ 97.3%) and CBD (≥ 94.6%). The gas-powered Vape-or-Smoke™ showed recoveries of THCtot and CBDtot in the vapor of 55.9 and 45.9%, respectively, and a decarboxylation efficiency of ≥ 87.7% for both cannabinoids. However, combustion of cannabis was observed with this device. Temperature-controlled, electrically-driven vaporizers efficiently decarboxylate inactive acidic cannabinoids and reliably release their corresponding neutral, active cannabinoids. Thus, they offer a promising application mode for the safe and efficient administration of medicinal cannabis.

  14. Medicinal Cannabis: In Vitro Validation of Vaporizers for the Smoke-Free Inhalation of Cannabis

    PubMed Central

    Lanz, Christian; Mattsson, Johan; Soydaner, Umut; Brenneisen, Rudolf

    2016-01-01

    Inhalation by vaporization is a promising application mode for cannabis in medicine. An in vitro validation of 5 commercial vaporizers was performed with THC-type and CBD-type cannabis. Gas chromatography/mass spectrometry was used to determine recoveries of total THC (THCtot) and total CBD (CBDtot) in the vapor. High-performance liquid chromatography with photodiode array detection was used for the quantitation of acidic cannabinoids in the residue and to calculate decarboxylation efficiencies. Recoveries of THCtot and CBDtot in the vapor of 4 electrically-driven vaporizers were 58.4 and 51.4%, 66.8 and 56.1%, 82.7 and 70.0% and 54.6 and 56.7% for Volcano Medic®, Plenty Vaporizer®, Arizer Solo® and DaVinci Vaporizer®, respectively. Decarboxylation efficiency was excellent for THC (≥ 97.3%) and CBD (≥ 94.6%). The gas-powered Vape-or-Smoke™ showed recoveries of THCtot and CBDtot in the vapor of 55.9 and 45.9%, respectively, and a decarboxylation efficiency of ≥ 87.7% for both cannabinoids. However, combustion of cannabis was observed with this device. Temperature-controlled, electrically-driven vaporizers efficiently decarboxylate inactive acidic cannabinoids and reliably release their corresponding neutral, active cannabinoids. Thus, they offer a promising application mode for the safe and efficient administration of medicinal cannabis. PMID:26784441

  15. Effect of Yb(3+) on the Crystal Structural Modification and Photoluminescence Properties of GGAG:Ce(3+).

    PubMed

    Luo, Zhao-Hua; Liu, Yong-Fu; Zhang, Chang-Hua; Zhang, Jian-Xin; Qin, Hai-Ming; Jiang, Hao-Chuan; Jiang, Jun

    2016-03-21

    Gadolinium gallium aluminum garnet (GGAG) is a very promising host for the highly efficient luminescence of Ce(3+) and shows potential in radiation detection applications. However, its thermodynamically metastable structure works against achieving high transparency. To stabilize the crystal structure of GGAG, Yb(3+) ions were codoped at the Gd(3+) site. It was found that the decomposition of the garnet was suppressed and the transparency of the GGAG ceramic was evidently improved. Moreover, the photoluminescence of GGAG:Ce(3+),xYb(3+) with different Yb(3+) contents has been investigated. When the Ce(3+) ions were excited at 475 nm, a typical near-infrared emission of the Yb(3+) ions was observed, in the region where silicon solar cells have their strongest absorption. Based on the lifetimes of the Ce(3+) ions in the GGAG:Ce(3+),xYb(3+) samples, the transfer efficiency from Ce(3+) to Yb(3+) and the theoretical internal quantum efficiency were calculated, reaching up to 86% and 186%, respectively. This makes GGAG:Ce(3+),Yb(3+) a potentially attractive downconversion candidate for improving the energy conversion efficiency of crystalline silicon (c-Si) solar cells.

  16. Validation of a Custom-made Software for DQE Assessment in Mammography Digital Detectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ayala-Dominguez, L.; Perez-Ponce, H.; Brandan, M. E.

    2010-12-07

    This work presents the validation of custom-made software, designed and developed in Matlab, intended for routine evaluation of the detective quantum efficiency (DQE), according to algorithms described in the IEC 62220-1-2 standard. DQE, the normalized noise power spectrum (NNPS) and the pre-sampling modulation transfer function (MTF) were calculated from RAW images from a GE Senographe DS (FineView disabled) and a Siemens Novation system. The calculated MTF is in close agreement with results obtained with alternative codes: MTF_tool (Maidment), an ImageJ plug-in (Perez-Ponce) and MIQuaELa (Ayala). Overall agreement better than ≈90% was found in the MTF; the largest differences were observed at frequencies close to the Nyquist limit. For the measurement of NNPS and DQE, agreement is similar to that obtained for the MTF. These results suggest that the developed software can be used with confidence for image quality assessment.
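
    For context, the three quantities evaluated by the software are related, in the form commonly used with the IEC standard (q denoting the incident photon fluence, i.e., the squared input signal-to-noise ratio per unit area), by

        \mathrm{DQE}(u) = \frac{\mathrm{MTF}^{2}(u)}{q \cdot \mathrm{NNPS}(u)}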

  17. Charge-Transfer Processes in Warm Dense Matter: Selective Spectral Filtering for Laser-Accelerated Ion Beams

    NASA Astrophysics Data System (ADS)

    Braenzel, J.; Barriga-Carrasco, M. D.; Morales, R.; Schnürer, M.

    2018-05-01

    We investigate, both experimentally and theoretically, how the spectral distribution of laser accelerated carbon ions can be filtered by charge exchange processes in a double foil target setup. Carbon ions at multiple charge states with an initially wide kinetic energy spectrum, from 0.1 to 18 MeV, were detected with a remarkably narrow spectral bandwidth after they had passed through an ultrathin and partially ionized foil. With our theoretical calculations, we demonstrate that this process is a consequence of the evolution of the carbon ion charge states in the second foil. We calculated the resulting spectral distribution separately for each ion species by solving the rate equations for electron loss and capture processes within a collisional radiative model. We determine how the efficiency of charge transfer processes can be manipulated by controlling the ionization degree of the transfer matter.
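
    A generic sketch of such charge-state rate equations is given below; the loss and capture coefficients are purely hypothetical placeholders, not the values of the authors' collisional radiative model, and time is in arbitrary units.

      # Illustrative charge-state population evolution: L[q] is the electron-loss
      # rate (q -> q+1) and C[q] the capture rate (q -> q-1), both assumed.
      import numpy as np
      from scipy.integrate import solve_ivp

      L = np.array([2.0, 1.2, 0.5, 0.0])   # highest state cannot lose further
      C = np.array([0.0, 0.3, 0.8, 1.5])   # lowest state cannot capture

      def rates(t, N):
          dN = np.zeros_like(N)
          for q in range(len(N)):
              dN[q] -= (L[q] + C[q]) * N[q]
              if q > 0:
                  dN[q] += L[q - 1] * N[q - 1]
              if q < len(N) - 1:
                  dN[q] += C[q + 1] * N[q + 1]
          return dN

      N0 = np.array([1.0, 0.0, 0.0, 0.0])   # all ions start in the lowest state
      sol = solve_ivp(rates, (0.0, 5.0), N0)
      print(sol.y[:, -1])                   # final charge-state fractions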

  18. Simulation of field-induced molecular dissociation in atom-probe tomography: Identification of a neutral emission channel

    NASA Astrophysics Data System (ADS)

    Zanuttini, David; Blum, Ivan; Rigutti, Lorenzo; Vurpillot, François; Douady, Julie; Jacquet, Emmanuelle; Anglade, Pierre-Matthieu; Gervais, Benoit

    2017-06-01

    We investigate the dynamics of dicationic metal-oxide molecules under large electric-field conditions, on the basis of ab initio calculations coupled to molecular dynamics. Applied to the case of ZnO2 + in the field of atom probe tomography (APT), our simulation reveals the dissociation into three distinct exit channels. The proportions of these channels depend critically on the field strength and on the initial molecular orientation with respect to the field. For typical field strength used in APT experiments, an efficient dissociation channel leads to emission of neutral oxygen atoms, which escape detection. The calculated composition biases and their dependence on the field strength show remarkable consistency with recent APT experiments on ZnO crystals. Our work shows that bond breaking in strong static fields may lead to significant neutral atom production, and therefore to severe elemental composition biases in measurements.

  19. A method for real-time implementation of HOG feature extraction

    NASA Astrophysics Data System (ADS)

    Luo, Hai-bo; Yu, Xin-rong; Liu, Hong-mei; Ding, Qing-hai

    2011-08-01

    Histogram of oriented gradients (HOG) is an efficient feature extraction scheme, and HOG descriptors are widely used in computer vision and image processing for biometrics, target tracking, automatic target detection (ATD), automatic target recognition (ATR), etc. However, the computation of HOG feature extraction is unsuitable for hardware implementation since it includes complicated operations. In this paper, an optimal design method and theoretical framework for real-time HOG feature extraction based on an FPGA are proposed. The main principle is as follows: firstly, a parallel gradient computing unit circuit based on a parallel pipeline structure was designed. Secondly, the calculation of the arctangent and square root operations was simplified. Finally, a histogram generator based on a parallel pipeline structure was designed to calculate the histogram of each sub-region. Experimental results showed that the HOG extraction can be implemented in one pixel period by these computing units.
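
    A plain software sketch of the computation being mapped to hardware is shown below; it uses full-precision arctangent and square-root operations, whereas the FPGA design replaces them with simplified approximations, and the cell size and bin count are assumed values.

      # Reference HOG cell histograms: centered gradients, unsigned orientation
      # binning (0-180 degrees), magnitude-weighted votes per 8x8 cell.
      import numpy as np

      def hog_cell_histograms(img, cell=8, bins=9):
          img = img.astype(np.float64)
          gx = np.zeros_like(img); gy = np.zeros_like(img)
          gx[:, 1:-1] = img[:, 2:] - img[:, :-2]        # horizontal gradient
          gy[1:-1, :] = img[2:, :] - img[:-2, :]        # vertical gradient
          mag = np.hypot(gx, gy)
          ang = np.degrees(np.arctan2(gy, gx)) % 180.0  # unsigned orientation
          hists = np.zeros((img.shape[0] // cell, img.shape[1] // cell, bins))
          for i in range(hists.shape[0]):
              for j in range(hists.shape[1]):
                  m = mag[i*cell:(i+1)*cell, j*cell:(j+1)*cell].ravel()
                  a = ang[i*cell:(i+1)*cell, j*cell:(j+1)*cell].ravel()
                  idx = np.minimum((a / (180.0 / bins)).astype(int), bins - 1)
                  np.add.at(hists[i, j], idx, m)        # magnitude-weighted voting
          return hists

      print(hog_cell_histograms(np.random.rand(64, 64)).shape)   # (8, 8, 9)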

  20. Standard Reference Line Combined with One-Point Calibration-Free Laser-Induced Breakdown Spectroscopy (CF-LIBS) to Quantitatively Analyze Stainless and Heat Resistant Steel.

    PubMed

    Fu, Hongbo; Wang, Huadong; Jia, Junwei; Ni, Zhibo; Dong, Fengzhong

    2018-01-01

    Due to the self-absorption of major elements, the scarcity of observable spectral lines of trace elements, and the relative efficiency correction of the experimental system, accurate quantitative analysis with calibration-free laser-induced breakdown spectroscopy (CF-LIBS) is in fact not easy. In order to overcome these difficulties, the standard reference line (SRL) method combined with one-point calibration (OPC) is used to analyze six elements in three stainless-steel and five heat-resistant steel samples. The Stark broadening and the Saha-Boltzmann plot of Fe are used to calculate the electron density and the plasma temperature, respectively. In the present work, we tested the original SRL method, the SRL with OPC method, and the intercept with OPC method. The final results show that the latter two methods can effectively improve the overall accuracy of the quantitative analysis and the detection limits of trace elements.
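
    For reference, one commonly quoted form of the Boltzmann-plot relation underlying CF-LIBS (notation and constants vary between formulations) is

        \ln\!\left(\frac{\lambda_{ki} I_{ki}}{g_k A_{ki}}\right) = -\frac{E_k}{k_B T} + \ln\!\left(\frac{F C_s}{U_s(T)}\right)

    where I_ki is the integrated line intensity, λ_ki the wavelength, g_k and A_ki the upper-level degeneracy and transition probability, E_k the upper-level energy, U_s(T) the partition function of species s, C_s its concentration, and F an experimental factor; the common slope yields the plasma temperature and the intercepts yield the relative concentrations.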

  1. An Efficient Supervised Training Algorithm for Multilayer Spiking Neural Networks

    PubMed Central

    Xie, Xiurui; Qu, Hong; Liu, Guisong; Zhang, Malu; Kurths, Jürgen

    2016-01-01

    The spiking neural networks (SNNs) are the third generation of neural networks and perform remarkably well in cognitive tasks such as pattern recognition. The spike emitting and information processing mechanisms found in biological cognitive systems motivate the application of the hierarchical structure and temporal encoding mechanism in spiking neural networks, which have exhibited strong computational capability. However, the hierarchical structure and temporal encoding approach require neurons to process information serially in space and time respectively, which reduces the training efficiency significantly. For training hierarchical SNNs, most existing methods are based on the traditional back-propagation algorithm, inheriting its drawbacks of gradient diffusion and sensitivity to parameters. To keep the powerful computational capability of the hierarchical structure and temporal encoding mechanism, but to overcome the low efficiency of the existing algorithms, a new training algorithm, Normalized Spiking Error Back Propagation (NSEBP), is proposed in this paper. In the feedforward calculation, the output spike times are calculated by solving the quadratic function in the spike response model instead of detecting postsynaptic voltage states at all time points as in traditional algorithms. Besides, in the feedback weight modification, the computational error is propagated to previous layers by the presynaptic spike jitter instead of the gradient descent rule, which realizes layer-wise training. Furthermore, our algorithm investigates the mathematical relation between the weight variation and the voltage error change, which makes the normalization in the weight modification applicable. Adopting these strategies, our algorithm outperforms traditional SNN multi-layer algorithms in terms of learning efficiency and parameter sensitivity, as also demonstrated by the comprehensive experimental results in this paper. PMID:27044001

  2. A large 2D PSD for thermal neutron detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Knott, R.B.; Watt, G.; Boldeman, J.W.

    1996-12-31

    A 2D PSD based on a MWPC has been constructed for a small angle neutron scattering instrument. The active area of the detector was 640 x 640 mm². To meet the specifications for neutron detection efficiency and spatial resolution, and to minimize parallax, the gas mixture was 190 kPa ³He plus 100 kPa CF₄ and the active volume had a thickness of 30 mm. The design maximum neutron count-rate of the detector was 10⁵ events per second. The (calculated) neutron detection efficiency was 60% for 2 Å neutrons and the (measured) neutron energy resolution on the anode grid was typically 20% (fwhm). The location of a neutron detection event within the active area was determined using the wire-by-wire method: the spatial resolution (5 x 5 mm²) was thereby defined by the wire geometry. A 16 channel charge-sensitive preamplifier/amplifier/comparator module has been developed with a channel sensitivity of 0.1 V/fC, noise linewidth of 0.4 fC (fwhm) and channel-to-channel cross-talk of less than 5%. The Proportional Counter Operating System (PCOS III) (LeCroy Corp USA) was used for event encoding. The ECL signals produced by the 16 channel modules were latched in PCOS III by a trigger pulse from the anode and the fast encoders produce a position and width for each event. The information was transferred to a UNIX workstation for accumulation and online display.
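
    The quoted 60% efficiency can be roughly reproduced from the gas filling alone. The back-of-the-envelope check below assumes an ideal gas at 293 K, 1/v scaling of the ³He(n,p) cross section from 5333 barn at 1.798 Å, and neglects absorption in the CF₄ and structural materials:

      # Rough consistency check of the calculated 2 Angstrom detection efficiency
      import math

      P, T, d = 190e3, 293.0, 0.030            # Pa, K, m (active thickness)
      k_B = 1.380649e-23                       # J/K
      sigma = 5333e-28 * (2.0 / 1.798)         # m^2, 1/v-scaled to 2 Angstrom
      n = P / (k_B * T)                        # 3He number density, m^-3
      eff = 1.0 - math.exp(-n * sigma * d)
      print(f"{eff:.2f}")                      # ~0.57, close to the quoted 60%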

  3. Performance of a high-sensitivity dedicated cardiac SPECT scanner for striatal uptake quantification in the brain based on analysis of projection data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, Mi-Ae; Moore, Stephen C.; McQuaid, Sarah J.

    Purpose: The authors have previously reported the advantages of high-sensitivity single-photon emission computed tomography (SPECT) systems for imaging structures located deep inside the brain. DaTscan (Ioflupane I-123) is a dopamine transporter (DaT) imaging agent that has shown potential for early detection of Parkinson disease (PD), as well as for monitoring progression of the disease. Realizing the full potential of DaTscan requires efficient estimation of striatal uptake from SPECT images. They have evaluated two SPECT systems, a conventional dual-head gamma camera with low-energy high-resolution collimators (conventional) and a dedicated high-sensitivity multidetector cardiac imaging system (dedicated) for imaging tasks related to PD. Methods: Cramer-Rao bounds (CRB) on precision of estimates of striatal and background activity concentrations were calculated from high-count, separate acquisitions of the compartments (right striata, left striata, background) of a striatal phantom. CRB on striatal and background activity concentration were calculated from essentially noise-free projection datasets, synthesized by scaling and summing the compartment projection datasets, for a range of total detected counts. They also calculated variances of estimates of specific-to-nonspecific binding ratios (BR) and asymmetry indices from these values using propagation of error analysis, as well as the precision of measuring changes in BR on the order of the average annual decline in early PD. Results: Under typical clinical conditions, the conventional camera detected 2 M counts while the dedicated camera detected 12 M counts. Assuming a normal BR of 5, the standard deviation of BR estimates was 0.042 and 0.021 for the conventional and dedicated system, respectively. For an 8% decrease to BR = 4.6, the signal-to-noise ratios were 6.8 (conventional) and 13.3 (dedicated); for a 5% decrease, they were 4.2 (conventional) and 8.3 (dedicated). Conclusions: This implies that PD can be detected earlier with the dedicated system than with the conventional system; therefore, earlier identification of PD progression should be possible with the high-sensitivity dedicated SPECT camera.
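
    The quoted signal-to-noise ratios are consistent with treating a change in binding ratio as the difference of two independent estimates of equal precision,

        \mathrm{SNR} \approx \frac{\Delta \mathrm{BR}}{\sqrt{2}\,\sigma_{\mathrm{BR}}}

    for example, an 8% drop from BR = 5 gives 0.4/(√2 × 0.042) ≈ 6.7 for the conventional system, close to the reported 6.8.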

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Jaekwang; Huang, Jingsong; Sumpter, Bobby G.

    Compared with their bulk counterparts, 2D materials can sustain much higher elastic strain at which optical quantities such as bandgaps and absorption spectra governing optoelectronic device performance can be modified with relative ease. Using first-principles density functional theory and quasiparticle GW calculations, we demonstrate how uniaxial tensile strain can be utilized to optimize the electronic and optical properties of transition metal dichalcogenide lateral (in-plane) heterostructures such as MoX₂/WX₂ (X = S, Se, Te). We find that these lateral-type heterostructures may facilitate efficient electron–hole separation for light detection/harvesting and preserve their type II characteristic up to 12% uniaxial strain. Based on the strain-dependent bandgap and band offset, we show that uniaxial tensile strain can significantly increase the power conversion efficiency of these lateral heterostructures. Our results suggest that these strain-engineered lateral heterostructures are promising for optimizing optoelectronic device performance by selectively tuning the energetics of the bandgap.

  5. Current-induced spin polarization in InGaAs and GaAs epilayers with varying doping densities

    NASA Astrophysics Data System (ADS)

    Luengo-Kovac, M.; Huang, S.; Del Gaudio, D.; Occena, J.; Goldman, R. S.; Raimondi, R.; Sih, V.

    2017-11-01

    The current-induced spin polarization and momentum-dependent spin-orbit field were measured in InxGa1 -xAs epilayers with varying indium concentrations and silicon doping densities. Samples with higher indium concentrations and carrier concentrations and lower mobilities were found to have larger electrical spin generation efficiencies. Furthermore, current-induced spin polarization was detected in GaAs epilayers despite the absence of measurable spin-orbit fields, indicating that the extrinsic contributions to the spin-polarization mechanism must be considered. Theoretical calculations based on a model that includes extrinsic contributions to the spin dephasing and the spin Hall effect, in addition to the intrinsic Rashba and Dresselhaus spin-orbit coupling, are found to reproduce the experimental finding that the crystal direction with the smaller net spin-orbit field has larger electrical spin generation efficiency and are used to predict how sample parameters affect the magnitude of the current-induced spin polarization.

  6. Analysis of Intrinsic Peptide Detectability via Integrated Label-Free and SRM-Based Absolute Quantitative Proteomics.

    PubMed

    Jarnuczak, Andrew F; Lee, Dave C H; Lawless, Craig; Holman, Stephen W; Eyers, Claire E; Hubbard, Simon J

    2016-09-02

    Quantitative mass spectrometry-based proteomics of complex biological samples remains challenging in part due to the variability and charge competition arising during electrospray ionization (ESI) of peptides and the subsequent transfer and detection of ions. These issues preclude direct quantification from signal intensity alone in the absence of a standard. A deeper understanding of the governing principles of peptide ionization and exploitation of the inherent ionization and detection parameters of individual peptides is thus of great value. Here, using the yeast proteome as a model system, we establish the concept of peptide F-factor as a measure of detectability, closely related to ionization efficiency. F-factor is calculated by normalizing peptide precursor ion intensity by absolute abundance of the parent protein. We investigated F-factor characteristics in different shotgun proteomics experiments, including across multiple ESI-based LC-MS platforms. We show that F-factors mirror previously observed physicochemical predictors as peptide detectability but demonstrate a nonlinear relationship between hydrophobicity and peptide detectability. Similarly, we use F-factors to show how peptide ion coelution adversely affects detectability and ionization. We suggest that F-factors have great utility for understanding peptide detectability and gas-phase ion chemistry in complex peptide mixtures, selection of surrogate peptides in targeted MS studies, and for calibration of peptide ion signal in label-free workflows. Data are available via ProteomeXchange with identifier PXD003472.

  7. A Generalized Method for the Comparable and Rigorous Calculation of the Polytropic Efficiencies of Turbocompressors

    NASA Astrophysics Data System (ADS)

    Dimitrakopoulos, Panagiotis

    2018-03-01

    The calculation of polytropic efficiencies is a very important task, especially during the development of new compression units, like compressor impellers, stages and stage groups. Such calculations are also crucial for the determination of the performance of a whole compressor. As processors and computational capacities have substantially improved in recent years, the need emerged for a new, rigorous, robust, accurate and at the same time standardized method for the computation of polytropic efficiencies, especially one based on the thermodynamics of real gases. The proposed method is based on the rigorous definition of the polytropic efficiency. The input consists of pressure and temperature values at the end points of the compression path (suction and discharge), for a given working fluid. The average relative error for the studied cases was 0.536%. Thus, this high-accuracy method is proposed for efficiency calculations related to turbocompressors and their compression units, especially when they are operating at high power levels, for example in jet engines and high-power plants.
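
    In standard notation, the rigorous definition on which such a method rests can be written, for compression from suction state 1 to discharge state 2, as

        \eta_p = \frac{\int_{p_1}^{p_2} v\,\mathrm{d}p}{h_2 - h_1}

    with the specific volume v evaluated along the polytropic path from a real-gas equation of state and h the specific enthalpy at the measured end points.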

  8. Introduction on Using the FastPCR Software and the Related Java Web Tools for PCR and Oligonucleotide Assembly and Analysis.

    PubMed

    Kalendar, Ruslan; Tselykh, Timofey V; Khassenov, Bekbolat; Ramanculov, Erlan M

    2017-01-01

    This chapter introduces the FastPCR software as an integrated tool environment for PCR primer and probe design, which predicts properties of oligonucleotides based on experimental studies of PCR efficiency. The software provides comprehensive facilities for designing primers for most PCR applications and their combinations. These include the standard PCR as well as the multiplex, long-distance, inverse, real-time, group-specific, unique, and overlap extension PCR for multi-fragment assembly cloning, and loop-mediated isothermal amplification (LAMP). It also contains a built-in program to design oligonucleotide sets both for long sequence assembly by ligase chain reaction and for design of amplicons that tile across a region(s) of interest. The software calculates the melting temperature for standard and degenerate oligonucleotides including locked nucleic acid (LNA) and other modifications. It also provides analyses for a set of primers, with prediction of oligonucleotide properties, dimer and G/C-quadruplex detection, and linguistic complexity, as well as a primer dilution and resuspension calculator. The program includes various bioinformatics tools for analysis of sequences with the GC or AT skew, CG% and GA% content, and the purine-pyrimidine skew. It also analyzes the linguistic sequence complexity, generates random DNA sequences, and performs restriction endonuclease analysis. The program allows the user to find or create restriction enzyme recognition sites for coding sequences and supports the clustering of sequences. It performs efficient and complete detection of various repeat types with visual display. The FastPCR software allows batch processing of sequence files, which is essential for automation. The program is available for download at http://primerdigital.com/fastpcr.html , and its online version is located at http://primerdigital.com/tools/pcr.html .

  9. SU-C-213-04: Application of Depth Sensing and 3D-Printing Technique for Total Body Irradiation (TBI) Patient Measurement and Treatment Planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, M; Suh, T; Research Institute of Biomedical Engineering, College of Medicine, The Catholic University of Korea, Seoul

    2015-06-15

    Purpose: To develop and validate an innovative method of using depth sensing cameras and 3D printing techniques for Total Body Irradiation (TBI) treatment planning and compensator fabrication. Methods: A tablet with motion tracking cameras and integrated depth sensing was used to scan a RANDO™ phantom arranged in a TBI treatment booth to detect and store the 3D surface in a point cloud (PC) format. The accuracy of the detected surface was evaluated by comparison to measurements extracted from CT scan images. The thickness, source to surface distance and off-axis distance of the phantom at different body sections were measured for TBI treatment planning. A 2D map containing a detailed compensator design was calculated to achieve uniform dose distribution throughout the phantom. The compensator was fabricated using a 3D printer, silicone molding and tungsten powder. In vivo dosimetry measurements were performed using optically stimulated luminescent detectors (OSLDs). Results: The whole scan of the anthropomorphic phantom took approximately 30 seconds. The mean error for thickness measurements at each section of the phantom compared to CT was 0.44 ± 0.268 cm. These errors resulted in approximately a 2% dose calculation error and a 0.4 mm tungsten thickness deviation for the compensator design. The accuracy of 3D compensator printing was within 0.2 mm. In vivo measurements for an end-to-end test showed the overall dose difference was within 3%. Conclusion: Motion cameras and depth sensing techniques proved to be an accurate and efficient tool for TBI patient measurement and treatment planning. The 3D printing technique improved the efficiency and accuracy of the compensator production and ensured a more accurate treatment delivery.

  10. Ultrahigh-Sensitivity Piezoresistive Pressure Sensors for Detection of Tiny Pressure.

    PubMed

    Li, Hongwei; Wu, Kunjie; Xu, Zeyang; Wang, Zhongwu; Meng, Yancheng; Li, Liqiang

    2018-06-20

    High-sensitivity pressure sensors are crucial for ultrasensitive touch technology and E-skin, especially in the tiny-pressure range below 100 Pa. However, it is highly challenging to substantially promote sensitivity beyond the current level of several to 200 kPa⁻¹ and to push the detection limit below 0.1 Pa, which is significant for the development of pressure sensors toward ultrasensitive and highly precise detection. Here, we develop an efficient strategy to greatly improve the sensitivity to nearly 2000 kPa⁻¹ using a short-channel coplanar device structure and a sharp microstructure, which is systematically proposed for the first time and rationalized by mathematical calculation and analysis. Significantly, benefiting from the ultrahigh sensitivity, the detection limit is improved to be as small as 0.075 Pa. The sensitivity and detection limit are both superior to the current levels and far surpass the function of human skin. Furthermore, the sensor shows a fast response time (50 μs), excellent reproducibility and stability, and low power consumption. Remarkably, the sensor shows excellent detection capacity in the tiny-pressure range, including light-emitting diode switching with a pressure of 7 Pa, ringtone (2-20 Pa) recognition, and an ultrasensitive (0.1 Pa) electronic glove. This work represents performance and strategic progress in the field of pressure sensing.

  11. Absolute calibration of a multichannel plate detector for low energy O, O-, and O+

    NASA Astrophysics Data System (ADS)

    Stephen, T. M.; Peko, B. L.

    2000-03-01

    Absolute detection efficiencies of a commercial multichannel plate detector have been measured for O, O+, and O-, impacting at normal incidence for energies ranging from 30-1000 eV. In addition, the detection efficiencies for O relative to its ions are presented, as they may have a more universal application. The absolute detection efficiencies are strongly energy dependent and significant differences are observed for the various charge states at lower energies. The detection efficiencies for the different charge states appear to converge at higher energies. The strongest energy dependence is for O+; the detection efficiency varies by three orders of magnitude across the energy range studied. The weakest dependence is for O-, which varies less than one order of magnitude.

  12. Improving the Efficiency of Free Energy Calculations in the Amber Molecular Dynamics Package.

    PubMed

    Kaus, Joseph W; Pierce, Levi T; Walker, Ross C; McCammon, J Andrew

    2013-09-10

    Alchemical transformations are widely used methods to calculate free energies. Amber has traditionally included support for alchemical transformations as part of the sander molecular dynamics (MD) engine. Here we describe the implementation of a more efficient approach to alchemical transformations in the Amber MD package. Specifically, we have implemented this new approach within the more computationally efficient and scalable pmemd MD engine that is included with the Amber MD package. The majority of the gain in efficiency comes from the improved design of the calculation, which includes better parallel scaling and reduction in the calculation of redundant terms. This new implementation is able to reproduce results from equivalent simulations run with the existing functionality, but at 2.5 times greater computational efficiency. This new implementation is also able to run softcore simulations at the λ end states, making direct calculation of free energies more accurate, compared to the extrapolation required in the existing implementation. The updated alchemical transformation functionality will be included in the next major release of Amber (scheduled for release in Q1 2014) and will be available at http://ambermd.org, under the Amber license.
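
    For context, the thermodynamic-integration estimator underlying one class of such alchemical free energy calculations is

        \Delta G = \int_{0}^{1} \left\langle \frac{\partial V(\lambda)}{\partial \lambda} \right\rangle_{\lambda} \mathrm{d}\lambda

    where V(λ) is the mixed potential interpolating between the two end states and the average is taken over the ensemble sampled at each λ.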

  13. Improving the Efficiency of Free Energy Calculations in the Amber Molecular Dynamics Package

    PubMed Central

    Pierce, Levi T.; Walker, Ross C.; McCammon, J. Andrew

    2013-01-01

    Alchemical transformations are widely used methods to calculate free energies. Amber has traditionally included support for alchemical transformations as part of the sander molecular dynamics (MD) engine. Here we describe the implementation of a more efficient approach to alchemical transformations in the Amber MD package. Specifically, we have implemented this new approach within the more computationally efficient and scalable pmemd MD engine that is included with the Amber MD package. The majority of the gain in efficiency comes from the improved design of the calculation, which includes better parallel scaling and reduction in the calculation of redundant terms. This new implementation is able to reproduce results from equivalent simulations run with the existing functionality, but at 2.5 times greater computational efficiency. This new implementation is also able to run softcore simulations at the λ end states, making direct calculation of free energies more accurate, compared to the extrapolation required in the existing implementation. The updated alchemical transformation functionality will be included in the next major release of Amber (scheduled for release in Q1 2014) and will be available at http://ambermd.org, under the Amber license. PMID:24185531

  14. Application of the CIEMAT-NIST method to plastic scintillation microspheres.

    PubMed

    Tarancón, A; Barrera, J; Santiago, L M; Bagán, H; García, J F

    2015-04-01

    An adaptation of the MICELLE2 code was used to apply the CIEMAT-NIST tracing method to the activity calculation for radioactive solutions of pure beta emitters of different energies using plastic scintillation microspheres (PSm) and (3)H as a tracing radionuclide. Particle quenching, very important in measurements with PSm, was computed with PENELOPE using geometries formed by a heterogeneous mixture of polystyrene microspheres and water. The results obtained with PENELOPE were adapted to be included in MICELLE2, which is capable of including the energy losses due to particle quenching in the computation of the detection efficiency. The activity calculation of (63)Ni, (14)C, (36)Cl and (90)Sr/(90)Y solutions was performed with deviations of 8.8%, 1.9%, 1.4% and 2.1%, respectively. Of the different parameters evaluated, those with the greatest impact on the activity calculation are, in order of importance, the energy of the radionuclide, the degree of quenching of the sample and the packing fraction of the geometry used in the computation. Copyright © 2015 Elsevier Ltd. All rights reserved.

  15. A Deep Penetration Problem Calculation Using AETIUS:An Easy Modeling Discrete Ordinates Transport Code UsIng Unstructured Tetrahedral Mesh, Shared Memory Parallel

    NASA Astrophysics Data System (ADS)

    KIM, Jong Woon; LEE, Young-Ouk

    2017-09-01

    As computing power gets better and better, computer codes that use a deterministic method seem to be less useful than those using the Monte Carlo method. In addition, users do not like to think about space, angle, and energy discretization for deterministic codes. However, a deterministic method is still powerful in that we can obtain a solution of the flux throughout the problem, particularly when particles can barely penetrate, such as in a deep penetration problem with small detection volumes. Recently, a new state-of-the-art discrete-ordinates code, ATTILA, was developed and has been widely used in several applications. ATTILA provides the capabilities to solve geometrically complex 3-D transport problems by using an unstructured tetrahedral mesh. Since 2009, we have been developing our own code by benchmarking ATTILA. AETIUS is a discrete ordinates code that, like ATTILA, uses an unstructured tetrahedral mesh. For pre- and post-processing, Gmsh is used to generate an unstructured tetrahedral mesh by importing a CAD file (*.step) and to visualize the calculation results of AETIUS. Using a CAD tool, the geometry can be modeled very easily. In this paper, we give a brief overview of AETIUS and provide numerical results from both AETIUS and a Monte Carlo code, MCNP5, for a deep penetration problem with small detection volumes. The results demonstrate the effectiveness and efficiency of AETIUS for such calculations.

  16. Effect of particle size distribution on the separation efficiency in liquid chromatography.

    PubMed

    Horváth, Krisztián; Lukács, Diána; Sepsey, Annamária; Felinger, Attila

    2014-09-26

    In this work, the influence of the width of particle size distribution (PSD) on chromatographic efficiency is studied. The PSD is described by lognormal distribution. A theoretical framework is developed in order to calculate heights equivalent to a theoretical plate in case of different PSDs. Our calculations demonstrate and verify that wide particle size distributions have significant effect on the separation efficiency of molecules. The differences of fully porous and core-shell phases regarding the influence of width of PSD are presented and discussed. The efficiencies of bimodal phases were also calculated. The results showed that these packings do not have any advantage over unimodal phases. Copyright © 2014 Elsevier B.V. All rights reserved.

  17. Discussion on Boiler Efficiency Correction Method with Low Temperature Economizer-Air Heater System

    NASA Astrophysics Data System (ADS)

    Ke, Liu; Xing-sen, Yang; Fan-jun, Hou; Zhi-hong, Hu

    2017-05-01

    This paper points out that it is wrong to take the outlet flue gas temperature of the low temperature economizer as the exhaust gas temperature in the boiler efficiency calculation based on GB10184-1988. Furthermore, it proposes a new correction method, which decomposes the low temperature economizer-air heater system into two hypothetical parts, an air preheater and a pre-condensed-water heater, and takes the equivalent outlet gas temperature of the air preheater as the exhaust gas temperature in the boiler efficiency calculation. This method makes the boiler efficiency calculation more concise, with no air heater correction, and provides a useful reference for handling this kind of problem correctly.

  18. The physics of solid-state neutron detector materials and geometries.

    PubMed

    Caruso, A N

    2010-11-10

    Detection of neutrons, at high total efficiency, with greater resolution in kinetic energy, time and/or real-space position, is fundamental to the advance of subfields within nuclear medicine, high-energy physics, non-proliferation of special nuclear materials, astrophysics, structural biology and chemistry, magnetism and nuclear energy. Clever indirect-conversion geometries, interaction/transport calculations and modern processing methods for silicon and gallium arsenide allow for the realization of moderate- to high-efficiency neutron detectors as a result of low defect concentrations, tuned reaction product ranges, enhanced effective omnidirectional cross sections and reduced electron-hole pair recombination from more physically abrupt and electronically engineered interfaces. Conversely, semiconductors with high neutron cross sections and unique transduction mechanisms capable of achieving very high total efficiency are gaining greater recognition despite the relative immaturity of their growth, lithographic processing and electronic structure understanding. This review focuses on advances and challenges in charged-particle-based device geometries, materials and associated mechanisms for direct and indirect transduction of thermal to fast neutrons within the context of application. Calorimetry- and radioluminescence-based intermediate processes in the solid state are not included.

  19. Low-speed impacts between rubble piles modeled as collections of polyhedra, 2

    NASA Astrophysics Data System (ADS)

    Korycansky, D. G.; Asphaug, Erik

    2009-11-01

    We present the results of additional calculations involving the collisions of km-scale rubble piles. In new work, we used the Open Dynamics Engine (ODE), an open-source library for the simulation of rigid-body dynamics that incorporates a sophisticated collision-detection and resolution routine. We found that using ODE resulted in a speed-up of approximately a factor of 30 compared with previous code. In this paper we report on the results of almost 1200 separate runs, the bulk of which were carried out with 1000-2000 elements. We carried out calculations with three different combinations of the coefficients of friction η and (normal) restitution ɛ: low (η=0, ɛ=0.8), medium (η=0, ɛ=0.5), and high (η=0.5, ɛ=0.5) dissipation. For target objects of ~1 km in radius, we found reduced critical disruption energy values Q*_RD in head-on collisions from 2 to 100 J kg⁻¹ depending on dissipation and impactor/target mass ratio. Monodisperse objects disrupted somewhat more easily than power-law objects in general. For oblique collisions of equal-mass objects, mildly off-center collisions (b/b_0=0.5) seemed to be as efficient or possibly more efficient at collisional disruption as head-on collisions. More oblique collisions were less efficient and the most oblique collisions we tried (b/b_0=0.866) required up to ~200 J kg⁻¹ for high-dissipation power-law objects. For calculations with smaller numbers of elements (total impactor n_i + target n_T = 20 or 200 elements) we found that collisions were more efficient for smaller numbers of more massive elements, with Q*_RD values as low as 0.4 J kg⁻¹ for low-dissipation cases. We also analyzed our results in terms of the relations proposed by Stewart and Leinhardt [Stewart, S.T., Leinhardt, Z.M., 2009. Astrophys. J. 691, L133-L137], where m_1/(m_i+m_T) = 1 - Q_R/(2Q*_RD), with Q_R the impact kinetic energy per unit total mass m_i+m_T. Although there is a significant amount of scatter, our results generally bear out the suggested relation.

  20. Efficient GW calculations using eigenvalue-eigenvector decomposition of the dielectric matrix

    NASA Astrophysics Data System (ADS)

    Nguyen, Huy-Viet; Pham, T. Anh; Rocca, Dario; Galli, Giulia

    2011-03-01

    During the past 25 years, the GW method has been successfully used to compute electronic quasi-particle excitation spectra of a variety of materials. It is however a computationally intensive technique, as it involves summations over occupied and empty electronic states, to evaluate both the Green function (G) and the dielectric matrix (DM) entering the expression of the screened Coulomb interaction (W). Recent developments have shown that eigenpotentials of DMs can be efficiently calculated without any explicit evaluation of empty states. In this work, we will present a computationally efficient approach to the calculation of GW spectra by combining a representation of DMs in terms of their eigenpotentials and a recently developed iterative algorithm. As a demonstration of the efficiency of the method, we will present calculations of the vertical ionization potentials of several systems. Work was funded by SciDAC-e DE-FC02-06ER25777.

  1. Efficient search for a face by chimpanzees (Pan troglodytes).

    PubMed

    Tomonaga, Masaki; Imura, Tomoko

    2015-07-16

    The face is quite an important stimulus category for human and nonhuman primates in their social lives. Recent advances in comparative-cognitive research clearly indicate that chimpanzees and humans process faces in a special manner; that is, using holistic or configural processing. Both species exhibit the face-inversion effect in which the inverted presentation of a face deteriorates their perception and recognition. Furthermore, recent studies have shown that humans detect human faces among non-facial objects rapidly. We report that chimpanzees detected chimpanzee faces among non-facial objects quite efficiently. This efficient search was not limited to own-species faces. They also found human adult and baby faces--but not monkey faces--efficiently. Additional testing showed that a front-view face was more readily detected than a profile, suggesting the important role of eye-to-eye contact. Chimpanzees also detected a photograph of a banana as efficiently as a face, but a further examination clearly indicated that the banana was detected mainly due to a low-level feature (i.e., color). Efficient face detection was hampered by an inverted presentation, suggesting that configural processing of faces is a critical element of efficient face detection in both species. This conclusion was supported by a simple simulation experiment using the saliency model.

  2. Efficient search for a face by chimpanzees (Pan troglodytes)

    PubMed Central

    Tomonaga, Masaki; Imura, Tomoko

    2015-01-01

    The face is quite an important stimulus category for human and nonhuman primates in their social lives. Recent advances in comparative-cognitive research clearly indicate that chimpanzees and humans process faces in a special manner; that is, using holistic or configural processing. Both species exhibit the face-inversion effect in which the inverted presentation of a face deteriorates their perception and recognition. Furthermore, recent studies have shown that humans detect human faces among non-facial objects rapidly. We report that chimpanzees detected chimpanzee faces among non-facial objects quite efficiently. This efficient search was not limited to own-species faces. They also found human adult and baby faces-but not monkey faces-efficiently. Additional testing showed that a front-view face was more readily detected than a profile, suggesting the important role of eye-to-eye contact. Chimpanzees also detected a photograph of a banana as efficiently as a face, but a further examination clearly indicated that the banana was detected mainly due to a low-level feature (i.e., color). Efficient face detection was hampered by an inverted presentation, suggesting that configural processing of faces is a critical element of efficient face detection in both species. This conclusion was supported by a simple simulation experiment using the saliency model. PMID:26180944

  3. Calculated coupling efficiency between an elliptical-core optical fiber and an optical waveguide over temperature

    NASA Technical Reports Server (NTRS)

    Tuma, Margaret L.; Weisshaar, Andreas; Li, Jian; Beheim, Glenn

    1995-01-01

    To determine the feasibility of coupling the output of a single-mode optical fiber into a single-mode rib waveguide in a temperature varying environment, a theoretical calculation of the coupling efficiency between the two was investigated. Due to the complex geometry of the rib guide, there is no analytical solution to the wave equation for the guided modes; thus, approximation and/or numerical techniques must be utilized to determine the field patterns of the guide. In this study, three solution methods were used for both the fiber and guide fields: the effective-index method (EIM), Marcatili's approximation, and a Fourier method. These methods were utilized independently to calculate the electric field profile of each component at two temperatures, 20 °C and 300 °C, representing a nominal and a high temperature. Using the electric field profile calculated from each method, the theoretical coupling efficiency between an elliptical-core optical fiber and a rib waveguide was calculated using the overlap integral and the results were compared. It was determined that a high coupling efficiency can be achieved when the two components are aligned. The coupling efficiency was more sensitive to alignment offsets in the y direction than the x, due to the elliptical modal field profile of both components. Changes in the coupling efficiency over temperature were found to be minimal.
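
    In the scalar approximation, a standard form of the overlap integral referred to above, for fiber mode field E_f and waveguide mode field E_w, is

        \eta = \frac{\left|\iint E_f\, E_w^{*}\,\mathrm{d}A\right|^{2}}{\iint \left|E_f\right|^{2}\mathrm{d}A \,\iint \left|E_w\right|^{2}\mathrm{d}A}

    evaluated from the field profiles produced by each of the three solution methods; the exact normalization convention used in the study may differ in detail.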

  4. Surface plasmon coupled chemiluminescence during adsorption of oxygen on magnesium surfaces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hagemann, Ulrich; Nienhaus, Hermann, E-mail: hermann.nienhaus@uni-due.de

    The dissociative adsorption of oxygen molecules on magnesium surfaces represents a non-adiabatic reaction exhibiting exoelectron emission, chemicurrent generation, and weak chemiluminescence. Using thin film Mg/Ag/p-Si(111) Schottky diodes with 1 nm Mg on a 10-60 nm thick Ag layer as 2π-photodetectors, the chemiluminescence is internally detected with a much larger efficiency than external methods. The chemically induced photoyield shows a maximum for a Ag film thickness of 45 nm. The enhancement is explained by surface plasmon coupled chemiluminescence, i.e., surface plasmon polaritons are effectively excited in the Ag layer by the oxidation reaction and decay radiatively leading to the observed photocurrent. Model calculations of the maximum absorption in attenuated total reflection geometry support the interpretation. The study demonstrates the extreme sensitivity and the practical usage of internal detection schemes for investigating surface chemiluminescence.

  5. Clearance detector and method for motion and distance

    DOEpatents

    Xavier, Patrick G [Albuquerque, NM

    2011-08-09

    A method for correct and efficient detection of clearances between three-dimensional bodies in computer-based simulations, where one or both of the volumes is subject to translations and/or rotations. The method conservatively determines the size of such clearances and whether there is a collision between the bodies. Given two bodies, each of which is undergoing separate motions, the method utilizes bounding-volume hierarchy representations for the two bodies, and mappings and inverse mappings for the motions of the two bodies. The method uses the representations, mappings and direction vectors to determine the directionally furthest locations of points on the convex hulls of the volumes virtually swept by the bodies, and hence the clearance between the bodies, without having to calculate the convex hulls themselves. The method includes clearance detection for bodies comprising convex geometrical primitives and more specific techniques for bodies comprising convex polyhedra.

  6. Enrichment of cancer cells using aptamers immobilized on a microfluidic channel

    PubMed Central

    Phillips, Joseph A.; Xu, Ye; Xia, Zheng

    2009-01-01

    This work describes the development and investigation of an aptamer modified microfluidic device that captures rare cells to achieve a rapid assay without pre-treatment of cells. To accomplish this, aptamers are first immobilized on the surface of a poly (dimethylsiloxane) microchannel, followed by pumping a mixture of cells through the device. This process permits the use of optical microscopy to measure the cell-surface density from which we calculate the percentage of cells captured as a function of cell and aptamer concentration, flow velocity, and incubation time. This aptamer-based device was demonstrated to capture target cells with > 97% purity and > 80% efficiency. Since the cell capture assay is completed within minutes and requires no pre-treatment of cells, the device promises to play a key role in the early detection and diagnosis of cancer where rare diseased cells can first be enriched and then captured for detection. PMID:19115856

  7. Identification and location of catenary insulator in complex background based on machine vision

    NASA Astrophysics Data System (ADS)

    Yao, Xiaotong; Pan, Yingli; Liu, Li; Cheng, Xiao

    2018-04-01

    Precise localization of the insulator is an important prerequisite for fault detection. Current algorithms for locating insulators in catenary inspection images are not accurate, so a target recognition and localization method based on binocular vision combined with SURF features is proposed. First, because the insulator sits in a complex environment, SURF features are used to achieve coarse positioning of the target; then the binocular vision principle is used to calculate the 3D coordinates of the coarsely located object, achieving target recognition and fine localization; finally, the 3D coordinate of the object's center of mass is stored and transferred to the inspection robot to control its detection position. Experimental results demonstrate that the proposed method has good recognition efficiency and accuracy, successfully identifies the target, and has practical application value.
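
    A minimal sketch of the rectified-stereo depth computation implied by the binocular vision principle is given below; the camera parameters and pixel coordinates are assumed for illustration only.

      # Pinhole/rectified-stereo triangulation: depth from disparity, then back-
      # projection of the matched pixel into the left-camera frame.
      def stereo_point(xl, yl, xr, f, B, cx, cy):
          """Return (X, Y, Z) in metres for a match at (xl, yl) / (xr, yl)."""
          d = xl - xr                 # disparity in pixels (rectified cameras)
          Z = f * B / d               # depth
          X = (xl - cx) * Z / f
          Y = (yl - cy) * Z / f
          return X, Y, Z

      # Hypothetical focal length 1200 px, baseline 0.12 m, principal point (640, 360)
      print(stereo_point(xl=660.0, yl=400.0, xr=620.0, f=1200.0, B=0.12, cx=640.0, cy=360.0))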

  8. Log-Linear Models for Gene Association

    PubMed Central

    Hu, Jianhua; Joshi, Adarsh; Johnson, Valen E.

    2009-01-01

    We describe a class of log-linear models for the detection of interactions in high-dimensional genomic data. This class of models leads to a Bayesian model selection algorithm that can be applied to data that have been reduced to contingency tables using ranks of observations within subjects, and discretization of these ranks within gene/network components. Many normalization issues associated with the analysis of genomic data are thereby avoided. A prior density based on Ewens’ sampling distribution is used to restrict the number of interacting components assigned high posterior probability, and the calculation of posterior model probabilities is expedited by approximations based on the likelihood ratio statistic. Simulation studies are used to evaluate the efficiency of the resulting algorithm for known interaction structures. Finally, the algorithm is validated in a microarray study for which it was possible to obtain biological confirmation of detected interactions. PMID:19655032

  9. Low complexity pixel-based halftone detection

    NASA Astrophysics Data System (ADS)

    Ok, Jiheon; Han, Seong Wook; Jarno, Mielikainen; Lee, Chulhee

    2011-10-01

    With the rapid advances of the internet and other multimedia technologies, the digital document market has been growing steadily. Since most digital images use halftone technologies, quality degradation occurs when one tries to scan and reprint them. Therefore, it is necessary to extract the halftone areas to produce high quality printing. In this paper, we propose a low complexity pixel-based halftone detection algorithm. For each pixel, we considered a surrounding block. If the block contained any flat background regions, text, thin lines, or continuous or non-homogeneous regions, the pixel was classified as a non-halftone pixel. After excluding those non-halftone pixels, the remaining pixels were considered to be halftone pixels. Finally, documents were classified as pictures or photo documents by calculating the halftone pixel ratio. The proposed algorithm proved to be memory-efficient and required low computation costs. The proposed algorithm was easily implemented using GPU.

  10. Lung counting: comparison of detector performance with a four detector array that has either metal or carbon fibre end caps, and the effect on mda calculation.

    PubMed

    Ahmed, Asm Sabbir; Hauck, Barry; Kramer, Gary H

    2012-08-01

    This study described the performance of an array of high-purity Germanium detectors, designed with two different end cap materials-steel and carbon fibre. The advantages and disadvantages of using this detector type in the estimation of the minimum detectable activity (MDA) for different energy peaks of isotope (152)Eu were illustrated. A Monte Carlo model was developed to study the detection efficiency for the detector array. A voxelised Lawrence Livermore torso phantom, equipped with lung, chest plates and overlay plates, was used to mimic a typical lung counting protocol with the array of detectors. The lung of the phantom simulated the volumetric source organ. A significantly low MDA was estimated for energy peaks at 40 keV and at a chest wall thickness of 6.64 cm.
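
    For orientation, MDA estimates of this kind typically follow a Currie-type expression such as

        \mathrm{MDA} = \frac{2.71 + 4.65\sqrt{B}}{\varepsilon\, I_{\gamma}\, t}

    where B is the background count in the peak region, ε the detection efficiency from the Monte Carlo model, I_γ the gamma emission probability, and t the counting time; the exact formula used in the study may differ in detail.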

  11. Development of an embedded instrument for autofocus and polarization alignment of polarization maintaining fiber

    NASA Astrophysics Data System (ADS)

    Feng, Di; Fang, Qimeng; Huang, Huaibo; Zhao, Zhengqi; Song, Ningfang

    2017-12-01

    The development and implementation of a practical instrument based on an embedded technique for autofocus and polarization alignment of polarization maintaining fiber is presented. For focusing efficiency and stability, an image-based focusing algorithm fully considering the image definition evaluation and the focusing search strategy was used to accomplish autofocus. For improving the alignment accuracy, various image-based algorithms of alignment detection were developed with high calculation speed and strong robustness. The instrument can be operated as a standalone device with real-time processing and convenience operations. The hardware construction, software interface, and image-based algorithms of main modules are described. Additionally, several image simulation experiments were also carried out to analyze the accuracy of the above alignment detection algorithms. Both the simulation results and experiment results indicate that the instrument can achieve the accuracy of polarization alignment <±0.1 deg.

  12. Design and Implementation of Multifunctional Automatic Drilling End Effector

    NASA Astrophysics Data System (ADS)

    Wang, Zhanxi; Qin, Xiansheng; Bai, Jing; Tan, Xiaoqun; Li, Jing

    2017-03-01

    In order to realize automatic drilling in aircraft assembly, a drilling end effector is designed by integrating the pressure unit, drilling unit, measurement unit, control system and frame structure. To reduce the hole deviation, this paper proposes a vertical normal adjustment scheme based on 4 laser distance sensors. The actual normal direction of the workpiece surface can be calculated from the sensor measurements, and the robot posture is then adjusted to correct the hole deviation, as sketched below. A base detection method is proposed to detect and locate the hole automatically by using the camera and the reference hole. The experimental results show that the position accuracy of the system is less than 0.3 mm, and the normal precision is less than 0.5°. The drilling end effector and robot can greatly improve the efficiency of aircraft parts assembly and the assembly quality, and shorten the product development cycle.
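    A minimal sketch of the normal-adjustment step, assuming a hypothetical sensor layout: the four laser distance sensors sit at known offsets around the drill axis, and a least-squares plane through their measured contact points yields the surface normal and the tilt angle to be corrected.

```python
# The sensor layout and sign conventions below are assumptions for
# illustration, not the paper's actual configuration.
import numpy as np

def surface_normal(sensor_xy, distances):
    """Fit z = a*x + b*y + c through the four measured points; return the unit normal."""
    x, y = np.asarray(sensor_xy, dtype=float).T
    z = np.asarray(distances, dtype=float)
    A = np.column_stack([x, y, np.ones_like(x)])
    (a, b, _), *_ = np.linalg.lstsq(A, z, rcond=None)
    n = np.array([-a, -b, 1.0])
    return n / np.linalg.norm(n)

# Four sensors on a 60 mm radius circle, one per quadrant (hypothetical layout),
# each returning a distance in mm to the workpiece surface.
xy = [(60, 0), (0, 60), (-60, 0), (0, -60)]
n = surface_normal(xy, [100.2, 100.0, 99.8, 100.0])
tilt_deg = np.degrees(np.arccos(n[2]))   # deviation of the surface normal from the drill axis
```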

  13. Cyanomethanimine Isomers in Cold Interstellar Clouds: Insights from Electronic Structure and Kinetic Calculations

    NASA Astrophysics Data System (ADS)

    Vazart, Fanny; Latouche, Camille; Skouteris, Dimitrios; Balucani, Nadia; Barone, Vincenzo

    2015-09-01

    New insights into the formation of interstellar cyanomethanimine, a species of great relevance in prebiotic chemistry, are provided by electronic structure and kinetic calculations for the reaction CN + CH2=NH. This reaction is a facile formation route of Z,E-C-cyanomethanimine, even under the extreme conditions of density and temperature typical of cold interstellar clouds. E-C-cyanomethanimine has been recently identified in Sgr B2(N) in the Green Bank Telescope (GBT) PRIMOS survey by P. Zaleski et al. and no efficient formation routes have been envisaged so far. The rate coefficient expression for the reaction channel leading to the observed isomer E-C-cyanomethanimine is 3.15 × 10^-10 × (T/300)^0.152 × exp(-0.0948/T). According to the present study, the more stable Z-C-cyanomethanimine isomer is formed with a slightly larger yield (4.59 × 10^-10 × (T/300)^0.153 × exp(-0.0871/T)). As the detection of E-isomer is favored due to its larger dipole moment, the missing detection of the Z-isomer can be due to the sensitivity limit of the GBT PRIMOS survey and the detection of the Z-isomer should be attempted with more sensitive instrumentation. The CN + CH2=NH reaction can also play a role in the chemistry of the upper atmosphere of Titan where the cyanomethanimine products can contribute to the buildup of the observed nitrogen-rich organic aerosols that cover the moon.
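    The two rate coefficient expressions quoted above can be evaluated directly at cold-cloud temperatures; the snippet below assumes the usual units of cm^3 molecule^-1 s^-1 for these fits.

```python
# Evaluation of the modified-Arrhenius fits quoted in the abstract
# (units assumed to be cm^3 molecule^-1 s^-1, the usual convention).
import math

def k_E_isomer(T):   # channel leading to the observed E-C-cyanomethanimine
    return 3.15e-10 * (T / 300.0) ** 0.152 * math.exp(-0.0948 / T)

def k_Z_isomer(T):   # channel leading to the more stable Z isomer
    return 4.59e-10 * (T / 300.0) ** 0.153 * math.exp(-0.0871 / T)

for T in (10, 100, 300):   # cold-cloud to room temperature, in kelvin
    print(T, k_E_isomer(T), k_Z_isomer(T))
```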

  14. Rapid Magnetic Resonance Imaging for Diagnosing Cancer-related Low Back Pain

    PubMed Central

    Hollingworth, William; Gray, Darryl T; Martin, Brook I; Sullivan, Sean D; Deyo, Richard A; Jarvik, Jeffrey G

    2003-01-01

    OBJECTIVES This study compared the relative efficiency of lumbar x-ray and rapid magnetic resonance (MR) imaging for diagnosing cancer-related low back pain (LBP) in primary care patients. DESIGN We developed a decision model with Markov state transitions to calculate the cost per case detected and cost per quality-adjusted life year (QALY) of rapid MR imaging. Model parameters were estimated from the medical literature. The costs of x-ray and rapid MR were calculated in an activity-based costing study. SETTING AND PATIENTS A hypothetical cohort of primary care patients with LBP referred for imaging to exclude cancer as the cause of their pain. MAIN RESULTS The rapid MR strategy was more expensive due to higher initial imaging costs and larger numbers of patients requiring conventional MR and biopsy. The overall sensitivity of the rapid MR strategy was higher than that of the x-ray strategy (62% vs 55%). However, because of low pre-imaging prevalence of cancer-related LBP, this generates <1 extra case per 1,000 patients imaged. Therefore, the incremental cost per case detected using rapid MR was high ($213,927). The rapid MR strategy resulted in a small increase in quality-adjusted survival (0.00043 QALYs). The estimated incremental cost per QALY for the rapid MR strategy was $296,176. CONCLUSIONS There is currently not enough evidence to support the routine use of rapid MR to detect cancer as a cause of LBP in primary care patients. PMID:12709099
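    The reported ratios fit together with simple arithmetic; the back-calculated per-patient incremental cost below is an illustration derived from the abstract's figures, not a number reported by the study.

```python
# How the incremental cost-effectiveness ratios in the abstract fit together;
# the per-patient incremental cost is back-calculated here for illustration.
inc_qalys = 0.00043                      # extra QALYs per patient with rapid MR
cost_per_qaly = 296_176                  # reported ICER ($ per QALY)
inc_cost = inc_qalys * cost_per_qaly     # implied extra cost per patient, ~$127

cost_per_case = 213_927                  # reported $ per extra cancer detected
extra_cases_per_1000 = 1000 * inc_cost / cost_per_case   # ~0.6, i.e. <1 per 1,000
print(round(inc_cost, 2), round(extra_cases_per_1000, 2))
```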

  15. Propulsive efficiency of frog swimming with different feet and swimming patterns

    PubMed Central

    Jizhuang, Fan; Wei, Zhang; Bowen, Yuan; Gangfeng, Liu

    2017-01-01

    Aquatic and terrestrial animals have different swimming performances and mechanical efficiencies based on their different swimming methods. To explore propulsion in swimming frogs, this study calculated mechanical efficiencies based on data describing aquatic and terrestrial webbed-foot shapes and swimming patterns. First, a simplified frog model and dynamic equation were established, and hydrodynamic forces on the foot were computed using computational fluid dynamics calculations. Then, a two-link mechanism was used to stand in for the diverse and complicated hind legs found in different frog species, in order to simplify the input work calculation. Joint torques were derived based on the virtual work principle to compute the efficiency of foot propulsion. Finally, the two foot shapes and swimming patterns were combined to compute propulsive efficiency. The aquatic frog demonstrated a propulsive efficiency (43.11%) between those of drag-based and lift-based propulsions, while the terrestrial frog efficiency (29.58%) fell within the range of drag-based propulsion. The results illustrate that the swimming pattern is the main factor determining swimming performance and efficiency. PMID:28302669

  16. Evaluation of vacuum filter sock surface sample collection method for Bacillus spores from porous and non-porous surfaces.

    PubMed

    Brown, Gary S; Betty, Rita G; Brockmann, John E; Lucero, Daniel A; Souza, Caroline A; Walsh, Kathryn S; Boucher, Raymond M; Tezak, Matthew S; Wilson, Mollye C

    2007-07-01

    Vacuum filter socks were evaluated for recovery efficiency of powdered Bacillus atrophaeus spores from two non-porous surfaces, stainless steel and painted wallboard and two porous surfaces, carpet and bare concrete. Two surface coupons were positioned side-by-side and seeded with aerosolized Bacillus atrophaeus spores. One of the surfaces, a stainless steel reference coupon, was sized to fit into a sample vial for direct spore removal, while the other surface, a sample surface coupon, was sized for a vacuum collection application. Deposited spore material was directly removed from the reference coupon surface and cultured for enumeration of colony forming units (CFU), while deposited spore material was collected from the sample coupon using the vacuum filter sock method, extracted by sonication and cultured for enumeration. Recovery efficiency, which is a measure of overall transfer effectiveness from the surface to culture, was calculated as the number of CFU enumerated from the filter sock sample per unit area relative to the number of CFU enumerated from the co-located reference coupon per unit area. The observed mean filter sock recovery efficiency from stainless steel was 0.29 (SD = 0.14, n = 36), from painted wallboard was 0.25 (SD = 0.15, n = 36), from carpet was 0.28 (SD = 0.13, n = 40) and from bare concrete was 0.19 (SD = 0.14, n = 44). Vacuum filter sock recovery quantitative limits of detection were estimated at 105 CFU m(-2) from stainless steel and carpet, 120 CFU m(-2) from painted wallboard and 160 CFU m(-2) from bare concrete. The method recovery efficiency and limits of detection established in this work provide useful guidance for the planning of incident response environmental sampling for biological agents such as Bacillus anthracis.
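    Recovery efficiency as defined here is a ratio of area-normalized counts; the short sketch below uses invented counts and coupon areas purely to illustrate the calculation.

```python
# Recovery efficiency as defined in the abstract: CFU per unit area from the
# vacuum filter sock divided by CFU per unit area from the co-located reference
# coupon. The counts and areas below are made-up illustration values.
def recovery_efficiency(cfu_sample, area_sample_m2, cfu_reference, area_reference_m2):
    return (cfu_sample / area_sample_m2) / (cfu_reference / area_reference_m2)

print(recovery_efficiency(cfu_sample=1450, area_sample_m2=0.093,
                          cfu_reference=520, area_reference_m2=0.0097))  # ~0.29
```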

  17. Real-time PCR detection and quantification of nine potential sources of fecal contamination by analysis of mitochondrial Cytochrome b targets

    USGS Publications Warehouse

    Schill, W.B.; Mathes, M.V.

    2008-01-01

    We designed and tested real-time PCR probe/primer sets to detect and quantify Cytochrome b sequences of mitochondrial DNA (mtDNA) from nine vertebrate species of pet (dog), farm (cow, chicken, sheep, horse, pig), wildlife (Canada goose, white-tailed deer), and human. Linear ranges of the assays were from 10^1 to 10^8 copies/μl. To formally test the performance of the assays, twenty blinded fecal suspension samples were analyzed by real-time PCR to identify the source of the feces. Sixteen of the twenty samples were correctly and unambiguously identified. Average sensitivity was calculated to be 0.850, while average specificity was found to be 0.994. One beef cow sample was not detected, but mtDNA from 11 other beef cattle of both sexes and varying physiological states was found in concentrations similar (3.45 × 10^7 copies/g) to that found in human feces (1.1 × 10^7 copies/g). Thus, environmental conditions and sample handling are probably important factors for successful detection of fecal mtDNA. When sewage samples were analyzed, only human mtDNA (7.2 × 10^4 copies/100 mL) was detected. With a detection threshold of 250 copies/reaction, an efficient concentration and purification method resulted in a final detection limit for human feces of 1.8 mg/100 mL water.

  18. Real-time measurements of radon activity with the Timepix-based RADONLITE and RADONPIX detectors

    NASA Astrophysics Data System (ADS)

    Caresana, M.; Garlati, L.; Murtas, F.; Romano, S.; Severino, C. T.; Silari, M.

    2014-11-01

    Radon gas is the most important source of ionizing radiation among those of natural origin. Two new systems for radon measurement based on the Timepix silicon detector were developed. The positively charged radon daughters are electrostatically collected on the surface of the Si detector and their energy spectrum measured. Pattern recognition of the tracks on the sensor and particle identification are used to determine number and energy of the alpha particles and to subtract the background, allowing for efficient radon detection. The systems include an algorithm for real-time measurement of the radon concentration and the calculation of the effective dose to the lungs.

  19. Full-Scale Incineration System Demonstration Verification Test Burns at the Naval Battalion Construction Center, Gulfport, Mississippi. Volume 3. Treatability Tests. Part 1

    DTIC Science & Technology

    1991-07-01

    for archive. b. Except where noted, includes 2,3,7,8-TCDD, 2,3,7,8-TCDF, and total PCDD/PCDF. c. Except where noted, includes acid-type semivolatiles...TCDD-37Cl4, PeCDD-13C12, HpCDD-13C12, OCDD-13C12, and PeCDF-13C12 were used to calculate the accuracy of recovery efficiencies. Whereas for...burns are shown in Table 17. None of these PCDD congeners were detected, including the specific analysis for 2,3,7,8-TCDD. DLVs ranged between 30.02 and

  20. Improved atmospheric effect elimination method for the roughness estimation of painted surfaces.

    PubMed

    Zhang, Ying; Xuan, Jiabin; Zhao, Huijie; Song, Ping; Zhang, Yi; Xu, Wujian

    2018-03-01

    We propose a method for eliminating the atmospheric effect in polarimetric imaging remote sensing by using polarimetric imagers to simultaneously detect ground targets and skylight, which does not need calibrated targets. In addition, calculation efficiencies are improved by the skylight division method without losing estimation accuracy. Outdoor experiments are performed to obtain the polarimetric bidirectional reflectance distribution functions of painted surfaces and skylight under different weather conditions. Finally, the roughness of the painted surfaces is estimated. We find that the estimation accuracy with the proposed method is 6% on cloudy weather, while it is 30.72% without atmospheric effect elimination.

  1. Constraining the interaction between dark sectors with future HI intensity mapping observations

    NASA Astrophysics Data System (ADS)

    Xu, Xiaodong; Ma, Yin-Zhe; Weltman, Amanda

    2018-04-01

    We study a model of interacting dark matter and dark energy, in which the two components are coupled. We calculate the predictions for the 21-cm intensity mapping power spectra, and forecast the detectability with future single-dish intensity mapping surveys (BINGO, FAST and SKA-I). Since dark energy is turned on at z ˜1 , which falls into the sensitivity range of these radio surveys, the HI intensity mapping technique is an efficient tool to constrain the interaction. By comparing with current constraints on dark sector interactions, we find that future radio surveys will produce tight and reliable constraints on the coupling parameters.

  2. Signal Statistics and Maximum Likelihood Sequence Estimation in Intensity Modulated Fiber Optic Links Containing a Single Optical Pre-amplifier.

    PubMed

    Alić, Nikola; Papen, George; Saperstein, Robert; Milstein, Laurence; Fainman, Yeshaiahu

    2005-06-13

    Exact signal statistics for fiber-optic links containing a single optical pre-amplifier are calculated and applied to sequence estimation for electronic dispersion compensation. The performance is evaluated and compared with results based on the approximate chi-square statistics. We show that detection in existing systems based on exact statistics can be improved relative to using a chi-square distribution for realistic filter shapes. In contrast, for high-spectral efficiency systems the difference between the two approaches diminishes, and performance tends to be less dependent on the exact shape of the filter used.

  3. Watching the coherence of multiple vibrational states in organic dye molecules by using supercontinuum probing photon echo spectroscopy

    NASA Astrophysics Data System (ADS)

    Yu, Guoyang; Song, Yunfei; Wang, Yang; He, Xing; Liu, Yuqiang; Liu, Weilong; Yang, Yanqiang

    2011-12-01

    A modified photon echo (PE) technique, the supercontinuum probing photon echo (SCPPE), is introduced and performed to investigate the vibrational coherence in organic dye IR780 perchlorate doped polyvinyl alcohol (PVA) film. The coherences of multiple vibrational states which belong to four vibrational modes create complex oscillations in SCPPE signal. The frequencies of vibrational modes are confirmed from the results of Raman calculation which accord fairly well with the results of Raman scattering experiment. Compared with conventional one-color PE, the SCPPE technique can realize broadband detection and make the experiment about vibrational coherence more efficient.

  4. (6)Li-loaded liquid scintillators with pulse shape discrimination.

    PubMed

    Greenwood, L R; Chellew, N R; Zarwell, G A

    1979-04-01

    Excellent pulse height and pulse shape discrimination performance has been obtained for liquid scintillators containing as much as 10 wt.% (6)Li-salicylate dissolved in a toluene-methanol solvent system using naphthalene and 9,10-diphenylanthracene as intermediate and secondary solutes. This solution has improved performance at higher (6)Li-loading than solutions in dioxane-water solvent systems, and remains stable at temperatures as low as -10 degrees C. Cells as large as 5 cm in diameter and 15.2 cm deep have been prepared which have a higher light output for slow neutron detection than (10)B-loaded liquids. Neutron efficiency calculations are also presented.

  5. Pharmaceuticals, hormones and bisphenol A in untreated source and finished drinking water in Ontario, Canada--occurrence and treatment efficiency.

    PubMed

    Kleywegt, Sonya; Pileggi, Vince; Yang, Paul; Hao, Chunyan; Zhao, Xiaoming; Rocks, Carline; Thach, Serei; Cheung, Patrick; Whitehead, Brian

    2011-03-15

    The Ontario Ministry of the Environment (MOE) conducted a survey in 2006 on emerging organic contaminants (EOCs) which included pharmaceuticals, hormones and bisphenol A (BPA). The survey collected 258 samples over a 16-month period from selected source waters and 17 drinking water systems (DWSs), and analyzed them for 48 EOCs using liquid chromatography-tandem mass spectrometry (LC-MS/MS) and isotope dilution mass spectrometry (IDMS) for the highest precision and accuracy of analytical data possible. 27 of the 48 target EOCs were detected in source water, finished drinking water, or both. DWSs using river and lake source water accounted for >90% of detections. Of the 27 EOCs found, we also reported the first detection of two antibiotics, roxithromycin and enrofloxacin, in environmental samples. The most frequently detected compounds (≥ 10%) in finished drinking water were carbamazepine (CBZ), gemfibrozil (GFB), ibuprofen (IBU), and BPA, with their concentrations accurately determined by using IDMS and calculated to be 4 to 10 times lower than those measured in the source water. Comparison of plant specific data allowed us to determine the removal efficiency (RE) of these four most frequently detected compounds in Ontario DWSs. The RE of CBZ was determined to be from 71 to 93% for DWSs using granulated activated carbon (GAC); and was 75% for DWSs using GAC followed by ultraviolet irradiation (UV). The observed RE of GFB was between 44 and 55% in DWSs using GAC and increased to 82% when GAC was followed by UV. The use of GAC or GAC followed by UV provided an RE improvement of BPA from 80 to 99%. These detected concentration levels are well below the predicted no effect concentration or total allowable concentration reported in the literature. Additional targeted, site specific comparative research is required to fully assess the effectiveness of Ontario DWSs to remove particular compounds of concern. Copyright © 2011 Elsevier B.V. All rights reserved.

  6. Use of Bacteriophage MS2 as an Internal Control in Viral Reverse Transcription-PCR Assays

    PubMed Central

    Dreier, Jens; Störmer, Melanie; Kleesiek, Knut

    2005-01-01

    Diagnostic systems based on reverse transcription (RT)-PCR are widely used for the detection of viral genomes in different human specimens. The application of internal controls (IC) to monitor each step of nucleic acid amplification is necessary to prevent false-negative results due to inhibition or human error. In this study, we designed various real-time RT-PCRs utilizing the coliphage MS2 replicase gene, which differ in detection format, amplicon size, and efficiency of amplification. These noncompetitive IC assays, using TaqMan, hybridization probe, or duplex scorpion probe techniques, were tested on the LightCycler and Rotorgene systems. In our approach, clinical specimens were spiked with the control virus to monitor the efficiency of extraction, reverse transcription, and amplification steps. The MS2 RT-PCR assays were applied for internal control when using a second target hepatitis C virus RNA in duplex PCR in blood donor screening. The 95% detection limit was calculated by probit analysis to 44.9 copies per PCR (range, 38.4 to 73.4). As demonstrated routinely, application of MS2 IC assays exhibits low variability and can be applied in various RT-PCR assays. MS2 phage lysates were obtained under standard laboratory conditions. The quantification of phage and template RNA was performed by plating assays to determine PFU or via real-time RT-PCR. High stability of the MS2 phage preparations stored at −20°C, 4°C, and room temperature was demonstrated. PMID:16145106
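    The 95% detection limit was obtained by probit analysis; a minimal sketch of that kind of fit, using statsmodels on invented hit/miss replicates rather than the study's data, is shown below.

```python
# Probit estimate of the 95% detection limit (copies per PCR) along the lines
# described in the abstract; the replicate data below are invented placeholders.
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

copies = np.repeat([10, 25, 50, 100, 200], 8)              # input copies per reaction
hit = (np.random.default_rng(1).random(40) <                # simulated hit/miss outcomes
       norm.cdf((np.log10(copies) - np.log10(30)) / 0.25)).astype(int)

X = sm.add_constant(np.log10(copies))
fit = sm.Probit(hit, X).fit(disp=0)                         # P(hit) = Phi(b0 + b1*log10(copies))
b0, b1 = fit.params
lod95 = 10 ** ((norm.ppf(0.95) - b0) / b1)                  # copies giving a 95% hit rate
print(round(lod95, 1))
```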

  7. Interpretation of interference signals in label free integrated interferometric biosensors

    NASA Astrophysics Data System (ADS)

    Heikkinen, Hanna; Wang, Meng; Okkonen, Matti; Hast, Jukka; Myllylä, Risto

    2006-02-01

    In the future, fast, simple and reliable biosensors will be needed to detect various analytes from different biosamples. This is due to the fact that the needs of traditional health care are changing. In the future, home care of patients and people's responsibility for their own health will increase. Also, different wellness applications need new parameters to be analysed, reducing costs of traditional health care, which are increasing rapidly. One fascinating and promising sensor type for these applications is an integrated optical interferometric immunosensor, which is manufactured using organic materials. The use of organic materials opens up enormous possibilities to develop different biochemical functions. In label free biosensors the measurement is based on detecting changes in refractive index, which typically are in the range of 10^-6 to 10^-8 [1]. In this research, theoretically generated interferograms are used to compare various signal processing methods. The goal is to develop an efficient method to analyse the interferogram. Different time domain signal processing methods are studied to determine the measuring resolution and efficiency of these methods. A low cost CCD element is used in detecting the interferogram dynamics. It was found that in most of the signal processing methods the measuring resolution was mainly limited by pixel size. By calculating Pearson's correlation coefficient, subpixel resolution was achieved, which means that nanometer range optical path differences can be measured. This results in a refractive index resolution of the order of 10^-7.

  8. Time-dependent importance sampling in semiclassical initial value representation calculations for time correlation functions.

    PubMed

    Tao, Guohua; Miller, William H

    2011-07-14

    An efficient time-dependent importance sampling method is developed for the Monte Carlo calculation of time correlation functions via the initial value representation (IVR) of semiclassical (SC) theory. A prefactor-free time-dependent sampling function weights the importance of a trajectory based on the magnitude of its contribution to the time correlation function, and global trial moves are used to facilitate efficient sampling of the phase space of initial conditions. The method can be generally applied to sampling rare events efficiently while avoiding being trapped in a local region of the phase space. Results presented in the paper for two system-bath models demonstrate the efficiency of this new importance sampling method for full SC-IVR calculations.

  9. Photon-number-resolving SSPDs with system detection efficiency over 50% at telecom range

    NASA Astrophysics Data System (ADS)

    Zolotov, P.; Divochiy, A.; Vakhtomin, Yu.; Moshkova, M.; Morozov, P.; Seleznev, V.; Smirnov, K.

    2018-02-01

    We used technology of making high-efficiency superconducting single-photon detectors as a basis for improvement of photon-number-resolving devices. By adding optical cavity and using an improved NbN superconducting film, we enhanced previously reported system detection efficiency at telecom range for such detectors. Our results show that implementation of optical cavity helps to develop four-section device with quantum efficiency over 50% at 1.55 µm. Performed experimental studies of detecting multi-photon optical pulses showed irregularities over defining multi-photon through single-photon quantum efficiency.

  10. Investigation of theoretical efficiency limit of hot carriers solar cells with a bulk indium nitride absorber

    NASA Astrophysics Data System (ADS)

    Aliberti, P.; Feng, Y.; Takeda, Y.; Shrestha, S. K.; Green, M. A.; Conibeer, G.

    2010-11-01

    Theoretical efficiencies of a hot carrier solar cell considering indium nitride as the absorber material have been calculated in this work. In a hot carrier solar cell highly energetic carriers are extracted from the device before thermalisation, allowing higher efficiencies in comparison to conventional solar cells. Previous reports on efficiency calculations approached the problem using two different theoretical frameworks, the particle conservation (PC) model or the impact ionization model, which are only valid in particular extreme conditions. In addition an ideal absorber material with the approximation of parabolic bands has always been considered in the past. Such assumptions give an overestimation of the efficiency limits and results can only be considered indicative. In this report the real properties of wurtzite bulk InN absorber have been taken into account for the calculation, including the actual dispersion relation and absorbance. A new hybrid model that considers particle balance and energy balance at the same time has been implemented. Effects of actual impact ionization (II) and Auger recombination (AR) lifetimes have been included in the calculations for the first time, considering the real InN band structure and thermalisation rates. It has been observed that II-AR mechanisms are useful for cell operation in particular conditions, allowing energy redistribution of hot carriers. A maximum efficiency of 43.6% has been found for 1000 suns, assuming thermalisation constants of 100 ps and ideal blackbody absorption. This value of efficiency is considerably lower than values previously calculated adopting PC or II-AR models.

  11. Intervertebral disc detection in X-ray images using faster R-CNN.

    PubMed

    Ruhan Sa; Owens, William; Wiegand, Raymond; Studin, Mark; Capoferri, Donald; Barooha, Kenneth; Greaux, Alexander; Rattray, Robert; Hutton, Adam; Cintineo, John; Chaudhary, Vipin

    2017-07-01

    Automatic identification of specific osseous landmarks on the spinal radiograph can be used to automate calculations for correcting ligament instability and injury, which affect 75% of patients injured in motor vehicle accidents. In this work, we propose to use a deep learning based object detection method as the first step towards identifying landmark points in lateral lumbar X-ray images. The significant breakthrough of deep learning technology has made it a prevailing choice for perception based applications; however, the lack of large annotated training datasets has brought challenges to utilizing the technology in the medical image processing field. In this work, we propose to fine-tune a deep network, Faster-RCNN, a state-of-the-art deep detection network in the natural image domain, using small annotated clinical datasets. In the experiment we show that, by using only 81 lateral lumbar X-ray training images, one can achieve much better performance compared to a traditional sliding-window detection method on hand-crafted features. Furthermore, we fine-tuned the network using 974 training images and tested on 108 images, achieving an average precision of 0.905 with an average computation time of 3 seconds per image, which greatly outperformed traditional methods in terms of accuracy and efficiency.
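    A minimal torchvision-style sketch of the fine-tuning idea is given below; it is a generic illustration (model choice, class count and names are assumptions), not the authors' training setup.

```python
# Generic sketch: load a Faster R-CNN pretrained on natural images and replace
# its box predictor for two classes (background + intervertebral disc).
# Dataset loading and the training loop are omitted; names are illustrative.
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

def build_disc_detector(num_classes=2):
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    return model

model = build_disc_detector()
# Training step (schematic): for images, targets in loader:
#     losses = model(images, targets); sum(losses.values()).backward(); ...
```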

  12. Remote detection of single emitters via optical waveguides

    NASA Astrophysics Data System (ADS)

    Then, Patrick; Razinskas, Gary; Feichtner, Thorsten; Haas, Philippe; Wild, Andreas; Bellini, Nicola; Osellame, Roberto; Cerullo, Giulio; Hecht, Bert

    2014-05-01

    The integration of lab-on-a-chip technologies with single-molecule detection techniques may enable new applications in analytical chemistry, biotechnology, and medicine. We describe a method based on the reciprocity theorem of electromagnetic theory to determine and optimize the detection efficiency of photons emitted by single quantum emitters through truncated dielectric waveguides of arbitrary shape positioned in their proximity. We demonstrate experimentally that detection of single quantum emitters via such waveguides is possible, confirming the predicted behavior of the detection efficiency. Our findings blaze the trail towards efficient lensless single-emitter detection compatible with large-scale optofluidic integration.

  13. Organ-specific SPECT activity calibration using 3D printed phantoms for molecular radiotherapy dosimetry.

    PubMed

    Robinson, Andrew P; Tipping, Jill; Cullen, David M; Hamilton, David; Brown, Richard; Flynn, Alex; Oldfield, Christopher; Page, Emma; Price, Emlyn; Smith, Andrew; Snee, Richard

    2016-12-01

    Patient-specific absorbed dose calculations for molecular radiotherapy require accurate activity quantification. This is commonly derived from Single-Photon Emission Computed Tomography (SPECT) imaging using a calibration factor relating detected counts to known activity in a phantom insert. A series of phantom inserts, based on the mathematical models underlying many clinical dosimetry calculations, have been produced using 3D printing techniques. SPECT/CT data for the phantom inserts has been used to calculate new organ-specific calibration factors for (99m) Tc and (177)Lu. The measured calibration factors are compared to predicted values from calculations using a Gaussian kernel. Measured SPECT calibration factors for 3D printed organs display a clear dependence on organ shape for (99m) Tc and (177)Lu. The observed variation in calibration factor is reproduced using Gaussian kernel-based calculation over two orders of magnitude change in insert volume for (99m) Tc and (177)Lu. These new organ-specific calibration factors show a 24, 11 and 8 % reduction in absorbed dose for the liver, spleen and kidneys, respectively. Non-spherical calibration factors from 3D printed phantom inserts can significantly improve the accuracy of whole organ activity quantification for molecular radiotherapy, providing a crucial step towards individualised activity quantification and patient-specific dosimetry. 3D printed inserts are found to provide a cost effective and efficient way for clinical centres to access more realistic phantom data.

  14. Simultaneous measurement of 2-dimensional H2O concentration and temperature distribution in premixed methane/air flame using TDLAS-based tomography technology

    NASA Astrophysics Data System (ADS)

    Wang, Fei; Wu, Qi; Huang, Qunxing; Zhang, Haidan; Yan, Jianhua; Cen, Kefa

    2015-07-01

    An innovative tomographic method using tunable diode laser absorption spectroscopy (TDLAS) and the algebraic reconstruction technique (ART) is presented in this paper for detecting the two-dimensional distribution of H2O concentration and temperature in a premixed flame. The collimated laser beam emitted from a low cost diode laser module was delicately split into 24 sub-beams passing through the flame from different angles, and the acquired laser absorption signals were used to retrieve flame temperature and H2O concentration simultaneously. The efficiency of the proposed reconstruction system and the effect of measurement noise were numerically evaluated. The temperature and H2O concentration in flat methane/air premixed flames under three different equivalence ratios were experimentally measured and reconstruction results were compared with model calculations. Numerical assessments indicate that the TDLAS tomographic system is capable of detecting temperature and H2O concentration profiles even when the noise strength reaches 3% of the absorption signal. Experimental results under different combustion conditions are well demonstrated along the vertical direction and the distribution profiles are in good agreement with model calculations. The proposed method exhibits great potential for 2-D or 3-D combustion diagnostics including non-uniform flames.
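    The reconstruction step can be illustrated with a generic ART (Kaczmarz) update; the system matrix of beam path lengths and the measured integrated absorbances in the sketch below are assumed inputs from the optical layout, and the relaxation factor is arbitrary.

```python
# Generic ART/Kaczmarz update of the kind used to invert line-of-sight
# absorbances into a 2-D field; A holds the path length of each beam through
# each grid cell and b the measured integrated absorbances (assumed inputs).
import numpy as np

def art(A, b, n_iter=50, relax=0.1, x0=None):
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    x = np.zeros(A.shape[1]) if x0 is None else np.asarray(x0, dtype=float).copy()
    row_norms = (A * A).sum(axis=1)
    for _ in range(n_iter):
        for i in range(A.shape[0]):                      # one ray at a time
            if row_norms[i] == 0:
                continue
            x += relax * (b[i] - A[i] @ x) / row_norms[i] * A[i]
            np.clip(x, 0, None, out=x)                   # field values cannot be negative
    return x
```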

  15. An efficient Monte Carlo-based algorithm for scatter correction in keV cone-beam CT

    NASA Astrophysics Data System (ADS)

    Poludniowski, G.; Evans, P. M.; Hansen, V. N.; Webb, S.

    2009-06-01

    A new method is proposed for scatter-correction of cone-beam CT images. A coarse reconstruction is used in initial iteration steps. Modelling of the x-ray tube spectra and detector response are included in the algorithm. Photon diffusion inside the imaging subject is calculated using the Monte Carlo method. Photon scoring at the detector is calculated using forced detection to a fixed set of node points. The scatter profiles are then obtained by linear interpolation. The algorithm is referred to as the coarse reconstruction and fixed detection (CRFD) technique. Scatter predictions are quantitatively validated against a widely used general-purpose Monte Carlo code: BEAMnrc/EGSnrc (NRCC, Canada). Agreement is excellent. The CRFD algorithm was applied to projection data acquired with a Synergy XVI CBCT unit (Elekta Limited, Crawley, UK), using RANDO and Catphan phantoms (The Phantom Laboratory, Salem NY, USA). The algorithm was shown to be effective in removing scatter-induced artefacts from CBCT images, and took as little as 2 min on a desktop PC. Image uniformity was greatly improved as was CT-number accuracy in reconstructions. This latter improvement was less marked where the expected CT-number of a material was very different to the background material in which it was embedded.

  16. Telescope aperture optimization for spacebased coherent wind lidar

    NASA Astrophysics Data System (ADS)

    Ge, Xian-ying; Zhu, Jun; Cao, Qipeng; Zhang, Yinchao; Yin, Huan; Dong, Xiaojing; Wang, Chao; Zhang, Yongchao; Zhang, Ning

    2015-08-01

    Many studies have indicated that the optimum measurement approach for winds from space is a pulsed coherent wind lidar, an active remote sensing tool characterized by high spatial and temporal resolution, real-time detection, high mobility and facilitated control. Because of their significant eye safety, efficiency, size, and lifetime advantages, 2 μm wavelength solid-state laser lidar systems have attracted much attention in spacebased wind lidar plans. In this paper, the theory of coherent detection is presented and a 2 μm wavelength solid-state laser lidar system is introduced; the ideal aperture is then calculated from a signal-to-noise ratio (SNR) viewpoint at a 400 km orbit. However, in real applications, even if the lidar hardware is perfectly aligned, the directional jitter of the laser beam, the attitude change of the lidar during the long round-trip time of the light from the atmosphere and other factors can introduce a misalignment angle. The influence of the misalignment angle is therefore considered and calculated, and the optimum telescope diameter (0.45 m) is obtained when the misalignment angle is 4 μrad. Through this analysis of the optimum aperture required for a spacebased coherent wind lidar system, we aim to provide design guidance for the telescope.

  17. Automatic system testing of a decision support system for insulin dosing using Google Android.

    PubMed

    Spat, Stephan; Höll, Bernhard; Petritsch, Georg; Schaupp, Lukas; Beck, Peter; Pieber, Thomas R

    2013-01-01

    Hyperglycaemia in hospitalized patients is a common and costly health care problem. The GlucoTab system is a mobile workflow and decision support system, aiming to facilitate efficient and safe glycemic control of non-critically ill patients. Being a medical device, the GlucoTab requires extensive and reproducible testing. A framework for high-volume, reproducible and automated system testing of the GlucoTab system was set up applying several Open Source tools for test automation and system time handling. The REACTION insulin titration protocol was investigated in a paper-based clinical trial (PBCT). In order to validate the GlucoTab system, data from this trial was used for simulation and system tests. In total, 1190 decision support action points were identified and simulated. Four data points (0.3%) resulted in a GlucoTab system error caused by a defective implementation. In 144 data points (12.1%), calculation errors of physicians and nurses in the PBCT were detected. The test framework was able to verify manual calculation of insulin doses and detect relatively many user errors and workflow anomalies in the PBCT data. This shows the high potential of the electronic decision support application to improve safety of implementation of an insulin titration protocol and workflow management system in clinical wards.

  18. Automatic lumbar spine measurement in CT images

    NASA Astrophysics Data System (ADS)

    Mao, Yunxiang; Zheng, Dong; Liao, Shu; Peng, Zhigang; Yan, Ruyi; Liu, Junhua; Dong, Zhongxing; Gong, Liyan; Zhou, Xiang Sean; Zhan, Yiqiang; Fei, Jun

    2017-03-01

    Accurate lumbar spine measurement in CT images provides an essential way for quantitative spinal diseases analysis such as spondylolisthesis and scoliosis. In today's clinical workflow, the measurements are manually performed by radiologists and surgeons, which is time consuming and irreproducible. Therefore, automatic and accurate lumbar spine measurement algorithm becomes highly desirable. In this study, we propose a method to automatically calculate five different lumbar spine measurements in CT images. There are three main stages of the proposed method: First, a learning based spine labeling method, which integrates both the image appearance and spine geometry information, is used to detect lumbar and sacrum vertebrae in CT images. Then, a multiatlases based image segmentation method is used to segment each lumbar vertebra and the sacrum based on the detection result. Finally, measurements are derived from the segmentation result of each vertebra. Our method has been evaluated on 138 spinal CT scans to automatically calculate five widely used clinical spine measurements. Experimental results show that our method can achieve more than 90% success rates across all the measurements. Our method also significantly improves the measurement efficiency compared to manual measurements. Besides benefiting the routine clinical diagnosis of spinal diseases, our method also enables the large scale data analytics for scientific and clinical researches.

  19. Minimum detectable gas concentration performance evaluation method for gas leak infrared imaging detection systems.

    PubMed

    Zhang, Xu; Jin, Weiqi; Li, Jiakun; Wang, Xia; Li, Shuo

    2017-04-01

    Thermal imaging technology is an effective means of detecting hazardous gas leaks. Much attention has been paid to evaluation of the performance of gas leak infrared imaging detection systems due to several potential applications. The minimum resolvable temperature difference (MRTD) and the minimum detectable temperature difference (MDTD) are commonly used as the main indicators of thermal imaging system performance. This paper establishes a minimum detectable gas concentration (MDGC) performance evaluation model based on the definition and derivation of MDTD. We proposed the direct calculation and equivalent calculation method of MDGC based on the MDTD measurement system. We build an experimental MDGC measurement system, which indicates the MDGC model can describe the detection performance of a thermal imaging system to typical gases. The direct calculation, equivalent calculation, and direct measurement results are consistent. The MDGC and the minimum resolvable gas concentration (MRGC) model can effectively describe the performance of "detection" and "spatial detail resolution" of thermal imaging systems to gas leak, respectively, and constitute the main performance indicators of gas leak detection systems.

  20. A Cross-Layer, Anomaly-Based IDS for WSN and MANET

    PubMed Central

    Amouri, Amar; Manthena, Raju

    2018-01-01

    Intrusion detection system (IDS) design for mobile adhoc networks (MANET) is a crucial component for maintaining the integrity of the network. The need for rapid deployment of IDS capability with minimal data availability for training and testing is an important requirement of such systems, especially for MANETs deployed in highly dynamic scenarios, such as battlefields. This work proposes a two-level detection scheme for detecting malicious nodes in MANETs. The first level deploys dedicated sniffers working in promiscuous mode. Each sniffer utilizes a decision-tree-based classifier that generates quantities which we refer to as correctly classified instances (CCIs) every reporting time. In the second level, the CCIs are sent to an algorithmically run supernode that calculates quantities, which we refer to as the accumulated measure of fluctuation (AMoF) of the received CCIs for each node under test (NUT). A key concept that is used in this work is that the variability of the smaller size population which represents the number of malicious nodes in the network is greater than the variance of the larger size population which represents the number of normal nodes in the network. A linear regression process is then performed in parallel with the calculation of the AMoF for fitting purposes and to set a proper threshold based on the slope of the fitted lines. As a result, the malicious nodes are efficiently and effectively separated from the normal nodes. The proposed scheme is tested for various node velocities and power levels and shows promising detection performance even at low-power levels. The results presented also apply to wireless sensor networks (WSN) and represent a novel IDS scheme for such networks. PMID:29470446

  1. A Cross-Layer, Anomaly-Based IDS for WSN and MANET.

    PubMed

    Amouri, Amar; Morgera, Salvatore D; Bencherif, Mohamed A; Manthena, Raju

    2018-02-22

    Intrusion detection system (IDS) design for mobile adhoc networks (MANET) is a crucial component for maintaining the integrity of the network. The need for rapid deployment of IDS capability with minimal data availability for training and testing is an important requirement of such systems, especially for MANETs deployed in highly dynamic scenarios, such as battlefields. This work proposes a two-level detection scheme for detecting malicious nodes in MANETs. The first level deploys dedicated sniffers working in promiscuous mode. Each sniffer utilizes a decision-tree-based classifier that generates quantities which we refer to as correctly classified instances (CCIs) every reporting time. In the second level, the CCIs are sent to an algorithmically run supernode that calculates quantities, which we refer to as the accumulated measure of fluctuation (AMoF) of the received CCIs for each node under test (NUT). A key concept that is used in this work is that the variability of the smaller size population which represents the number of malicious nodes in the network is greater than the variance of the larger size population which represents the number of normal nodes in the network. A linear regression process is then performed in parallel with the calculation of the AMoF for fitting purposes and to set a proper threshold based on the slope of the fitted lines. As a result, the malicious nodes are efficiently and effectively separated from the normal nodes. The proposed scheme is tested for various node velocities and power levels and shows promising detection performance even at low-power levels. The results presented also apply to wireless sensor networks (WSN) and represent a novel IDS scheme for such networks.
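    The abstract does not give the exact definition of the AMoF; the sketch below uses the cumulative absolute deviation of each node's CCI stream from its running mean as a stand-in, and flags nodes by the slope of a fitted line, as described.

```python
# Sketch of the second-level logic: accumulate a fluctuation measure of each
# node's CCI stream and fit a line to it; nodes whose slope exceeds a threshold
# are flagged. The AMoF definition here is a stand-in, not the paper's formula.
import numpy as np

def amof(cci_series):
    cci = np.asarray(cci_series, dtype=float)
    running_mean = np.cumsum(cci) / np.arange(1, len(cci) + 1)
    return np.cumsum(np.abs(cci - running_mean))

def slope(series):
    t = np.arange(len(series))
    return np.polyfit(t, series, 1)[0]     # slope of the fitted line

def flag_malicious(cci_per_node, threshold):
    return {node: slope(amof(series)) > threshold
            for node, series in cci_per_node.items()}
```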

  2. An evolutionary computation based algorithm for calculating solar differential rotation by automatic tracking of coronal bright points

    NASA Astrophysics Data System (ADS)

    Shahamatnia, Ehsan; Dorotovič, Ivan; Fonseca, Jose M.; Ribeiro, Rita A.

    2016-03-01

    Developing specialized software tools is essential to support studies of solar activity evolution. With new space missions such as Solar Dynamics Observatory (SDO), solar images are being produced in unprecedented volumes. To capitalize on that huge data availability, the scientific community needs a new generation of software tools for automatic and efficient data processing. In this paper, a prototype of a modular framework for solar feature detection, characterization, and tracking is presented. To develop an efficient system capable of automatic solar feature tracking and measuring, a hybrid approach combining specialized image processing, evolutionary optimization, and soft computing algorithms is being followed. The specialized hybrid algorithm for tracking solar features allows automatic feature tracking while gathering characterization details about the tracked features. The hybrid algorithm takes advantage of the snake model, a specialized image processing algorithm widely used in applications such as boundary delineation, image segmentation, and object tracking. Further, it exploits the flexibility and efficiency of Particle Swarm Optimization (PSO), a stochastic population based optimization algorithm. PSO has been used successfully in a wide range of applications including combinatorial optimization, control, clustering, robotics, scheduling, and image processing and video analysis applications. The proposed tool, denoted PSO-Snake model, was already successfully tested in other works for tracking sunspots and coronal bright points. In this work, we discuss the application of the PSO-Snake algorithm for calculating the sidereal rotational angular velocity of the solar corona. To validate the results, we compare them with published manual results performed by an expert.

  3. Towards radiation hard converter material for SiC-based fast neutron detectors

    NASA Astrophysics Data System (ADS)

    Tripathi, S.; Upadhyay, C.; Nagaraj, C. P.; Venkatesan, A.; Devan, K.

    2018-05-01

    In the present work, Geant4 Monte Carlo simulations have been carried out to study the neutron detection efficiency of various neutron-to-charged-particle (recoil proton) converter materials. The converter material is placed over silicon carbide (SiC) in fast neutron detectors (FNDs) to achieve higher neutron detection efficiency compared to bare SiC FNDs. A hydrogenous converter material such as high-density polyethylene (HDPE) is preferred over other converter materials by virtue of its high elastic scattering cross-section for fast neutron detection at room temperature. Upon interaction with fast neutrons, the hydrogenous converter material generates recoil protons which liberate electron-hole pairs in the active region of the SiC detector to provide a detector signal. The neutron detection efficiency offered by the HDPE converter is compared with several other hydrogenous materials, viz. 1) lithium hydride (LiH), 2) perylene, 3) PTCDA. It is found that HDPE, though providing the highest efficiency among the studied materials, cannot withstand high temperature and a harsh radiation environment. On the other hand, perylene and PTCDA can sustain harsh environments but yield low efficiency. The analysis reveals that LiH is a better material for neutron-to-charged-particle conversion, with adequate efficiency and the desired radiation hardness. Further, the thickness of LiH has also been optimized for various mono-energetic neutron beams and an Am-Be neutron source generating a neutron fluence of 10^9 neutrons/cm^2. The optimized thickness of the LiH converter for fast neutron detection is found to be ~500 μm. However, the estimated efficiency for fast neutron detection is only 0.1%, which is deemed inadequate for reliable detection of neutrons. A sensitivity study has also been performed investigating the effect of the gamma background on the neutron detection efficiency for various energy thresholds of the low-level discriminator (LLD). The detection efficiency of a stacked structure concept has been explored by juxtaposing several converter-detector layers to improve the efficiency of LiH-SiC-based FNDs. It is observed that an approximately tenfold efficiency improvement has been achieved: 0.93% for the ten-layer stacked configuration vis-à-vis 0.1% for the single converter-detector layer configuration. Finally, stacked detectors have also been simulated for different converter thicknesses, attaining an efficiency as high as ~3.25% with 50 stacked layers.

  4. Possible Detection of Gamma Ray Air Showers in Coincidence with BATSE Gamma Ray Bursts

    NASA Astrophysics Data System (ADS)

    Lin, Tzu-Fen

    1999-08-01

    Project GRAND presents the results of a search for coincident high-energy gamma ray events in the direction and at the time of nine Gamma Ray Bursts (GRBs) detected by BATSE. A gamma ray has a non-negligible hadron production cross section; for each gamma ray of energy of 100 GeV, there are 0.015 muons which reach detection level (Fasso & Poirier, 1999). These muons are identified and their angles are measured in stations of eight planes of proportional wire chambers (PWCs). A 50 mm steel plate above the bottom pair of planes is used to distinguish muons from electrons. The mean angular resolution is 0.26° over a ±61° range in the XZ and YZ planes. The BATSE GRB catalogue is examined for bursts which are near zenith for Project GRAND. The geometrical acceptance is calculated for each of these events. The product is then taken of the GRB flux and GRAND's geometrical acceptance. The nine sources with the best combination of detection efficiency and BATSE's intensity are selected to be examined in the data. The most significant detection of these nine sources is at a statistical significance of +3.7σ; this is also the GRB with the highest product of GRB flux and geometrical acceptance.

  5. Highly selective and sensitive method for Cu2+ detection based on chiroptical activity of L-Cysteine mediated Au nanorod assemblies.

    PubMed

    Abbasi, Shahryar; Khani, Hamzeh

    2017-11-05

    Herein, we demonstrated a simple and efficient method to detect Cu2+ based on amplified optical activity in chiral nanoassemblies of gold nanorods (Au NRs). L-Cysteine can induce side-by-side or end-to-end assembly of Au NRs with an evident plasmonic circular dichroism (PCD) response due to coupling between the surface plasmon resonances (SPR) of Au NRs and the chiral signal of L-Cys. Because of the obviously stronger plasmonic circular dichroism (CD) response of the side-by-side assembly compared with the end-to-end assemblies, the side-by-side (SS) assembled Au NRs were selected as a sensitive platform and used for Cu2+ detection. In the presence of Cu2+, Cu2+ can catalyze the O2 oxidation of cysteine to cystine. With an increase in Cu2+ concentration, the L-Cysteine-mediated assembly of Au NRs decreased because of the decrease in free cysteine thiol groups, and the PCD signal decreased. Taking advantage of this method, Cu2+ could be detected in the concentration range of 20 pM to 5 nM. Under optimal conditions, the calculated detection limit was found to be 7 pM. Copyright © 2017 Elsevier B.V. All rights reserved.

  6. Coastline detection with time series of SAR images

    NASA Astrophysics Data System (ADS)

    Ao, Dongyang; Dumitru, Octavian; Schwarz, Gottfried; Datcu, Mihai

    2017-10-01

    For maritime remote sensing, coastline detection is a vital task. With continuous coastline detection results from satellite image time series, the actual shoreline, the sea level, and environmental parameters can be observed to support coastal management and disaster warning. Established coastline detection methods are often based on SAR images and well-known image processing approaches. These methods involve a lot of complicated data processing, which is a big challenge for remote sensing time series. Additionally, a number of SAR satellites operating with polarimetric capabilities have been launched in recent years, and many investigations of target characteristics in radar polarization have been performed. In this paper, a fast and efficient coastline detection method is proposed which comprises three steps. First, we calculate a modified correlation coefficient of two SAR images of different polarization. This coefficient differs from the traditional computation where normalization is needed. Through this modified approach, the separation between sea and land becomes more prominent. Second, we set a histogram-based threshold to distinguish between sea and land within the given image. The histogram is derived from the statistical distribution of the polarized SAR image pixel amplitudes. Third, we extract continuous coastlines using a Canny image edge detector that is rather immune to speckle noise. Finally, the individual coastlines derived from time series of SAR images can be checked for changes.
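    The exact form of the modified (unnormalized) correlation coefficient is not given in the abstract; the sketch below stands in a windowed cross-channel product for step one, uses Otsu thresholding as one possible histogram-based threshold for step two, and finishes with a Canny edge pass.

```python
# Sketch of the three-step pipeline on two co-registered polarimetric SAR
# amplitude images; the local cross-channel product and Otsu threshold are
# stand-ins for the paper's modified correlation and histogram threshold.
import cv2
import numpy as np

def coastline(amp_pol1, amp_pol2, window=9):
    corr = cv2.blur(amp_pol1.astype(np.float32) * amp_pol2.astype(np.float32),
                    (window, window))                     # step 1: local cross-channel measure
    corr8 = cv2.normalize(corr, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    _, land_sea = cv2.threshold(corr8, 0, 255,
                                cv2.THRESH_BINARY + cv2.THRESH_OTSU)  # step 2: histogram threshold
    return cv2.Canny(land_sea, 50, 150)                   # step 3: coastline edges
```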

  7. Efficient method for the calculation of mean extinction. II. Analyticity of the complex extinction efficiency of homogeneous spheroids and finite cylinders.

    PubMed

    Xing, Z F; Greenberg, J M

    1994-08-20

    The analyticity of the complex extinction efficiency is examined numerically in the size-parameter domain for homogeneous prolate and oblate spheroids and finite cylinders. The T-matrix code, which is the most efficient program available to date, is employed to calculate the individual particle-extinction efficiencies. Because of its computational limitations in the size-parameter range, a slightly modified Hilbert-transform algorithm is required to establish the analyticity numerically. The findings concerning analyticity that we reported for spheres (Astrophys. J. 399, 164-175, 1992) apply equally to these nonspherical particles.

  8. Recurrent neural network based virtual detection line

    NASA Astrophysics Data System (ADS)

    Kadikis, Roberts

    2018-04-01

    The paper proposes an efficient method for detection of moving objects in the video. The objects are detected when they cross a virtual detection line. Only the pixels of the detection line are processed, which makes the method computationally efficient. A Recurrent Neural Network processes these pixels. The machine learning approach allows one to train a model that works in different and changing outdoor conditions. Also, the same network can be trained for various detection tasks, which is demonstrated by the tests on vehicle and people counting. In addition, the paper proposes a method for semi-automatic acquisition of labeled training data. The labeling method is used to create training and testing datasets, which in turn are used to train and evaluate the accuracy and efficiency of the detection method. The method shows similar accuracy as the alternative efficient methods but provides greater adaptability and usability for different tasks.
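    A minimal sketch of the idea, with an assumed architecture and sizes rather than the paper's configuration: only the pixels on the virtual detection line are fed, frame by frame, to a recurrent network that labels each time step.

```python
# Hypothetical PyTorch sketch: a GRU reads the detection-line pixels of each
# frame and emits per-frame class logits (e.g. "object crossing" vs "empty").
import torch
import torch.nn as nn

class DetectionLineRNN(nn.Module):
    def __init__(self, line_length, hidden=64, n_classes=2):
        super().__init__()
        self.rnn = nn.GRU(input_size=line_length, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, line_pixels):           # (batch, frames, line_length)
        out, _ = self.rnn(line_pixels)
        return self.head(out)                 # per-frame class logits

model = DetectionLineRNN(line_length=128)
frames = torch.rand(1, 300, 128)              # 300 frames of a 128-pixel line
logits = model(frames)                        # shape (1, 300, 2)
```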

  9. A bench-scale constructed wetland as a model to characterize benzene biodegradation processes in freshwater wetlands.

    PubMed

    Rakoczy, Jana; Remy, Benjamin; Vogt, Carsten; Richnow, Hans H

    2011-12-01

    In wetlands, a variety of biotic and abiotic processes can contribute to the removal of organic substances. Here, we used compound-specific isotope analysis (CSIA), hydrogeochemical parameters and detection of functional genes to characterize in situ biodegradation of benzene in a model constructed wetland over a period of 370 days. Despite low dissolved oxygen concentrations (<30 μM), the oxidation of ammonium to nitrate and the complete oxidation of ferrous iron pointed to a dominance of aerobic processes, suggesting efficient oxygen transfer into the sediment zone by plants. As benzene removal became highly efficient after day 231 (>98% removal), we applied CSIA to study in situ benzene degradation by indigenous microbes. Combining carbon and hydrogen isotope signatures by two-dimensional stable isotope analysis revealed that benzene was degraded aerobically, mainly via the monohydroxylation pathway. This was additionally supported by the detection of the BTEX monooxygenase gene tmoA in sediment and root samples. Calculating the extent of biodegradation from the isotope signatures demonstrated that at least 85% of benzene was degraded by this pathway and thus, only a small fraction was removed abiotically. This study shows that model wetlands can contribute to an understanding of biodegradation processes in floodplains or natural wetland systems.

  10. Two-stage sparse coding of region covariance via Log-Euclidean kernels to detect saliency.

    PubMed

    Zhang, Ying-Ying; Yang, Cai; Zhang, Ping

    2017-05-01

    In this paper, we present a novel bottom-up saliency detection algorithm from the perspective of covariance matrices on a Riemannian manifold. Each superpixel is described by a region covariance matrix on the Riemannian manifold. We carry out a two-stage sparse coding scheme via Log-Euclidean kernels to extract salient objects efficiently. In the first stage, given a background dictionary built from the image borders, sparse coding of each region covariance via Log-Euclidean kernels is performed. The reconstruction error on the background dictionary is regarded as the initial saliency of each superpixel. In the second stage, an improvement of the initial result is achieved by calculating the reconstruction errors of the superpixels on a foreground dictionary, which is extracted from the first-stage saliency map. The sparse coding in the second stage is similar to that in the first stage, but is able to effectively and uniformly highlight the salient objects against the background. Finally, three post-processing methods (a highlight-inhibition function, context-based saliency weighting, and graph cut) are adopted to further refine the saliency map. Experiments on four public benchmark datasets show that the proposed algorithm outperforms state-of-the-art methods in terms of precision, recall, and mean absolute error, and demonstrate the robustness and efficiency of the proposed method. Copyright © 2017 Elsevier Ltd. All rights reserved.
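    As an illustration of the Log-Euclidean geometry the method relies on, here is a minimal sketch (assuming symmetric positive-definite region-covariance matrices; not the authors' implementation, and the kernel bandwidth is hypothetical):

```python
import numpy as np
from scipy.linalg import logm

def log_euclidean_distance(c1, c2):
    """Log-Euclidean distance between two SPD region-covariance matrices."""
    d = logm(c1) - logm(c2)
    return np.linalg.norm(d, ord='fro')

def log_euclidean_kernel(c1, c2, sigma=1.0):
    """Gaussian kernel on covariance descriptors under the Log-Euclidean metric."""
    return np.exp(-log_euclidean_distance(c1, c2) ** 2 / (2.0 * sigma ** 2))
```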

  11. Strong Scaling and a Scarcity of Small Earthquakes Point to an Important Role for Thermal Runaway in Intermediate-Depth Earthquake Mechanics

    NASA Astrophysics Data System (ADS)

    Barrett, S. A.; Prieto, G. A.; Beroza, G. C.

    2015-12-01

    There is strong evidence that metamorphic reactions play a role in enabling the rupture of intermediate-depth earthquakes; however, recent studies of the Bucaramanga Nest at a depth of 135-165 km under Colombia indicate that intermediate-depth seismicity shows low radiation efficiency and strong scaling of stress drop with slip/size, which suggests a dramatic weakening process, as proposed in the thermal shear instability model. Decreasing stress drop with slip and low seismic efficiency could have a measurable effect on the magnitude-frequency distribution of small earthquakes by causing them to become undetectable at substantially larger seismic moment than would be the case if stress drop were constant. We explore the population of small earthquakes in the Bucaramanga Nest using an empirical subspace detector to push the detection limit to lower magnitude. Using this approach, we find ~30,000 small, previously uncatalogued earthquakes during a 6-month period in 2013. We calculate magnitudes for these events using their relative amplitudes. Despite the additional detections, we observe a sharp deviation from a Gutenberg-Richter magnitude frequency distribution with a marked deficiency of events at the smallest magnitudes. This scarcity of small earthquakes is not easily ascribed to the detectability threshold; tests of our ability to recover small-magnitude waveforms of Bucaramanga Nest earthquakes in the continuous data indicate that we should be able to detect events reliably at magnitudes that are nearly a full magnitude unit smaller than the smallest earthquakes we observe. The implication is that nearly 100,000 events expected for a Gutenberg-Richter MFD are "missing," and that this scarcity of small earthquakes may provide new support for the thermal runaway mechanism in intermediate-depth earthquake mechanics.
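    To make the magnitude-frequency argument concrete, below is a small sketch (not the authors' code) of the maximum-likelihood b-value estimate and of how many events a Gutenberg-Richter distribution would predict below a given magnitude; all numbers in the example are hypothetical.

```python
import numpy as np

def gr_b_value(mags, m_c, dm=0.1):
    """Aki/Utsu maximum-likelihood b-value from magnitudes above completeness m_c."""
    m = np.asarray(mags)
    m = m[m >= m_c]
    return np.log10(np.e) / (m.mean() - (m_c - dm / 2.0))

def expected_count(n_above, b, m_hi, m_lo):
    """Events expected between m_lo and m_hi if Gutenberg-Richter holds,
    given n_above events at or above m_hi."""
    return n_above * (10 ** (b * (m_hi - m_lo)) - 1)

# Hypothetical example: 30,000 events above M 1.0 and b = 1.0
print(expected_count(30_000, 1.0, 1.0, 0.0))  # ~270,000 expected between M 0 and M 1
```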

  12. Autonomous rock detection on mars through region contrast

    NASA Astrophysics Data System (ADS)

    Xiao, Xueming; Cui, Hutao; Yao, Meibao; Tian, Yang

    2017-08-01

    In this paper, we present a new autonomous rock detection approach based on region contrast. Unlike current state-of-the-art pixel-level rock segmentation methods, the new method operates at the region level, which significantly reduces the computational cost. The image is first split into homogeneous regions based on intensity information and spatial layout. Considering the memory constraints of the onboard flight processor, only low-level features, the average intensity and variation of each superpixel, are measured. Region contrast is derived as the integration of an intensity contrast and a smoothness measure. Rocks are then segmented from the resulting contrast map by an adaptive threshold. Since a purely intensity-based method may cause false detections in background areas whose illumination differs from the surroundings, a more reliable method is further proposed by introducing a spatial factor and a background similarity into the region contrast. The spatial factor captures the locality of contrast, while the background similarity estimates the probability that each subregion belongs to the background. Our method is efficient for large images and needs only a few parameters. Preliminary experimental results show that our algorithm outperforms edge-based methods on various grayscale rover images.

  13. A New System to Monitor Data Analyses and Results of Physics Data Validation Between Pulses at DIII-D

    NASA Astrophysics Data System (ADS)

    Flanagan, S.; Schachter, J. M.; Schissel, D. P.

    2001-10-01

    A Data Analysis Monitoring (DAM) system has been developed to monitor between pulse physics analysis at the DIII-D National Fusion Facility. The system allows for rapid detection of discrepancies in diagnostic measurements or the results from physics analysis codes. This enables problems to be detected and possibly fixed between pulses as opposed to after the experimental run has concluded thus increasing the efficiency of experimental time. An example of a consistency check is comparing the stored energy from integrating the measured kinetic profiles to that calculated from magnetic measurements by EFIT. This new system also tracks the progress of MDSplus dispatching of software for data analysis and the loading of analyzed data into MDSplus. DAM uses a Java Servlet to receive messages, Clips to implement expert system logic, and displays its results to multiple web clients via HTML. If an error is detected by DAM, users can view more detailed information so that steps can be taken to eliminate the error for the next pulse. A demonstration of this system including a simulated DIII-D pulse cycle will be presented.

  14. AATSR Based Volcanic Ash Plume Top Height Estimation

    NASA Astrophysics Data System (ADS)

    Virtanen, Timo H.; Kolmonen, Pekka; Sogacheva, Larisa; Sundstrom, Anu-Maija; Rodriguez, Edith; de Leeuw, Gerrit

    2015-11-01

    The AATSR Correlation Method (ACM) height estimation algorithm is presented. The algorithm uses Advanced Along Track Scanning Radiometer (AATSR) satellite data to detect volcanic ash plumes and to estimate the plume top height. The height estimate is based on the stereo-viewing capability of the AATSR instrument, which allows determination of the parallax between the satellite's nadir and 55° forward views, and thus the corresponding height. AATSR provides an advantage compared to other stereo-view satellite instruments: with AATSR it is possible to detect ash plumes using the brightness temperature difference between thermal infrared (TIR) channels centered at 11 and 12 μm. The automatic ash detection makes the algorithm efficient in processing large quantities of data: the height estimate is calculated only for the ash-flagged pixels. Besides ash plumes, the algorithm can be applied to any elevated feature with sufficient contrast to the background, such as smoke and dust plumes and clouds. The ACM algorithm can also be applied to the Sea and Land Surface Temperature Radiometer (SLSTR), scheduled for launch at the end of 2015.
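    As a simplified illustration of the stereo-height principle (not the ACM algorithm itself): if Earth curvature, swath geometry, and feature motion are ignored, the along-track parallax between the nadir and forward views maps to height roughly as in the sketch below.

```python
import numpy as np

def plume_height_from_parallax(parallax_m, forward_zenith_deg=55.0, nadir_zenith_deg=0.0):
    """Rough plume-top height from the along-track parallax between two views.

    parallax_m : apparent along-track displacement of the feature (metres).
    Flat-Earth geometry only; real retrievals also correct for curvature,
    view geometry across the swath, and feature motion (e.g. wind).
    """
    return parallax_m / (np.tan(np.radians(forward_zenith_deg)) -
                         np.tan(np.radians(nadir_zenith_deg)))

# e.g. a 10 km parallax with a 55 degree forward view -> roughly 7 km plume height
print(plume_height_from_parallax(10_000.0))
```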

  15. The optimal community detection of software based on complex networks

    NASA Astrophysics Data System (ADS)

    Huang, Guoyan; Zhang, Peng; Zhang, Bing; Yin, Tengteng; Ren, Jiadong

    2016-02-01

    Community structure is important for software in terms of understanding design patterns and controlling the development and maintenance process. In order to detect the optimal community structure in a software network, a method called Optimal Partition Software Network (OPSN) is proposed based on the dependency relationships among software functions. First, by analyzing information from multiple execution traces of one piece of software, we construct the Software Execution Dependency Network (SEDN). Second, based on the relationships among the function nodes in the network, we define Fault Accumulation (FA) to measure the importance of each function node and sort the nodes by this measure. Third, we select the top K (K=1,2,…) nodes as the cores of the primal communities (each containing only one core node). By comparing the dependency relationships between each node and the K communities, we assign the node to the existing community with which it has the closest relationship. Finally, we calculate the modularity for different initial K to obtain the optimal division. Experiments verify that OPSN is efficient at detecting the optimal community structure in various software systems.
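    A minimal sketch of the final selection step, scanning candidate partitions and keeping the one with the highest modularity, using networkx on a hypothetical graph; the candidate partitions themselves would come from the core-expansion stage described above, not from this snippet.

```python
import networkx as nx
from networkx.algorithms.community import modularity

def best_partition(graph, candidate_partitions):
    """Return (Q, partition) for the candidate partition with maximal modularity.

    candidate_partitions : iterable of partitions, each a list of disjoint node sets
    """
    scored = [(modularity(graph, parts), parts) for parts in candidate_partitions]
    return max(scored, key=lambda t: t[0])

# Hypothetical usage, with partitions_by_k mapping K -> communities built upstream:
# G = nx.read_edgelist("sedn_edges.txt")
# q, communities = best_partition(G, partitions_by_k.values())
```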

  16. Detection of 2,4-dinitrotoluene by graphene oxide: first principles study

    NASA Astrophysics Data System (ADS)

    Abdollahi, Hassan; Kari, Akbar; Samaeifar, Fatemeh

    2018-05-01

    The surface of graphene oxide (GO) with different oxidation levels is widely used in gas sensing applications. Meanwhile, detection of 2,4-dinitrotoluene (DNT), a high explosive and environmental contaminant, has been extensively pursued by various methods. Atomic-level modelling is widely employed to explain the sensing mechanism at a microscopic level. The present work applies density functional theory (DFT) to investigate the structural and electronic properties of GO and the adsorption of an oxygen atom and a hydroxyl group on the graphene surface. The focus is on the adsorption mechanism of the DNT molecule on the GO monolayer surface for DNT detection. The calculated adsorption energy of the DNT molecule on the GO surface, -0.7 eV, indicates a physisorption mechanism. Moreover, a basis-set superposition error correction based on off-site orbitals yields an adsorption energy of -0.4 eV, which lies even more clearly in the physisorption regime. Consequently, the results could shed light on the design and fabrication of an efficient DNT sensor based on GO layers.
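    For reference, adsorption energies of this kind are conventionally defined as the total-energy difference below (a standard convention, not a formula quoted from the paper); the counterpoise-corrected value additionally evaluates each fragment in the full basis set of the combined system to remove the basis-set superposition error.

```latex
E_{\mathrm{ads}} \;=\; E_{\mathrm{GO+DNT}} \;-\; E_{\mathrm{GO}} \;-\; E_{\mathrm{DNT}}
```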

  17. Correlation dynamics and enhanced signals for the identification of serial biomolecules and DNA bases.

    PubMed

    Ahmed, Towfiq; Haraldsen, Jason T; Rehr, John J; Di Ventra, Massimiliano; Schuller, Ivan; Balatsky, Alexander V

    2014-03-28

    Nanopore-based sequencing has demonstrated a significant potential for the development of fast, accurate, and cost-efficient fingerprinting techniques for next generation molecular detection and sequencing. We propose a specific multilayered graphene-based nanopore device architecture for the recognition of single biomolecules. Molecular detection and analysis can be accomplished through the detection of transverse currents as the molecule or DNA base translocates through the nanopore. To increase the overall signal-to-noise ratio and the accuracy, we implement a new 'multi-point cross-correlation' technique for identification of DNA bases or other molecules on the single molecular level. We demonstrate that the cross-correlations between each nanopore will greatly enhance the transverse current signal for each molecule. We implement first-principles transport calculations for DNA bases surveyed across a multilayered graphene nanopore system to illustrate the advantages of the proposed geometry. A time-series analysis of the cross-correlation functions illustrates the potential of this method for enhancing the signal-to-noise ratio. This work constitutes a significant step forward in facilitating fingerprinting of single biomolecules using solid state technology.
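    A minimal sketch of the cross-correlation idea (not the authors' transport code): correlate transverse-current traces recorded at two layers as a base translocates; `i1` and `i2` are hypothetical current time series sampled at the same rate.

```python
import numpy as np
from scipy.signal import correlate

def normalized_cross_correlation(i1, i2):
    """Normalized cross-correlation of two transverse-current traces."""
    a = (i1 - i1.mean()) / (i1.std() * len(i1))
    b = (i2 - i2.mean()) / i2.std()
    return correlate(a, b, mode='full')

# The lag of the correlation peak reflects the translocation time between layers,
# and the peak grows when both layers see the same molecule, which is what
# boosts the signal relative to uncorrelated noise.
```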

  18. Rotation and scale invariant shape context registration for remote sensing images with background variations

    NASA Astrophysics Data System (ADS)

    Jiang, Jie; Zhang, Shumei; Cao, Shixiang

    2015-01-01

    Multitemporal remote sensing images generally suffer from background variations, which significantly disrupt traditional region feature and descriptor abstracts, especially between pre and postdisasters, making registration by local features unreliable. Because shapes hold relatively stable information, a rotation and scale invariant shape context based on multiscale edge features is proposed. A multiscale morphological operator is adapted to detect edges of shapes, and an equivalent difference of Gaussian scale space is built to detect local scale invariant feature points along the detected edges. Then, a rotation invariant shape context with improved distance discrimination serves as a feature descriptor. For a distance shape context, a self-adaptive threshold (SAT) distance division coordinate system is proposed, which improves the discriminative property of the feature descriptor in mid-long pixel distances from the central point while maintaining it in shorter ones. To achieve rotation invariance, the magnitude of Fourier transform in one-dimension is applied to calculate angle shape context. Finally, the residual error is evaluated after obtaining thin-plate spline transformation between reference and sensed images. Experimental results demonstrate the robustness, efficiency, and accuracy of this automatic algorithm.

  19. Hypergraph-based anomaly detection of high-dimensional co-occurrences.

    PubMed

    Silva, Jorge; Willett, Rebecca

    2009-03-01

    This paper addresses the problem of detecting anomalous multivariate co-occurrences using a limited number of unlabeled training observations. A novel method based on using a hypergraph representation of the data is proposed to deal with this very high-dimensional problem. Hypergraphs constitute an important extension of graphs which allow edges to connect more than two vertices simultaneously. A variational Expectation-Maximization algorithm for detecting anomalies directly on the hypergraph domain without any feature selection or dimensionality reduction is presented. The resulting estimate can be used to calculate a measure of anomalousness based on the False Discovery Rate. The algorithm has O(np) computational complexity, where n is the number of training observations and p is the number of potential participants in each co-occurrence event. This efficiency makes the method ideally suited for very high-dimensional settings, and requires no tuning, bandwidth or regularization parameters. The proposed approach is validated on both high-dimensional synthetic data and the Enron email database, where p > 75,000, and it is shown that it can outperform other state-of-the-art methods.

  20. Enhancement of the output emission efficiency of thin-film photoluminescence composite structures based on PbSe

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anisimova, N. P.; Tropina, N. E., E-mail: Mazina_ne@mail.ru; Tropin, A. N.

    2010-12-15

    The opportunity to increase the output emission efficiency of PbSe-based photoluminescence structures by depositing an antireflection layer is analyzed. A model of a three-layer thin film, in which the central layer is formed of a composite medium, is proposed to calculate the reflectance spectra of the system. In Bruggeman's approximation of the effective medium theory, the effective permittivity of the composite layer is calculated. The model proposed in the study is used to calculate the thickness of the arsenic chalcogenide (AsS4) antireflection layer. The optimal AsS4 layer thickness determined experimentally is close to the results of the calculation, and the corresponding gain in the output photoluminescence efficiency is as high as 60%.
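    A short sketch of the Bruggeman effective-medium step for a two-component composite layer, in a standard form of the approximation (spherical inclusions, real permittivities); the permittivities and filling fraction below are hypothetical, not values from the paper.

```python
from math import sqrt
from scipy.optimize import brentq

def bruggeman_eps(eps1, eps2, f1):
    """Effective permittivity of a two-phase composite (Bruggeman approximation).

    Solves f1*(eps1-e)/(eps1+2e) + (1-f1)*(eps2-e)/(eps2+2e) = 0 for e.
    """
    g = lambda e: (f1 * (eps1 - e) / (eps1 + 2 * e)
                   + (1 - f1) * (eps2 - e) / (eps2 + 2 * e))
    lo, hi = min(eps1, eps2), max(eps1, eps2)
    return brentq(g, lo, hi)

# Hypothetical example: chalcogenide (eps ~ 5.8) mixed with voids (eps = 1), 30% voids
eps_eff = bruggeman_eps(5.8, 1.0, 0.7)
# A quarter-wave antireflection thickness at wavelength lam would then be lam / (4 * sqrt(eps_eff)).
```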

  1. Evaluation of light extraction efficiency for the light-emitting diodes based on the transfer matrix formalism and ray-tracing method

    NASA Astrophysics Data System (ADS)

    Pingbo, An; Li, Wang; Hongxi, Lu; Zhiguo, Yu; Lei, Liu; Xin, Xi; Lixia, Zhao; Junxi, Wang; Jinmin, Li

    2016-06-01

    The internal quantum efficiency (IQE) of light-emitting diodes can be calculated as the ratio of the external quantum efficiency (EQE) to the light extraction efficiency (LEE). The EQE can be measured experimentally, but the LEE is difficult to calculate due to the complicated LED structures. In this work, a model was established to calculate the LEE by combining the transfer matrix formalism and an in-plane ray tracing method. With the calculated LEE, the IQE was determined and was in good agreement with that obtained by the ABC model and the temperature-dependent photoluminescence method. The proposed method makes the determination of the IQE more practical and convenient. Project supported by the National Natural Science Foundation of China (Nos. 11574306, 61334009), the China International Science and Technology Cooperation Program (No. 2014DFG62280), and the National High Technology Program of China (No. 2015AA03A101).

  2. Computational efficiency for the surface renewal method

    NASA Astrophysics Data System (ADS)

    Kelley, Jason; Higgins, Chad

    2018-04-01

    Measuring surface fluxes using the surface renewal (SR) method requires programmatic algorithms for tabulation, algebraic calculation, and data quality control. A number of different methods have been published describing automated calibration of SR parameters. Because the SR method utilizes high-frequency (10 Hz+) measurements, some steps in the flux calculation are computationally expensive, especially when automating SR to perform many iterations of these calculations. Several new algorithms were written that perform the required calculations more efficiently and rapidly, and these were tested for sensitivity to the length of the flux averaging period, the ability to measure over a large range of lag timescales, and overall computational efficiency. These algorithms utilize signal processing techniques and algebraic simplifications, demonstrating simple modifications that dramatically improve computational efficiency. The results here complement efforts by other authors to standardize a robust and accurate computational SR method. Increased computation speed grants flexibility in implementing the SR method, opening new avenues for SR to be used in research, in applied monitoring, and in novel field deployments.
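    As an illustration of the kind of computation involved (not the authors' algorithms), surface renewal analysis typically rests on high-frequency structure functions evaluated over many lags; a vectorized form like the sketch below avoids the per-sample loops that make naive implementations slow. The choice of orders and lags here is generic.

```python
import numpy as np

def structure_functions(x, lags, orders=(2, 3, 5)):
    """Structure functions S_n(r) = <(x(t) - x(t - r))**n> for several lags.

    x    : 1-D array of high-frequency scalar data (e.g. 10 Hz temperature)
    lags : iterable of integer sample lags
    Returns a dict mapping order n to an array of S_n over the lags.
    """
    out = {n: np.empty(len(lags)) for n in orders}
    for j, r in enumerate(lags):
        d = x[r:] - x[:-r]          # lagged differences, computed once per lag
        for n in orders:
            out[n][j] = np.mean(d ** n)
    return out
```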

  3. Comparing distinct ground-based lightning location networks covering the Netherlands

    NASA Astrophysics Data System (ADS)

    de Vos, Lotte; Leijnse, Hidde; Schmeits, Maurice; Beekhuis, Hans; Poelman, Dieter; Evers, Läslo; Smets, Pieter

    2015-04-01

    Lightning can be detected using a ground-based sensor network. The Royal Netherlands Meteorological Institute (KNMI) monitors lightning activity in the Netherlands with the so-called FLITS system, a network combining SAFIR-type sensors. This makes use of Very High Frequency (VHF) as well as Low Frequency (LF) sensors. KNMI has recently decided to replace FLITS by data from a sub-continental network operated by Météorage which makes use of LF sensors only (KNMI Lightning Detection Network, or KLDN). KLDN is compared to the FLITS system, as well as the Met Office's long-range Arrival Time Difference network (ATDnet), which measures in the Very Low Frequency (VLF) range. Special focus lies on the ability to detect Cloud-to-Ground (CG) and Cloud-to-Cloud (CC) lightning in the Netherlands. The relative detection efficiency of individual flashes, and of lightning activity in a more general sense, is calculated over a period of almost 5 years. Additionally, the detection efficiency of each system is compared to a ground truth that is constructed from flashes detected by both of the other datasets. Finally, infrasound data is used as a fourth lightning data source for several case studies. Relative performance is found to vary strongly with location and time. As expected, it is found that FLITS detects significantly more CC lightning (because of the strong aptitude of VHF antennas to detect CC), though KLDN and ATDnet detect more CG lightning. We analyze statistics computed over the entire 5-year period, where we look at CG as well as total lightning (CC and CG combined). The statistics considered are the Probability of Detection (POD) and the so-called Lightning Activity Detection (LAD). POD is defined as the percentage of reference flashes that the system detects, relative to the total number of flashes in the reference. LAD is defined as the fraction of system recordings of one or more flashes in predefined area boxes over a certain time period, given that the reference detects at least one flash, relative to the total number of such recordings in the reference dataset. The reference for these statistics is taken to be either another dataset, or a dataset consisting of flashes detected by two datasets. Evaluation of an extreme thunderstorm case shows that the weather alert criterion for severe thunderstorms is reached by FLITS when this is not the case for KLDN and ATDnet, suggesting the need for KNMI to modify that weather alert criterion when using KLDN.
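    A minimal sketch of the POD statistic as defined in the abstract, for already-matched flash lists; the inputs are hypothetical and this is not KNMI's processing chain.

```python
def probability_of_detection(n_matched, n_reference):
    """POD: fraction of reference flashes that the tested system also detected."""
    return n_matched / n_reference if n_reference else float('nan')

# e.g. 8,430 of 10,000 reference CG flashes matched -> POD = 0.843
print(probability_of_detection(8_430, 10_000))
```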

  4. ''Do-it-yourself'' software program calculates boiler efficiency

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1984-03-01

    An easy-to-use software package is described which runs on the IBM Personal Computer. The package calculates boiler efficiency, an important parameter of operating costs and equipment wellbeing. The program stores inputs and calculated results for 20 sets of boiler operating data, called cases. Cases can be displayed and modified on the CRT screen through multiple display pages or copied to a printer. All intermediate calculations are performed by this package. They include: steam enthalpy; water enthalpy; air humidity; gas, oil, coal, and wood heat capacity; and radiation losses.

  5. Efficiency of whole-body counter for various body size calculated by MCNP5 software.

    PubMed

    Krstic, D; Nikezic, D

    2012-11-01

    The efficiency of a whole-body counter for (137)Cs and (40)K was calculated using the MCNP5 code. The ORNL phantoms of a human body of different body sizes were applied in a sitting position in front of a detector. The aim was to investigate the dependence of efficiency on the body size (age) and the detector position with respect to the body and to estimate the accuracy of real measurements. The calculation work presented here is related to the NaI detector, which is available in the Serbian Whole-body Counter facility in Vinca Institute.

  6. Precise Distances for Main-belt Asteroids in Only Two Nights

    NASA Astrophysics Data System (ADS)

    Heinze, Aren N.; Metchev, Stanimir

    2015-10-01

    We present a method for calculating precise distances to asteroids using only two nights of data from a single location—far too little for an orbit—by exploiting the angular reflex motion of the asteroids due to Earth’s axial rotation. We refer to this as the rotational reflex velocity method. While the concept is simple and well-known, it has not been previously exploited for surveys of main belt asteroids (MBAs). We offer a mathematical development, estimates of the errors of the approximation, and a demonstration using a sample of 197 asteroids observed for two nights with a small, 0.9-m telescope. This demonstration used digital tracking to enhance detection sensitivity for faint asteroids, but our distance determination works with any detection method. Forty-eight asteroids in our sample had known orbits prior to our observations, and for these we demonstrate a mean fractional error of only 1.6% between the distances we calculate and those given in ephemerides from the Minor Planet Center. In contrast to our two-night results, distance determination by fitting approximate orbits requires observations spanning 7-10 nights. Once an asteroid’s distance is known, its absolute magnitude and size (given a statistically estimated albedo) may immediately be calculated. Our method will therefore greatly enhance the efficiency with which 4m and larger telescopes can probe the size distribution of small (e.g., 100 m) MBAs. This distribution remains poorly known, yet encodes information about the collisional evolution of the asteroid belt—and hence the history of the Solar System.

  7. Feasibility of Coupling Between a Single-Mode Elliptical-Core Fiber and a Single Mode Rib Waveguide Over Temperature. Ph.D. Thesis - Akron Univ., Aug. 1995

    NASA Technical Reports Server (NTRS)

    Tuma, Margaret L.

    1995-01-01

    To determine the feasibility of coupling the output of an optical fiber to a rib waveguide in a temperature environment ranging from 20 C to 300 C, a theoretical calculation of the coupling efficiency between the two was investigated. This is a significant problem which needs to be addressed to determine whether an integrated optic device can function in a harsh temperature environment. Because the behavior of the integrated-optic device is polarization sensitive, a polarization-preserving optic fiber, via its elliptical core, was used to couple light with a known polarization into the device. To couple light energy efficiently from an optical fiber into a channel waveguide, the design of both components should provide for well-matched electric field profiles. The rib waveguide analyzed was the light input channel of an integrated-optic pressure sensor. Due to the complex geometry of the rib waveguide, there is no analytical solution to the wave equation for the guided modes. Approximation or numerical techniques must be utilized to determine the propagation constants and field patterns of the guide. In this study, three solution methods were used to determine the field profiles of both the fiber and guide: the effective-index method (EIM), Marcatili's approximation, and a Fourier method. These methods were utilized independently to calculate the electric field profile of a rib channel waveguide and elliptical fiber at two temperatures, 20 C and 300 C. These temperatures were chosen to represent a nominal and a high temperature that the device would experience. Using the electric field profile calculated from each method, the theoretical coupling efficiency between the single-mode optical fiber and rib waveguide was calculated using the overlap integral and results of the techniques compared. Initially, perfect alignment was assumed and the coupling efficiency calculated. Then, the coupling efficiency calculation was repeated for a range of transverse offsets at both temperatures. Results of the calculation indicate a high coupling efficiency can be achieved when the two components were properly aligned. The coupling efficiency was more sensitive to alignment offsets in the y direction than the x, due to the elliptical modal profile of both components. Changes in the coupling efficiency over temperature were found to be minimal.
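    The coupling efficiency mentioned above is based on the overlap integral of the two transverse field profiles; a minimal numerical sketch, assuming scalar fields sampled on a common grid (not the thesis code), is:

```python
import numpy as np

def coupling_efficiency(e_fiber, e_guide, dx, dy):
    """Power coupling efficiency from the normalized overlap integral of two fields.

    e_fiber, e_guide : 2-D complex arrays of the transverse E-fields on the same grid
    dx, dy           : grid spacings
    """
    overlap = np.sum(e_fiber * np.conj(e_guide)) * dx * dy
    p1 = np.sum(np.abs(e_fiber) ** 2) * dx * dy
    p2 = np.sum(np.abs(e_guide) ** 2) * dx * dy
    return np.abs(overlap) ** 2 / (p1 * p2)

# Transverse offsets can be studied by shifting one field (e.g. with np.roll)
# before calling the function, which mirrors the offset sweep described above.
```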

  8. 10 CFR Appendix P to Subpart B of... - Uniform Test Method for Measuring the Energy Consumption of Pool Heaters

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... least three significant figures shall be reported. 4.3Off mode. 4.3.1Pool heaters with a seasonal off... significant figures shall be reported. 5.Calculations. 5.1Thermal efficiency. Calculate the thermal efficiency...

  9. Evaluation of jamming efficiency for the protection of a single ground object

    NASA Astrophysics Data System (ADS)

    Matuszewski, Jan

    2018-04-01

    Electronic countermeasures (ECM) include methods to completely prevent or restrict the opponent's effective use of the electromagnetic spectrum. The most widespread means of disrupting the operation of electronic devices is active and passive radio-electronic jamming. The paper presents a way to calculate jamming efficiency for protecting ground objects against radars mounted on airborne platforms. The basic mathematical formulas for calculating the efficiency of active radar jamming are presented. Numerical calculations for ground object protection are made for two different electronic warfare scenarios: the jammer placed very close to the protected object, and the jammer placed at a specified distance from it. The results of these calculations are presented in figures showing the minimal distance of effective jamming. The realization of effective radar jamming in electronic warfare systems depends mainly on precise knowledge of the radar's and the jammer's technical parameters, the distance between them, the assumed value of the degradation coefficient, the conditions of electromagnetic energy propagation, and the applied jamming method. The conclusions from these calculations facilitate decisions on how jamming should be conducted to achieve high efficiency during electronic warfare training.
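    For orientation, one common textbook form of the noise-jamming balance (not necessarily the exact relations used in the paper) expresses the jammer-to-signal ratio at the victim radar in terms of the quantities listed above: radar transmit power and gain P_t, G_t, target cross-section σ, radar-target range R_t, jammer power and gain P_j, G_j, jammer-radar range R_j, and the radar and jammer bandwidths B_r, B_j. Jamming is then deemed effective where J/S exceeds the assumed degradation (suppression) coefficient, which is what fixes the minimal effective jamming distance.

```latex
\frac{J}{S} \;=\; \frac{P_j G_j}{P_t G_t}\cdot\frac{4\pi R_t^{4}}{\sigma R_j^{2}}\cdot\frac{B_r}{B_j}
```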

  10. A note on calculation of efficiency and emissions from wood and wood pellet stoves

    NASA Astrophysics Data System (ADS)

    Petrocelli, D.; Lezzi, A. M.

    2015-11-01

    In recent years, national laws and international regulations have introduced strict limits on efficiency and emissions from woody biomass appliances to promote the diffusion of models characterized by low emissions and high efficiency. The evaluation of efficiency and emissions is made during the certification process, which consists of standardized tests. Standards prescribe the procedures to be followed during tests and the relations to be used to determine the mean values of efficiency and emissions. In fact, these values are calculated using the flue gas temperature and composition averaged over the whole test period, which lasts from 1 to 6 hours. Typically, in wood appliances the fuel burning rate is not constant, and this leads to considerable variation in time of the composition and flow rate of the flue gas. In this paper we show that this may cause significant differences between emission values calculated according to the standards and those obtained by integrating the instantaneous mass and energy balances over the test period. In addition, we propose some approximate relations and a method for wood stoves that supply more accurate results than those calculated according to the standards. These relations can be easily implemented in computer-controlled data acquisition systems.
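    A small sketch of the effect the authors highlight, using hypothetical data rather than their relations: a total computed from time-averaged concentration and flow generally differs from the flow-weighted integral of the instantaneous balances whenever concentration and flow rate vary together.

```python
import numpy as np

def emission_totals(conc_mg_m3, flow_m3_s, dt_s=1.0):
    """Compare a 'mean-times-mean' total with the instantaneous, flow-weighted total.

    conc_mg_m3 : instantaneous pollutant concentration in the flue gas (mg/m^3)
    flow_m3_s  : instantaneous flue-gas volume flow (m^3/s), same length
    Returns (total_from_means_mg, total_integrated_mg).
    """
    n = len(conc_mg_m3)
    from_means = conc_mg_m3.mean() * flow_m3_s.mean() * dt_s * n
    integrated = np.sum(conc_mg_m3 * flow_m3_s) * dt_s
    return from_means, integrated

# When concentration peaks coincide with high flow (as during intense burning phases),
# the two totals can differ noticeably, which is the discrepancy discussed above.
```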

  11. Modeling recombination processes and predicting energy conversion efficiency of dye sensitized solar cells from first principles

    NASA Astrophysics Data System (ADS)

    Ma, Wei; Meng, Sheng

    2014-03-01

    We present a set of algorithms, based solely on first-principles calculations, to accurately calculate key properties of a DSC device, including sunlight harvesting, electron injection, electron-hole recombination, and open-circuit voltage. Two series of D-π-A dyes are adopted as sample dyes. The short-circuit current can be predicted by calculating the dyes' photoabsorption, and the electron injection and recombination lifetimes, using real-time time-dependent density functional theory (TDDFT) simulations. The open-circuit voltage can be reproduced by calculating the energy difference between the quasi-Fermi level of electrons in the semiconductor and the electrolyte redox potential, taking into account the influence of electron recombination. Based on timescales obtained from real-time TDDFT dynamics for excited states, the estimated power conversion efficiency of the DSC fits the experiment nicely, with deviations below 1-2%. The light harvesting efficiency, incident photon-to-electron conversion efficiency, and current-voltage characteristics can also be well reproduced. The predicted efficiency can serve either as an ideal limit for optimizing the photovoltaic performance of a given dye, or as a virtual device that closely mimics the performance of a real device under different experimental settings.

  12. Increasing the volumetric efficiency of Diesel engines by intake pipes

    NASA Technical Reports Server (NTRS)

    List, Hans

    1933-01-01

    Development of a method for calculating the volumetric efficiency of piston engines with intake pipes. Application of this method to the scavenging pumps of two-stroke-cycle engines with crankcase scavenging and to four-stroke-cycle engines. The utility of the method is demonstrated by volumetric-efficiency tests of the two-stroke-cycle engines with crankcase scavenging. Its practical application to the calculation of intake pipes is illustrated by example.

  13. Evaluation of Sampling Recommendations From the Influenza Virologic Surveillance Right Size Roadmap for Idaho.

    PubMed

    Rosenthal, Mariana; Anderson, Katey; Tengelsen, Leslie; Carter, Kris; Hahn, Christine; Ball, Christopher

    2017-08-24

    The Right Size Roadmap was developed by the Association of Public Health Laboratories and the Centers for Disease Control and Prevention to improve influenza virologic surveillance efficiency. Guidelines were provided to state health departments regarding representativeness and statistical estimates of specimen numbers needed for seasonal influenza situational awareness, rare or novel influenza virus detection, and rare or novel influenza virus investigation. The aim of this study was to compare Roadmap sampling recommendations with Idaho's influenza virologic surveillance to determine implementation feasibility. We calculated the proportion of medically attended influenza-like illness (MA-ILI) from Idaho's influenza-like illness surveillance among outpatients during October 2008 to May 2014, applied data to Roadmap-provided sample size calculators, and compared calculations with actual numbers of specimens tested for influenza by the Idaho Bureau of Laboratories (IBL). We assessed representativeness among patients' tested specimens to census estimates by age, sex, and health district residence. Among outpatients surveilled, Idaho's mean annual proportion of MA-ILI was 2.30% (20,834/905,818) during a 5-year period. Thus, according to Roadmap recommendations, Idaho needs to collect 128 specimens from MA-ILI patients/week for situational awareness, 1496 influenza-positive specimens/week for detection of a rare or novel influenza virus at 0.2% prevalence, and after detection, 478 specimens/week to confirm true prevalence is ≤2% of influenza-positive samples. The mean number of respiratory specimens Idaho tested for influenza/week, excluding the 2009-2010 influenza season, ranged from 6 to 24. Various influenza virus types and subtypes were collected and specimen submission sources were representative in terms of geographic distribution, patient age range and sex, and disease severity. Insufficient numbers of respiratory specimens are submitted to IBL for influenza laboratory testing. Increased specimen submission would facilitate meeting Roadmap sample size recommendations. ©Mariana Rosenthal, Katey Anderson, Leslie Tengelsen, Kris Carter, Christine Hahn, Christopher Ball. Originally published in JMIR Public Health and Surveillance (http://publichealth.jmir.org), 24.08.2017.
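    The rare-event sample size quoted above appears consistent with the standard formula for the number of specimens needed to observe at least one positive at prevalence p with a chosen confidence level; a quick arithmetic check (our calculation, not the Roadmap calculator itself) reproduces the 1496 figure at 95% confidence.

```python
import math

def detection_sample_size(prevalence, confidence=0.95):
    """Approximate specimens needed to observe >= 1 rare/novel virus with the given confidence."""
    return math.log(1.0 - confidence) / math.log(1.0 - prevalence)

print(round(detection_sample_size(0.002)))  # ~1496 influenza-positive specimens at 0.2% prevalence
```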

  14. Evaluation of Sampling Recommendations From the Influenza Virologic Surveillance Right Size Roadmap for Idaho

    PubMed Central

    2017-01-01

    Background The Right Size Roadmap was developed by the Association of Public Health Laboratories and the Centers for Disease Control and Prevention to improve influenza virologic surveillance efficiency. Guidelines were provided to state health departments regarding representativeness and statistical estimates of specimen numbers needed for seasonal influenza situational awareness, rare or novel influenza virus detection, and rare or novel influenza virus investigation. Objective The aim of this study was to compare Roadmap sampling recommendations with Idaho’s influenza virologic surveillance to determine implementation feasibility. Methods We calculated the proportion of medically attended influenza-like illness (MA-ILI) from Idaho’s influenza-like illness surveillance among outpatients during October 2008 to May 2014, applied data to Roadmap-provided sample size calculators, and compared calculations with actual numbers of specimens tested for influenza by the Idaho Bureau of Laboratories (IBL). We assessed representativeness among patients’ tested specimens to census estimates by age, sex, and health district residence. Results Among outpatients surveilled, Idaho’s mean annual proportion of MA-ILI was 2.30% (20,834/905,818) during a 5-year period. Thus, according to Roadmap recommendations, Idaho needs to collect 128 specimens from MA-ILI patients/week for situational awareness, 1496 influenza-positive specimens/week for detection of a rare or novel influenza virus at 0.2% prevalence, and after detection, 478 specimens/week to confirm true prevalence is ≤2% of influenza-positive samples. The mean number of respiratory specimens Idaho tested for influenza/week, excluding the 2009-2010 influenza season, ranged from 6 to 24. Various influenza virus types and subtypes were collected and specimen submission sources were representative in terms of geographic distribution, patient age range and sex, and disease severity. Conclusions Insufficient numbers of respiratory specimens are submitted to IBL for influenza laboratory testing. Increased specimen submission would facilitate meeting Roadmap sample size recommendations. PMID:28838883

  15. Comparison of the response of four aerosol detectors used with ultra high pressure liquid chromatography.

    PubMed

    Hutchinson, Joseph P; Li, Jianfeng; Farrell, William; Groeber, Elizabeth; Szucs, Roman; Dicinoski, Greg; Haddad, Paul R

    2011-03-25

    The responses of four different types of aerosol detectors have been evaluated and compared to establish their potential use as a universal detector in conjunction with ultra high pressure liquid chromatography (UHPLC). Two charged-aerosol detectors, namely Corona CAD and Corona Ultra, and also two different types of light-scattering detectors (an evaporative light scattering detector, and a nano-quantity analyte detector [NQAD]) were evaluated. The responses of these detectors were systematically investigated under changing experimental and instrumental parameters, such as the mobile phase flow-rate, analyte concentration, mobile phase composition, nebulizer temperature, evaporator temperature, evaporator gas flow-rate and instrumental signal filtering after detection. It was found that these parameters exerted non-linear effects on the responses of the aerosol detectors and must therefore be considered when designing analytical separation conditions, particularly when gradient elution is performed. Identical reversed-phase gradient separations were compared on all four aerosol detectors and further compared with UV detection at 200 nm. The aerosol detectors were able to detect all 11 analytes in a test set comprising species having a variety of physicochemical properties, whilst UV detection was applicable only to those analytes containing chromophores. The reproducibility of the detector response for 11 analytes over 10 consecutive separations was found to be approximately 5% for the charged-aerosol detectors and approximately 11% for the light-scattering detectors. The tested analytes included semi-volatile species which exhibited a more variable response on the aerosol detectors. Peak efficiencies were generally better on the aerosol detectors in comparison to UV detection and particularly so for the light-scattering detectors which exhibited efficiencies of around 110,000 plates per metre. Limits of detection were calculated using different mobile phase compositions and the NQAD detector was found to be the most sensitive (LOD of 10 ng/mL), followed by the Corona CAD (76 ng/mL), then UV detection at 200 nm (178 ng/mL) using an injection volume of 25 μL. Copyright © 2011 Elsevier B.V. All rights reserved.
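    For reference, limits of detection at a signal-to-noise ratio of 3, as used above, are commonly estimated from the baseline noise and the calibration slope; this is a generic sketch, not the authors' exact procedure.

```python
def limit_of_detection(noise_sd, sensitivity):
    """LOD concentration corresponding to S/N = 3.

    noise_sd    : standard deviation of the baseline (blank) signal
    sensitivity : calibration slope (signal per unit concentration)
    """
    return 3.0 * noise_sd / sensitivity
```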

  16. Detection of gamma-neutron radiation by solid-state scintillation detectors. Detection of gamma-neutron radiation by novel solid-state scintillation detectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ryzhikov, V.; Grinyov, B.; Piven, L.

    It is known that solid-state scintillators can be used for detection of both gamma radiation and neutron flux. In the past, neutron detection efficiencies of such solid-state scintillators did not exceed 5-7%. At the same time, it is known that the detection efficiency for the gamma-neutron radiation characteristic of nuclear fissionable materials is an order of magnitude higher than the efficiency of detection of neutron fluxes alone. Thus, an important objective is the creation of detection systems that are both highly efficient in gamma-neutron detection and capable of exhibiting high gamma suppression for use in the role of detection of neutron radiation. In this work, we present the results of our experimental and theoretical studies on the detection efficiency of fast neutrons from a 239Pu-Be source by the heavy oxide scintillators BGO, GSO, CWO and ZWO, as well as ZnSe(Te, O). The most probable mechanism of fast neutron interaction with nuclei of heavy oxide scintillators is the inelastic scattering (n, n'γ) reaction. In our work, fast neutron detection efficiencies were determined by the method of internal counting of gamma-quanta that emerge in the scintillator from (n, n'γ) reactions on scintillator nuclei, with resulting gamma energies of ∼20-300 keV. The measured efficiency of neutron detection for the scintillation crystals we considered was ∼40-50%. The present work included a detailed analysis of detection efficiency as a function of detector thickness and working-surface area, as well as a search for new ways to create larger-sized detectors of lower cost. As a result of our studies, we have found an unusual dependence of fast neutron detection efficiency upon the thickness of the oxide scintillators. An explanation for this anomaly may involve the competition of two factors that accompany inelastic scattering on the heavy atomic nuclei. The transformation of the energy spectrum of neutrons involved in the (n, n'γ) reactions towards lower energies and the isotropic character of scattering of the secondary neutrons may lead to the observed limitation of the length of effective interaction, since a fraction of the secondary neutrons that propagate in the forward direction are not subject to further inelastic scattering because of their substantially lower energy. At these reduced energies, it is the capture cross-section (n, γ) that becomes predominant, resulting in lower detection efficiency. Based on these results, several types of detectors have been envisioned for application in detection systems for nuclear materials. The testing results for one such detector are presented in this work. We have studied the possibility of creating a composite detector with scintillator granules placed inside a transparent polymer material. Because of the low transparency of such a dispersed scintillator, better light collection conditions are ensured by incorporation of a light guide between the scintillator layers. This guide is made of a highly transparent polymer material. The use of a high-transparency hydrogen-containing polymer material for the light guides not only ensures optimum conditions of light collection in the detector, but also allows a certain deceleration of the neutron radiation, increasing its interaction efficiency with the composite scintillation panels; accordingly, the detector signal is increased by 5-8%.
When fast neutrons interact with the scintillator material, the resulting inelastic-scattering gamma-quanta emerge with different energies and different delay times with respect to the moment of the neutron interaction with the nucleus of the scintillator material (delay times ranging from 1×10^-9 to 1.3×10^-6 s). These internally generated gamma-quanta interact with the scintillator, and the resulting scintillation light is recorded by the photo-receiver. Since neutron sources are also strong sources of low-energy gamma radiation, the use of dispersed ZnSe(Te) scintillator material provides high gamma-radiation detection efficiency in that energy range. This new type of gamma-neutron detector is based on a 'sandwich' structure using a ZnSe composite film and light guide, with a fast neutron detection efficiency of about 6%. Its high detection efficiency for low-energy gamma radiation allows a substantial increase (by an order of magnitude) in the efficiency of detection of neutron sources and transuranic materials by means of simultaneous detection of the accompanying gamma radiation. The design and fabrication technology of this detector allows the creation of gamma-neutron detectors characterized by high sensitivity at relatively low cost (as compared with analogs using oxide scintillators) for portable inspection systems. The sandwich structure can comprise any number of plates, with no limitations on thickness or area.

  17. Reliability of Monte Carlo simulations in modeling neutron yields from a shielded fission source

    NASA Astrophysics Data System (ADS)

    McArthur, Matthew S.; Rees, Lawrence B.; Czirr, J. Bart

    2016-08-01

    Using the combination of a neutron-sensitive 6Li glass scintillator detector with a neutron-insensitive 7Li glass scintillator detector, we are able to make an accurate measurement of the capture rate of fission neutrons on 6Li. We used this detector with a 252Cf neutron source to measure the effects of both non-borated polyethylene and 5% borated polyethylene shielding on detection rates over a range of shielding thicknesses. Both of these measurements were compared with MCNP calculations to determine how well the calculations reproduced the measurements. When the source is highly shielded, the number of interactions experienced by each neutron prior to arriving at the detector is large, so it is important to compare Monte Carlo modeling with actual experimental measurements. MCNP reproduces the data fairly well, but it does generally underestimate detector efficiency both with and without polyethylene shielding. For non-borated polyethylene it underestimates the measured value by an average of 8%. This increases to an average of 11% for borated polyethylene.

  18. A Simulation and Modeling Framework for Space Situational Awareness

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Olivier, S S

    This paper describes the development and initial demonstration of a new, integrated modeling and simulation framework, encompassing the space situational awareness enterprise, for quantitatively assessing the benefit of specific sensor systems, technologies and data analysis techniques. The framework is based on a flexible, scalable architecture to enable efficient, physics-based simulation of the current SSA enterprise, and to accommodate future advancements in SSA systems. In particular, the code is designed to take advantage of massively parallel computer systems available, for example, at Lawrence Livermore National Laboratory. The details of the modeling and simulation framework are described, including hydrodynamic models of satellite intercept and debris generation, orbital propagation algorithms, radar cross section calculations, optical brightness calculations, generic radar system models, generic optical system models, specific Space Surveillance Network models, object detection algorithms, orbit determination algorithms, and visualization tools. The use of this integrated simulation and modeling framework on a specific scenario involving space debris is demonstrated.

  19. Design and performance evaluation of the imaging payload for a remote sensing satellite

    NASA Astrophysics Data System (ADS)

    Abolghasemi, Mojtaba; Abbasi-Moghadam, Dariush

    2012-11-01

    In this paper, an analysis method and corresponding analytical tools for the design of the experimental imaging payload (IMPL) of a remote sensing satellite (SINA-1) are presented. We begin with top-level customer system performance requirements and constraints, derive the critical system and component parameters, and then analyze imaging payload performance until a preliminary design that meets customer requirements is reached. We consider the system parameters and components composing the image chain of the imaging payload system, which includes the aperture, focal length, field of view, image plane dimensions, pixel dimensions, detection quantum efficiency, and optical filter requirements. The performance analysis is accomplished by calculating the imaging payload's SNR (signal-to-noise ratio) and imaging resolution. The noise components include photon noise due to the signal scene and atmospheric background, the cold shield, out-of-band optical filter leakage, and electronic noise. System resolution is simulated through cascaded modulation transfer functions (MTFs) and includes effects due to optics, image sampling, and system motion. Calculation results for the SINA-1 satellite are also presented.

  20. Kazakh Traditional Dance Gesture Recognition

    NASA Astrophysics Data System (ADS)

    Nussipbekov, A. K.; Amirgaliyev, E. N.; Hahn, Minsoo

    2014-04-01

    Full body gesture recognition is an important and interdisciplinary research field which is widely used in many application areas, including dance gesture recognition. The rapid growth of technology in recent years has contributed greatly to this domain; however, it remains a challenging task. In this paper we implement Kazakh traditional dance gesture recognition. We use a Microsoft Kinect camera to obtain human skeleton and depth information. Then we apply a tree-structured Bayesian network and the Expectation Maximization algorithm with K-means clustering to calculate conditional linear Gaussians for classifying poses. Finally, we use a Hidden Markov Model to detect dance gestures. Our main contribution is that we extend the Kinect skeleton by adding the headwear as a new skeleton joint, which is calculated from the depth image. This novelty allows us to significantly improve the accuracy of head gesture recognition of a dancer, which in turn plays a considerable role in whole-body gesture recognition. Experimental results show the efficiency of the proposed method and that its performance is comparable to that of state-of-the-art systems.

  1. Hydration Free Energy from Orthogonal Space Random Walk and Polarizable Force Field.

    PubMed

    Abella, Jayvee R; Cheng, Sara Y; Wang, Qiantao; Yang, Wei; Ren, Pengyu

    2014-07-08

    The orthogonal space random walk (OSRW) method has shown enhanced sampling efficiency in free energy calculations from previous studies. In this study, the implementation of OSRW in accordance with the polarizable AMOEBA force field in TINKER molecular modeling software package is discussed and subsequently applied to the hydration free energy calculation of 20 small organic molecules, among which 15 are positively charged and five are neutral. The calculated hydration free energies of these molecules are compared with the results obtained from the Bennett acceptance ratio method using the same force field, and overall an excellent agreement is obtained. The convergence and the efficiency of the OSRW are also discussed and compared with BAR. Combining enhanced sampling techniques such as OSRW with polarizable force fields is very promising for achieving both accuracy and efficiency in general free energy calculations.

  2. Apparatus and method for detecting gamma radiation

    DOEpatents

    Sigg, Raymond A.

    1994-01-01

    A high efficiency radiation detector for measuring X-ray and gamma radiation from small-volume, low-activity liquid samples with an overall uncertainty better than 0.7% (one sigma SD). The radiation detector includes a hyperpure germanium well detector, a collimator, and a reference source. The well detector monitors gamma radiation emitted by the reference source and a radioactive isotope or isotopes in a sample source. The radiation from the reference source is collimated to avoid attenuation of reference source gamma radiation by the sample. Signals from the well detector are processed and stored, and the stored data is analyzed to determine the radioactive isotope(s) content of the sample. Minor self-attenuation corrections are calculated from chemical composition data.

  3. Development of a Nondestructive Evaluation Technique for Degraded Thermal Barrier Coatings Using Microwave

    NASA Astrophysics Data System (ADS)

    Sayar, M.; Ogawa, K.; Shoji, T.

    2008-02-01

    Thermal barrier coatings (TBCs) have been widely used in gas turbine engines to protect the substrate metal alloy against high temperature and to enhance turbine efficiency. Currently, there are no reliable nondestructive techniques available to monitor TBC integrity over the lifetime of the coating. Hence, to detect the top coating (TC) and thermally grown oxide (TGO) thicknesses, a microwave nondestructive technique that utilizes a rectangular waveguide was developed. The phase of the reflection coefficient at the interface of the TC and the waveguide varies for different TGO and TC thicknesses. Therefore, measuring the phase of the reflection coefficient enables us to accurately calculate these thicknesses. Finally, a theoretical analysis was used to evaluate the reliability of the experimental results.

  4. Optimizing microwave photodetection: input-output theory

    NASA Astrophysics Data System (ADS)

    Schöndorf, M.; Govia, L. C. G.; Vavilov, M. G.; McDermott, R.; Wilhelm, F. K.

    2018-04-01

    High fidelity microwave photon counting is an important tool for various areas from background radiation analysis in astronomy to the implementation of circuit quantum electrodynamic architectures for the realization of a scalable quantum information processor. In this work we describe a microwave photon counter coupled to a semi-infinite transmission line. We employ input-output theory to examine a continuously driven transmission line as well as traveling photon wave packets. Using analytic and numerical methods, we calculate the conditions on the system parameters necessary to optimize measurement and achieve high detection efficiency. With this we can derive a general matching condition depending on the different system rates, under which the measurement process is optimal.

  5. CYANOMETHANIMINE ISOMERS IN COLD INTERSTELLAR CLOUDS: INSIGHTS FROM ELECTRONIC STRUCTURE AND KINETIC CALCULATIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vazart, Fanny; Latouche, Camille; Skouteris, Dimitrios

    2015-09-10

    New insights into the formation of interstellar cyanomethanimine, a species of great relevance in prebiotic chemistry, are provided by electronic structure and kinetic calculations for the reaction CN + CH2=NH. This reaction is a facile formation route of Z,E-C-cyanomethanimine, even under the extreme conditions of density and temperature typical of cold interstellar clouds. E-C-cyanomethanimine has recently been identified in Sgr B2(N) in the Green Bank Telescope (GBT) PRIMOS survey by P. Zaleski et al., and no efficient formation routes have been envisaged so far. The rate coefficient expression for the reaction channel leading to the observed isomer E-C-cyanomethanimine is 3.15 × 10^-10 × (T/300)^0.152 × e^(−0.0948/T). According to the present study, the more stable Z-C-cyanomethanimine isomer is formed with a slightly larger yield, 4.59 × 10^-10 × (T/300)^0.153 × e^(−0.0871/T). As detection of the E-isomer is favored by its larger dipole moment, the missing detection of the Z-isomer may be due to the sensitivity limit of the GBT PRIMOS survey, and detection of the Z-isomer should be attempted with more sensitive instrumentation. The CN + CH2=NH reaction can also play a role in the chemistry of the upper atmosphere of Titan, where the cyanomethanimine products can contribute to the buildup of the observed nitrogen-rich organic aerosols that cover the moon.

  6. Optical changes in cortical tissue during seizure activity using optical coherence tomography (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Ornelas, Danielle; Hasan, Md.; Gonzalez, Oscar; Krishnan, Giri; Szu, Jenny I.; Myers, Timothy; Hirota, Koji; Bazhenov, Maxim; Binder, Devin K.; Park, Boris H.

    2017-02-01

    Epilepsy is a chronic neurological disorder characterized by recurrent and unpredictable seizures. Electrophysiology has remained the gold standard of neural activity detection but its resolution and high susceptibility to noise and motion artifact limit its efficiency. Optical imaging techniques, including fMRI, intrinsic optical imaging, and diffuse optical imaging, have also been used to detect neural activity yet these techniques rely on the indirect measurement of changes in blood flow. A more direct optical imaging technique is optical coherence tomography (OCT), a label-free, high resolution, and minimally invasive imaging technique that can produce depth-resolved cross-sectional and 3D images. In this study, OCT was used to detect non-vascular depth-dependent optical changes in cortical tissue during 4-aminopyridine (4-AP) induced seizure onset. Calculations of localized optical attenuation coefficient (µ) allow for the assessment of depth-resolved volumetric optical changes in seizure induced cortical tissue. By utilizing the depth-dependency of the attenuation coefficient, we demonstrate the ability to locate and remove the optical effects of vasculature within the upper regions of the cortex on the attenuation calculations of cortical tissue in vivo. The results of this study reveal a significant depth-dependent decrease in attenuation coefficient of nonvascular cortical tissue both ex vivo and in vivo. Regions exhibiting decreased attenuation coefficient show significant temporal correlation to regions of increased electrical activity during seizure onset and progression. This study allows for a more thorough and biologically relevant analysis of the optical signature of seizure activity in vivo using OCT.

  7. Sensitive gas analysis system on a microchip and application for on-site monitoring of NH3 in a clean room.

    PubMed

    Hiki, Shinichiro; Mawatari, Kazuma; Aota, Arata; Saito, Maki; Kitamori, Takehiko

    2011-06-15

    A portable, highly sensitive, and continuous ammonia gas monitoring system was developed with a microfluidic chip. The system consists of a main unit, a gas pumping unit, and a computer which serves as an operation console. The system measures 45 cm (width) × 30 cm (depth) × 30 cm (height), making it portable. A highly efficient and stable extraction method was developed by utilizing an annular gas/liquid laminar flow. In addition, a stable gas/liquid separation method with a PTFE membrane was developed by arranging a fluidic network in three dimensions to achieve almost zero dead volume at the gas/liquid extraction part. The extraction rate was almost 100% with a liquid flow rate of 3.5 μL/min and a gas flow rate of 100 mL/min (contact time of ~15 ms), and the concentration factor was 200 times, as calculated from the NH3 concentration (w/w) in the gas and liquid phases. Stable phase separation and detection were sustained for more than 3 weeks in an automated operation, which was sufficient for the monitoring application. The lower limit of detection, calculated based on a signal-to-noise ratio of 3, was 84 ppt, which showed good detectability for NH3 analysis. We believe that our system is a very powerful tool for gas analysis due to the advantages of portable size, high sensitivity, and continuous monitoring, and it is particularly useful in the semiconductor field.

  8. Design of a sector bowtie nano-rectenna for optical power and infrared detection

    NASA Astrophysics Data System (ADS)

    Wang, Kai; Hu, Haifeng; Lu, Shan; Guo, Lingju; He, Tao

    2015-10-01

    We designed a sector bowtie nanoantenna integrated with a rectifier (Au-TiO x -Ti diode) for collecting infrared energy. The optical performance of the metallic bowtie nanoantenna was numerically investigated at infrared frequencies (5-30 μm) using three-dimensional frequency-domain electromagnetic field calculation software based on the finite element method. The simulation results indicate that the resonance wavelength and local field enhancement are greatly affected by the shape and size of the bowtie nanoantenna, as well as the relative permittivity and conductivity of the dielectric layer. The output current of the rectified nano-rectenna is substantially at nanoampere magnitude with an electric field intensity of 1 V/m. Moreover, the power conversion efficiency for devices with three different substrates illustrates that a substrate with a larger refractive index yields a higher efficiency and longer infrared response wavelength. Consequently, the optimized structure can provide theoretical support for the design of novel optical rectennas and fabrication of optoelectronic devices.

  9. Isotopic composition analysis and age dating of uranium samples by high resolution gamma ray spectrometry

    NASA Astrophysics Data System (ADS)

    Apostol, A. I.; Pantelica, A.; Sima, O.; Fugaru, V.

    2016-09-01

    Non-destructive methods were applied to determine the isotopic composition and the time elapsed since last chemical purification of nine uranium samples. The applied methods are based on measuring gamma and X radiations of the uranium samples with a high resolution low energy gamma spectrometric system with a planar high purity germanium detector and a low background gamma spectrometric system with a coaxial high purity germanium detector. The "Multigroup γ-ray Analysis Method for Uranium" (MGAU) code was used for the precise determination of the samples' isotopic composition. The age of the samples was determined from the isotopic ratio 214Bi/234U. This ratio was calculated from the analyzed spectra of each uranium sample, using relative detection efficiency. Special attention is paid to the coincidence summing corrections that have to be taken into account when performing this type of analysis. In addition, an alternative approach for the age determination using full energy peak efficiencies obtained by Monte Carlo simulations with the GESPECOR code is described.

  10. Fast template matching with polynomials.

    PubMed

    Omachi, Shinichiro; Omachi, Masako

    2007-08-01

    Template matching is widely used for many applications in image and signal processing. This paper proposes a novel template matching algorithm, called algebraic template matching. Given a template and an input image, algebraic template matching efficiently calculates similarities between the template and the partial images of the input image, for various widths and heights. The partial image most similar to the template image is detected from the input image for any location, width, and height. In the proposed algorithm, a polynomial that approximates the template image is used to match the input image instead of the template image. The proposed algorithm is effective especially when the width and height of the template image differ from the partial image to be matched. An algorithm using the Legendre polynomial is proposed for efficient approximation of the template image. This algorithm not only reduces computational costs, but also improves the quality of the approximated image. It is shown theoretically and experimentally that the computational cost of the proposed algorithm is much smaller than the existing methods.
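
    As a rough, one-dimensional illustration of the Legendre-polynomial approximation idea (the paper works with two-dimensional template images and a more elaborate matching scheme), the sketch below fits a toy profile with a low-order Legendre expansion using NumPy; the template shape and expansion order are arbitrary choices.

    ```python
    import numpy as np

    # Approximate a toy 1-D template with a low-order Legendre expansion
    x = np.linspace(-1.0, 1.0, 64)
    template = np.exp(-8.0 * x ** 2)                       # arbitrary template profile
    coeffs = np.polynomial.legendre.legfit(x, template, deg=6)
    approx = np.polynomial.legendre.legval(x, coeffs)
    print("max abs error:", np.max(np.abs(template - approx)))
    ```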

  11. Current-induced spin polarization in InGaAs and GaAs epilayers with varying doping densities

    DOE PAGES

    Luengo-Kovac, Marta; Huang, Simon; Del Gaudio, Davide; ...

    2017-11-16

    Here, the current-induced spin polarization and momentum-dependent spin-orbit field were measured in InxGa1-xAs epilayers with varying indium concentrations and silicon doping densities. Samples with higher indium concentrations and carrier concentrations and lower mobilities were found to have larger electrical spin generation efficiencies. Furthermore, current-induced spin polarization was detected in GaAs epilayers despite the absence of measurable spin-orbit fields, indicating that the extrinsic contributions to the spin-polarization mechanism must be considered. Theoretical calculations based on a model that includes extrinsic contributions to the spin dephasing and the spin Hall effect, in addition to the intrinsic Rashba and Dresselhaus spin-orbit coupling, are found to reproduce the experimental finding that the crystal direction with the smaller net spin-orbit field has larger electrical spin generation efficiency and are used to predict how sample parameters affect the magnitude of the current-induced spin polarization.

  12. Infrared photodetectors based on graphene van der Waals heterostructures

    NASA Astrophysics Data System (ADS)

    Ryzhii, V.; Ryzhii, M.; Svintsov, D.; Leiman, V.; Mitin, V.; Shur, M. S.; Otsuji, T.

    2017-08-01

    We propose and evaluate the graphene layer (GL) infrared photodetectors (GLIPs) based on the van der Waals (vdW) heterostructures with the radiation absorbing GLs. The operation of the GLIPs is associated with the electron photoexcitation from the GL valence band to the continuum states above the inter-GL barriers (either via tunneling or direct transitions to the continuum states). Using the developed device model, we calculate the photodetector characteristics as functions of the GL-vdW heterostructure parameters. We show that due to a relatively large efficiency of the electron photoexcitation and low capture efficiency of the electrons propagating over the barriers in the inter-GL layers, GLIPs should exhibit the elevated photoelectric gain and detector responsivity as well as relatively high detectivity. The possibility of high-speed operation, high conductivity, transparency of the GLIP contact layers, and the sensitivity to normally incident IR radiation provides additional potential advantages in comparison with other IR photodetectors. In particular, the proposed GLIPs can compete with unitravelling-carrier photodetectors.

  13. An efficient shutter-less non-uniformity correction method for infrared focal plane arrays

    NASA Astrophysics Data System (ADS)

    Huang, Xiyan; Sui, Xiubao; Zhao, Yao

    2017-02-01

    The non-uniform response of infrared focal plane array (IRFPA) detectors degrades images with fixed pattern noise. At present, it is common to use a shutter to block the target radiation while the non-uniformity correction parameters of the infrared imaging system are updated. Using a shutter "freezes" the image and inevitably raises problems of system stability and reliability, power consumption, and concealment of the infrared detection. In this paper, we present an efficient shutter-less non-uniformity correction (NUC) method for infrared focal plane arrays. The imaging system uses data acquired in a thermostat to calculate, in real time, the incident infrared radiation contributed by the shell, and the detector output remaining after the shell radiation is removed is corrected with the gain coefficients. The method has been tested on a real infrared imaging system, reaching a high correction level, reducing fixed pattern noise, and adapting to a wide temperature range.

  14. Quantitative fluorescence measurements performed on typical matrix molecules in matrix-assisted laser desorption/ionisation

    NASA Astrophysics Data System (ADS)

    Allwood, D. A.; Dyer, P. E.

    2000-11-01

    Fundamental photophysical parameters have been determined for several molecules that are commonly used as matrices, e.g. ferulic acid, within matrix-assisted laser desorption/ionization (MALDI) mass spectrometry. Fluorescence quantum efficiencies (φqe), singlet decay rates (kl), vibrationless ground-singlet transition energies and average fluorescence wavelengths have been obtained from solid and solution samples by quantitative optical measurements. This new data will assist in modelling calculations of MALDI processes and in highlighting desirable characteristics of MALDI matrices. φqe may be as high as 0.59 whilst the radiative decay rate (kf) appears to be within the (0.8-4) × 10^8 s^-1 range. Interestingly, α-cyano-4-hydroxycinnamic acid (α-CHC) has a very low φqe and fast non-radiative decay rate which would imply a rapid and efficient thermalisation of electronic excitation. This is in keeping with observations that α-CHC exhibits low threshold fluences for ion detection and the low fluences at which α-CHC tends to fragment.
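
    The radiative rate follows from the two measured quantities through the standard photophysics relation k_f = φ_qe · k_l. The snippet below applies that relation; the numerical inputs are illustrative values chosen to lie within the ranges quoted in the abstract, not measurements from the paper.

    ```python
    def radiative_decay_rate(phi_qe, k_l):
        """Standard relation k_f = phi_qe * k_l, where phi_qe is the fluorescence
        quantum efficiency and k_l the total singlet decay rate (s^-1)."""
        return phi_qe * k_l

    # Illustrative values only, within the ranges quoted above
    print(f"k_f = {radiative_decay_rate(0.59, 5.0e8):.2e} s^-1")
    ```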

  15. An integrated condition-monitoring method for a milling process using reduced decomposition features

    NASA Astrophysics Data System (ADS)

    Liu, Jie; Wu, Bo; Wang, Yan; Hu, Youmin

    2017-08-01

    Complex and non-stationary cutting chatter affects productivity and quality in the milling process. Developing an effective condition-monitoring approach is critical to accurately identify cutting chatter. In this paper, an integrated condition-monitoring method is proposed, where reduced features are used to efficiently recognize and classify machine states in the milling process. In the proposed method, vibration signals are decomposed into multiple modes with variational mode decomposition, and Shannon power spectral entropy is calculated to extract features from the decomposed signals. Principal component analysis is adopted to reduce feature size and computational cost. With the extracted feature information, the probabilistic neural network model is used to recognize and classify the machine states, including stable, transition, and chatter states. Experimental studies are conducted, and results show that the proposed method can effectively detect cutting chatter during different milling operation conditions. This monitoring method is also efficient enough to satisfy fast machine state recognition and classification.

  16. Greatly Suppressed Shuttle Effect for Improved Lithium Sulfur Battery Performance through Short Chain Intermediates.

    PubMed

    Xu, Na; Qian, Tao; Liu, Xuejun; Liu, Jie; Chen, Yu; Yan, Chenglin

    2017-01-11

    The high solubility of long-chain lithium polysulfides and their infamous shuttle effect in the lithium sulfur battery lead to rapid capacity fading along with low Coulombic efficiency. To address the above issues, we propose a new strategy to suppress the shuttle effect for greatly enhanced lithium sulfur battery performance, mainly through the formation of short-chain intermediates during discharging, which allows significant improvements including a high capacity of 1022 mAh/g with 87% retention over 450 cycles. Without LiNO3-containing electrolytes, an excellent Coulombic efficiency of ∼99.5% for more than 500 cycles is obtained, indicating a greatly suppressed shuttle effect. In situ UV/vis analysis of the electrolyte during cycling reveals that the short-chain Li2S2 and Li2S3 polysulfides are detected as the main intermediates, which is verified theoretically by density functional theory (DFT) calculations. Our strategy may open up a new avenue for the practical application of the lithium sulfur battery.

  17. Strain-engineered optoelectronic properties of 2D transition metal dichalcogenide lateral heterostructures

    DOE PAGES

    Lee, Jaekwang; Huang, Jingsong; Sumpter, Bobby G.; ...

    2017-02-17

    Compared with their bulk counterparts, 2D materials can sustain much higher elastic strain at which optical quantities such as bandgaps and absorption spectra governing optoelectronic device performance can be modified with relative ease. Using first-principles density functional theory and quasiparticle GW calculations, we demonstrate how uniaxial tensile strain can be utilized to optimize the electronic and optical properties of transition metal dichalcogenide lateral (in-plane) heterostructures such as MoX2/WX2 (X = S, Se, Te). We find that these lateral-type heterostructures may facilitate efficient electron–hole separation for light detection/harvesting and preserve their type II characteristic up to 12% of uniaxial strain. Based on the strain-dependent bandgap and band offset, we show that uniaxial tensile strain can significantly increase the power conversion efficiency of these lateral heterostructures. Our results suggest that these strain-engineered lateral heterostructures are promising for optimizing optoelectronic device performance by selectively tuning the energetics of the bandgap.

  18. Current-induced spin polarization in InGaAs and GaAs epilayers with varying doping densities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luengo-Kovac, Marta; Huang, Simon; Del Gaudio, Davide

    Here, the current-induced spin polarization and momentum-dependent spin-orbit field were measured in InxGa1-xAs epilayers with varying indium concentrations and silicon doping densities. Samples with higher indium concentrations and carrier concentrations and lower mobilities were found to have larger electrical spin generation efficiencies. Furthermore, current-induced spin polarization was detected in GaAs epilayers despite the absence of measurable spin-orbit fields, indicating that the extrinsic contributions to the spin-polarization mechanism must be considered. Theoretical calculations based on a model that includes extrinsic contributions to the spin dephasing and the spin Hall effect, in addition to the intrinsic Rashba and Dresselhaus spin-orbit coupling, are found to reproduce the experimental finding that the crystal direction with the smaller net spin-orbit field has larger electrical spin generation efficiency and are used to predict how sample parameters affect the magnitude of the current-induced spin polarization.

  19. Exploiting Habitat and Gear Patterns for Efficient Detection of Rare and Non-native Benthos and Fish in Great Lakes Coastal ecosystems

    EPA Science Inventory

    There is at present no comprehensive early-detection monitoring for exotic species in the Great Lakes, despite their continued arrival and impacts and recognition that early detection is key to effective management. We evaluated strategies for efficient early-detection monitorin...

  20. Nanophotonic Hot Electron Solar-Blind Ultraviolet Detectors with a Metal-Oxide-Semiconductor Structure

    NASA Astrophysics Data System (ADS)

    Wang, Zhiyuan

    Solar-blind ultraviolet detection refers to photon detection specifically in the wavelength range of 200 nm to 320 nm. Without background noise from solar radiation, it has broad applications from homeland security to environmental monitoring. In this thesis, we design and fabricate a nanophotonic metal-oxide-semiconductor device for solar-blind UV detection. Instead of using semiconductors as the active absorber, we use metal Sn nano-grating structures to absorb UV photons and generate hot electrons for internal photoemission across the Sn/SiO2 interfacial barrier, thereby generating photocurrent between the metal and semiconductor regions upon UV excitation. The large metal/oxide interfacial energy barrier enables solar-blind UV detection by blocking the less energetic electrons excited by visible photons. With optimized design, 85% UV absorption and hot electron excitation can be achieved within the mean free path of 20 nm from the metal/oxide interface. This feature greatly enhances hot electron transport across the interfacial barrier to generate photocurrent. Various fabrication techniques have been developed for preparing nano-gratings. For nominally 20 nm-thick deposited Sn, the self-formed pseudo-periodic nanostructure helps achieve 75% UV absorption from λ = 200 nm to 300 nm. With another layer of nominally 20 nm-thick Sn, similar UV absorption is maintained while conductivity is improved, which is beneficial for overall device efficiency. The Sn/SiO2/Si MOS devices show good solar-blind character while achieving 13% internal quantum efficiency for 260 nm UV with only 20 nm-thick Sn, and some devices demonstrate much higher (even >100%) internal quantum efficiency. While a more accurate estimation of the device's effective area is needed to confirm our calculation, these results indeed show great potential for this type of hot-electron-based photodetector and for the Sn nanostructure as an effective UV absorber. The simple geometry of the self-assembled Sn nano-gratings and MOS structure makes this novel type of device easy to fabricate and integrate with Si ROICs compared to existing solar-blind UV detection schemes. The presented device structure also breaks through the conventional notion that photon absorption by metal is always a loss in solid-state photodetectors, and it can potentially be extended to other active metal photonic devices.

  1. Project W-320, 241-C-106 sluicing HVAC calculations, Volume 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bailey, J.W.

    1998-08-07

    This supporting document has been prepared to make the FDNW calculations for Project W-320, readily retrievable. The report contains the following calculations: Exhaust airflow sizing for Tank 241-C-106; Equipment sizing and selection recirculation fan; Sizing high efficiency mist eliminator; Sizing electric heating coil; Equipment sizing and selection of recirculation condenser; Chiller skid system sizing and selection; High efficiency metal filter shielding input and flushing frequency; and Exhaust skid stack sizing and fan sizing.

  2. Computationally Efficient Multiconfigurational Reactive Molecular Dynamics

    PubMed Central

    Yamashita, Takefumi; Peng, Yuxing; Knight, Chris; Voth, Gregory A.

    2012-01-01

    It is a computationally demanding task to explicitly simulate the electronic degrees of freedom in a system to observe the chemical transformations of interest, while at the same time sampling the time and length scales required to converge statistical properties and thus reduce artifacts due to initial conditions, finite-size effects, and limited sampling. One solution that significantly reduces the computational expense consists of molecular models in which effective interactions between particles govern the dynamics of the system. If the interaction potentials in these models are developed to reproduce calculated properties from electronic structure calculations and/or ab initio molecular dynamics simulations, then one can calculate accurate properties at a fraction of the computational cost. Multiconfigurational algorithms model the system as a linear combination of several chemical bonding topologies to simulate chemical reactions, also sometimes referred to as “multistate”. These algorithms typically utilize energy and force calculations already found in popular molecular dynamics software packages, thus facilitating their implementation without significant changes to the structure of the code. However, the evaluation of energies and forces for several bonding topologies per simulation step can lead to poor computational efficiency if redundancy is not efficiently removed, particularly with respect to the calculation of long-ranged Coulombic interactions. This paper presents accurate approximations (effective long-range interaction and resulting hybrid methods) and multiple-program parallelization strategies for the efficient calculation of electrostatic interactions in reactive molecular simulations. PMID:25100924

  3. Multi-GPU hybrid programming accelerated three-dimensional phase-field model in binary alloy

    NASA Astrophysics Data System (ADS)

    Zhu, Changsheng; Liu, Jieqiong; Zhu, Mingfang; Feng, Li

    2018-03-01

    In the simulation of dendritic growth, computational efficiency and problem scale strongly influence the usefulness of the three-dimensional phase-field model. Thus, seeking a high-performance calculation method that improves computational efficiency and expands the problem scale is of great significance for research on the microstructure of materials. A high-performance calculation method based on an MPI+CUDA hybrid programming model is introduced. Multiple GPUs are used to perform quantitative numerical simulations of the three-dimensional phase-field model in a binary alloy under the condition of coupled multi-physical processes. The acceleration provided by different numbers of GPU nodes at different calculation scales is explored. On the foundation of the introduced multi-GPU calculation model, two optimization schemes are proposed: non-blocking communication optimization and overlap of MPI and GPU computing. The results of the two optimization schemes and the basic multi-GPU model are compared. The calculation results show that the multi-GPU model clearly improves the computational efficiency of the three-dimensional phase-field simulation, reaching 13 times the speed of a single GPU, and the problem scale has been expanded to 8193. Both optimization schemes are shown to be feasible, and the overlap of MPI and GPU computing performs better, running 1.7 times faster than the basic multi-GPU model when 21 GPUs are used.
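
    The overlap optimization described above follows a familiar pattern: post non-blocking halo exchanges, update the interior of the local sub-domain while the messages are in flight, then incorporate the received halos. The sketch below illustrates that pattern with mpi4py, with a trivial NumPy update standing in for the CUDA kernel; it is a schematic under those assumptions, not the paper's implementation.

    ```python
    # Run with e.g.: mpirun -n 4 python overlap_sketch.py
    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    field = np.random.rand(256, 256)             # local sub-domain of the phase field
    left, right = (rank - 1) % size, (rank + 1) % size

    send_right = np.ascontiguousarray(field[:, -1])
    recv_left = np.empty_like(send_right)
    requests = [comm.Isend(send_right, dest=right),
                comm.Irecv(recv_left, source=left)]

    # Interior update proceeds while the halo exchange is still in flight
    # (in the real code this would be a CUDA kernel launched on the GPU).
    field[1:-1, 1:-1] *= 0.999

    MPI.Request.Waitall(requests)
    field[:, 0] = recv_left                      # apply the received halo afterwards
    ```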

  4. Discriminative detection and enumeration of microbial life in marine subsurface sediments.

    PubMed

    Morono, Yuki; Terada, Takeshi; Masui, Noriaki; Inagaki, Fumio

    2009-05-01

    Detection and enumeration of microbial life in natural environments provide fundamental information about the extent of the biosphere on Earth. However, it has long been difficult to evaluate the abundance of microbial cells in sedimentary habitats because non-specific binding of fluorescent dye and/or auto-fluorescence from sediment particles strongly hampers the recognition of cell-derived signals. Here, we show a highly efficient and discriminative detection and enumeration technique for microbial cells in sediments using hydrofluoric acid (HF) treatment and automated fluorescent image analysis. Washing of sediment slurries with HF significantly reduced non-biological fluorescent signals such as amorphous silica and enhanced the efficiency of cell detachment from the particles. We found that cell-derived SYBR Green I signals can be distinguished from non-biological backgrounds by dividing green fluorescence (band-pass filter: 528/38 nm (center-wavelength/bandwidth)) by red (617/73 nm) per image. A newly developed automated microscope system could take a wide range of high-resolution image in a short time, and subsequently enumerate the accurate number of cell-derived signals by the calculation of green to red fluorescence signals per image. Using our technique, we evaluated the microbial population in deep marine sediments offshore Peru and Japan down to 365 m below the seafloor, which provided objective digital images as evidence for the quantification of the prevailing microbial life. Our method is hence useful to explore the extent of sub-seafloor life in the future scientific drilling, and moreover widely applicable in the study of microbial ecology.
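
    A minimal sketch of the green-to-red ratio discrimination described above is given below, assuming two co-registered fluorescence images supplied as NumPy arrays; the ratio threshold is a hypothetical placeholder rather than the cutoff derived in the paper.

    ```python
    import numpy as np
    from scipy import ndimage

    def count_cell_signals(green, red, ratio_threshold=2.0, eps=1e-6):
        """Divide the green-fluorescence image by the red image pixel-wise and
        count connected regions whose ratio exceeds the threshold (taken here
        as cell-derived SYBR Green I signals)."""
        ratio = green.astype(float) / (red.astype(float) + eps)
        mask = ratio > ratio_threshold
        _, n_objects = ndimage.label(mask)
        return n_objects

    # Toy example with random 8-bit images (real data would be microscope frames)
    rng = np.random.default_rng(0)
    green_img = rng.integers(0, 255, size=(512, 512))
    red_img = rng.integers(1, 255, size=(512, 512))
    print(count_cell_signals(green_img, red_img))
    ```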

  5. A novel inlet system for online chemical analysis of semi-volatile submicron particulate matter

    NASA Astrophysics Data System (ADS)

    Eichler, P.; Müller, M.; D'Anna, B.; Wisthaler, A.

    2015-03-01

    We herein present a novel modular inlet system designed to be coupled to low-pressure gas analyzers for online chemical characterization of semi-volatile submicron particles. The "chemical analysis of aerosol online" (CHARON) inlet consists of a gas-phase denuder for stripping off gas-phase analytes, an aerodynamic lens for particle collimation combined with an inertial sampler for the particle-enriched flow and a thermodesorption unit for particle volatilization prior to chemical analysis. The denuder was measured to remove gas-phase organics with an efficiency > 99.999% and to transmit particles in the 100-750 nm size range with a 75-90% efficiency. The measured average particle enrichment factor in the subsampling flow from the aerodynamic lens was 25.6, which is a factor of 3 lower than the calculated theoretical optimum. We coupled the CHARON inlet to a proton-transfer-reaction time-of-flight mass spectrometer (PTR-ToF-MS) which quantitatively detects most organic analytes and ammonia. The combined CHARON-PTR-ToF-MS setup is thus capable of measuring both the organic and the ammonium fraction in submicron particles in real time. Individual organic compounds can be detected down to levels of 10-20 ng m-3. Two proof-of-principle studies were carried out for demonstrating the analytical power of this new instrumental setup: (i) oxygenated organics and their partitioning between the gas and the particulate phase were observed from the reaction of limonene with ozone and (ii) nicotine was measured in cigarette smoke particles demonstrating that selected organic target compounds can be detected in submicron particles in real time.

  6. Efficient 2-Nitrophenol Chemical Sensor Development Based on Ce2O3 Nanoparticles Decorated CNT Nanocomposites for Environmental Safety

    PubMed Central

    Hussain, Mohammad M.; Rahman, Mohammed M.; Asiri, Abdullah M.

    2016-01-01

    Ce2O3 nanoparticle decorated CNT nanocomposites (Ce2O3.CNT NCs) were prepared by a wet-chemical method in basic medium. The Ce2O3.CNT NCs were examined using FTIR, UV/Vis, Field-Emission Scanning Electron Microscopy (FESEM), X-ray electron dispersive spectroscopy (XEDS), X-ray photoelectron spectroscopy (XPS), and powder X-ray diffraction (XRD). A selective 2-nitrophenol (2-NP) sensor was developed by fabricating a thin layer of the NCs onto a flat glassy carbon electrode (GCE, surface area = 0.0316 cm2). High sensitivity, a wide linear dynamic range (LDR), long-term stability, and enhanced electrochemical performance towards 2-NP were achieved with a reliable current-voltage (I-V) method. The calibration curve was found to be linear (R2 = 0.9030) over a wide range of 2-NP concentrations (100 pM ~ 100.0 mM). The limit of detection (LOD) and the sensor sensitivity, calculated based on a signal-to-noise ratio of ~3, were 60 ± 0.02 pM and 1.6 × 10−3 μA μM−1 cm−2, respectively. The synthesis of Ce2O3.CNT NCs by a wet-chemical process is an excellent way of establishing nanomaterial-decorated carbon materials for chemical sensor development aimed at detecting hazardous compounds in the health-care and environmental fields at broad scales. Finally, the proposed chemical sensor can be applied effectively for the selective detection of the toxic 2-NP component in real environmental samples with acceptable and reasonable results. PMID:27973600
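
    The quoted detection limit follows the usual three-sigma convention, LOD = 3·σ_noise / S, where S is the calibration-curve slope. The sketch below shows that calculation in generic form; the noise value is a placeholder chosen so the output reproduces the order of magnitude reported above, not a measured quantity.

    ```python
    def limit_of_detection(noise_sd, sensitivity):
        """LOD for a signal-to-noise ratio of 3: LOD = 3 * sigma_noise / S,
        where S is the calibration-curve slope (sensitivity)."""
        return 3.0 * noise_sd / sensitivity

    # Placeholder inputs: noise standard deviation in uA, sensitivity in uA/uM
    print(limit_of_detection(noise_sd=3.2e-8, sensitivity=1.6e-3), "uM")  # 6e-5 uM = 60 pM
    ```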

  7. Efficient 2-Nitrophenol Chemical Sensor Development Based on Ce2O3 Nanoparticles Decorated CNT Nanocomposites for Environmental Safety.

    PubMed

    Hussain, Mohammad M; Rahman, Mohammed M; Asiri, Abdullah M

    2016-01-01

    Ce2O3 nanoparticle decorated CNT nanocomposites (Ce2O3.CNT NCs) were prepared by a wet-chemical method in basic medium. The Ce2O3.CNT NCs were examined using FTIR, UV/Vis, Field-Emission Scanning Electron Microscopy (FESEM), X-ray electron dispersive spectroscopy (XEDS), X-ray photoelectron spectroscopy (XPS), and powder X-ray diffraction (XRD). A selective 2-nitrophenol (2-NP) sensor was developed by fabricating a thin layer of the NCs onto a flat glassy carbon electrode (GCE, surface area = 0.0316 cm2). High sensitivity, a wide linear dynamic range (LDR), long-term stability, and enhanced electrochemical performance towards 2-NP were achieved with a reliable current-voltage (I-V) method. The calibration curve was found to be linear (R2 = 0.9030) over a wide range of 2-NP concentrations (100 pM ~ 100.0 mM). The limit of detection (LOD) and the sensor sensitivity, calculated based on a signal-to-noise ratio of ~3, were 60 ± 0.02 pM and 1.6 × 10-3 μA μM-1 cm-2, respectively. The synthesis of Ce2O3.CNT NCs by a wet-chemical process is an excellent way of establishing nanomaterial-decorated carbon materials for chemical sensor development aimed at detecting hazardous compounds in the health-care and environmental fields at broad scales. Finally, the proposed chemical sensor can be applied effectively for the selective detection of the toxic 2-NP component in real environmental samples with acceptable and reasonable results.

  8. Low light CMOS contact imager with an integrated poly-acrylic emission filter for fluorescence detection.

    PubMed

    Dattner, Yonathan; Yadid-Pecht, Orly

    2010-01-01

    This study presents the fabrication of a low-cost poly-acrylic acid (PAA) based emission filter integrated with a low light CMOS contact imager for fluorescence detection. The process uses PAA as an adhesive for the emission filter. The poly-acrylic solution was chosen due to its optical transparency, adhesive properties, miscibility with polar protic solvents and, most importantly, its bio-compatibility with a biological environment. The emission filter, also known as an absorption filter, involves dissolving an absorbing specimen in a polar protic solvent and mixing it with the PAA to uniformly bond the absorbing specimen and harden the filter. The PAA is optically transparent in solid form and therefore does not contribute to the absorbance of light in the visible spectrum. Many combinations of absorbing specimen and polar protic solvent can be derived, yielding different filter characteristics in different parts of the spectrum. We report a specific combination as a first example of implementation of our technology. The filter reported has excitation in the green spectrum and emission in the red spectrum, utilizing the increased quantum efficiency of the photosensitive sensor array. The thickness of the filter (20 μm) was chosen by calculating the desired SNR using the Beer-Lambert law for liquids, the quantum yield of the fluorophore, and the quantum efficiency of the sensor array. The filter's promising characteristics make it suitable for low light fluorescence detection. The filter was integrated with a fully functional low noise, low light CMOS contact imager, and experimental results using fluorescent polystyrene micro-spheres are presented.
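
    The filter-thickness choice rests on the Beer-Lambert law for liquids, A = ε·c·l with transmittance T = 10^(-A). A minimal sketch follows; the molar absorptivity and dye concentration are hypothetical values, not the actual parameters of the PAA filter.

    ```python
    def transmittance(epsilon, concentration, path_length_cm):
        """Beer-Lambert law for liquids: A = epsilon * c * l and T = 10**(-A).
        epsilon in L mol^-1 cm^-1, concentration in mol/L, path length in cm."""
        absorbance = epsilon * concentration * path_length_cm
        return 10.0 ** (-absorbance)

    # Hypothetical absorber in a 20-um (2e-3 cm) filter layer
    blocking = transmittance(epsilon=5.0e4, concentration=0.05, path_length_cm=20e-4)
    print(f"excitation-band transmittance: {blocking:.1e}")
    ```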

  9. Technical Note: Development and performance of a software tool for quality assurance of online replanning with a conventional Linac or MR-Linac.

    PubMed

    Chen, Guang-Pei; Ahunbay, Ergun; Li, X Allen

    2016-04-01

    To develop an integrated quality assurance (QA) software tool for online replanning capable of efficiently and automatically checking radiation treatment (RT) planning parameters and gross plan quality, verifying treatment plan data transfer from treatment planning system (TPS) to record and verify (R&V) system, performing a secondary monitor unit (MU) calculation with or without a presence of a magnetic field from MR-Linac, and validating the delivery record consistency with the plan. The software tool, named ArtQA, was developed to obtain and compare plan and treatment parameters from both the TPS and the R&V system database. The TPS data are accessed via direct file reading and the R&V data are retrieved via open database connectivity and structured query language. Plan quality is evaluated with both the logical consistency of planning parameters and the achieved dose-volume histograms. Beams in between the TPS and R&V system are matched based on geometry configurations. To consider the effect of a 1.5 T transverse magnetic field from MR-Linac in the secondary MU calculation, a method based on modified Clarkson integration algorithm was developed and tested for a series of clinical situations. ArtQA has been used in their clinic and can quickly detect inconsistencies and deviations in the entire RT planning process. With the use of the ArtQA tool, the efficiency for plan check including plan quality, data transfer, and delivery check can be improved by at least 60%. The newly developed independent MU calculation tool for MR-Linac reduces the difference between the plan and calculated MUs by 10%. The software tool ArtQA can be used to perform a comprehensive QA check from planning to delivery with conventional Linac or MR-Linac and is an essential tool for online replanning where the QA check needs to be performed rapidly.

  10. Technical Note: Development and performance of a software tool for quality assurance of online replanning with a conventional Linac or MR-Linac

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Guang-Pei, E-mail: gpchen@mcw.edu; Ahunbay, Ergun; Li, X. Allen

    Purpose: To develop an integrated quality assurance (QA) software tool for online replanning capable of efficiently and automatically checking radiation treatment (RT) planning parameters and gross plan quality, verifying treatment plan data transfer from treatment planning system (TPS) to record and verify (R&V) system, performing a secondary monitor unit (MU) calculation with or without a presence of a magnetic field from MR-Linac, and validating the delivery record consistency with the plan. Methods: The software tool, named ArtQA, was developed to obtain and compare plan and treatment parameters from both the TPS and the R&V system database. The TPS data are accessed via direct file reading and the R&V data are retrieved via open database connectivity and structured query language. Plan quality is evaluated with both the logical consistency of planning parameters and the achieved dose–volume histograms. Beams in between the TPS and R&V system are matched based on geometry configurations. To consider the effect of a 1.5 T transverse magnetic field from MR-Linac in the secondary MU calculation, a method based on modified Clarkson integration algorithm was developed and tested for a series of clinical situations. Results: ArtQA has been used in their clinic and can quickly detect inconsistencies and deviations in the entire RT planning process. With the use of the ArtQA tool, the efficiency for plan check including plan quality, data transfer, and delivery check can be improved by at least 60%. The newly developed independent MU calculation tool for MR-Linac reduces the difference between the plan and calculated MUs by 10%. Conclusions: The software tool ArtQA can be used to perform a comprehensive QA check from planning to delivery with conventional Linac or MR-Linac and is an essential tool for online replanning where the QA check needs to be performed rapidly.

  11. Infrared Signal Detection by Upconversion Technique

    NASA Technical Reports Server (NTRS)

    Wong, Teh-Hwa; Yu, Jirong; Bai, Yingxin; Johnson, William E.

    2014-01-01

    We demonstrated up-conversion-assisted detection of a 2.05-micron signal using a bulk periodically poled lithium niobate crystal. An intrinsic up-conversion efficiency of 94% and an overall detection efficiency of 22.58% were achieved for 2.05-micron signals at the pW level.

  12. Numerical Simulation of Measurements during the Reactor Physical Startup at Unit 3 of Rostov NPP

    NASA Astrophysics Data System (ADS)

    Tereshonok, V. A.; Kryakvin, L. V.; Pitilimov, V. A.; Karpov, S. A.; Kulikov, V. I.; Zhylmaganbetov, N. M.; Kavun, O. Yu.; Popykin, A. I.; Shevchenko, R. A.; Shevchenko, S. A.; Semenova, T. V.

    2017-12-01

    The results of numerical calculations and measurements of some reactor parameters during the physical startup tests at unit 3 of Rostov NPP are presented. The following parameters are considered: the critical boric acid concentration and the currents from ionization chambers (IC) during the scram system efficiency evaluation. The scram system efficiency was determined using the inverse point kinetics equation with the measured and simulated IC currents. The results of steady-state calculations of relative power distribution and efficiency of the scram system and separate groups of control rods of the control and protection system are also presented. The calculations are performed using several codes, including precision ones.

  13. Optimization of single photon detection model based on GM-APD

    NASA Astrophysics Data System (ADS)

    Chen, Yu; Yang, Yi; Hao, Peiyu

    2017-11-01

    High-precision laser ranging over distances of one hundred kilometers requires a detector with very strong sensitivity to extremely weak light. At present, the Geiger-mode avalanche photodiode (GM-APD) is widely used because of its high sensitivity and high photoelectric conversion efficiency. Selecting and designing the detector parameters according to the system requirements is of great importance for improving the photon detection efficiency, and such design optimization requires a good model. In this paper, we study the existing Poisson-distribution detection model and account for the important detector parameters of dark count rate, dead time, quantum efficiency, and so on. We improve and optimize the detection model and select appropriate parameters to achieve optimal photon detection efficiency. Simulations carried out in Matlab are compared with actual test results, verifying the soundness of the model, which has reference value for engineering applications.
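
    A minimal form of the Poisson detection model discussed above treats a gate as registering a count whenever at least one primary carrier (signal photoelectron or dark count) triggers an avalanche; dead-time effects are ignored in this sketch and all parameter values are illustrative rather than taken from the paper.

    ```python
    import math

    def detection_probability(mean_photons, quantum_efficiency, dark_count_rate, gate_s):
        """Poisson model of one GM-APD gate: P(detect) = 1 - exp(-(eta*n + DCR*tau)),
        the probability that at least one primary carrier triggers an avalanche."""
        mean_primaries = quantum_efficiency * mean_photons + dark_count_rate * gate_s
        return 1.0 - math.exp(-mean_primaries)

    # Illustrative parameters: 0.5 photons per gate, 30% QE, 1 kHz dark counts, 100 ns gate
    print(detection_probability(0.5, 0.30, 1.0e3, 100e-9))
    ```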

  14. Relationship between efficiency and predictability in stock price change

    NASA Astrophysics Data System (ADS)

    Eom, Cheoljun; Oh, Gabjin; Jung, Woo-Sung

    2008-09-01

    In this study, we evaluate the relationship between efficiency and predictability in the stock market. The efficiency, which is the issue addressed by the weak-form efficient market hypothesis, is calculated using the Hurst exponent and the approximate entropy (ApEn). The predictability corresponds to the hit-rate; this is the rate of consistency between the direction of the actual price change and that of the predicted price change, as calculated via the nearest neighbor prediction method. We determine that the Hurst exponent and the ApEn value are negatively correlated. However, predictability is positively correlated with the Hurst exponent.
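
    As a rough sketch of one ingredient of the efficiency measure, the function below estimates the Hurst exponent with a simple rescaled-range (R/S) procedure; the paper's exact estimator, the approximate entropy, and the nearest-neighbour hit-rate are not reproduced here.

    ```python
    import numpy as np

    def hurst_rs(series, min_chunk=8):
        """Rough rescaled-range (R/S) estimate of the Hurst exponent."""
        series = np.asarray(series, dtype=float)
        n = len(series)
        sizes, rs_values = [], []
        size = min_chunk
        while size <= n // 2:
            rs = []
            for start in range(0, n - size + 1, size):
                chunk = series[start:start + size]
                dev = np.cumsum(chunk - chunk.mean())
                spread = dev.max() - dev.min()
                std = chunk.std(ddof=1)
                if std > 0:
                    rs.append(spread / std)
            if rs:
                sizes.append(size)
                rs_values.append(np.mean(rs))
            size *= 2
        slope, _ = np.polyfit(np.log(sizes), np.log(rs_values), 1)
        return slope

    # Uncorrelated returns should give a value near 0.5
    print(hurst_rs(np.random.default_rng(1).standard_normal(4096)))
    ```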

  15. Guided wave propagation in metallic and resin plates loaded with water on single surface

    NASA Astrophysics Data System (ADS)

    Hayashi, Takahiro; Inoue, Daisuke

    2016-02-01

    Our previous papers reported dispersion curves for leaky Lamb waves in a water-loaded plate and wave structures for several typical modes including quasi-Scholte waves [1,2]. The calculations were carried out with a semi-analytical finite element (SAFE) method developed for leaky Lamb waves. This study presents SAFE calculations for transient guided waves including time-domain waveforms and animations of wave propagation in metallic and resin water-loaded plates. The results show that non-dispersive and non-attenuated waves propagating along the interface between the fluid and the plate are expected for effective non-destructive evaluation of such fluid-loaded plates as storage tanks and transportation pipes. We calculated transient waves in both steel and polyvinyl chloride (PVC) plates loaded with water on a single side and input dynamic loading from a point source on the other water-free surface as typical examples of metallic and resin plates. For a steel plate, there exists a non-dispersive and non-attenuated mode, called the quasi-Scholte wave, having an almost identical phase velocity to that of water. The quasi-Scholte wave has superior generation efficiency in the low frequency range due to its broad energy distribution across the plate, whereas it is localized near the plate-water interface at higher frequencies. This means that it has superior detectability of inner defects. For a PVC plate, plural non-attenuated modes exist. One of the non-attenuated modes similar to the A0 mode of the Lamb wave in the form of a group velocity dispersion curve is promising for the non-destructive evaluation of the PVC plate because it provides prominent characteristics of generation efficiency and low dispersion.

  16. Statistical power calculations for mixed pharmacokinetic study designs using a population approach.

    PubMed

    Kloprogge, Frank; Simpson, Julie A; Day, Nicholas P J; White, Nicholas J; Tarning, Joel

    2014-09-01

    Simultaneous modelling of dense and sparse pharmacokinetic data is possible with a population approach. To determine the number of individuals required to detect the effect of a covariate, simulation-based power calculation methodologies can be employed. The Monte Carlo Mapped Power method (a simulation-based power calculation methodology using the likelihood ratio test) was extended in the current study to perform sample size calculations for mixed pharmacokinetic studies (i.e. both sparse and dense data collection). A workflow guiding an easy and straightforward pharmacokinetic study design, considering also the cost-effectiveness of alternative study designs, was used in this analysis. Initially, data were simulated for a hypothetical drug and then for the anti-malarial drug, dihydroartemisinin. Two datasets (sampling design A: dense; sampling design B: sparse) were simulated using a pharmacokinetic model that included a binary covariate effect and subsequently re-estimated using (1) the same model and (2) a model not including the covariate effect in NONMEM 7.2. Power calculations were performed for varying numbers of patients with sampling designs A and B. Study designs with statistical power >80% were selected and further evaluated for cost-effectiveness. The simulation studies of the hypothetical drug and the anti-malarial drug dihydroartemisinin demonstrated that the simulation-based power calculation methodology, based on the Monte Carlo Mapped Power method, can be utilised to evaluate and determine the sample size of mixed (part sparsely and part densely sampled) study designs. The developed method can contribute to the design of robust and efficient pharmacokinetic studies.
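
    The Monte Carlo logic behind such power calculations can be condensed to: simulate many datasets containing the covariate effect, fit each with and without the effect, and count how often the likelihood ratio test rejects the reduced model. The toy version below uses a simple normal model as a stand-in for the population pharmacokinetic model and NONMEM fits described above; group size, effect size, and variability are arbitrary.

    ```python
    import numpy as np
    from scipy import stats

    def _loglik(x, mu, sigma):
        return stats.norm.logpdf(x, mu, sigma).sum()

    def lrt_power(n_per_group, effect, sd, n_sim=2000, alpha=0.05, seed=0):
        """Fraction of simulated datasets in which a likelihood ratio test detects
        a binary covariate effect (toy normal model, not a population PK model)."""
        rng = np.random.default_rng(seed)
        crit = stats.chi2.ppf(1.0 - alpha, df=1)
        hits = 0
        for _ in range(n_sim):
            g0 = rng.normal(0.0, sd, n_per_group)
            g1 = rng.normal(effect, sd, n_per_group)
            pooled = np.concatenate([g0, g1])
            sigma0 = np.sqrt(np.mean((pooled - pooled.mean()) ** 2))       # null MLE
            resid1 = np.concatenate([g0 - g0.mean(), g1 - g1.mean()])
            sigma1 = np.sqrt(np.mean(resid1 ** 2))                         # alternative MLE
            ll0 = _loglik(pooled, pooled.mean(), sigma0)
            ll1 = _loglik(g0, g0.mean(), sigma1) + _loglik(g1, g1.mean(), sigma1)
            if 2.0 * (ll1 - ll0) > crit:
                hits += 1
        return hits / n_sim

    print(lrt_power(n_per_group=20, effect=0.8, sd=1.0))  # approximate statistical power
    ```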

  17. TLD efficiency calculations for heavy ions: an analytical approach

    DOE PAGES

    Boscolo, Daria; Scifoni, Emanuele; Carlino, Antonio; ...

    2015-12-18

    The use of thermoluminescent dosimeters (TLDs) in heavy charged particle dosimetry is limited by their non-linear dose response curve and by their response dependence on the radiation quality. Thus, in order to use TLDs with particle beams, a model that can reproduce the behavior of these detectors under different conditions is needed. Here a new, simple and completely analytical algorithm for the calculation of the relative TL efficiency depending on the ion charge Z and energy E is presented. In addition, the detector response is evaluated starting from the single ion case, where the computed effectiveness values have been compared with experimental data as well as with predictions from a different method. The main advantage of this approach is that, being fully analytical, it is computationally fast and can be efficiently integrated into treatment planning verification tools. The calculated efficiency values were then implemented in the treatment planning code TRiP98, and dose calculations on a macroscopic target irradiated with an extended carbon ion field were performed and verified against experimental data.

  18. How efficient are constructed wetlands in removing pharmaceuticals from untreated and treated urban wastewaters? A review.

    PubMed

    Verlicchi, Paola; Zambello, Elena

    2014-02-01

    This review presents and discusses the data from 47 peer-reviewed journal articles on the occurrence of 137 pharmaceutical compounds in the effluent from various types of constructed wetlands treating urban wastewater. We analyse the observed removal efficiencies of the investigated compounds in order to identify the type of constructed wetland that best removes those most frequently detected. The literature reviewed details experimental investigations carried out on 136 treatment plants, including free water surface systems, as well as horizontal and vertical subsurface flow beds (pilot or full-scale) acting as primary, secondary or tertiary treatments. The occurrence of selected pharmaceuticals in sediments and gravel and their uptake by common macrophytes are also presented and discussed. We analyse the main removal mechanisms for the selected compounds and investigate the influence of the main design parameters, as well as operational and environmental conditions of the treatment systems on removal efficiency. We also report on previous attempts to correlate observed removal values with the chemical structure and chemical-physical properties (mainly pKa and LogKow) of pharmaceutical compounds. We then use the literature data to calculate the average pharmaceutical mass loadings in the effluent from constructed wetlands, comparing the ability of such systems to remove selected pharmaceuticals with the corresponding conventional secondary and tertiary treatments. Finally, the environmental risk posed by pharmaceutical residues in effluents from constructed wetlands acting as secondary and tertiary treatment steps is calculated in the form of the risk quotient ratio. This approach enabled us to provide a ranking of the most critical compounds for the two scenarios, to discuss the ramifications of the adoption of constructed wetlands for removing such persistent organic compounds, and to propose avenues of future research. © 2013.
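
    The risk quotient used in the final step is the standard ratio of the measured (or predicted) environmental concentration to the predicted no-effect concentration. A one-line sketch with hypothetical concentrations follows.

    ```python
    def risk_quotient(mec_ug_per_l, pnec_ug_per_l):
        """RQ = MEC / PNEC; values above 1 are commonly flagged as high risk."""
        return mec_ug_per_l / pnec_ug_per_l

    # Hypothetical effluent concentration and PNEC for a single pharmaceutical
    print(risk_quotient(mec_ug_per_l=0.8, pnec_ug_per_l=0.1))  # -> 8.0
    ```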

  19. PROPOSAL FOR A SIMPLE AND EFFICIENT MONTHLY QUALITY MANAGEMENT PROGRAM ASSESSING THE CONSISTENCY OF ROBOTIC IMAGE-GUIDED SMALL ANIMAL RADIATION SYSTEMS

    PubMed Central

    Brodin, N. Patrik; Guha, Chandan; Tomé, Wolfgang A.

    2015-01-01

    Modern pre-clinical radiation therapy (RT) research requires high precision and accurate dosimetry to facilitate the translation of research findings into clinical practice. Several systems are available that provide precise delivery and on-board imaging capabilities, highlighting the need for a quality management program (QMP) to ensure consistent and accurate radiation dose delivery. An ongoing, simple, and efficient QMP for image-guided robotic small animal irradiators used in pre-clinical RT research is described. Protocols were developed and implemented to assess the dose output constancy (based on the AAPM TG-61 protocol), cone-beam computed tomography (CBCT) image quality and object representation accuracy (using a custom-designed imaging phantom), CBCT-guided target localization accuracy and consistency of the CBCT-based dose calculation. To facilitate an efficient read-out and limit the user dependence of the QMP data analysis, a semi-automatic image analysis and data representation program was developed using the technical computing software MATLAB. The results of the first six months experience using the suggested QMP for a Small Animal Radiation Research Platform (SARRP) are presented, with data collected on a bi-monthly basis. The dosimetric output constancy was established to be within ±1 %, the consistency of the image resolution was within ±0.2 mm, the accuracy of CBCT-guided target localization was within ±0.5 mm, and dose calculation consistency was within ±2 s (± 3 %) per treatment beam. Based on these results, this simple quality assurance program allows for the detection of inconsistencies in dosimetric or imaging parameters that are beyond the acceptable variability for a reliable and accurate pre-clinical RT system, on a monthly or bi-monthly basis. PMID:26425981

  20. Proposal for a Simple and Efficient Monthly Quality Management Program Assessing the Consistency of Robotic Image-Guided Small Animal Radiation Systems.

    PubMed

    Brodin, N Patrik; Guha, Chandan; Tomé, Wolfgang A

    2015-11-01

    Modern pre-clinical radiation therapy (RT) research requires high precision and accurate dosimetry to facilitate the translation of research findings into clinical practice. Several systems are available that provide precise delivery and on-board imaging capabilities, highlighting the need for a quality management program (QMP) to ensure consistent and accurate radiation dose delivery. An ongoing, simple, and efficient QMP for image-guided robotic small animal irradiators used in pre-clinical RT research is described. Protocols were developed and implemented to assess the dose output constancy (based on the AAPM TG-61 protocol), cone-beam computed tomography (CBCT) image quality and object representation accuracy (using a custom-designed imaging phantom), CBCT-guided target localization accuracy and consistency of the CBCT-based dose calculation. To facilitate an efficient read-out and limit the user dependence of the QMP data analysis, a semi-automatic image analysis and data representation program was developed using the technical computing software MATLAB. The results of the first 6-mo experience using the suggested QMP for a Small Animal Radiation Research Platform (SARRP) are presented, with data collected on a bi-monthly basis. The dosimetric output constancy was established to be within ±1 %, the consistency of the image resolution was within ±0.2 mm, the accuracy of CBCT-guided target localization was within ±0.5 mm, and dose calculation consistency was within ±2 s (±3%) per treatment beam. Based on these results, this simple quality assurance program allows for the detection of inconsistencies in dosimetric or imaging parameters that are beyond the acceptable variability for a reliable and accurate pre-clinical RT system, on a monthly or bi-monthly basis.

  1. Efficient full decay inversion of MRS data with a stretched-exponential approximation of the T2* distribution

    NASA Astrophysics Data System (ADS)

    Behroozmand, Ahmad A.; Auken, Esben; Fiandaca, Gianluca; Christiansen, Anders Vest; Christensen, Niels B.

    2012-08-01

    We present a new, efficient and accurate forward modelling and inversion scheme for magnetic resonance sounding (MRS) data. MRS, also called surface-nuclear magnetic resonance (surface-NMR), is the only non-invasive geophysical technique that directly detects free water in the subsurface. Based on the physical principle of NMR, protons of the water molecules in the subsurface are excited at a specific frequency, and the superposition of signals from all protons within the excited earth volume is measured to estimate the subsurface water content and other hydrological parameters. In this paper, a new inversion scheme is presented in which the entire data set is used, and multi-exponential behaviour of the NMR signal is approximated by the simple stretched-exponential approach. Compared to the mono-exponential interpretation of the decaying NMR signal, we introduce a single extra parameter, the stretching exponent, which helps describe the porosity in terms of a single relaxation time parameter, and helps to determine correct initial amplitude and relaxation time of the signal. Moreover, compared to a multi-exponential interpretation of the MRS data, the decay behaviour is approximated with considerably fewer parameters. The forward response is calculated in an efficient numerical manner in terms of magnetic field calculation, discretization and integration schemes, which allows fast computation while maintaining accuracy. A piecewise linear transmitter loop is considered for electromagnetic modelling of conductivities in the layered half-space, providing electromagnetic modelling of arbitrary loop shapes. The decaying signal is integrated over time windows, called gates, which increases the signal-to-noise ratio, particularly at late times, and the data vector is described with a minimum number of samples, that is, gates. The accuracy of the forward response is investigated by comparing an MRS forward response with responses from three other approaches, outlining significant differences between the approaches. Altogether, a full MRS forward response is calculated in about 20 s and scales so that on 10 processors the calculation time is reduced to about 3-4 s. The proposed approach is examined through synthetic data and through a field example, which demonstrate the capability of the scheme. The results of the field example agree well with information from an on-site borehole.
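
    A minimal sketch of the stretched-exponential description of the decaying signal, V(t) = V0·exp[-(t/T2*)^c] with stretching exponent c, is shown below, fitted to synthetic noisy data with SciPy; the amplitude, relaxation time, and noise level are illustrative, not field results.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def stretched_exponential(t, v0, t2_star, c):
        """V(t) = V0 * exp(-(t / T2*)**c); c is the stretching exponent."""
        return v0 * np.exp(-(t / t2_star) ** c)

    # Synthetic decay (amplitude in nV, time in s) with additive noise, then a fit
    t = np.linspace(0.01, 1.0, 200)
    rng = np.random.default_rng(3)
    signal = stretched_exponential(t, 120.0, 0.25, 0.7) + rng.normal(0.0, 2.0, t.size)
    popt, _ = curve_fit(stretched_exponential, t, signal, p0=(100.0, 0.2, 1.0))
    print("V0, T2*, c =", popt)
    ```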

  2. Hybrid dose calculation: a dose calculation algorithm for microbeam radiation therapy

    NASA Astrophysics Data System (ADS)

    Donzelli, Mattia; Bräuer-Krisch, Elke; Oelfke, Uwe; Wilkens, Jan J.; Bartzsch, Stefan

    2018-02-01

    Microbeam radiation therapy (MRT) is still a preclinical approach in radiation oncology that uses planar micrometre wide beamlets with extremely high peak doses, separated by a few hundred micrometre wide low dose regions. Abundant preclinical evidence demonstrates that MRT spares normal tissue more effectively than conventional radiation therapy, at equivalent tumour control. In order to launch first clinical trials, accurate and efficient dose calculation methods are an inevitable prerequisite. In this work a hybrid dose calculation approach is presented that is based on a combination of Monte Carlo and kernel based dose calculation. In various examples the performance of the algorithm is compared to purely Monte Carlo and purely kernel based dose calculations. The accuracy of the developed algorithm is comparable to conventional pure Monte Carlo calculations. In particular for inhomogeneous materials the hybrid dose calculation algorithm out-performs purely convolution based dose calculation approaches. It is demonstrated that the hybrid algorithm can efficiently calculate even complicated pencil beam and cross firing beam geometries. The required calculation times are substantially lower than for pure Monte Carlo calculations.

  3. An efficient method for hybrid density functional calculation with spin-orbit coupling

    NASA Astrophysics Data System (ADS)

    Wang, Maoyuan; Liu, Gui-Bin; Guo, Hong; Yao, Yugui

    2018-03-01

    In first-principles calculations, hybrid functionals are often used to improve accuracy over local exchange-correlation functionals. A drawback is that evaluating the hybrid functional needs significantly more computing effort. When spin-orbit coupling (SOC) is taken into account, the non-collinear spin structure increases computing effort by at least eight times. As a result, hybrid functional calculations with SOC are intractable in most cases. In this paper, we present an approximate solution to this problem by developing an efficient method based on a mixed linear combination of atomic orbitals (LCAO) scheme. We demonstrate the power of this method using several examples and we show that the results compare very well with those of direct hybrid functional calculations with SOC, yet the method only requires a computing effort similar to that without SOC. The presented technique provides a good balance between computing efficiency and accuracy, and it can be extended to magnetic materials.

  4. Stress drop with constant, scale independent seismic efficiency and overshoot

    USGS Publications Warehouse

    Beeler, N.M.

    2001-01-01

    To model dissipated and radiated energy during earthquake stress drop, I calculate dynamic fault slip using a single degree of freedom spring-slider block and a laboratory-based static/kinetic fault strength relation with a dynamic stress drop proportional to effective normal stress. The model is scaled to earthquake size assuming a circular rupture; stiffness varies inversely with rupture radius, and rupture duration is proportional to radius. Calculated seismic efficiency, the ratio of radiated to total energy expended during stress drop, is in good agreement with laboratory and field observations. Predicted overshoot, a measure of how much the static stress drop exceeds the dynamic stress drop, is higher than previously published laboratory and seismic observations and fully elasto-dynamic calculations. Seismic efficiency and overshoot are constant, independent of normal stress and scale. Calculated variation of apparent stress with seismic moment resembles the observational constraints of McGarr [1999].

  5. Identification of four squid species by quantitative real-time polymerase chain reaction.

    PubMed

    Ye, Jian; Feng, Junli; Liu, Shasha; Zhang, Yanping; Jiang, Xiaona; Dai, Zhiyuan

    2016-02-01

    Squids are distributed worldwide, including many species of commercial importance, and they are often processed into a variety of flavored foods. Rapid identification methods for squid species, especially their processed products, however, have not been well developed. In this study, quantitative real-time PCR (qPCR) systems based on specific primers and TaqMan probes have been established for rapid and accurate identification of four common squid species (Ommastrephes bartramii, Dosidicus gigas, Illex argentinus, Todarodes pacificus) in the Chinese domestic market. After analyzing mitochondrial genes reported in GenBank, the mitochondrial cytochrome b (Cytb) gene was selected for O. bartramii detection, the cytochrome c oxidase subunit I (COI) gene for D. gigas and T. pacificus detection, the ATPase subunit 6 (ATPase 6) gene for I. argentinus detection, and the 12S ribosomal RNA (12S rDNA) gene for designing Ommastrephidae-specific primers and probe. As a result, all the TaqMan systems performed well, and the efficiency of each reaction was calculated from standard curves. This method can detect the target species in either single or mixed squid specimens, and it was successfully applied to identify 12 processed squid products. Thus, it would play an important role in fulfilling labeling regulations and squid fishery control. Copyright © 2016 Elsevier Ltd. All rights reserved.
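
    The reaction efficiency mentioned above is conventionally derived from the slope of a standard curve (Cq versus log10 template amount) via E = 10^(-1/slope) - 1; a minimal sketch with made-up dilution data (not the published assay values):

        import numpy as np

        # Hypothetical 10-fold dilution series: template copy numbers and measured Cq values
        copies = np.array([1e6, 1e5, 1e4, 1e3, 1e2])
        cq = np.array([15.1, 18.5, 21.8, 25.2, 28.6])

        slope, intercept = np.polyfit(np.log10(copies), cq, 1)
        efficiency = 10 ** (-1.0 / slope) - 1.0        # 1.0 corresponds to 100% (perfect doubling)
        print(f"slope = {slope:.2f}, efficiency = {100 * efficiency:.1f}%")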

  6. Comparing the utility of image algebra operations for characterizing landscape changes: the case of the Mediterranean coast.

    PubMed

    Alphan, Hakan

    2011-11-01

    The aim of this study is to compare various image algebra procedures for their efficiency in locating and identifying different types of landscape changes on the margin of a Mediterranean coastal plain, Cukurova, Turkey. Image differencing and ratioing were applied to the reflective bands of Landsat TM datasets acquired in 1984 and 2006. Normalized Difference Vegetation Index (NDVI) and Principal Component Analysis (PCA) differencing were also applied. The resulting images were tested for their capacity to detect nine change phenomena, which were a priori defined in a three-level classification scheme. These change phenomena included agricultural encroachment, sand dune afforestation, coastline changes and removal/expansion of reed beds. The percentage overall accuracies of the different algebra products for each phenomenon were calculated and compared. The results showed that some of the changes, such as sand dune afforestation and reed bed expansion, were detected with accuracies varying between 85 and 97% by the majority of the algebra operations, while some other changes, such as logging, could only be detected by mid-infrared (MIR) ratioing. For optimizing change detection in similar coastal landscapes, the underlying causes of these changes were discussed and guidelines for selecting bands and algebra operations were provided. Copyright © 2011 Elsevier Ltd. All rights reserved.
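
    NDVI differencing of the kind compared above reduces to simple pixel algebra on two dates; a small sketch with synthetic reflectance arrays (the band values and change threshold are assumptions, not the study's settings):

        import numpy as np

        def ndvi(nir, red):
            return (nir - red) / (nir + red + 1e-9)    # small epsilon avoids division by zero

        rng = np.random.default_rng(0)                 # synthetic stand-ins for the two Landsat dates
        red_1984, nir_1984 = rng.uniform(0.05, 0.3, (100, 100)), rng.uniform(0.2, 0.6, (100, 100))
        red_2006, nir_2006 = rng.uniform(0.05, 0.3, (100, 100)), rng.uniform(0.2, 0.6, (100, 100))

        d_ndvi = ndvi(nir_2006, red_2006) - ndvi(nir_1984, red_1984)
        change = np.abs(d_ndvi) > 0.2                  # threshold would be tuned per change class
        print("changed pixels:", int(change.sum()))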

  7. Effect of steam addition on cycle performance of simple and recuperated gas turbines

    NASA Technical Reports Server (NTRS)

    Boyle, R. J.

    1979-01-01

    Results are presented for the cycle efficiency and specific power of simple and recuperated gas turbine cycles in which steam is generated and used to increase turbine flow. Calculations showed significant improvements in cycle efficiency and specific power by adding steam. The calculations were made using component efficiencies and loss assumptions typical of stationary powerplants. These results are presented for a range of operating temperatures and pressures. Relative heat exchanger size and the water use rate are also examined.

  8. Tissue or blood: which is more suitable for detection of EGFR mutations in non-small cell lung cancer?

    PubMed

    Biaoxue, Rong; Shuanying, Yang

    2018-01-01

    Many studies have evaluated the accuracy of EGFR mutation status in blood against that in tumor tissues as the reference. We conducted this systematic review and meta-analysis to assess whether blood can be used as a substitute for tumor tissue in detecting EGFR mutations. Investigations that provided data on EGFR mutation status in blood were searched in the databases of Medline, Embase, Ovid Technologies and Web of Science. The detection efficiency of EGFR mutations in paired blood and tissues was compared using a random-effects model of meta-analysis. Pooled sensitivity, specificity and diagnostic accuracy were calculated using receiver operating characteristic curves. A total of 19 studies with 2,922 individuals were involved in this meta-analysis. The pooled results showed that the positive detection rate of EGFR mutations in lung cancer tissues was remarkably higher than that of paired blood samples (odds ratio [OR] = 1.47, p < 0.001). The pooled sensitivity and specificity of blood were 0.65 and 0.91, respectively, and the area under the receiver operating characteristic curve was 0.89. Although blood had a better specificity for detecting EGFR mutations, the absence of blood positivity should not necessarily be construed as confirmed negativity. Patients with negative blood results should therefore undergo further biopsies to ascertain EGFR mutation status.
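
    For a single study, the quantities pooled above follow from a paired two-by-two table with tissue as the reference; a minimal sketch with made-up counts (an actual meta-analysis would pool many such tables with a random-effects model, as the abstract describes):

        # Hypothetical paired results for one study: tissue EGFR status is the reference standard
        tp, fp, fn, tn = 40, 5, 20, 85                 # made-up blood-vs-tissue counts

        sensitivity = tp / (tp + fn)
        specificity = tn / (tn + fp)
        diagnostic_or = (tp * tn) / (fp * fn)          # diagnostic odds ratio for this single study
        print(f"sens = {sensitivity:.2f}, spec = {specificity:.2f}, DOR = {diagnostic_or:.1f}")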

  9. Computer-aided detection of renal calculi from noncontrast CT images using TV-flow and MSER features

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Jianfei; Wang, Shijun; Turkbey, Evrim B.

    Purpose: Renal calculi are common extracolonic incidental findings on computed tomographic colonography (CTC). This work aims to develop a fully automated computer-aided diagnosis system to accurately detect renal calculi on CTC images. Methods: The authors developed a total variation (TV) flow method to reduce image noise within the kidneys while maintaining the characteristic appearance of renal calculi. Maximally stable extremal region (MSER) features were then calculated to robustly identify calculi candidates. Finally, the authors computed texture and shape features that were imported to support vector machines for calculus classification. The method was validated on a dataset of 192 patients and compared to a baseline approach that detects calculi by thresholding. The authors also compared their method with the detection approaches using anisotropic diffusion and nonsmoothing. Results: At a false positive rate of 8 per patient, the sensitivities of the new method and the baseline thresholding approach were 69% and 35% (p < 1e-3) on all calculi from 1 to 433 mm³ in the testing dataset. The sensitivities of the detection methods using anisotropic diffusion and nonsmoothing were 36% and 0%, respectively. The sensitivity of the new method increased to 90% if only larger and more clinically relevant calculi were considered. Conclusions: Experimental results demonstrated that TV-flow and MSER features are efficient means to robustly and accurately detect renal calculi on low-dose, high noise CTC images. Thus, the proposed method can potentially improve diagnosis.

  10. Retrieval of spheroid particle size distribution from spectral extinction data in the independent mode using PCA approach

    NASA Astrophysics Data System (ADS)

    Tang, Hong; Lin, Jian-Zhong

    2013-01-01

    First, an improved anomalous diffraction approximation (ADA) method is presented for calculating the extinction efficiency of spheroids. In this approach, the extinction efficiency of spheroid particles can be calculated with good accuracy and high efficiency in a wider size range by combining the Latimer method and the ADA theory, and this method provides a more general expression for calculating the extinction efficiency of spheroid particles with various complex refractive indices and aspect ratios. Meanwhile, the visible spectral extinction with varied spheroid particle size distributions and complex refractive indices is surveyed. Furthermore, a selection principle for the spectral extinction data is developed based on PCA (principal component analysis) of the first-derivative spectral extinction. By calculating the contribution rate of the first-derivative spectral extinction, the spectral extinction data with more significant features can be selected as the input data, and those with fewer features are removed from the inversion data. In addition, we propose an improved Tikhonov iteration method to retrieve the spheroid particle size distributions in the independent mode. Simulation experiments indicate that the spheroid particle size distributions obtained with the proposed method coincide fairly well with the given distributions, and this inversion method provides a simple, reliable and efficient way to retrieve the spheroid particle size distributions from the spectral extinction data.
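
    The contribution rate used above for selecting spectral extinction data can be illustrated generically as the explained-variance fractions of a PCA on first-derivative spectra; a sketch with synthetic spectra (not the paper's simulated extinction data or its selection rule):

        import numpy as np

        # Synthetic stand-in for simulated extinction spectra (rows: cases, columns: wavelengths)
        rng = np.random.default_rng(1)
        spectra = rng.normal(1.0, 0.1, (50, 200)).cumsum(axis=1) / 200.0

        deriv = np.gradient(spectra, axis=1)           # first-derivative spectra
        centered = deriv - deriv.mean(axis=0)
        _, s, _ = np.linalg.svd(centered, full_matrices=False)

        contribution = s**2 / np.sum(s**2)             # variance contribution rate of each component
        print("first three contribution rates:", np.round(contribution[:3], 3))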

  11. Evaluation of the Bitterness of Traditional Chinese Medicines using an E-Tongue Coupled with a Robust Partial Least Squares Regression Method.

    PubMed

    Lin, Zhaozhou; Zhang, Qiao; Liu, Ruixin; Gao, Xiaojie; Zhang, Lu; Kang, Bingya; Shi, Junhan; Wu, Zidan; Gui, Xinjing; Li, Xuelin

    2016-01-25

    To accurately, safely, and efficiently evaluate the bitterness of Traditional Chinese Medicines (TCMs), a robust predictor was developed using a robust partial least squares (RPLS) regression method based on data obtained from an electronic tongue (e-tongue) system. The data quality was verified by Grubbs' test. Moreover, potential outliers were detected based on both the standardized residual and the score distance calculated for each sample. The performance of RPLS on the dataset before and after outlier detection was compared to other state-of-the-art methods including multivariate linear regression, least squares support vector machine, and the plain partial least squares regression. Both R² and the root-mean-square error (RMSE) of cross-validation (CV) were recorded for each model. With four latent variables, a robust RMSECV value of 0.3916, with bitterness values ranging from 0.63 to 4.78, was obtained for the RPLS model that was constructed based on the dataset including outliers. Meanwhile, the RMSECV, which was calculated using the models constructed by other methods, was larger than that of the RPLS model. After six outliers were excluded, the performance of all benchmark methods markedly improved, but the difference between the RPLS models constructed before and after outlier exclusion was negligible. In conclusion, the bitterness of TCM decoctions can be accurately evaluated with the RPLS model constructed using e-tongue data.
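
    A cross-validated RMSE of the kind reported above can be sketched with an ordinary (non-robust) PLS model standing in for RPLS; the sensor responses and bitterness scores below are synthetic assumptions, not the e-tongue data:

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import cross_val_predict

        rng = np.random.default_rng(2)
        X = rng.normal(size=(40, 7))                                  # stand-in for 7 sensor responses
        y = X @ rng.normal(size=7) + rng.normal(scale=0.3, size=40)   # stand-in bitterness scores

        pls = PLSRegression(n_components=4)            # four latent variables, as in the abstract
        y_cv = cross_val_predict(pls, X, y, cv=10)
        rmsecv = np.sqrt(np.mean((y - y_cv.ravel()) ** 2))
        print(f"RMSECV = {rmsecv:.3f}")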

  12. A novel community detection method in bipartite networks

    NASA Astrophysics Data System (ADS)

    Zhou, Cangqi; Feng, Liang; Zhao, Qianchuan

    2018-02-01

    Community structure is a common and important feature in many complex networks, including bipartite networks, which are used as a standard model for many empirical networks comprised of two types of nodes. In this paper, we propose a two-stage method for detecting community structure in bipartite networks. Firstly, we extend the widely-used Louvain algorithm to bipartite networks. The effectiveness and efficiency of the Louvain algorithm have been proved by many applications. However, a Louvain-like algorithm specially modified for bipartite networks has been lacking. Based on bipartite modularity, a measure that extends unipartite modularity and that quantifies the strength of partitions in bipartite networks, we fill the gap by developing the Bi-Louvain algorithm, which iteratively groups the nodes in each part in turn. This algorithm in bipartite networks often produces a balanced network structure with equal numbers of the two types of nodes. Secondly, for the balanced network yielded by the first algorithm, we use an agglomerative clustering method to further cluster the network. We demonstrate that the calculation of the gain of modularity of each aggregation, and the operation of joining two communities, can be compactly carried out by matrix operations for all pairs of communities simultaneously. Finally, a complete hierarchical community structure is obtained. We apply our method to two benchmark data sets and a large-scale data set from an e-commerce company, showing that it effectively identifies community structure in bipartite networks.
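
    One common form of the bipartite modularity referred to above is Barber's definition on the biadjacency matrix; a minimal sketch (the toy network and its partition are assumptions for illustration, not the paper's benchmarks):

        import numpy as np

        def bipartite_modularity(B, labels_rows, labels_cols):
            """Barber-style bipartite modularity of a partition, given the biadjacency matrix B."""
            m = B.sum()                                # total number of edges
            k_row = B.sum(axis=1)                      # degrees of the first node type
            k_col = B.sum(axis=0)                      # degrees of the second node type
            expected = np.outer(k_row, k_col) / m      # null-model expectation for each pair
            same = np.equal.outer(labels_rows, labels_cols)
            return float(((B - expected) * same).sum() / m)

        # Toy biadjacency matrix with two obvious communities
        B = np.array([[1, 1, 0, 0],
                      [1, 1, 0, 0],
                      [0, 0, 1, 1],
                      [0, 0, 1, 1]])
        print(bipartite_modularity(B, [0, 0, 1, 1], [0, 0, 1, 1]))   # 0.5 for this partition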

  13. Licit and illicit drugs in a wastewater treatment plant in Verona, Italy.

    PubMed

    Repice, Carla; Dal Grande, Mario; Maggi, Roberto; Pedrazzani, Roberta

    2013-10-01

    The occurrence of 12 active substances among licit and illicit drugs was investigated over a 2-week period in the influent and effluent of an activated sludge wastewater treatment plant in the city of Verona, Northern Italy. Chemical analyses were performed by means of on-line solid phase extraction coupled to high performance liquid chromatography-tandem mass spectrometry in order to minimize sample pre-treatment. Quantifiable concentrations, up to hundreds of ng/L, were detected in the influent and in the effluent only for carbamazepine, codeine and benzoylecgonine. Such values are in accordance with literature data, as are the removal efficiencies: practically no abatement was observed for carbamazepine, while average removal percentages of about 60% and 90% were calculated for codeine and benzoylecgonine, respectively. These results provide useful information (also concerning some active principles never or rarely detected up to now, such as lormetazepam) for integrated water cycle management, also taking into account the specific characteristics of the receiving water basin. Copyright © 2013 Elsevier B.V. All rights reserved.
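
    The removal percentages quoted above follow directly from influent and effluent concentrations; a trivial sketch with hypothetical values (not the measured Verona data):

        def removal_percent(c_in, c_out):
            """Removal efficiency from influent and effluent concentrations (same units, e.g. ng/L)."""
            return 100.0 * (c_in - c_out) / c_in

        print(removal_percent(500.0, 200.0))           # codeine-like case, ~60%
        print(removal_percent(800.0, 80.0))            # benzoylecgonine-like case, ~90%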

  14. MEASUREMENTS OF NEUTRON SPECTRA IN 0.8-GEV AND 1.6-GEV PROTON-IRRADIATED W AND NA THICK TARGETS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Titarenko, Y. E.; Batyaev, V. F.; Zhivun, V. M.

    2001-01-01

    Measurements of neutron spectra in W and Na targets irradiated by 0.8 GeV and 1.6 GeV protons are presented. Measurements were made by the TOF technique using the proton beam from the ITEP U-10 synchrotron. Neutrons were detected with BICRON-511 liquid scintillator-based detectors. The neutron detection efficiency was calculated via the SCINFUL and CECIL codes. The W results are compared with similar data obtained elsewhere. The measured neutron spectra are compared with LAHET and CEM2k code simulation results. An attempt is made to explain some observed disagreements between experiments and simulations. The presented results are of interest both in terms of nuclear data buildup and as a benchmark of the up-to-date predictive power of the simulation codes used in designing hybrid accelerator-driven system (ADS) facilities with sodium-cooled tungsten targets.

  15. First-principles study of a MXene terahertz detector.

    PubMed

    Jhon, Y I; Seo, M; Jhon, Y M

    2017-12-21

    2D transition metal carbides, nitrides, and carbonitrides called MXenes have attracted increasing attention due to their outstanding properties in many fields. By performing systematic density functional theory calculations, here we show that MXenes can serve as excellent terahertz detecting materials. Giant optical absorption and extinction coefficients are observed in the terahertz range in the most popular MXene, Ti3C2, regardless of the stacking degree. Various other optical properties have been investigated as well in the terahertz range for an in-depth understanding of its optical response. We find that the thermoelectric figure of merit (ZT) of stacked Ti3C2 flakes is comparable to that of carbon nanotube films. Based on the excellent terahertz absorption and decent thermoelectric efficiency of MXenes, we finally suggest the promise of MXenes in terahertz detection applications, including terahertz bolometers and photothermoelectric detectors. Possible ZT improvements are discussed in large-scale MXene flake films and/or MXene-polymer composite films.
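
    The thermoelectric figure of merit mentioned above is ZT = S^2 * sigma * T / kappa; a one-line sketch with purely illustrative numbers (not the computed Ti3C2 values):

        def figure_of_merit(seebeck, sigma, kappa, temperature):
            """Thermoelectric figure of merit ZT = S^2 * sigma * T / kappa (SI units)."""
            return seebeck ** 2 * sigma * temperature / kappa

        # Hypothetical values: S = 100 uV/K, sigma = 2e5 S/m, kappa = 10 W/(m K), T = 300 K
        print(figure_of_merit(seebeck=100e-6, sigma=2e5, kappa=10.0, temperature=300.0))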

  16. A simple, remote, video based breathing monitor.

    PubMed

    Regev, Nir; Wulich, Dov

    2017-07-01

    Breathing monitors have become the all-important cornerstone of a wide variety of commercial and personal safety applications, ranging from elderly care to baby monitoring. Many such monitors exist in the market, some, with vital signs monitoring capabilities, but none remote. This paper presents a simple, yet efficient, real time method of extracting the subject's breathing sinus rhythm. Points of interest are detected on the subject's body, and the corresponding optical flow is estimated and tracked using the well known Lucas-Kanade algorithm on a frame by frame basis. A generalized likelihood ratio test is then utilized on each of the many interest points to detect which is moving in harmonic fashion. Finally, a spectral estimation algorithm based on Pisarenko harmonic decomposition tracks the harmonic frequency in real time, and a fusion maximum likelihood algorithm optimally estimates the breathing rate using all points considered. The results show a maximal error of 1 BPM between the true breathing rate and the algorithm's calculated rate, based on experiments on two babies and three adults.
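
    Pisarenko harmonic decomposition for a single dominant tone, as used above for tracking the breathing frequency, can be sketched as follows (the frame rate, test signal and noise level are assumptions, not the paper's data):

        import numpy as np

        def pisarenko_freq(x, fs):
            """Estimate the frequency of a single real sinusoid via Pisarenko's method."""
            x = np.asarray(x, dtype=float) - np.mean(x)
            # Biased autocorrelation estimates r(0), r(1), r(2)
            r = np.array([np.dot(x[:len(x) - k], x[k:]) / len(x) for k in range(3)])
            R = np.array([[r[0], r[1], r[2]],
                          [r[1], r[0], r[1]],
                          [r[2], r[1], r[0]]])
            w, v = np.linalg.eigh(R)
            a = v[:, 0]                                # eigenvector of the smallest eigenvalue
            roots = np.roots(a)                        # zeros lie (ideally) on the unit circle
            return abs(np.angle(roots[0])) * fs / (2 * np.pi)

        fs = 30.0                                      # hypothetical camera frame rate (Hz)
        t = np.arange(0, 60, 1 / fs)
        x = np.sin(2 * np.pi * 0.4 * t) + 0.1 * np.random.default_rng(3).normal(size=t.size)
        print(f"{pisarenko_freq(x, fs) * 60:.1f} breaths per minute")   # close to 24 BPM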

  17. Gamma-ray pulsars: Emission zones and viewing geometries

    NASA Technical Reports Server (NTRS)

    Romani, Roger W.; Yadigaroglu, I.-A.

    1995-01-01

    There are now a half-dozen young pulsars detected in high-energy photons by the Compton Gamma-Ray Observatory (CGRO), showing a variety of emission efficiencies and pulse profiles. We present here a calculation of the pattern of high-energy emission on the sky in a model which posits gamma-ray production by charge-depleted gaps in the outer magnetosphere. This model accounts for the radio to gamma-ray pulse offsets of the known pulsars, as well as the shape of the high-energy pulse profiles. We also show that about one-third of emitting young radio pulsars will not be detected due to beaming effects, while approximately 2.5 times the number of radio-selected gamma-ray pulsars will be viewed only at high energies. Finally, we compute the polarization angle variation and find that the previously misunderstood optical polarization sweep of the Crab pulsar arises naturally in this picture. These results strongly support an outer magnetosphere location for the gamma-ray emission.

  18. In vitro evaluation of heat and moisture exchangers designed for spontaneously breathing tracheostomized patients.

    PubMed

    Brusasco, Claudia; Corradi, Francesco; Vargas, Maria; Bona, Margherita; Bruno, Federica; Marsili, Maria; Simonassi, Francesca; Santori, Gregorio; Severgnini, Paolo; Kacmarek, Robert M; Pelosi, Paolo

    2013-11-01

    Heat and moisture exchangers (HMEs) are commonly used in chronically tracheostomized spontaneously breathing patients, to condition inhaled air, maintain lower airway function, and minimize the viscosity of secretions. Supplemental oxygen (O2) can be added to most HMEs designed for spontaneously breathing tracheostomized patients. We tested the efficiency of 7 HMEs designed for spontaneously breathing tracheostomized patients, in a normothermic model, at different minute ventilations (VE) and supplemental O2 flows. HME efficiency was evaluated using an in vitro lung model at 2 VE (5 and 15 L/min) and 4 supplemental O2 flows (0, 3, 6, and 12 L/min). Wet and dry temperatures of the inspiratory flow were measured, and absolute humidity was calculated. In addition, HME efficiency at 0, 12, and 24 h of use was evaluated, as well as resistance to flow at 0 and 24 h. The progressive increase in O2 flow from 0 to 12 L/min was associated with a reduction in temperature and absolute humidity. Under the same conditions, this effect was greater at lower VE. The HME with the best performance provided an absolute humidity of 26 mg H2O/L and a temperature of 27.8 °C. No significant changes in efficiency or resistance were detected during the 24 h evaluation. The efficiency of HMEs in terms of temperature and absolute humidity is significantly affected by O2 supplementation and VE.
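
    One standard route from wet- and dry-bulb temperatures to absolute humidity combines a Magnus saturation curve with the Sprung psychrometric formula; the sketch below uses that textbook route with hypothetical readings and is not necessarily the calculation used by the authors:

        import math

        def saturation_vp_hpa(t_c):
            """Saturation vapour pressure (hPa) from the Magnus approximation."""
            return 6.112 * math.exp(17.62 * t_c / (243.12 + t_c))

        def absolute_humidity(t_dry_c, t_wet_c, pressure_hpa=1013.25):
            """Absolute humidity (mg H2O/L) from dry- and wet-bulb temperatures (Sprung formula)."""
            e = saturation_vp_hpa(t_wet_c) - 0.00066 * pressure_hpa * (t_dry_c - t_wet_c)
            return 216.7 * e / (273.15 + t_dry_c)      # water vapour density in g/m^3 == mg/L

        print(f"{absolute_humidity(27.8, 26.5):.1f} mg H2O/L")   # hypothetical reading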

  19. Calculating Higher-Order Moments of Phylogenetic Stochastic Mapping Summaries in Linear Time.

    PubMed

    Dhar, Amrit; Minin, Vladimir N

    2017-05-01

    Stochastic mapping is a simulation-based method for probabilistically mapping substitution histories onto phylogenies according to continuous-time Markov models of evolution. This technique can be used to infer properties of the evolutionary process on the phylogeny and, unlike parsimony-based mapping, conditions on the observed data to randomly draw substitution mappings that do not necessarily require the minimum number of events on a tree. Most stochastic mapping applications simulate substitution mappings only to estimate the mean and/or variance of two commonly used mapping summaries: the number of particular types of substitutions (labeled substitution counts) and the time spent in a particular group of states (labeled dwelling times) on the tree. Fast, simulation-free algorithms for calculating the mean of stochastic mapping summaries exist. Importantly, these algorithms scale linearly in the number of tips/leaves of the phylogenetic tree. However, to our knowledge, no such algorithm exists for calculating higher-order moments of stochastic mapping summaries. We present one such simulation-free dynamic programming algorithm that calculates prior and posterior mapping variances and scales linearly in the number of phylogeny tips. Our procedure suggests a general framework that can be used to efficiently compute higher-order moments of stochastic mapping summaries without simulations. We demonstrate the usefulness of our algorithm by extending previously developed statistical tests for rate variation across sites and for detecting evolutionarily conserved regions in genomic sequences.

  20. Calculating Higher-Order Moments of Phylogenetic Stochastic Mapping Summaries in Linear Time

    PubMed Central

    Dhar, Amrit

    2017-01-01

    Abstract Stochastic mapping is a simulation-based method for probabilistically mapping substitution histories onto phylogenies according to continuous-time Markov models of evolution. This technique can be used to infer properties of the evolutionary process on the phylogeny and, unlike parsimony-based mapping, conditions on the observed data to randomly draw substitution mappings that do not necessarily require the minimum number of events on a tree. Most stochastic mapping applications simulate substitution mappings only to estimate the mean and/or variance of two commonly used mapping summaries: the number of particular types of substitutions (labeled substitution counts) and the time spent in a particular group of states (labeled dwelling times) on the tree. Fast, simulation-free algorithms for calculating the mean of stochastic mapping summaries exist. Importantly, these algorithms scale linearly in the number of tips/leaves of the phylogenetic tree. However, to our knowledge, no such algorithm exists for calculating higher-order moments of stochastic mapping summaries. We present one such simulation-free dynamic programming algorithm that calculates prior and posterior mapping variances and scales linearly in the number of phylogeny tips. Our procedure suggests a general framework that can be used to efficiently compute higher-order moments of stochastic mapping summaries without simulations. We demonstrate the usefulness of our algorithm by extending previously developed statistical tests for rate variation across sites and for detecting evolutionarily conserved regions in genomic sequences. PMID:28177780

  1. The Frequency of Snowline-Region Planets from Four Years of OGLE-MOA-Wise Second-Generation Microlensing

    NASA Technical Reports Server (NTRS)

    Shvartzvald, Y.; Maoz, D.; Udalski, A.; Sumi, T.; Friedmann, M.; Kaspi, S.; Poleski, R.; Szymanski, M. K.; Skowron, J.; Kozlowski, S.

    2016-01-01

    We present a statistical analysis of the first four seasons from a second-generation microlensing survey for extrasolar planets, consisting of near-continuous time coverage of 8 deg² of the Galactic bulge by the Optical Gravitational Lens Experiment (OGLE), Microlensing Observations in Astrophysics (MOA), and Wise microlensing surveys. During this period, 224 microlensing events were observed by all three groups. Over 12% of the events showed a deviation from single-lens microlensing, and for approximately one-third of those the anomaly is likely caused by a planetary companion. For each of the 224 events, we have performed numerical ray-tracing simulations to calculate the detection efficiency of possible companions as a function of companion-to-host mass ratio and separation. Accounting for the detection efficiency, we find that 55 (+34/-22)% of microlensed stars host a snowline planet. Moreover, we find that Neptune-mass planets are approximately 10 times more common than Jupiter-mass planets. The companion-to-host mass-ratio distribution shows a deficit at q ≈ 10^-2, separating the distribution into two companion populations, analogous to the stellar-companion and planet populations seen in radial-velocity surveys around solar-like stars. Our survey, however, which probes mainly lower mass stars, suggests a minimum in the distribution in the super-Jupiter mass range, and a relatively high occurrence of brown-dwarf companions.

  2. A High-Sensitivity Potentiometric 65-nm CMOS ISFET Sensor for Rapid E. coli Screening.

    PubMed

    Jiang, Yu; Liu, Xu; Dang, Tran Chien; Huang, Xiwei; Feng, Hao; Zhang, Qing; Yu, Hao

    2018-04-01

    Foodborne bacteria, inducing outbreaks of infection or poisoning, have posed great threats to food safety. Potentiometric sensors can identify bacteria levels in food by measuring the medium's pH changes. However, most of these sensors face the limitation of low sensitivity and high cost. In this paper, we developed a high-sensitivity ion-sensitive field-effect transistor sensor. It is small-sized, cost-efficient, and can be mass-fabricated in a standard 65-nm complementary metal-oxide-semiconductor process. A subthreshold pH-to-time-to-voltage conversion scheme was proposed to improve the sensitivity. Furthermore, design parameters, such as the chemical sensing area, transistor size, and discharging time, were optimized to enhance the performance. The intrinsic sensitivity of the passivation membrane was calculated as 33.2 mV/pH. It was amplified to 123.8 mV/pH with a 0.01-pH resolution, which greatly exceeds the 6.3 mV/pH observed in a traditional source-follower based readout structure. The sensing system was applied to Escherichia coli (E. coli) detection with densities ranging from 14 to 140 cfu/mL. Compared to the conventional direct plate counting method (24 h), a sixfold shorter screening time (4 h) was achieved to differentiate the samples' E. coli levels. The demonstrated portable, time-saving, and low-cost prescreening system has great potential for food safety detection.

  3. Reprint of "Two-stage sparse coding of region covariance via Log-Euclidean kernels to detect saliency".

    PubMed

    Zhang, Ying-Ying; Yang, Cai; Zhang, Ping

    2017-08-01

    In this paper, we present a novel bottom-up saliency detection algorithm from the perspective of covariance matrices on a Riemannian manifold. Each superpixel is described by a region covariance matrix on the Riemannian manifold. We carry out a two-stage sparse coding scheme via Log-Euclidean kernels to extract salient objects efficiently. In the first stage, given a background dictionary on the image borders, sparse coding of each region covariance via Log-Euclidean kernels is performed. The reconstruction error on the background dictionary is regarded as the initial saliency of each superpixel. In the second stage, an improvement of the initial result is achieved by calculating reconstruction errors of the superpixels on a foreground dictionary, which is extracted from the first-stage saliency map. The sparse coding in the second stage is similar to the first stage, but is able to effectively highlight the salient objects uniformly from the background. Finally, three post-processing methods (a highlight-inhibition function, context-based saliency weighting, and the graph cut) are adopted to further refine the saliency map. Experiments on four public benchmark datasets show that the proposed algorithm outperforms the state-of-the-art methods in terms of precision, recall and mean absolute error, and demonstrate the robustness and efficiency of the proposed method. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. Traveling salesman problems with PageRank Distance on complex networks reveal community structure

    NASA Astrophysics Data System (ADS)

    Jiang, Zhongzhou; Liu, Jing; Wang, Shuai

    2016-12-01

    In this paper, we propose a new algorithm for community detection problems (CDPs) based on traveling salesman problems (TSPs), labeled as TSP-CDA. Since TSPs need to find a tour with minimum cost, cities close to each other are usually clustered in the tour. This inspired us to model CDPs as TSPs by taking each vertex as a city. Then, in the final tour, the vertices in the same community tend to cluster together, and the community structure can be obtained by cutting the tour into a couple of paths. There are two challenges. The first is to define a suitable distance between each pair of vertices which can reflect the probability that they belong to the same community. The second is to design a suitable strategy to cut the final tour into paths which can form communities. In TSP-CDA, we deal with these two challenges by defining a PageRank Distance and an automatic threshold-based cutting strategy. The PageRank Distance is designed with the intrinsic properties of CDPs in mind, and can be calculated efficiently. In the experiments, benchmark networks with 1000-10,000 nodes and varying structures are used to test the performance of TSP-CDA. A comparison is also made between TSP-CDA and two well-established community detection algorithms. The results show that TSP-CDA can find accurate community structure efficiently and outperforms the two existing algorithms.

  5. pySeismicFMM: Python based Travel Time Calculation in Regular 2D and 3D Grids in Cartesian and Geographic Coordinates using Fast Marching Method

    NASA Astrophysics Data System (ADS)

    Wilde-Piorko, M.; Polkowski, M.

    2016-12-01

    Seismic wave travel time calculation is the most common numerical operation in seismology. The most efficient approach is travel time calculation in a 1D velocity model: for given source and receiver depths and angular distance, the time is calculated within a fraction of a second. Unfortunately, in most cases 1D is not enough to account for differentiated local and regional structures. Whenever possible, travel times through a 3D velocity model have to be calculated. This can be achieved using ray calculation or time propagation in space. While a single ray path calculation is quick, it is complicated to find the ray path that connects the source with the receiver. Time propagation in space using the Fast Marching Method seems more efficient in most cases, especially when there are multiple receivers. In this presentation the final release of a Python module, pySeismicFMM, is presented: a simple and very efficient tool for calculating travel time from sources to receivers. The calculation requires a regular 2D or 3D velocity grid either in Cartesian or geographic coordinates. On a desktop class computer the calculation speed is 200k grid cells per second. The calculation has to be performed once for every source location and provides travel time to all receivers. pySeismicFMM is free and open source. Development of this tool is a part of the author's PhD thesis. The source code of pySeismicFMM will be published before the Fall Meeting. The National Science Centre Poland provided financial support for this work via NCN grant DEC-2011/02/A/ST10/00284.
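
    A fast-marching travel-time calculation of the kind described can be sketched with the openly available scikit-fmm package (not pySeismicFMM itself); the grid, velocity model and source position below are assumptions:

        import numpy as np
        import skfmm                                   # scikit-fmm: pip install scikit-fmm

        nx, nz, dx = 201, 101, 0.5                     # grid size and spacing in km
        speed = np.full((nz, nx), 4.0)                 # background P velocity, km/s
        speed[60:, :] = 6.5                            # a faster layer below 30 km depth (hypothetical)

        phi = np.ones_like(speed)
        phi[0, 100] = -1.0                             # zero contour around the source cell (surface shot)

        t = skfmm.travel_time(phi, speed, dx=dx)       # first-arrival times over the whole grid, in s
        print(f"travel time to a surface receiver 50 km away: {t[0, 0]:.2f} s")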

  6. Rapid Parallel Calculation of shell Element Based On GPU

    NASA Astrophysics Data System (ADS)

    Wang, Jian Hua; Li, Guang Yao; Li, Sheng; Li, Guang Yao

    2010-06-01

    Long computing times have bottlenecked the application of the finite element method (FEM). In this paper, an effective method to speed up FEM calculations by using modern graphics processing units (GPUs) and programmable rendering tools is put forward: the element information is represented in accordance with the features of the GPU, all element calculations are converted into a rendering process, the internal force calculation for all elements is carried out in this way, and the low degree of parallelism previously achievable on a single computer is overcome. The study shows that this method can improve efficiency and greatly shorten calculation time. The results of simulation calculations for an elasticity problem with a large number of shell elements in sheet metal prove that the GPU-based parallel calculation is faster than the CPU-based one. This approach is useful and efficient for solving practical engineering problems.

  7. Sensitive Infrared Signal Detection by Upconversion Technique

    NASA Technical Reports Server (NTRS)

    Wong, Teh-Hwa; Yu, Jirong; Bai, Yingxin; Johnson, William; Chen, Songsheng; Petros, Mulugeta; Singh, Upendra N.

    2014-01-01

    We demonstrated upconversion-assisted detection of a 2.05-micron signal by sum-frequency generation of 700-nm light using a bulk periodically poled lithium niobate crystal. The achieved 94% intrinsic upconversion efficiency and 22.58% overall detection efficiency at pW-level 2.05-micron signal powers pave the way to detecting extremely weak infrared (IR) signals for remote sensing applications.

  8. Supramolecular control over recognition and efficient detection of picric acid.

    PubMed

    Béreau, Virginie; Duhayon, Carine; Sutter, Jean-Pascal

    2014-10-18

    Bimetallic Schiff-base Al(3+) complexes bearing ester functions at the periphery of the ligands are shown to be efficient fluorescent chemosensors for picric acid detection. The prominent role of an association between the chemosensor and the picric acid in the detection process is demonstrated. The detection of picric acid in water is achieved with the sensor deposited on paper.

  9. Efficient tiled calculation of over-10-gigapixel holograms using ray-wavefront conversion.

    PubMed

    Igarashi, Shunsuke; Nakamura, Tomoya; Matsushima, Kyoji; Yamaguchi, Masahiro

    2018-04-16

    In the calculation of large-scale computer-generated holograms, an approach called "tiling," which divides the hologram plane into small rectangles, is often employed due to limitations on computational memory. However, the total amount of computational complexity severely increases with the number of divisions. In this paper, we propose an efficient method for calculating tiled large-scale holograms using ray-wavefront conversion. In experiments, the effectiveness of the proposed method was verified by comparing its calculation cost with that using the previous method. Additionally, a hologram of 128K × 128K pixels was calculated and fabricated by a laser-lithography system, and a high-quality 105 mm × 105 mm 3D image including complicated reflection and translucency was optically reconstructed.

  10. An empirical formula to calculate the full energy peak efficiency of scintillation detectors.

    PubMed

    Badawi, Mohamed S; Abd-Elzaher, Mohamed; Thabet, Abouzeid A; El-khatib, Ahmed M

    2013-04-01

    This work provides an empirical formula to calculate the FEPE for different detectors using the effective solid angle ratio derived from experimental measurements. The full energy peak efficiency (FEPE) curves of a (2″×2″) NaI(Tl) detector at seven different axial distances from the detector were determined over a wide energy range from 59.53 to 1408 keV using standard point sources. The distinction was based on the effects of the source energy and the source-to-detector distance. A good agreement was noticed between the measured and calculated efficiency values for source-to-detector distances of 20, 25, 30, 35, 40, 45 and 50 cm. Copyright © 2012 Elsevier Ltd. All rights reserved.
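
    For the idealized case of an isotropic point source on the detector axis, the geometric part of the efficiency is simply the fractional solid angle subtended by the crystal face; a sketch of that textbook relation (not the paper's empirical FEPE formula):

        import math

        def geometric_efficiency(distance_cm, radius_cm):
            """Fractional solid angle of a circular detector face seen by an on-axis point source."""
            omega = 2 * math.pi * (1 - distance_cm / math.hypot(distance_cm, radius_cm))
            return omega / (4 * math.pi)

        # 2-inch-diameter NaI(Tl) face (radius ~2.54 cm) at the source-to-detector distances listed above
        for d in (20, 25, 30, 35, 40, 45, 50):
            print(d, "cm:", round(geometric_efficiency(d, 2.54), 5))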

  11. Calculations of 3D compressible flows using an efficient low diffusion upwind scheme

    NASA Astrophysics Data System (ADS)

    Hu, Zongjun; Zha, Gecheng

    2005-01-01

    A newly suggested E-CUSP upwind scheme is employed for the first time to calculate 3D flows of propulsion systems. The E-CUSP scheme contains the total energy in the convective vector and is fully consistent with the characteristic directions. The scheme is proved to have low diffusion and high CPU efficiency. The computed cases in this paper include a transonic nozzle with circular-to-rectangular cross-section, a transonic duct with shock wave/turbulent boundary layer interaction, and a subsonic 3D compressor cascade. The computed results agree well with the experiments. The new scheme is proved to be accurate, efficient and robust for the 3D calculations of the flows in this paper.

  12. Computer-aided detection of renal calculi from noncontrast CT images using TV-flow and MSER features.

    PubMed

    Liu, Jianfei; Wang, Shijun; Turkbey, Evrim B; Linguraru, Marius George; Yao, Jianhua; Summers, Ronald M

    2015-01-01

    Renal calculi are common extracolonic incidental findings on computed tomographic colonography (CTC). This work aims to develop a fully automated computer-aided diagnosis system to accurately detect renal calculi on CTC images. The authors developed a total variation (TV) flow method to reduce image noise within the kidneys while maintaining the characteristic appearance of renal calculi. Maximally stable extremal region (MSER) features were then calculated to robustly identify calculi candidates. Finally, the authors computed texture and shape features that were imported to support vector machines for calculus classification. The method was validated on a dataset of 192 patients and compared to a baseline approach that detects calculi by thresholding. The authors also compared their method with the detection approaches using anisotropic diffusion and nonsmoothing. At a false positive rate of 8 per patient, the sensitivities of the new method and the baseline thresholding approach were 69% and 35% (p < 1e - 3) on all calculi from 1 to 433 mm(3) in the testing dataset. The sensitivities of the detection methods using anisotropic diffusion and nonsmoothing were 36% and 0%, respectively. The sensitivity of the new method increased to 90% if only larger and more clinically relevant calculi were considered. Experimental results demonstrated that TV-flow and MSER features are efficient means to robustly and accurately detect renal calculi on low-dose, high noise CTC images. Thus, the proposed method can potentially improve diagnosis.
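
    The MSER candidate-generation step can be illustrated with OpenCV's generic MSER implementation; the synthetic slice and parameter values below are assumptions for illustration, not the authors' pipeline or tuning:

        import numpy as np
        import cv2                                     # OpenCV's MSER, used here only as a stand-in

        # Synthetic stand-in for a denoised 8-bit CT slice: dark background with small bright blobs
        slice8 = np.full((256, 256), 60, dtype=np.uint8)
        cv2.circle(slice8, (90, 120), 4, 220, -1)      # a calculus-like bright spot
        cv2.circle(slice8, (180, 70), 3, 200, -1)

        mser = cv2.MSER_create(5, 5, 500)              # delta, min_area, max_area for small regions
        regions, boxes = mser.detectRegions(slice8)
        print(f"{len(regions)} candidate regions")     # candidates would go on to the feature/SVM stage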

  13. Highly sensitive ratiometric detection of heparin and its oversulfated chondroitin sulfate contaminant by fluorescent peptidyl probe.

    PubMed

    Mehta, Pramod Kumar; Lee, Hyeri; Lee, Keun-Hyeung

    2017-05-15

    The selective and sensitive detection of heparin, an anticoagulant used in clinics, as well as of its contaminant oversulfated chondroitin sulfate (OSCS), is of great importance. We report, for the first time, a ratiometric sensing method for heparin as well as for OSCS contaminants in heparin using a fluorescent peptidyl probe (Pep1, pyrene-GSRKR) and a heparin-digesting enzyme. Pep1 exhibited a highly sensitive ratiometric response to nanomolar concentrations of heparin in aqueous solution over a wide pH range (2-11) and showed a highly selective ratiometric response to heparin among biological competitors such as hyaluronic acid and chondroitin sulfate. Pep1 showed a linear ratiometric response to nanomolar concentrations of heparin in aqueous solutions and in human serum samples. The detection limit for heparin was calculated to be 2.46 nM (R² = 0.99) in aqueous solutions, 2.98 nM (R² = 0.98) in 1% serum samples, and 3.43 nM (R² = 0.99) in 5% serum samples. Pep1 was applied to detect contaminating OSCS in heparin with heparinase I, II, and III, respectively. The ratiometric sensing method using Pep1 and heparinase II was highly sensitive, fast, and efficient for the detection of the OSCS contaminant in heparin. Pep1 with heparinase II could detect as low as 0.0001% (w/w) of OSCS in heparin by a ratiometric response. Copyright © 2017 Elsevier B.V. All rights reserved.
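
    A detection limit of the kind reported above is commonly estimated from a calibration slope via the 3-sigma criterion; a minimal sketch with made-up calibration points (not the published Pep1 data):

        import numpy as np

        # Hypothetical calibration: ratiometric response versus heparin concentration (nM)
        conc = np.array([0.0, 10.0, 20.0, 40.0, 80.0, 160.0])
        ratio = np.array([0.10, 0.18, 0.27, 0.43, 0.78, 1.46])

        slope, intercept = np.polyfit(conc, ratio, 1)
        sigma_blank = 0.003                            # assumed standard deviation of the blank signal
        lod = 3 * sigma_blank / slope                  # the common 3-sigma/slope criterion
        print(f"LOD ~ {lod:.2f} nM")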

  14. New method for enhanced efficiency in detection of gravitational waves from supernovae using coherent network of detectors

    NASA Astrophysics Data System (ADS)

    Mukherjee, S.; Salazar, L.; Mittelstaedt, J.; Valdez, O.

    2017-11-01

    Supernovae in our universe are potential sources of gravitational waves (GW) that could be detected in a network of GW detectors like LIGO and Virgo. Core-collapse supernovae are rare, but the associated gravitational radiation is likely to carry profuse information about the underlying processes driving the supernovae. Calculations based on analytic models predict GW energies within the detection range of the Advanced LIGO detectors, out to tens of Mpc for certain types of signals, e.g., coalescing binary neutron stars. For supernovae, however, the corresponding distances are much smaller. Thus, methods that can improve the sensitivity of searches for GW signals from supernovae are desirable, especially in the advanced detector era. Several methods have been proposed based on various likelihood-based regulators that work on data from a network of detectors to detect burst-like signals (as is the case for signals from supernovae) from potential GW sources. To address this problem, we have developed an analysis pipeline based on a method of noise reduction known as the harmonic regeneration noise reduction (HRNR) algorithm. To demonstrate the method, sixteen supernova waveforms from the Murphy et al. 2009 catalog have been used in the presence of LIGO science data. A comparative analysis is presented to show detection statistics for a standard network analysis as commonly used in GW pipelines and for the same analysis implementing the new method in conjunction with the network. The result shows a significant improvement in detection statistics.

  15. Computer-aided detection of renal calculi from noncontrast CT images using TV-flow and MSER features

    PubMed Central

    Liu, Jianfei; Wang, Shijun; Turkbey, Evrim B.; Linguraru, Marius George; Yao, Jianhua; Summers, Ronald M.

    2015-01-01

    Purpose: Renal calculi are common extracolonic incidental findings on computed tomographic colonography (CTC). This work aims to develop a fully automated computer-aided diagnosis system to accurately detect renal calculi on CTC images. Methods: The authors developed a total variation (TV) flow method to reduce image noise within the kidneys while maintaining the characteristic appearance of renal calculi. Maximally stable extremal region (MSER) features were then calculated to robustly identify calculi candidates. Finally, the authors computed texture and shape features that were imported to support vector machines for calculus classification. The method was validated on a dataset of 192 patients and compared to a baseline approach that detects calculi by thresholding. The authors also compared their method with the detection approaches using anisotropic diffusion and nonsmoothing. Results: At a false positive rate of 8 per patient, the sensitivities of the new method and the baseline thresholding approach were 69% and 35% (p < 1e − 3) on all calculi from 1 to 433 mm3 in the testing dataset. The sensitivities of the detection methods using anisotropic diffusion and nonsmoothing were 36% and 0%, respectively. The sensitivity of the new method increased to 90% if only larger and more clinically relevant calculi were considered. Conclusions: Experimental results demonstrated that TV-flow and MSER features are efficient means to robustly and accurately detect renal calculi on low-dose, high noise CTC images. Thus, the proposed method can potentially improve diagnosis. PMID:25563255

  16. Radiation anomaly detection algorithms for field-acquired gamma energy spectra

    NASA Astrophysics Data System (ADS)

    Mukhopadhyay, Sanjoy; Maurer, Richard; Wolff, Ron; Guss, Paul; Mitchell, Stephen

    2015-08-01

    The Remote Sensing Laboratory (RSL) is developing a tactical, networked radiation detection system that will be agile, reconfigurable, and capable of rapid threat assessment with a high degree of fidelity and certainty. Our design is driven by the needs of users such as law enforcement personnel who must make decisions by evaluating threat signatures in urban settings. The most efficient tool available to identify the nature of the threat object is real-time gamma spectroscopic analysis, as it is fast and has a very low probability of producing false positive alarm conditions. Urban radiological searches are inherently challenged by the rapid and large spatial variation of background gamma radiation, the presence of benign radioactive materials in the form of naturally occurring radioactive materials (NORM), and shielded and/or masked threat sources. Multiple spectral anomaly detection algorithms have been developed by national laboratories and commercial vendors. For example, the Gamma Detector Response and Analysis Software (GADRAS), a one-dimensional deterministic radiation transport code capable of calculating gamma-ray spectra using physics-based detector response functions, was developed at Sandia National Laboratories. The nuisance-rejection spectral comparison ratio anomaly detection algorithm (NSCRAD), developed at Pacific Northwest National Laboratory, uses spectral comparison ratios to detect deviations from benign medical and NORM radiation sources and can work despite a strong presence of NORM and/or medical sources. RSL has developed its own wavelet-based gamma energy spectral anomaly detection algorithm called WAVRAD. Test results and the relative merits of these different algorithms will be discussed and demonstrated.

  17. Optimization of ISOCS Parameters for Quantitative Non-Destructive Analysis of Uranium in Bulk Form

    NASA Astrophysics Data System (ADS)

    Kutniy, D.; Vanzha, S.; Mikhaylov, V.; Belkin, F.

    2011-12-01

    Quantitative calculation of the isotopic masses of fissionable U and Pu is important for forensic analysis of nuclear materials. γ-spectrometry is the most commonly applied tool for qualitative detection and analysis of key radionuclides in nuclear materials. Relative isotopic measurements of U and Pu may be obtained from γ-spectra through application of special software such as MGAU (Multi-Group Analysis for Uranium, LLNL) or FRAM (Fixed-Energy Response Function Analysis with Multiple Efficiency, LANL). If the concentration of U/Pu in the matrix is unknown, however, isotopic masses cannot be calculated. At present, active neutron interrogation is the only practical alternative for non-destructive quantification of fissionable isotopes of U and Pu. An active well coincidence counter (AWCC), an alternative for analyses of uranium materials, has the following disadvantages: 1) the detection of small quantities (≤100 g) of 235U is not possible in many models; 2) representative standards that capture the geometry, density and chemical composition of the analyzed unknown are required for precise analysis; and 3) specimen size is severely restricted by the size of the measuring chamber. These problems may be addressed using modified γ-spectrometry techniques based on a coaxial HPGe detector and ISOCS software (In Situ Object Counting System software, Canberra). We present data testing a new gamma-spectrometry method uniting actinide detection with commonly utilized software, modified for application in determining the masses of the fissionable isotopes in unknown samples of nuclear materials. The ISOCS software, widely used in radiation monitoring, calculates the detector efficiency curve for a specified geometry and range of photon energies. In describing the source-detector geometry, it is necessary to clearly specify the distance between the source and the detector, the material and the thickness of the walls of the container, as well as the material, density and chemical composition of the matrix of the specimen. Obviously, not all parameters can be characterized when measuring samples of unknown composition or uranium in bulk form. Because of this, and especially for uranium materials, the IAEA developed an ISOCS optimization procedure. The target values for the optimization are M_matrix,fixed, the matrix mass determined by weighing with a container of known mass, and E_fixed, the 235U enrichment determined by MGAU. The target values are fitted by varying the matrix density (ρ) and the concentration of uranium in the matrix of the unknown (w). For each (ρ_i, w_i), an efficiency curve is generated, and the masses of the uranium isotopes, M_235U,i and M_238U,i, are determined using spectral activity data and the known specific activities for U. Finally, fitted parameters are obtained for M_matrix,i = M_matrix,fixed ± 1σ and E_i = E_fixed ± 1σ, together with the corresponding parameters (ρ_i, w_i, M_235U,i, M_238U,i, M_U,i). We examined multiple forms of uranium (powdered, pressed, and scrap UO2 and U3O8) to test this method for its utility in accurately identifying the mass and enrichment of uranium materials, and will present the results of this research.

  18. Design Study of an Incinerator Ash Conveyor Counting System - 13323

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jaederstroem, Henrik; Bronson, Frazier

    A design study has been performed for a system that should measure the Cs-137 activity in ash from an incinerator. Radioactive ash, expected to contain both Cs-134 and Cs-137, will be transported on a conveyor belt at 0.1 m/s. The objective of the counting system is to determine the Cs-137 activity and direct the ash to the correct stream after a diverter. The decision levels range from 8000 to 400000 Bq/kg, and the decision error should be as low as possible. The decision error depends on the total measurement uncertainty, which depends on the counting statistics and the uncertainty in the efficiency of the geometry. For the low-activity decision it is necessary to know the efficiency in order to determine whether the signal from the Cs-137 is above the minimum detectable activity and whether it generates enough counts to reach the desired precision. For the higher-activity decision the uncertainty of the efficiency needs to be understood to minimize decision errors. The total efficiency of the detector is needed to determine whether the detector will be able to operate at the count rate corresponding to the highest expected activity. The design study presented in this paper describes how the objectives of the monitoring system were established, how the choice of detector was made, and how ISOCS (In Situ Object Counting System) mathematical modeling was used to calculate the efficiency. The ISOCS uncertainty estimator (IUE) was used to determine which parameters of the ash were important to know accurately in order to minimize the uncertainty of the efficiency. The examined parameters include the height of the ash on the conveyor belt, the matrix composition and density, and the relative efficiency of the detector. (authors)
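
    The minimum detectable activity mentioned above is commonly estimated with the Currie formula; a sketch with hypothetical counting parameters (the efficiency, emission probability, live time and ash mass below are assumptions, not the design values):

        import math

        def currie_mda(background_counts, efficiency, emission_prob, live_time_s, mass_kg):
            """Currie minimum detectable activity (Bq/kg) for a single gamma line."""
            ld = 2.71 + 4.65 * math.sqrt(background_counts)   # detection limit in counts
            return ld / (efficiency * emission_prob * live_time_s * mass_kg)

        # Hypothetical ash measurement on the 662 keV line of Cs-137 (emission probability ~0.85)
        mda = currie_mda(background_counts=2000, efficiency=0.001,
                         emission_prob=0.85, live_time_s=60, mass_kg=5.0)
        print(f"MDA ~ {mda:.0f} Bq/kg")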

  19. Wildfire Detection using by Multi Dimensional Histogram in Boreal Forest

    NASA Astrophysics Data System (ADS)

    Honda, K.; Kimura, K.; Honma, T.

    2008-12-01

    Early detection of wildfires is important for reducing damage to the environment and to humans. Attempts to detect wildfires using satellite imagery are mainly classified into three methods: the Dozier method (1981-), the threshold method (1986-) and the contextual method (1994-). However, the accuracy of these methods is not sufficient: the detected results include both commission and omission errors. In addition, it is not easy to analyze satellite imagery with high accuracy because of insufficient ground truth data. Kudoh and Hosoi (2003) developed a detection method using a three-dimensional (3D) histogram built from past fire data in NOAA-AVHRR imagery, but their method is impractical because it depends on manual work to pick out past fire data from a huge volume of data. Therefore, the purpose of this study is to collect fire points as hot spots efficiently from satellite imagery and to improve the method to detect wildfires with the collected data. In our method, we collect past fire data using the Alaska Fire History data obtained from the Alaska Fire Service (AFS): we select points that are expected to be wildfires and pick up the points inside the fire areas of the AFS data. Next, we build a 3D histogram from the past fire data. In this study, we use Bands 1, 21 and 32 of MODIS. We then calculate the likelihood of wildfire with the three-dimensional histogram. As a result, wildfires are selected effectively with the 3D histogram; for example, a toroidally spreading wildfire is detected, which is evidence of good wildfire detection. However, areas surrounding glaciers tend to show elevated brightness temperatures and produce false alarms; burnt areas and bare ground are also sometimes flagged as false alarms, so the method needs further improvement. Additionally, we are trying various combinations of MODIS bands to detect wildfires more effectively. To adapt our method to other areas, we are applying it to tropical forest in Kalimantan, Indonesia and around Chiang Mai, Thailand, but the ground truth data in these areas are sparser than in Alaska, and our method needs a large amount of accurately observed data to build a multi-dimensional histogram for the same area. In this study, we show a system to select wildfire data efficiently from satellite imagery; furthermore, the development of a multi-dimensional histogram from past fire data makes it possible to detect wildfires accurately.
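
    The multi-dimensional histogram likelihood described above can be sketched with numpy's histogramdd; the band values, bin counts and test pixels below are synthetic assumptions, not the MODIS training data:

        import numpy as np

        # Synthetic stand-in for band values (e.g. MODIS bands 1, 21, 32) of known past-fire pixels
        rng = np.random.default_rng(4)
        fire_samples = rng.normal([0.08, 330.0, 300.0], [0.02, 8.0, 5.0], size=(5000, 3))

        hist, edges = np.histogramdd(fire_samples, bins=(16, 16, 16))
        hist = hist / hist.sum()                       # empirical fire likelihood per 3D bin

        def fire_likelihood(pixel):
            idx = [int(np.clip(np.searchsorted(e, v) - 1, 0, 15)) for e, v in zip(edges, pixel)]
            return hist[tuple(idx)]

        print(fire_likelihood([0.08, 332.0, 301.0]))   # relatively high for a fire-like pixel
        print(fire_likelihood([0.30, 290.0, 280.0]))   # essentially zero for a non-fire pixel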

  20. Optical Forces on Non-Spherical Nanoparticles Trapped by Optical Waveguides

    NASA Astrophysics Data System (ADS)

    Hasan Ahmed, Dewan; Sung, Hyung Jin

    2011-07-01

    Numerical simulations of a solid-core polymer waveguide structure were performed to calculate the trapping efficiencies of particles with nanoscale dimensions smaller than the wavelength of the trapping beam. A three-dimensional (3-D) finite element method was employed to calculate the electromagnetic field. The inlet and outlet boundary conditions were obtained using an eigenvalue solver to determine the guided and evanescent mode profiles. The Maxwell stress tensor was considered for the calculation of the transverse and downward trapping efficiencies. A particle at the center of the waveguide showed minimal transverse trapping efficiency and maximal downward trapping efficiency. This trend gradually reversed as the particle moved away from the center of the waveguide. Particles with larger surface areas exhibited higher trapping efficiencies and tended to be trapped near the waveguide. Particles displaced from the wave input tended to be trapped at the waveguide surface. Simulation of an ellipsoidal particle showed that the orientation of the major axis along the waveguide's lateral z-coordinate significantly influenced the trapping efficiency. The particle dimensions along the z-coordinate were more critical than the gap distance (vertical displacement from the floor of the waveguide) between the ellipsoid particle and the waveguide. The present model was validated using the available results reported in the literature for different trapping efficiencies.

  1. Alternative Beam Efficiency Calculations for a Large-aperture Multiple-frequency Microwave Radiometer (LAMMR)

    NASA Technical Reports Server (NTRS)

    Schmidt, R. F.

    1979-01-01

    The fundamental definition of beam efficiency, given in terms of a far field radiation pattern, was used to develop alternative definitions which improve accuracy, reduce the amount of calculation required, and isolate the separate factors composing beam efficiency. Well-known definitions of aperture efficiency were introduced successively to simplify the denominator of the fundamental definition. The superposition of complex vector spillover and backscattered fields was examined, and beam efficiency analysis in terms of power patterns was carried out. An extension from single to dual reflector geometries was included. It is noted that the alternative definitions are advantageous in the mathematical simulation of a radiometer system, and are not intended for the measurements discipline where fields have merged and therefore lost their identity.

  2. Measuring cost efficiency in the Nordic hospitals--a cross-sectional comparison of public hospitals in 2002.

    PubMed

    Linna, Miika; Häkkinen, Unto; Peltola, Mikko; Magnussen, Jon; Anthun, Kjartan S; Kittelsen, Sverre; Roed, Annette; Olsen, Kim; Medin, Emma; Rehnberg, Clas

    2010-12-01

    The aim of this study was to compare the performance of hospital care in four Nordic countries: Norway, Finland, Sweden and Denmark. Using national discharge registries and cost data from hospitals, cost efficiency in the production of somatic hospital care was calculated for public hospitals. Data were collected using harmonized definitions of inputs and outputs for 184 hospitals and data envelopment analysis was used to calculate Farrell efficiency estimates for the year 2002. Results suggest that there were marked differences in the average hospital efficiency between Nordic countries. In 2002, average efficiency was markedly higher in Finland compared to Norway and Sweden. This study found differences in cost efficiency that cannot be explained by input prices or differences in coding practices. More analysis is needed to reveal the causes of large efficiency disparities between Nordic hospitals.
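
    The Farrell efficiency estimates mentioned above come from data envelopment analysis (DEA); the sketch below is a generic input-oriented, constant-returns (CCR) DEA linear program solved with scipy, using invented cost and output figures rather than the Nordic hospital data.

    ```python
    # Sketch: input-oriented CCR DEA efficiency scores for hypothetical hospitals.
    import numpy as np
    from scipy.optimize import linprog

    # One input (operating cost) and two outputs (e.g. DRG-weighted discharges,
    # outpatient visits) for five hypothetical hospitals.
    X = np.array([[100.0], [120.0], [90.0], [150.0], [110.0]])      # inputs
    Y = np.array([[500.0, 200.0], [520.0, 300.0], [450.0, 180.0],
                  [700.0, 350.0], [480.0, 260.0]])                  # outputs

    def ccr_efficiency(j0):
        n, m, s = X.shape[0], X.shape[1], Y.shape[1]
        c = np.zeros(n + 1); c[0] = 1.0                 # minimise theta
        A_ub, b_ub = [], []
        for i in range(m):                              # sum_j lam_j x_ij <= theta * x_i0
            A_ub.append(np.concatenate(([-X[j0, i]], X[:, i]))); b_ub.append(0.0)
        for r in range(s):                              # sum_j lam_j y_rj >= y_r0
            A_ub.append(np.concatenate(([0.0], -Y[:, r]))); b_ub.append(-Y[j0, r])
        bounds = [(None, None)] + [(0.0, None)] * n     # theta free, lambdas >= 0
        res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=bounds)
        return res.x[0]

    print([round(ccr_efficiency(j), 3) for j in range(5)])
    ```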

  3. Phytoextraction of arsenic-contaminated soil with Pteris vittata in Henan Province, China: comprehensive evaluation of remediation efficiency correcting for atmospheric depositions.

    PubMed

    Lei, Mei; Wan, Xiaoming; Guo, Guanghui; Yang, Junxing; Chen, Tongbin

    2018-01-01

    Research on appropriate methods for evaluating phytoremediation efficiency is limited. A 2-year field experiment was conducted to investigate phytoremediation efficiency using the hyperaccumulator Pteris vittata on an arsenic (As)-contaminated site. The remediation efficiency was evaluated through the removal rate of As in soils and the extraction rate of heavy metals in plants. After 2 years of remediation, the concentration of total As in soils decreased from 16.27 mg/kg in 2012 to 14.58 mg/kg in 2014. The total remediation efficiency of As was 10.39% in terms of the removal rate of heavy metals calculated for soils, whereas the remediation efficiency calculated from As uptake by P. vittata was 16.09%. This discrepancy prompted further consideration of potential inputs of As. A large amount of As was brought in by atmospheric emissions, which possibly biased the calculation of remediation efficiency. Indeed, when the atmospheric deposition of As is also considered, the corrected removal rate of As from soil was 16.57%. Therefore, the results of this work suggest that (i) when evaluating phytoextraction efficiency, the whole input and output cycle of the element of interest in the targeted ecosystem must be considered, and (ii) P. vittata has the potential to be used to remediate As-contaminated soils in Henan Province, China.
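
    The two efficiency figures quoted above follow from simple concentration arithmetic; the sketch below reproduces the uncorrected 10.39% removal rate from the reported concentrations and shows how an assumed atmospheric deposition term (about 1 mg/kg, chosen only to illustrate the correction) moves the figure toward the corrected value.

    ```python
    # Back-of-the-envelope removal-rate calculation; deposition value is assumed.
    c_2012, c_2014 = 16.27, 14.58            # total soil As, mg/kg
    removal_from_soil = (c_2012 - c_2014) / c_2012
    print(f"uncorrected removal rate: {removal_from_soil:.2%}")   # ~10.4%

    # If atmospheric deposition added an (assumed) ~1.0 mg/kg of As over the
    # two years, the plants must have extracted that input as well:
    deposition = 1.0
    corrected = (c_2012 + deposition - c_2014) / c_2012
    print(f"deposition-corrected removal rate: {corrected:.2%}")  # ~16-17%
    ```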

  4. Monopole search below the Parker limit with the MACRO detector at Gran Sasso

    NASA Technical Reports Server (NTRS)

    Tarle, G.

    1985-01-01

    The MACRO detector approved for the Gran Sasso Underground Laboratory in Italy will be the first capable of performing a definitive search for super-massive grand unified theory (GUT) monopoles at a level significantly below the Parker flux limit of 10^-15 cm^-2 sr^-1 s^-1. GUT monopoles will move at very low velocities (v approx. 0.001 c) relative to the Earth, and a multifaceted detection technique is required to ensure their unambiguous identification. Calculations of scintillator response to slow monopoles and measurements of scintillation efficiency for low-energy protons have shown that bare monopoles and electrically charged monopoles moving at velocities as low as 5 x 10^-4 c will produce detectable scintillation signals. The time-of-flight between two thick (25 cm) liquid scintillation layers separated by 4.3 m will be used in conjunction with waveform digitization of signals of extended duration in each thick scintillator to provide a redundant signature for slow penetrating particles. Limited streamer tubes filled with He and n-pentane will detect bare monopoles with velocities as low as 1 x 10^-4 c by exploiting monopole-induced level mixing and the Penning effect.

  5. Inertial Sensor-Based Motion Analysis of Lower Limbs for Rehabilitation Treatments

    PubMed Central

    Sun, Tongyang; Duan, Lihong; Wang, Yulong

    2017-01-01

    Diagnosis of the hemiplegic rehabilitation state by therapists can be biased by their subjective experience, which may degrade the rehabilitation outcome. To improve this situation, a quantitative evaluation is proposed. Though many motion analysis systems are available, they are too complicated for practical application by therapists. In this paper, a method for detecting the motion of human lower limbs, including all degrees of freedom (DOFs), via inertial sensors is proposed, which permits analysis of the patient's motion ability. This method is applicable to arbitrary walking directions and tracks of the persons under study, and its results are unbiased compared to therapists' qualitative estimations. Using a simplified mathematical model of the human body, the rotation angles of each lower-limb joint are calculated from the signals acquired by the inertial sensors. Finally, rotation angle versus joint displacement curves are constructed, and estimated values of the joint motion angle and motion ability are obtained. Experimental verification of the proposed motion detection and analysis method was performed, which showed that it can efficiently detect the differences between the motion behaviors of disabled and healthy persons and provide a reliable quantitative evaluation of the rehabilitation state. PMID:29065575

  6. An online sleep apnea detection method based on recurrence quantification analysis.

    PubMed

    Nguyen, Hoa Dinh; Wilkins, Brek A; Cheng, Qi; Benjamin, Bruce Allen

    2014-07-01

    This paper introduces an online sleep apnea detection method based on heart rate complexity as measured by recurrence quantification analysis (RQA) statistics of heart rate variability (HRV) data. RQA statistics can capture the nonlinear dynamics of a complex cardiorespiratory system during obstructive sleep apnea. In order to obtain a more robust measurement of the nonstationarity of the cardiorespiratory system, we use several fixed-amount-of-neighbors thresholds for the recurrence plot calculation. We integrate a feature selection algorithm based on conditional mutual information to select the most informative RQA features for classification and, hence, to speed up the real-time classification process without degrading the performance of the system. Two types of binary classifiers, i.e., support vector machine and neural network, are used to differentiate apnea from normal sleep. A soft decision fusion rule is developed to combine the results of these classifiers in order to improve the classification performance of the whole system. Experimental results show that our proposed method achieves better classification results compared with the previous recurrence analysis-based approach. We also show that our method is flexible and a strong candidate for a real-time, efficient sleep apnea detection system.
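
    As a hedged sketch of the recurrence-quantification ingredients mentioned above (not the authors' pipeline), the snippet below builds a recurrence plot from a synthetic RR-interval series using a fixed amount of neighbours per column and computes two common RQA statistics, recurrence rate and determinism; the embedding parameters and data are illustrative.

    ```python
    # Sketch: recurrence plot with fixed-amount-of-neighbours thresholding and basic RQA.
    import numpy as np

    def embed(x, dim=3, tau=1):
        n = len(x) - (dim - 1) * tau
        return np.column_stack([x[i * tau:i * tau + n] for i in range(dim)])

    def recurrence_plot_fan(x, dim=3, tau=1, neighbours=0.1):
        v = embed(np.asarray(x, float), dim, tau)
        d = np.linalg.norm(v[:, None, :] - v[None, :, :], axis=-1)
        k = max(1, int(neighbours * len(v)))          # fixed number of neighbours
        rp = np.zeros_like(d, dtype=bool)
        for j in range(len(v)):                       # threshold chosen per column
            rp[np.argsort(d[:, j])[:k], j] = True
        return rp

    def rqa_stats(rp, lmin=2):
        rr = rp.mean()                                # recurrence rate
        diag_points = 0                               # points on diagonals >= lmin
        for off in range(-(rp.shape[0] - 1), rp.shape[0]):
            line = np.diagonal(rp, offset=off).astype(int)
            runs = np.diff(np.flatnonzero(np.diff(np.concatenate(([0], line, [0])))))[::2]
            diag_points += sum(r for r in runs if r >= lmin)
        det = diag_points / rp.sum() if rp.sum() else 0.0   # determinism (LOI included)
        return rr, det

    rr_intervals = np.sin(np.linspace(0, 20, 300)) + 0.05 * np.random.default_rng(1).normal(size=300)
    rr, det = rqa_stats(recurrence_plot_fan(rr_intervals))
    print(f"recurrence rate {rr:.3f}, determinism {det:.3f}")
    ```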

  7. An energy- and depth-dependent model for x-ray imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gallas, Brandon D.; Boswell, Jonathan S.; Badano, Aldo

    In this paper, we model an x-ray imaging system, paying special attention to the energy- and depth-dependent characteristics of the inputs and interactions: x rays are polychromatic, interaction depth and conversion to optical photons are energy-dependent, and optical scattering and the collection efficiency depend on the depth of interaction. The model we construct is a random function of the point process that begins with the distribution of x rays incident on the phosphor and ends with optical photons being detected by the active area of detector pixels to form an image. We show how the point-process representation can be used to calculate the characteristic statistics of the model. We then simulate a Gd2O2S:Tb phosphor, estimate its characteristic statistics, and proceed with a signal-detection experiment to investigate the impact of the pixel fill factor on detecting spherical calcifications (the signal). The two extremes possible from this experiment are that SNR^2 does not change with fill factor or changes in proportion to fill factor. In our results, the impact of fill factor is between these extremes, and depends on the diameter of the signal.

  8. High efficiency microfluidic beta detector for pharmacokinetic studies in small animals

    NASA Astrophysics Data System (ADS)

    Convert, Laurence; Girard-Baril, Frédérique; Renaudin, Alan; Grondin, Étienne; Jaouad, Abdelatif; Aimez, Vincent; Charette, Paul; Lecomte, Roger

    2011-10-01

    New radiotracers are continuously being developed to improve diagnostic efficiency using Single Photon Emission Computed Tomography (SPECT) or Positron Emission Tomography (PET). The characterization of their pharmacokinetics requires blood radioactivity monitoring over time during the scan and is very challenging in small animals because of the low volume of blood available. In this work, a prototype microfluidic blood counter made of a microchannel atop a silicon substrate containing PIN photodiodes is proposed to improve beta detection efficiency in a small volume by eliminating unnecessary interfaces between fluid and detector. A flat rectangular-shaped epoxy channel, 36 μm × 1.26 mm in cross section and 31.5 mm in length, was microfabricated over a die containing an array of 2 × 2 mm^2 PIN photodiodes, leaving only a few micrometers of epoxy floor layer between the fluid and the photodiode sensitive surface. This geometry leads to a quasi-2D source, optimizing the geometrical detection efficiency, which was estimated at 41% using a solid angle calculation. C-V and I-V measurements were made at each fabrication step to confirm that the microchannel components had no significant effects on the diodes' electrical characteristics. The chip was wire-bonded to a PCB and connected to charge-sensitive preamplifier and amplifier modules for pulse shaping. Energy spectra recorded for different isotopes showed continuous beta distributions for PET isotopes and monoenergetic conversion electron peaks for 99mTc. Absolute sensitivity was determined for the most popular PET and SPECT radioisotopes and ranged from 26% to 33% for PET tracers (18F, 13N, 11C, 68Ga) and was more than 2% for 99mTc. Input functions were successfully simulated with 18F, confirming the setup's suitability for pharmacokinetic modeling of PET and SPECT radiotracers in animal experiments. By using standard materials and procedures, the fabrication process is well suited to on-chip microfluidic functionality, allowing full characterization of new radiotracers.
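
    The 41% geometric efficiency quoted above is a solid-angle estimate; the sketch below reproduces the flavour of such a calculation for a thin planar source a few micrometres above a 2 mm x 2 mm photodiode. The standoff distance, grid, and the restriction of source points to the diode footprint are assumptions, so the number it prints is only indicative (close to 0.5 here rather than the paper's 41%, which averaged over the whole channel).

    ```python
    # Sketch: average solid-angle fraction seen by source points above a rectangular diode.
    import numpy as np

    def solid_angle_rectangle(x, y, h, ax=2.0e-3, ay=2.0e-3):
        """Solid angle of an ax-by-ay rectangle (corner at the origin, in the z=0
        plane) seen from the point (x, y, h), decomposed into four corner terms."""
        def corner(a, b):
            if a <= 0 or b <= 0:
                return 0.0
            return np.arctan(a * b / (h * np.sqrt(a * a + b * b + h * h)))
        return (corner(x, y) + corner(ax - x, y) +
                corner(x, ay - y) + corner(ax - x, ay - y))

    h = 20e-6                                   # assumed source-to-diode distance (m)
    xs = np.linspace(1e-5, 1.99e-3, 60)         # source points above the diode footprint
    ys = np.linspace(1e-5, 1.99e-3, 60)
    fractions = [solid_angle_rectangle(x, y, h) / (4 * np.pi) for x in xs for y in ys]
    print(f"mean geometric efficiency ~ {np.mean(fractions):.2f}")
    ```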

  9. SU-E-J-199: A Software Tool for Quality Assurance of Online Replanning with MR-Linac

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, G; Ahunbay, E; Li, X

    2015-06-15

    Purpose: To develop a quality assurance software tool, ArtQA, capable of automatically checking radiation treatment plan parameters, verifying plan data transfer from treatment planning system (TPS) to record and verify (R&V) system, performing a secondary MU calculation considering the effect of magnetic field from MR-Linac, and verifying the delivery and plan consistency, for online replanning. Methods: ArtQA was developed by creating interfaces to TPS (e.g., Monaco, Elekta), R&V system (Mosaiq, Elekta), and secondary MU calculation system. The tool obtains plan parameters from the TPS via direct file reading, and retrieves plan data both transferred from TPS and recorded during the actual delivery in the R&V system database via open database connectivity and structured query language. By comparing beam/plan datasets in different systems, ArtQA detects and outputs discrepancies between TPS, R&V system and secondary MU calculation system, and delivery. To consider the effect of 1.5T transverse magnetic field from MR-Linac in the secondary MU calculation, a method based on modified Clarkson integration algorithm was developed and tested for a series of clinical situations. Results: ArtQA is capable of automatically checking plan integrity and logic consistency, detecting plan data transfer errors, performing secondary MU calculations with or without a transverse magnetic field, and verifying treatment delivery. The tool is efficient and effective for pre- and post-treatment QA checks of all available treatment parameters that may be impractical with the commonly-used visual inspection. Conclusion: The software tool ArtQA can be used for quick and automatic pre- and post-treatment QA check, eliminating human error associated with visual inspection. While this tool is developed for online replanning to be used on MR-Linac, where the QA needs to be performed rapidly as the patient is lying on the table waiting for the treatment, ArtQA can be used as a general QA tool in radiation oncology practice. This work is partially supported by Elekta Inc.

  10. SU-F-T-376: The Efficiency of Calculating Photonuclear Reaction On High-Energy Photon Therapy by Monte Carlo Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hirayama, S; Fujibuchi, T

    Purpose: Secondary neutrons, which have harmful effects on the human body, are generated by photonuclear reactions during high-energy photon therapy. Their characteristics are not known in detail because the calculations needed to evaluate them take a very long time. The PHITS (Particle and Heavy Ion Transport code System) Monte Carlo code, since version 2.80, has a new parameter, "pnimul", which forcibly raises the probability of photonuclear reactions in order to make the calculation more efficient. We investigated the optimum value of "pnimul" for high-energy photon therapy. Methods: An accelerator head geometry based on the specification of a Varian Clinac 21EX was used in PHITS ver. 2.80. A phantom (30 cm * 30 cm * 30 cm) filled with the composition defined by the ICRU (International Commission on Radiation Units) was placed at a source-surface distance of 100 cm. We calculated the neutron energy spectra at the surface of the ICRU phantom with "pnimul" set to 1, 10, 100, 1000 and 10000, and compared the total calculation time and the photon behavior using PDD (percentage depth dose) and OCR (off-center ratio) curves. Next, cutoff energies of 4, 5, 6 and 7 MeV for photons, electrons and positrons were investigated for their effect on calculation efficiency. Results: The total calculation time needed for the neutron fluence errors to fall within 1% decreased with increasing "pnimul". PDD and OCR showed no differences due to the parameter. The calculation time decreased as the cutoff energy was increased from 4 to 7 MeV; however, the time needed for the photon errors to fall within 1% did not decrease with the cutoff energy. Conclusion: The optimum values of "pnimul" and the cutoff energy were investigated for high-energy photon therapy. The results suggest that using the optimum "pnimul" improves the calculation efficiency; the choice of cutoff energy needs further investigation.

  11. Prospective assessment of early fetal loss using an immunoenzymometric screening assay for detection of urinary human chorionic gonadotropin.

    PubMed

    Taylor, C A; Overstreet, J W; Samuels, S J; Boyers, S P; Canfield, R E; O'Connor, J F; Hanson, F W; Lasley, B L

    1992-06-01

    To develop an economical, nonradiometric immunoenzymometric assay (IEMA) for the detection of urinary human chorionic gonadotropin (hCG) in studies of early fetal loss. To be effective, the IEMA must have a sensitivity equal to the standard immunoradiometric assay (IRMA) and sufficient specificity to eliminate the need for screening most nonconceptive cycles with the expensive and labor-intensive IRMA. Two different assays were used to measure hCG in daily early morning urine samples from potential conceptive cycles. Women undergoing donor artificial insemination (AI) were evaluated in a prospective study. Ninety-two women volunteers were selected on the basis of apparent normal reproductive health. Artificial insemination with nonfrozen donor semen was performed by cervical cup twice each menstrual cycle at 48-hour intervals, and daily urine samples were self-collected throughout the menstrual cycle. An IEMA was developed to detect urinary hCG using the same antibodies as in the standard IRMA; a study was designed to determine whether this nonradiometric assay could successfully detect the early fetal loss that was detected by the IRMA. Of 224 menstrual cycles analyzed by both assays, a total of six early fetal losses were detected by the IRMA. When the tentative screening rule was set to allow all six of these losses and 95% of future losses to be detected by the IEMA, an additional 34 false-positive results were detected by the IEMA. The specificity of the IEMA with this rule was calculated to be 84%. An IEMA based on the same antibodies used for the standard IRMA can serve as an efficient screening assay for the detection of early fetal loss. When the IEMA is used in this manner, nearly 80% of screened menstrual cycles can be eliminated without further testing by the IRMA.

  12. Abrupt skin lesion border cutoff measurement for malignancy detection in dermoscopy images.

    PubMed

    Kaya, Sertan; Bayraktar, Mustafa; Kockara, Sinan; Mete, Mutlu; Halic, Tansel; Field, Halle E; Wong, Henry K

    2016-10-06

    Automated skin lesion border examination and analysis techniques have become an important field of research for distinguishing malignant pigmented lesions from benign lesions. An abrupt pigment pattern cutoff at the periphery of a skin lesion is one of the most important dermoscopic features for detection of neoplastic behavior. In the current clinical setting, the lesion is divided into a virtual pie with eight sections. Each section is examined by a dermatologist for abrupt cutoff and scored accordingly, which can be tedious and subjective. This study introduces a novel approach to objectively quantify abruptness of pigment patterns along the lesion periphery. In the proposed approach, first, the skin lesion border is detected by a density-based lesion border detection method. Second, the detected border is gradually scaled through vector operations. Then, along gradually scaled borders, pigment pattern homogeneities are calculated at different scales. Through this process, statistical texture features are extracted. Moreover, different color spaces are examined for the efficacy of texture analysis. The proposed method has been tested and validated on 100 (31 melanoma, 69 benign) dermoscopy images. The analysis results indicate that the proposed method is effective for malignancy detection. More specifically, we obtained a specificity of 0.96 and a sensitivity of 0.86 for malignancy detection in a certain color space. The F-measure (the harmonic mean of recall and precision) of the framework is 0.87. The use of texture homogeneity along the periphery of the lesion border is an effective method to detect malignancy of the skin lesion in dermoscopy images. Among the different color spaces tested, the blue channel of the RGB color space is the most informative color channel for detecting malignancy in skin lesions. It is followed by the Cr channel of the YCbCr color space, which in turn is closely followed by the green channel of the RGB color space.

  13. A method of evaluating efficiency during space-suited work in a neutral buoyancy environment

    NASA Technical Reports Server (NTRS)

    Greenisen, Michael C.; West, Phillip; Newton, Frederick K.; Gilbert, John H.; Squires, William G.

    1991-01-01

    The purpose was to investigate efficiency as related to the work transmission and the metabolic cost of various extravehicular activity (EVA) tasks during simulated microgravity (whole body water immersion) using three space suits. Two new prototype space station suits, AX-5 and MKIII, are pressurized at 57.2 kPa and were tested concurrently with the operationally used 29.6 kPa shuttle suit. Four male astronauts were asked to perform a fatigue trial on four upper extremity exercises during which metabolic rate and work output were measured and efficiency was calculated in each suit. The activities were selected to simulate actual EVA tasks. The test article was an underwater dynamometry system to which the astronauts were secured by foot restraints. All metabolic data was acquired, calculated, and stored using a computerized indirect calorimetry system connected to the suit ventilation/gas supply control console. During the efficiency testing, steady state metabolic rate could be evaluated as well as work transmitted to the dynamometer. Mechanical efficiency could then be calculated for each astronaut in each suit performing each movement.

  14. Model Energy Efficiency Program Impact Evaluation Guide

    EPA Pesticide Factsheets

    Find guidance on model approaches for calculating energy, demand, and emissions savings resulting from energy efficiency programs. It describes several standard approaches that can be used in order to make these programs more efficient.

  15. Sparsity-weighted outlier FLOODing (OFLOOD) method: Efficient rare event sampling method using sparsity of distribution.

    PubMed

    Harada, Ryuhei; Nakamura, Tomotake; Shigeta, Yasuteru

    2016-03-30

    As an extension of the Outlier FLOODing (OFLOOD) method [Harada et al., J. Comput. Chem. 2015, 36, 763], the sparsity of the outliers defined by a hierarchical clustering algorithm, FlexDice, was considered to achieve an efficient conformational search as sparsity-weighted "OFLOOD." In OFLOOD, FlexDice detects areas of sparse distribution as outliers. The outliers are regarded as candidates that have high potential to promote conformational transitions and are employed as initial structures for conformational resampling by restarting molecular dynamics simulations. When detecting outliers, FlexDice defines a rank in the hierarchy for each outlier, which relates to sparsity in the distribution. In this study, we define lower-rank (first-ranked), medium-rank (second-ranked), and highest-rank (third-ranked) outliers. For instance, the first-ranked outliers are located in a given conformational space away from the clusters (a highly sparse distribution), whereas the third-ranked outliers lie near the clusters (a moderately sparse distribution). To achieve the conformational search efficiently, resampling from the outliers with a given rank is performed. As demonstrations, this method was applied to several model systems: alanine dipeptide, Met-enkephalin, Trp-cage, T4 lysozyme, and glutamine binding protein. In each demonstration, the present method successfully reproduced transitions among metastable states. In particular, the first-ranked OFLOOD highly accelerated the exploration of conformational space by expanding its edges. In contrast, the third-ranked OFLOOD intensively reproduced local transitions among neighboring metastable states. For quantitative evaluation of the sampled snapshots, free energy calculations were performed with a combination of umbrella samplings, providing rigorous landscapes of the biomolecules. © 2015 Wiley Periodicals, Inc.

  16. Performance Analysis of a Pole and Tree Trunk Detection Method for Mobile Laser Scanning Data

    NASA Astrophysics Data System (ADS)

    Lehtomäki, M.; Jaakkola, A.; Hyyppä, J.; Kukko, A.; Kaartinen, H.

    2011-09-01

    Dense point clouds can be collected efficiently from large areas using mobile laser scanning (MLS) technology. Accurate MLS data can be used for detailed 3D modelling of the road surface and objects around it. The 3D models can be utilised, for example, in street planning and maintenance and noise modelling. Utility poles, traffic signs, and lamp posts can be considered an important part of road infrastructure. Poles and trees stand out from the environment and should be included in realistic 3D models. Detection of narrow vertical objects, such as poles and tree trunks, from MLS data was studied. MLS produces huge amounts of data and, therefore, processing methods should be as automatic as possible and for the methods to be practical, the algorithms should run in an acceptable time. The automatic pole detection method tested in this study is based on first finding point clusters that are good candidates for poles and then separating poles and tree trunks from other clusters using features calculated from the clusters and by applying a mask that acts as a model of a pole. The method achieved detection rates of 77.7% and 69.7% in the field tests while 81.0% and 86.5% of the detected targets were correct. Pole-like targets that were surrounded by other objects, such as tree trunks that were inside branches, were the most difficult to detect. Most of the false detections came from wall structures, which could be corrected in further processing.

  17. Evaluation of a Commercial Multiplex PCR for Rapid Detection of Multi Drug Resistant Gram Negative Infections

    PubMed Central

    Chavada, Ruchir; Maley, Michael

    2015-01-01

    Introduction: Community- and healthcare-associated infections caused by multi-drug resistant gram-negative organisms (MDR GN) represent a worldwide threat. Nucleic acid detection tests are becoming more common for their detection; however, they can be expensive, requiring specialised equipment and local expertise. This study was done to evaluate the utility of a commercial multiplex tandem (MT) PCR for detection of MDR GN. Methods: The study was done on stored laboratory MDR GN isolates from sterile and non-sterile specimens (n=126, out of 567 stored organisms). Laboratory validation of the MT PCR was done to evaluate sensitivity, specificity and agreement with the current phenotypic methods used in the laboratory. Amplicon sequencing was also done on selected isolates for assessing performance characteristics. Workflow and cost implications of the MT PCR were evaluated. Results: The sensitivity and specificity of the MT PCR were calculated to be 95% and 96.7% respectively. Agreement with the phenotypic methods was 80%. The major lack of agreement was seen in detection of AmpC beta-lactamase in enterobacteriaceae and carbapenemase in non-fermenters. Agreement of the MT PCR with another multiplex PCR was found to be 87%. Amplicon sequencing confirmed the genotype detected by MT PCR in 94.2% of cases tested. Time to result was faster for the MT PCR but cost per test was higher. Conclusion: This study shows that with carefully chosen targets for detection of resistance genes in MDR GN, rapid and efficient identification is possible. MT PCR was sensitive and specific and likely more accurate than phenotypic methods. PMID:26464612

  18. Stable-isotope-labeled Histone Peptide Library for Histone Post-translational Modification and Variant Quantification by Mass Spectrometry *

    PubMed Central

    Lin, Shu; Wein, Samuel; Gonzales-Cope, Michelle; Otte, Gabriel L.; Yuan, Zuo-Fei; Afjehi-Sadat, Leila; Maile, Tobias; Berger, Shelley L.; Rush, John; Lill, Jennie R.; Arnott, David; Garcia, Benjamin A.

    2014-01-01

    To facilitate accurate histone variant and post-translational modification (PTM) quantification via mass spectrometry, we present a library of 93 synthetic peptides using Protein-Aqua™ technology. The library contains 55 peptides representing different modified forms from histone H3 peptides, 23 peptides representing H4 peptides, 5 peptides representing canonical H2A peptides, 8 peptides representing H2A.Z peptides, and peptides for both macroH2A and H2A.X. The PTMs on these peptides include lysine mono- (me1), di- (me2), and tri-methylation (me3); lysine acetylation; arginine me1; serine/threonine phosphorylation; and N-terminal acetylation. The library was subjected to chemical derivatization with propionic anhydride, a widely employed protocol for histone peptide quantification. Subsequently, the detection efficiencies were quantified using mass spectrometry extracted ion chromatograms. The library yields a wide spectrum of detection efficiencies, with more than 1700-fold difference between the peptides with the lowest and highest efficiencies. In this paper, we describe the impact of different modifications on peptide detection efficiencies and provide a resource to correct for detection biases among the 93 histone peptides. In brief, there is no correlation between detection efficiency and molecular weight, hydrophobicity, basicity, or modification type. The same types of modifications may have very different effects on detection efficiencies depending on their positions within a peptide. We also observed antagonistic effects between modifications. In a study of mouse trophoblast stem cells, we utilized the detection efficiencies of the peptide library to correct for histone PTM/variant quantification. For most histone peptides examined, the corrected data did not change the biological conclusions but did alter the relative abundance of these peptides. For a low-abundant histone H2A variant, macroH2A, the corrected data led to a different conclusion than the uncorrected data. The peptide library and detection efficiencies presented here may serve as a resource to facilitate studies in the epigenetics and proteomics fields. PMID:25000943
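
    A minimal sketch of how such library-derived detection efficiencies can be applied as correction factors is shown below; the peptide names, raw peak areas and efficiency values are invented placeholders, not values from the published library.

    ```python
    # Sketch: correct raw extracted-ion-chromatogram areas by relative detection efficiency.
    raw_area = {"H3K9me1": 1.2e7, "H3K9me2": 4.0e6, "H3K9me3": 8.0e5, "H3K9ac": 2.5e7}
    detection_efficiency = {"H3K9me1": 1.00, "H3K9me2": 0.40, "H3K9me3": 0.05, "H3K9ac": 1.80}

    corrected = {p: raw_area[p] / detection_efficiency[p] for p in raw_area}
    total = sum(corrected.values())
    for p in corrected:
        print(f"{p}: raw {raw_area[p]:.2e}, corrected relative abundance {corrected[p] / total:.2%}")
    ```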

  19. Recent advances in quantitative analysis of fluid interfaces in multiphase fluid flow measured by synchrotron-based x-ray microtomography

    NASA Astrophysics Data System (ADS)

    Schlueter, S.; Sheppard, A.; Wildenschild, D.

    2013-12-01

    Imaging of fluid interfaces in three-dimensional porous media via x-ray microtomography is an efficient means to test thermodynamically derived predictions on the relationship between capillary pressure, fluid saturation and specific interfacial area (Pc-Sw-Anw) in partially saturated porous media. Various experimental studies exist to date that validate the uniqueness of the Pc-Sw-Anw relationship under static conditions and, with current technological progress, direct imaging of moving interfaces under dynamic conditions is also becoming available. Image acquisition and subsequent image processing currently involve many steps, each prone to operator bias, like merging different scans of the same sample obtained at different beam energies into a single image or the generation of isosurfaces from the segmented multiphase image on which the interface properties are usually calculated. We demonstrate that with recent advancements in (i) image enhancement methods, (ii) multiphase segmentation methods and (iii) methods of structural analysis we can considerably decrease the time and cost of image acquisition and the uncertainty associated with the measurement of interfacial properties. In particular, we highlight three notorious problems in multiphase image processing and provide efficient solutions for each: (i) Due to noise, partial volume effects, and imbalanced volume fractions, automated histogram-based threshold detection methods frequently fail. However, these impairments can be mitigated with modern denoising methods, special treatment of gray value edges and adaptive histogram equalization, such that most of the standard methods for threshold detection (Otsu, fuzzy c-means, minimum error, maximum entropy) coincide at the same set of values. (ii) Partial volume effects due to blur may produce apparent water films around solid surfaces that alter the specific fluid-fluid interfacial area (Anw) considerably. In a synthetic test image some local segmentation methods like Bayesian Markov random field, converging active contours and watershed segmentation reduced the error in Anw associated with apparent water films from 21% to 6-11%. (iii) The generation of isosurfaces from the segmented data usually requires a lot of postprocessing in order to smooth the surface and check for consistency errors. This can be avoided by calculating specific interfacial areas directly on the segmented voxel image by means of Minkowski functionals, which is highly efficient and less error prone.
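
    As an illustration of the histogram-based threshold step discussed in point (i) above (a sketch, not the authors' workflow), the snippet below denoises a synthetic two-phase image with a median filter and then computes an Otsu threshold directly from the grey-level histogram.

    ```python
    # Sketch: denoising followed by histogram-based Otsu thresholding.
    import numpy as np
    from scipy.ndimage import median_filter

    def otsu_threshold(image, nbins=256):
        hist, edges = np.histogram(image.ravel(), bins=nbins)
        p = hist.astype(float) / hist.sum()
        centers = 0.5 * (edges[:-1] + edges[1:])
        w0 = np.cumsum(p)                       # class probabilities below threshold
        w1 = 1.0 - w0
        mu0 = np.cumsum(p * centers) / np.where(w0 > 0, w0, 1)
        mu1 = (np.sum(p * centers) - np.cumsum(p * centers)) / np.where(w1 > 0, w1, 1)
        between_var = w0 * w1 * (mu0 - mu1) ** 2
        return centers[np.argmax(between_var)]

    rng = np.random.default_rng(0)
    # Two-phase synthetic "tomogram" slice: bright grains on a darker background
    image = np.where(rng.random((128, 128)) > 0.6, 0.8, 0.3) + 0.05 * rng.normal(size=(128, 128))
    denoised = median_filter(image, size=3)    # stand-in for the denoising step
    print(f"Otsu threshold: {otsu_threshold(denoised):.3f}")
    ```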

  20. Effect of a dual inlet channel on cell loading in microfluidics.

    PubMed

    Yun, Hoyoung; Kim, Kisoo; Lee, Won Gu

    2014-11-01

    Unwanted sedimentation and attachment of cells onto the channel bottom often occur at the relatively large-scale inlets of conventional microfluidic channels as a result of gravity and fluid shear. Phenomena such as sedimentation have become recognized problems that must be accounted for if microfluidic experiments are to be performed properly, for example by calculating a meaningful output efficiency with respect to the real input. Here, we present a dual-inlet design method for reducing cell loss at the inlet of channels by adding a new "upstream inlet" to a single main inlet design. The simple addition of an upstream inlet can create a vertically layered sheath flow prior to the main inlet for cell loading. The bottom layer flow plays a critical role in preventing the cells from attaching to the bottom of the channel entrance, resulting in a low possibility of cell sedimentation at the main channel entrance. To provide proof-of-concept validation, we applied our design to a microfabricated flow cytometer system (μFCS) and compared the cell counting efficiency of the proposed μFCS with that of the previous single-inlet μFCS and conventional FCS. We used human white blood cells and fluorescent microspheres to quantitatively evaluate the rate of cell sedimentation in the main inlet and to measure fluorescence sensitivity at the detection zone of the flow cytometer microchip. Generating a sheath flow as the bottom layer meaningfully reduced the depth of field as well as the relative deviation of targets in the z-direction (compared to the x-y flow plane), leading to an increased counting sensitivity of fluorescent detection signals. Counting results using fluorescent microspheres showed both a 40% reduction in the rate of sedimentation and a 2-fold higher sensitivity in comparison with the single-inlet μFCS. The results of CD4(+) T-cell counting also showed that the proposed design results in a 25% decrease in the rate of cell sedimentation and a 28% increase in sensitivity when compared to the single-inlet μFCS. This method is simple and easy to use in design, yet requires no additional time or cost in fabrication. Furthermore, we expect that this approach could potentially be helpful for calculating exact cell loading and counting efficiency for a small input number of cells, such as primary cells and rare cells, in microfluidic channel applications.

  1. Calculation of multiphoton ionization processes

    NASA Technical Reports Server (NTRS)

    Chang, T. N.; Poe, R. T.

    1976-01-01

    We propose an accurate and efficient procedure in the calculation of multiphoton ionization processes. In addition to the calculational advantage, this procedure also enables us to study the relative contributions of the resonant and nonresonant intermediate states.

  2. Toolkits and Libraries for Deep Learning.

    PubMed

    Erickson, Bradley J; Korfiatis, Panagiotis; Akkus, Zeynettin; Kline, Timothy; Philbrick, Kenneth

    2017-08-01

    Deep learning is an important new area of machine learning which encompasses a wide range of neural network architectures designed to complete various tasks. In the medical imaging domain, example tasks include organ segmentation, lesion detection, and tumor classification. The most popular network architecture for deep learning for images is the convolutional neural network (CNN). Whereas traditional machine learning requires determination and calculation of features from which the algorithm learns, deep learning approaches learn the important features as well as the proper weighting of those features to make predictions for new data. In this paper, we will describe some of the libraries and tools that are available to aid in the construction and efficient execution of deep learning as applied to medical images.

  3. A SEARCH FOR MAGNESIUM IN EUROPA'S ATMOSPHERE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoerst, S. M.; Brown, M. E., E-mail: sarah.horst@colorado.edu

    Europa's tenuous atmosphere results from sputtering of the surface. The trace element composition of its atmosphere is therefore related to the composition of Europa's surface. Magnesium salts are often invoked to explain Galileo Near Infrared Mapping Spectrometer spectra of Europa's surface, thus magnesium may be present in Europa's atmosphere. We have searched for magnesium emission in the Hubble Space Telescope Faint Object Spectrograph archival spectra of Europa's atmosphere. Magnesium was not detected and we calculate an upper limit on the magnesium column abundance. This upper limit indicates that either Europa's surface is depleted in magnesium relative to sodium and potassium, or magnesium is not sputtered as efficiently, resulting in a relative depletion in its atmosphere.

  4. Apparatus and method for detecting gamma radiation

    DOEpatents

    Sigg, R.A.

    1994-12-13

    A high efficiency radiation detector is disclosed for measuring X-ray and gamma radiation from small-volume, low-activity liquid samples with an overall uncertainty better than 0.7% (one sigma SD). The radiation detector includes a hyperpure germanium well detector, a collimator, and a reference source. The well detector monitors gamma radiation emitted by the reference source and a radioactive isotope or isotopes in a sample source. The radiation from the reference source is collimated to avoid attenuation of reference source gamma radiation by the sample. Signals from the well detector are processed and stored, and the stored data is analyzed to determine the radioactive isotope(s) content of the sample. Minor self-attenuation corrections are calculated from chemical composition data. 4 figures.

  5. Pattern Discovery and Change Detection of Online Music Query Streams

    NASA Astrophysics Data System (ADS)

    Li, Hua-Fu

    In this paper, an efficient stream mining algorithm, called FTP-stream (Frequent Temporal Pattern mining of streams), is proposed to find the frequent temporal patterns over melody sequence streams. In the framework of the proposed algorithm, an effective bit-sequence representation is used to reduce the time and memory needed to slide the windows. The FTP-stream algorithm can calculate the support in only a single pass based on the concept of the bit-sequence representation, taking advantage of the "left" (shift) and "and" (bitwise AND) operations of the representation. Experiments show that the proposed algorithm scans the music query stream only once, runs significantly faster, and consumes less memory than existing algorithms such as SWFI-stream and Moment.
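
    A hedged sketch of the bit-sequence idea is given below (illustrative, not the FTP-stream implementation): each item keeps one bit per transaction in the sliding window, sliding the window is a left shift, and the support of an itemset is the popcount of the AND of its members' bit sequences.

    ```python
    # Sketch: bit-sequence representation of a sliding window over a transaction stream.
    WINDOW = 8                      # number of transactions kept in the window
    MASK = (1 << WINDOW) - 1

    def new_transaction(bitseqs, items):
        """Shift every bit sequence left by one and set the new bit for items
        present in the incoming transaction (the oldest bit falls off the mask)."""
        for item in bitseqs:
            bitseqs[item] = (bitseqs[item] << 1) & MASK
        for item in items:
            bitseqs[item] = bitseqs.get(item, 0) | 1

    def support(bitseqs, itemset):
        combined = MASK
        for item in itemset:
            combined &= bitseqs.get(item, 0)    # "and" operation of the representation
        return bin(combined).count("1")         # support within the current window

    stream = [{"C", "E", "G"}, {"C", "E"}, {"D", "F"}, {"C", "E", "G"},
              {"C", "G"}, {"E", "G"}, {"C", "E", "G"}, {"A"}, {"C", "E"}]
    bitseqs = {}
    for tx in stream:
        new_transaction(bitseqs, tx)
    print(support(bitseqs, {"C", "E"}))         # frequency of {C, E} in the last 8 transactions
    ```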

  6. An Invisible Text Watermarking Algorithm using Image Watermark

    NASA Astrophysics Data System (ADS)

    Jalil, Zunera; Mirza, Anwar M.

    Copyright protection of digital content is very necessary in today's digital world, with efficient communication media such as the internet. Text is the dominant part of internet content, yet very limited techniques are available for text protection. This paper presents a novel algorithm for the protection of plain text, which embeds the logo image of the copyright owner in the text; this logo can later be extracted from the text to prove ownership. The algorithm is robust against content-preserving modifications and, at the same time, is capable of detecting malicious tampering. Experimental results demonstrate the effectiveness of the algorithm against tampering attacks by calculating normalized Hamming distances. The results are also compared with a recent work in this domain.
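
    The evaluation metric mentioned above is straightforward to compute; the snippet below shows a normalized Hamming distance between embedded and extracted watermark bits (0 means identical, 1 completely different), with made-up bit strings.

    ```python
    # Sketch: normalized Hamming distance between original and extracted watermark bits.
    def normalized_hamming(original_bits, extracted_bits):
        assert len(original_bits) == len(extracted_bits)
        mismatches = sum(a != b for a, b in zip(original_bits, extracted_bits))
        return mismatches / len(original_bits)

    embedded  = "1011001110001011"            # watermark (logo) bits before attack
    extracted = "1011001010001111"            # bits recovered after a tampering attack
    print(normalized_hamming(embedded, extracted))   # 0.125 for this example
    ```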

  7. Ranking of options of real estate use by expert assessments mathematical processing

    NASA Astrophysics Data System (ADS)

    Lepikhina, O. Yu; Skachkova, M. E.; Mihaelyan, T. A.

    2018-05-01

    The article is devoted to the development of a real estate assessment concept. For conditions in which a property can be used in multiple ways, a method based on calculating an integral indicator of each option's efficiency is proposed. To calculate the weights of the efficiency criteria, expert assessments are processed with the Analytic Hierarchy Process and its mathematical apparatus. The method allows alternative types of real estate use to be ranked according to their efficiency. The method was applied to one of the land parcels located in the Primorsky district of Saint Petersburg.
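
    A small sketch of the Analytic Hierarchy Process step is given below (illustrative only): an invented 3x3 pairwise-comparison matrix is turned into criterion weights via its principal eigenvector, a consistency ratio is checked, and an integral efficiency indicator is computed for one hypothetical land-use option.

    ```python
    # Sketch: AHP criterion weights from a pairwise-comparison matrix.
    import numpy as np

    A = np.array([[1.0, 3.0, 5.0],          # pairwise judgements for three
                  [1/3, 1.0, 2.0],          # efficiency criteria (hypothetical)
                  [1/5, 1/2, 1.0]])

    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    weights = np.abs(eigvecs[:, k].real)
    weights /= weights.sum()                 # criterion weights, sum to 1

    n = A.shape[0]
    ci = (eigvals.real[k] - n) / (n - 1)     # consistency index
    cr = ci / 0.58                           # random index for n = 3 is ~0.58
    print(weights, f"CR = {cr:.3f}")         # CR < 0.1 means acceptably consistent

    # Integral efficiency indicator of one land-use option given its criterion scores
    scores = np.array([0.7, 0.5, 0.9])       # hypothetical normalised scores
    print(f"integral indicator: {weights @ scores:.3f}")
    ```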

  8. Numerical modelling of high efficiency InAs/GaAs intermediate band solar cell

    NASA Astrophysics Data System (ADS)

    Imran, Ali; Jiang, Jianliang; Eric, Debora; Yousaf, Muhammad

    2018-01-01

    Quantum dot (QD) intermediate band solar cells (IBSCs) are among the most attractive candidates for the next generation of photovoltaic applications. In this paper, a theoretical model of an InAs/GaAs device is proposed, in which the effect of varying the thicknesses of the intrinsic and IB layers on the efficiency of the solar cell is calculated using detailed balance theory. The IB energies have been optimized for different IB layer thicknesses. A maximum efficiency of 46.6% is calculated for the IB material under maximum optical concentration.

  9. The Power of Low Back Pain Trials: A Systematic Review of Power, Sample Size, and Reporting of Sample Size Calculations Over Time, in Trials Published Between 1980 and 2012.

    PubMed

    Froud, Robert; Rajendran, Dévan; Patel, Shilpa; Bright, Philip; Bjørkli, Tom; Eldridge, Sandra; Buchbinder, Rachelle; Underwood, Martin

    2017-06-01

    A systematic review of nonspecific low back pain trials published between 1980 and 2012. To explore what proportion of trials have been powered to detect different bands of effect size; whether there is evidence that sample size in low back pain trials has been increasing; what proportion of trial reports include a sample size calculation; and whether the likelihood of reporting sample size calculations has increased. Clinical trials should have a sample size sufficient to detect a minimally important difference for a given power and type I error rate. An underpowered trial is one in which the probability of a type II error is too high. Meta-analyses do not mitigate underpowered trials. Reviewers independently abstracted data on sample size at the point of analysis, whether a sample size calculation was reported, and year of publication. Descriptive analyses were used to explore the ability to detect effect sizes, and regression analyses to explore the relationship between sample size, or reporting of sample size calculations, and time. We included 383 trials. One-third were powered to detect a standardized mean difference of less than 0.5, and 5% were powered to detect less than 0.3. The average sample size was 153 people, which increased only slightly (∼4 people/yr) from 1980 to 2000, and declined slightly (∼4.5 people/yr) from 2005 to 2011 (P < 0.00005). Sample size calculations were reported in 41% of trials. The odds of reporting a sample size calculation (compared to not reporting one) increased until 2005 and then declined (Equation is included in full-text article.). Sample sizes in back pain trials and the reporting of sample size calculations may need to be increased. It may be justifiable to power a trial to detect only large effects in the case of novel interventions. Level of Evidence: 3.
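
    For reference, the standard calculation behind the "powered to detect a standardised mean difference" statements is sketched below for a two-group comparison of means with equal allocation and a 5% two-sided alpha; it is a textbook approximation, not the formula used by any particular trial in the review.

    ```python
    # Sketch: per-group sample size needed to detect a standardised mean difference (SMD).
    from scipy.stats import norm

    def n_per_group(smd, power=0.80, alpha=0.05):
        z_a = norm.ppf(1 - alpha / 2)
        z_b = norm.ppf(power)
        return 2 * (z_a + z_b) ** 2 / smd ** 2

    for smd in (0.3, 0.5, 0.8):
        print(f"SMD {smd}: about {n_per_group(smd):.0f} participants per group")
    # SMD 0.5 needs ~63 per group (~126 in total), close to the average trial
    # size reported above; SMD 0.3 needs ~175 per group.
    ```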

  10. Simulation of the real efficiencies of high-efficiency silicon solar cells

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sachenko, A. V., E-mail: sach@isp.kiev.ua; Skrebtii, A. I.; Korkishko, R. M.

    The temperature dependences of the efficiency η of high-efficiency solar cells based on silicon are calculated. It is shown that the temperature coefficient of decreasing η with increasing temperature decreases as the surface recombination rate decreases. The photoconversion efficiency of high-efficiency silicon-based solar cells operating under natural (field) conditions is simulated. Their operating temperature is determined self-consistently by simultaneously solving the photocurrent, photovoltage, and energy-balance equations. Radiative and convective cooling mechanisms are taken into account. It is shown that the operating temperature of solar cells is higher than the ambient temperature even at very high convection coefficients (~300 W/(m^2 K)). Accordingly, the photoconversion efficiency in this case is lower than when the temperature of the solar cells is equal to the ambient temperature. The calculated dependences for the open-circuit voltage and the photoconversion efficiency of high-quality silicon solar cells under concentrated illumination are discussed taking into account the actual temperature of the solar cells.

  11. Optical correlation identification technology applied in underwater laser imaging target identification

    NASA Astrophysics Data System (ADS)

    Yao, Guang-tao; Zhang, Xiao-hui; Ge, Wei-long

    2012-01-01

    Underwater laser imaging detection is an effective method of detecting short-range targets underwater and an important complement to sonar detection. With the development of underwater laser imaging and underwater vehicle technology, automatic underwater target identification has received more and more attention and is a difficult research topic in underwater optical imaging information processing. Today, automatic underwater target identification based on optical imaging is usually realized with digital-circuit software programming, whose algorithm implementation and control are very flexible. However, the optical imaging information consists of 2D or even 3D images, so the amount of data to process is large; purely digital electronic hardware therefore needs long identification times and can hardly meet real-time requirements. Computer parallel processing can improve the identification speed, but it increases complexity, size and power consumption. This paper attempts to apply optical correlation identification technology to automatic underwater target identification. Optical correlation identification exploits the Fourier-transform property of a Fourier lens, which can perform the Fourier transform of image information on the nanosecond level, while optical spatial interconnection offers parallel, high-speed, large-capacity and high-resolution computation; combined with the flexible calculation and control of digital circuits, this yields an optoelectronic hybrid identification mode. We derive the theoretical formulation of correlation identification, analyze the principle of optical correlation identification, and write a MATLAB simulation program. Using single-frame images obtained by underwater range-gated laser imaging, and by identifying and locating targets at different positions, we effectively improve the speed and localization efficiency of target identification and preliminarily validate the feasibility of this method.
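
    A numerical stand-in for the correlation step (a Python sketch, not the authors' MATLAB program or the optical implementation) is shown below: the Fourier-plane product of the scene with the conjugate spectrum of a reference target is inverted to a correlation plane whose peak gives the target position; the images are synthetic.

    ```python
    # Sketch: FFT-based matched-filter correlation for target localization.
    import numpy as np

    rng = np.random.default_rng(0)
    scene = rng.normal(0, 0.1, (128, 128))
    target = np.zeros((9, 9)); target[2:7, 2:7] = 1.0     # simple reference shape
    scene[40:49, 70:79] += target                         # target placed at (40, 70)

    ref = np.zeros_like(scene); ref[:9, :9] = target      # zero-padded reference
    corr = np.fft.ifft2(np.fft.fft2(scene) * np.conj(np.fft.fft2(ref))).real

    peak = np.unravel_index(np.argmax(corr), corr.shape)
    print(f"correlation peak at {peak}")                  # ~ (40, 70) for this example
    ```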

  12. Efficient Personalized Mispronunciation Detection of Taiwanese-Accented English Speech Based on Unsupervised Model Adaptation and Dynamic Sentence Selection

    ERIC Educational Resources Information Center

    Wu, Chung-Hsien; Su, Hung-Yu; Liu, Chao-Hong

    2013-01-01

    This study presents an efficient approach to personalized mispronunciation detection of Taiwanese-accented English. The main goal of this study was to detect frequently occurring mispronunciation patterns of Taiwanese-accented English instead of scoring English pronunciations directly. The proposed approach quickly identifies personalized…

  13. Shape Engineering Boosts Magnetic Mesoporous Silica Nanoparticle-Based Isolation and Detection of Circulating Tumor Cells.

    PubMed

    Chang, Zhi-Min; Wang, Zheng; Shao, Dan; Yue, Juan; Xing, Hao; Li, Li; Ge, Mingfeng; Li, Mingqiang; Yan, Huize; Hu, Hanze; Xu, Qiaobing; Dong, Wen-Fei

    2018-04-04

    Magnetic mesoporous silica nanoparticles (M-MSNs) are attractive candidates for the immunomagnetic isolation and detection of circulating tumor cells (CTCs). Understanding of the interactions between the effects of the shape of M-MSNs and CTCs is crucial to maximize the binding capacity and capture efficiency as well as to facilitate the sensitivity and efficiency of detection. In this work, fluorescent M-MSNs were rationally designed with sphere and rod morphologies while retaining their robust fluorescence and uniform surface functionality. After conjugation with the antibody of epithelial cell adhesion molecule (EpCAM), both of the differently shaped M-MSNs-EpCAM obtained achieved efficient enrichment of CTCs and fluorescent-based detection. Importantly, rodlike M-MSNs exhibited faster immunomagnetic isolation as well as better performance in the isolation and detection of CTCs in spiked cells and real clinical blood samples than those of their spherelike counterparts. Our results showed that shape engineering contributes positively toward immunomagnetic isolation, which might open new avenues to the rational design of magnetic-fluorescent nanoprobes for the sensitive and efficient isolation and detection of CTCs.

  14. Efficient cooperative compressive spectrum sensing by identifying multi-candidate and exploiting deterministic matrix

    NASA Astrophysics Data System (ADS)

    Li, Jia; Wang, Qiang; Yan, Wenjie; Shen, Yi

    2015-12-01

    Cooperative spectrum sensing exploits spatial diversity to improve the detection of occupied channels in cognitive radio networks (CRNs). Cooperative compressive spectrum sensing (CCSS), utilizing the sparsity of channel occupancy, further improves the efficiency by reducing the number of reports without degrading detection performance. In this paper, we first and mainly propose multi-candidate orthogonal matrix matching pursuit (MOMMP) algorithms to efficiently and effectively detect occupied channels at the fusion center (FC), where multi-candidate identification and orthogonal projection are used, respectively, to reduce the number of required iterations and to improve the probability of exact identification. Second, two common but different approaches, based on a threshold and on a Gaussian distribution, are introduced to realize the multi-candidate identification. Moreover, to improve the detection accuracy and energy efficiency, we propose the matrix construction based on shrinkage and gradient descent (MCSGD) algorithm to provide a deterministic filter coefficient matrix of low t-average coherence. Finally, several numerical simulations validate that our proposals provide satisfactory performance, with a higher probability of detection, a lower probability of false alarm and less detection time.
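
    For context, the sketch below shows a generic orthogonal matching pursuit (OMP) loop for recovering a sparse channel-occupancy vector from compressed reports; the MOMMP algorithms of the paper build on this kind of loop with multi-candidate identification, which is not reproduced here. The measurement matrix and signal are random stand-ins.

    ```python
    # Sketch: plain orthogonal matching pursuit for sparse channel-occupancy recovery.
    import numpy as np

    def omp(Phi, y, sparsity):
        residual = y.copy()
        support = []
        for _ in range(sparsity):
            # pick the column most correlated with the residual (single candidate)
            idx = int(np.argmax(np.abs(Phi.T @ residual)))
            if idx not in support:
                support.append(idx)
            x_s, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)   # orthogonal projection
            residual = y - Phi[:, support] @ x_s
        x = np.zeros(Phi.shape[1]); x[support] = x_s
        return x

    rng = np.random.default_rng(0)
    n_channels, n_reports, n_occupied = 64, 20, 3
    x_true = np.zeros(n_channels); x_true[rng.choice(n_channels, n_occupied, replace=False)] = 1.0
    Phi = rng.normal(size=(n_reports, n_channels)) / np.sqrt(n_reports)
    y = Phi @ x_true + 0.01 * rng.normal(size=n_reports)

    x_hat = omp(Phi, y, n_occupied)
    print("occupied:", np.flatnonzero(x_true), "detected:", np.flatnonzero(np.abs(x_hat) > 0.5))
    ```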

  15. Efficient Load Balancing and Data Remapping for Adaptive Grid Calculations

    NASA Technical Reports Server (NTRS)

    Oliker, Leonid; Biswas, Rupak

    1997-01-01

    Mesh adaption is a powerful tool for efficient unstructured-grid computations but causes load imbalance among processors on a parallel machine. We present a novel method to dynamically balance the processor workloads with a global view. This paper presents, for the first time, the implementation and integration of all major components within our dynamic load balancing strategy for adaptive grid calculations. Mesh adaption, repartitioning, processor assignment, and remapping are critical components of the framework that must be accomplished rapidly and efficiently so as not to cause a significant overhead to the numerical simulation. Previous results indicated that mesh repartitioning and data remapping are potential bottlenecks for performing large-scale scientific calculations. We resolve these issues and demonstrate that our framework remains viable on a large number of processors.

  16. Leak detection by mass balance effective for Norman Wells line

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liou, J.C.P.

    Mass-balance calculations for leak detection have been shown to be as effective as a leading software system, in a comparison based on a major Canadian crude-oil pipeline. The calculations and NovaCorp's Leakstop software each detected leaks of approximately 4% or greater on Interprovincial Pipe Line (IPL) Inc.'s Norman Wells pipeline. Insufficient data exist to assess the performance of the two methods for leaks smaller than 4%. Pipeline leak detection using such software-based systems is common. Their effectiveness is measured by how small and how quickly a leak can be detected. The algorithms used and measurement uncertainties determine leak detectability.
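
    A minimal sketch of the mass-balance principle (not the Leakstop algorithm) is given below: a leak is flagged when inlet flow minus outlet flow, corrected for the change in line pack, exceeds an alarm threshold for several consecutive intervals; the flow values and threshold are invented.

    ```python
    # Sketch: mass-balance leak alarm with a persistence requirement.
    def mass_balance_alarm(q_in, q_out, dlinepack_dt, threshold, persistence=3):
        """q_in, q_out, dlinepack_dt: equal-length lists in m^3/h."""
        consecutive = 0
        for qin, qout, dlp in zip(q_in, q_out, dlinepack_dt):
            imbalance = qin - qout - dlp           # unaccounted-for volume rate
            consecutive = consecutive + 1 if imbalance > threshold else 0
            if consecutive >= persistence:
                return True                         # sustained imbalance -> leak alarm
        return False

    # Hypothetical hourly data with a ~4% leak appearing halfway through
    q_in  = [1000, 1001,  999, 1000, 1000, 1001, 1000]
    q_out = [ 998, 1000, 1001,  960,  958,  959,  961]
    dlp   = [   1,    0,   -1,    1,    0,    1,    0]
    print(mass_balance_alarm(q_in, q_out, dlp, threshold=20.0))
    ```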

  17. Guidelines to indirectly measure and enhance detection efficiency of stationary PIT tag interrogation systems in streams

    USGS Publications Warehouse

    Connolly, Patrick J.; Wolf, Keith; O'Neal, Jennifer S.

    2010-01-01

    With increasing use of passive integrated transponder (PIT) tags and reliance on stationary PIT tag interrogation systems to monitor fish populations, guidelines are offered to inform users how best to use limited funding and human resources to create functional systems that maximize a desired level of detection and precision. The estimators of detection efficiency and their variability as described by Connolly et al. (2008) are explored over a span of likely performance metrics. These estimators were developed to estimate detection efficiency without relying on a known number of fish passing the system. I present graphical displays of the results derived from these estimators to show the potential efficiency and precision to be gained by adding an array or by increasing the number of PIT-tagged fish expected to move past an interrogation system.
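
    A hedged sketch of a standard two-array efficiency estimate is shown below, in the spirit of (but not necessarily identical to) the Connolly et al. (2008) estimators: the efficiency of the upstream array is estimated from fish known to have passed it because they were detected downstream, with a binomial confidence interval; the counts are made up.

    ```python
    # Sketch: detection efficiency of a PIT tag array estimated from paired detections.
    import math

    n_b = 120        # fish detected at the downstream array B
    n_ab = 102       # of those, fish also detected at the upstream array A

    p_a = n_ab / n_b                                   # efficiency of array A
    se  = math.sqrt(p_a * (1 - p_a) / n_b)             # binomial standard error
    print(f"array A efficiency: {p_a:.2f} +/- {1.96 * se:.2f} (95% CI)")

    p_b = 0.80                                         # assumed efficiency of array B
    system = 1 - (1 - p_a) * (1 - p_b)                 # probability of detection at either array
    print(f"whole-system detection efficiency: {system:.2f}")
    ```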

  18. Effect of thermal annealing on carrier localization and efficiency of spin detection in GaAsSb epilayers grown on InP

    NASA Astrophysics Data System (ADS)

    Zhang, Bin; Chen, Cheng; Han, Junbo; Jin, Chuan; Chen, Jianxin; Wang, Xingjun

    2018-04-01

    The effect of thermal annealing on the optical and spin properties of GaAs0.44Sb0.56 epilayers grown on InP was investigated via photoreflectance, power-dependent and time-resolved photoluminescence spectroscopy, and optical orientation measurements. Carrier localization and the optical spin detection efficiency increase with annealing temperature up to 600 °C. The enhancement of the spin detection efficiency is attributed both to the shortening of the electron lifetime and to the prolonging of the spin lifetime resulting from the enhanced carrier localization induced by the annealing process. Our results provide an approach to enhancing the spin detection efficiency of GaAsSb with PL emission in the 1.55 μm region.

  19. Evaluation of Fish Movements, Migration Patterns, and Population Abundance with Streamwidth PIT Tag Interrogation Systems, Final Report 2002.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zydlewski, Gayle; Winter, Christiane; McClanahan, Dee

    2003-02-01

    Two remote Streamwidth PIT tag Interrogation systems (SPIs) were operated continuously for over one year to test the feasibility of these systems for generating movement, migration, survival, and smolt production estimates for salmonids. A total of 1,588 juvenile (< 100 mm FL) naturally produced salmonids (7 coho salmon, 482 cutthroat trout, and 1,099 steelhead) were PIT tagged above the upstream-most SPI (9 sites of approximately 1 linear km each) in Fall 2001. Age at tagging for wild-caught cutthroat and steelhead was 1 year. SPIs were operating before any PIT tagged fish were released in the creek. Over 390,000 detections were recorded from October 2001 to 31 July 2002. Efficiencies were site dependent, but overall detection efficiency for the creek was 97% with a 95% confidence interval of 91-100%. PIT tag detection efficiency ranged from 55% to 100% depending on the SPI and varied throughout the year, with average efficiencies of 73% and 89%. SPI efficiency of PIT tag detection was not completely dependent on electronics noise levels or environmental conditions. Fish from all tagging locations were detected at the SPIs. Steelhead and cutthroat trout were primarily detected moving in the Spring (April-June), coincident with the anticipated smolt migration. Steelhead were also detected moving past SPIs in lower numbers in the Fall and Winter. Travel time between SPIs (downstream movement) was highly dependent on time of year. Travel time in the Spring was significantly faster (34.4 ± 7.0 hours) for all species than during any other time of year (763.1 ± 267.0 hours). Steelhead and cutthroat migrating in the Spring were the same age as those that did not migrate in the Spring. Peaks of steelhead migration recorded at the two SPIs were on 5/11 and 5/12, and the peak at the screw trap was recorded on 5/17. The steelhead smolt production estimate using SPIs (3,802 with a 95% confidence interval of 3,440-4,245) was similar to that obtained using more standard screw trap methods (approximately 5,400). All species used the faster-moving/deeper section of the creek at both SPIs. A backpack PIT tag detector was also developed and used as another remote 'recapture' for additional accuracy in estimating population survival and recapture probability. This unit was used, at an approximate efficiency of 24%, to survey the creek after the Spring migration; twenty-five individual fish were re-located. All PIT tag data were used to calculate survival and recapture probabilities using the Cormack-Jolly-Seber population model. Survival for steelhead was high, and recapture probability depended greatly on season. Probability of recapture was highest in Spring (29.5%) and relatively low in all other seasons (< 7% in Fall, Winter, and Summer). Wild steelhead PIT tagged in the field and returned to the laboratory had a tag retention rate of 97.6%. A laboratory study was designed to determine the effects of three PIT tag sizes (12 mm, 20 mm, and 23 mm) on survival and growth of individuals. Survival after surgical implantation of 23 mm PIT tags was > 98% for both coho salmon and steelhead. Retention of 23 mm PIT tags was 100% for coho salmon and 89% for steelhead. For both coho and steelhead, growth rates during the first month were affected by tagging, but by the end of two months growth effects had equalized across all tag sizes. Life history characteristics quantified with SPI techniques are comparable to those from standard techniques. For example, peaks of Spring migration for steelhead and cutthroat were remarkably similar to those reported from the screw trap. These techniques will enable application of less laborious methods that are more accurate at estimating life history parameters.
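
    As a simple illustration of how detection efficiency feeds such abundance estimates, the sketch below expands a detection count by an assumed efficiency; the numbers are hypothetical and this is not the report's production estimator.

        # Expanding the number of tagged fish detected at an interrogation
        # system by its detection efficiency estimates how many actually passed.
        def expand_by_efficiency(n_detected, efficiency):
            if not 0.0 < efficiency <= 1.0:
                raise ValueError("efficiency must be in (0, 1]")
            return n_detected / efficiency

        # Example (hypothetical numbers): 3,500 detections at 92% efficiency
        # imply roughly 3,804 fish passing the system.
        print(round(expand_by_efficiency(3500, 0.92)))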

  20. Evaluation of Fish Movements, Migration Patterns and Populations Abundance with Streamwidth PIT Tag Interrogation Systems, Final Report 2002.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zydlewski, Gayle B.; Casey, Sean

    2003-02-01

    Two remote Streamwidth PIT tag Interrogation systems (SPIs) were operated continuously for over one year to test the feasibility of these systems for generating movement, migration, survival, and smolt production estimates for salmonids. A total of 1,588 juvenile (< 100 mm FL) naturally produced salmonids (7 coho salmon, 482 cutthroat trout, and 1,099 steelhead) were PIT tagged above the upstream-most SPI (9 sites of approximately 1 linear km each) in Fall 2001. Age at tagging for wild-caught cutthroat and steelhead was 1 year. SPIs were operating before any PIT tagged fish were released in the creek. Over 390,000 detections were recorded from October 2001 to 31 July 2002. Efficiencies were site dependent, but overall detection efficiency for the creek was 97% with a 95% confidence interval of 91-100%. PIT tag detection efficiency ranged from 55% to 100% depending on the SPI and varied throughout the year, with average efficiencies of 73% and 89%. SPI efficiency of PIT tag detection was not completely dependent on electronics noise levels or environmental conditions. Fish from all tagging locations were detected at the SPIs. Steelhead and cutthroat trout were primarily detected moving in the Spring (April-June), coincident with the anticipated smolt migration. Steelhead were also detected moving past SPIs in lower numbers in the Fall and Winter. Travel time between SPIs (downstream movement) was highly dependent on time of year. Travel time in the Spring was significantly faster (34.4 ± 7.0 hours) for all species than during any other time of year (763.1 ± 267.0 hours). Steelhead and cutthroat migrating in the Spring were the same age as those that did not migrate in the Spring. Peaks of steelhead migration recorded at the two SPIs were on 5/11 and 5/12, and the peak at the screw trap was recorded on 5/17. The steelhead smolt production estimate using SPIs (3,802 with a 95% confidence interval of 3,440-4,245) was similar to that obtained using more standard screw trap methods (approximately 5,400). All species used the faster-moving/deeper section of the creek at both SPIs. A backpack PIT tag detector was also developed and used as another remote 'recapture' for additional accuracy in estimating population survival and recapture probability. This unit was used, at an approximate efficiency of 24%, to survey the creek after the Spring migration; twenty-five individual fish were re-located. All PIT tag data were used to calculate survival and recapture probabilities using the Cormack-Jolly-Seber population model. Survival for steelhead was high, and recapture probability depended greatly on season. Probability of recapture was highest in Spring (29.5%) and relatively low in all other seasons (< 7% in Fall, Winter, and Summer). Wild steelhead PIT tagged in the field and returned to the laboratory had a tag retention rate of 97.6%. A laboratory study was designed to determine the effects of three PIT tag sizes (12 mm, 20 mm, and 23 mm) on survival and growth of individuals. Survival after surgical implantation of 23 mm PIT tags was > 98% for both coho salmon and steelhead. Retention of 23 mm PIT tags was 100% for coho salmon and 89% for steelhead. For both coho and steelhead, growth rates during the first month were affected by tagging, but by the end of two months growth effects had equalized across all tag sizes. Life history characteristics quantified with SPI techniques are comparable to those from standard techniques. For example, peaks of Spring migration for steelhead and cutthroat were remarkably similar to those reported from the screw trap. These techniques will enable application of less laborious methods that are more accurate at estimating life history parameters.
