Sample records for source test method

  1. Ultrasensitive surveillance of sensors and processes

    DOEpatents

    Wegerich, Stephan W.; Jarman, Kristin K.; Gross, Kenneth C.

    2001-01-01

    A method and apparatus for monitoring a source of data for determining an operating state of a working system. The method includes determining a sensor (or source of data) arrangement associated with monitoring the source of data for a system; activating a first method for performing a sequential probability ratio test if the data source includes a single data (sensor) source; activating a second method for performing a regression sequential probability ratio testing procedure if the arrangement includes a pair of sensors (data sources) with signals which are linearly or non-linearly related; activating a third method for performing a bounded angle ratio test procedure if the sensor arrangement includes multiple sensors; and utilizing at least one of the first, second and third methods to accumulate sensor signals and determine the operating state of the system.
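
    The single-sensor branch is Wald's classical sequential probability ratio test (SPRT). As a minimal sketch of that test only (not the patented procedure; the hypothesized means, noise level, and error rates are assumptions of this example), an SPRT for detecting a mean shift in a Gaussian sensor signal might look like:

    ```python
    import math

    def sprt(samples, mu0=0.0, mu1=1.0, sigma=1.0, alpha=0.01, beta=0.01):
        """Wald SPRT for H0: mean = mu0 vs. H1: mean = mu1 on Gaussian data.

        Returns "H0", "H1", or "continue" after consuming the samples.
        """
        upper = math.log((1.0 - beta) / alpha)  # cross above: accept H1
        lower = math.log(beta / (1.0 - alpha))  # cross below: accept H0
        llr = 0.0
        for x in samples:
            # Log-likelihood-ratio increment for one Gaussian observation.
            llr += (mu1 - mu0) * (x - 0.5 * (mu0 + mu1)) / sigma**2
            if llr >= upper:
                return "H1"  # degraded operating state declared
            if llr <= lower:
                return "H0"  # nominal operating state declared
        return "continue"    # evidence still inconclusive
    ```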

  2. Ultrasensitive surveillance of sensors and processes

    DOEpatents

    Wegerich, Stephan W.; Jarman, Kristin K.; Gross, Kenneth C.

    1999-01-01

    A method and apparatus for monitoring a source of data for determining an operating state of a working system. The method includes determining a sensor (or source of data) arrangement associated with monitoring the source of data for a system; activating a first method for performing a sequential probability ratio test if the data source includes a single data (sensor) source; activating a second method for performing a regression sequential probability ratio testing procedure if the arrangement includes a pair of sensors (data sources) with signals which are linearly or non-linearly related; activating a third method for performing a bounded angle ratio test procedure if the sensor arrangement includes multiple sensors; and utilizing at least one of the first, second and third methods to accumulate sensor signals and determine the operating state of the system.

  3. 40 CFR Appendix A to Part 63 - Test Methods Pollutant Measurement Methods From Various Waste Media

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 15 2013-07-01 2013-07-01 false Test Methods Pollutant Measurement... POLLUTANTS FOR SOURCE CATEGORIES (CONTINUED) Pt. 63, App. A Appendix A to Part 63—Test Methods Pollutant... analyte spiking? 13.0 How do I conduct tests at similar sources? Optional Requirements 14.0 How do I use and...

  4. 40 CFR Appendix A to Part 63 - Test Methods Pollutant Measurement Methods From Various Waste Media

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 15 2014-07-01 2014-07-01 false Test Methods Pollutant Measurement... POLLUTANTS FOR SOURCE CATEGORIES (CONTINUED) Pt. 63, App. A Appendix A to Part 63—Test Methods Pollutant... analyte spiking? 13.0 How do I conduct tests at similar sources? Optional Requirements 14.0 How do I use and...

  5. 40 CFR Appendix A to Part 63 - Test Methods Pollutant Measurement Methods From Various Waste Media

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 14 2011-07-01 2011-07-01 false Test Methods Pollutant Measurement... POLLUTANTS FOR SOURCE CATEGORIES (CONTINUED) Pt. 63, App. A Appendix A to Part 63—Test Methods Pollutant... analyte spiking? 13.0 How do I conduct tests at similar sources? Optional Requirements 14.0 How do I use and...

  6. 40 CFR Appendix A to Part 63 - Test Methods Pollutant Measurement Methods From Various Waste Media

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 15 2012-07-01 2012-07-01 false Test Methods Pollutant Measurement... POLLUTANTS FOR SOURCE CATEGORIES (CONTINUED) Pt. 63, App. A Appendix A to Part 63—Test Methods Pollutant... analyte spiking? 13.0 How do I conduct tests at similar sources? Optional Requirements 14.0 How do I use and...

  7. Method for non-destructive testing

    DOEpatents

    Akers, Douglas W. [Idaho Falls, ID]

    2011-08-30

    A non-destructive testing method may include providing a source material that emits positrons in response to bombardment of the source material with photons. The source material is exposed to photons. The source material is positioned adjacent to the specimen, the specimen being exposed to at least some of the positrons emitted by the source material. Annihilation gamma rays emitted by the specimen are detected.

  8. 49 CFR Appendix B to Part 238 - Test Methods and Performance Criteria for the Flammability and Smoke Emission Characteristics of...

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... Flammability of Flexible Cellular Materials Using a Radiant Heat Energy Source. (v) ASTM E 119-00a, Standard... Method for Surface Flammability of Materials Using a Radiant Heat Energy Source. (vii) ASTM E 648-00, Standard Test Method for Critical Radiant Flux of Floor-Covering Systems Using a Radiant Heat Energy Source...

  9. 49 CFR Appendix B to Part 238 - Test Methods and Performance Criteria for the Flammability and Smoke Emission Characteristics of...

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... Flammability of Flexible Cellular Materials Using a Radiant Heat Energy Source. (v) ASTM E 119-00a, Standard... Method for Surface Flammability of Materials Using a Radiant Heat Energy Source. (vii) ASTM E 648-00, Standard Test Method for Critical Radiant Flux of Floor-Covering Systems Using a Radiant Heat Energy Source...

  10. 49 CFR Appendix B to Part 238 - Test Methods and Performance Criteria for the Flammability and Smoke Emission Characteristics of...

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Flammability of Flexible Cellular Materials Using a Radiant Heat Energy Source. (v) ASTM E 119-00a, Standard... Method for Surface Flammability of Materials Using a Radiant Heat Energy Source. (vii) ASTM E 648-00, Standard Test Method for Critical Radiant Flux of Floor-Covering Systems Using a Radiant Heat Energy Source...

  11. 49 CFR Appendix B to Part 238 - Test Methods and Performance Criteria for the Flammability and Smoke Emission Characteristics of...

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... Flammability of Flexible Cellular Materials Using a Radiant Heat Energy Source. (v) ASTM E 119-00a, Standard... Method for Surface Flammability of Materials Using a Radiant Heat Energy Source. (vii) ASTM E 648-00, Standard Test Method for Critical Radiant Flux of Floor-Covering Systems Using a Radiant Heat Energy Source...

  12. A multiwave range test for obstacle reconstructions with unknown physical properties

    NASA Astrophysics Data System (ADS)

    Potthast, Roland; Schulz, Jochen

    2007-08-01

    We develop a new multiwave version of the range test for shape reconstruction in inverse scattering theory. The range test [R. Potthast, et al., A 'range test' for determining scatterers with unknown physical properties, Inverse Problems 19(3) (2003) 533-547] was originally proposed to obtain knowledge about an unknown scatterer when the far field pattern for only one plane wave is given. Here, we extend the method to the case of multiple waves and show that the full shape of the unknown scatterer can be reconstructed. We further clarify the relation between the range test methods, the potential method [A. Kirsch, R. Kress, On an integral equation of the first kind in inverse acoustic scattering, in: Inverse Problems (Oberwolfach, 1986), Internationale Schriftenreihe zur Numerischen Mathematik, vol. 77, Birkhäuser, Basel, 1986, pp. 93-102] and the singular sources method [R. Potthast, Point sources and multipoles in inverse scattering theory, Habilitation Thesis, Göttingen, 1999]. In particular, we propose a new version of the Kirsch-Kress method using the range test and a new approach to the singular sources method based on the range test and potential method. Numerical examples of reconstructions for all four methods are provided.
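
    In outline, the range test asks whether the measured far field lies in the range of the far-field operator of a single-layer potential on a test domain G. A sketch of the governing far-field equation, with notation assumed to follow the cited range-test paper:

    ```latex
    % Range test (sketch): seek a density \varphi on the boundary of a
    % test domain G such that the far field of the single-layer potential
    % matches the measured far field pattern u^\infty:
    \[
      (S_G^{\infty}\varphi)(\hat{x})
        = \gamma \int_{\partial G} e^{-i\kappa\,\hat{x}\cdot y}\,\varphi(y)\,\mathrm{d}s(y)
        = u^{\infty}(\hat{x}).
    \]
    % If a regularized solution with bounded norm exists, the scatterer is
    % judged to lie inside G; repeating the test over many domains G
    % localizes its support.
    ```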

  13. Development of fire test methods for airplane interior materials

    NASA Technical Reports Server (NTRS)

    Tustin, E. A.

    1978-01-01

    Fire tests were conducted in a 737 airplane fuselage at NASA-JSC to characterize jet fuel fires in open steel pans (simulating post-crash fire sources and a ruptured airplane fuselage) and to characterize fires in some common combustibles (simulating in-flight fire sources). Design post-crash and in-flight fire source selections were based on these data. Large panels of airplane interior materials were exposed to closely controlled large-scale heating simulations of the two design fire sources in a Boeing fire test facility utilizing a surplus 707 fuselage section. Small samples of the same airplane materials were tested by several laboratory fire test methods. Large-scale and laboratory-scale data were examined for correlative factors. Published data for dangerous hazard levels in a fire environment were used as the basis for developing a method to select the most desirable material where trade-offs in heat, smoke and gaseous toxicant evolution must be considered.

  14. Testing contamination source identification methods for water distribution networks

    DOE PAGES

    Seth, Arpan; Klise, Katherine A.; Siirola, John D.; ...

    2016-04-01

    In the event of contamination in a water distribution network (WDN), source identification (SI) methods that analyze sensor data can be used to identify the source location(s). Knowledge of the source location and characteristics is important to inform contamination control and cleanup operations. Various SI strategies that have been developed by researchers differ in their underlying assumptions and solution techniques. The following manuscript presents a systematic procedure for testing and evaluating SI methods. The performance of these SI methods is affected by various factors, including the size of the WDN model, measurement error, modeling error, time and number of contaminant injections, and time and number of measurements. This paper includes test cases that vary these factors and evaluates three SI methods on the basis of accuracy and specificity. The tests are used to review and compare these different SI methods, highlighting their strengths in handling various identification scenarios. These SI methods and a testing framework that includes the test cases and analysis tools presented in this paper have been integrated into EPA's Water Security Toolkit (WST), a suite of software tools to help researchers and others in the water industry evaluate and plan various response strategies in case of a contamination incident. Lastly, a set of recommendations is made for users to consider when working with different categories of SI methods.
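
    As a hedged illustration of the kind of scoring such a testing procedure applies (the node names and data structures here are hypothetical, not the Water Security Toolkit API), accuracy and specificity for one test case can be computed from the candidate set an SI method returns:

    ```python
    def score_si_result(candidates, true_sources, all_nodes):
        """Accuracy and specificity for one source-identification run.

        candidates   -- set of node IDs flagged as possible sources
        true_sources -- set of node IDs actually injected (ground truth)
        all_nodes    -- set of all node IDs in the network model
        """
        negatives = all_nodes - true_sources
        true_negatives = negatives - candidates
        accuracy = len(candidates & true_sources) / len(true_sources)
        specificity = len(true_negatives) / len(negatives)
        return accuracy, specificity

    # Hypothetical run: the method flags 3 nodes; 1 of 1 true source found.
    acc, spec = score_si_result({"N12", "N40", "N77"}, {"N40"},
                                {f"N{i}" for i in range(100)})
    ```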

  15. Research on the calibration methods of the luminance parameter of radiation luminance meters

    NASA Astrophysics Data System (ADS)

    Cheng, Weihai; Huang, Biyong; Lin, Fangsheng; Li, Tiecheng; Yin, Dejin; Lai, Lei

    2017-10-01

    This paper introduces the standard diffuse-reflection white plate method and the integrating-sphere standard luminance source method for calibrating the luminance parameter. The paper compares the calibration results of these two methods through analysis of their principles and experimental verification. After both methods were used to calibrate the same radiation luminance meter, the data obtained verify that the testing results of the two methods are both reliable. The results show that the displayed value using the standard white plate method has smaller errors and better reproducibility. However, the standard luminance source method is more convenient and suitable for on-site calibration. Moreover, the standard luminance source method has a wider range and can test the linear performance of the instruments.

  16. HVAC SYSTEMS AS EMISSION SOURCES AFFECTING INDOOR AIR QUALITY: A CRITICAL REVIEW

    EPA Science Inventory

    The study evaluates heating, ventilating, and air-conditioning (HVAC) systems as contaminant emission sources that affect indoor air quality (IAQ). Various literature sources and methods for characterizing HVAC emission sources are reviewed. Available methods include in situ test...

  17. Testing Historical Skills.

    ERIC Educational Resources Information Center

    Baillie, Ray

    1980-01-01

    Outlines methods for including skill testing in teacher-made history tests. Focuses on distinguishing fact and fiction, evaluating the reliability of a source, distinguishing between primary and secondary sources, recognizing statements which support generalizations, testing with media, mapping geo-politics, and applying knowledge to new…

  18. Detection of nuclear testing from surface concentration measurements: Analysis of radioxenon from the February 2013 underground test in North Korea

    DOE PAGES

    Kurzeja, R. J.; Buckley, R. L.; Werth, D. W.; ...

    2017-12-28

    A method is outlined and tested to detect low-level nuclear or chemical sources from time series of concentration measurements. The method uses a mesoscale atmospheric model to simulate the concentration signature from a known or suspected source at a receptor, which is then regressed successively against segments of the measurement series to create time series of metrics that measure the goodness of fit between the signatures and the measurement segments. The method was applied to radioxenon data from the Comprehensive Test Ban Treaty (CTBT) collection site in Ussuriysk, Russia (RN58) after the Democratic People's Republic of Korea (North Korea) underground nuclear test on February 12, 2013 near Punggye. The metrics were found to be a good screening tool to locate data segments with a strong likelihood of origin from Punggye, especially when multiplied together to determine the joint probability. Metrics from RN58 were also used to find the probability that activity measured in February and April of 2013 originated from the February 12 test. A detailed analysis of an RN58 data segment from April 3/4, 2013 was also carried out for a grid of source locations around Punggye and identified Punggye as the most likely point of origin. Thus, the results support the strong possibility that radioxenon was emitted from the test site at various times in April and was detected intermittently at RN58, depending on the wind direction. The method does not locate unsuspected sources but instead evaluates the probability of a source at a specified location. However, it can be extended to include a set of suspected sources. Extension of the method to higher-resolution data sets, arbitrary sampling, and time-varying sources is discussed, along with a path to evaluate uncertainty in the calculated probabilities.
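
    A minimal sketch of the screening step described here, as a generic sliding-window linear regression of a simulated signature against the measurement series (the variable names and the R-squared metric are assumptions of this sketch, not the authors' code):

    ```python
    import numpy as np

    def fit_metric_series(signature, measurements):
        """Slide a simulated concentration signature along a measurement
        series and return an R^2 goodness-of-fit value per alignment."""
        n = len(signature)
        scores = []
        for start in range(len(measurements) - n + 1):
            seg = measurements[start:start + n]
            # Least-squares fit: seg ~ a * signature + b
            A = np.column_stack([signature, np.ones(n)])
            coef, *_ = np.linalg.lstsq(A, seg, rcond=None)
            resid = seg - A @ coef
            ss_res = float(resid @ resid)
            ss_tot = float(((seg - seg.mean()) ** 2).sum())
            scores.append(1.0 - ss_res / ss_tot if ss_tot > 0 else 0.0)
        # Peaks mark segments consistent with the suspected source.
        return np.array(scores)
    ```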

  19. Detection of nuclear testing from surface concentration measurements: Analysis of radioxenon from the February 2013 underground test in North Korea

    NASA Astrophysics Data System (ADS)

    Kurzeja, R. J.; Buckley, R. L.; Werth, D. W.; Chiswell, S. R.

    2018-03-01

    A method is outlined and tested to detect low-level nuclear or chemical sources from time series of concentration measurements. The method uses a mesoscale atmospheric model to simulate the concentration signature from a known or suspected source at a receptor, which is then regressed successively against segments of the measurement series to create time series of metrics that measure the goodness of fit between the signatures and the measurement segments. The method was applied to radioxenon data from the Comprehensive Test Ban Treaty (CTBT) collection site in Ussuriysk, Russia (RN58) after the Democratic People's Republic of Korea (North Korea) underground nuclear test on February 12, 2013 near Punggye. The metrics were found to be a good screening tool to locate data segments with a strong likelihood of origin from Punggye, especially when multiplied together to determine the joint probability. Metrics from RN58 were also used to find the probability that activity measured in February and April of 2013 originated from the February 12 test. A detailed analysis of an RN58 data segment from April 3/4, 2013 was also carried out for a grid of source locations around Punggye and identified Punggye as the most likely point of origin. Thus, the results support the strong possibility that radioxenon was emitted from the test site at various times in April and was detected intermittently at RN58, depending on the wind direction. The method does not locate unsuspected sources but instead evaluates the probability of a source at a specified location. However, it can be extended to include a set of suspected sources. Extension of the method to higher-resolution data sets, arbitrary sampling, and time-varying sources is discussed, along with a path to evaluate uncertainty in the calculated probabilities.

  20. Detection of nuclear testing from surface concentration measurements: Analysis of radioxenon from the February 2013 underground test in North Korea

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurzeja, R. J.; Buckley, R. L.; Werth, D. W.

    A method is outlined and tested to detect low-level nuclear or chemical sources from time series of concentration measurements. The method uses a mesoscale atmospheric model to simulate the concentration signature from a known or suspected source at a receptor, which is then regressed successively against segments of the measurement series to create time series of metrics that measure the goodness of fit between the signatures and the measurement segments. The method was applied to radioxenon data from the Comprehensive Test Ban Treaty (CTBT) collection site in Ussuriysk, Russia (RN58) after the Democratic People's Republic of Korea (North Korea) underground nuclear test on February 12, 2013 near Punggye. The metrics were found to be a good screening tool to locate data segments with a strong likelihood of origin from Punggye, especially when multiplied together to determine the joint probability. Metrics from RN58 were also used to find the probability that activity measured in February and April of 2013 originated from the February 12 test. A detailed analysis of an RN58 data segment from April 3/4, 2013 was also carried out for a grid of source locations around Punggye and identified Punggye as the most likely point of origin. Thus, the results support the strong possibility that radioxenon was emitted from the test site at various times in April and was detected intermittently at RN58, depending on the wind direction. The method does not locate unsuspected sources but instead evaluates the probability of a source at a specified location. However, it can be extended to include a set of suspected sources. Extension of the method to higher-resolution data sets, arbitrary sampling, and time-varying sources is discussed, along with a path to evaluate uncertainty in the calculated probabilities.

  1. Methodical principles of recognition different source types in an acoustic-emission testing of metal objects

    NASA Astrophysics Data System (ADS)

    Bobrov, A. L.

    2017-08-01

    This paper addresses the identification of different AE source types in order to increase the information value of the AE method. The task is especially relevant for complex objects, where factors affecting the acoustic path through the test object significantly alter the parameters of the signals recorded by the sensor. Correlation criteria sensitive to the type of AE source in metal objects are determined in the article.

  2. 10 CFR 36.59 - Detection of leaking sources.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 1 2010-01-01 2010-01-01 false Detection of leaking sources. 36.59 Section 36.59 Energy... Irradiators § 36.59 Detection of leaking sources. (a) Each dry-source-storage sealed source must be tested for leakage at intervals not to exceed 6 months using a leak test kit or method approved by the Commission or...

  3. Integration of relational and textual biomedical sources. A pilot experiment using a semi-automated method for logical schema acquisition.

    PubMed

    García-Remesal, M; Maojo, V; Billhardt, H; Crespo, J

    2010-01-01

    Bringing together structured and text-based sources is an exciting challenge for biomedical informaticians, since most relevant biomedical sources belong to one of these categories. In this paper we evaluate the feasibility of integrating relational and text-based biomedical sources using: i) an original logical schema acquisition method for textual databases developed by the authors, and ii) OntoFusion, a system originally designed by the authors for the integration of relational sources. We conducted an integration experiment involving a test set of seven differently structured sources covering the domain of genetic diseases. We used our logical schema acquisition method to generate schemas for all textual sources. The sources were integrated using the methods and tools provided by OntoFusion. The integration was validated using a test set of 500 queries. A panel of experts answered a questionnaire to evaluate i) the quality of the extracted schemas, ii) the query processing performance of the integrated set of sources, and iii) the relevance of the retrieved results. The results of the survey show that our method extracts coherent and representative logical schemas. Experts' feedback on the performance of the integrated system and the relevance of the retrieved results was also positive. Regarding the validation of the integration, the system successfully provided correct results for all queries in the test set. The results of the experiment suggest that text-based sources including a logical schema can be regarded as equivalent to structured databases. Using our method, previous research and existing tools designed for the integration of structured databases can be reused - possibly subject to minor modifications - to integrate differently structured sources.

  4. Lessons learned in preparing method 29 filters for compliance testing audits.

    PubMed

    Martz, R F; McCartney, J E; Bursey, J T; Riley, C E

    2000-01-01

    Companies conducting compliance testing are required to analyze audit samples at the time they collect and analyze the stack samples if audit samples are available. Eastern Research Group (ERG) provides technical support to the EPA's Emission Measurements Center's Stationary Source Audit Program (SSAP) for developing, preparing, and distributing performance evaluation samples and audit materials. These audit samples are requested via the regulatory Agency and include spiked audit materials for EPA Method 29 (Metals Emissions from Stationary Sources), as well as other methods. To provide appropriate audit materials to federal, state, tribal, and local governments, as well as agencies performing environmental activities and conducting emission compliance tests, ERG has recently performed testing of blank filter materials and preparation of spiked filters for EPA Method 29. For sampling stationary sources using an EPA Method 29 sampling train, the use of filters without organic binders containing less than 1.3 µg/in.² of each of the metals to be measured is required. Risk Assessment testing imposes even stricter requirements for clean filter background levels. Three vendor sources of quartz fiber filters were evaluated for background contamination to ensure that audit samples would be prepared using filters with the lowest metal background levels. A procedure was developed to test new filters, and a cleaning procedure was evaluated to see if a greater level of cleanliness could be achieved using an acid rinse with new filters. Background levels for filters supplied by different vendors and within lots of filters from the same vendor showed a wide variation, confirmed through contact with several analytical laboratories that frequently perform EPA Method 29 analyses. It has been necessary to repeat more than one compliance test because of suspect metals background contamination levels. An acid cleaning step produced improvement in contamination level, but the difference was not significant for most of the Method 29 target metals. As a result of our studies, we conclude: Filters for Method 29 testing should be purchased in lots as large as possible. Testing firms should pre-screen new boxes and/or new lots of filters used for Method 29 testing. Random analysis of three filters (top, middle, bottom of the box) from a new box of vendor filters before allowing them to be used in field tests is a prudent approach. A box of filters from a given vendor should be screened, and filters from this screened box should be used both for testing and as field blanks in each test scenario to provide the level of quality assurance required for stationary source testing.

  5. USE OF BACTEROIDES PCR-BASED METHODS TO EXAMINE FECAL CONTAMINATION SOURCES IN TROPICAL COASTAL WATERS

    EPA Science Inventory

    Several library independent Microbial Source Tracking methods have been developed to rapidly determine the source of fecal contamination. Thus far, none of these methods have been tested in tropical marine waters. In this study, we used a Bacteroides 16S rDNA PCR-based...

  6. Evaluation of Polymerase Chain Reaction for Detecting Coliform Bacteria in Drinking Water Sources.

    PubMed

    Isfahani, Bahram Nasr; Fazeli, Hossein; Babaie, Zeinab; Poursina, Farkhondeh; Moghim, Sharareh; Rouzbahani, Meisam

    2017-01-01

    Coliform bacteria are used as indicator organisms for detecting fecal pollution in water. Traditional methods include microbial culture tests in lactose-containing media and enzyme-based tests for the detection of β-galactosidase; however, these methods are time-consuming and less specific. The aim of this study was to evaluate polymerase chain reaction (PCR) for detecting coliforms. In total, 100 water samples were collected from Isfahan drinking water sources. Coliform bacteria and Escherichia coli were detected in drinking water using the LacZ and LamB genes in a PCR method performed in comparison with biochemical tests for all samples. Using phenotyping, 80 coliform isolates were found. The results of the biochemical tests showed 78.7% coliform bacteria and 21.2% E. coli. PCR results for the LacZ and LamB genes were 67.5% and 17.5%, respectively. The PCR method was shown to be an effective, sensitive, and rapid method for detecting coliforms and E. coli in drinking water from the Isfahan drinking water sources.

  7. Force Limited Vibration Testing: Computation C2 for Real Load and Probabilistic Source

    NASA Astrophysics Data System (ADS)

    Wijker, J. J.; de Boer, A.; Ellenbroek, M. H. M.

    2014-06-01

    To prevent over-testing of the test item during random vibration testing, Scharton proposed and discussed force limited random vibration testing (FLVT) in a number of publications, in which the factor C2 is, besides the random vibration specification, the total mass, and the turnover frequency of the load (test item), a very important parameter. A number of computational methods to estimate C2 are described in the literature, i.e. the simple and the complex two-degrees-of-freedom systems, STDFS and CTDFS, respectively. Both the STDFS and the CTDFS describe in a very reduced (simplified) manner the load and the source (the adjacent structure transferring the excitation forces to the test item, e.g. a spacecraft supporting an instrument). The motivation of this work is to establish a method for the computation of a realistic value of C2 to perform a representative random vibration test based on force limitation, when the description of the adjacent structure (source) is more or less unknown. Marchand formulated a conservative estimate of C2 based on the maximum modal effective mass and damping of the test item (load), when no description of the supporting structure (source) is available [13]. Marchand discussed the formal description of obtaining C2, using the maximum PSD of the acceleration and the maximum PSD of the force, both at the interface between load and source, in combination with the apparent mass and total mass of the load. This method is very convenient for computing the factor C2. However, finite element models are needed to compute the spectra of the PSD of both the acceleration and the force at the interface between load and source. Stevens presented the coupled systems modal approach (CSMA), where simplified asparagus-patch models (parallel-oscillator representations) of load and source are connected, consisting of modal effective masses and the spring stiffnesses associated with the natural frequencies. When the random acceleration vibration specification is given, the CSMA method is suitable for computing the value of the parameter C2. When no mathematical model of the source can be made available, estimates of the value of C2 can be found in the literature. In this paper a probabilistic mathematical representation of the unknown source is proposed, such that the asparagus-patch model of the source can be approximated. The computation of the value of C2 can be done in conjunction with the CSMA method, knowing the apparent mass of the load and the random acceleration specification at the interface between load and source. Strength and stiffness design rules for spacecraft, instrumentation, units, etc., as mentioned in ECSS standards and handbooks, launch vehicle user's manuals, papers, books, etc., are applied. A probabilistic description of the design parameters is foreseen. As an example, a simple experiment has been worked out.
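
    For context, the semi-empirical force limit in which the factor C2 appears is commonly written as below (the notation follows force-limited vibration testing practice, e.g. NASA-HDBK-7004; the roll-off exponent n is an assumption of this sketch):

    ```latex
    \[
      S_{FF}(f) =
      \begin{cases}
        C^{2} M_{0}^{2}\, S_{AA}(f), & f \le f_{0},\\[4pt]
        C^{2} M_{0}^{2}\, S_{AA}(f)\,\left(\frac{f_{0}}{f}\right)^{n}, & f > f_{0},
      \end{cases}
    \]
    % S_FF: force PSD limit at the load/source interface,
    % S_AA: random acceleration specification, M_0: total mass of the load,
    % f_0: turnover frequency, n: assumed roll-off exponent.
    ```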

  8. Detection of human and animal sources of pollution by microbial and chemical methods

    USDA-ARS?s Scientific Manuscript database

    A multi-indicator approach comprising Enterococcus, bacterial source tracking (BST), and sterol analysis was tested for pollution source identification. Fecal contamination was detected in 100% of surface water sites tested. Enterococcus faecium was the dominant species in aged litter samples from p...

  9. Tsunami Simulation Method Assimilating Ocean Bottom Pressure Data Near a Tsunami Source Region

    NASA Astrophysics Data System (ADS)

    Tanioka, Yuichiro

    2018-02-01

    A new method was developed to reproduce the tsunami height distribution in and around the source area, at a certain time, from a large number of ocean bottom pressure sensors, without information on an earthquake source. A dense cabled observation network called S-NET, which consists of 150 ocean bottom pressure sensors, was installed recently along a wide portion of the seafloor off Kanto, Tohoku, and Hokkaido in Japan. However, in the source area, the ocean bottom pressure sensors cannot observe directly an initial ocean surface displacement. Therefore, we developed the new method. The method was tested and functioned well for a synthetic tsunami from a simple rectangular fault with an ocean bottom pressure sensor network using 10 arc-min, or 20 km, intervals. For a test case that is more realistic, ocean bottom pressure sensors with 15 arc-min intervals along the north-south direction and sensors with 30 arc-min intervals along the east-west direction were used. In the test case, the method also functioned well enough to reproduce the tsunami height field in general. These results indicated that the method could be used for tsunami early warning by estimating the tsunami height field just after a great earthquake without the need for earthquake source information.
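
    As a rough analogue of the reconstruction step (a generic linear least-squares fit of precomputed unit-source responses to the pressure records, not Tanioka's actual assimilation scheme; all names are assumed):

    ```python
    import numpy as np

    def reconstruct_source_amplitudes(G, pressure_obs):
        """Estimate tsunami unit-source amplitudes from ocean-bottom
        pressure records.

        G            -- (n_obs, n_sources) precomputed unit-source responses
                        at the pressure sensors
        pressure_obs -- (n_obs,) stacked sensor observations
        Returns amplitudes whose weighted sum of unit sources approximates
        the tsunami height field in and around the source region.
        """
        amplitudes, *_ = np.linalg.lstsq(G, pressure_obs, rcond=None)
        return amplitudes
    ```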

  10. Evaluation of Polymerase Chain Reaction for Detecting Coliform Bacteria in Drinking Water Sources

    PubMed Central

    Isfahani, Bahram Nasr; Fazeli, Hossein; Babaie, Zeinab; Poursina, Farkhondeh; Moghim, Sharareh; Rouzbahani, Meisam

    2017-01-01

    Background: Coliform bacteria are used as indicator organisms for detecting fecal pollution in water. Traditional methods include microbial culture tests in lactose-containing media and enzyme-based tests for the detection of β-galactosidase; however, these methods are time-consuming and less specific. The aim of this study was to evaluate polymerase chain reaction (PCR) for detecting coliforms. Materials and Methods: In total, 100 water samples were collected from Isfahan drinking water sources. Coliform bacteria and Escherichia coli were detected in drinking water using the LacZ and LamB genes in a PCR method performed in comparison with biochemical tests for all samples. Results: Using phenotyping, 80 coliform isolates were found. The results of the biochemical tests showed 78.7% coliform bacteria and 21.2% E. coli. PCR results for the LacZ and LamB genes were 67.5% and 17.5%, respectively. Conclusion: The PCR method was shown to be an effective, sensitive, and rapid method for detecting coliforms and E. coli in drinking water from the Isfahan drinking water sources. PMID:29142893

  11. Experimental evaluation of the ring focus test for X-ray telescopes using AXAF's technology mirror assembly, MSFC CDDF Project No. H20

    NASA Technical Reports Server (NTRS)

    Zissa, D. E.; Korsch, D.

    1986-01-01

    A test method particularly suited for X-ray telescopes was evaluated experimentally. The method makes use of a focused ring formed by an annular aperture when using a point source at a finite distance. This would supplement measurements of the best focus image, which is blurred when the test source is at a finite distance. The telescope used was the Technology Mirror Assembly of the Advanced X-ray Astrophysics Facility (AXAF) program. Observed ring image defects could be related to the azimuthal location of their sources in the telescope even though in this case the predicted sharp ring was obscured by scattering, finite source size, and residual figure errors.

  12. Use of Very Weak Radiation Sources to Determine Aircraft Runway Position

    NASA Technical Reports Server (NTRS)

    Drinkwater, Fred J., III; Kibort, Bernard R.

    1965-01-01

    Various methods of providing runway information in the cockpit during the take-off and landing roll have been proposed. The most reliable method has been to use runway distance markers when visible. Flight tests were used to evaluate the feasibility of using weak radioactive sources to trigger a runway distance counter in the cockpit. The results of these tests indicate that a weak radioactive source would provide a reliable signal by which this indicator could be operated.

  13. 40 CFR 60.154 - Test methods and procedures.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Standards of Performance for Sewage Treatment Plants § 60.154 Test methods and procedures. (a) In conducting the performance tests required in § 60.8...

  14. Testing contamination source identification methods for water distribution networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seth, Arpan; Klise, Katherine A.; Siirola, John D.

    In the event of contamination in a water distribution network (WDN), source identification (SI) methods that analyze sensor data can be used to identify the source location(s). Knowledge of the source location and characteristics is important to inform contamination control and cleanup operations. Various SI strategies that have been developed by researchers differ in their underlying assumptions and solution techniques. The following manuscript presents a systematic procedure for testing and evaluating SI methods. The performance of these SI methods is affected by various factors, including the size of the WDN model, measurement error, modeling error, time and number of contaminant injections, and time and number of measurements. This paper includes test cases that vary these factors and evaluates three SI methods on the basis of accuracy and specificity. The tests are used to review and compare these different SI methods, highlighting their strengths in handling various identification scenarios. These SI methods and a testing framework that includes the test cases and analysis tools presented in this paper have been integrated into EPA's Water Security Toolkit (WST), a suite of software tools to help researchers and others in the water industry evaluate and plan various response strategies in case of a contamination incident. Lastly, a set of recommendations is made for users to consider when working with different categories of SI methods.

  15. The Psychology Experiment Building Language (PEBL) and PEBL Test Battery

    PubMed Central

    Mueller, Shane T.; Piper, Brian J.

    2014-01-01

    Background: We briefly describe the Psychology Experiment Building Language (PEBL), an open source software system for designing and running psychological experiments. New Method: We describe the PEBL test battery, a set of approximately 70 behavioral tests which can be freely used, shared, and modified. Included is a comprehensive set of past research upon which tests in the battery are based. Results: We report the results of benchmark tests that establish the timing precision of PEBL. Comparison with Existing Method: We consider alternatives to the PEBL system and battery tests. Conclusions: We conclude with a discussion of the ethical factors involved in the open source testing movement. PMID:24269254

  16. 40 CFR 60.93 - Test methods and procedures.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 6 2010-07-01 2010-07-01 false Test methods and procedures. 60.93... (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Standards of Performance for Hot Mix Asphalt Facilities § 60.93 Test methods and procedures. (a) In conducting the performance tests required in § 60.8...

  17. FECAL SOURCE TRACKING BY ANTIBIOTIC RESISTANCE ANALYSIS ON A WATERSHED EXHIBITING LOW RESISTANCE

    EPA Science Inventory

    The ongoing development of microbial source tracking has made it possible to identify contamination sources with varying accuracy, depending on the method used. The purpose of this study was to test the efficiency of the antibiotic resistance analysis (ARA) method under low ...

  18. Dynamic radioactive particle source

    DOEpatents

    Moore, Murray E; Gauss, Adam Benjamin; Justus, Alan Lawrence

    2012-06-26

    A method and apparatus for providing a timed, synchronized dynamic alpha or beta particle source for testing the response of continuous air monitors (CAMs) for airborne alpha or beta emitters is provided. The method includes providing a radioactive source; placing the radioactive source inside the detection volume of a CAM; and introducing an alpha or beta-emitting isotope while the CAM is in a normal functioning mode.

  19. Full-Scale Turbofan Engine Noise-Source Separation Using a Four-Signal Method

    NASA Technical Reports Server (NTRS)

    Hultgren, Lennart S.; Arechiga, Rene O.

    2016-01-01

    Contributions from the combustor to the overall propulsion noise of civilian transport aircraft are starting to become important due to turbofan design trends and expected advances in mitigation of other noise sources. During on-ground, static-engine acoustic tests, combustor noise is generally sub-dominant to other engine noise sources because of the absence of in-flight effects. Consequently, noise-source separation techniques are needed to extract combustor-noise information from the total noise signature in order to further progress. A novel four-signal source-separation method is applied to data from a static, full-scale engine test and compared to previous methods. The new method is, in a sense, a combination of two- and three-signal techniques and represents an attempt to alleviate some of the weaknesses of each of those approaches. This work is supported by the NASA Advanced Air Vehicles Program, Advanced Air Transport Technology Project, Aircraft Noise Reduction Subproject and the NASA Glenn Faculty Fellowship Program.

  20. Noise source and reactor stability estimation in a boiling water reactor using a multivariate autoregressive model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kanemoto, S.; Andoh, Y.; Sandoz, S.A.

    1984-10-01

    A method for evaluating reactor stability in boiling water reactors has been developed. The method is based on multivariate autoregressive (M-AR) modeling of steady-state neutron and process noise signals. In this method, two kinds of power spectral densities (PSDs), for the measured neutron signal and the corresponding noise source signal, are separately identified by the M-AR modeling. The closed- and open-loop stability parameters are evaluated from these PSDs. The method is applied to actual plant noise data that were measured together with artificial perturbation test data. Stability parameters identified from noise data are compared to those from perturbation test data, and it is shown that both results are in good agreement. In addition to these stability estimations, driving noise sources for the neutron signal are evaluated by the M-AR modeling. Contributions from void, core flow, and pressure noise sources are quantitatively evaluated, and the void noise source is shown to be the most dominant.
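
    A compact sketch of the spectral factorization underlying this kind of M-AR noise analysis (the standard vector-autoregressive PSD formula, not the authors' implementation; the scaling convention is an assumption):

    ```python
    import numpy as np

    def var_psd(A, Sigma, freqs, dt=1.0):
        """PSD matrix of a VAR process x_t = sum_k A[k-1] @ x_{t-k} + e_t.

        A     -- list of (m, m) AR coefficient matrices A_1..A_p
        Sigma -- (m, m) covariance of the driving noise e_t
        freqs -- iterable of frequencies (cycles per unit time)
        Returns an array of (m, m) spectral matrices S(f) = H(f) Sigma H(f)^H.
        """
        m = Sigma.shape[0]
        spectra = []
        for f in freqs:
            Af = np.eye(m, dtype=complex)
            for k, Ak in enumerate(A, start=1):
                Af -= Ak * np.exp(-2j * np.pi * f * k * dt)
            H = np.linalg.inv(Af)  # closed-loop transfer function
            spectra.append(dt * H @ Sigma @ H.conj().T)
        return np.array(spectra)
    ```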

  1. Method for Smoke Spread Testing of Large Premises

    NASA Astrophysics Data System (ADS)

    Walmerdahl, P.; Werling, P.

    2001-11-01

    A method for performing non-destructive smoke spread tests has been developed, tested, and applied to several existing buildings. The heat source is generated by burning methanol in water-cooled steel trays of different sizes; several tray sizes are available to cover fire sources up to nearly 1 MW. The smoke is supplied by a suitable number of smoke generators that produce a smoke which can be described as a non-toxic aerosol. The advantage of the method is that it provides a means for performing non-destructive tests in existing buildings and other installations, for the purpose of evaluating the functionality and design of active fire protection measures such as smoke extraction systems. In the report, the method is described in detail, experimental data from the try-out of the method are presented, and the applicability and flexibility of the method are discussed.

  2. The Chandra Source Catalog: Source Variability

    NASA Astrophysics Data System (ADS)

    Nowak, Michael; Rots, A. H.; McCollough, M. L.; Primini, F. A.; Glotfelty, K. J.; Bonaventura, N. R.; Chen, J. C.; Davis, J. E.; Doe, S. M.; Evans, J. D.; Fabbiano, G.; Galle, E.; Gibbs, D. G.; Grier, J. D.; Hain, R.; Hall, D. M.; Harbo, P. N.; He, X.; Houck, J. C.; Karovska, M.; Lauer, J.; McDowell, J. C.; Miller, J. B.; Mitschang, A. W.; Morgan, D. L.; Nichols, J. S.; Plummer, D. A.; Refsdal, B. L.; Siemiginowska, A. L.; Sundheim, B. A.; Tibbetts, M. S.; Van Stone, D. W.; Winkelman, S. L.; Zografou, P.

    2009-01-01

    The Chandra Source Catalog (CSC) contains fields of view that have been studied with individual, uninterrupted observations that span integration times ranging from 1 ksec to 160 ksec, a large number of which have received (multiple) repeat observations days to years later. The CSC thus offers an unprecedented look at the variability of the X-ray sky over a broad range of time scales, and across a wide diversity of variable X-ray sources: stars in the local galactic neighborhood, galactic and extragalactic X-ray binaries, Active Galactic Nuclei, etc. Here we describe the methods used to identify and quantify source variability within a single observation, and the methods used to assess the variability of a source when detected in multiple, individual observations. Three tests are used to detect source variability within a single observation: the Kolmogorov-Smirnov test and its variant, the Kuiper test, and a Bayesian approach originally suggested by Gregory and Loredo. The latter test not only provides an indicator of variability, but is also used to create a best estimate of the variable lightcurve shape. We assess the performance of these tests via simulation of statistically stationary, variable processes with arbitrary input power spectral densities (here we concentrate on results of red noise simulations) at a variety of mean count rates and fractional root mean square variabilities relevant to CSC sources. We also assess the false positive rate via simulations of constant sources whose sole source of fluctuation is Poisson noise. We compare these simulations to a preliminary assessment of the variability found in real CSC sources, and estimate the variability sensitivities of the CSC.
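
    To illustrate the single-observation tests named here, a K-S check of photon arrival times against the constant-rate (uniform) hypothesis can be written directly with SciPy (a generic sketch, not the CSC pipeline; the significance level is assumed):

    ```python
    import numpy as np
    from scipy import stats

    def ks_variability(arrival_times, t_start, t_stop, alpha=0.01):
        """K-S test of event arrival times against a constant count rate.

        Under a constant rate, arrival times are uniform on [t_start, t_stop];
        a small p-value flags the source as variable within the observation.
        """
        u = (np.asarray(arrival_times) - t_start) / (t_stop - t_start)
        statistic, pvalue = stats.kstest(u, "uniform")
        return pvalue < alpha, pvalue
    ```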

  3. The Chandra Source Catalog: Source Variability

    NASA Astrophysics Data System (ADS)

    Nowak, Michael; Rots, A. H.; McCollough, M. L.; Primini, F. A.; Glotfelty, K. J.; Bonaventura, N. R.; Chen, J. C.; Davis, J. E.; Doe, S. M.; Evans, J. D.; Evans, I.; Fabbiano, G.; Galle, E. C.; Gibbs, D. G., II; Grier, J. D.; Hain, R.; Hall, D. M.; Harbo, P. N.; He, X.; Houck, J. C.; Karovska, M.; Lauer, J.; McDowell, J. C.; Miller, J. B.; Mitschang, A. W.; Morgan, D. L.; Nichols, J. S.; Plummer, D. A.; Refsdal, B. L.; Siemiginowska, A. L.; Sundheim, B. A.; Tibbetts, M. S.; van Stone, D. W.; Winkelman, S. L.; Zografou, P.

    2009-09-01

    The Chandra Source Catalog (CSC) contains fields of view that have been studied with individual, uninterrupted observations that span integration times ranging from 1 ksec to 160 ksec, a large number of which have received (multiple) repeat observations days to years later. The CSC thus offers an unprecedented look at the variability of the X-ray sky over a broad range of time scales, and across a wide diversity of variable X-ray sources: stars in the local galactic neighborhood, galactic and extragalactic X-ray binaries, Active Galactic Nuclei, etc. Here we describe the methods used to identify and quantify source variability within a single observation, and the methods used to assess the variability of a source when detected in multiple, individual observations. Three tests are used to detect source variability within a single observation: the Kolmogorov-Smirnov test and its variant, the Kuiper test, and a Bayesian approach originally suggested by Gregory and Loredo. The latter test not only provides an indicator of variability, but is also used to create a best estimate of the variable lightcurve shape. We assess the performance of these tests via simulation of statistically stationary, variable processes with arbitrary input power spectral densities (here we concentrate on results of red noise simulations) at a variety of mean count rates and fractional root mean square variabilities relevant to CSC sources. We also assess the false positive rate via simulations of constant sources whose sole source of fluctuation is Poisson noise. We compare these simulations to an assessment of the variability found in real CSC sources, and estimate the variability sensitivities of the CSC.

  4. 10 CFR 32.57 - Calibration or reference sources containing americium-241 or radium-226: Requirements for license...

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... design; (3) Details of the method of incorporation and binding of the americium-241 or radium-226 in the source; (4) Procedures for and results of prototype testing of sources, which are designed to contain... additional information, including experimental studies and tests, required by the Commission to facilitate a...

  5. 10 CFR 32.57 - Calibration or reference sources containing americium-241 or radium-226: Requirements for license...

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... design; (3) Details of the method of incorporation and binding of the americium-241 or radium-226 in the source; (4) Procedures for and results of prototype testing of sources, which are designed to contain... additional information, including experimental studies and tests, required by the Commission to facilitate a...

  6. 10 CFR 32.57 - Calibration or reference sources containing americium-241 or radium-226: Requirements for license...

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... design; (3) Details of the method of incorporation and binding of the americium-241 or radium-226 in the source; (4) Procedures for and results of prototype testing of sources, which are designed to contain... additional information, including experimental studies and tests, required by the Commission to facilitate a...

  7. 10 CFR 70.39 - Specific licenses for the manufacture or initial transfer of calibration or reference sources.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... and design; (iii) Details of the method of incorporation and binding of the plutonium in the source; (iv) Procedures for and results of prototype testing of sources, which are designed to contain more... experimental studies and tests, required by the Commission to facilitate a determination of the safety of the...

  8. 10 CFR 32.57 - Calibration or reference sources containing americium-241 or radium-226: Requirements for license...

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... design; (3) Details of the method of incorporation and binding of the americium-241 or radium-226 in the source; (4) Procedures for and results of prototype testing of sources, which are designed to contain... additional information, including experimental studies and tests, required by the Commission to facilitate a...

  9. 10 CFR 32.57 - Calibration or reference sources containing americium-241 or radium-226: Requirements for license...

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... design; (3) Details of the method of incorporation and binding of the americium-241 or radium-226 in the source; (4) Procedures for and results of prototype testing of sources, which are designed to contain... additional information, including experimental studies and tests, required by the Commission to facilitate a...

  10. An Examination of Teachers' Ratings of Lesson Plans Using Digital Primary Sources

    ERIC Educational Resources Information Center

    Milman, Natalie B.; Bondie, Rhonda

    2012-01-01

    This mixed method study examined teachers' ratings of 37 field-tested social studies lesson plans that incorporated digital primary sources through a grant from the Library of Congress Teaching with Primary Sources program for K-12 teachers. Each lesson, available in an online teaching materials collection, was field-tested and reviewed by at…

  11. The Earthquake Source Inversion Validation (SIV) - Project: Summary, Status, Outlook

    NASA Astrophysics Data System (ADS)

    Mai, P. M.

    2017-12-01

    Finite-fault earthquake source inversions infer the (time-dependent) displacement on the rupture surface from geophysical data. The resulting earthquake source models document the complexity of the rupture process. However, this kinematic source inversion is ill-posed and returns non-unique solutions, as seen for instance in multiple source models for the same earthquake, obtained by different research teams, that often exhibit remarkable dissimilarities. To address the uncertainties in earthquake-source inversions and to understand strengths and weaknesses of various methods, the Source Inversion Validation (SIV) project developed a set of forward-modeling exercises and inversion benchmarks. Several research teams then use these validation exercises to test their codes and methods, but also to develop and benchmark new approaches. In this presentation I will summarize the SIV strategy, the existing benchmark exercises, and corresponding results. Using various waveform-misfit criteria and newly developed statistical comparison tools to quantify source-model (dis)similarities, the SIV platform is able to rank solutions and identify particularly promising source inversion approaches. Existing SIV exercises (with related data and descriptions) and all computational tools remain available via the open online collaboration platform; additional exercises and benchmark tests will be uploaded once they are fully developed. I encourage source modelers to use the SIV benchmarks for developing and testing new methods. The SIV efforts have already led to several promising new techniques for tackling the earthquake-source imaging problem. I expect that future SIV benchmarks will provide further innovations and insights into earthquake source kinematics that will ultimately help to better understand the dynamics of the rupture process.
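
    One of the simplest waveform-misfit criteria of the kind the SIV exercises employ is a normalized L2 difference between observed and synthetic traces; a minimal sketch (the normalization choice is an assumption of this example, not the SIV specification):

    ```python
    import numpy as np

    def waveform_misfit(u_obs, u_syn):
        """Normalized L2 misfit between an observed and a synthetic trace;
        0 means a perfect fit, larger values mean poorer agreement."""
        u_obs = np.asarray(u_obs, dtype=float)
        u_syn = np.asarray(u_syn, dtype=float)
        return float(np.linalg.norm(u_obs - u_syn) / np.linalg.norm(u_obs))
    ```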

  12. 40 CFR 60.725 - Test methods and procedures.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Standards of Performance for Industrial Surface Coating: Surface Coating of Plastic Parts for Business Machines § 60.725 Test methods and...

  13. 40 CFR 60.725 - Test methods and procedures.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Standards of Performance for Industrial Surface Coating: Surface Coating of Plastic Parts for Business Machines § 60.725 Test methods and...

  14. Dataset for Testing Contamination Source Identification Methods for Water Distribution Networks

    EPA Pesticide Factsheets

    This dataset includes the results of a simulation study using the source inversion techniques available in the Water Security Toolkit. The data was created to test the different techniques for accuracy, specificity, false positive rate, and false negative rate. The tests examined different parameters including measurement error, modeling error, injection characteristics, time horizon, network size, and sensor placement. The water distribution system network models that were used in the study are also included in the dataset. This dataset is associated with the following publication: Seth, A., K. Klise, J. Siirola, T. Haxton, and C. Laird. Testing Contamination Source Identification Methods for Water Distribution Networks. Journal of Environmental Division, Proceedings of the American Society of Civil Engineers. ASCE, Reston, VA, USA (2016).

  15. A new multi-spectral feature level image fusion method for human interpretation

    NASA Astrophysics Data System (ADS)

    Leviner, Marom; Maltz, Masha

    2009-03-01

    Various different methods to perform multi-spectral image fusion have been suggested, mostly on the pixel level. However, the jury is still out on the benefits of a fused image compared to its source images. We present here a new multi-spectral image fusion method, multi-spectral segmentation fusion (MSSF), which uses a feature level processing paradigm. To test our method, we compared human observer performance in a three-task experiment using MSSF against two established methods: averaging and principal components analysis (PCA), and against its two source bands, visible and infrared. The three tasks that we studied were: (1) simple target detection, (2) spatial orientation, and (3) camouflaged target detection. MSSF proved superior to the other fusion methods in all three tests; MSSF also outperformed the source images in the spatial orientation and camouflaged target detection tasks. Based on these findings, current speculation about the circumstances in which multi-spectral image fusion in general and specific fusion methods in particular would be superior to using the original image sources can be further addressed.

  16. Impedance Eduction in Ducts with Higher-Order Modes and Flow

    NASA Technical Reports Server (NTRS)

    Watson, Willie R.; Jones, Michael G.

    2009-01-01

    An impedance eduction technique, previously validated for ducts with plane waves at the source and duct termination planes, has been extended to support higher-order modes at these locations. Inputs for this method are the acoustic pressures along the source and duct termination planes, and along a microphone array located in a wall either adjacent or opposite to the test liner. A second impedance eduction technique is then presented that eliminates the need for the microphone array. The integrity of both methods is tested using three sound sources, six Mach numbers, and six selected frequencies. Results are presented for both a hardwall and a test liner (with known impedance) consisting of a perforated plate bonded to a honeycomb core. The primary conclusion of the study is that the second method performs well in the presence of higher-order modes and flow. However, the first method performs poorly when most of the microphones are located near acoustic pressure nulls. The negative effects of the acoustic pressure nulls can be mitigated by a judicious choice of the mode structure in the sound source. The paper closes by using the first impedance eduction method to design a rectangular array of 32 microphones for accurate impedance eduction in the NASA LaRC Curved Duct Test Rig in the presence of expected measurement uncertainties, higher order modes, and mean flow.

  17. Bioassay selection, experimental design and quality control/assurance for use in effluent assessment and control.

    PubMed

    Johnson, Ian; Hutchings, Matt; Benstead, Rachel; Thain, John; Whitehouse, Paul

    2004-07-01

    In the UK Direct Toxicity Assessment Programme, carried out in 1998-2000, a series of internationally recognised short-term toxicity test methods for algae, invertebrates and fishes, and rapid methods (ECLOX and Microtox) were used extensively. Abbreviated versions of conventional tests (algal growth inhibition tests, Daphnia magna immobilisation test and the oyster embryo-larval development test) were valuable for toxicity screening of effluent discharges and the identification of causes and sources of toxicity. Rapid methods based on chemiluminescence and bioluminescence were not generally useful in this programme, but may have a role where the rapid test has been shown to be an acceptable surrogate for a standardised test method. A range of quality assurance and control measures were identified. Requirements for quality control/assurance are most stringent when deriving data for characterising the toxic hazards of effluents and monitoring compliance against a toxicity reduction target. Lower quality control/assurance requirements can be applied to discharge screening and the identification of causes and sources of toxicity.

  18. 40 CFR 63.7142 - What are the requirements for claiming area source status?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... test must be repeated. (v) The post-test analyte spike procedure of section 11.2.7 of ASTM Method D6735... (3) ASTM Method D6735-01, Standard Test Method for Measurement of Gaseous Chlorides and Fluorides...)(3)(i) through (vi) of this section are followed. (i) A test must include three or more runs in which...

  19. 40 CFR 63.7142 - What are the requirements for claiming area source status?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... test must be repeated. (v) The post-test analyte spike procedure of section 11.2.7 of ASTM Method D6735... (3) ASTM Method D6735-01, Standard Test Method for Measurement of Gaseous Chlorides and Fluorides...)(3)(i) through (vi) of this section are followed. (i) A test must include three or more runs in which...

  20. 40 CFR 63.7142 - What are the requirements for claiming area source status?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... test must be repeated. (v) The post-test analyte spike procedure of section 11.2.7 of ASTM Method D6735... (3) ASTM Method D6735-01, Standard Test Method for Measurement of Gaseous Chlorides and Fluorides...)(3)(i) through (vi) of this section are followed. (i) A test must include three or more runs in which...

  1. 40 CFR 63.7142 - What are the requirements for claiming area source status?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... test must be repeated. (v) The post-test analyte spike procedure of section 11.2.7 of ASTM Method D6735... (3) ASTM Method D6735-01, Standard Test Method for Measurement of Gaseous Chlorides and Fluorides...)(3)(i) through (vi) of this section are followed. (i) A test must include three or more runs in which...

  2. 40 CFR 63.7142 - What are the requirements for claiming area source status?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... test must be repeated. (v) The post-test analyte spike procedure of section 11.2.7 of ASTM Method D6735... (3) ASTM Method D6735-01, Standard Test Method for Measurement of Gaseous Chlorides and Fluorides...)(3)(i) through (vi) of this section are followed. (i) A test must include three or more runs in which...

  3. Aerofoil broadband and tonal noise modelling using stochastic sound sources and incorporated large scale fluctuations

    NASA Astrophysics Data System (ADS)

    Proskurov, S.; Darbyshire, O. R.; Karabasov, S. A.

    2017-12-01

    The present work discusses modifications to the stochastic Fast Random Particle Mesh (FRPM) method featuring both tonal and broadband noise sources. The technique relies on the combination of vortex-shedding-resolved flow available from an Unsteady Reynolds-Averaged Navier-Stokes (URANS) simulation with the fine-scale turbulence FRPM solution generated via stochastic velocity fluctuations in the context of vortex sound theory. In contrast to the existing literature, our method encompasses a unified treatment of broadband and tonal acoustic noise sources at the source level, thus accounting for linear source interference as well as possible non-linear source interaction effects. Once the sound sources are determined, the Acoustic Perturbation Equations (APE-4) are solved in the time domain for the sound propagation. Results of the method's application to two aerofoil benchmark cases, with sharp and blunt trailing edges, are presented. In each case, the importance of individual linear and non-linear noise sources was investigated. Several new key features related to the unsteady implementation of the method were tested and incorporated. Encouraging results have been obtained for the benchmark test cases using the new technique, which is believed to be applicable to other airframe noise problems where both tonal and broadband parts are important.

  4. On Blockage Corrections for Two-dimensional Wind Tunnel Tests Using the Wall-pressure Signature Method

    NASA Technical Reports Server (NTRS)

    Allmaras, S. R.

    1986-01-01

    The Wall-Pressure Signature Method for correcting low-speed wind tunnel data to free-air conditions has been revised and improved for two-dimensional tests of bluff bodies. The method uses experimentally measured tunnel wall pressures to approximately reconstruct the flow field about the body with potential sources and sinks. With the use of these sources and sinks, the measured drag and tunnel dynamic pressure are corrected for blockage effects. Good agreement is obtained with simpler methods for cases in which the blockage corrections were about 10% of the nominal drag values.

  5. Comparison of Different Measurement Technologies for the In-Flight Assessment of Radiated Acoustic Intensity

    NASA Technical Reports Server (NTRS)

    Klos, Jacob; Palumbo, Daniel L.; Buehrle, Ralph D.; Williams, Earl G.; Valdivia, Nicolas; Herdic, Peter C.; Sklanka, Bernard

    2005-01-01

    A series of tests was planned and conducted in the Interior Noise Test Facility at Boeing Field, on the NASA Aries 757 flight research aircraft, and in the Structural Acoustic Loads and Transmission Facility at NASA Langley Research Center. These tests were designed to answer several questions concerning the use of array methods in flight. One focus of the tests was determining whether, and to what extent, array methods could be used to identify the effects of an acoustical treatment applied to a limited portion of an aircraft fuselage. Another focus was to verify that the arrays could be used to localize and quantify a known source purposely placed in front of the arrays. Thus the issues related to backside sources and flanking paths present in the complicated sound field were addressed during these tests. These issues were addressed through the use of reference transducers, both accelerometers mounted to the fuselage and microphones in the cabin, that were used to correlate the pressure holograms measured by the microphone arrays using either SVD methods or partial coherence methods. This correlation analysis accepts only energy that is coherent with the sources sensed by the reference transducers, allowing a noise control engineer to identify and study only those vibratory sources of interest. The remainder of this paper presents a detailed description of the test setups used in this test sequence and typical results of the NAH/IBEM analysis used to reconstruct the sound fields. A comparison of data obtained in the laboratory environments and during flights of the 757 aircraft is also made.

  6. Non-destructive testing method and apparatus

    DOEpatents

    Akers, Douglas W [Idaho Falls, ID

    2011-10-04

    Non-destructive testing apparatus may comprise a photon source and a source material that emits positrons in response to bombardment of the source material with photons. The source material is positionable adjacent the photon source and a specimen so that when the source material is positioned adjacent the photon source it is exposed to photons produced thereby. When the source material is positioned adjacent the specimen, the specimen is exposed to at least some of the positrons emitted by the source material. A detector system positioned adjacent the specimen detects annihilation gamma rays emitted by the specimen. Another embodiment comprises a neutron source and a source material that emits positrons in response to neutron bombardment.

  7. Qualification tests for {sup 192}Ir sealed sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Iancso, Georgeta, E-mail: georgetaiancso@yahoo.com; Iliescu, Elena, E-mail: georgetaiancso@yahoo.com; Iancu, Rodica, E-mail: georgetaiancso@yahoo.com

    This paper describes the results of qualification tests for {sup 192}Ir sealed sources, carried out in the Testing and Nuclear Expertise Laboratory of the National Institute for Physics and Nuclear Engineering 'Horia Hulubei' (IFIN-HH), Romania. These sources were to be produced at IFIN-HH and were tested in order to obtain authorization from the National Commission for Nuclear Activities Control (CNCAN). The sources are used in gammagraphy procedures and in gamma defectoscopy equipment. The tests, measurement methods and equipment used comply with CNCAN, IAEA and international quality standards and regulations. The qualification tests are: 1. Radiological tests and measurements: dose equivalent rate at 1 m; tightness; dose equivalent rate at the surface of the transport and storage container; external unfixed contamination of the container surface. 2. Mechanical and climatic tests: thermal shock; external pressure; mechanical shock; vibrations; boring; thermal conditions for storage and transportation. After all tests were passed, the Radiological Security Authorization for producing the {sup 192}Ir sealed sources was obtained. IFIN-HH can now meet demand for these sealed sources as the only manufacturer in Romania.

  8. 40 CFR Appendix A-4 to Part 60 - Test Methods 6 through 10B

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... sources Method 6A—Determination of sulfur dioxide, moisture, and carbon dioxide emissions from fossil fuel... fossil fuel combustion sources Method 6C—Determination of Sulfur Dioxide Emissions From Stationary... with SO2 to form particulate sulfite and by reacting with the indicator. If free ammonia is present...

  9. 40 CFR Appendix A-4 to Part 60 - Test Methods 6 through 10B

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... sources Method 6A—Determination of sulfur dioxide, moisture, and carbon dioxide emissions from fossil fuel... fossil fuel combustion sources Method 6C—Determination of Sulfur Dioxide Emissions From Stationary... with SO2 to form particulate sulfite and by reacting with the indicator. If free ammonia is present...

  10. 40 CFR Appendix A-4 to Part 60 - Test Methods 6 through 10B

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... sources Method 6A—Determination of sulfur dioxide, moisture, and carbon dioxide emissions from fossil fuel... fossil fuel combustion sources Method 6C—Determination of Sulfur Dioxide Emissions From Stationary... with SO2 to form particulate sulfite and by reacting with the indicator. If free ammonia is present...

  11. 40 CFR Appendix A-4 to Part 60 - Test Methods 6 through 10B

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... sources Method 6A—Determination of sulfur dioxide, moisture, and carbon dioxide emissions from fossil fuel... fossil fuel combustion sources Method 6C—Determination of Sulfur Dioxide Emissions From Stationary... reacting with SO2 to form particulate sulfite and by reacting with the indicator. If free ammonia is...

  12. 40 CFR Appendix A-4 to Part 60 - Test Methods 6 through 10B

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... sources Method 6A—Determination of sulfur dioxide, moisture, and carbon dioxide emissions from fossil fuel... fossil fuel combustion sources Method 6C—Determination of Sulfur Dioxide Emissions From Stationary... with SO2 to form particulate sulfite and by reacting with the indicator. If free ammonia is present...

  13. Quantifying uncertainty in stable isotope mixing models

    DOE PAGES

    Davis, Paul; Syme, James; Heikoop, Jeffrey; ...

    2015-05-19

    Mixing models are powerful tools for identifying biogeochemical sources and determining mixing fractions in a sample. However, identification of actual source contributors is often not simple, and source compositions typically vary or even overlap, significantly increasing model uncertainty in calculated mixing fractions. This study compares three probabilistic methods: SIAR [Parnell et al., 2010], a pure Monte Carlo technique (PMC), and the Stable Isotope Reference Source (SIRS) mixing model, a new technique that estimates mixing in systems with more than three sources and/or uncertain source compositions. In this paper, we use nitrate stable isotope examples (δ15N and δ18O), but all methods tested are applicable to other tracers. In Phase I of a three-phase blind test, we compared methods for a set of six-source nitrate problems. PMC was unable to find solutions for two of the target water samples. The Bayesian method, SIAR, experienced anchoring problems, and SIRS calculated mixing fractions that most closely approximated the known mixing fractions. For that reason, SIRS was the only approach used in the next phase of testing. In Phase II, the problem was broadened so that any subset of the six sources could be a possible solution to the mixing problem. Results showed a high rate of Type I errors, where solutions included sources that were not contributing to the sample. In Phase III, eliminating some sources based on assumed site knowledge and assumed nitrate concentrations substantially reduced mixing fraction uncertainties and lowered the Type I error rate. These results demonstrate that valuable insights into stable isotope mixing problems result from probabilistic mixing model approaches like SIRS. The results also emphasize the importance of identifying a minimal set of potential sources and quantifying uncertainties in source isotopic composition, as well as demonstrating the value of additional information in reducing the uncertainty in calculated mixing fractions.
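
    The sketch below illustrates the pure Monte Carlo (PMC) style of analysis described above for a two-tracer, three-source toy problem: mixing fractions are drawn uniformly on the simplex and kept when the mass-balance prediction matches the sample within a tolerance. The source signatures, sample values, and tolerance are invented for illustration; SIAR and SIRS are not reproduced here.

      # PMC-style feasibility search for a two-tracer mixing problem.
      import numpy as np

      rng = np.random.default_rng(0)
      # Hypothetical delta-15N and delta-18O signatures for three sources
      sources = np.array([[5.0, 2.0], [12.0, 8.0], [20.0, 15.0]])
      sample = np.array([11.0, 7.5])   # measured mixture
      tolerance = 0.5                  # accept if within this many permil

      accepted = []
      for _ in range(20000):
          f = rng.dirichlet(np.ones(len(sources)))   # random fractions on the simplex
          predicted = f @ sources                    # mass-balance mixing
          if np.all(np.abs(predicted - sample) < tolerance):
              accepted.append(f)

      accepted = np.array(accepted)
      print("feasible solutions:", len(accepted))
      print("mean fractions:", accepted.mean(axis=0) if len(accepted) else "none")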

  14. Test method for telescopes using a point source at a finite distance

    NASA Technical Reports Server (NTRS)

    Griner, D. B.; Zissa, D. E.; Korsch, D.

    1985-01-01

    A test method for telescopes that makes use of a focused ring formed by an annular aperture when using a point source at a finite distance is evaluated theoretically and experimentally. The results show that the concept can be applied to near-normal, as well as grazing incidence. It is particularly suited for X-ray telescopes because of their intrinsically narrow annular apertures, and because of the largely reduced diffraction effects.

  15. Cavity Detection and Delineation Research. Report 2. Seismic Methodology: Medford Cave Site, Florida.

    DTIC Science & Technology

    1983-06-01

    energy. A distance of 50 ft was maintained between source and detector for one test and 25 ft for the other tests. Since the seismic unit was capable ... during the tests. After a recording was made, the seismic source and geophone were each moved 5 ft, thus maintaining the 50- or 25-ft source-to-detector ... produced by cavities; therefore, detection using this technique was not achieved. The sensitivity of the uphole refraction method to the presence of

  16. 40 CFR 63.2993 - What test methods must I use in conducting performance tests?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 13 2012-07-01 2012-07-01 false What test methods must I use in conducting performance tests? 63.2993 Section 63.2993 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) NATIONAL EMISSION STANDARDS FOR HAZARDOUS AIR POLLUTANTS FOR SOURCE CATEGORIES (CONTINUED) National...

  17. 40 CFR 63.2993 - What test methods must I use in conducting performance tests?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 13 2014-07-01 2014-07-01 false What test methods must I use in conducting performance tests? 63.2993 Section 63.2993 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) NATIONAL EMISSION STANDARDS FOR HAZARDOUS AIR POLLUTANTS FOR SOURCE CATEGORIES (CONTINUED) National...

  18. Testing Method for External Cladding Systems - Incerc Romania

    NASA Astrophysics Data System (ADS)

    Simion, A.; Dragne, H.

    2017-06-01

    This research presents a new full-scale testing method for external cladding systems, tested on buildings with a minimum of three floors [1]. The testing method is unique in Romania and is similar to many current fire testing methods used in European Union states. It also characterizes fire propagation and the effect of fire smoke on a building façade composed of thermal insulation. The laboratory for testing and research in building fire safety of the National Institute INCERC Bucharest provides a test method for determining the fire performance characteristics of non-loadbearing external cladding systems and external wall insulation systems when applied to the face of a building and exposed to an external fire under controlled conditions [2]. The fire exposure is representative of an external fire source, or of a fully developed (post-flashover) fire in a room venting through an opening such as a window aperture, that exposes the cladding to the effects of external flames. In the future, fire tests will be conducted in response to a number of high-profile fires in which the external façade of tall buildings provided a route for vertical fire spread.

  19. Human response research update

    NASA Technical Reports Server (NTRS)

    Schomer, Paul D.

    1990-01-01

    The methods, sources, instrumentation, the new facility at Aberdeen Proving Ground (APG), performance tests, and APG sources are briefly outlined. This presentation is represented by viewgraphs only.

  20. 40 CFR 63.805 - Performance test methods.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Section 63.805 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) NATIONAL EMISSION STANDARDS FOR HAZARDOUS AIR POLLUTANTS FOR SOURCE CATEGORIES (CONTINUED) National Emission Standards for Wood Furniture Manufacturing Operations § 63.805 Performance test methods...

  1. 40 CFR 63.805 - Performance test methods.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Section 63.805 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) NATIONAL EMISSION STANDARDS FOR HAZARDOUS AIR POLLUTANTS FOR SOURCE CATEGORIES (CONTINUED) National Emission Standards for Wood Furniture Manufacturing Operations § 63.805 Performance test methods...

  2. Simple performance evaluation of pulsed spontaneous parametric down-conversion sources for quantum communications.

    PubMed

    Smirr, Jean-Loup; Guilbaud, Sylvain; Ghalbouni, Joe; Frey, Robert; Diamanti, Eleni; Alléaume, Romain; Zaquine, Isabelle

    2011-01-17

    Fast characterization of pulsed spontaneous parametric down-conversion (SPDC) sources is important for applications in quantum information processing and communications. We propose a simple method to perform this task, which only requires measuring the counts on the two output channels and the coincidences between them, as well as modeling the filter used to reduce the source bandwidth. The proposed method is experimentally tested and used for a complete evaluation of SPDC sources (pair emission probability, total losses, and fidelity) of various bandwidths. This method can find applications in the setting up of SPDC sources and in the continuous verification of the quality of quantum communication links.
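
    A minimal sketch of the counts-and-coincidences bookkeeping underlying such a characterization is given below, assuming the low-mean-photon-number limit in which singles scale as R·μ·η_i and coincidences as R·μ·η1·η2. The paper's filter model and fidelity estimate are not included, and all rates are illustrative.

      # Estimate channel efficiencies and pair emission probability per
      # pulse from singles and coincidence rates (low-mu approximation).
      def spdc_estimates(s1, s2, c, pulse_rate):
          """s1, s2: singles rates (Hz); c: coincidence rate (Hz)."""
          eta1 = c / s2                     # efficiency (incl. losses), channel 1
          eta2 = c / s1                     # channel 2
          mu = s1 * s2 / (c * pulse_rate)   # mean pair number per pump pulse
          return eta1, eta2, mu

      eta1, eta2, mu = spdc_estimates(s1=20e3, s2=18e3, c=1.5e3, pulse_rate=76e6)
      print(f"eta1={eta1:.3f}, eta2={eta2:.3f}, mu={mu:.2e}")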

  3. Assessment and control of spacecraft electromagnetic interference

    NASA Technical Reports Server (NTRS)

    1972-01-01

    Design criteria are presented to provide guidance in assessing electromagnetic interference from onboard sources and establishing requisite control in spacecraft design, development, and testing. A comprehensive state-of-the-art review is given which covers flight experience, sources and transmission of electromagnetic interference, susceptible equipment, design procedure, control techniques, and test methods.

  4. Identification and modification of dominant noise sources in diesel engines

    NASA Astrophysics Data System (ADS)

    Hayward, Michael D.

    Determination of dominant noise sources in diesel engines is an integral step in the creation of quiet engines, but is a process which can involve an extensive series of expensive, time-consuming fired and motored tests. The goal of this research is to determine dominant noise source characteristics of a diesel engine in the near and far-fields with data from fewer tests than is currently required. Pre-conditioning and use of numerically robust methods to solve a set of cross-spectral density equations results in accurate calculation of the transfer paths between the near- and far-field measurement points. Application of singular value decomposition to an input cross-spectral matrix determines the spectral characteristics of a set of independent virtual sources, that, when scaled and added, result in the input cross spectral matrix. Each virtual source power spectral density is a singular value resulting from the decomposition performed over a range of frequencies. The complex relationship between virtual and physical sources is estimated through determination of virtual source contributions to each input measurement power spectral density. The method is made more user-friendly through use of a percentage contribution color plotting technique, where different normalizations can be used to help determine the presence of sources and the strengths of their contributions. Convolution of input measurements with the estimated path impulse responses results in a set of far-field components, to which the same singular value contribution plotting technique can be applied, thus allowing dominant noise source characteristics in the far-field to also be examined. Application of the methods presented results in determination of the spectral characteristics of dominant noise sources both in the near- and far-fields from one fired test, which significantly reduces the need for extensive fired and motored testing. Finally, it is shown that the far-field noise time history of a physically altered engine can be simulated through modification of singular values and recalculation of transfer paths between input and output measurements of previously recorded data.
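
    The virtual-source step described above can be sketched as follows: at one frequency, the Hermitian cross-spectral matrix of the input channels is decomposed by SVD, the singular values play the role of virtual-source power spectral densities, and each sensor's auto-spectrum is apportioned among the virtual sources to give percentage contributions. The data below are random stand-ins, not engine measurements.

      # SVD-based virtual-source decomposition of one cross-spectral matrix.
      import numpy as np

      rng = np.random.default_rng(1)
      n_sensors, n_snapshots = 4, 200
      X = rng.standard_normal((n_sensors, n_snapshots)) \
          + 1j * rng.standard_normal((n_sensors, n_snapshots))
      csd = X @ X.conj().T / n_snapshots     # cross-spectral matrix at one frequency

      U, s, _ = np.linalg.svd(csd)           # s: virtual-source "PSDs"
      # Fraction of each sensor's auto-spectrum explained by each virtual source
      contrib = (np.abs(U) ** 2) * s         # rows: sensors, cols: virtual sources
      contrib_pct = 100 * contrib / contrib.sum(axis=1, keepdims=True)
      print(np.round(contrib_pct, 1))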

  5. TEST METHODS TO CHARACTERIZE PARTICULATE MATTER EMISSIONS AND DEPOSITION RATES IN A RESEARCH HOUSE

    EPA Science Inventory

    The paper discusses test methods to characterize particulate matter (PM) emissions and deposition rates in a research house. In a room in the research house, specially configured for PM source testing, a high-efficiency particulate air (HEPA)-filtered air supply system, used for...

  6. New Performance Metrics for Quantitative Polymerase Chain Reaction-Based Microbial Source Tracking Methods

    EPA Science Inventory

    Binary sensitivity and specificity metrics are not adequate to describe the performance of quantitative microbial source tracking methods because the estimates depend on the amount of material tested and limit of detection. We introduce a new framework to compare the performance ...

  7. 40 CFR 60.1300 - What test methods must I use to stack test?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 6 2011-07-01 2011-07-01 false What test methods must I use to stack test? 60.1300 Section 60.1300 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Standards of Performance for Small Municipal Waste Combustion Units for...

  8. 40 CFR 60.1790 - What test methods must I use to stack test?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 6 2011-07-01 2011-07-01 false What test methods must I use to stack test? 60.1790 Section 60.1790 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Emission Guidelines and Compliance Times for Small Municipal Waste...

  9. 40 CFR 60.1790 - What test methods must I use to stack test?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 7 2012-07-01 2012-07-01 false What test methods must I use to stack test? 60.1790 Section 60.1790 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Emission Guidelines and Compliance Times for Small Municipal Waste...

  10. 40 CFR 60.1300 - What test methods must I use to stack test?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 7 2012-07-01 2012-07-01 false What test methods must I use to stack test? 60.1300 Section 60.1300 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Standards of Performance for Small Municipal Waste Combustion Units for...

  11. 40 CFR 60.1790 - What test methods must I use to stack test?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 7 2014-07-01 2014-07-01 false What test methods must I use to stack test? 60.1790 Section 60.1790 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Emission Guidelines and Compliance Times for Small Municipal Waste...

  12. 40 CFR 60.1300 - What test methods must I use to stack test?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 7 2014-07-01 2014-07-01 false What test methods must I use to stack test? 60.1300 Section 60.1300 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Standards of Performance for Small Municipal Waste Combustion Units for...

  13. 40 CFR 63.2993 - What test methods must I use in conducting performance tests?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 13 2013-07-01 2012-07-01 true What test methods must I use in conducting performance tests? 63.2993 Section 63.2993 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) NATIONAL EMISSION STANDARDS FOR HAZARDOUS AIR POLLUTANTS FOR SOURCE CATEGORIES (CONTINUED) National Emissio...

  14. 40 CFR 63.2993 - What test methods must I use in conducting performance tests?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 12 2011-07-01 2009-07-01 true What test methods must I use in conducting performance tests? 63.2993 Section 63.2993 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) NATIONAL EMISSION STANDARDS FOR HAZARDOUS AIR POLLUTANTS FOR SOURCE CATEGORIES National Emission Standards...

  15. 40 CFR 63.2993 - What test methods must I use in conducting performance tests?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 12 2010-07-01 2010-07-01 true What test methods must I use in conducting performance tests? 63.2993 Section 63.2993 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) NATIONAL EMISSION STANDARDS FOR HAZARDOUS AIR POLLUTANTS FOR SOURCE CATEGORIES National Emission Standards...

  16. 40 CFR 60.1790 - What test methods must I use to stack test?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 7 2013-07-01 2013-07-01 false What test methods must I use to stack test? 60.1790 Section 60.1790 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Emission Guidelines and Compliance Times for Small Municipal Waste...

  17. 40 CFR 60.1300 - What test methods must I use to stack test?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 7 2013-07-01 2013-07-01 false What test methods must I use to stack test? 60.1300 Section 60.1300 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Standards of Performance for Small Municipal Waste Combustion Units for...

  18. Organic liquid scintillation detectors for on-the-fly neutron/gamma alarming and radionuclide identification in a pedestrian radiation portal monitor

    NASA Astrophysics Data System (ADS)

    Paff, Marc Gerrit; Ruch, Marc L.; Poitrasson-Riviere, Alexis; Sagadevan, Athena; Clarke, Shaun D.; Pozzi, Sara

    2015-07-01

    We present new experimental results from a radiation portal monitor based on organic liquid scintillators. The system was tested as part of a 3He-free radiation portal monitor testing campaign at the European Commission's Joint Research Centre in Ispra, Italy, in February 2014. The radiation portal monitor was subjected to a wide range of test conditions described in ANSI N42.35, including a variety of gamma-ray sources and a 20,000 n/s 252Cf source. A false-alarm test checked whether the radiation portal monitor ever alarmed in the presence of only natural background; the University of Michigan Detection for Nuclear Nonproliferation Group's system triggered zero false alarms in 2739 trials. It consistently alarmed on a variety of gamma-ray sources travelling at 1.2 m/s at a 70 cm source-to-detector distance. The neutron source was detected at speeds up to 3 m/s and in configurations with up to 8 cm of high-density polyethylene shielding. The success of on-the-fly radionuclide identification varied with the gamma-ray source measured as well as with which of two radionuclide identification methods was used. Both methods compared the measured pulse height distributions to library spectra by least squares to pick the best match; they differed in how the pulse height distributions were modified prior to the comparison. Correct identification rates were as high as 100% for highly enriched uranium, but as low as 50% for 241Am. Both radionuclide identification algorithms produced mixed results, but the concept of using liquid scintillation detectors for gamma-ray and neutron alarming in radiation portal monitors was validated.

  19. Uncertainty Analyses for Back Projection Methods

    NASA Astrophysics Data System (ADS)

    Zeng, H.; Wei, S.; Wu, W.

    2017-12-01

    So far, few comprehensive error analyses for back projection methods have been conducted, although it is evident that high-frequency seismic waves can be easily affected by earthquake depth, focal mechanisms and the Earth's 3D structures. Here we perform 1D and 3D synthetic tests for two back projection methods, MUltiple SIgnal Classification (MUSIC) (Meng et al., 2011) and Compressive Sensing (CS) (Yao et al., 2011). We generate synthetics for both point sources and finite rupture sources with different depths and focal mechanisms, as well as 1D and 3D structures in the source region. The 3D synthetics are generated through a hybrid scheme combining the Direct Solution Method and the Spectral Element Method. We then back project the synthetic data using MUSIC and CS. The synthetic tests show that the depth phases can be back projected as artificial sources both in space and time. For instance, for a source depth of 10 km, back projection gives a strong signal 8 km away from the true source. Such bias increases with depth; e.g., the error in horizontal location can exceed 20 km for a depth of 40 km. If the array is located around the nodal direction of direct P-waves, the teleseismic P-waves are dominated by the depth phases, and back projections are then actually imaging the reflection points of the depth phases rather than the rupture front. Besides depth phases, the strong and long-lasting coda waves caused by 3D effects near the trench lead to the additional complexities tested here. The strength contrast of different frequency contents in the rupture models also produces some variation in the back projection results. In the synthetic tests, MUSIC and CS derive consistent results; while MUSIC is more computationally efficient, CS works better for sparse arrays. In summary, our analyses indicate that the impact of the various factors mentioned above should be taken into consideration when interpreting back projection images, before we can use them to infer earthquake rupture physics.

  20. Tolerance of freshwater test organisms to formulated sediments for use as control materials in whole-sediment toxicity tests

    USGS Publications Warehouse

    Kemble, N.E.; Dwyer, F.J.; Ingersoll, C.G.; Dawson, T.D.; Norberg-King, T. J.

    1999-01-01

    A method is described for preparing formulated sediments for use in toxicity testing. Ingredients used to prepare formulated sediments included commercially available silt, clay, sand, humic acid, dolomite, and α-cellulose (as a source of organic carbon). α-Cellulose was selected as the source of organic carbon because it is commercially available, consistent from batch to batch, and low in contaminant concentrations. The tolerance of freshwater test organisms to formulated sediments for use as control materials in whole-sediment toxicity testing was evaluated. Sediment exposures were conducted for 10 d with the amphipod Hyalella azteca, the midges Chironomus riparius and C. tentans, and the oligochaete Lumbriculus variegatus, and for 28 d with H. azteca. Responses of organisms in formulated sediments were compared with a field-collected control sediment that has routinely been used to determine test acceptability. Tolerance of organisms to formulated sediments was evaluated by determining responses to varying levels of α-cellulose, to varying levels of grain size, to different food types, or to different sources of overlying water. In the 10-d exposures, survival of organisms exposed to the formulated sediments routinely met or exceeded the responses of test organisms exposed to the control sediment and routinely met the test acceptability criteria required in standard methods. Growth of amphipods and oligochaetes in 10-d exposures with formulated sediment was often less than growth of organisms in the field-collected control sediment. Additional research is needed, using the method employed to prepare formulated sediment, to determine whether conditioning formulated sediments before starting 10-d tests would improve the growth of amphipods. In the 28-d exposures, survival of H. azteca was low when reconstituted water was used as the source of overlying water. However, when well water was used as the source of overlying water in 28-d exposures, consistent responses of amphipods were observed in both formulated and control sediments.

  1. 40 CFR Appendix A to Part 55 - Listing of State and Local Requirements Incorporated by Reference Into Part 55, by State

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    .... Significant Impact Levels (SILs) 18 AAC 50.220. Enforceable Test Methods (effective 10/01/2004) 18 AAC 50.225.../76) Rule 104 Reporting of Source Test Data and Analyses (Adopted 01/9/76) Rule 108 Alternative...) Rule 47 Source Test, Emission Monitor, and Call-Back Fees (Adopted 06/22/99) Rule 50 Opacity (Adopted...

  2. Optical nulling apparatus and method for testing an optical surface

    NASA Technical Reports Server (NTRS)

    Olczak, Eugene (Inventor); Hannon, John J. (Inventor); Dey, Thomas W. (Inventor); Jensen, Arthur E. (Inventor)

    2008-01-01

    An optical nulling apparatus for testing an optical surface includes an aspheric mirror having a reflecting surface for imaging light near or onto the optical surface under test, where the aspheric mirror is configured to reduce spherical aberration of the optical surface under test. The apparatus includes a light source for emitting light toward the aspheric mirror, the light source longitudinally aligned with the aspheric mirror and the optical surface under test. The aspheric mirror is disposed between the light source and the optical surface under test, and the emitted light is reflected off the reflecting surface of the aspheric mirror and imaged near or onto the optical surface under test. An optical measuring device is disposed between the light source and the aspheric mirror, where light reflected from the optical surface under test enters the optical measuring device. An imaging mirror is disposed longitudinally between the light source and the aspheric mirror, and the imaging mirror is configured to again reflect light, which is first reflected from the reflecting surface of the aspheric mirror, onto the optical surface under test.

  3. SYNCHROTRON RADIATION, FREE ELECTRON LASER, APPLICATION OF NUCLEAR TECHNOLOGY, ETC.: Study on the characteristics of linac based THz light source

    NASA Astrophysics Data System (ADS)

    Zhu, Xiong-Wei; Wang, Shu-Hong; Chen, Sen-Yu

    2009-10-01

    There are many linac-based methods for THz radiation production. As one of the options for the Beijing Advanced Light, an ERL test facility is proposed for THz radiation. In this test facility, there are four methods of producing THz radiation: coherent synchrotron radiation (CSR), synchrotron radiation (SR), a low-gain FEL oscillator, and a high-gain SASE FEL. In this paper, we study the characteristics of these four THz light sources.

  4. Direct measurement of air kerma rate in air from CDCS J-type caesium-137 therapy sources using a Farmer ionization chamber.

    PubMed

    Poynter, A J

    2000-04-01

    A simple method for directly measuring the reference air kerma rate from J-type 137Cs sources using a Farmer 2571 chamber has been evaluated. The method is useful as an independent means of verifying manufacturers' test data.

  5. EVALUATION OF POSSIBLE BIASES IN THE U.S. EPA'S METHOD 101A-MERCURY EMISSIONS FROM STATIONARY SOURCES

    EPA Science Inventory

    The U.S. Environmental Protection Agency (EPA), Research Triangle Park, North Carolina, has a program to evaluate and standardize source testing methods for hazardous pollutants in support of current and future air quality regulations. Occasionally, questions arise concerning an e...

  6. Performance of forty-one microbial source tracking methods: A twenty-seven lab evaluation study

    EPA Science Inventory

    The last decade has seen development of numerous new microbial source tracking (MST) methodologies, but many of these have been tested in just a few laboratories with a limited number of fecal samples. This method evaluation study examined the specificity and sensitivity of 43 ...

  7. 3D Sound Techniques for Sound Source Elevation in a Loudspeaker Listening Environment

    NASA Astrophysics Data System (ADS)

    Kim, Yong Guk; Jo, Sungdong; Kim, Hong Kook; Jang, Sei-Jin; Lee, Seok-Pil

    In this paper, we propose several 3D sound techniques for sound source elevation in stereo loudspeaker listening environments. The proposed method integrates a head-related transfer function (HRTF) for sound positioning and early reflections for adding reverberant circumstance. In addition, spectral notch filtering and directional band boosting techniques are included to increase elevation perception capability. In order to evaluate the elevation performance of the proposed method, subjective listening tests are conducted using several kinds of sound sources such as white noise, sound effects, speech, and music samples. It is shown from the tests that the degree of perceived elevation achieved by the proposed method is around 17° to 21° when the stereo loudspeakers are located on the horizontal plane.
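
    As a small illustration of the spectral-notch component mentioned above, the sketch below notches a test signal near a pinna-style frequency; the centre frequency and Q are assumptions chosen for illustration, not values taken from the paper, and the HRTF and early-reflection stages are omitted.

      # Elevation-cue sketch: notch the spectrum near a pinna-notch frequency.
      import numpy as np
      from scipy import signal

      fs = 44100
      x = np.random.default_rng(3).standard_normal(fs)   # 1 s white-noise source

      notch_hz = 8000.0   # illustrative pinna-notch frequency for a raised source
      b, a = signal.iirnotch(w0=notch_hz, Q=10.0, fs=fs)
      elevated_cue = signal.lfilter(b, a, x)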

  8. Validation of two innovative methods to measure contaminant mass flux in groundwater

    NASA Astrophysics Data System (ADS)

    Goltz, Mark N.; Close, Murray E.; Yoon, Hyouk; Huang, Junqi; Flintoft, Mark J.; Kim, Sehjong; Enfield, Carl

    2009-04-01

    The ability to quantify the mass flux of a groundwater contaminant that is leaching from a source area is critical to enable us to: (1) evaluate the risk posed by the contamination source and prioritize cleanup, (2) evaluate the effectiveness of source remediation technologies or natural attenuation processes, and (3) quantify a source term for use in models that may be applied to predict maximum contaminant concentrations in downstream wells. Recently, a number of new methods have been developed and subsequently applied to measure contaminant mass flux in groundwater in the field. However, none of these methods has been validated at larger than the laboratory scale through a comparison of measured mass flux and a known flux introduced into flowing groundwater. Two innovative flux measurement methods, the tandem circulation well (TCW) and modified integral pumping test (MIPT) methods, have recently been proposed. The TCW method can measure mass flux integrated over a large subsurface volume without extracting water, and may be implemented using two different techniques. One technique, the multi-dipole technique, is relatively simple and inexpensive, requiring only measurement of heads, while the second technique requires conducting a tracer test. The MIPT method is an easily implemented method of obtaining volume-integrated flux measurements. In the current study, flux measurements obtained using these two methods are compared with known mass fluxes in a three-dimensional artificial aquifer. Experiments in the artificial aquifer show that the TCW multi-dipole and tracer test techniques accurately estimated flux, within 2% and 16%, respectively, although the good results obtained using the multi-dipole technique may be fortuitous. The MIPT method was not as accurate as the TCW method, underestimating flux by as much as 70%. MIPT method inaccuracies may be due to the fact that the method assumptions (two-dimensional steady groundwater flow to fully screened wells) were not well approximated. While fluxes measured using the MIPT method were consistently underestimated, the method's simplicity and applicability in the field may compensate for the inaccuracies observed in this artificial aquifer test.
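
    Since the quantity under validation is simply the integral of concentration times specific discharge over a control plane, a back-of-envelope sketch is given below; the values, and the assumption of a uniform concentration over the plane, are purely illustrative.

      # Back-of-envelope: contaminant mass flux through a control plane,
      # assuming uniform concentration and specific discharge (Darcy flux).
      def mass_flux_kg_per_day(conc_kg_m3, darcy_flux_m_d, area_m2):
          return conc_kg_m3 * darcy_flux_m_d * area_m2

      # e.g. 0.1 mg/L TCE, 0.1 m/day specific discharge, 20 m2 control plane
      print(mass_flux_kg_per_day(1e-4, 0.1, 20.0), "kg/day")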

  9. Installation and Characterization of Charged Particle Sources for Space Environmental Effects Testing

    NASA Technical Reports Server (NTRS)

    Skevington, Jennifer L.

    2010-01-01

    Charged particle sources are integral devices used by Marshall Space Flight Center's Environmental Effects Branch (EM50) to simulate space environments for accurate testing of materials and systems. By using these sources inside custom vacuum systems, materials can be tested to determine charging and discharging properties as well as resistance to sputter damage. This knowledge enables scientists and engineers to choose materials that will not fail in harsh space environments. This paper combines the steps utilized to build a low-energy electron gun (the "Skevington 3000") with the methods used to characterize the output of both the Skevington 3000 and a manufactured xenon ion source. Such characterizations include beam flux, beam uniformity, and beam energy. Both sources were deemed suitable for simulating environments in future testing.

  10. In Situ NAPL Modification for Contaminant Source-Zone Passivation, Mass Flux Reduction, and Remediation

    NASA Astrophysics Data System (ADS)

    Mateas, D. J.; Tick, G.; Carroll, K. C.

    2016-12-01

    A remediation method was developed to reduce the aqueous solubility and mass flux of target NAPL contaminants through the in-situ creation of a NAPL-mixture source zone. The method was tested in the laboratory using equilibrium batch tests and two-dimensional flow-cell experiments. The creation of two different NAPL-mixture source zones was tested, in which (1) volumes of relatively insoluble n-hexadecane (HEX) or vegetable oil (VO) were injected into a trichloroethene (TCE) contaminant source zone; and (2) pre-determined HEX-TCE and VO-TCE mixture-ratio source zones were emplaced into the flow cell prior to water flushing. NAPL-aqueous phase batch tests were conducted prior to the flow-cell experiments to evaluate the effects of various NAPL mixture ratios on equilibrium aqueous-phase concentrations of TCE and toluene (TOL) and to design optimal NAPL (HEX or VO) injection volumes for the flow-cell experiments. Uniform NAPL-mixture source zones were able to quickly decrease contaminant mass flux, as demonstrated by the emplaced source-zone experiments. The success of the HEX and VO injections in also decreasing mass flux depended on the ability of these injectants to mix homogeneously with the TCE source zone. Upon injection, both HEX and VO migrated away from the source zone to some extent. However, the lack of a steady-state dissolution phase and the inefficient mass-flux-reduction/mass-removal behavior produced after VO injection suggest that VO was more effective than HEX at mixing and partitioning within the source-zone region to form a more homogeneous NAPL mixture with TCE. VO appears to be a promising source-zone injectant NAPL due to its negligible long-term toxicity and lower mobilization potential.

  11. 77 FR 1129 - Revisions to Test Methods and Testing Regulations

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-01-09

    ...This action proposes editorial and technical corrections necessary for source testing of emissions and operations. The revisions include the addition of alternative equipment and methods as well as corrections to technical and typographical errors. We also solicit public comment on potential changes to the current procedures for determining emission stratification.

  12. Testing of aircraft passenger seat cushion materials. Full scale, test description and results, volume 1

    NASA Technical Reports Server (NTRS)

    Schutter, K. J.; Gaume, J. G.; Duskin, F. E.

    1981-01-01

    Eight different seat cushion configurations were subjected to full-scale burn tests. Each cushion configuration was tested twice for a total of sixteen tests. Two different fire sources were used. They consisted of one liter of Jet A fuel for eight tests and a radiant energy source with propane flame for eight tests. Both fire sources were ignited by a propane flame. During each test, data were recorded for smoke density, cushion temperatures, radiant heat flux, animal response to combustion products, rate of weight loss of test specimens, cabin temperature, and for the type and content of gas within the cabin atmosphere. When compared to existing passenger aircraft seat cushions, the test specimens incorporating a fire barrier and those fabricated from advanced materials, using improved construction methods, exhibited significantly greater fire resistance.

  13. In vitro eye irritation testing using the open source reconstructed hemicornea - a ring trial.

    PubMed

    Mewes, Karsten R; Engelke, Maria; Zorn-Kruppa, Michaela; Bartok, Melinda; Tandon, Rashmi; Brandner, Johanna M; Petersohn, Dirk

    2017-01-01

    The aim of the present ring trial was to test whether two new methodological approaches for the in vitro classification of eye-irritating chemicals can be reliably transferred from the developers' laboratories to other sites. Both test methods are based on the well-established open source reconstructed 3D hemicornea models. In the first approach, the initial depth of injury after chemical treatment in the hemicornea model is derived from the quantitative analysis of histological sections. In the second approach, tissue viability, as a measure of corneal damage after chemical treatment, is analyzed separately for the epithelium and stroma of the hemicornea model. The three independent laboratories that participated in the ring trial produced their own hemicornea models according to the test producer's instructions, thus supporting the open source concept. A total of 9 chemicals with different physicochemical and eye-irritating properties were tested to assess the between-laboratory reproducibility (BLR), the predictive performance, and possible limitations of the test systems. The BLR was 62.5% for the first method and 100% for the second. Both methods were able to discriminate Cat. 1 chemicals from all non-Cat. 1 substances, which qualifies them for use in a top-down approach. However, the selectivity between No Cat. and Cat. 2 chemicals still needs optimization.

  14. Progress toward the development and testing of source reconstruction methods for NIF neutron imaging.

    PubMed

    Loomis, E N; Grim, G P; Wilde, C; Wilson, D C; Morgan, G; Wilke, M; Tregillis, I; Merrill, F; Clark, D; Finch, J; Fittinghoff, D; Bower, D

    2010-10-01

    Development of analysis techniques for neutron imaging at the National Ignition Facility is an important and difficult task for the detailed understanding of high-neutron-yield inertial confinement fusion implosions. Once developed, these methods must provide accurate images of the hot and cold fuels so that information about the implosion, such as symmetry and areal density, can be extracted. One method under development involves the numerical inversion of the pinhole image using knowledge of neutron transport through the pinhole aperture from Monte Carlo simulations. In this article we present results of source reconstructions based on simulated images that test the method's effectiveness with regard to pinhole misalignment.

  15. 40 CFR Appendix A to Part 55 - Listing of State and Local Requirements Incorporated by Reference Into Part 55, by State

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    .... Significant Impact Levels (SILs) 18 AAC 50.220. Enforceable Test Methods (effective 10/01/2004) 18 AAC 50.225... (Adopted 01/9/76) Rule 104 Reporting of Source Test Data and Analyses (Adopted 01/9/76) Rule 108....2Asbestos Removal Fees (Adopted 08/04/92) Rule 47Source Test, Emission Monitor, and Call-Back Fees (Adopted...

  16. Bootstrap inversion technique for atmospheric trace gas source detection and quantification using long open-path laser measurements

    NASA Astrophysics Data System (ADS)

    Alden, Caroline B.; Ghosh, Subhomoy; Coburn, Sean; Sweeney, Colm; Karion, Anna; Wright, Robert; Coddington, Ian; Rieker, Gregory B.; Prasad, Kuldeep

    2018-03-01

    Advances in natural gas extraction technology have led to increased activity in the production and transport sectors in the United States and, as a consequence, an increased need for reliable monitoring of methane leaks to the atmosphere. We present a statistical methodology in combination with an observing system for the detection and attribution of fugitive emissions of methane from distributed potential source location landscapes such as natural gas production sites. We measure long (> 500 m), integrated open-path concentrations of atmospheric methane using a dual frequency comb spectrometer and combine measurements with an atmospheric transport model to infer leak locations and strengths using a novel statistical method, the non-zero minimum bootstrap (NZMB). The new statistical method allows us to determine whether the empirical distribution of possible source strengths for a given location excludes zero. Using this information, we identify leaking source locations (i.e., natural gas wells) through rejection of the null hypothesis that the source is not leaking. The method is tested with a series of synthetic data inversions with varying measurement density and varying levels of model-data mismatch. It is also tested with field observations of (1) a non-leaking source location and (2) a source location where a controlled emission of 3.1 × 10^-5 kg s^-1 of methane gas is released over a period of several hours. This series of synthetic data tests and outdoor field observations using a controlled methane release demonstrates the viability of the approach for the detection and sizing of very small leaks of methane across large distances (4+ km^2 in synthetic tests). The field tests demonstrate the ability to attribute small atmospheric enhancements of 17 ppb to the emitting source location against a background of combined atmospheric (e.g., background methane variability) and measurement uncertainty of 5 ppb (1σ), when measurements are averaged over 2 min. The results of the synthetic and field data testing show that the new observing system and statistical approach greatly decrease the incidence of false alarms (that is, wrongly identifying a well site to be leaking) compared with the same tests that do not use the NZMB approach, and therefore offer increased leak detection and sizing capabilities.
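
    A schematic of the NZMB decision step is sketched below, under the simplifying assumption that repeated inversions yield a sample of candidate source strengths per site; a site is flagged as leaking when the bootstrap distribution of sample minima excludes zero. The threshold, sample sizes, and data are illustrative, not taken from the study.

      # Non-zero-minimum-bootstrap-style test: does the empirical
      # distribution of inverted source strengths exclude zero?
      import numpy as np

      rng = np.random.default_rng(7)

      def nzmb_flags_leak(strength_samples, n_boot=5000, alpha=0.05):
          """strength_samples: inverted strengths from repeated inversions."""
          samples = np.asarray(strength_samples)
          boot_mins = np.empty(n_boot)
          for b in range(n_boot):
              resample = rng.choice(samples, size=samples.size, replace=True)
              boot_mins[b] = resample.min()
          # Flag as leaking if the bootstrap minimum is positive in at
          # least (1 - alpha) of the replicates.
          return np.mean(boot_mins > 0.0) >= 1.0 - alpha

      leaking = rng.normal(3.1e-5, 5e-6, size=50)   # strengths near a real leak rate
      quiet = rng.normal(0.0, 5e-6, size=50)        # strengths consistent with zero
      print(nzmb_flags_leak(leaking), nzmb_flags_leak(quiet))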

  17. 40 CFR 60.344 - Test methods and procedures.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 6 2010-07-01 2010-07-01 false Test methods and procedures. 60.344 Section 60.344 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Standards of Performance for Lime...

  18. The Development of Testing Methods for Characterizing Emissions and Sources of Exposures from Polyurethane Products

    EPA Science Inventory

    The relationship between onsite manufacture of spray polyurethane foam insulation (SPFI) and potential exposures is not well understood. Currently, no comprehensive standard test methods exist for characterizing and quantifying product emissions. Exposures to diisocyanate compoun...

  19. Weighted small subdomain filtering technology

    NASA Astrophysics Data System (ADS)

    Tai, Zhenhua; Zhang, Fengxu; Zhang, Fengqin; Zhang, Xingzhou; Hao, Mengcheng

    2017-09-01

    A high-resolution method to define the horizontal edges of gravity sources is presented by improving the three-directional small subdomain filtering (TDSSF). The proposed method is the weighted small subdomain filtering (WSSF). The WSSF uses a numerical difference instead of the phase conversion in the TDSSF to reduce computational complexity. To make the WSSF less sensitive to noise, the numerical difference is combined with an averaging algorithm. Unlike the TDSSF, the WSSF uses a weighted sum to integrate the numerical difference results along four directions into one contour, making its interpretation more convenient and accurate. The locations of tightened gradient belts are used to define the edges of sources in the WSSF result. The method is tested on synthetic data; the test results show that the WSSF delineates the horizontal edges of sources more clearly and correctly, even when the sources interfere with one another and the data are corrupted by random noise. Finally, the WSSF and two other known methods are applied to a real data set. The edges detected by the WSSF are sharper and more accurate.
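
    A rough sketch of the WSSF idea as described above follows: absolute numerical differences of the gridded field along four directions, lightly averaged to suppress noise, are combined by a weighted sum into a single edge map. The equal weights and the 3x3 averaging window are illustrative choices, not the authors' exact parameters.

      # Weighted directional-difference edge map (WSSF-style sketch).
      import numpy as np

      def wssf_edge_map(field, weights=(0.25, 0.25, 0.25, 0.25)):
          diffs = [
              np.abs(np.diff(field, axis=1)),            # x direction
              np.abs(np.diff(field, axis=0)),            # y direction
              np.abs(field[1:, 1:] - field[:-1, :-1]),   # one diagonal
              np.abs(field[1:, :-1] - field[:-1, 1:]),   # other diagonal
          ]
          h, w = field.shape
          combined = np.zeros((h - 1, w - 1))
          for wgt, d in zip(weights, diffs):
              d = d[: h - 1, : w - 1]                    # crop to a common grid
              padded = np.pad(d, 1, mode="edge")         # 3x3 mean for noise suppression
              smooth = sum(padded[i:i + d.shape[0], j:j + d.shape[1]]
                           for i in range(3) for j in range(3)) / 9.0
              combined += wgt * smooth
          return combined

      field = np.zeros((50, 50))
      field[20:30, 20:30] = 1.0   # toy anomaly with sharp edges
      edges = wssf_edge_map(field)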

  20. Development of an Efficient Binaural Simulation for the Analysis of Structural Acoustic Data

    NASA Technical Reports Server (NTRS)

    Johnson, Marty E.; Lalime, Aimee L.; Grosveld, Ferdinand W.; Rizzi, Stephen A.; Sullivan, Brenda M.

    2003-01-01

    Applying binaural simulation techniques to structural acoustic data can be very computationally intensive, as the number of discrete noise sources can be very large. Typically, Head Related Transfer Functions (HRTFs) are used to individually filter the signals from each of the sources in the acoustic field, so creating a binaural simulation implies the use of potentially hundreds of real-time filters. This paper details two methods of reducing the number of real-time computations required: (i) using the singular value decomposition (SVD) to reduce the complexity of the HRTFs by breaking them into dominant singular values and vectors, and (ii) using equivalent source reduction (ESR) to reduce the number of sources to be analyzed in real time by replacing sources on the scale of a structural wavelength with sources on the scale of an acoustic wavelength. The ESR and SVD reduction methods can be combined to provide an estimated computation-time reduction of 99.4% for the structural acoustic data tested. In addition, preliminary tests have shown a 97% correlation between the results of the combined reduction methods and the results found with the current binaural simulation techniques.

  1. Determining the sensitivity of the amplitude source location (ASL) method through active seismic sources: An example from Te Maari Volcano, New Zealand

    NASA Astrophysics Data System (ADS)

    Walsh, Braden; Jolly, Arthur; Procter, Jonathan

    2017-04-01

    Using active seismic sources on Tongariro Volcano, New Zealand, the amplitude source location (ASL) method is calibrated and optimized through a series of sensitivity tests. Applying a geologic medium velocity of 1500 m/s and an attenuation value of Q=60 for surface waves, along with amplification factors computed from regional earthquakes, the ASL produced location discrepancies larger than 1.0 km horizontally and up to 0.5 km in depth. Through sensitivity tests on the input parameters, we show that the velocity and attenuation models have moderate to strong influences on the location results but can be easily constrained. Changes in location are accommodated through either lateral or depth movements. Station corrections (amplification factors) and station geometry strongly affect the ASL locations, both laterally and in depth. Calibrating the amplification factors by exploiting the active seismic source events reduced the location errors for the sources by up to 50%.
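
    The velocity (1500 m/s) and attenuation (Q = 60) quoted in the abstract slot directly into the surface-wave amplitude decay model commonly used in ASL studies. A minimal grid-search sketch in Python; the 5 Hz center frequency and the exact decay form are common assumptions, not details from the paper:

        import numpy as np

        def asl_locate(stations, amps, site_amp, grid, f=5.0, v=1500.0, q=60.0):
            # stations: (n, 3) station coordinates; amps: observed peak
            # amplitudes; site_amp: per-station amplification factors;
            # grid: (m, 3) candidate source locations.
            best_err, best_x = np.inf, None
            for x in grid:
                r = np.linalg.norm(stations - x, axis=1)
                # Surface-wave decay: 1/sqrt(r) geometric spreading times
                # anelastic attenuation exp(-pi * f * r / (q * v)).
                pred = site_amp * np.exp(-np.pi * f * r / (q * v)) / np.sqrt(r)
                a0 = (amps @ pred) / (pred @ pred)   # least-squares source amplitude
                err = np.sum((amps - a0 * pred) ** 2)
                if err < best_err:
                    best_err, best_x = err, x
            return best_x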

  2. A matrix-inversion method for gamma-source mapping from gamma-count data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adsley, Ian; Burgess, Claire; Bull, Richard K

    In a previous paper it was proposed that a simple matrix inversion method could be used to extract source distributions from gamma-count maps, using simple models to calculate the response matrix. The method was tested using numerically generated count maps. In the present work a 100 kBq Co-60 source has been placed on a gridded surface and the count rate measured using a NaI scintillation detector. The resulting map of gamma counts was used as input to the matrix inversion procedure and the source position recovered. A multi-source array was simulated by superposition of several single-source count maps and the source distribution was again recovered using matrix inversion. The measurements were performed for several detector heights. The effects of uncertainties in source-detector distances on the matrix inversion method are also examined. The results from this work give confidence in the application of the method to practical applications, such as the segregation of highly active objects amongst fuel-element debris. (authors)
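
    As a rough illustration of the procedure, the response matrix can be built from a bare inverse-square model and the source distribution recovered by non-negative least squares. The simple geometry model and the efficiency factor below are assumptions; the paper's response model may differ:

        import numpy as np
        from scipy.optimize import nnls

        def build_response(det_xy, src_xy, height, eff=1.0):
            # R[i, j]: expected count rate at detector i per unit activity
            # at grid node j, from an inverse-square model with the
            # detector at the given height above the gridded surface.
            d2 = ((det_xy[:, None, :] - src_xy[None, :, :]) ** 2).sum(-1) + height**2
            return eff / (4.0 * np.pi * d2)

        def recover_sources(response, counts):
            # Non-negative least squares keeps recovered activities physical.
            activity, residual = nnls(response, counts)
            return activity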

  3. Relationship of Source Selection Methods to Contract Outcomes: an Analysis of Air Force Source Selection

    DTIC Science & Technology

    2015-12-01

    of squares more difficult. There are, however, solutions to these issues. A weighted mean can be used in place of the grand mean1 and the STATA ...multivariate test of means provided in STATA 12.1. This test checks whether or not population variances and covariances of both DVs (PALT and CPARS

  4. Relationship of Source Selection Methods to Contract Outcomes: An Analysis of Air Force Source Selection

    DTIC Science & Technology

    2015-12-01

    however, solutions to these issues. A weighted mean can be used in place of the grand mean1 and the STATA software automatically handles the assignment of...covariance matrices between groups (i.e., sphericity) using the multivariate test of means provided in STATA 12.1. This test checks whether or not

  5. Radiation Source Mapping with Bayesian Inverse Methods

    DOE PAGES

    Hykes, Joshua M.; Azmy, Yousry Y.

    2017-03-22

    In this work, we present a method to map the spectral and spatial distributions of radioactive sources using a limited number of detectors. Locating and identifying radioactive materials is important for border monitoring, in accounting for special nuclear material in processing facilities, and in cleanup operations following a radioactive material spill. Most methods to analyze these types of problems make restrictive assumptions about the distribution of the source. In contrast, the source mapping method presented here allows an arbitrary three-dimensional distribution in space and a gamma peak distribution in energy. To apply the method, the problem is cast as an inverse problem where the system's geometry and material composition are known and fixed, while the radiation source distribution is sought. A probabilistic Bayesian approach is used to solve the resulting inverse problem since the system of equations is ill-posed. The posterior is maximized with a Newton optimization method. The probabilistic approach also provides estimates of the confidence in the final source map prediction. A set of adjoint, discrete ordinates flux solutions, obtained in this work by the Denovo code, is required to efficiently compute detector responses from a candidate source distribution. These adjoint fluxes form the linear mapping from the state space to the response space. The test of the method's success is simultaneously locating a set of 137Cs and 60Co gamma sources in a room. This test problem is solved using experimental measurements that we collected for this purpose. Because of the weak sources available for use in the experiment, some of the expected photopeaks were not distinguishable from the Compton continuum. However, by supplanting 14 flawed measurements (out of a total of 69) with synthetic responses computed by MCNP, the proof-of-principle source mapping was successful. The locations of the sources were predicted within 25 cm for two of the sources and 90 cm for the third, in a room with an ~4 m x 4 m floor plan. Finally, the predicted source intensities were within a factor of ten of their true value.
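
    For a Gaussian likelihood and prior, the MAP estimate of a linear source-to-response model has a closed form, and a single Newton step is exact. A minimal sketch under those simplifying assumptions (the paper's posterior may include positivity constraints and a more detailed noise model):

        import numpy as np

        def map_source(A, d, sigma_d, sigma_s):
            # Linear model d = A s + noise, with A the adjoint-flux mapping
            # from source space to detector-response space. A Gaussian
            # likelihood (std sigma_d) and zero-mean Gaussian prior
            # (std sigma_s) give a quadratic log-posterior, so one Newton
            # step from s = 0 is exact.
            n = A.shape[1]
            hess = A.T @ A / sigma_d**2 + np.eye(n) / sigma_s**2
            grad = A.T @ d / sigma_d**2
            s_map = np.linalg.solve(hess, grad)
            cov = np.linalg.inv(hess)   # posterior covariance -> confidence map
            return s_map, cov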

  6. Catch-up validation study of an in vitro skin irritation test method based on an open source reconstructed epidermis (phase II).

    PubMed

    Groeber, F; Schober, L; Schmid, F F; Traube, A; Kolbus-Hernandez, S; Daton, K; Hoffmann, S; Petersohn, D; Schäfer-Korting, M; Walles, H; Mewes, K R

    2016-10-01

    To replace the Draize skin irritation assay (OECD guideline 404), several test methods based on reconstructed human epidermis (RHE) have been developed and adopted in OECD test guideline 439. However, all validated test methods in the guideline are linked to RHE provided by only three companies, so the availability of these test models depends on the commercial interest of the producers. To overcome this limitation, and thus to increase the accessibility of in vitro skin irritation testing, an open source reconstructed epidermis (OS-REp) was introduced. To demonstrate the capacity of the OS-REp in regulatory risk assessment, a catch-up validation study was performed. The participating laboratories used in-house generated OS-REp to assess the set of 20 reference substances according to the performance standards amending OECD test guideline 439. Testing was performed under blinded conditions. The within-laboratory reproducibility of 87% and the inter-laboratory reproducibility of 85% demonstrate the high reliability of irritancy testing using the OS-REp protocol. In addition, the predictive capacity, with an accuracy of 80%, was comparable to previously published RHE-based test protocols. Taken together, the results indicate that the OS-REp test method can be used as a standalone alternative skin irritation test replacing OECD test guideline 404. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  7. Noise source separation of diesel engine by combining binaural sound localization method and blind source separation method

    NASA Astrophysics Data System (ADS)

    Yao, Jiachi; Xiang, Yang; Qian, Sichong; Li, Shengyang; Wu, Shaowei

    2017-11-01

    In order to separate and identify the combustion noise and the piston slap noise of a diesel engine, a noise source separation and identification method that combines binaural sound localization and blind source separation is proposed. Because a diesel engine has many complex noise sources, a lead covering was applied to the engine during the noise and vibration test to isolate interfering noise from cylinders No. 1-5; only the No. 6 cylinder parts were left bare. Two microphones that simulated the human ears were used to measure the radiated noise signals 1 m away from the engine. First, binaural sound localization was used to separate noise sources located in different places. Then, for noise sources in the same place, blind source separation was used to separate and identify them further. Finally, a coherence function method, continuous wavelet time-frequency analysis, and prior knowledge of the diesel engine were combined to identify the separation results. The results show that the proposed method can effectively separate and identify the combustion noise and the piston slap noise of a diesel engine, which are concentrated at 4350 Hz and 1988 Hz, respectively. Compared with blind source separation alone, the proposed method has superior separation and identification performance, and the separation results contain fewer interference components from other noise sources.
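
    The blind source separation stage can be sketched with an off-the-shelf ICA routine. A minimal two-microphone example, assuming instantaneous mixing (a real engine test involves convolutive mixing plus the binaural localization step described above):

        import numpy as np
        from sklearn.decomposition import FastICA

        def separate_two_mics(left, right):
            # Stack the two microphone channels and unmix with FastICA.
            # As usual with ICA, the order and scale of the recovered
            # sources are arbitrary; identification (coherence with
            # cylinder pressure, wavelet time-frequency analysis, prior
            # knowledge of the engine) comes afterwards.
            X = np.column_stack([left, right])   # (n_samples, 2)
            ica = FastICA(n_components=2, random_state=0)
            S = ica.fit_transform(X)             # (n_samples, 2)
            return S.T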

  8. 40 CFR 60.93 - Test methods and procedures.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 7 2013-07-01 2013-07-01 false Test methods and procedures. 60.93 Section 60.93 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Standards of Performance for Hot Mix Asphalt...

  9. 40 CFR 60.93 - Test methods and procedures.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 7 2012-07-01 2012-07-01 false Test methods and procedures. 60.93 Section 60.93 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Standards of Performance for Hot Mix Asphalt...

  10. 40 CFR 60.93 - Test methods and procedures.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 7 2014-07-01 2014-07-01 false Test methods and procedures. 60.93 Section 60.93 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Standards of Performance for Hot Mix Asphalt...

  11. 40 CFR 60.93 - Test methods and procedures.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 6 2011-07-01 2011-07-01 false Test methods and procedures. 60.93 Section 60.93 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Standards of Performance for Hot Mix Asphalt...

  12. 40 CFR 60.176 - Test methods and procedures.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 6 2010-07-01 2010-07-01 false Test methods and procedures. 60.176 Section 60.176 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Standards of Performance for Primary Zinc...

  13. 40 CFR 60.715 - Test methods and procedures.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 6 2011-07-01 2011-07-01 false Test methods and procedures. 60.715 Section 60.715 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Standards of Performance for Magnetic Tape...

  14. New methods for engineering site characterization using reflection and surface wave seismic survey

    NASA Astrophysics Data System (ADS)

    Chaiprakaikeow, Susit

    This study presents two new seismic testing methods for engineering applications: a new shallow seismic reflection method and Time Filtered Analysis of Surface Waves (TFASW). Both methods are described in this dissertation. The new shallow seismic reflection method was developed to measure reflection at a single point using two to four receivers, assuming homogeneous, horizontal layering. It uses one or more shakers driven by a swept-sine function as a source, and the cross-correlation technique to identify wave arrivals. The phase difference between the source forcing function and the ground motion, caused by the dynamic response of the shaker-ground interface, was corrected using a reference geophone. Attenuated high-frequency energy was also recovered using whitening in the frequency domain. The new shallow seismic reflection testing was performed at the crest of Porcupine Dam in Paradise, Utah, using two horizontal Vibroseis sources and four receivers at spacings between 6 and 300 ft. Unfortunately, the results showed no clear evidence of reflectors despite correction of the magnitude and phase of the signals. However, the corrections improved the shape of the cross-correlations: the corrected cross-correlated signals showed distinct primary lobes up to 150 ft offset, and more consistent maximum peaks were observed in the corrected waveforms. TFASW is a new surface (Rayleigh) wave method for determining the shear wave velocity profile at a site. It is a time-domain method, as opposed to the Spectral Analysis of Surface Waves (SASW) method, which is a frequency-domain method. TFASW uses digital filtering to optimize the bandwidth used to determine the dispersion curve. Results from tests at three different sites in Utah showed good agreement between the dispersion curves measured with the TFASW and SASW methods. The advantage of the TFASW method is that its dispersion curves had less scatter at long wavelengths, a result of the wider bandwidth used in those tests.
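
    The cross-correlation picking described above is straightforward to illustrate. A toy Python example with a synthetic swept-sine source and a delayed, noisy "recording"; all numbers are illustrative:

        import numpy as np
        from scipy.signal import chirp, correlate

        fs = 10_000                                   # sample rate, Hz
        t = np.arange(0, 2.0, 1.0 / fs)
        sweep = chirp(t, f0=20.0, t1=2.0, f1=500.0)   # swept-sine forcing function

        # Toy "recorded" trace: the sweep delayed by 50 ms plus noise.
        rng = np.random.default_rng(0)
        delay = int(0.050 * fs)
        rec = np.r_[np.zeros(delay), sweep][: sweep.size]
        rec += 0.1 * rng.standard_normal(sweep.size)

        # Cross-correlate the receiver trace with the source sweep; the
        # lag of the correlation's main lobe gives the arrival time.
        xc = correlate(rec, sweep, mode="full")
        lags = np.arange(-sweep.size + 1, sweep.size)
        arrival_time = lags[np.argmax(xc)] / fs       # ~0.050 s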

  15. Use of ultrasonic array method for positioning multiple partial discharge sources in transformer oil.

    PubMed

    Xie, Qing; Tao, Junhan; Wang, Yongqiang; Geng, Jianghai; Cheng, Shuyi; Lü, Fangcheng

    2014-08-01

    Fast and accurate positioning of partial discharge (PD) sources in transformer oil is very important for the safe, stable operation of power systems because it allows timely elimination of insulation faults. There is usually more than one PD source once an insulation fault occurs in the transformer oil. This study, which has both theoretical and practical significance, proposes a method for identifying multiple PD sources in transformer oil. The method combines the two-sided correlation transformation algorithm for broadband signal focusing with the modified Gerschgorin disk estimator. The multiple signal classification method is then used to determine the directions of arrival of signals from the multiple PD sources. The ultrasonic array positioning method is based on multi-platform direction finding and global optimization searching. Both a 4 × 4 square planar ultrasonic sensor array and an ultrasonic array detection platform were built to test the method of identifying and positioning multiple PD sources. The results verify the validity and the engineering practicability of the method.

  16. Noise-Source Separation Using Internal and Far-Field Sensors for a Full-Scale Turbofan Engine

    NASA Technical Reports Server (NTRS)

    Hultgren, Lennart S.; Miles, Jeffrey H.

    2009-01-01

    Noise-source separation techniques for the extraction of the sub-dominant combustion noise from the total noise signatures obtained in static-engine tests are described. Three methods are applied to data from a static, full-scale engine test. Both 1/3-octave and narrow-band results are discussed. The results are used to assess the combustion-noise prediction capability of the Aircraft Noise Prediction Program (ANOPP). A new additional phase-angle-based discriminator for the three-signal method is also introduced.
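
    One common formulation of the three-signal method estimates the auto-spectrum of the source component coherent across all three sensors from the three cross-spectra. A minimal sketch under the usual assumption that the extraneous noise at the sensors is mutually uncorrelated (the paper's phase-angle-based discriminator is an additional refinement not shown here):

        import numpy as np
        from scipy.signal import csd

        def three_signal_autospectrum(x1, x2, x3, fs, nperseg=1024):
            # Auto-spectrum of the component common to all three signals,
            # referenced to sensor 1: G_ss = |G_12 * G_13 / G_23|.
            # Valid when the extraneous noise at the three sensors is
            # mutually uncorrelated.
            f, g12 = csd(x1, x2, fs=fs, nperseg=nperseg)
            _, g13 = csd(x1, x3, fs=fs, nperseg=nperseg)
            _, g23 = csd(x2, x3, fs=fs, nperseg=nperseg)
            return f, np.abs(g12 * g13 / g23)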

  17. Obtaining the phase in the star test using genetic algorithms

    NASA Astrophysics Data System (ADS)

    Salazar Romero, Marcos A.; Vazquez-Montiel, Sergio; Cornejo-Rodriguez, Alejandro

    2004-10-01

    The star test is conceptually perhaps the most basic and simplest of all methods of testing image-forming optical systems: the irradiance distribution at the image of a point source (such as a star) is given by the Point Spread Function (PSF). The PSF is very sensitive to aberrations. One way to quantify the PSF is to measure the irradiance distribution in the image of the point source. Conversely, if we know the aberrations introduced by the optical system, we can calculate the PSF using diffraction theory. In this work we propose a method for finding the wavefront aberrations starting from the PSF, transforming the problem of fitting an aberration polynomial into an optimization problem solved with a genetic algorithm. We also show that this method is robust to noise introduced in recording the image. Results of the method are shown.

  18. Raw data normalization for a multi source inverse geometry CT system

    PubMed Central

    Baek, Jongduk; De Man, Bruno; Harrison, Daniel; Pelc, Norbert J.

    2015-01-01

    A multi-source inverse-geometry CT (MS-IGCT) system consists of a small 2D detector array and multiple x-ray sources. During data acquisition, each source is activated sequentially and may exhibit random intensity fluctuations relative to its nominal intensity. While a conventional 3rd generation CT system uses a reference channel to monitor source intensity fluctuation, each source of the MS-IGCT system illuminates only a small portion of the entire field-of-view (FOV). It is therefore difficult for all sources to illuminate a reference channel, and projection data computed by standard normalization with each source's flat-field data contain errors that can cause significant artifacts. In this work, we present a raw data normalization algorithm to reduce the image artifacts caused by source intensity fluctuation. The proposed method was tested using computer simulations with a uniform water phantom and a Shepp-Logan phantom, and using experimental data from an ice-filled PMMA phantom and a rabbit. The effect on image resolution and the robustness to noise were tested using the MTF and the standard deviation of reconstructed noise images. With intensity fluctuation and no correction, reconstructed images from simulated and experimental data show high-frequency artifacts and ring artifacts, which are removed effectively by the proposed method. The proposed method also does not degrade image resolution and is very robust to the presence of noise. PMID:25837090

  19. PSFGAN: a generative adversarial network system for separating quasar point sources and host galaxy light

    NASA Astrophysics Data System (ADS)

    Stark, Dominic; Launet, Barthelemy; Schawinski, Kevin; Zhang, Ce; Koss, Michael; Turp, M. Dennis; Sartori, Lia F.; Zhang, Hantian; Chen, Yiru; Weigel, Anna K.

    2018-06-01

    The study of unobscured active galactic nuclei (AGN) and quasars depends on the reliable decomposition of the light from the AGN point source and the extended host galaxy light. The problem is typically approached with parametric fitting routines that use separate models for the host galaxy and the point spread function (PSF). We present a new approach using a Generative Adversarial Network (GAN) trained on galaxy images. We test the method on Sloan Digital Sky Survey r-band images to which artificial AGN point sources are added and then removed, using both the GAN and parametric fitting with GALFIT. When the AGN point source is more than twice as bright as the host galaxy, we find that our method, PSFGAN, can recover point source and host galaxy magnitudes with smaller systematic error and a lower average scatter (49 per cent). PSFGAN is more tolerant of poor knowledge of the PSF than parametric methods. Our tests show that PSFGAN is robust against a broadening of the PSF width of ±50 per cent if it is trained on multiple PSFs. We demonstrate that while a matched training set does improve performance, we can still subtract point sources using a PSFGAN trained on non-astronomical images. While initial training is computationally expensive, evaluating PSFGAN on data is more than 40 times faster than GALFIT fitting two components. Finally, PSFGAN is more robust and easier to use than parametric methods, as it requires no input parameters.

  20. A method for the development of disease-specific reference standards vocabularies from textual biomedical literature resources

    PubMed Central

    Wang, Liqin; Bray, Bruce E.; Shi, Jianlin; Fiol, Guilherme Del; Haug, Peter J.

    2017-01-01

    Objective Disease-specific vocabularies are fundamental to many knowledge-based intelligent systems and applications such as text annotation, cohort selection, disease diagnostic modeling, and therapy recommendation. Reference standards are critical in the development and validation of automated methods for disease-specific vocabularies. The goal of the present study is to design and test a generalizable method for developing vocabulary reference standards from expert-curated, disease-specific biomedical literature resources. Methods We formed disease-specific corpora from literature resources such as textbooks, evidence-based synthesized online sources, clinical practice guidelines, and journal articles. Medical experts annotated and adjudicated disease-specific terms in four classes (i.e., causes or risk factors, signs or symptoms, diagnostic tests or results, and treatment). Annotations were mapped to UMLS concepts. We assessed source variation, the contribution of each source to the disease-specific vocabularies, the saturation of the vocabularies with respect to the number of sources used, and the generalizability of the method across diseases. Results The study yielded 2588 string-unique annotations for heart failure in the four classes, and 193 and 425 annotations in the treatment class for pulmonary embolism and rheumatoid arthritis, respectively. Approximately 80% of the annotations were mapped to UMLS concepts. The agreement among heart failure sources ranged between 0.28 and 0.46. The contribution of these sources to the final vocabulary ranged between 18% and 49%. With the sources explored, the heart failure vocabulary reached near saturation in all four classes with the inclusion of as few as six sources (or four to seven sources if counting only terms that occurred in two or more sources). Fewer sources were needed to reach near saturation in the treatment class for the other two diseases. Conclusions We developed a method for building disease-specific reference vocabularies. Expert-curated biomedical literature resources are a substantial source of disease-specific medical knowledge, and it is feasible to reach near saturation in a disease-specific vocabulary using a relatively small number of literature sources. PMID:26971304

  1. Evaluation and improvements of a mayfly, Neocloeon (Centroptilum) triangulifer ?(Ephemeroptera: Baetidae) toxicity test method

    EPA Science Inventory

    A recently published test method for Neocloeon triangulifer assessed the sensitivities of larval mayflies to several reference toxicants (NaCl, KCl, and CuSO4). Subsequent exposures have shown discrepancies from those results previously reported. To identify potential sources of ...

  2. Origin of fecal contamination in waters from contrasted areas: stanols as Microbial Source Tracking markers.

    PubMed

    Derrien, M; Jardé, E; Gruau, G; Pourcher, A M; Gourmelon, M; Jadas-Hécart, A; Pierson Wickmann, A C

    2012-09-01

    Improving the microbiological quality of coastal and river waters relies on the development of reliable markers that are capable of determining sources of fecal pollution. Recently, a principal component analysis (PCA) method based on six stanol compounds (i.e. 5β-cholestan-3β-ol (coprostanol), 5β-cholestan-3α-ol (epicoprostanol), 24-methyl-5α-cholestan-3β-ol (campestanol), 24-ethyl-5α-cholestan-3β-ol (sitostanol), 24-ethyl-5β-cholestan-3β-ol (24-ethylcoprostanol) and 24-ethyl-5β-cholestan-3α-ol (24-ethylepicoprostanol)) was shown to be suitable for distinguishing between porcine and bovine feces. In this study, we tested whether this PCA method, using the above six stanols, could serve as a tool among "Microbial Source Tracking" (MST) methods in water from areas of intensive agriculture, where diffuse fecal contamination is often marked by the co-existence of human and animal sources. In particular, well-defined and stable clusters were found in PCA score plots for samples of "pure" human, bovine and porcine feces, as well as for runoff and diluted waters in which the source of contamination is known. Good consistency was also observed between the source assignments made by the six-stanol-based PCA method and those made by microbial markers for river waters contaminated by fecal matter of unknown origin. More generally, the tests conducted in this study argue for adding the six-stanol PCA method to the MST toolbox to help identify fecal contamination sources. The data presented here show that this addition would improve the determination of fecal contamination sources when contamination levels are low to moderate. Copyright © 2012 Elsevier Ltd. All rights reserved.
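
    The PCA step itself is standard. A minimal sketch, assuming stanol profiles are stored as rows of relative abundances in a hypothetical CSV file:

        import numpy as np
        from sklearn.decomposition import PCA

        # Rows: fecal/water samples; columns: the six stanols (coprostanol,
        # epicoprostanol, campestanol, sitostanol, 24-ethylcoprostanol,
        # 24-ethylepicoprostanol) as relative abundances.
        profiles = np.loadtxt("stanol_profiles.csv", delimiter=",")  # hypothetical file

        pca = PCA(n_components=2)
        scores = pca.fit_transform(profiles)   # PCA centers the data internally
        # In a PC1-PC2 score plot, "pure" human, bovine and porcine samples
        # form stable clusters; a water sample of unknown origin is then
        # assigned to the nearest cluster.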

  3. Accumulated source imaging of brain activity with both low and high-frequency neuromagnetic signals

    PubMed Central

    Xiang, Jing; Luo, Qian; Kotecha, Rupesh; Korman, Abraham; Zhang, Fawen; Luo, Huan; Fujiwara, Hisako; Hemasilpin, Nat; Rose, Douglas F.

    2014-01-01

    Recent studies have revealed the importance of high-frequency brain signals (>70 Hz). One challenge of high-frequency signal analysis is that the size of the time-frequency representation of high-frequency brain signals can be larger than 1 terabyte (TB), which is beyond the upper limits of a typical computer workstation's memory (<196 GB). The aim of the present study is to develop a new method to provide greater sensitivity in detecting high-frequency magnetoencephalography (MEG) signals in a single automated and versatile interface, rather than the more traditional, time-intensive visual inspection methods, which may take up to several days. To address the aim, we developed a new method, accumulated source imaging, defined as the volumetric summation of source activity over a period of time. This method analyzes signals in both low- (1~70 Hz) and high-frequency (70~200 Hz) ranges at source levels. To extract meaningful information from MEG signals at sensor space, the signals were decomposed into a channel-cross-channel matrix (CxC) representing the spatiotemporal patterns of every possible sensor pair. A new algorithm was developed and tested by calculating the optimal CxC and source location-orientation weights for volumetric source imaging, thereby minimizing multi-source interference and reducing computational cost. The new method was implemented in C/C++ and tested with MEG data recorded from clinical epilepsy patients. The results of experimental data demonstrated that accumulated source imaging could effectively summarize and visualize MEG recordings within 12.7 h by using approximately 10 GB of computer memory. In contrast to the conventional method of visually identifying multi-frequency epileptic activities that traditionally took 2–3 days and used 1–2 TB of storage, the new approach can quantify epileptic abnormalities in both low- and high-frequency ranges at source levels, using much less time and computer memory. PMID:24904402

  4. Accumulated source imaging of brain activity with both low and high-frequency neuromagnetic signals.

    PubMed

    Xiang, Jing; Luo, Qian; Kotecha, Rupesh; Korman, Abraham; Zhang, Fawen; Luo, Huan; Fujiwara, Hisako; Hemasilpin, Nat; Rose, Douglas F

    2014-01-01

    Recent studies have revealed the importance of high-frequency brain signals (>70 Hz). One challenge of high-frequency signal analysis is that the size of the time-frequency representation of high-frequency brain signals can be larger than 1 terabyte (TB), which is beyond the upper limits of a typical computer workstation's memory (<196 GB). The aim of the present study is to develop a new method to provide greater sensitivity in detecting high-frequency magnetoencephalography (MEG) signals in a single automated and versatile interface, rather than the more traditional, time-intensive visual inspection methods, which may take up to several days. To address the aim, we developed a new method, accumulated source imaging, defined as the volumetric summation of source activity over a period of time. This method analyzes signals in both low- (1~70 Hz) and high-frequency (70~200 Hz) ranges at source levels. To extract meaningful information from MEG signals at sensor space, the signals were decomposed into a channel-cross-channel matrix (CxC) representing the spatiotemporal patterns of every possible sensor pair. A new algorithm was developed and tested by calculating the optimal CxC and source location-orientation weights for volumetric source imaging, thereby minimizing multi-source interference and reducing computational cost. The new method was implemented in C/C++ and tested with MEG data recorded from clinical epilepsy patients. The results of experimental data demonstrated that accumulated source imaging could effectively summarize and visualize MEG recordings within 12.7 h by using approximately 10 GB of computer memory. In contrast to the conventional method of visually identifying multi-frequency epileptic activities that traditionally took 2-3 days and used 1-2 TB of storage, the new approach can quantify epileptic abnormalities in both low- and high-frequency ranges at source levels, using much less time and computer memory.

  5. Ninth and Tenth Grade Students' Mathematics Self-Efficacy Beliefs: The Sources and Relationships to Teacher Classroom Interpersonal Behaviors

    ERIC Educational Resources Information Center

    White, Amanda Garrett

    2009-01-01

    The purpose of this mixed-methods action research study was to examine how changes in students' perceptions of teacher classroom interpersonal behaviors, the four efficacy sources, and mathematics self-efficacy beliefs were related. The methods used to accomplish this were descriptive statistics, t-tests, and the Pearson correlation coefficient…

  6. Selective structural source identification

    NASA Astrophysics Data System (ADS)

    Totaro, Nicolas

    2018-04-01

    In the field of acoustic source reconstruction, the inverse Patch Transfer Function (iPTF) method has recently been proposed and has shown satisfactory results whatever the shape of the vibrating surface and whatever the acoustic environment. These two interesting features are due to the virtual acoustic volume concept underlying the iPTF methods. The aim of the present article is to show how this concept of virtual subsystem can be used in structures to reconstruct the applied force distribution. Virtual boundary conditions can be applied on a part of the structure, called the virtual testing structure, to identify the force distribution applied in that zone regardless of the presence of other sources outside the zone under consideration. In the present article, the applicability of the method is demonstrated only on planar structures. However, the final example shows how the method can be applied to a planar structure of complex shape with point-welded stiffeners, even in the tested zone. In that case, if the virtual testing structure includes the stiffeners, the identified force distribution exhibits only the positions of the externally applied forces. If the virtual testing structure does not include the stiffeners, the identified force distribution makes it possible to localize both the forces due to the coupling between the structure and the stiffeners through the welded points and those due to the external forces. This is why the approach is considered here as a selective structural source identification method. It is demonstrated that this approach clearly falls in the same framework as the Force Analysis Technique, the Virtual Fields Method, and the 2D spatial Fourier transform. Even though it has a lot in common with the latter, it has some interesting particularities, such as its low sensitivity to measurement noise.

  7. The Developmental Trajectory of Spatial Listening Skills in Normal-Hearing Children

    ERIC Educational Resources Information Center

    Lovett, Rosemary Elizabeth Susan; Kitterick, Padraig Thomas; Huang, Shan; Summerfield, Arthur Quentin

    2012-01-01

    Purpose: To establish the age at which children can complete tests of spatial listening and to measure the normative relationship between age and performance. Method: Fifty-six normal-hearing children, ages 1.5-7.9 years, attempted tests of the ability to discriminate a sound source on the left from one on the right, to localize a source, to track…

  8. Development of an Efficient Binaural Simulation for the Analysis of Structural Acoustic Data

    NASA Technical Reports Server (NTRS)

    Lalime, Aimee L.; Johnson, Marty E.; Rizzi, Stephen A. (Technical Monitor)

    2002-01-01

    Binaural or "virtual acoustic" representation has been proposed as a method of analyzing acoustic and vibroacoustic data. Unfortunately, this binaural representation can require extensive computer power to apply the Head Related Transfer Functions (HRTFs) to a large number of sources, as with a vibrating structure. This work focuses on reducing the number of real-time computations required in this binaural analysis through the use of Singular Value Decomposition (SVD) and Equivalent Source Reduction (ESR). The SVD method reduces the complexity of the HRTF computations by breaking the HRTFs into dominant singular values (and vectors). The ESR method reduces the number of sources to be analyzed in real-time computation by replacing sources on the scale of a structural wavelength with sources on the scale of an acoustic wavelength. It is shown that the effectiveness of the SVD and ESR methods improves as the complexity of the source increases. In addition, preliminary auralization tests have shown that the results from both the SVD and ESR methods are indistinguishable from the results found with the exhaustive method.

  9. A power comparison of generalized additive models and the spatial scan statistic in a case-control setting.

    PubMed

    Young, Robin L; Weinberg, Janice; Vieira, Verónica; Ozonoff, Al; Webster, Thomas F

    2010-07-19

    A common, important problem in spatial epidemiology is measuring and identifying variation in disease risk across a study region. In application of statistical methods, the problem has two parts. First, spatial variation in risk must be detected across the study region and, second, areas of increased or decreased risk must be correctly identified. The location of such areas may give clues to environmental sources of exposure and disease etiology. One statistical method applicable in spatial epidemiologic settings is a generalized additive model (GAM), which can be applied with a bivariate LOESS smoother to account for geographic location as a possible predictor of disease status. A natural hypothesis when applying this method is whether residential location of subjects is associated with the outcome, i.e. is the smoothing term necessary? Permutation tests are a reasonable hypothesis testing method and provide adequate power under a simple alternative hypothesis. These tests have yet to be compared to other spatial statistics. This research uses simulated point data generated under three alternative hypotheses to evaluate the properties of the permutation methods and compare them to the popular spatial scan statistic in a case-control setting. Case 1 was a single circular cluster centered in a circular study region. The spatial scan statistic had the highest power though the GAM method estimates did not fall far behind. Case 2 was a single point source located at the center of a circular cluster and Case 3 was a line source at the center of the horizontal axis of a square study region. Each had linearly decreasing log-odds with distance from the point. The GAM methods outperformed the scan statistic in Cases 2 and 3. Comparing sensitivity, measured as the proportion of the exposure source correctly identified as high or low risk, the GAM methods outperformed the scan statistic in all three Cases. The GAM permutation testing methods provide a regression-based alternative to the spatial scan statistic. Across all hypotheses examined in this research, the GAM methods had competing or greater power estimates and sensitivities exceeding that of the spatial scan statistic.
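
    The permutation test at the heart of the GAM approach is easy to sketch generically. A minimal version in Python; stat_fn stands in for whatever statistic is used (for example, the deviance drop when the bivariate LOESS smooth of location is added to the model):

        import numpy as np

        def permutation_pvalue(labels, stat_fn, n_perm=999, seed=0):
            # labels: 0/1 case-control vector; stat_fn maps a label vector
            # to a test statistic. Shuffling the labels over the fixed
            # residential locations breaks any spatial association and
            # builds the null distribution.
            rng = np.random.default_rng(seed)
            observed = stat_fn(labels)
            null = np.array([stat_fn(rng.permutation(labels))
                             for _ in range(n_perm)])
            return (1 + np.sum(null >= observed)) / (n_perm + 1)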

  10. A power comparison of generalized additive models and the spatial scan statistic in a case-control setting

    PubMed Central

    2010-01-01

    Background A common, important problem in spatial epidemiology is measuring and identifying variation in disease risk across a study region. In application of statistical methods, the problem has two parts. First, spatial variation in risk must be detected across the study region and, second, areas of increased or decreased risk must be correctly identified. The location of such areas may give clues to environmental sources of exposure and disease etiology. One statistical method applicable in spatial epidemiologic settings is a generalized additive model (GAM), which can be applied with a bivariate LOESS smoother to account for geographic location as a possible predictor of disease status. A natural hypothesis when applying this method is whether residential location of subjects is associated with the outcome, i.e. is the smoothing term necessary? Permutation tests are a reasonable hypothesis testing method and provide adequate power under a simple alternative hypothesis. These tests have yet to be compared to other spatial statistics. Results This research uses simulated point data generated under three alternative hypotheses to evaluate the properties of the permutation methods and compare them to the popular spatial scan statistic in a case-control setting. Case 1 was a single circular cluster centered in a circular study region. The spatial scan statistic had the highest power though the GAM method estimates did not fall far behind. Case 2 was a single point source located at the center of a circular cluster and Case 3 was a line source at the center of the horizontal axis of a square study region. Each had linearly decreasing log-odds with distance from the point. The GAM methods outperformed the scan statistic in Cases 2 and 3. Comparing sensitivity, measured as the proportion of the exposure source correctly identified as high or low risk, the GAM methods outperformed the scan statistic in all three Cases. Conclusions The GAM permutation testing methods provide a regression-based alternative to the spatial scan statistic. Across all hypotheses examined in this research, the GAM methods had competing or greater power estimates and sensitivities exceeding that of the spatial scan statistic. PMID:20642827

  11. Implementation and performance test of cloud platform based on Hadoop

    NASA Astrophysics Data System (ADS)

    Xu, Jingxian; Guo, Jianhong; Ren, Chunlan

    2018-01-01

    Hadoop, an open source project of the Apache foundation, is a distributed computing framework for processing large amounts of data that has been widely used in the Internet industry. It is therefore meaningful to study how a Hadoop platform is built and how its performance is tested. This paper presents a method for implementing a Hadoop platform and a method for testing the platform's performance. Experimental results show that the proposed performance test method is effective at characterizing the performance of a Hadoop platform.

  12. Loop Heat Pipe Operation Using Heat Source Temperature for Set Point Control

    NASA Technical Reports Server (NTRS)

    Ku, Jentung; Paiva, Kleber; Mantelli, Marcia

    2011-01-01

    The LHP operating temperature is governed by the saturation temperature of its reservoir. Controlling the reservoir saturation temperature is commonly accomplished by cold-biasing the reservoir and using electrical heaters to provide the required control power. Using this method, the loop operating temperature can be controlled to within ±0.5 K. However, because of the thermal resistance between the heat source and the LHP evaporator, the heat source temperature will vary with its heat output even if the LHP operating temperature is kept constant. Since maintaining a constant heat source temperature is of most interest, a question often raised is whether the heat source temperature can be used for LHP set point temperature control. A test program with a miniature LHP has been carried out to investigate the effects on LHP operation when the control temperature sensor is placed on the heat source instead of the reservoir. In these tests, the LHP reservoir is cold-biased and is heated by a control heater. Test results show that it is feasible to use the heat source temperature for feedback control of the LHP operation. Using this method, the heat source temperature can be maintained within a tight range for moderate and high powers. At low powers, however, temperature oscillations may occur due to interactions among the reservoir control heater power, the heat source mass, and the heat output from the heat source. In addition, the heat source temperature could temporarily deviate from its set point during fast thermal transients. The implication is that more sophisticated feedback control algorithms need to be implemented for LHP transient operation when the heat source temperature is used for feedback control.

  13. Method and Apparatus for the Portable Identification of Material Thickness and Defects Using Spatially Controlled Heat Application

    NASA Technical Reports Server (NTRS)

    Cramer, K. Elliott (Inventor); Winfree, William P. (Inventor)

    1999-01-01

    A method and a portable apparatus for the nondestructive identification of defects in structures. The apparatus comprises a heat source and a thermal imager that move at a constant speed past a test surface of a structure. The thermal imager is offset at a predetermined distance from the heat source. The heat source induces a constant surface temperature. The imager follows the heat source and produces a video image of the thermal characteristics of the test surface. Material defects produce deviations from the constant surface temperature that move at the inverse of the constant speed, whereas thermal noise produces deviations that move at random speed. Computer averaging of the digitized thermal image data with respect to the constant speed therefore minimizes noise and improves the signal of valid defects. The motion of the thermographic equipment, coupled with the high signal-to-noise ratio, renders the apparatus suitable for portable, on-site analysis.

  14. A method for the microlensed flux variance of QSOs

    NASA Astrophysics Data System (ADS)

    Goodman, Jeremy; Sun, Ai-Lei

    2014-06-01

    A fast and practical method is described for calculating the microlensed flux variance of an arbitrary source by uncorrelated stars. The required inputs are the mean convergence and shear due to the smoothed potential of the lensing galaxy, the stellar mass function, and the absolute square of the Fourier transform of the surface brightness in the source plane. The mathematical approach follows previous authors but has been generalized, streamlined, and implemented in publicly available code. Examples of its application are given for Dexter and Agol's inhomogeneous-disc models as well as the usual Gaussian sources. Since the quantity calculated is a second moment of the magnification, it is only logarithmically sensitive to the sizes of very compact sources. However, for the inferred sizes of actual quasi-stellar objects (QSOs), it has some discriminatory power and may lend itself to simple statistical tests. At the very least, it should be useful for testing the convergence of microlensing simulations.

  15. Continuous wavelet transform analysis and modal location analysis acoustic emission source location for nuclear piping crack growth monitoring

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mohd, Shukri; Holford, Karen M.; Pullin, Rhys

    2014-02-12

    Source location is an important feature of acoustic emission (AE) damage monitoring in nuclear piping. The ability to accurately locate sources can assist in source characterisation and early warning of failure. This paper describes the development of a novel AE source location technique, termed Wavelet Transform analysis and Modal Location (WTML), based on Lamb wave theory and time-frequency analysis, that can be used for global monitoring of plate-like steel structures. Source location was performed on a steel pipe 1500 mm long and 220 mm in outer diameter, with a nominal thickness of 5 mm, under a planar location test setup using H-N sources. The accuracy of the new technique was compared with other AE source location methods such as the time of arrival (TOA) technique and deltaT location. The results of the study show that the WTML method produces more accurate location results compared with the TOA and triple point filtering location methods. The accuracy of the WTML approach is comparable with the deltaT location method but requires no initial acoustic calibration of the structure.
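
    For comparison, the classic TOA baseline mentioned above can be sketched as a grid search in which the unknown origin time is fit analytically at each candidate point. It assumes a single known wave speed, which is exactly the calibration burden the WTML approach avoids:

        import numpy as np

        def toa_locate(sensors, t_arr, c, grid):
            # sensors: (n, 2) positions on the unrolled pipe surface;
            # t_arr: measured first-arrival times; c: assumed wave speed;
            # grid: candidate source points.
            best_err, best_x = np.inf, None
            for x in grid:
                tt = np.linalg.norm(sensors - x, axis=1) / c   # travel times
                t0 = np.mean(t_arr - tt)                       # best-fit origin time
                err = np.sum((t_arr - (t0 + tt)) ** 2)
                if err < best_err:
                    best_err, best_x = err, x
            return best_x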

  16. A source-attractor approach to network detection of radiation sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Qishi; Barry, M. L..; Grieme, M.

    Radiation source detection using a network of detectors is an active field of research for homeland security and defense applications. We propose the Source-attractor Radiation Detection (SRD) method to aggregate measurements from a network of detectors for radiation source detection. SRD models a potential radiation source as a magnet-like attractor that pulls in pre-computed virtual points from the detector locations. A detection decision is made if a sufficient level of attraction, quantified by the increase in the clustering of the shifted virtual points, is observed. Compared with traditional methods, SRD has the following advantages: i) it does not require an accurate estimate of the source location from limited and noise-corrupted sensor readings, unlike localization-based methods, and ii) its virtual point shifting and clustering calculation involve simple arithmetic operations based on the number of detectors, avoiding the high computational complexity of grid-based likelihood estimation methods. We evaluate its detection performance using canonical datasets from the Domestic Nuclear Detection Office's (DNDO) Intelligence Radiation Sensors Systems (IRSS) tests. SRD achieves both a lower false alarm rate and a lower false negative rate compared to three existing algorithms for network source detection.

  17. 40 CFR Table 9 to Subpart Xxxx of... - Minimum Data for Continuous Compliance With the Emission Limits for Tire Production Affected Sources

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... EMISSION STANDARDS FOR HAZARDOUS AIR POLLUTANTS FOR SOURCE CATEGORIES (CONTINUED) National Emissions Standards for Hazardous Air Pollutants: Rubber Tire Manufacturing Pt. 63, Subpt. XXXX, Table 9 Table 9 to... Method 311 (40 CFR part 60, appendix A), or approved alternative method, test results indicating the mass...

  18. 40 CFR 63.9621 - What test methods and other procedures must I use to demonstrate initial compliance with the...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...) National Emission Standards for Hazardous Air Pollutants: Taconite Iron Ore Processing Initial Compliance... section. (b) For each ore crushing and handling affected source and each finished pellet handling affected... each ore crushing and handling affected source and each finished pellet handling affected source, you...

  19. Solid-phase microextraction of organophosphate pesticides in source waters for drinking water treatment facilities.

    PubMed

    Flynt, Elizabeth; Dupuy, Aubry; Kennedy, Charles; Bennett, Shanda

    2006-09-01

    The rapid detection of contaminants in our nation's drinking water has become a top homeland security priority in this time of increased national vigilance. Real-time monitoring of drinking water for deliberate or accidental contamination is key to national security. One method that can be employed for the rapid screening of pollutants in water is solid-phase microextraction (SPME). SPME is a rapid, sensitive, solvent-free technique that can be used to screen for contaminants that have been accidentally or intentionally introduced into a water system. A method using SPME has been developed and optimized for the detection of seven organophosphate pesticides in source waters for drinking water treatment facilities. The method was tested in source waters for drinking water treatment facilities in Mississippi and Alabama. Water was collected from a deepwater well at Stennis Space Center (SSC), MS, the drinking water source for SSC, and from the Converse Reservoir, the main drinking water supply for Mobile, AL. Samples of water collected from the Mobile Alabama Water and Sewer System drinking water treatment plant prior to chlorination were also tested. The method limits of detection for the seven organophosphates were comparable to those described in several Environmental Protection Agency standard methods, ranging from 0.25 to 0.94 µg/L.

  20. Preliminary Calibration Report of an Apparatus to Measure Vibration Characteristics of Low Frequency Disturbance Source Devices

    NASA Technical Reports Server (NTRS)

    Russell, James W.; Marshall, Robert A.; Finley, Tom D.; Lawrence, George F.

    1994-01-01

    This report presents a description of the test apparatus and the method of testing the low frequency disturbance source characteristics of small pumps, fans, camera motors, and recorders that are typical of those used in microgravity science facilities. The test apparatus will allow both force and acceleration spectra of these disturbance devices to be obtained from acceleration measurements over the frequency range from 2 to 300 Hz. Some preliminary calibration results are presented.

  1. A method for monitoring nuclear absorption coefficients of aviation fuels

    NASA Technical Reports Server (NTRS)

    Sprinkle, Danny R.; Shen, Chih-Ping

    1989-01-01

    A technique for monitoring variability in the nuclear absorption characteristics of aviation fuels has been developed. It is based on a highly collimated low energy gamma radiation source and a sodium iodide counter. The source and the counter assembly are separated by a geometrically well-defined test fuel cell. A computer program for determining the mass attenuation coefficient of the test fuel sample, based on the data acquired for a preset counting period, has been developed and tested on several types of aviation fuel.
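
    The underlying calculation is a Beer-Lambert estimate of the mass attenuation coefficient from the counts acquired with and without fuel in the cell. A minimal sketch with illustrative numbers (not values from the report):

        import math

        def mass_attenuation(counts_empty, counts_fuel, density, path_cm, live_time_s):
            # Beer-Lambert: I = I0 * exp(-mu_m * rho * x), so the mass
            # attenuation coefficient (cm^2/g) is ln(I0 / I) / (rho * x).
            i0 = counts_empty / live_time_s   # count rate, empty cell
            i = counts_fuel / live_time_s     # count rate, fuel-filled cell
            return math.log(i0 / i) / (density * path_cm)

        # Illustrative numbers only:
        mu_m = mass_attenuation(120_000, 85_000, density=0.80,
                                path_cm=10.0, live_time_s=600.0)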

  2. The Sedov Blast Wave as a Radial Piston Verification Test

    DOE PAGES

    Pederson, Clark; Brown, Bart; Morgan, Nathaniel

    2016-06-22

    The Sedov blast wave is of great utility as a verification problem for hydrodynamic methods. The typical implementation uses an energized cell of finite dimensions to represent the energy point source. We avoid this approximation by directly finding the effects of the energy source as a boundary condition (BC). Furthermore, the proposed method transforms the Sedov problem into an outward moving radial piston problem with a time-varying velocity. A portion of the mesh adjacent to the origin is removed and the boundaries of this hole are forced with the velocities from the Sedov solution. This verification test is implemented on two types of meshes, and convergence is shown. Our results from the typical initial condition (IC) method and the new BC method are compared.

  3. 40 CFR 799.9120 - TSCA acute dermal toxicity.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... identification number. A system to randomly assign animals to test groups and control groups is required. (E... source of test animals. (2) Method of randomization in assigning animals to test and control groups. (3... CONTROL ACT (CONTINUED) IDENTIFICATION OF SPECIFIC CHEMICAL SUBSTANCE AND MIXTURE TESTING REQUIREMENTS...

  4. 40 CFR 799.9120 - TSCA acute dermal toxicity.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... identification number. A system to randomly assign animals to test groups and control groups is required. (E... source of test animals. (2) Method of randomization in assigning animals to test and control groups. (3... CONTROL ACT (CONTINUED) IDENTIFICATION OF SPECIFIC CHEMICAL SUBSTANCE AND MIXTURE TESTING REQUIREMENTS...

  5. 40 CFR 799.9120 - TSCA acute dermal toxicity.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... identification number. A system to randomly assign animals to test groups and control groups is required. (E... source of test animals. (2) Method of randomization in assigning animals to test and control groups. (3... CONTROL ACT (CONTINUED) IDENTIFICATION OF SPECIFIC CHEMICAL SUBSTANCE AND MIXTURE TESTING REQUIREMENTS...

  6. 40 CFR 799.9120 - TSCA acute dermal toxicity.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... identification number. A system to randomly assign animals to test groups and control groups is required. (E... source of test animals. (2) Method of randomization in assigning animals to test and control groups. (3... CONTROL ACT (CONTINUED) IDENTIFICATION OF SPECIFIC CHEMICAL SUBSTANCE AND MIXTURE TESTING REQUIREMENTS...

  7. 40 CFR 799.9120 - TSCA acute dermal toxicity.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... identification number. A system to randomly assign animals to test groups and control groups is required. (E... source of test animals. (2) Method of randomization in assigning animals to test and control groups. (3... CONTROL ACT (CONTINUED) IDENTIFICATION OF SPECIFIC CHEMICAL SUBSTANCE AND MIXTURE TESTING REQUIREMENTS...

  8. 75 FR 66735 - National Fire Protection Association (NFPA): Request for Comments on NFPA's Codes and Standards

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-10-29

    ... Ignitibility of Exterior Wall Assemblies Using a Radiant Heat Energy Source. NFPA 269 Standard Test Method for Heat and Visible Smoke Release Rates for Materials and Products Using an Oxygen Consumption... Plastic Insulation. NFPA 285 Standard Fire Test Method for Evaluation of Fire Propagation...

  9. Evaluation and improvements of a mayfly, Neocloeon (Centroptilum) triangulifer (Ephemeroptera: Baetidae) toxicity test method - SETAC Europe 2016

    EPA Science Inventory

    A recently published test method for Neocloeon triangulifer assessed the survival and growth of larval mayflies exposed to several reference toxicants (NaCl, KCl, and CuSO4). These results could not be replicated in subsequent experiments. To identify potential sources of variab...

  10. Pinpointing the North Korea Nuclear tests with body waves scattered by surface topography

    NASA Astrophysics Data System (ADS)

    Wang, N.; Shen, Y.; Bao, X.; Flinders, A. F.

    2017-12-01

    On September 3, 2017, North Korea conducted its sixth and by far largest nuclear test at the Punggye-ri test site. In this work, we apply a novel full-wave location method that combines a non-linear grid-search algorithm with a 3D strain Green's tensor database to locate this event. We use the first arrivals (Pn waves) and their immediate codas, which are likely dominated by waves scattered by the surface topography near the source, to pinpoint the source location. We assess the solution in the search volume using a least-squares misfit between the observed and synthetic waveforms, which are obtained using the collocated-grid finite difference method on curvilinear grids. We calculate the one-standard-deviation level of the best solution as a posterior error estimate. Our results show that the waveform-based location method allows us to obtain accurate solutions with a small number of stations. The solutions are absolute locations, as opposed to relative locations based on relative travel times, because topography-scattered waves depend on the geometric relations between the source and the unique topography near the source. Moreover, we use both differential waveforms and traveltimes to locate pairs of the North Korea tests from 2016 and 2017 to further reduce the effects of inaccuracies in the reference velocity model (CRUST 1.0). Finally, we compare our solutions with those of other studies based on satellite images and relative traveltimes.

  11. Influence of mycorrhizal source and seeding methods on native grass species grown in soils from a disturbed site

    Treesearch

    Todd R. Caplan; Heather A. Pratt; Samuel R. Loftin

    1999-01-01

    Mycorrhizal fungi are crucial elements in native plant communities and restoring these fungi to disturbed sites is known to improve revegetation success. We tested the seedball method of plant dispersal for restoration of plants and mycorrhizal fungi to disturbed ecosystems. We tested the seedball method with a native mycorrhizal fungi inoculum, and a commercial...

  12. Non-contact method of search and analysis of pulsating vessels

    NASA Astrophysics Data System (ADS)

    Avtomonov, Yuri N.; Tsoy, Maria O.; Postnov, Dmitry E.

    2018-04-01

    Despite the variety of existing methods for recording the human pulse and a solid history of their development, there is still considerable interest in this topic. The development of new non-contact methods based on advanced image processing has caused a new wave of interest in this area. We present a simple but quite effective method for analyzing the mechanical pulsations of blood vessels lying close to the surface of the skin. Our technique is a modification of imaging (or remote) photoplethysmography (i-PPG). We supplemented this method with a laser light source, which made it possible to use other methods of searching for the candidate pulsation zone. During testing of the method, several series of experiments were carried out with artificial oscillating objects as well as with the target signal source (a human wrist). The obtained results show that our method allows correct interpretation of complex data. To summarize, we proposed and tested an alternative method for the search and analysis of pulsating vessels.

  13. Study on the Non-contact Acoustic Inspection Method for Concrete Structures by using Strong Ultrasonic Sound source

    NASA Astrophysics Data System (ADS)

    Sugimoto, Tsuneyoshi; Uechi, Itsuki; Sugimoto, Kazuko; Utagawa, Noriyuki; Katakura, Kageyoshi

    The hammering test is widely used to inspect defects in concrete structures. However, it is difficult to apply at high places, such as a tunnel ceiling or a bridge girder, and its detection accuracy depends on the tester's experience. We therefore study a non-contact acoustic inspection method for concrete structures that uses an airborne sound wave and a laser Doppler vibrometer. In this method, the concrete surface is excited by an airborne sound wave emitted from a long range acoustic device (LRAD), and the vibration velocity on the concrete surface is measured by a laser Doppler vibrometer. A defective part is detected by the same flexural resonance used in the hammering method. We have already shown that defects can be detected from a distance of 5 m or more using a concrete test object, and that the method can be applied to real concrete structures. However, when a conventional LRAD was used as the sound source, there were problems such as restrictions on the measurement angle and the surrounding noise. To solve these problems, a basic examination using a strong ultrasonic sound source was carried out. In the experiment, a concrete test object containing an imitation defect was measured from a distance of 5 m. The experimental results show that with the ultrasonic sound source the restrictions on the measurement angle become less severe and the surrounding noise falls dramatically.

  14. Verlet scheme non-conservativeness for simulation of spherical particles collisional dynamics and method of its compensation

    NASA Astrophysics Data System (ADS)

    Savin, Andrei V.; Smirnov, Petr G.

    2018-05-01

    Simulation of the collisional dynamics of a large ensemble of monodisperse particles by the discrete element method is considered. The Verlet scheme is used to integrate the equations of motion. The finite-difference scheme is found to be non-conservative in a time-step-dependent way, which is equivalent to the appearance of a purely numerical energy source during collisions. A compensation method for this source is proposed and tested.
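
    The time-step dependence of this numerical energy source is easy to reproduce in a toy setting. The sketch below (illustrative parameters, not the paper's model) integrates a particle bouncing off a linear-spring wall with the Stoermer-Verlet scheme and prints the spurious relative change in kinetic energy for several time steps.

        def bounce_energy_error(dt, k=1.0e6, m=1.0, v0=1.0):
            # Particle approaching a wall with linear contact force F = -k*x for x < 0.
            # Returns the relative change in kinetic energy over one bounce -- the
            # "pure-numerical energy source" of the abstract.
            a = lambda x: -k / m * x if x < 0.0 else 0.0
            x_prev = 0.02                  # start 2 cm from the wall
            x = x_prev - v0 * dt           # first step from the exact velocity
            while True:
                x_next = 2.0 * x - x_prev + dt * dt * a(x)   # Stoermer-Verlet
                x_prev, x = x, x_next
                if x > 0.02:               # bounce finished, back at start distance
                    break
            v_out = (x - x_prev) / dt      # exit velocity (exact in free flight)
            return abs(v_out ** 2 - v0 ** 2) / v0 ** 2

        for dt in (1e-4, 5e-5, 2.5e-5):
            print(dt, bounce_energy_error(dt))   # error shrinks with the time step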

  15. A method for reducing the largest relative errors in Monte Carlo iterated-fission-source calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hunter, J. L.; Sutton, T. M.

    2013-07-01

    In Monte Carlo iterated-fission-source calculations relative uncertainties on local tallies tend to be larger in lower-power regions and smaller in higher-power regions. Reducing the largest uncertainties to an acceptable level simply by running a larger number of neutron histories is often prohibitively expensive. The uniform fission site method has been developed to yield a more spatially-uniform distribution of relative uncertainties. This is accomplished by biasing the density of fission neutron source sites while not biasing the solution. The method is integrated into the source iteration process, and does not require any auxiliary forward or adjoint calculations. For a given amount of computational effort, the use of the method results in a reduction of the largest uncertainties relative to the standard algorithm. Two variants of the method have been implemented and tested. Both have been shown to be effective. (authors)
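
    The site-biasing idea can be sketched as follows; this is a schematic of the uniform-fission-site concept under assumed bookkeeping (region_of, per-region volume and power fractions), not the authors' implementation.

        import random

        def ufs_bank(fission_sites, region_of, volume_frac, power_frac):
            # fission_sites: list of (position, weight) candidate source sites
            # region_of    : function position -> region index
            # The expected number of banked sites per region follows the region's
            # volume fraction instead of its power fraction; weights are divided
            # by the same factor, so the mean solution is unchanged.
            banked = []
            for pos, w in fission_sites:
                r = region_of(pos)
                bias = volume_frac[r] / power_frac[r]   # > 1 in low-power regions
                n = int(bias)
                if random.random() < bias - n:          # stochastic rounding
                    n += 1
                banked.extend((pos, w / bias) for _ in range(n))
            return banked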

  16. A Method for Harmonic Sources Detection based on Harmonic Distortion Power Rate

    NASA Astrophysics Data System (ADS)

    Lin, Ruixing; Xu, Lin; Zheng, Xian

    2018-03-01

    Harmonic source detection at the point of common coupling is an essential step for harmonic contribution determination and harmonic mitigation. In this paper, a harmonic distortion power rate index based on IEEE Std 1459-2010 is proposed for harmonic source location. A method based only on harmonic distortion power is not suitable when the background harmonic level is large. To solve this problem, a threshold is determined from prior information: when the harmonic distortion power exceeds the threshold, the customer side is considered the main harmonic source; otherwise, the utility side is. A simple model of a public power system was built in MATLAB/Simulink, and field test results for typical harmonic loads verified the effectiveness of the proposed method.
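
    A minimal sketch of the decision rule described above, with the harmonic active power standing in for the paper's distortion power rate index (the exact index definition from IEEE Std 1459-2010 is not reproduced here).

        import numpy as np

        def harmonic_active_power(Vh, Ih, theta_h):
            # Active power carried by harmonics h >= 2, from rms magnitudes and
            # voltage-current phase angles at each harmonic order.
            return float(np.sum(Vh * Ih * np.cos(theta_h)))

        def dominant_harmonic_side(distortion_power, threshold):
            # Decision rule from the abstract: the threshold is set from prior
            # information to account for background harmonics.
            return "customer" if distortion_power > threshold else "utility"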

  17. Improvement and performance evaluation of the perturbation source method for an exact Monte Carlo perturbation calculation in fixed source problems

    NASA Astrophysics Data System (ADS)

    Sakamoto, Hiroki; Yamamoto, Toshihiro

    2017-09-01

    This paper presents an improvement and performance evaluation of the "perturbation source method", one of the Monte Carlo perturbation techniques. The formerly proposed perturbation source method was first-order accurate, although it is known that the method can easily be extended to an exact perturbation method. A transport equation for calculating the exact flux difference caused by a perturbation is solved. A perturbation particle representing a flux difference is explicitly transported in the perturbed system, instead of in the unperturbed system. The source term of the transport equation is defined by the unperturbed flux and the cross section (or optical parameter) changes. The unperturbed flux is provided by an "on-the-fly" technique during the course of the ordinary fixed source calculation for the unperturbed system. A set of perturbation particles is started at the collision point in the perturbed region and tracked until death. For a perturbation in a smaller portion of the whole domain, the efficiency of the perturbation source method can be improved by forcing collisions with a virtual scattering coefficient or cross section in the perturbed region. Performance is evaluated by comparing the proposed method to other Monte Carlo perturbation methods. Numerical tests performed for particle transport in a two-dimensional geometry reveal that the perturbation source method is less effective than the correlated sampling method for a perturbation in a larger portion of the whole domain. However, for a perturbation in a smaller portion, the perturbation source method outperforms the correlated sampling method. The efficiency depends strongly on the adjustment of the new virtual scattering coefficient or cross section.

  18. Theoretical value of pre-trade testing for Salmonella in Swedish cattle herds.

    PubMed

    Sternberg Lewerin, Susanna

    2018-05-01

    The Swedish Salmonella control programme includes mandatory action if Salmonella is detected in a herd. The aim of this study was to assess the relative value of different strategies for pre-movement testing of cattle. Three fictitious herds were included: dairy, beef and specialised calf-fattening. The yearly risks of introducing Salmonella with and without individual serological or bulk milk testing were assessed as well as the effects of sourcing animals from low-prevalence areas or reducing the number of source herds. The initial risk was highest for the calf-fattening herd and lowest for the beef herd. For the beef and dairy herds, the yearly risk of Salmonella introduction was reduced by about 75% with individual testing. Sourcing animals from low-prevalence areas reduced the risk by >99%. For the calf-fattening herd, the yearly risk was reduced by almost 50% by individual testing or sourcing animals from a maximum of five herds. The method was useful for illustrating effects of risk mitigation when introducing animals into a herd. Sourcing animals from low-risk areas (or herds) is more effective than single testing of individual animals or bulk milk. A comprehensive approach to reduce the risk of introducing Salmonella from source herds is justified.
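
    The kind of arithmetic behind such comparisons can be illustrated with a toy calculation; all numbers below are made-up assumptions, not values from the study.

        # Yearly risk of introducing Salmonella with purchased animals: a toy model.
        # All inputs are illustrative assumptions, not the paper's parameters.
        p_infected = 0.01    # probability a purchased animal carries Salmonella
        sensitivity = 0.75   # probability pre-movement testing detects a carrier
        n_animals = 100      # animals purchased per year

        risk_no_test = 1 - (1 - p_infected) ** n_animals
        risk_tested = 1 - (1 - p_infected * (1 - sensitivity)) ** n_animals
        print(f"no testing: {risk_no_test:.1%}, with testing: {risk_tested:.1%}")
        # With these inputs testing cuts the per-animal introduction probability
        # by 75%, the same order of reduction reported for the beef and dairy herds.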

  19. Seismological investigation of September 09 2016, North Korea underground nuclear test

    NASA Astrophysics Data System (ADS)

    Gaber, H.; Elkholy, S.; Abdelazim, M.; Hamama, I. H.; Othman, A. S.

    2017-12-01

    On Sep. 9, 2016, a seismic event of mb 5.3 took place in North Korea. This event was reported as a nuclear test. In this study, we applied a number of discriminant techniques that help distinguish between explosions and earthquakes on the Korean Peninsula. The differences between explosions and earthquakes are due to variations in source dimension, epicenter depth and source mechanism, or a combination of them. There are many seismological differences between nuclear explosions and earthquakes, but not all of them are detectable at large distances or applicable to every earthquake and explosion. The discrimination methods used in the current study include the seismic source location, source depth, differences in frequency content, complexity versus spectral ratio, and Ms-mb differences for both earthquakes and explosions. The Sep. 9, 2016, event is located in the region of the North Korean nuclear test site at essentially zero depth, which indicates a nuclear explosion. Comparison of the P wave spectra of the nuclear test and the Sep. 8, 2000, North Korea earthquake (mb 4.9) shows that the spectra of the two events are nearly the same. Applying Brune's theoretical model to the P wave spectra of both the explosion and the earthquake shows that the explosion manifests a larger corner frequency than the earthquake, reflecting the different nature of the sources. The complexity and spectral ratio were also calculated from waveform data recorded at a number of stations in order to investigate the relation between them. The observed classification percentage of this method is about 81%. Finally, the mb:Ms method is also investigated. We calculate mb and Ms for the Sep. 9, 2016, explosion and compare the result with the mb:Ms chart obtained from previous studies. This method works well for the explosion.

  20. Realistic Subsurface Anomaly Discrimination Using Electromagnetic Induction and an SVM Classifier

    NASA Astrophysics Data System (ADS)

    Pablo Fernández, Juan; Shubitidze, Fridon; Shamatava, Irma; Barrowes, Benjamin E.; O'Neill, Kevin

    2010-12-01

    The environmental research program of the United States military has set up blind tests for detection and discrimination of unexploded ordnance. One such test consists of measurements taken with the EM-63 sensor at Camp Sibert, AL. We review the performance on the test of a procedure that combines a field-potential (HAP) method to locate targets, the normalized surface magnetic source (NSMS) model to characterize them, and a support vector machine (SVM) to classify them. The HAP method infers location from the scattered magnetic field and its associated scalar potential, the latter reconstructed using equivalent sources. NSMS replaces the target with an enclosing spheroid of equivalent radial magnetization whose integral it uses as a discriminator. SVM generalizes from empirical evidence and can be adapted for multiclass discrimination using a voting system. Our method identifies all potentially dangerous targets correctly and has a false-alarm rate of about 5%.
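
    The classification stage alone can be sketched with scikit-learn's SVC, which implements one-vs-one voting for multiclass problems; the single NSMS feature and all training values below are made-up placeholders, not Camp Sibert data.

        import numpy as np
        from sklearn.svm import SVC

        # Hypothetical training set: total NSMS magnetization integral per target.
        X_train = np.array([[120.0], [115.0], [8.0], [11.0], [40.0]])
        y_train = ["ordnance", "ordnance", "clutter", "clutter", "fragment"]

        clf = SVC(kernel="rbf", gamma="scale").fit(X_train, y_train)
        print(clf.predict(np.array([[118.0], [9.5]])))  # -> ordnance, clutter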

  1. Solar cell and module performance assessment based on indoor calibration methods

    NASA Astrophysics Data System (ADS)

    Bogus, K.

    A combined space/terrestrial solar cell test calibration method that requires five steps and can be performed indoors is described. The test conditions are designed to qualify the cell or module output data under standard illumination and temperature conditions. Measurements are made of the short-circuit current, the open-circuit voltage, the maximum power, the efficiency, and the spectral response. Standard sunlight must be replicated in both earth-surface and AM0 conditions; Xe lamps are normally used for the light source, with spectral measurements taken of the light. Cell and module spectral response are assayed by using monochromators and narrow band-pass monochromatic filters. Attention is required to define the performance characteristics of modules under partial shadowing. Error sources that may affect the measurements are discussed, as are previous cell performance testing and calibration methods and their effectiveness in comparison with the behaviors of satellite solar power panels.

  2. Comparison of Calibration Methods for Tristimulus Colorimeters.

    PubMed

    Gardner, James L

    2007-01-01

    Uncertainties in source color measurements with a tristimulus colorimeter are estimated for calibration factors determined, based on a known source spectral distribution or on accurate measurements of the spectral responsivities of the colorimeter channels. Application is to the National Institute of Standards and Technology (NIST) colorimeter and an International Commission on Illumination (CIE) Illuminant A calibration. Detector-based calibration factors generally have lower uncertainties than source-based calibration factors. Uncertainties are also estimated for calculations of spectral mismatch factors. Where both spectral responsivities of the colorimeter channels and the spectral power distributions of the calibration and test sources are known, uncertainties are lowest if the colorimeter calibration factors are recalculated for the test source; this process also avoids correlations between the CIE Source A calibration factors and the spectral mismatch factors.

  3. Comparison of Calibration Methods for Tristimulus Colorimeters

    PubMed Central

    Gardner, James L.

    2007-01-01

    Uncertainties in source color measurements with a tristimulus colorimeter are estimated for calibration factors determined, based on a known source spectral distribution or on accurate measurements of the spectral responsivities of the colorimeter channels. Application is to the National Institute of Standards and Technology (NIST) colorimeter and an International Commission on Illumination (CIE) Illuminant A calibration. Detector-based calibration factors generally have lower uncertainties than source-based calibration factors. Uncertainties are also estimated for calculations of spectral mismatch factors. Where both spectral responsivities of the colorimeter channels and the spectral power distributions of the calibration and test sources are known, uncertainties are lowest if the colorimeter calibration factors are recalculated for the test source; this process also avoids correlations between the CIE Source A calibration factors and the spectral mismatch factors. PMID:27110460

  4. Rapid detection and E-test antimicrobial susceptibility testing of Vibrio parahaemolyticus isolated from seafood and environmental sources in Malaysia.

    PubMed

    Al-Othrubi, Saleh M; Hanafiah, Alfizah; Radu, Son; Neoh, Humin; Jamal, Rahaman

    2011-04-01

    To determine the prevalence and antimicrobial susceptibility of Vibrio parahaemolyticus in seafood and environmental sources. The study was carried out at the Center of Excellence for Food Safety Research, University Putra Malaysia; Universiti Kebangsaan Malaysia; Medical Molecular Biology Institute; and Universiti Kebangsaan Malaysia Hospital, Malaysia between January 2006 and August 2008. One hundred and forty-four isolates from 400 samples of seafood (122 isolates) and seawater sources (22 isolates) were investigated for the presence of the thermostable direct hemolysin (tdh+) and TDH-related hemolysin (trh+) genes using standard methods. The E-test method was used to test antimicrobial susceptibility. The study indicates a low occurrence of tdh+ (0.69%) and trh+ (8.3%) isolates. None of the isolates tested possess both virulence genes. High sensitivity was observed against tetracycline (98%). The mean minimum inhibitory concentration (MIC) of the isolates toward ampicillin increased from 4 µg/ml in 2004 to 24 µg/ml in 2007. The current study demonstrates a low occurrence of pathogenic Vibrio parahaemolyticus in the marine environment and seafood. Nonetheless, the potential risk of vibrio infection due to consumption of Vibrio parahaemolyticus contaminated seafood in Malaysia should not be neglected.

  5. Hypothesis tests for the detection of constant speed radiation moving sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dumazert, Jonathan; Coulon, Romain; Kondrasovs, Vladimir

    2015-07-01

    Radiation Portal Monitors are deployed in linear networks to detect radiological material in motion. As a complement to single and multichannel detection algorithms, inefficient under too low signal-to-noise ratios, temporal correlation algorithms have been introduced. Hypothesis test methods based on empirically estimated mean and variance of the signals delivered by the different channels have shown significant gain in terms of a tradeoff between detection sensitivity and false alarm probability. This paper discloses the concept of a new hypothesis test for temporal correlation detection methods, taking advantage of the Poisson nature of the registered counting signals, and establishes a benchmark between this test and its empirical counterpart. The simulation study validates that in the four relevant configurations of a pedestrian source carrier under respectively high and low count rate radioactive backgrounds, and a vehicle source carrier under the same respectively high and low count rate radioactive backgrounds, the newly introduced hypothesis test ensures a significantly improved compromise between sensitivity and false alarm, while guaranteeing the stability of its optimization parameter regardless of signal-to-noise ratio variations between 2 and 0.8. (authors)
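
    A minimal sketch of a Poisson-based alarm of this kind (schematic, not the authors' algorithm): under the background-only hypothesis the summed, time-shift-aligned channel counts follow a Poisson law, and the alarm fires on a small tail probability.

        from math import exp

        def poisson_sf(k, mu):
            # P(N >= k) for N ~ Poisson(mu), via the complementary cdf.
            if k <= 0:
                return 1.0
            term = exp(-mu)
            cdf = term
            for i in range(1, k):
                term *= mu / i
                cdf += term
            return max(0.0, 1.0 - cdf)

        def moving_source_alarm(channel_counts, shifts, bkg_per_channel, alpha=1e-3):
            # channel_counts: per-channel arrays of windowed counts
            # shifts        : window index where a constant-speed carrier would
            #                 appear at each channel
            total = sum(counts[s] for counts, s in zip(channel_counts, shifts))
            mu0 = bkg_per_channel * len(channel_counts)   # H0: background only
            return poisson_sf(total, mu0) < alpha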

  6. Comparison of aircraft noise measured in flight test and in the NASA Ames 40- by 80-foot wind tunnel.

    NASA Technical Reports Server (NTRS)

    Atencio, A., Jr.; Soderman, P. T.

    1973-01-01

    A method to determine free-field aircraft noise spectra from wind-tunnel measurements has been developed. The crux of the method is the correction for reverberations. Calibrated loudspeakers are used to simulate model sound sources in the wind tunnel. Corrections based on the difference between the direct and reverberant field levels are applied to wind-tunnel data for a wide range of aircraft noise sources. To establish the validity of the correction method, two research aircraft - one propeller-driven (YOV-10A) and one turbojet-powered (XV-5B) - were flown in free field and then tested in the wind tunnel. Corrected noise spectra from the two environments agree closely.

  7. Study of acoustic emission during mechanical tests of large flight weight tank structure

    NASA Technical Reports Server (NTRS)

    Mccauley, B. O.; Nakamura, Y.; Veach, C. L.

    1973-01-01

    A PPO-insulated, flight-weight, subscale aluminum tank was monitored for acoustic emissions during a proof test and during 100 cycles of environmental testing simulating space flights. A combination of frequency filtering and appropriate spatial filtering to reduce background noise was found to be sufficient to detect the relatively weak acoustic emission signals expected from subcritical crack growth in the structure. Several emission source locations were identified, including one where a flaw was detected by post-test x-ray inspections. For most source locations, however, post-test inspections did not detect flaws; this was partially attributed to the acoustic emission technique's higher sensitivity compared with other currently available NDT methods for detecting flaws. For these non-verifiable emission sources, a problem remains in correctly interpreting the observed emission signals.

  8. High-order scheme for the source-sink term in a one-dimensional water temperature model

    PubMed Central

    Jing, Zheng; Kang, Ling

    2017-01-01

    The source-sink term in water temperature models represents the net heat absorbed or released by a water system. This term is very important because it accounts for solar radiation, which can significantly affect water temperature, especially in lakes. However, existing numerical methods for discretizing the source-sink term are very simplistic, causing significant deviations between simulation results and measured data. To address this problem, we present a numerical method specific to the source-sink term. A vertical one-dimensional heat conduction equation was chosen to describe water temperature changes. A two-step operator-splitting method was adopted as the numerical solution. In the first step, using the undetermined coefficient method, a high-order scheme was adopted for discretizing the source-sink term. In the second step, the diffusion term was discretized using the Crank-Nicolson scheme. The effectiveness and capability of the numerical method were assessed by performing numerical tests. Then, the proposed numerical method was applied to a simulation of Guozheng Lake (located in central China). The modeling results were in excellent agreement with measured data. PMID:28264005
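
    The two-step splitting can be sketched as follows; this is a schematic in which a classical RK4 substep stands in for the paper's high-order undetermined-coefficient scheme, with insulated boundaries assumed.

        import numpy as np

        def step_temperature(T, dt, dz, kappa, source):
            # One operator-split step of dT/dt = kappa * d2T/dz2 + S(T):
            # (1) integrate the source-sink term (RK4 substep, standing in for
            #     the paper's high-order scheme),
            # (2) advance diffusion with Crank-Nicolson (zero-flux ends).
            n = T.size
            k1 = source(T)
            k2 = source(T + 0.5 * dt * k1)
            k3 = source(T + 0.5 * dt * k2)
            k4 = source(T + dt * k3)
            T = T + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
            # Crank-Nicolson: (I - r/2 L) T_new = (I + r/2 L) T
            r = kappa * dt / dz ** 2
            L = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
            L[0, 1] = L[-1, -2] = 2.0      # mirror ghost nodes: insulated ends
            A = np.eye(n) - 0.5 * r * L
            B = np.eye(n) + 0.5 * r * L
            return np.linalg.solve(A, B @ T)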

  9. High-order scheme for the source-sink term in a one-dimensional water temperature model.

    PubMed

    Jing, Zheng; Kang, Ling

    2017-01-01

    The source-sink term in water temperature models represents the net heat absorbed or released by a water system. This term is very important because it accounts for solar radiation, which can significantly affect water temperature, especially in lakes. However, existing numerical methods for discretizing the source-sink term are very simplistic, causing significant deviations between simulation results and measured data. To address this problem, we present a numerical method specific to the source-sink term. A vertical one-dimensional heat conduction equation was chosen to describe water temperature changes. A two-step operator-splitting method was adopted as the numerical solution. In the first step, using the undetermined coefficient method, a high-order scheme was adopted for discretizing the source-sink term. In the second step, the diffusion term was discretized using the Crank-Nicolson scheme. The effectiveness and capability of the numerical method were assessed by performing numerical tests. Then, the proposed numerical method was applied to a simulation of Guozheng Lake (located in central China). The modeling results were in excellent agreement with measured data.

  10. Drinking water quality standards and standard tests: Worldwide. (Latest citations from the Food Science and Technology Abstracts database). Published Search

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1993-06-01

    The bibliography contains citations concerning standards and standard tests for water quality in drinking water sources, reservoirs, and distribution systems. Standards from domestic and international sources are presented. Glossaries and vocabularies that concern water quality analysis, testing, and evaluation are included. Standard test methods for individual elements, selected chemicals, sensory properties, radioactivity, and other chemical and physical properties are described. Discussions for proposed standards on new pollutant materials are briefly considered. (Contains a minimum of 203 citations and includes a subject term index and title list.)

  11. A new method and application for determining the nitrogen isotopic composition of NOx

    NASA Astrophysics Data System (ADS)

    Hastings, M. G.; Miller, D. J.; Wojtal, P.; O'Connor, M.

    2015-12-01

    Atmospheric nitrogen oxides (NOx = NO + NO2) play key roles in atmospheric chemistry, air quality, and radiative forcing, and contribute to nitric acid deposition. Sources of NOx include both natural and anthropogenic emissions, which vary significantly in space and time. NOx isotopic signatures offer a potentially valuable tool to trace source impacts on atmospheric chemistry and regional acid deposition. Previous work on NOx isotopic signatures suggests large ranges in values, even from the same emission source, as well as overlapping ranges amongst different sources, making it difficult to use the isotopic composition as a quantitative tracer of source influences. These prior measurements have utilized a variety of methods for collecting the NOx as nitrate or nitrite for isotopic analysis, and testing of some of these methods (including active and passive collections) reveals inconsistencies in collection efficiency, as well as issues related to changes in conditions such as humidity, temperature, and NOx fluxes. A recently developed method allows for accurately measuring the nitrogen isotopic composition of NOx after capturing the NOx in a potassium permanganate/sodium hydroxide solution as nitrate (Fibiger et al., Anal. Chem., 2014). The method has been thoroughly tested in the laboratory and field, and efficiently collects NO and NO2 under a variety of conditions. There are several advantages to collecting NOx actively, including the ability to collect over minutes to hourly time scales, and the ability to collect in environments with highly variable NOx sources and concentrations. Challenges include a nitrate background present in potassium permanganate (solid and liquid forms), accurately deriving ambient NOx concentrations based upon flow rate and solution concentrations above this variable background, and potential interferences from other nitrogen species. This method was designed to collect NOx in environments with very different emission source loadings in an effort to isotopically characterize NOx sources. Results to date suggest very different values, and less variability than previous work, particularly for vehicle emissions. Ultimately, we aim to determine whether the influence of NOx sources can be quantitatively tracked in the environment.

  12. Open-Source Photometric System for Enzymatic Nitrate Quantification

    PubMed Central

    Wittbrodt, B. T.; Squires, D. A.; Walbeck, J.; Campbell, E.; Campbell, W. H.; Pearce, J. M.

    2015-01-01

    Nitrate, the most oxidized form of nitrogen, is regulated to protect people and animals from harmful levels, as there is a large overabundance due to anthropogenic factors. Widespread field testing for nitrate could begin to address the nitrate pollution problem; however, the Cadmium Reduction Method, the leading certified method to detect and quantify nitrate, demands the use of a toxic heavy metal. An alternative, the recently proposed Environmental Protection Agency Nitrate Reductase Nitrate-Nitrogen Analysis Method, eliminates this problem but requires an expensive proprietary spectrophotometer. The development of an inexpensive portable, handheld photometer will greatly expedite field nitrate analysis to combat pollution. To accomplish this goal, a methodology is presented for the design, development, and technical validation of an improved open-source water testing platform capable of performing the Nitrate Reductase Nitrate-Nitrogen Analysis Method. This approach is evaluated for its potential to i) eliminate the need for toxic chemicals in water testing for nitrate and nitrite, ii) reduce the cost of equipment to perform this method for measurement of water quality, and iii) make the method easier to carry out in the field. The device is able to perform as well as commercial proprietary systems for less than 15% of the cost of materials. This allows for greater access to the technology and the new, safer nitrate testing technique. PMID:26244342
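
    The underlying photometric computation such a device performs is compact; the sketch below is illustrative, and the calibration slope and intercept are made-up values, not the published calibration.

        import math

        def absorbance(sample_counts, blank_counts, dark_counts=0.0):
            # Beer-Lambert absorbance from raw photodetector readings.
            I = sample_counts - dark_counts    # transmitted, sample cuvette
            I0 = blank_counts - dark_counts    # transmitted, reagent blank
            return -math.log10(I / I0)

        def nitrate_n_mg_per_l(A, slope=0.45, intercept=0.02):
            # Linear calibration fitted to nitrate standards; slope and
            # intercept here are illustrative assumptions.
            return (A - intercept) / slope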

  13. Open-Source Photometric System for Enzymatic Nitrate Quantification.

    PubMed

    Wittbrodt, B T; Squires, D A; Walbeck, J; Campbell, E; Campbell, W H; Pearce, J M

    2015-01-01

    Nitrate, the most oxidized form of nitrogen, is regulated to protect people and animals from harmful levels, as there is a large overabundance due to anthropogenic factors. Widespread field testing for nitrate could begin to address the nitrate pollution problem; however, the Cadmium Reduction Method, the leading certified method to detect and quantify nitrate, demands the use of a toxic heavy metal. An alternative, the recently proposed Environmental Protection Agency Nitrate Reductase Nitrate-Nitrogen Analysis Method, eliminates this problem but requires an expensive proprietary spectrophotometer. The development of an inexpensive portable, handheld photometer will greatly expedite field nitrate analysis to combat pollution. To accomplish this goal, a methodology is presented for the design, development, and technical validation of an improved open-source water testing platform capable of performing the Nitrate Reductase Nitrate-Nitrogen Analysis Method. This approach is evaluated for its potential to i) eliminate the need for toxic chemicals in water testing for nitrate and nitrite, ii) reduce the cost of equipment to perform this method for measurement of water quality, and iii) make the method easier to carry out in the field. The device is able to perform as well as commercial proprietary systems for less than 15% of the cost of materials. This allows for greater access to the technology and the new, safer nitrate testing technique.

  14. Quantifying the isotopic composition of NOx emission sources: An analysis of collection methods

    NASA Astrophysics Data System (ADS)

    Fibiger, D.; Hastings, M.

    2012-04-01

    We analyze various collection methods for nitrogen oxides, NOx (NO2 and NO), used to evaluate the nitrogen isotopic composition (δ15N). Atmospheric NOx is a major contributor to acid rain deposition upon its conversion to nitric acid; it also plays a significant role in determining air quality through the production of tropospheric ozone. NOx is released by both anthropogenic (fossil fuel combustion, biomass burning, aircraft emissions) and natural (lightning, biogenic production in soils) sources. Global concentrations of NOx are rising because of increased anthropogenic emissions, while natural source emissions also contribute significantly to the global NOx burden. The contributions of both natural and anthropogenic sources and their considerable variability in space and time make it difficult to attribute local NOx concentrations (and, thus, nitric acid) to a particular source. Several recent studies suggest that variability in the isotopic composition of nitric acid deposition is related to variability in the isotopic signatures of NOx emission sources. Nevertheless, the isotopic composition of most NOx sources has not been thoroughly constrained. Ultimately, the direct capture and quantification of the nitrogen isotopic signatures of NOx sources will allow for the tracing of NOx emissions sources and their impact on environmental quality. Moreover, this will provide a new means by which to verify emissions estimates and atmospheric models. We present laboratory results of methods used for capturing NOx from air into solution. A variety of methods have been used in field studies, but no independent laboratory verification of the efficiencies of these methods has been performed. When analyzing isotopic composition, it is important that NOx be collected quantitatively or the possibility of fractionation must be constrained. We have found that collection efficiency can vary widely under different conditions in the laboratory and fractionation does not vary predictably with collection efficiency. For example, prior measurements frequently utilized triethanolamine solution for collecting NOx, but the collection efficiency was found to drop quickly as the solution aged. The most promising method tested is a NaOH/KMnO4 solution (Margeson and Knoll, Anal. Chem., 1985) which can collect NOx quantitatively from the air. Laboratory tests of previously used methods, along with progress toward creating a suitable and verifiable field deployable collection method will be presented.

  15. Accurate source location from waves scattered by surface topography: Applications to the Nevada and North Korean test sites

    NASA Astrophysics Data System (ADS)

    Shen, Y.; Wang, N.; Bao, X.; Flinders, A. F.

    2016-12-01

    Scattered waves generated near the source contain energy converted from the near-field waves to the far-field propagating waves, which can be used to achieve location accuracy beyond the diffraction limit. In this work, we apply a novel full-wave location method that combines a grid-search algorithm with the 3D Green's tensor database to locate the Non-Proliferation Experiment (NPE) at the Nevada test site and the North Korean nuclear tests. We use the first arrivals (Pn/Pg) and their immediate codas, which are likely dominated by waves scattered at the surface topography near the source, to determine the source location. We investigate seismograms in the [1.0, 2.0] Hz frequency band to reduce noise in the data and highlight topography-scattered waves. High-resolution topographic models constructed from 10 and 90 m grids are used for Nevada and North Korea, respectively. The reference velocity model is based on CRUST 1.0. We use the collocated-grid finite difference method on curvilinear grids to calculate the strain Green's tensor and obtain synthetic waveforms using source-receiver reciprocity. The 'best' solution is found based on the least-squares misfit between the observed and synthetic waveforms. To suppress random noise, an optimal weighting method for three-component seismograms is applied in the misfit calculation. Our results show that the scattered waves are crucial in improving resolution and allow us to obtain accurate solutions with a small number of stations. Since the scattered waves depend on topography, which is known at the wavelengths of regional seismic waves, our approach yields absolute, instead of relative, source locations. We compare our solutions with those of USGS and other studies. Moreover, we use differential waveforms to locate pairs of the North Korea tests from 2006, 2009, 2013 and 2016 to further reduce the effects of unmodeled heterogeneities and errors in the reference velocity model.

  16. The use of triangulation in qualitative research.

    PubMed

    Carter, Nancy; Bryant-Lukosius, Denise; DiCenso, Alba; Blythe, Jennifer; Neville, Alan J

    2014-09-01

    Triangulation refers to the use of multiple methods or data sources in qualitative research to develop a comprehensive understanding of phenomena (Patton, 1999). Triangulation also has been viewed as a qualitative research strategy to test validity through the convergence of information from different sources. Denzin (1978) and Patton (1999) identified four types of triangulation: (a) method triangulation, (b) investigator triangulation, (c) theory triangulation, and (d) data source triangulation. The current article will present the four types of triangulation followed by a discussion of the use of focus groups (FGs) and in-depth individual (IDI) interviews as an example of data source triangulation in qualitative inquiry.

  17. Comparing the field and laboratory emission cell (FLEC) with traditional emissions testing chambers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roache, N.F.; Guo, Z.; Fortmann, R.

    1996-12-31

    A series of tests was designed to evaluate the performance of the field and laboratory emission cell (FLEC) as applied to the testing of emissions from two indoor coating materials, floor wax and latex paint. These tests included validation of the repeatability of the test method, evaluation of the effect of different air velocities on source emissions, and a comparison of FLEC versus small-chamber characterization of emissions. The FLEC exhibited good repeatability in characterization of emissions when applied to both sources under identical conditions. Tests with different air velocities showed significant effects on the emissions from latex paint, yet little effect on emissions from the floor wax. Comparisons of data from the FLEC and small chamber show good correlation for measurements involving floor wax, but less favorable results for emissions from latex paint. The procedures and findings are discussed; conclusions are limited and include emphasis on the need for additional study and development of a standard method.

  18. SOURCE SAMPLING AND ANALYSIS GUIDANCE - A METHODS DIRECTORY

    EPA Science Inventory

    Sampling and analytical methodologies are needed by EPA and industry for testing stationary sources for specific organic compounds such as those listed under the Resource Conservation and Recovery Act (RCRA) Appendix VIII and Appendix IX and the Clean Air Act of 1990. Computerized...

  19. A robust hypothesis test for the sensitive detection of constant speed radiation moving sources

    NASA Astrophysics Data System (ADS)

    Dumazert, Jonathan; Coulon, Romain; Kondrasovs, Vladimir; Boudergui, Karim; Moline, Yoann; Sannié, Guillaume; Gameiro, Jordan; Normand, Stéphane; Méchin, Laurence

    2015-09-01

    Radiation Portal Monitors are deployed in linear networks to detect radiological material in motion. As a complement to single and multichannel detection algorithms, inefficient under too low signal-to-noise ratios, temporal correlation algorithms have been introduced. Test hypothesis methods based on empirically estimated mean and variance of the signals delivered by the different channels have shown significant gain in terms of a tradeoff between detection sensitivity and false alarm probability. This paper discloses the concept of a new hypothesis test for temporal correlation detection methods, taking advantage of the Poisson nature of the registered counting signals, and establishes a benchmark between this test and its empirical counterpart. The simulation study validates that in the four relevant configurations of a pedestrian source carrier under respectively high and low count rate radioactive backgrounds, and a vehicle source carrier under the same respectively high and low count rate radioactive backgrounds, the newly introduced hypothesis test ensures a significantly improved compromise between sensitivity and false alarm. It also guarantees that the optimal coverage factor for this compromise remains stable regardless of signal-to-noise ratio variations between 2 and 0.8, therefore allowing the final user to parametrize the test with the sole prior knowledge of background amplitude.

  20. A double-correlation tremor-location method

    NASA Astrophysics Data System (ADS)

    Li, Ka Lok; Sgattoni, Giulia; Sadeghisorkhani, Hamzeh; Roberts, Roland; Gudmundsson, Olafur

    2017-02-01

    A double-correlation method is introduced to locate tremor sources based on stacks of complex, doubly-correlated tremor records of multiple triplets of seismographs back projected to hypothetical source locations in a geographic grid. Peaks in the resulting stack of moduli are inferred source locations. The stack of the moduli is a robust measure of energy radiated from a point source or point sources even when the velocity information is imprecise. Application to real data shows how double correlation focuses the source mapping compared to the common single correlation approach. Synthetic tests demonstrate the robustness of the method and its resolution limitations which are controlled by the station geometry, the finite frequency of the signal, the quality of the used velocity information and noise level. Both random noise and signal or noise correlated at time shifts that are inconsistent with the assumed velocity structure can be effectively suppressed. Assuming a surface wave velocity, we can constrain the source location even if the surface wave component does not dominate. The method can also in principle be used with body waves in 3-D, although this requires more data and seismographs placed near the source for depth resolution.
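
    The structure of the back-projection can be sketched as follows; this is a schematic, not the authors' code: analytic signals computed via FFT approximate the complex traces, a homogeneous propagation speed is assumed, and the helper names are hypothetical.

        import numpy as np
        from itertools import combinations

        def double_correlation_map(records, positions, grid, v, dt):
            # records  : dict station -> tremor record (equal length, step dt)
            # positions: dict station -> np.array([x, y])
            # grid     : iterable of candidate source positions np.array([x, y])
            def analytic(x):                      # analytic signal via FFT
                n = len(x)
                h = np.zeros(n); h[0] = 1.0
                h[1:(n + 1) // 2] = 2.0
                if n % 2 == 0:
                    h[n // 2] = 1.0
                return np.fft.ifft(np.fft.fft(x) * h)
            z = {s: analytic(r) for s, r in records.items()}
            out = []
            for g in grid:
                stack = 0.0
                for i, j, k in combinations(records, 3):
                    # predicted lags (samples) relative to station i
                    lag_ij = int(round((np.linalg.norm(positions[j] - g)
                                        - np.linalg.norm(positions[i] - g)) / v / dt))
                    lag_ik = int(round((np.linalg.norm(positions[k] - g)
                                        - np.linalg.norm(positions[i] - g)) / v / dt))
                    cij = np.vdot(z[i], np.roll(z[j], -lag_ij))   # shifted correlation
                    cik = np.vdot(z[i], np.roll(z[k], -lag_ik))
                    stack += cij * np.conj(cik)                    # double correlation
                out.append(abs(stack))             # stack of moduli
            return np.array(out)                   # peaks mark inferred sources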

  1. An air brake model for longitudinal train dynamics studies

    NASA Astrophysics Data System (ADS)

    Wei, Wei; Hu, Yang; Wu, Qing; Zhao, Xubao; Zhang, Jun; Zhang, Yuan

    2017-04-01

    Experience with heavy haul train operation shows that fatigue fracture of couplers and their related components, and even accidents, are caused by excessive coupler forces. The most economical and effective way to study longitudinal train impulses, and thereby reduce coupler forces, is simulation. The characteristics of the train air brake system are an important excitation source in the study of longitudinal impulses. Because it is very difficult to obtain the braking characteristics by testing, a better way to get the input parameters for the excitation source in longitudinal train dynamics is to model the train air brake system. In this paper, the air brake system model of an integrated air brake and longitudinal dynamics system is introduced, with a focus on the locomotive automatic brake valve and the vehicle distribution valve models, and a comparative analysis of the simulation and test results of the braking system is given. It is shown that the model can predict the characteristics of the train braking system. This method provides a good solution for the excitation source of a longitudinal dynamics analysis system.

  2. Characterization for imperfect polarizers under imperfect conditions.

    PubMed

    Nee, S M; Yoo, C; Cole, T; Burge, D

    1998-01-01

    The principles for measuring the extinction ratio and transmittance of a polarizer are formulated by use of the principal Mueller matrix, which includes both polarization and depolarization. The extinction ratio is about half of the depolarization, and the contrast is the inverse of the extinction ratio. Errors in the extinction ratio caused by partially polarized incident light and the misalignment of polarizers can be corrected by the devised zone average method and the null method. Used with a laser source, the null method can measure contrasts for very good polarizers. Correct algorithms are established to deduce the depolarization for three comparable polarizers calibrated mutually. These methods are tested with wire-grid polarizers used in the 3-5-µm wavelength region with a laser source and also a lamp source. The contrasts obtained from both methods agree.
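
    The zone-averaging idea reduces to simple arithmetic on transmittance readings; the sketch below is illustrative only and omits the null-method and depolarization corrections described above.

        def extinction_and_contrast(T_crossed_zones, T_parallel_zones):
            # Averaging readings taken in complementary polarizer zones
            # (e.g., 0/90 degree flips) cancels partial polarization of the
            # source; variable names are assumptions, not the paper's notation.
            Tc = sum(T_crossed_zones) / len(T_crossed_zones)
            Tp = sum(T_parallel_zones) / len(T_parallel_zones)
            extinction = Tc / Tp
            return extinction, 1.0 / extinction   # contrast = 1 / extinction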

  3. DETECTING UNSPECIFIED STRUCTURE IN LOW-COUNT IMAGES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stein, Nathan M.; Dyk, David A. van; Kashyap, Vinay L.

    Unexpected structure in images of astronomical sources often presents itself upon visual inspection of the image, but such apparent structure may either correspond to true features in the source or be due to noise in the data. This paper presents a method for testing whether inferred structure in an image with Poisson noise represents a significant departure from a baseline (null) model of the image. To infer image structure, we conduct a Bayesian analysis of a full model that uses a multiscale component to allow flexible departures from the posited null model. As a test statistic, we use a tail probability of the posterior distribution under the full model. This choice of test statistic allows us to estimate a computationally efficient upper bound on a p-value that enables us to draw strong conclusions even when there are limited computational resources that can be devoted to simulations under the null model. We demonstrate the statistical performance of our method on simulated images. Applying our method to an X-ray image of the quasar 0730+257, we find significant evidence against the null model of a single point source and uniform background, lending support to the claim of an X-ray jet.

  4. Fan Noise Prediction with Applications to Aircraft System Noise Assessment

    NASA Technical Reports Server (NTRS)

    Nark, Douglas M.; Envia, Edmane; Burley, Casey L.

    2009-01-01

    This paper describes an assessment of current fan noise prediction tools by comparing measured and predicted sideline acoustic levels from a benchmark fan noise wind tunnel test. Specifically, an empirical method and newly developed coupled computational approach are utilized to predict aft fan noise for a benchmark test configuration. Comparisons with sideline noise measurements are performed to assess the relative merits of the two approaches. The study identifies issues entailed in coupling the source and propagation codes, as well as provides insight into the capabilities of the tools in predicting the fan noise source and subsequent propagation and radiation. In contrast to the empirical method, the new coupled computational approach provides the ability to investigate acoustic near-field effects. The potential benefits/costs of these new methods are also compared with the existing capabilities in a current aircraft noise system prediction tool. The knowledge gained in this work provides a basis for improved fan source specification in overall aircraft system noise studies.

  5. A Bayesian Network Based Global Sensitivity Analysis Method for Identifying Dominant Processes in a Multi-physics Model

    NASA Astrophysics Data System (ADS)

    Dai, H.; Chen, X.; Ye, M.; Song, X.; Zachara, J. M.

    2016-12-01

    Sensitivity analysis has been an important tool in groundwater modeling to identify the influential parameters. Among various sensitivity analysis methods, variance-based global sensitivity analysis has gained popularity for its model independence and its capability of providing accurate sensitivity measurements. However, the conventional variance-based method only considers the uncertainty contribution of single model parameters. In this research, we extended the variance-based method to consider more uncertainty sources and developed a new framework to allow flexible combinations of different uncertainty components. We decompose the uncertainty sources into a hierarchical three-layer structure: scenario, model and parametric. Furthermore, each layer of uncertainty source is capable of containing multiple components. An uncertainty and sensitivity analysis framework was then constructed following this three-layer structure using a Bayesian network. Different uncertainty components are represented as uncertain nodes in this network. Through the framework, variance-based sensitivity analysis can be implemented with great flexibility of using different grouping strategies for uncertainty components. The variance-based sensitivity analysis is thus extended to investigate the importance of a wider range of uncertainty sources: scenario, model, and other combinations of uncertainty components that can represent key model system processes (e.g., the groundwater recharge process, the flow and reactive transport process). For test and demonstration purposes, the developed methodology was applied to a test case of real-world groundwater reactive transport modeling with various uncertainty sources. The results demonstrate that the new sensitivity analysis method is able to estimate accurate importance measurements for any uncertainty sources formed by different combinations of uncertainty components. The new methodology can provide useful information for environmental management and decision-makers to formulate policies and strategies.
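
    The variance-based index at the heart of such analyses can be illustrated with a brute-force toy estimator; the two-parameter model below is hypothetical, not the paper's Bayesian-network machinery.

        import numpy as np

        # First-order variance-based (Sobol') index: S1 = Var(E[Y|X1]) / Var(Y),
        # estimated by brute-force conditional means over a made-up model.
        rng = np.random.default_rng(0)

        def model(x1, x2):
            return np.sin(x1) + 0.3 * x2 ** 2

        n, m = 2000, 200
        x1 = rng.uniform(-np.pi, np.pi, n)
        cond_mean = np.array([model(v, rng.uniform(-1, 1, m)).mean() for v in x1])
        y = model(x1, rng.uniform(-1, 1, n))
        print(f"first-order index of x1: {cond_mean.var() / y.var():.2f}")  # ~0.98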

  6. A method for monitoring the variability in nuclear absorption characteristics of aviation fuels

    NASA Technical Reports Server (NTRS)

    Sprinkle, Danny R.; Shen, Chih-Ping

    1988-01-01

    A technique for monitoring variability in the nuclear absorption characteristics of aviation fuels has been developed. It is based on a highly collimated low energy gamma radiation source and a sodium iodide counter. The source and the counter assembly are separated by a geometrically well-defined test fuel cell. A computer program for determining the mass attenuation coefficient of the test fuel sample, based on the data acquired for a preset counting period, has been developed and tested on several types of aviation fuel.
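
    The computation the abstract describes reduces to Beer-Lambert attenuation arithmetic; the sketch below uses assumed variable names and example numbers and omits dead-time and background corrections.

        import math

        def mass_attenuation_coeff(counts_fuel, counts_empty, path_cm, density_g_cc):
            # mu/rho = ln(I0/I) / (rho * x), from gamma counts through the
            # filled test fuel cell (I) and the empty cell (I0).
            mu = math.log(counts_empty / counts_fuel) / path_cm   # 1/cm
            return mu / density_g_cc                              # cm^2/g

        # e.g. 1.2e6 counts empty vs 8.1e5 through 5 cm of fuel at 0.80 g/cm^3:
        print(mass_attenuation_coeff(8.1e5, 1.2e6, 5.0, 0.80))    # ~0.098 cm^2/g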

  7. Inverse identification of unknown finite-duration air pollutant release from a point source in urban environment

    NASA Astrophysics Data System (ADS)

    Kovalets, Ivan V.; Efthimiou, George C.; Andronopoulos, Spyros; Venetsanos, Alexander G.; Argyropoulos, Christos D.; Kakosimos, Konstantinos E.

    2018-05-01

    In this work, we present an inverse computational method for the identification of the location, start time, duration and quantity of emitted substance of an unknown air pollution source of finite time duration in an urban environment. We considered a problem of transient pollutant dispersion under stationary meteorological fields, which is a reasonable assumption for the assimilation of available concentration measurements within 1 h from the start of an incident. We optimized the calculation of the source-receptor function by developing a method which requires integrating as many backward adjoint equations as there are measurement stations. This resulted in high numerical efficiency of the method. The source parameters are computed by maximizing the correlation function of the simulated and observed concentrations. The method has been integrated into the CFD code ADREA-HF and it has been tested successfully by performing a series of source inversion runs using the data of 200 individual realizations of puff releases, previously generated in a wind tunnel experiment.
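
    The inversion loop can be sketched as follows; here simulate is a hypothetical stand-in for the adjoint-based source-receptor model, and the least-squares scaling for the emitted quantity is an assumption consistent with a linear source-receptor relation.

        import numpy as np
        from itertools import product

        def best_source(measured, simulate, locations, start_times, durations, q_ref=1.0):
            # Grid search maximizing the correlation between measured and
            # simulated sensor concentrations (numpy arrays of equal shape).
            best, best_R = None, -np.inf
            for loc, t0, tau in product(locations, start_times, durations):
                sim = simulate(loc, t0, tau, q_ref)
                R = np.corrcoef(measured.ravel(), sim.ravel())[0, 1]
                if R > best_R:
                    best, best_R = (loc, t0, tau), R
            # Emission quantity from least-squares scaling of the linear model.
            sim = simulate(*best, q_ref)
            q = q_ref * float(measured.ravel() @ sim.ravel()) / float(sim.ravel() @ sim.ravel())
            return best, q, best_R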

  8. Psychological Testing and Psychological Assessment: A Review of Evidence and Issues.

    ERIC Educational Resources Information Center

    Meyer, Gregory J.; Finn, Stephen E.; Eyde, Lorraine D.; Kay, Gary G.; Moreland, Kevin L.; Dies, Robert R.; Eisman, Elena J.; Kubiszyn, Tom W.; Reed, Geoffrey M.

    2001-01-01

    Summarizes issues associated with psychological assessment, concluding that: psychological test validity is strong and is comparable to medical test validity; distinct assessment methods provide unique sources of information; and clinicians who rely solely on interviews are prone to incomplete understandings. Suggests that multimethod assessment…

  9. 10 CFR Appendix E to Subpart B of... - Uniform Test Method for Measuring the Energy Consumption of Water Heaters

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Heater means a water heater that uses electricity as the energy source, is designed to heat and store... that uses gas as the energy source, is designed to heat and store water at a thermostatically... energy source, is designed to heat and store water at a thermostatically controlled temperature of less...

  10. 10 CFR Appendix E to Subpart B of... - Uniform Test Method for Measuring the Energy Consumption of Water Heaters

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... Heater means a water heater that uses electricity as the energy source, is designed to heat and store... that uses gas as the energy source, is designed to heat and store water at a thermostatically... energy source, is designed to heat and store water at a thermostatically controlled temperature of less...

  11. 10 CFR Appendix E to Subpart B of... - Uniform Test Method for Measuring the Energy Consumption of Water Heaters

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... Heater means a water heater that uses electricity as the energy source, is designed to heat and store... that uses gas as the energy source, is designed to heat and store water at a thermostatically... energy source, is designed to heat and store water at a thermostatically controlled temperature of less...

  12. A hybrid approach for nonlinear computational aeroacoustics predictions

    NASA Astrophysics Data System (ADS)

    Sassanis, Vasileios; Sescu, Adrian; Collins, Eric M.; Harris, Robert E.; Luke, Edward A.

    2017-01-01

    In many aeroacoustics applications involving nonlinear waves and obstructions in the far-field, approaches based on the classical acoustic analogy theory or the linearised Euler equations are unable to fully characterise the acoustic field. Therefore, computational aeroacoustics hybrid methods that incorporate nonlinear wave propagation have to be constructed. In this study, a hybrid approach coupling Navier-Stokes equations in the acoustic source region with nonlinear Euler equations in the acoustic propagation region is introduced and tested. The full Navier-Stokes equations are solved in the source region to identify the acoustic sources. The flow variables of interest are then transferred from the source region to the acoustic propagation region, where the full nonlinear Euler equations with source terms are solved. The transition between the two regions is made through a buffer zone where the flow variables are penalised via a source term added to the Euler equations. Tests were conducted on simple acoustic and vorticity disturbances, two-dimensional jets (Mach 0.9 and 2), and a three-dimensional jet (Mach 1.5), impinging on a wall. The method is proven to be effective and accurate in predicting sound pressure levels associated with the propagation of linear and nonlinear waves in the near- and far-field regions.
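
    The buffer-zone coupling can be sketched as a penalty source term relaxing the Euler state toward the Navier-Stokes solution; the smoothstep ramp and parameter names below are assumptions, not the paper's formulation.

        import numpy as np

        def penalty_source(q, q_ns, x, x0, x1, sigma_max=1.0e3):
            # Inside the buffer [x0, x1] the Euler state q is relaxed toward the
            # Navier-Stokes state q_ns via dq/dt += -sigma(x) * (q - q_ns),
            # with sigma ramping smoothly from 0 to sigma_max across the buffer.
            s = np.clip((x - x0) / (x1 - x0), 0.0, 1.0)
            sigma = sigma_max * s * s * (3.0 - 2.0 * s)   # smoothstep ramp
            return -sigma * (q - q_ns)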

  13. ANALYTICAL METHOD DEVELOPMENT FOR THE ANALYSIS OF N-NITROSODIMETHYLAMINE (NDMA) IN DRINKING WATER

    EPA Science Inventory

    N-Nitrosodimethylamine (NDMA), a by-product of the manufacture of liquid rocket fuel, has recently been identified as a contaminant in several California drinking water sources. The initial source of the contamination was identified as an aerospace facility. Subsequent testing ...

  14. 30 CFR 75.333 - Ventilation controls.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Radiant Heat Energy Source.” This publication is incorporated by reference and may be inspected at any... partitions, permanent stoppings, and regulators include concrete, concrete block, brick, cinder block, tile..., “Standard Test Method for Surface Flammability of Materials Using A Radiant Heat Energy Source.” This...

  15. 30 CFR 75.333 - Ventilation controls.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Radiant Heat Energy Source.” This publication is incorporated by reference and may be inspected at any... partitions, permanent stoppings, and regulators include concrete, concrete block, brick, cinder block, tile..., “Standard Test Method for Surface Flammability of Materials Using A Radiant Heat Energy Source.” This...

  16. Development of the EM tomography system by the vertical electromagnetic profiling (VEMP) method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miura, Y.; Osato, K.; Takasugi, S.

    1995-12-31

    As a part of the "Deep-Seated Geothermal Resources Survey" project undertaken by NEDO, the Vertical ElectroMagnetic Profiling (VEMP) method is being developed to accurately obtain deep resistivity structure. The VEMP method acquires multi-frequency, three-component magnetic field data in an open-hole well using controlled sources (loop sources or grounded-wire sources) transmitted at the surface. Numerical simulation using EM3D demonstrated that phase data of the VEMP method are very sensitive to resistivity structure and will also indicate the presence of deep anomalies. Forward modelling was also used to determine the required transmitter moments for various grounded-wire and loop sources for a field test using the WD-1 well in the Kakkonda geothermal area. Field logging of the well was carried out in May 1994, and the processed field data match the simulated data well.

  17. Indirect (source-free) integration method. I. Wave-forms from geodesic generic orbits of EMRIs

    NASA Astrophysics Data System (ADS)

    Ritter, Patxi; Aoudia, Sofiane; Spallicci, Alessandro D. A. M.; Cordier, Stéphane

    2016-12-01

    The Regge-Wheeler-Zerilli (RWZ) wave equation describes Schwarzschild-Droste black hole perturbations. The source term contains a Dirac distribution and its derivative. We have previously designed a method of integration in the time domain. It consists of a finite difference scheme where analytic expressions, dealing with the wave-function discontinuity through the jump conditions, replace the direct integration of the source and the potential. Herein, we successfully apply the same method to the geodesic generic orbits of EMRI (Extreme Mass Ratio Inspiral) sources, at second order. An EMRI is a Compact Star (CS) captured by a Super-Massive Black Hole (SMBH). These are considered the best probes for testing gravitation in the strong-field regime. The gravitational wave-forms, and the radiated energy and angular momentum at infinity, are computed and extensively compared with other methods, for different orbits (circular, elliptic, parabolic, including zoom-whirl).

  18. Quantitative measurement of pass-by noise radiated by vehicles running at high speeds

    NASA Astrophysics Data System (ADS)

    Yang, Diange; Wang, Ziteng; Li, Bing; Luo, Yugong; Lian, Xiaomin

    2011-03-01

    Accurately locating and quantifying the pass-by noise radiated by running vehicles has been a long-standing challenge. To address it, a system based on a microphone array is developed in the present work. An acoustic-holography method for moving sound sources is designed to handle the Doppler effect effectively in the time domain, and the effective sound pressure distribution is reconstructed on the surface of a running vehicle. The method achieves high calculation efficiency and is able to quantitatively measure the sound pressure at the sound source and identify the location of the main sound source. It is validated by simulation experiments and by measurement tests with known moving speakers. Finally, the engine noise, tire noise, exhaust noise and wind noise of a vehicle running at different speeds are successfully identified by this method.

  19. Effects of Environmental Toxicants on Metabolic Activity of Natural Microbial Communities

    PubMed Central

    Barnhart, Carole L. H.; Vestal, J. Robie

    1983-01-01

    Two methods of measuring microbial activity were used to study the effects of toxicants on natural microbial communities. The methods were compared for suitability for toxicity testing, sensitivity, and adaptability to field applications. This study included measurements of the incorporation of 14C-labeled acetate into microbial lipids and microbial glucosidase activity. Activities were measured per unit biomass, determined as lipid phosphate. The effects of various organic and inorganic toxicants on various natural microbial communities were studied. Both methods were useful in detecting toxicity, and their comparative sensitivities varied with the system studied. In one system, the methods showed approximately the same sensitivities in testing the effects of metals, but the acetate incorporation method was more sensitive in detecting the toxicity of organic compounds. The incorporation method was used to study the effects of a point source of pollution on the microbiota of a receiving stream. Toxic doses were found to be two orders of magnitude higher in sediments than in water taken from the same site, indicating chelation or adsorption of the toxicant by the sediment. The microbiota taken from below a point source outfall was 2 to 100 times more resistant to the toxicants tested than was that taken from above the outfall. Downstream filtrates in most cases had an inhibitory effect on the natural microbiota taken from above the pollution source. The microbial methods were compared with commonly used bioassay methods, using higher organisms, and were found to be similar in ability to detect comparative toxicities of compounds, but were less sensitive than methods which use standard media because of the influences of environmental factors. PMID:16346432

  20. Damage source identification of reinforced concrete structure using acoustic emission technique.

    PubMed

    Panjsetooni, Alireza; Bunnori, Norazura Muhamad; Vakili, Amir Hossein

    2013-01-01

    The acoustic emission (AE) technique is one of the nondestructive evaluation (NDE) techniques that has been considered a prime candidate for structural health and damage monitoring in loaded structures. The technique was employed to investigate the damage process in reinforced concrete (RC) frame specimens. A number of RC frames were tested under cyclic loading and were simultaneously monitored using AE. The AE test data were analyzed using the AE source location analysis method. The results showed that the AE technique is suitable for identifying the location of damage sources in RC structures.

  1. Damage Source Identification of Reinforced Concrete Structure Using Acoustic Emission Technique

    PubMed Central

    Panjsetooni, Alireza; Bunnori, Norazura Muhamad; Vakili, Amir Hossein

    2013-01-01

    The acoustic emission (AE) technique is one of the nondestructive evaluation (NDE) techniques that has been considered a prime candidate for structural health and damage monitoring in loaded structures. The technique was employed to investigate the damage process in reinforced concrete (RC) frame specimens. A number of RC frames were tested under cyclic loading and were simultaneously monitored using AE. The AE test data were analyzed using the AE source location analysis method. The results showed that the AE technique is suitable for identifying the location of damage sources in RC structures. PMID:23997681

  2. Locating non-volcanic tremor along the San Andreas Fault using a multiple array source imaging technique

    USGS Publications Warehouse

    Ryberg, T.; Haberland, C.H.; Fuis, G.S.; Ellsworth, W.L.; Shelly, D.R.

    2010-01-01

    Non-volcanic tremor (NVT) has been observed at several subduction zones and at the San Andreas Fault (SAF). Tremor locations are commonly derived by cross-correlating envelope-transformed seismic traces in combination with source-scanning techniques. Recently, they have also been located by using relative relocations with master events, that is, low-frequency earthquakes that are part of the tremor; locations are derived by conventional traveltime-based methods. Here we present a method to locate the sources of NVT using an imaging approach for multiple array data. The performance of the method is checked with synthetic tests and the relocation of earthquakes. We also applied the method to tremor occurring near Cholame, California. A set of small-aperture arrays (i.e. an array consisting of arrays) installed around Cholame provided the data set for this study. We observed several tremor episodes and located tremor sources in the vicinity of the SAF. During individual tremor episodes, we observed a systematic change of source location, indicating rapid migration of the tremor source along the SAF.
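
    A minimal sketch of the envelope cross-correlation step mentioned above: each trace is envelope-transformed with a Hilbert transform and the inter-station delay is read from the correlation peak. The synthetic tremor burst, sampling rate and 0.5 s delay are invented for illustration.

      import numpy as np
      from scipy.signal import hilbert, correlate

      def envelope_lag(trace_a, trace_b, fs):
          """Delay (s) of trace_a relative to trace_b from envelope cross-correlation."""
          env_a = np.abs(hilbert(trace_a))     # smooth amplitude envelope
          env_b = np.abs(hilbert(trace_b))
          env_a -= env_a.mean()
          env_b -= env_b.mean()
          cc = correlate(env_a, env_b, mode="full")
          return (np.argmax(cc) - (len(env_b) - 1)) / fs

      # Synthetic test: the same tremor burst arrives 0.5 s later at station A.
      fs = 100.0
      t = np.arange(0, 60, 1 / fs)
      rng = np.random.default_rng(0)
      burst = rng.standard_normal(t.size) * np.exp(-0.5 * ((t - 20) / 3) ** 2)
      delayed = np.roll(burst, int(0.5 * fs))
      print(envelope_lag(delayed, burst, fs))  # ~ +0.5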

  3. [Explicit memory for type font of words in source monitoring and recognition tasks].

    PubMed

    Hatanaka, Yoshiko; Fujita, Tetsuya

    2004-02-01

    We investigated whether people can consciously remember the type fonts of words, using two methods of examining explicit memory: source monitoring and old/new recognition. We set matched, non-matched, and non-studied conditions between the study and test words using two kinds of type fonts, Gothic and MARU. After studying words under one of two encoding conditions, semantic or physical, subjects in a source-monitoring task made a three-way discrimination between new words, Gothic words, and MARU words (Exp. 1). Subjects in an old/new-recognition task indicated whether test words had previously been presented or not (Exp. 2). We compared the source judgments with the old/new recognition data. These data showed conscious recollection of the type font of words in the source-monitoring task and a dissociation between source-monitoring and old/new-recognition performance.

  4. Treatment of internal sources in the finite-volume ELLAM

    USGS Publications Warehouse

    Healy, R.W.

    2000-01-01

    The finite-volume Eulerian-Lagrangian localized adjoint method (FVELLAM) is a mass-conservative approach for solving the advection-dispersion equation. The method has been shown to be accurate and efficient for solving advection-dominated problems of solute transport in ground water in 1, 2, and 3 dimensions. Previous implementations of FVELLAM have had difficulty in representing internal sources because the standard assumption of lowest order Raviart-Thomas velocity field does not hold for source cells. Therefore, tracking of particles within source cells is problematic. A new approach has been developed to account for internal sources in FVELLAM. It is assumed that the source is uniformly distributed across a grid cell and that instantaneous mixing takes place within the cell, such that concentration is uniform across the cell at any time. Sub-time steps are used in the time-integration scheme to track mass outflow from the edges of the source cell. This avoids the need for tracking within the source cell. We describe the new method and compare results for a test problem with a wide range of cell Peclet numbers.

  5. Inverse current source density method in two dimensions: inferring neural activation from multielectrode recordings.

    PubMed

    Łęski, Szymon; Pettersen, Klas H; Tunstall, Beth; Einevoll, Gaute T; Gigg, John; Wójcik, Daniel K

    2011-12-01

    The recent development of large multielectrode recording arrays has made it affordable for an increasing number of laboratories to record from multiple brain regions simultaneously. The development of analytical tools for array data, however, lags behind these technological advances in hardware. In this paper, we present a method based on forward modeling for estimating current source density from electrophysiological signals recorded on a two-dimensional grid using multi-electrode rectangular arrays. This new method, which we call two-dimensional inverse Current Source Density (iCSD 2D), is based upon and extends our previous one- and three-dimensional techniques. We test several variants of our method, both on surrogate data generated from a collection of Gaussian sources, and on model data from a population of layer 5 neocortical pyramidal neurons. We also apply the method to experimental data from the rat subiculum. The main advantages of the proposed method are the explicit specification of its assumptions, the possibility to include system-specific information as it becomes available, the ability to estimate CSD at the grid boundaries, and lower reconstruction errors when compared to the traditional approach. These features make iCSD 2D a substantial improvement over the approaches used so far and a powerful new tool for the analysis of multielectrode array data. We also provide a free GUI-based MATLAB toolbox to analyze and visualize our test data as well as user datasets.
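
    A minimal one-dimensional sketch of the inverse CSD idea (the paper's iCSD 2D generalizes it to a planar electrode grid): potentials are modeled through an explicit forward matrix, phi = F csd, and the CSD is recovered by inverting F. The geometry, conductivity and source profile below are assumptions for illustration only.

      import numpy as np

      n = 16
      z = np.linspace(0, 1.5e-3, n)        # electrode depths (m)
      sigma_c = 0.3                        # assumed tissue conductivity (S/m)
      h = 0.1e-3                           # source-plane offset avoiding the singularity (m)

      # Forward matrix: potential at electrode i from a unit point source at node j.
      dist = np.sqrt((z[:, None] - z[None, :]) ** 2 + h ** 2)
      F = 1.0 / (4 * np.pi * sigma_c * dist)

      # Hypothetical dipolar CSD profile (a source over a sink).
      csd_true = (np.exp(-((z - 0.7e-3) / 0.2e-3) ** 2)
                  - 0.8 * np.exp(-((z - 1.1e-3) / 0.2e-3) ** 2))
      phi = F @ csd_true                   # forward-modeled potentials
      csd_est = np.linalg.solve(F, phi)    # inverse CSD estimate
      print(np.max(np.abs(csd_est - csd_true)))   # ~0 for noise-free data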

  6. Source-Type Identification Analysis Using Regional Seismic Moment Tensors

    NASA Astrophysics Data System (ADS)

    Chiang, A.; Dreger, D. S.; Ford, S. R.; Walter, W. R.

    2012-12-01

    Waveform inversion to determine the seismic moment tensor is a standard approach for determining the source mechanism of natural and manmade seismicity, and may be used to identify, or discriminate, different types of seismic sources. The successful applications of the regional moment tensor method at the Nevada Test Site (NTS) and the 2006 and 2009 North Korean nuclear tests (Ford et al., 2009a, 2009b, 2010) show that the method is robust and capable of source-type discrimination at regional distances. The well-separated populations of explosions, earthquakes and collapses on a Hudson et al. (1989) source-type diagram enable source-type discrimination; however, the question remains whether the separation of events is universal in other regions, where we have limited station coverage and knowledge of Earth structure. Ford et al. (2012) have shown that combining regional waveform data and P-wave first motions removes the CLVD-isotropic tradeoff and uniquely discriminates the 2009 North Korean test as an explosion. Therefore, including additional constraints from regional and teleseismic P-wave first motions enables source-type discrimination in regions with limited station coverage. We present moment tensor analysis of earthquakes and explosions (M6) from the Lop Nor and Semipalatinsk test sites for station paths crossing Kazakhstan and Western China. We also present analyses of smaller events from industrial sites. In these sparse-coverage situations we combine regional long-period waveforms and high-frequency P-wave polarity from the same stations, as well as from teleseismic arrays, to constrain the source type. Discrimination capability with respect to velocity model and station coverage is examined, and additionally we investigate the velocity-model dependence of vanishing free-surface traction effects on seismic moment tensor inversion of shallow sources and recovery of explosive scalar moment. Our synthetic data tests indicate that biases in scalar seismic moment and discrimination for shallow sources are small and can be understood in a systematic manner. We are presently investigating the frequency dependence of vanishing traction for a very shallow (10 m depth) M2+ chemical explosion recorded at distances of several kilometers; preliminary results indicate that at the typical frequency passband we employ, the bias does not affect our ability to retrieve the correct source mechanism but may affect retrieval of the correct scalar seismic moment. Finally, we assess discrimination capability in a composite P-value statistical framework.
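
    For readers unfamiliar with the decomposition behind source-type diagrams, a hedged sketch: the moment tensor is split into isotropic and deviatoric parts, and an isotropic fraction is formed from them. The tensor values and the particular fraction convention below are illustrative, not taken from this work.

      import numpy as np

      M = np.array([[1.2, 0.1, 0.0],
                    [0.1, 1.0, 0.2],
                    [0.0, 0.2, 1.1]])            # hypothetical moment tensor

      m_iso = np.trace(M) / 3.0                  # isotropic part (scalar)
      M_dev = M - m_iso * np.eye(3)              # deviatoric remainder
      dev_max = np.max(np.abs(np.linalg.eigvalsh(M_dev)))

      # One common convention for an isotropic fraction; explosions plot
      # with a large positive value, earthquakes near zero.
      iso_fraction = m_iso / (abs(m_iso) + dev_max)
      print(f"isotropic fraction ~ {iso_fraction:.2f}")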

  7. En face projection imaging of the human choroidal layers with tracking SLO and swept source OCT angiography methods

    NASA Astrophysics Data System (ADS)

    Gorczynska, Iwona; Migacz, Justin; Zawadzki, Robert J.; Sudheendran, Narendran; Jian, Yifan; Tiruveedhula, Pavan K.; Roorda, Austin; Werner, John S.

    2015-07-01

    We tested and compared the capability of multiple optical coherence tomography (OCT) angiography methods: phase variance, amplitude decorrelation and speckle variance, with application of the split-spectrum technique, to image the chorioretinal complex of the human eye. To test the possibility of improving OCT imaging stability, we utilized a real-time tracking scanning laser ophthalmoscopy (TSLO) system combined with a swept source OCT setup. In addition, we implemented a post-processing volume averaging method for improved angiographic image quality and reduction of motion artifacts. The OCT system operated at a central wavelength of 1040 nm to enable sufficient depth penetration into the choroid. Imaging was performed in the eyes of healthy volunteers and patients diagnosed with age-related macular degeneration.

  8. Comparison of Frequency-Domain Array Methods for Studying Earthquake Rupture Process

    NASA Astrophysics Data System (ADS)

    Sheng, Y.; Yin, J.; Yao, H.

    2014-12-01

    Seismic array methods, in both time and frequency domains, have been widely used to study the rupture process and energy radiation of earthquakes. With better spatial resolution, high-resolution frequency-domain methods, such as Multiple Signal Classification (MUSIC) (Schmidt, 1986; Meng et al., 2011) and the recently developed Compressive Sensing (CS) technique (Yao et al., 2011, 2013), are revealing new features of earthquake rupture processes. We have performed various tests on the methods of MUSIC, CS, minimum-variance distortionless response (MVDR) Beamforming and conventional Beamforming in order to better understand the advantages and features of these methods for studying earthquake rupture processes. We use the Ricker wavelet to synthesize seismograms and use these frequency-domain techniques to relocate the synthetic sources we set, for instance, two sources separated in space whose waveforms completely overlap in the time domain. We also test the effects of the sliding-window scheme on the recovery of a series of input sources, in particular some artifacts that are caused by the sliding window. Based on our tests, we find that CS, which is developed from the theory of sparse inversion, has higher spatial resolution than the other frequency-domain methods and performs better at lower frequencies. In high-frequency bands, MUSIC, as well as MVDR Beamforming, is more stable, especially in the multi-source situation. Meanwhile, CS tends to produce more artifacts when data have a poor signal-to-noise ratio. Although these techniques can distinctly improve the spatial resolution, they still produce some artifacts as the time window slides. Furthermore, we propose a new method, which combines both time-domain and frequency-domain techniques, to suppress these artifacts and obtain more reliable earthquake rupture images. Finally, we apply this new technique to study the 2013 Okhotsk deep mega earthquake in order to better capture its rupture characteristics (e.g., rupture area and velocity).
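
    A minimal narrowband MUSIC sketch on a uniform linear array illustrates the key property exploited above: two sources whose waveforms overlap completely in time are still resolved in the pseudospectrum. The array geometry, SNR and source angles are invented, and the number of sources is assumed known.

      import numpy as np
      from scipy.signal import find_peaks

      rng = np.random.default_rng(0)
      m, n_snap, d = 12, 200, 0.5                # sensors, snapshots, spacing (wavelengths)
      angles_true = np.deg2rad([-10.0, 12.0])

      def steering(theta):
          return np.exp(2j * np.pi * d * np.arange(m)[:, None] * np.sin(theta)[None, :])

      A = steering(angles_true)
      S = rng.standard_normal((2, n_snap)) + 1j * rng.standard_normal((2, n_snap))
      N = 0.1 * (rng.standard_normal((m, n_snap)) + 1j * rng.standard_normal((m, n_snap)))
      X = A @ S + N                               # two fully overlapping sources

      R = X @ X.conj().T / n_snap                 # sample covariance
      w, V = np.linalg.eigh(R)                    # eigenvalues ascending
      En = V[:, :m - 2]                           # noise subspace (2 sources assumed)

      scan = np.deg2rad(np.linspace(-60, 60, 721))
      p_music = 1.0 / np.sum(np.abs(En.conj().T @ steering(scan)) ** 2, axis=0)
      peaks, _ = find_peaks(p_music)
      top = peaks[np.argsort(p_music[peaks])[-2:]]
      print(np.rad2deg(scan[top]))                # peaks near -10 and 12 degrees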

  9. Determination of antenna factors using a three-antenna method at open-field test site

    NASA Astrophysics Data System (ADS)

    Masuzawa, Hiroshi; Tejima, Teruo; Harima, Katsushige; Morikawa, Takao

    1992-09-01

    NIST has recently used the three-antenna method to calibrate the antenna factor of antennas used for EMI measurements. This method does not require the specially designed standard antennas that are necessary in the standard field method or the standard antenna method, and it can be used at an open-field test site. This paper theoretically and experimentally examines the measurement errors of the method and evaluates the precision of the antenna-factor calibration. It is found that the main source of error is the non-ideal propagation characteristics of the test site, which should therefore be measured before the calibration. The precision of the antenna-factor calibration at the test site used in these experiments is estimated to be 0.5 dB.
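
    The bookkeeping behind the three-antenna method reduces to solving three pairwise equations: each pair measurement yields the sum of two antenna factors once the ideal site attenuation has been removed. A hedged sketch with hypothetical numbers (real calibrations add the frequency-dependent site corrections discussed in the paper):

      # s_ij: measured AF_i + AF_j in dB(1/m), already corrected for site attenuation.
      def three_antenna(s12, s13, s23):
          af1 = 0.5 * (s12 + s13 - s23)
          af2 = 0.5 * (s12 + s23 - s13)
          af3 = 0.5 * (s13 + s23 - s12)
          return af1, af2, af3

      print(three_antenna(25.0, 27.0, 28.0))   # hypothetical sums -> (12.0, 13.0, 15.0)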

  10. Development of a test method for carbonyl compounds from stationary source emissions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhihua Fan; Peterson, M.R.; Jayanty, R.K.M.

    1997-12-31

    Carbonyl compounds have received increasing attention because of their important role in ground-level ozone formation. The common method used for the measurement of aldehydes and ketones is 2,4-dinitrophenylhydrazine (DNPH) derivatization followed by high-performance liquid chromatography with ultraviolet detection (HPLC-UV). One of the problems associated with this method is the low recovery for certain compounds such as acrolein. This paper presents a study on the development of a test method for the collection and measurement of carbonyl compounds from stationary source emissions. The method involves collection of carbonyl compounds in impingers, conversion of the carbonyl compounds to a stable derivative with O-2,3,4,5,6-pentafluorobenzyl hydroxylamine hydrochloride (PFBHA), and separation and measurement by electron capture gas chromatography (GC-ECD). Eight compounds were selected for the evaluation of this method: formaldehyde, acetaldehyde, acrolein, acetone, butanal, methyl ethyl ketone (MEK), methyl isobutyl ketone (MIBK), and hexanal.

  11. Testing Murphy's Law: Urban Myths as a Source of School Science Projects.

    ERIC Educational Resources Information Center

    Matthews, Robert A. J.

    2001-01-01

    Discusses the urban myth that "If toast can land butter-side down, it will" as an example of a source of projects demonstrating the use of the scientific method beyond its usual settings. Other urban myths suitable for investigation are discussed. (Author/MM)

  12. An imaging-based photometric and colorimetric measurement method for characterizing OLED panels for lighting applications

    NASA Astrophysics Data System (ADS)

    Zhu, Yiting; Narendran, Nadarajah; Tan, Jianchuan; Mou, Xi

    2014-09-01

    The organic light-emitting diode (OLED) has demonstrated its novelty in displays and certain lighting applications. Like white light-emitting diode (LED) technology, it holds the promise of saving energy. Even though the luminous efficacy values of OLED products have been growing steadily, their longevity is still not well understood. Furthermore, there is currently no industry standard for photometric and colorimetric testing, short or long term, of OLEDs. Each OLED manufacturer tests its OLED panels under different electrical and thermal conditions using different measurement methods. In this study, an imaging-based photometric and colorimetric measurement method for OLED panels was investigated. Unlike an LED, which can be considered a point source, the OLED is a large-area source. Therefore, for an area source to satisfy lighting application needs, it is important that it maintain uniform light level and color properties across the emitting surface of the panel over a long period. This study intended to develop a measurement procedure that can be used to test the long-term photometric and colorimetric properties of OLED panels. The objective was to better understand how test parameters such as drive current or luminance and temperature affect the degradation rate. In addition, this study investigated whether data interpolation could allow determination of degradation and lifetime, L70, at application conditions based on the degradation rates measured at different operating conditions.
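
    One way the interpolation idea could look in practice is to fit a simple lumen-maintenance model and solve for L70. The exponential decay model and the data below are assumptions for illustration; they are not measurements from this study, and real OLED degradation need not be exponential.

      import numpy as np

      t = np.array([0, 500, 1000, 1500, 2000, 2500, 3000.0])        # hours (made up)
      lum = np.array([1.00, 0.97, 0.945, 0.92, 0.895, 0.87, 0.85])  # normalized luminance

      # Fit L(t) = B * exp(-alpha * t) by linear regression on log-luminance.
      slope, intercept = np.polyfit(t, np.log(lum), 1)
      alpha, B = -slope, np.exp(intercept)

      L70 = np.log(B / 0.70) / alpha          # time at which luminance reaches 70%
      print(f"estimated L70 ~ {L70:.0f} h")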

  13. Evaluation of Methods for In-Situ Calibration of Field-Deployable Microphone Phased Arrays

    NASA Technical Reports Server (NTRS)

    Humphreys, William M.; Lockard, David P.; Khorrami, Mehdi R.; Culliton, William G.; McSwain, Robert G.

    2017-01-01

    Current field-deployable microphone phased arrays for aeroacoustic flight testing require the placement of hundreds of individual sensors over a large area. Depending on the duration of the test campaign, the microphones may be required to stay deployed at the testing site for weeks or even months. This presents a challenge with regard to tracking the response (i.e., sensitivity) of the individual sensors as a function of time in order to evaluate the health of the array. To address this challenge, two different methods for in-situ tracking of microphone responses are described. The first relies on an aerial sound source attached as a payload on a hovering small Unmanned Aerial System (sUAS) vehicle. The second relies on individually excited ground-based sound sources strategically placed throughout the array pattern. Testing of the two methods was performed in microphone array deployments conducted at Fort A.P. Hill in 2015 and at Edwards Air Force Base in 2016. The results indicate that the drift in individual sensor responses can be tracked reasonably well using both methods. Thus, in-situ response tracking methods are useful as a diagnostic tool for monitoring the health of a phased array during long-duration deployments.

  14. A novel method for detecting light source for digital images forensic

    NASA Astrophysics Data System (ADS)

    Roy, A. K.; Mitra, S. K.; Agrawal, R.

    2011-06-01

    Image manipulation has been practiced for centuries. Manipulated images are intended to alter facts — facts of ethics, morality, politics, sex, celebrity or chaos. Image forensic science is used to detect these manipulations in a digital image. There are several standard ways to analyze an image for manipulation, each with limitations, and existing methods rarely capitalize on the way the image was captured by the camera. We propose a new method based on light and shade, since these are the fundamental inputs that carry much of the information in an image. The proposed method measures the direction of the light source and uses this light-based technique to identify intentional partial manipulation in a digital image. The method was tested on known manipulated images and correctly identified the light sources. The light source of an image is measured in terms of angle. The experimental results show the robustness of the methodology.
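
    Methods of this kind often build on the classic Lambertian estimate: with intensities I_i = rho (n_i . L), the scaled light vector follows from least squares over lit surface points. A hedged sketch with synthetic normals (in forensic practice, normals are typically estimated along occluding contours; nothing below reproduces the authors' algorithm):

      import numpy as np

      rng = np.random.default_rng(1)
      normals = rng.standard_normal((200, 3))
      normals /= np.linalg.norm(normals, axis=1, keepdims=True)

      L_true = np.array([0.4, 0.2, 0.89])
      L_true /= np.linalg.norm(L_true)
      intensity = np.clip(normals @ L_true, 0, None) + 0.01 * rng.standard_normal(200)

      lit = intensity > 0.05                   # keep points facing the light
      L_est, *_ = np.linalg.lstsq(normals[lit], intensity[lit], rcond=None)
      L_est /= np.linalg.norm(L_est)
      err = np.degrees(np.arccos(np.clip(L_est @ L_true, -1, 1)))
      print(f"angular error ~ {err:.1f} deg")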

  15. Power Source Status Estimation and Drive Control Method for Autonomous Decentralized Hybrid Train

    NASA Astrophysics Data System (ADS)

    Furuya, Takemasa; Ogawa, Kenichi; Yamamoto, Takamitsu; Hasegawa, Hitoshi

    A hybrid control system has two main functions: power sharing and equipment protection. In this paper, we discuss the design, construction and testing of a drive control method for an autonomous decentralized hybrid train with 100-kW-class fuel cells (FC) and 36-kWh lithium-ion batteries (Li-Batt). The main objectives of this study are to identify the operating status of the power sources from the input voltage of the traction inverter and to estimate the maximum traction power on the basis of the power-source status. The proposed control method is useful for preventing overload operation of the onboard power sources in an autonomous decentralized hybrid system that has a flexible main circuit configuration and few control signal lines. Further, with this method, the initial cost of a hybrid system can be reduced and the retrofit design of the hybrid system can be simplified. The effectiveness of the proposed method is experimentally confirmed using a real-scale hybrid train system.

  16. A fast signal subspace approach for the determination of absolute levels from phased microphone array measurements

    NASA Astrophysics Data System (ADS)

    Sarradj, Ennes

    2010-04-01

    Phased microphone arrays are used in a variety of applications for the estimation of acoustic source location and spectra. The popular conventional delay-and-sum beamforming methods used with such arrays suffer from inaccurate estimation of absolute source levels and in some cases also from low resolution. Deconvolution approaches such as DAMAS have better performance, but require high computational effort. A fast beamforming method is proposed that can be used in conjunction with a phased microphone array in applications focused on the correct quantitative estimation of acoustic source spectra. This method is based on an eigenvalue decomposition of the cross spectral matrix of microphone signals and uses the eigenvalues from the signal subspace to estimate absolute source levels. The theoretical basis of the method is discussed together with an assessment of the quality of the estimation. Experimental tests using a loudspeaker setup and an airfoil trailing-edge noise setup in an aeroacoustic wind tunnel show that the proposed method is robust and leads to reliable quantitative results.
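
    A minimal sketch of the signal-subspace idea: the eigenvalues of the cross spectral matrix split into a few large "signal" eigenvalues and a noise floor, and the signal eigenvalues yield an absolute power estimate free of the sidelobe bias of delay-and-sum maps. The single-source synthetic data below are illustrative only.

      import numpy as np

      rng = np.random.default_rng(2)
      m, n_snap = 32, 500
      a = np.exp(2j * np.pi * rng.random(m))     # propagation vector, unit-modulus entries
      src = 0.8 * (rng.standard_normal(n_snap) + 1j * rng.standard_normal(n_snap))
      noise = 0.2 * (rng.standard_normal((m, n_snap)) + 1j * rng.standard_normal((m, n_snap)))
      X = np.outer(a, src) + noise

      C = X @ X.conj().T / n_snap                # cross spectral matrix
      w = np.linalg.eigvalsh(C)[::-1]            # eigenvalues, descending
      noise_floor = w[1:].mean()                 # one source assumed
      print((w[0] - noise_floor) / m)            # ~ mean source power |src|^2 = 1.28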

  17. Recent Declines in Infant Mortality in the United States, 2005-2011

    MedlinePlus

    ... 37 completed weeks of gestation. Data source and methods Data presented in this report were based on ... Text statements were tested for statistical significance using methods described elsewhere ( 3 ), and a statement that a ...

  18. Isotropic source terms of San Jacinto fault zone earthquakes based on waveform inversions with a generalized CAP method

    NASA Astrophysics Data System (ADS)

    Ross, Z. E.; Ben-Zion, Y.; Zhu, L.

    2015-02-01

    We analyse source tensor properties of seven Mw > 4.2 earthquakes in the complex trifurcation area of the San Jacinto Fault Zone, CA, with a focus on isotropic radiation that may be produced by rock damage in the source volumes. The earthquake mechanisms are derived with generalized `Cut and Paste' (gCAP) inversions of three-component waveforms typically recorded by >70 stations at regional distances. The gCAP method includes parameters ζ and χ representing, respectively, the relative strength of the isotropic and CLVD source terms. The possible errors in the isotropic and CLVD components due to station variability are quantified with bootstrap resampling for each event. The results indicate statistically significant explosive isotropic components for at least six of the events, corresponding to ~0.4-8 per cent of the total potency/moment of the sources. In contrast, the CLVD components for most events are not found to be statistically significant. Trade-off and correlation between the isotropic and CLVD components are studied using synthetic tests with realistic station configurations. The associated uncertainties are found to be generally smaller than the observed isotropic components. Two different tests with velocity-model perturbation are conducted to quantify the uncertainty due to inaccuracies in the Green's functions. Applications of the Mann-Whitney U test indicate statistically significant explosive isotropic terms for most events, consistent with brittle damage production at the source.

  19. New approach for point pollution source identification in rivers based on the backward probability method.

    PubMed

    Wang, Jiabiao; Zhao, Jianshi; Lei, Xiaohui; Wang, Hao

    2018-06-13

    Pollution risk from the discharge of industrial waste or accidental spills during transportation poses a considerable threat to the security of rivers. The ability to quickly identify the pollution source is extremely important to enable emergency disposal of pollutants. This study proposes a new approach for point source identification of sudden water pollution in rivers, which aims to determine where (source location), when (release time) and how much pollutant (released mass) was introduced into the river. Based on the backward probability method (BPM) and the linear regression model (LR), the proposed LR-BPM converts the ill-posed problem of source identification into an optimization model, which is solved using a Differential Evolution Algorithm (DEA). The decoupled parameters of released mass are not dependent on prior information, which improves the identification efficiency. A hypothetical case study with a different number of pollution sources was conducted to test the proposed approach, and the largest relative errors for identified location, release time, and released mass in all tests were not greater than 10%. Uncertainty in the LR-BPM is mainly due to a problem with model equifinality, but averaging the results of repeated tests greatly reduces errors. Furthermore, increasing the gauging sections further improves identification results. A real-world case study examines the applicability of the LR-BPM in practice, where it is demonstrated to be more accurate and time-saving than two existing approaches, Bayesian-MCMC and basic DEA.
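
    The optimization core of such an approach can be sketched with SciPy's differential evolution and a one-dimensional instantaneous-release advection-dispersion solution. The river parameters, gauging sections and true source below are invented, and this is not the authors' LR-BPM implementation.

      import numpy as np
      from scipy.optimize import differential_evolution

      u, D, A = 0.5, 5.0, 20.0      # assumed velocity (m/s), dispersion (m2/s), area (m2)

      def conc(x, t, x0, t0, mass):
          """1D instantaneous point release at (x0, t0) with mass in kg."""
          tau = np.maximum(t - t0, 1e-6)
          return (mass / (A * np.sqrt(4 * np.pi * D * tau))
                  * np.exp(-(x - x0 - u * tau) ** 2 / (4 * D * tau)))

      x_obs = np.array([2000.0, 5000.0])        # two gauging sections (m)
      t_obs = np.arange(0.0, 14400.0, 600.0)    # sampling times (s)
      true = (500.0, 1200.0, 80.0)              # location, release time, mass
      data = conc(x_obs[:, None], t_obs[None, :], *true)

      def misfit(p):
          return np.sum((conc(x_obs[:, None], t_obs[None, :], *p) - data) ** 2)

      res = differential_evolution(misfit, [(0, 2000), (0, 7200), (1, 500)], seed=0)
      print(res.x)                              # ~ [500, 1200, 80]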

  20. Development of a high sensitivity pinhole type gamma camera using semiconductors for low dose rate fields

    NASA Astrophysics Data System (ADS)

    Ueno, Yuichiro; Takahashi, Isao; Ishitsu, Takafumi; Tadokoro, Takahiro; Okada, Koichi; Nagumo, Yasushi; Fujishima, Yasutake; Yoshida, Akira; Umegaki, Kikuo

    2018-06-01

    We developed a pinhole type gamma camera, using a compact detector module of a pixelated CdTe semiconductor, which has suitable sensitivity and quantitative accuracy for low dose rate fields. In order to improve the sensitivity of the pinhole type semiconductor gamma camera, we adopted three methods: a signal processing method to set the discriminating level lower, a high sensitivity pinhole collimator and a smoothing image filter that improves the efficiency of the source identification. We tested basic performances of the developed gamma camera and carefully examined effects of the three methods. From the sensitivity test, we found that the effective sensitivity was about 21 times higher than that of the gamma camera for high dose rate fields which we had previously developed. We confirmed that the gamma camera had sufficient sensitivity and high quantitative accuracy; for example, a weak hot spot (0.9 μSv/h) around a tree root could be detected within 45 min in a low dose rate field test, and errors of measured dose rates with point sources were less than 7% in a dose rate accuracy test.

  1. The application of network synthesis to repeating classical gamma-ray bursts

    NASA Technical Reports Server (NTRS)

    Hurley, K.; Kouveliotou, C.; Fishman, J.; Meegan, C.; Laros, J.; Klebesadel, R.

    1995-01-01

    It has been suggested that the Burst and Transient Source Experiment (BATSE) gamma-ray burst catalog contains several groups of bursts clustered in space or in space and time, which provide evidence that a substantial fraction of the classical gamma-ray burst sources repeat. Because many of the bursts in these groups are weak, they are not directly detected by the Ulysses GRB experiment. We apply the network synthesis method to these events to test the repeating burst hypothesis. Although we find no evidence for repeating sources, the method must be applied under more general conditions before reaching any definite conclusions about the existence of classical gamma-ray burst repeating sources.

  2. Multispectral image fusion for target detection

    NASA Astrophysics Data System (ADS)

    Leviner, Marom; Maltz, Masha

    2009-09-01

    Various methods to perform multi-spectral image fusion have been suggested, mostly at the pixel level. However, the jury is still out on the benefits of a fused image compared to its source images. We present here a new multi-spectral image fusion method, multi-spectral segmentation fusion (MSSF), which uses a feature-level processing paradigm. To test our method, we compared human observer performance in an experiment using MSSF against two established methods, averaging and Principal Components Analysis (PCA), and against its two source bands, visible and infrared. The task studied was target detection in a cluttered environment. MSSF proved superior to the other fusion methods. Based on these findings, current speculation about the circumstances in which multi-spectral image fusion in general, and specific fusion methods in particular, would be superior to using the original image sources can be further addressed.

  3. Electrolyte measurement device and measurement procedure

    DOEpatents

    Cooper, Kevin R.; Scribner, Louie L.

    2010-01-26

    A method and apparatus for measuring the through-thickness resistance or conductance of a thin electrolyte is provided. The method and apparatus include positioning a first source electrode on a first side of the electrolyte to be tested, positioning a second source electrode on the second side of the electrolyte, positioning a first sense electrode on the second side of the electrolyte, and positioning a second sense electrode on the first side of the electrolyte. Current is then passed between the first and second source electrodes, and the voltage between the first and second sense electrodes is measured.

  4. [Application of Stata software to test heterogeneity in meta-analysis method].

    PubMed

    Wang, Dan; Mou, Zhen-yun; Zhai, Jun-xia; Zong, Hong-xia; Zhao, Xiao-dong

    2008-07-01

    To introduce the application of Stata software to heterogeneity test in meta-analysis. A data set was set up according to the example in the study, and the corresponding commands of the methods in Stata 9 software were applied to test the example. The methods used were Q-test and I2 statistic attached to the fixed effect model forest plot, H statistic and Galbraith plot. The existence of the heterogeneity among studies could be detected by Q-test and H statistic and the degree of the heterogeneity could be detected by I2 statistic. The outliers which were the sources of the heterogeneity could be spotted from the Galbraith plot. Heterogeneity test in meta-analysis can be completed by the four methods in Stata software simply and quickly. H and I2 statistics are more robust, and the outliers of the heterogeneity can be clearly seen in the Galbraith plot among the four methods.
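
    The same statistics are easy to compute outside Stata; a minimal sketch with made-up study effects and standard errors:

      import numpy as np

      effect = np.array([0.42, 0.35, 0.61, 0.12, 0.55])   # hypothetical effect sizes
      se = np.array([0.10, 0.12, 0.15, 0.11, 0.09])       # their standard errors

      w = 1.0 / se ** 2
      pooled = np.sum(w * effect) / np.sum(w)             # fixed-effect estimate
      Q = np.sum(w * (effect - pooled) ** 2)              # Cochran's Q
      df = effect.size - 1
      H = np.sqrt(Q / df)                                 # H statistic
      I2 = max(0.0, (Q - df) / Q) * 100.0                 # I2 statistic (%)
      print(f"Q = {Q:.2f}, H = {H:.2f}, I2 = {I2:.1f}%")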

  5. Source-receptor matrix calculation with a Lagrangian particle dispersion model in backward mode

    NASA Astrophysics Data System (ADS)

    Seibert, P.; Frank, A.

    2003-04-01

    A method for the calculation of source-receptor (s-r) relationships (sensitivity of a trace substance concentration at some place and time to emission at some place and time) with Lagrangian particle models has been derived and presented previously (Air Pollution Modeling and its Application XIV, Proc. of ITM Boulder 2000). Now, the generalisation to any linear s-r relationship, including dry and wet deposition, decay etc., is presented. It was implemented in the model FLEXPART and tested extensively in idealised set-ups. These tests turned out to be very useful for finding minor model bugs and inaccuracies, and can be recommended generally for model testing. Recently, a convection scheme has been integrated in FLEXPART which was also tested. Both source and receptor can be specified in mass mixing ratio or mass units. Properly taking care of this is quite relevant for sources and receptors at different levels in the atmosphere. Furthermore, we present a test with the transport of aerosol-bound Caesium-137 from the areas contaminated by the Chernobyl disaster to Stockholm during one month.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Passarge, M; Fix, M K; Manser, P

    Purpose: To create and test an accurate EPID-frame-based VMAT QA metric to detect gross dose errors in real time and to provide information about the source of error. Methods: A Swiss cheese model was created for an EPID-based real-time QA process. The system compares a treatment-plan-based reference set of EPID images with images acquired over each 2° gantry angle interval. The metric utilizes a sequence of independent, consecutively executed error detection methods: a masking technique that verifies in-field radiation delivery and ensures no out-of-field radiation; output normalization checks at two different stages; global image alignment to quantify rotation, scaling and translation; standard gamma evaluation (3%, 3 mm); and pixel intensity deviation checks including and excluding high dose gradient regions. Tolerances for each test were determined. For algorithm testing, twelve different types of errors were selected to modify the original plan. Corresponding predictions for each test case were generated, which included measurement-based noise. Each test case was run multiple times (with different noise per run) to assess the ability to detect introduced errors. Results: Averaged over five test runs, 99.1% of all plan variations that resulted in patient dose errors were detected within 2° and 100% within 4° (~1% of patient dose delivery). Including cases that led to slightly modified but clinically equivalent plans, 91.5% were detected by the system within 2°. Based on the type of method that detected the error, determination of error sources was achieved. Conclusion: An EPID-based during-treatment error detection system for VMAT deliveries was successfully designed and tested. The system utilizes a sequence of methods to identify and prevent gross treatment delivery errors. The system was inspected for robustness with realistic noise variations, demonstrating that it has the potential to detect a large majority of errors in real time and indicate the error source. J. V. Siebers receives funding support from Varian Medical Systems.
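
    One building block named above, the gamma evaluation, can be sketched in one dimension: for each measured point, gamma is the minimum combined dose/distance error over all reference points, and gamma <= 1 passes. The 3%/3 mm criteria follow the abstract; the profiles and the global normalization are assumptions for illustration.

      import numpy as np

      def gamma_1d(x_ref, d_ref, x_meas, d_meas, dta=3.0, dd=0.03):
          """Global 1D gamma: dta in mm, dd as a fraction of the maximum reference dose."""
          d_norm = dd * d_ref.max()
          g = np.empty_like(d_meas)
          for i, (xm, dm) in enumerate(zip(x_meas, d_meas)):
              term = ((x_ref - xm) / dta) ** 2 + ((d_ref - dm) / d_norm) ** 2
              g[i] = np.sqrt(term.min())
          return g

      x = np.linspace(0, 100, 501)                  # position (mm)
      ref = np.exp(-((x - 50) / 15) ** 2)           # reference profile
      meas = 1.01 * np.exp(-((x - 51) / 15) ** 2)   # 1 mm shift, 1% output error
      g = gamma_1d(x, ref, x, meas)
      print(f"pass rate: {100 * np.mean(g <= 1.0):.1f}%")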

  7. Analysis of large system black box verification test data

    NASA Technical Reports Server (NTRS)

    Clapp, Kenneth C.; Iyer, Ravishankar Krishnan

    1993-01-01

    Issues regarding black box verification of large systems are explored. The study begins by collecting data from several testing teams. An integrated database containing test, fault, repair, and source file information is generated. Intuitive effectiveness measures are generated using conventional black box testing results analysis methods. Conventional analysis methods indicate that the testing was effective in the sense that as more tests were run, more faults were found. Average behavior and individual data points are analyzed. The data is categorized, and average behavior shows a very wide variation in the number of tests run and in pass rates (pass rates ranged from 71 percent to 98 percent). The 'white box' data contained in the integrated database is studied in detail. Conservative measures of effectiveness are discussed. Testing efficiency (ratio of repairs to number of tests) is measured at 3 percent, fault record effectiveness (ratio of repairs to fault records) is measured at 55 percent, and test script redundancy (ratio of number of failed tests to minimum number of tests needed to find the faults) ranges from 4.2 to 15.8. Error-prone source files and subsystems are identified. A correlational mapping of test functional area to product subsystem is completed. A new adaptive testing process based on real-time generation of the integrated database is proposed.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jozsef, G

    Purpose: To build a test device for HDR afterloaders capable of checking source positions and times at positions, and of estimating the activity of the source. Methods: A catheter is taped on a plastic scintillation sheet. When a source travels through the catheter, the scintillator sheet lights up around the source. The sheet is monitored with a video camera, which records the movement of the light spot. The center of the spot on each image in the video provides the source location, and the time stamps of the images provide the dwell time the source spends at each location. Finally, the brightness of the light spot is related to the activity of the source. A code was developed to remove noise, calibrate the image scale to centimeters, eliminate the distortion caused by the oblique viewing angle, identify the boundaries of the light spot, transform the image into binary form, and detect and calculate the source motion, positions and times. The images are much less noisy if the camera is shielded, which requires that the light spot be monitored in a mirror rather than directly. The whole assembly is covered from external light and has a size of approximately 17×35×25 cm (H×L×W). Results: A cheap camera in black-and-white mode proved to be sufficient with a plastic scintillator sheet. The best images were obtained with a 3 mm thick sheet with a ZnS:Ag surface coating. Shielding the camera decreased the noise but could not eliminate it. A test run, even under noisy conditions, showed differences of approximately 1 mm and 1 s from the planned positions and dwell times. Activity tests are in progress. Conclusion: The proposed method is feasible. It might simplify the monthly QA process of HDR brachytherapy units.
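
    A hedged sketch of the frame analysis: threshold each frame, take the intensity centroid as the source position, and accumulate dwell time from consecutive frames whose centroid stays put. The synthetic frames below stand in for the scintillator video; thresholds and sizes are invented.

      import numpy as np

      def spot_centroid(frame, thresh):
          """Intensity-weighted centroid (x, y) of pixels above the threshold."""
          mask = frame > thresh
          ys, xs = np.nonzero(mask)
          w = frame[mask]
          return np.array([np.sum(w * xs), np.sum(w * ys)]) / np.sum(w)

      rng = np.random.default_rng(3)
      fps, positions = 30.0, []
      for _ in range(90):                            # 3 s of frames, stationary source
          frame = 0.1 * rng.random((64, 64))         # background noise
          frame[30:34, 40:44] += 1.0                 # bright spot around the source
          positions.append(spot_centroid(frame, 0.5))

      moves = [np.linalg.norm(a - b) for a, b in zip(positions[1:], positions[:-1])]
      dwell = sum(m < 1.0 for m in moves) / fps      # seconds the spot stays within 1 px
      print(f"dwell time ~ {dwell:.2f} s")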

  9. 40 CFR 63.642 - General standards.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... reduction. (4) Data shall be reduced in accordance with the EPA-approved methods specified in the applicable section or, if other test methods are used, the data and methods shall be validated according to the protocol in Method 301 of appendix A of this part. (e) Each owner or operator of a source subject to this...

  10. 40 CFR 52.126 - Control strategy and regulations: Particulate matter.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... be determined by using method 2 and traversing according to method 1. Gas analysis shall be performed using the integrated sample technique of method 3, and moisture content shall be determined by the condenser technique of method 4. (iii) All tests shall be conducted while the source is operating at the...

  11. Near-source mobile methane emission estimates using EPA Method 33A and a novel probabilistic approach as a basis for leak quantification in urban areas

    NASA Astrophysics Data System (ADS)

    Albertson, J. D.

    2015-12-01

    Methane emissions from underground pipeline leaks remain an ongoing issue in the development of accurate methane emission inventories for the natural gas supply chain. Application of mobile methods during routine street surveys would help address this issue, but there are large uncertainties in current approaches. In this paper, we describe results from a series of near-source (< 30 m) controlled methane releases where an instrumented van was used to measure methane concentrations during both fixed-location sampling and mobile traverses immediately downwind of the source. The measurements were used to evaluate the application of EPA Method 33A for estimating methane emissions downwind of a source and also to test the application of a new probabilistic approach for estimating emission rates from mobile traverse data.

  12. Electron volt spectroscopy on a pulsed neutron source

    NASA Astrophysics Data System (ADS)

    Newport, R. J.; Penfold, J.; Williams, W. G.

    1984-07-01

    The principal design aspects of a pulsed-source neutron spectrometer, in which the scattered neutron energy is determined by a resonance absorption filter difference method, are discussed. Calculations of the accessible dynamic range and resolution, and spectrum simulations, are given for the spectrometer on a high-intensity pulsed neutron source, such as the spallation neutron source (SNS) now being constructed at the Rutherford Appleton Laboratory. Special emphasis is placed on the advantage gained by placing coarse and fixed energy-sensitive filters before and after the scatterer; these enhance the inelastic/elastic discrimination of the method. A brief description is given of a double-difference filter method which gives a superior difference peak shape, as well as better energy transfer resolution. Finally, some first results of scattering from zirconium hydride, obtained on a test spectrometer, are presented.

  13. Use of a Microphone Phased Array to Determine Noise Sources in a Rocket Plume

    NASA Technical Reports Server (NTRS)

    Panda, J.; Mosher, R.

    2010-01-01

    A 70-element microphone phased array was used to identify noise sources in the plume of a solid rocket motor. An environment chamber was built and other precautions were taken to protect the sensitive condenser microphones from rain, thunderstorms and other environmental elements during their prolonged stay at the outdoor test stand. A camera mounted at the center of the array was used to photograph the plume. In the first phase of the study the array was placed in an anechoic chamber for calibration and for validation of the in-house MATLAB-based beamforming software. It was found that "advanced" beamforming methods such as CLEAN-SC were only partially successful in identifying speaker sources placed closer than the Rayleigh criterion. For the field test, all equipment was shipped to NASA Marshall Space Flight Center, where the array hardware was rebuilt around the test stand. The sensitive amplifiers and the data acquisition hardware were placed in a safe basement, and 100 m long cables were used to connect the microphones, Kulites and the camera. The array chamber and the microphones were found to withstand the environmental elements as well as the shaking from the noise generated by the rocket plume. The beamform map was superimposed on a photo of the rocket plume to readily identify the source distribution. It was found that the plume formed an exceptionally long (>30 diameters) noise source over a large frequency range. The shock pattern created spatial modulation of the noise source. Interestingly, the concrete pad of the horizontal test stand was found to be a good acoustic reflector: the beamform map showed two distinct source distributions, the plume and its reflection on the pad. The array was found to be most effective in the frequency range of 2 kHz to 10 kHz. As expected, the classical beamforming method excessively smeared the noise sources at lower frequencies and produced excessive side lobes at higher frequencies. The "advanced" beamforming routine CLEAN-SC created a series of lumped sources which may be unphysical. We believe that the present effort is the first-ever attempt to directly measure the noise source distribution in a rocket plume.

  14. IMPROVING PARTICULATE MATTER SOURCE APPORTIONMENT FOR HEALTH STUDIES: A TRAINED RECEPTOR MODELING APPROACH WITH SENSITIVITY, UNCERTAINTY AND SPATIAL ANALYSES

    EPA Science Inventory

    An approach for conducting PM source apportionment will be developed, tested, and applied that directly addresses limitations in current SA methods, in particular variability, biases, and intensive resource requirements. Uncertainties in SA results and sensitivities to SA inpu...

  15. 40 CFR 63.5997 - How do I conduct tests and procedures for tire cord production affected sources?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) NATIONAL EMISSION STANDARDS FOR HAZARDOUS AIR POLLUTANTS FOR SOURCE CATEGORIES (CONTINUED) National Emissions Standards for Hazardous Air...? (a) Methods to determine the mass percent of each HAP in coatings. (1) To determine the HAP content...

  16. A novel multi-segment path analysis based on a heterogeneous velocity model for the localization of acoustic emission sources in complex propagation media.

    PubMed

    Gollob, Stephan; Kocur, Georg Karl; Schumacher, Thomas; Mhamdi, Lassaad; Vogel, Thomas

    2017-02-01

    In acoustic emission analysis, common source location algorithms assume, independently of the nature of the propagation medium, a straight (shortest) wave path between the source and the sensors. For heterogeneous media such as concrete, the wave travels along complex paths due to interaction with the dissimilar material contents and with the geometrical and material irregularities present in these media. For instance, cracks and large air voids in concrete significantly influence the way the wave travels by causing wave path deviations. Neglecting these deviations by assuming straight paths can introduce significant errors into the source location results. In this paper, a novel source localization method called FastWay is proposed. Contrary to most available shortest-path-based methods, it accounts for the different effects of material discontinuities (cracks and voids). FastWay, based on a heterogeneous velocity model, uses the fastest rather than the shortest travel paths between the source and each sensor. The method was evaluated both numerically and experimentally, and the results from both evaluation tests show that, in general, FastWay was able to locate the sources of acoustic emissions more accurately and reliably than traditional source localization methods.
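
    The fastest-path idea can be sketched with Dijkstra's algorithm on a heterogeneous velocity grid, so that arrivals route around a slow zone rather than crossing it on the straight ray. The grid, velocities and slow region below are invented, and this is not the published FastWay code.

      import heapq
      import numpy as np

      def travel_times(vel, src, h=1.0):
          """First-arrival times from src to every cell of a 2D velocity grid (4-neighbour Dijkstra)."""
          ny, nx = vel.shape
          t = np.full((ny, nx), np.inf)
          t[src] = 0.0
          heap = [(0.0, src)]
          while heap:
              tt, (i, j) = heapq.heappop(heap)
              if tt > t[i, j]:
                  continue
              for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                  ni, nj = i + di, j + dj
                  if 0 <= ni < ny and 0 <= nj < nx:
                      # edge time: cell size over the mean of the two cell velocities
                      nt = tt + h / (0.5 * (vel[i, j] + vel[ni, nj]))
                      if nt < t[ni, nj]:
                          t[ni, nj] = nt
                          heapq.heappush(heap, (nt, (ni, nj)))
          return t

      vel = np.full((50, 50), 4000.0)     # concrete-like P-wave speed (m/s)
      vel[20:30, 10:40] = 500.0           # slow zone standing in for a crack or void
      tt = travel_times(vel, (0, 25), h=0.01)
      print(tt[-1, 25])                   # arrival behind the slow zone exceeds the straight-ray time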

  17. Psychological testing and psychological assessment. A review of evidence and issues.

    PubMed

    Meyer, G J; Finn, S E; Eyde, L D; Kay, G G; Moreland, K L; Dies, R R; Eisman, E J; Kubiszyn, T W; Reed, G M

    2001-02-01

    This article summarizes evidence and issues associated with psychological assessment. Data from more than 125 meta-analyses on test validity and 800 samples examining multimethod assessment suggest 4 general conclusions: (a) Psychological test validity is strong and compelling, (b) psychological test validity is comparable to medical test validity, (c) distinct assessment methods provide unique sources of information, and (d) clinicians who rely exclusively on interviews are prone to incomplete understandings. Following principles for optimal nomothetic research, the authors suggest that a multimethod assessment battery provides a structured means for skilled clinicians to maximize the validity of individualized assessments. Future investigations should move beyond an examination of test scales to focus more on the role of psychologists who use tests as helpful tools to furnish patients and referral sources with professional consultation.

  18. Testing of aircraft passenger seat cushion material, full scale. Data, volume 2

    NASA Technical Reports Server (NTRS)

    Schutter, K. J.; Gaume, J. G.; Duskin, F. E.

    1980-01-01

    Burn characteristics of presently used and proposed seat cushion materials and types of construction were determined. Eight different seat cushion configurations were subjected to full-scale burn tests. Each cushion configuration was tested twice, for a total of 16 tests. Two different fire sources were used: Jet A fuel for eight tests, and a radiant energy source with propane flame for eight tests. Data were recorded for smoke density, cushion temperatures, radiant heat flux, animal response to combustion products, rate of weight loss of test specimens, cabin temperature, and type and content of gas within the cabin. When compared to existing seat cushions, the test specimens incorporating a fire barrier and those fabricated from advanced materials, using improved construction methods, exhibited significantly greater fire resistance. Flammability comparison tests were conducted on one fire-blocking configuration and one polyimide configuration.

  19. Estimated Accuracy of Three Common Trajectory Statistical Methods

    NASA Technical Reports Server (NTRS)

    Kabashnikov, Vitaliy P.; Chaikovsky, Anatoli P.; Kucsera, Tom L.; Metelskaya, Natalia S.

    2011-01-01

    Three well-known trajectory statistical methods (TSMs), namely concentration field (CF), concentration weighted trajectory (CWT), and potential source contribution function (PSCF) methods were tested using known sources and artificially generated data sets to determine the ability of TSMs to reproduce spatial distribution of the sources. In the works by other authors, the accuracy of the trajectory statistical methods was estimated for particular species and at specified receptor locations. We have obtained a more general statistical estimation of the accuracy of source reconstruction and have found optimum conditions to reconstruct source distributions of atmospheric trace substances. Only virtual pollutants of the primary type were considered. In real world experiments, TSMs are intended for application to a priori unknown sources. Therefore, the accuracy of TSMs has to be tested with all possible spatial distributions of sources. An ensemble of geographical distributions of virtual sources was generated. Spearman's rank-order correlation coefficient between spatial distributions of the known virtual and the reconstructed sources was taken to be a quantitative measure of the accuracy. Statistical estimates of the mean correlation coefficient and a range of the most probable values of correlation coefficients were obtained. All the TSMs that were considered here showed similar close results. The maximum of the ratio of the mean correlation to the width of the correlation interval containing the most probable correlation values determines the optimum conditions for reconstruction. An optimal geographical domain roughly coincides with the area supplying most of the substance to the receptor. The optimal domain's size is dependent on the substance decay time. Under optimum reconstruction conditions, the mean correlation coefficients can reach 0.70-0.75. The boundaries of the interval with the most probable correlation values are 0.6-0.9 for the decay time of 240 h and 0.5-0.95 for the decay time of 12 h. The best results of source reconstruction can be expected for the trace substances with a decay time on the order of several days. Although the methods considered in this paper do not guarantee high accuracy they are computationally simple and fast. Using the TSMs in optimum conditions and taking into account the range of uncertainties, one can obtain a first hint on potential source areas.
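
    Of the three TSMs, PSCF is the simplest to sketch: for each grid cell, PSCF_ij = m_ij / n_ij, where n_ij counts all trajectory visits to the cell and m_ij counts visits by trajectories whose receptor concentration exceeded a threshold. The random-walk "trajectories" below are purely for illustration.

      import numpy as np

      rng = np.random.default_rng(4)
      n_traj, n_steps, ngrid = 500, 48, 20
      conc = rng.lognormal(0.0, 0.5, n_traj)          # receptor concentrations
      thresh = np.percentile(conc, 75)                # "high concentration" threshold

      n_count = np.zeros((ngrid, ngrid))
      m_count = np.zeros((ngrid, ngrid))
      for k in range(n_traj):
          path = np.cumsum(rng.standard_normal((n_steps, 2)), axis=0) + ngrid / 2
          cells = np.clip(path.astype(int), 0, ngrid - 1)
          for i, j in cells:
              n_count[i, j] += 1                      # all visits
              if conc[k] > thresh:
                  m_count[i, j] += 1                  # visits by polluted trajectories

      pscf = np.divide(m_count, n_count,
                       out=np.zeros_like(m_count), where=n_count > 0)
      print(pscf.max())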

  20. Randomly iterated search and statistical competency as powerful inversion tools for deformation source modeling: Application to volcano interferometric synthetic aperture radar data

    NASA Astrophysics Data System (ADS)

    Shirzaei, M.; Walter, T. R.

    2009-10-01

    Modern geodetic techniques provide valuable and near real-time observations of volcanic activity. Characterizing the source of deformation based on these observations has become of major importance in related monitoring efforts. We investigate two random search approaches, simulated annealing (SA) and genetic algorithm (GA), and utilize them in an iterated manner. The iterated approach helps to prevent GA in general and SA in particular from getting trapped in local minima, and it also increases redundancy for exploring the search space. We apply a statistical competency test for estimating the confidence interval of the inversion source parameters, considering their internal interaction through the model, the effect of the model deficiency, and the observational error. Here, we present and test this new randomly iterated search and statistical competency (RISC) optimization method together with GA and SA for the modeling of data associated with volcanic deformations. Following synthetic and sensitivity tests, we apply the improved inversion techniques to two episodes of activity in the Campi Flegrei volcanic region in Italy, observed by the interferometric synthetic aperture radar technique. Inversion of these data allows derivation of deformation source parameters and their associated quality so that we can compare the two inversion methods. The RISC approach was found to be an efficient method in terms of computation time and search results and may be applied to other optimization problems in volcanic and tectonic environments.
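
    The iterated random-search idea can be sketched compactly. The code below restarts a hand-rolled simulated annealing loop from many random starting points and keeps the best result, so a single run trapped in a local minimum does not dominate; the misfit function, cooling schedule, and step size are toy assumptions, not the authors' RISC implementation.

```python
# Restarted ("iterated") simulated annealing on a toy multimodal misfit.
# The misfit stands in for a deformation-source forward-model misfit.
import numpy as np

def misfit(p):
    # multimodal toy objective with many local minima
    return np.sum(p**2) + 3.0 * np.sum(np.sin(3.0 * p)**2)

def simulated_annealing(p0, rng, steps=2000, t0=1.0):
    p, f = p0, misfit(p0)
    best_p, best_f = p, f
    for k in range(steps):
        temp = t0 * (1.0 - k / steps) + 1e-6           # linear cooling schedule
        cand = p + rng.normal(scale=0.1, size=p.size)  # random perturbation
        fc = misfit(cand)
        if fc < f or rng.random() < np.exp((f - fc) / temp):
            p, f = cand, fc                            # accept the move
            if f < best_f:
                best_p, best_f = p, f
    return best_p, best_f

rng = np.random.default_rng(1)
runs = [simulated_annealing(rng.uniform(-2.0, 2.0, size=4), rng) for _ in range(20)]
best_p, best_f = min(runs, key=lambda r: r[1])
print("best misfit over 20 restarts:", round(best_f, 4))
```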

  1. The sensitivity of relative toxicity rankings by the USF/NASA test method to some test variables

    NASA Technical Reports Server (NTRS)

    Hilado, C. J.; Labossiere, L. A.; Leon, H. A.; Kourtides, D. A.; Parker, J. A.; Hsu, M.-T. S.

    1976-01-01

    Pyrolysis temperature and the distance between the source and sensor of effluents are two important variables in tests for relative toxicity. Modifications of the USF/NASA toxicity screening test method to increase the upper temperature limit of pyrolysis, reduce the distance between the sample and the test animals, and increase the chamber volume available for animal occupancy, did not significantly alter rankings of relative toxicity of four representative materials. The changes rendered some differences no longer significant, but did not reverse any rankings. The materials studied were cotton, wool, aromatic polyamide, and polybenzimidazole.

  2. Technical and investigative support for high density digital satellite recording systems

    NASA Technical Reports Server (NTRS)

    Schultz, R. A.

    1982-01-01

    Methods and results of examinations and tests conducted on magnetic recording tapes under consideration for a high-density digital recording (HDDR) satellite system are described. The examinations and tests investigate the performance of the tapes with respect to their physical, magnetic, and electrical characteristics. The objective of each test, the likely significance of typical results, and the importance of the characteristics under investigation to the application are included. Theoretical discussions of measurement methods are provided where appropriate. Methods and results are discussed; the results of some sections are tabulated together to facilitate comparison. The conclusion of each test section relates the test results to their possible significance and attempts to correlate the results of that section with the results of other tests. Some sections analyze sources of error inherent in the measurement methods or relate the value of the information obtained to the objectives of the test or the overall purpose of the project.

  3. Lightning Radio Source Retrieval Using Advanced Lightning Direction Finder (ALDF) Networks

    NASA Technical Reports Server (NTRS)

    Koshak, William J.; Blakeslee, Richard J.; Bailey, J. C.

    1998-01-01

    A linear algebraic solution is provided for the problem of retrieving the location and time of occurrence of lightning ground strikes from an Advanced Lightning Direction Finder (ALDF) network. The ALDF network measures the field strength, magnetic bearing, and arrival time of lightning radio emissions. Solutions for the plane (i.e., no Earth curvature) are provided that implement all of the measurements mentioned above. Tests of the retrieval method are provided using computer-simulated data sets. We also introduce a quadratic planar solution that is useful when only three arrival time measurements are available. The algebra of the quadratic root results is examined in detail to clarify which portions of the analysis region lead to fundamental ambiguities in source location. Complex root results are shown to be associated with the presence of measurement errors when the lightning source lies near an outer sensor baseline of the ALDF network. In the absence of measurement errors, quadratic root degeneracy (no source location ambiguity) is shown to exist exactly on the outer sensor baselines for arbitrary non-collinear network geometries. The accuracy of the quadratic planar method is tested with computer-generated data sets. The results are generally better than those obtained from the three-station linear planar method when bearing errors are about 2 deg. We also note some of the advantages and disadvantages of these methods over the nonlinear method of chi-squared minimization employed by the National Lightning Detection Network (NLDN) and discussed in Cummins et al. (1993, 1995, 1998).
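
    For intuition, three-station arrival-time location on a plane can also be posed as a small numerical problem. The sketch below is not the paper's closed-form quadratic solution; it recovers a synthetic source position and onset time by nonlinear least squares on arrival-time residuals, with made-up station geometry and times.

```python
# Numerical three-station arrival-time location on a plane: minimize
# arrival-time residuals over (x, y, t0). Station layout, propagation
# speed, and arrival times are all synthetic.
import numpy as np
from scipy.optimize import least_squares

C = 3.0e8                                           # radio propagation speed, m/s
stations = np.array([[0.0, 0.0], [5.0e4, 0.0], [0.0, 5.0e4]])
true_source, true_t0 = np.array([2.0e4, 3.0e4]), 1.0e-3
t_arr = true_t0 + np.linalg.norm(stations - true_source, axis=1) / C

def residuals(q):                                   # q = (x, y, t0)
    return q[2] + np.linalg.norm(stations - q[:2], axis=1) / C - t_arr

fit = least_squares(residuals, x0=[1.0e4, 1.0e4, 0.0],
                    x_scale=[1.0e4, 1.0e4, 1.0e-3])  # balance meter/second scales
print("retrieved source position (m):", fit.x[:2].round(1))
```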

  4. Optimal Placement of Dynamic Var Sources by Using Empirical Controllability Covariance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qi, Junjian; Huang, Weihong; Sun, Kai

    In this paper, the empirical controllability covariance (ECC), which is calculated around the considered operating condition of a power system, is applied to quantify the degree of controllability of system voltages under specific dynamic var source locations. An optimal dynamic var source placement method addressing fault-induced delayed voltage recovery (FIDVR) issues is further formulated as an optimization problem that maximizes the determinant of the ECC. The optimization problem is effectively solved by the NOMAD solver, which implements the mesh adaptive direct search algorithm. The proposed method is tested on an NPCC 140-bus system and the results show that the proposed method with fault-specified ECC can solve the FIDVR issue caused by the most severe contingency with fewer dynamic var sources than the voltage sensitivity index (VSI)-based method. The proposed method with fault-unspecified ECC does not depend on the settings of the contingency and can address more FIDVR issues than the VSI method when placing the same number of SVCs under different fault durations. It is also shown that the proposed method can help mitigate voltage collapse.
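
    The determinant-maximizing selection step can be illustrated separately from the power-system modeling. In the hedged sketch below, the ECC of each candidate var location is a random positive-definite placeholder (the paper builds it from perturbed dynamic simulations), and a brute-force search over subsets stands in for the NOMAD solver.

```python
# Brute-force subset selection maximizing the log-determinant of a combined
# empirical controllability covariance. Each candidate's ECC is a random
# symmetric positive-definite placeholder, not a power-system simulation.
import itertools
import numpy as np

rng = np.random.default_rng(2)
n_states, n_candidates, k = 6, 8, 2
A = rng.normal(size=(n_candidates, n_states, n_states))
W = np.einsum("sij,skj->sik", A, A)          # one SPD matrix per candidate

best = max(itertools.combinations(range(n_candidates), k),
           key=lambda subset: np.linalg.slogdet(sum(W[s] for s in subset))[1])
print("chosen var source locations:", best)
```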

  5. Comparison of Predicted and Measured Attenuation of Turbine Noise from a Static Engine Test

    NASA Technical Reports Server (NTRS)

    Chien, Eugene W.; Ruiz, Marta; Yu, Jia; Morin, Bruce L.; Cicon, Dennis; Schwieger, Paul S.; Nark, Douglas M.

    2007-01-01

    Aircraft noise has become an increasing concern for commercial airlines. Worldwide demand for quieter aircraft is increasing, making the prediction of engine noise suppression one of the most important fields of research. The Low-Pressure Turbine (LPT) can be an important noise source during the approach condition for commercial aircraft. The National Aeronautics and Space Administration (NASA), Pratt & Whitney (P&W), and Goodrich Aerostructures (Goodrich) conducted a joint program to validate a method for predicting turbine noise attenuation. The method includes noise-source estimation, acoustic treatment impedance prediction, and in-duct noise propagation analysis. Two noise propagation prediction codes, the Eversman Finite Element Method (FEM) code [1] and the CDUCT-LaRC code [2], were used in this study to compare the predicted and measured turbine noise attenuation from a static engine test. In this paper, the test setup, test configurations, and test results are detailed in Section II. A description of the input parameters, including the estimated noise modal content (in terms of acoustic potential) and acoustic treatment impedance values, is provided in Section III. The prediction-to-test correlation study results are illustrated and discussed in Sections IV and V for the FEM and CDUCT-LaRC codes, respectively, and a summary of the results is presented in Section VI.

  6. On floats and float tests

    NASA Technical Reports Server (NTRS)

    Seewald, Friedrich

    1931-01-01

    The principal source of information on float resistance is the model test. In view of the insuperable difficulties opposing any attempt at theoretical treatment of the resistance problem, particularly at attitudes which tend toward satisfactory take-off, such as the transitory stage to planing, the towing test is and will remain the primary method for some time.

  7. Testing For EM Upsets In Aircraft Control Computers

    NASA Technical Reports Server (NTRS)

    Belcastro, Celeste M.

    1994-01-01

    Effects of transient electrical signals evaluated in laboratory tests. Method of evaluating nominally fault-tolerant, aircraft-type digital-computer-based control system devised. Provides for evaluation of susceptibility of system to upset and evaluation of integrity of control when system subjected to transient electrical signals like those induced by an electromagnetic (EM) source, in this case lightning. Beyond aerospace applications, fault-tolerant control systems are becoming more widespread in industry, such as in automobiles. Method supports practical, systematic tests for evaluation of designs of fault-tolerant control systems.

  8. Brain source localization: A new method based on MUltiple SIgnal Classification algorithm and spatial sparsity of the field signal for electroencephalogram measurements

    NASA Astrophysics Data System (ADS)

    Vergallo, P.; Lay-Ekuakille, A.

    2013-08-01

    Brain activity can be recorded by means of EEG (electroencephalogram) electrodes placed on the scalp of the patient. The EEG reflects the activity of groups of neurons located in the head, and the fundamental problem in neurophysiology is the identification of the sources responsible for brain activity, which is especially important when a seizure occurs. Studies conducted to formalize the relationship between the electromagnetic activity in the head and the recording of the generated external field make it possible to characterize patterns of brain activity. The inverse problem, in which the underlying sources must be determined from the field sampled at different electrodes, is more difficult because it may not have a unique solution, or the search for the solution is hampered by a low spatial resolution that may not allow activities involving sources close to each other to be distinguished. Thus, sources of interest may be obscured or not detected, and a known source localization method such as MUSIC (MUltiple SIgnal Classification) can fail. Many advanced source localization techniques achieve better resolution by exploiting sparsity: if the number of sources is small, the neural power versus location is sparse. In this work a solution based on the spatial sparsity of the field signal is presented and analyzed to improve the MUSIC method. For this purpose, a priori information about the sparsity of the signal must be set. The problem is formulated and solved using a regularization method such as Tikhonov's, which calculates a solution that is the best compromise between two cost functions to be minimized, one related to the fit of the data and the other to maintaining the sparsity of the signal. First, the method is tested on simulated EEG signals obtained by solving the forward problem. For the head model and brain sources considered, the result obtained shows a significant improvement over the classical MUSIC method, with a small margin of uncertainty about the exact location of the sources. In fact, the spatial sparsity constraints on the signal field concentrate power in the directions of the active sources, and consequently the positions of the sources within the considered volume conductor can be calculated. The method is then tested on real EEG data. The result is in accordance with the clinical report, even if improvements are necessary to obtain more accurate estimates of the positions of the sources.
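
    The regularized inversion at the core of this approach reduces to one linear solve. The sketch below is a minimal stand-in, not the authors' implementation: it estimates source amplitudes x from electrode data b and a lead-field matrix A by minimizing ||Ax - b||^2 + lam*||x||^2, with a random lead field where a real study would solve the forward problem from a head model, and a fixed lam where a real study would tune it.

```python
# Closed-form Tikhonov-regularized source estimate:
#   x_hat = argmin ||A x - b||^2 + lam ||x||^2 = (A^T A + lam I)^{-1} A^T b
import numpy as np

rng = np.random.default_rng(3)
n_electrodes, n_sources = 32, 200
A = rng.normal(size=(n_electrodes, n_sources))     # placeholder lead field
x_true = np.zeros(n_sources)
x_true[[20, 135]] = [1.0, -0.7]                    # two active sources
b = A @ x_true + 0.05 * rng.normal(size=n_electrodes)

lam = 1.0                                          # assumed regularization weight
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n_sources), A.T @ b)
print("strongest estimated sources at indices:", np.sort(np.argsort(np.abs(x_hat))[-2:]))
```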

  9. Microseismic source locations with deconvolution migration

    NASA Astrophysics Data System (ADS)

    Wu, Shaojiang; Wang, Yibo; Zheng, Yikang; Chang, Xu

    2018-03-01

    Identifying and locating microseismic events are critical problems in hydraulic fracturing monitoring for unconventional resources exploration. In contrast to active seismic data, microseismic data are usually recorded with unknown source excitation time and source location. In this study, we introduce deconvolution migration by combining deconvolution interferometry with interferometric cross-correlation migration (CCM). This method avoids the need for the source excitation time and enhances both the spatial resolution and the robustness by eliminating the squared source-wavelet term from CCM. The proposed algorithm is divided into the following three steps: (1) generate the virtual gathers by deconvolving the master trace with all other traces in the microseismic gather to remove the unknown excitation time; (2) migrate the virtual gather to obtain a single image of the source location; and (3) stack all of these images together to obtain the final estimated image of the source location. We test the proposed method on complex synthetic and field data sets from surface hydraulic fracturing monitoring, and compare the results with those obtained by interferometric CCM. The results demonstrate that the proposed method can obtain a 50 per cent higher spatial resolution image of the source location and a more robust estimate, with smaller localization errors, especially in the presence of velocity model errors. This method is also beneficial for source mechanism inversion and global seismology applications.
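
    Step (1) is essentially a water-level deconvolution in the frequency domain. The sketch below illustrates it on a random placeholder gather, with a stabilization constant eps standing in for whatever regularization the authors actually used.

```python
# Water-level deconvolution of a gather by a master trace: the common,
# unknown excitation time cancels in U_i * conj(U_m) / |U_m|^2.
# Traces are random placeholders; eps is an assumed stabilization constant.
import numpy as np

rng = np.random.default_rng(4)
n_traces, n_samples = 8, 512
gather = rng.normal(size=(n_traces, n_samples))   # placeholder microseismic gather
master = gather[0]

G = np.fft.rfft(gather, axis=1)
M = np.fft.rfft(master)
eps = 1e-2 * np.mean(np.abs(M)**2)
virtual = np.fft.irfft(G * np.conj(M) / (np.abs(M)**2 + eps), n=n_samples, axis=1)
print("virtual gather shape:", virtual.shape)
```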

  10. Soil-contact decay tests using small blocks : a procedural analysis

    Treesearch

    Rodney C. De Groot; James W. Evans; Paul G. Forsyth; Camille M. Freitag; Jeffrey J. Morrell

    Much discussion has been held regarding the merits of laboratory decay tests compared with field tests to evaluate wood preservatives. In this study, procedural aspects of soil jar decay tests with 1-cm³ blocks were critically examined. Differences among individual bottles were a major source of variation in this method. The reproducibility and sensitivity of the soil...

  11. A New Generation of Leaching Tests – The Leaching Environmental Assessment Framework

    EPA Science Inventory

    Provides an overview of newly released leaching tests that provide a more accurate source term when estimating environmental release of metals and other constituents of potential concern (COPCs). The Leaching Environmental Assessment Framework (LEAF) methods have been (1) develo...

  12. The effects of supplementary cementitious materials on alkali-silica reaction : [technical summary].

    DOT National Transportation Integrated Search

    2015-07-01

    The Kansas Department of Transportation (KDOT) has controlled alkali-silica : reaction (ASR) for more than 70 years through the use of selected aggregates. : Sand and gravel sources had to be tested using Kansas Test Method KTMR- : 23 (1999), Wetting...

  13. The effects of supplementary cementitious materials on alkali-silica reaction.

    DOT National Transportation Integrated Search

    2015-07-01

    The Kansas Department of Transportation (KDOT) has controlled alkali-silica reaction (ASR) for more than : 70 years through the use of selected aggregates. Sand and gravel sources had to be tested using Kansas Test Method : KTMR-23 (1999), Wetting an...

  14. A novel amblyopia treatment system based on LED light source

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaoqing; Chen, Qingshan; Wang, Xiaoling

    2011-05-01

    A novel LED (light emitting diode) light source of five different colors (white, red, green, blue, and yellow) is adopted in place of conventional incandescent lamps for an amblyopia treatment system, and seven training methods for rectifying amblyopia are incorporated so as to achieve an integrated therapy. The LED light source is designed to provide uniform illumination, adjustable light intensity, and alterable colors. Experimental tests indicate that the LED light source operates steadily and fulfills the technical demands of amblyopia treatment.

  15. A novel amblyopia treatment system based on LED light source

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaoqing; Chen, Qingshan; Wang, Xiaoling

    2010-12-01

    A novel LED (light emitting diode) light source of five different colors (white, red, green, blue, and yellow) is adopted in place of conventional incandescent lamps for an amblyopia treatment system, and seven training methods for rectifying amblyopia are incorporated so as to achieve an integrated therapy. The LED light source is designed to provide uniform illumination, adjustable light intensity, and alterable colors. Experimental tests indicate that the LED light source operates steadily and fulfills the technical demands of amblyopia treatment.

  16. Dose calibrator linearity test: 99mTc versus 18F radioisotopes*

    PubMed Central

    Willegaignon, José; Sapienza, Marcelo Tatit; Coura-Filho, George Barberio; Garcez, Alexandre Teles; Alves, Carlos Eduardo Gonzalez Ribeiro; Cardona, Marissa Anabel Rivera; Gutterres, Ricardo Fraga; Buchpiguel, Carlos Alberto

    2015-01-01

    Objective The present study was aimed at evaluating the viability of replacing 18F with 99mTc in dose calibrator linearity testing. Materials and Methods The test was performed with sources of 99mTc (62 GBq) and 18F (12 GBq) whose activities were measured down to values lower than 1 MBq. Ratios and deviations between the experimental and theoretical 99mTc and 18F source activities were calculated and subsequently compared. Results Mean deviations between the experimental and theoretical source activities were 0.56 (± 1.79)% for 99mTc and 0.92 (± 1.19)% for 18F. The mean ratio between the activities indicated by the device for the 99mTc source as measured with the equipment pre-calibrated for 99mTc and for 18F was 3.42 (± 0.06), and for the 18F source this ratio was 3.39 (± 0.05); these values were constant over the measurement time. Conclusion The results of the linearity test using 99mTc were compatible with those obtained with the 18F source, indicating the viability of using either radioisotope in dose calibrator linearity testing. This information, together with the high potential for radiation exposure and the costs involved in 18F acquisition, suggests 99mTc as the element of choice for dose calibrator linearity tests in centers that use 18F, without any detriment to the procedure or to the quality of the nuclear medicine service. PMID:25798005
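
    The theoretical activities against which the readings are compared follow directly from the decay law A(t) = A0 * 2^(-t/T_half). A hedged sketch with illustrative numbers, not the study's data:

```python
# Theoretical activity from the decay law A(t) = A0 * 2**(-t / T_half),
# compared with (simulated) measured readings via percent deviation.
import numpy as np

T_HALF_TC99M_H = 6.0                             # 99mTc half-life, hours
a0_mbq = 62.0e3                                  # initial activity (62 GBq in MBq)
t_h = np.array([0.0, 12.0, 24.0, 48.0, 96.0])    # elapsed time, hours
theoretical = a0_mbq * 2.0 ** (-t_h / T_HALF_TC99M_H)

rng = np.random.default_rng(5)
measured = theoretical * (1.0 + 0.01 * rng.normal(size=t_h.size))

deviation_pct = 100.0 * (measured - theoretical) / theoretical
print(f"mean deviation: {deviation_pct.mean():+.2f}% (sd {deviation_pct.std(ddof=1):.2f}%)")
```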

  17. First results from experiment in South China Sea using marine controlled source electromagnetic

    NASA Astrophysics Data System (ADS)

    Li, Yuan; Wang, Lipeng; Deng, Ming

    2016-04-01

    We concentrated on the use of marine controlled-source electromagnetic (CSEM) sounding with a horizontal electric dipole source towed close to the seafloor and receivers anchored on the seafloor. We applied the CSEM method in the South China Sea for the first time in 2014, which tested not only our instrument but also our data processing method. Electromagnetic fields transmitted by a towed electric dipole source in the deep sea were measured by a linear array of six seafloor receivers positioned 600 m apart. Our results revealed two highly resistive layers beneath the survey line and a gas hydrate saturation profile associated with the anomalous resistivity. In this letter, we discuss some anomalous layers encountered during interpretation. The most plausible explanation of the first resistive layer is that large amounts of gas hydrate have accumulated at 200 m depth below the seep sites; the second layer is most plausibly explained, according to the conceptual model and a comparison of the resistivity with other evidence such as seismic and well data from the same survey, by considerable volumes of gas hydrate accumulated near the seafloor along the survey line. Future work should test these interpretations against other observations, such as heat flow and geochemical evidence.

  18. Screening assays of termite gut microbes that potentially as probiotic for human to digest cellulose as new food source

    NASA Astrophysics Data System (ADS)

    Abdullah, R.; Ananda, K. R. T.; Wijanarka

    2018-05-01

    According to the UN, the world population will increase from approximately 7.3 billion in 2015 to 11.2 billion by 2100. At the same time, food needs are not balanced by the availability of food on Earth, and the people of the world need a solution in the form of a new food source. With the ability to digest cellulose, people could consume cellulose as a new food source to obtain glucose. The aim of this research was to obtain selected cellulolytic bacteria from termite guts with potential as human probiotics for breaking down cellulose. The methods were as follows: isolation of termite gut microbes; purification of cellulolytic microbes by a screening method; and probiotic tests comprising a microbial pathogenicity test and resistance tests against human stomach acid and osmotic salt concentrations. The results show that three pure isolates of termite gut microbes can break down cellulose in a medium containing 1% CMC and 0.1% Congo red (an indicator of cellulose degradation activity) and survive at pH 2-2.5 and under osmotic salt conditions. Two isolates show gamma hemolysis (non-pathogenic in terms of pathogenicity on human blood). In conclusion, the isolated termite gut microbes can be used as probiotic candidates enabling humans to digest cellulose as a new food source for the era of global food scarcity.

  19. SIMPLIFIED PRACTICAL TEST METHOD FOR PORTABLE DOSE METERS USING SEVERAL SEALED RADIOACTIVE SOURCES.

    PubMed

    Mikamoto, Takahiro; Yamada, Takahiro; Kurosawa, Tadahiro

    2016-09-01

    Sealed radioactive sources of small activity were employed for the determination of response and for tests of the non-linearity and energy dependence of detector responses. A close source-to-detector geometry (0.3 m or less) was employed in practical tests of portable dose meters so as to accumulate statistically sufficient ionization currents. The difference between the response in the experimentally studied field and that in a reference field compliant with ISO 4037, which arises from the non-uniformity of the radiation fluence at close geometry, was corrected using Monte Carlo simulation. As a consequence, the corrected results were consistent with the results obtained in the ISO 4037 reference field within their uncertainties.

  20. Standard Test Methods for Textile Composites

    NASA Technical Reports Server (NTRS)

    Masters, John E.; Portanova, Marc A.

    1996-01-01

    Standard testing methods for composite laminates reinforced with continuous networks of braided, woven, or stitched fibers have been evaluated. The microstructure of these 'textile' composite materials differs significantly from that of tape laminates. Consequently, specimen dimensions and loading methods developed for tape-type composites may not be applicable to textile composites. To this end, a series of evaluations were made comparing testing practices currently used in the composites industry. Information was gathered from a variety of sources and analyzed to establish a series of recommended test methods for textile composites. The current practices established for laminated composite materials by ASTM and the MIL-HDBK-17 Committee were considered. This document provides recommended test methods for determining both in-plane and out-of-plane properties. Specifically, test methods are suggested for: unnotched tension and compression; open and filled hole tension; open hole compression; bolt bearing; and interlaminar tension. A detailed description of the material architectures evaluated is also provided, as is a recommended instrumentation practice.

  1. Well water quality in rural Nicaragua using a low-cost bacterial test and microbial source tracking.

    PubMed

    Weiss, Patricia; Aw, Tiong Gim; Urquhart, Gerald R; Galeano, Miguel Ruiz; Rose, Joan B

    2016-04-01

    Water-related diseases, particularly diarrhea, are major contributors to morbidity and mortality in developing countries. Monitoring water quality on a global scale is crucial to making progress in terms of population health. Traditional analytical methods are difficult to use in many regions of the world in low-resource settings that face severe water quality issues, owing to the inaccessibility of laboratories. This study aimed to evaluate a new low-cost method, the compartment bag test (CBT), in rural Nicaragua. The CBT was used to quantify the presence of Escherichia coli in drinking water wells and to determine the source(s) of any microbial contamination. Results indicate that the CBT is a viable method for use in remote rural regions. The overall quality of well water in Pueblo Nuevo, Nicaragua was deemed unsafe, and the results led to the conclusion that animal fecal wastes may be one of the leading causes of well contamination. Elevation and depth of wells were not found to affect overall water quality. However, rope-pump wells had a 64.1% reduction in contamination when compared with simple wells.

  2. Probing interferometric parallax with interplanetary spacecraft

    NASA Astrophysics Data System (ADS)

    Rodeghiero, G.; Gini, F.; Marchili, N.; Jain, P.; Ralston, J. P.; Dallacasa, D.; Naletto, G.; Possenti, A.; Barbieri, C.; Franceschini, A.; Zampieri, L.

    2017-07-01

    We describe an experimental scenario for testing a novel method to measure the distance and proper motion of astronomical sources. The method is based on multi-epoch observations of amplitude or intensity correlations between separate receiving systems. This technique, called Interferometric Parallax, efficiently exploits phase information that has traditionally been overlooked. The test case we discuss combines amplitude correlations of signals from deep-space interplanetary spacecraft with those from distant galactic and extragalactic radio sources, with the goal of estimating the interplanetary spacecraft distance. Interferometric parallax relies on the detection of wavefront curvature effects in signals collected by pairs of separate receiving systems. The method shows promising potential compared with current techniques when the target is unresolved from the background reference sources. Developments in this field might lead to the construction of an independent, geometrical cosmic distance ladder using a dedicated project and future-generation instruments. We present a conceptual overview supported by numerical estimates of its performance applied to a spacecraft orbiting in the Solar System. Simulations support the feasibility of measurements with a simple and time-saving observational scheme using current facilities.

  3. Study of acoustic emission during mechanical tests of large flight weight tank structure

    NASA Technical Reports Server (NTRS)

    Nakamura, Y.; Mccauley, B. O.; Veach, C. L.

    1972-01-01

    A polyphenylene oxide insulated, flight weight, subscale, aluminum tank was monitored for acoustic emissions during a proof test and during 100 cycles of environmental tests simulating space flights. The use of a combination of frequency filtering and appropriate spatial filtering to reduce background noise was found to be sufficient to detect the relatively weak acoustic emission signals expected from subcritical crack growth in the structure. Several emission source locations were identified, including one where a flaw was detected by post-test X-ray inspections. For most source locations, however, post-test inspections did not detect flaws; this was partially attributed to the higher sensitivity of the acoustic emission technique relative to any other currently available NDT method for detecting flaws.

  4. Laboratory Evaluations of the Enterococcus qPCR Method for Recreational Water Quality Testing: Method Performance and Sources of Uncertainty in Quantitative Measurements

    EPA Science Inventory

    The BEACH Act of 2000 directed the U.S. EPA to establish more expeditious methods for the detection of pathogen indicators in coastal waters, as well as new water quality criteria based on these methods. Progress has been made in developing a quantitative PCR (qPCR) method for en...

  5. A trial of reliable estimation of non-double-couple component of microearthquakes

    NASA Astrophysics Data System (ADS)

    Imanishi, K.; Uchide, T.

    2017-12-01

    Although most tectonic earthquakes are caused by shear failure, it has been reported that injection-induced seismicity and earthquakes occurring in volcanoes and geothermal areas contain non-double-couple (non-DC) components (e.g., Dreger et al., 2000). In tectonic earthquakes as well, small non-DC components are beginning to be detected (e.g., Ross et al., 2015). However, the non-DC component can generally be estimated with sufficient accuracy only for relatively large earthquakes. In order to gain further understanding of fluid-driven earthquakes and fault zone properties, it is important to estimate the full moment tensors of many microearthquakes with high precision. At the last AGU meeting, we proposed a method that iteratively applies the relative moment tensor inversion (RMTI) (Dahm, 1996) to source clusters, improving each moment tensor as well as their relative accuracy. This new method overcomes the problem of RMTI that errors in the mechanisms of reference events lead to biased solutions for other events, while retaining the advantage of RMTI that the source mechanisms can be determined without computing Green's functions. The procedure is briefly summarized as follows: (1) sample co-located multiple earthquakes with focal mechanisms, as initial solutions, determined by an ordinary method; (2) apply the RMTI to estimate the source mechanism of each event relative to those of the other events; (3) repeat step 2 for the modified source mechanisms until the reduction of the total residual converges. In order to confirm whether the method can resolve non-DC components, we conducted numerical tests on synthetic data. Amplitudes were computed assuming non-DC sources, amplified by factors between 0.2 and 4 as site effects, with 10% random noise added. As initial solutions in step 1, we gave DC sources with arbitrary strike, dip, and rake angles. In a test with eight sources at 12 stations, for example, all solutions were successively improved by iteration, and non-DC components were successfully resolved despite the fact that we gave DC sources as initial solutions. The application of the method to microearthquakes in a geothermal area in Japan will be presented.

  6. Compression Testing of Textile Composite Materials

    NASA Technical Reports Server (NTRS)

    Masters, John E.

    1996-01-01

    The applicability of existing test methods, which were developed primarily for laminates made of unidirectional prepreg tape, to textile composites is an area of concern. The issue is whether the values measured for the 2-D and 3-D braided, woven, stitched, and knit materials are accurate representations of the true material response. This report provides a review of efforts to establish a compression test method for textile reinforced composite materials. Experimental data have been gathered from several sources and evaluated to assess the effectiveness of a variety of test methods. The effectiveness of the individual test methods to measure the material's modulus and strength is determined. Data are presented for 2-D triaxial braided, 3-D woven, and stitched graphite/epoxy material. However, the determination of a recommended test method and specimen dimensions is based, primarily, on experimental results obtained by the Boeing Defense and Space Group for 2-D triaxially braided materials. They evaluated seven test methods: NASA Short Block, Modified IITRI, Boeing Open Hole Compression, Zabora Compression, Boeing Compression after Impact, NASA ST-4, and a Sandwich Column Test.

  7. Determining the source characteristics of explosions near the Earth's surface

    DOE PAGES

    Pasyanos, Michael E.; Ford, Sean R.

    2015-04-09

    We present a method to determine the source characteristics of explosions near the air-earth interface. The technique is an extension of the regional amplitude envelope method and now accounts for the reduction of seismic amplitudes as the depth of the explosion approaches the free surface and less energy is coupled into the ground. We first apply the method to the Humming Roadrunner series of shallow explosions in New Mexico, where the yields and depths are known. From these tests, we find that knowledge of the material properties is important for both source coupling/excitation and the free-surface effect. Although there is the expected tradeoff between depth and yield due to coupling effects, the estimated yields are generally close to the known values when the depth is constrained to the free surface. We then apply the method to a regionally recorded explosion in Syria. We estimate an explosive yield of less than the 60 tons claimed by sources in the open press. The modifications to the method allow us to apply the technique to new classes of events, but we will need a better understanding of explosion source models and the properties of additional geologic materials.

  8. Continuous wavelet transform and Euler deconvolution method and their application to magnetic field data of Jharia coalfield, India

    NASA Astrophysics Data System (ADS)

    Singh, Arvind; Singh, Upendra Kumar

    2017-02-01

    This paper deals with the application of continuous wavelet transform (CWT) and Euler deconvolution methods to estimate source depths from magnetic anomalies. These methods are utilized mainly to focus on the fundamental issues of mapping the major coal seam and locating tectonic lineaments. The main aim of the study is to locate and characterize the sources of the magnetic field by transferring the data into an auxiliary space using the CWT. The method has been tested on several synthetic source anomalies and finally applied to magnetic field data from the Jharia coalfield, India. The mean depths of the causative sources obtained from the magnetic field data point to differing lithospheric depths over the study region. It is also inferred that there are two faults, namely the northern boundary fault and the southern boundary fault, oriented in the northeastern and southeastern directions, respectively. Moreover, the central part of the region is more faulted and folded than the other parts and has a sediment thickness of about 2.4 km. The methods give the mean depths of the causative sources without any a priori information, which can be used as an initial model in any inversion algorithm.
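
    Conventional Euler deconvolution reduces, window by window, to a small least-squares problem. The sketch below solves Euler's homogeneity equation (x - x0)dT/dx + (y - y0)dT/dy + (z - z0)dT/dz = N(B - T) for one window of a synthetic pole-like anomaly; the structural index, grid, and source are assumptions for illustration, not the Jharia data.

```python
# One-window Euler deconvolution: rearrange the homogeneity equation to
#   x0*Tx + y0*Ty + z0*Tz + N*B = x*Tx + y*Ty + z*Tz + N*T
# and solve for (x0, y0, z0, B) by least squares. The anomaly and its
# gradients are analytic for a synthetic pole-like source (z positive down,
# observation plane at z = 0).
import numpy as np

N = 3.0                                     # structural index for a point pole
x, y = np.meshgrid(np.linspace(-500, 500, 21), np.linspace(-500, 500, 21))
x0t, y0t, z0t = 60.0, -40.0, 150.0          # true synthetic source position
r2 = (x - x0t)**2 + (y - y0t)**2 + z0t**2
T = 1.0e9 / r2**1.5                         # pole-like field, homogeneous degree -3
Tx = -3.0 * T * (x - x0t) / r2              # analytic gradients at z = 0
Ty = -3.0 * T * (y - y0t) / r2
Tz = 3.0 * T * z0t / r2

A = np.column_stack([Tx.ravel(), Ty.ravel(), Tz.ravel(), np.full(T.size, N)])
rhs = (x * Tx + y * Ty + N * T).ravel()     # z = 0 on the observation plane
sol, *_ = np.linalg.lstsq(A, rhs, rcond=None)
print("estimated (x0, y0, z0, B):", sol.round(1))
```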

  9. Combining land use information and small stream sampling with PCR-based methods for better characterization of diffuse sources of human fecal pollution.

    PubMed

    Peed, Lindsay A; Nietch, Christopher T; Kelty, Catherine A; Meckes, Mark; Mooney, Thomas; Sivaganesan, Mano; Shanks, Orin C

    2011-07-01

    Diffuse sources of human fecal pollution allow for the direct discharge of waste into receiving waters with minimal or no treatment. Traditional culture-based methods are commonly used to characterize fecal pollution in ambient waters; however, these methods do not discern between human and other animal sources of fecal pollution, making it difficult to identify diffuse pollution sources. Human-associated quantitative real-time PCR (qPCR) methods in combination with low-order headwatershed sampling, precipitation information, and high-resolution geographic information system land use data can be useful for identifying diffuse sources of human fecal pollution in receiving waters. To test this assertion, this study monitored nine headwatersheds, potentially impacted by faulty septic systems and leaky sanitary sewer lines, over a two-year period. Human fecal pollution was measured using three different human-associated qPCR methods, and a significant positive correlation was seen between the abundance of human-associated genetic markers and septic systems following wet weather events. In contrast, a negative correlation was observed with sanitary sewer line densities, suggesting that septic systems are the predominant diffuse source of human fecal pollution in the study area. These results demonstrate the advantages of combining water sampling, climate information, land-use computer-based modeling, and molecular biology disciplines to better characterize diffuse sources of human fecal pollution in environmental waters.

  10. Project on Elite Athlete Commitment (PEAK): IV. identification of new candidate commitment sources in the sport commitment model.

    PubMed

    Scanlan, Tara K; Russell, David G; Scanlan, Larry A; Klunchoo, Tatiana J; Chow, Graig M

    2013-10-01

    Following a thorough review of the current updated Sport Commitment Model, new candidate commitment sources for possible future inclusion in the model are presented. They were derived from data obtained using the Scanlan Collaborative Interview Method. Three elite New Zealand teams participated: amateur All Black rugby players, amateur Silver Fern netball players, and professional All Black rugby players. An inductive content analysis of these players' open-ended descriptions of their sources of commitment identified four unique new candidate commitment sources: Desire to Excel, Team Tradition, Elite Team Membership, and Worthy of Team Membership. A detailed definition of each candidate source is included along with example quotes from participants. Using a mixed-methods approach, these candidate sources provide a basis for future investigations to test their viability and generalizability for possible expansion of the Sport Commitment Model.

  11. 40 CFR 63.5997 - How do I conduct tests and procedures for tire cord production affected sources?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) NATIONAL EMISSION STANDARDS FOR HAZARDOUS AIR POLLUTANTS FOR SOURCE CATEGORIES National Emissions Standards for Hazardous Air Pollutants: Rubber...) Methods to determine the mass percent of each HAP in coatings. (1) To determine the HAP content in the...

  12. Strontium Isotopes and the Reconstruction of the Chaco Regional System: Evaluating Uncertainty with Bayesian Mixing Models

    PubMed Central

    Drake, Brandon Lee; Wills, Wirt H.; Hamilton, Marian I.; Dorshow, Wetherbee

    2014-01-01

    Strontium isotope sourcing has become a common and useful method for assigning sources to archaeological artifacts. In Chaco Canyon, an Ancestral Pueblo regional center in New Mexico, previous studies using these methods have suggested that a significant portion of maize and wood originated in the Chuska Mountains region, 75 km to the west. In the present manuscript, these results were tested using both frequentist methods (to determine whether geochemical sources can truly be differentiated) and Bayesian methods (to address uncertainty in geochemical source attribution). It was found that Chaco Canyon and the Chuska Mountains region are not easily distinguishable on the basis of radiogenic strontium isotope values. The strontium profiles of many geochemical sources in the region overlap, making it difficult to definitively identify any one particular geochemical source for the canyon's prehistoric maize. Bayesian mixing models support the argument that some spruce and fir wood originated in the San Mateo Mountains, but this cannot explain all 87Sr/86Sr values in Chaco timber. Overall, the radiogenic strontium isotope data do not clearly identify a single major geochemical source for maize, ponderosa, and most spruce/fir timber. As such, the degree to which Chaco Canyon relied upon outside support for both food and construction material remains ambiguous. PMID:24854352

  13. Development and Testing of a High Level Axial Array Duct Sound Source for the NASA Flow Impedance Test Facility

    NASA Technical Reports Server (NTRS)

    Johnson, Marty E.; Fuller, Chris R.; Jones, Michael G. (Technical Monitor)

    2000-01-01

    In this report, both a frequency domain method for creating high level harmonic excitation and a time domain inverse method for creating large pulses in a duct are developed. To create controllable, high level sound, an axial array of six JBL-2485 compression drivers was used. The pressure downstream is considered as input voltages to the sources filtered by the natural dynamics of the sources and the duct. It is shown that this dynamic behavior can be compensated for by filtering the inputs such that both time delays and phase changes are taken into account. The methods developed maximize the sound output while (i) keeping within the power constraints of the sources and (ii) maintaining a suitable level of reproduction accuracy. Harmonic excitation pressure levels of over 155 dB were created experimentally over a wide frequency range (1000-4000 Hz). For pulse excitation there is a tradeoff between accuracy of reproduction and sound level achieved. However, the accurate reproduction of a pulse with a maximum pressure level of over 6500 Pa was achieved experimentally. It was also shown that the throat connecting the driver to the duct makes it difficult to inject sound just below the cut-on of each acoustic mode (pre-cut-on loading effect).
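
    The delay-and-phase compensation described above can be sketched as regularized inverse filtering. In the code below the per-driver frequency responses are synthetic pure delays rather than measured JBL-2485/duct responses, and eps is an assumed stabilization constant.

```python
# Regularized inverse filtering so that delayed sources sum coherently at
# the duct reference point: drive each source with conj(H)/(|H|^2 + eps).
import numpy as np

fs, n = 25600, 1024
f = np.fft.rfftfreq(n, d=1.0 / fs)                 # frequency axis, Hz
delays = np.array([0.0, 0.4e-3, 0.8e-3])           # placeholder acoustic delays, s
H = np.exp(-2j * np.pi * f[None, :] * delays[:, None])  # per-driver responses

target = np.ones_like(f)                           # flat target spectrum
eps = 1e-3
V = np.conj(H) / (np.abs(H)**2 + eps) * target     # per-driver drive spectra
combined = np.sum(H * V, axis=0)                   # summed response, compensated
uncompensated = np.sum(H, axis=0)                  # summed response, raw

print("compensated |sum| is flat:", np.allclose(np.abs(combined), np.abs(combined[0])))
print("uncompensated |sum| ripple:", np.ptp(np.abs(uncompensated)).round(2))
```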

  14. Quality Test of Flexible Flat Cable (FFC) With Short Open Test Using Law Ohm Approach through Embedded Fuzzy Logic Based On Open Source Arduino Data Logger

    NASA Astrophysics Data System (ADS)

    Rohmanu, Ajar; Everhard, Yan

    2017-04-01

    Technological development, especially in the field of electronics, is very fast. One development in electronics hardware is the Flexible Flat Cable (FFC), which serves as a connecting medium between the main board and other hardware parts. Production of FFC includes testing and measuring FFC quality. Currently, this testing and measurement is still done manually, with an operator observing a Light Emitting Diode (LED), which gives rise to many problems. In this study, a computational quality test for FFC is built on an open source embedded system. The method used is measurement with the Short Open Test method, using a 4-wire (Kelvin) Ohm's law approach, with fuzzy logic as the decision maker for the measurement results, based on an open source Arduino data logger. The system uses an INA219 current sensor to read voltage and current values, from which the resistance of the FFC is obtained. To validate the system, black-box testing was performed, as well as accuracy and precision testing using the standard deviation method. Tests on three sample models gave standard deviations of 1.921, 4.567, and 6.300 for the first, second, and third models, respectively; the corresponding values of the Standard Error of the Mean (SEM) were 0.304, 0.736, and 0.996. The average resistance measurement tolerances were -3.50%, 4.45%, and 5.18% for the three models with respect to the standard resistance measurement, and productivity improved to 118.33%. These results are expected to improve quality and productivity in the testing of Flexible Flat Cable (FFC).
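
    The underlying 4-wire computation and the quoted statistics are straightforward. The sketch below simulates repeated readings (on the real rig they would come from the INA219 sensor), applies Ohm's law, and reports the standard deviation, SEM, and tolerance; all numbers are illustrative.

```python
# 4-wire (Kelvin) Ohm's-law computation with the statistics quoted above:
# R = V / I per reading, then sample standard deviation and standard error
# of the mean (SEM) over repeated readings.
import numpy as np

rng = np.random.default_rng(6)
i_sense_a = 0.010                             # assumed forced sense current, A
r_nominal_ohm = 1.25                          # assumed nominal FFC resistance
v_sense = r_nominal_ohm * i_sense_a * (1.0 + 0.02 * rng.normal(size=30))

r = v_sense / i_sense_a                       # Ohm's law, per reading
sd = r.std(ddof=1)
sem = sd / np.sqrt(r.size)
tol_pct = 100.0 * (r.mean() - r_nominal_ohm) / r_nominal_ohm
print(f"R = {r.mean():.3f} ohm, sd = {sd:.4f}, SEM = {sem:.4f}, tolerance = {tol_pct:+.2f}%")
```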

  15. Apparatus and method for simulating material damage from a fusion reactor

    DOEpatents

    Smith, Dale L.; Greenwood, Lawrence R.; Loomis, Benny A.

    1989-01-01

    An apparatus and method for simulating a fusion environment on a first wall or blanket structure. A material test specimen is contained in a capsule made of a material having a low hydrogen solubility and permeability. The capsule is partially filled with a lithium solution, such that the test specimen is encapsulated by the lithium. The capsule is irradiated by a fast fission neutron source.

  16. Apparatus and method for simulating material damage from a fusion reactor

    DOEpatents

    Smith, D.L.; Greenwood, L.R.; Loomis, B.A.

    1988-05-20

    This paper discusses an apparatus and method for simulating a fusion environment on a first wall or blanket structure. A material test specimen is contained in a capsule made of a material having a low hydrogen solubility and permeability. The capsule is partially filled with a lithium solution, such that the test specimen is encapsulated by the lithium. The capsule is irradiated by a fast fission neutron source.

  17. Apparatus and method for simulating material damage from a fusion reactor

    DOEpatents

    Smith, Dale L.; Greenwood, Lawrence R.; Loomis, Benny A.

    1989-03-07

    An apparatus and method for simulating a fusion environment on a first wall or blanket structure. A material test specimen is contained in a capsule made of a material having a low hydrogen solubility and permeability. The capsule is partially filled with a lithium solution, such that the test specimen is encapsulated by the lithium. The capsule is irradiated by a fast fission neutron source.

  18. A Background Noise Reduction Technique Using Adaptive Noise Cancellation for Microphone Arrays

    NASA Technical Reports Server (NTRS)

    Spalt, Taylor B.; Fuller, Christopher R.; Brooks, Thomas F.; Humphreys, William M., Jr.

    2011-01-01

    Background noise in wind tunnel environments poses a challenge to acoustic measurements due to the possibly low or negative Signal-to-Noise Ratios (SNRs) present in the testing environment. This paper overviews the application of time domain Adaptive Noise Cancellation (ANC) to microphone array signals, with an intended application of background noise reduction in wind tunnels. An experiment was conducted to simulate background noise from a wind tunnel circuit measured by an out-of-flow microphone array in the tunnel test section. A reference microphone was used to acquire a background noise signal which interfered with the desired primary noise source signal at the array. The technique's efficacy was investigated using frequency spectra from the array microphones, array beamforming of the point source region, and subsequent deconvolution using the Deconvolution Approach for the Mapping of Acoustic Sources (DAMAS) algorithm. Comparisons were made with the conventional SNR-improvement techniques of spectral subtraction and cross-spectral matrix subtraction. The method was seen to recover the primary signal level at SNRs as low as -29 dB and to outperform the conventional methods. A second processing approach, using the center array microphone as the noise reference, was investigated for more general applicability of the ANC technique. It outperformed the conventional methods at the -29 dB SNR but yielded less accurate results when coherence over the array dropped. This approach could possibly improve conventional testing methodology but must be investigated further under more realistic testing conditions.
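
    A minimal time-domain ANC loop of the kind described above can be written with a least-mean-squares (LMS) adaptive filter. Everything below is synthetic: the coupling filter, the weak tonal "primary" source, and the step size are assumptions, not the experiment's signals.

```python
# LMS adaptive noise cancellation: a reference channel captures background
# noise, an adaptive FIR filter predicts its contribution at an array
# microphone, and the prediction is subtracted to recover the primary signal.
import numpy as np

rng = np.random.default_rng(7)
n, taps, mu = 20000, 16, 0.01
noise_ref = rng.normal(size=n)                           # reference microphone
coupling = rng.normal(size=taps) * np.exp(-np.arange(taps) / 4.0)
noise_at_mic = np.convolve(noise_ref, coupling, mode="full")[:n]
primary = 0.1 * np.sin(2 * np.pi * 0.02 * np.arange(n))  # weak tonal source
mic = primary + noise_at_mic                             # array microphone signal

w = np.zeros(taps)
out = np.zeros(n)
for k in range(taps, n):
    x = noise_ref[k - taps:k][::-1]          # reference tap vector
    e = mic[k] - w @ x                       # error = ANC output estimate
    w += mu * e * x                          # LMS weight update
    out[k] = e

print("residual noise power drop: %.1f dB" %
      (10 * np.log10(np.var(mic[-5000:]) / np.var(out[-5000:]))))
```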

  19. Asbestos Testing: Is the EPA Misleading You?

    ERIC Educational Resources Information Center

    Levins, Hoag

    1983-01-01

    Experts warn that only electron microscopes can see the smaller fibers of asbestos that are known to cause the most cancers, though the Environmental Protection Agency still endorses optical microscopes for asbestos removal verification. Asbestos testing methods are explained and sources of information are provided. (MLF)

  20. 40 CFR 60.474 - Test methods and procedures.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... a finding concerning compliance with the mass standard for the blowing still. If the Administrator finds that the facility was in compliance with the mass standard during the performance test but failed... (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Standards of Performance for Asphalt...

  1. 40 CFR 60.474 - Test methods and procedures.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... a finding concerning compliance with the mass standard for the blowing still. If the Administrator finds that the facility was in compliance with the mass standard during the performance test but failed... (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Standards of Performance for Asphalt...

  2. 40 CFR 60.474 - Test methods and procedures.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... a finding concerning compliance with the mass standard for the blowing still. If the Administrator finds that the facility was in compliance with the mass standard during the performance test but failed... (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Standards of Performance for Asphalt...

  3. Automatic insulation resistance testing apparatus

    DOEpatents

    Wyant, Francis J.; Nowlen, Steven P.; Luker, Spencer M.

    2005-06-14

    An apparatus and method for automatic measurement of insulation resistances of a multi-conductor cable. In one embodiment of the invention, the apparatus comprises a power supply source, an input measuring means, an output measuring means, a plurality of input relay controlled contacts, a plurality of output relay controlled contacts, a relay controller and a computer. In another embodiment of the invention the apparatus comprises a power supply source, an input measuring means, an output measuring means, an input switching unit, an output switching unit and a control unit/data logger. Embodiments of the apparatus of the invention may also incorporate cable fire testing means. The apparatus and methods of the present invention use either voltage or current for input and output measured variables.

  4. System For Surveillance Of Spectral Signals

    DOEpatents

    Gross, Kenneth C.; Wegerich, Stephan W.; Criss-Puszkiewicz, Cynthia; Wilks, Alan D.

    2004-10-12

    A method and system for monitoring at least one of a system, a process and a data source. A method and system have been developed for carrying out surveillance, testing and modification of an ongoing process or other source of data, such as a spectroscopic examination. A signal from the system under surveillance is collected and compared with a reference signal, a frequency domain transformation carried out for the system signal and reference signal, a frequency domain difference function established. The process is then repeated until a full range of data is accumulated over the time domain and a Sequential Probability Ratio Test ("SPRT") methodology applied to determine a three-dimensional surface plot characteristic of the operating state of the system under surveillance.

  5. System For Surveillance Of Spectral Signals

    DOEpatents

    Gross, Kenneth C.; Wegerich, Stephan; Criss-Puszkiewicz, Cynthia; Wilks, Alan D.

    2003-04-22

    A method and system for monitoring at least one of a system, a process and a data source. A method and system have been developed for carrying out surveillance, testing and modification of an ongoing process or other source of data, such as a spectroscopic examination. A signal from the system under surveillance is collected and compared with a reference signal, a frequency domain transformation carried out for the system signal and reference signal, a frequency domain difference function established. The process is then repeated until a full range of data is accumulated over the time domain and a Sequential Probability Ratio Test methodology applied to determine a three-dimensional surface plot characteristic of the operating state of the system under surveillance.

  6. System for surveillance of spectral signals

    DOEpatents

    Gross, Kenneth C.; Wegerich, Stephan W.; Criss-Puszkiewicz, Cynthia; Wilks, Alan D.

    2006-02-14

    A method and system for monitoring at least one of a system, a process and a data source. A method and system have been developed for carrying out surveillance, testing and modification of an ongoing process or other source of data, such as a spectroscopic examination. A signal from the system under surveillance is collected and compared with a reference signal, a frequency domain transformation carried out for the system signal and reference signal, a frequency domain difference function established. The process is then repeated until a full range of data is accumulated over the time domain and a Sequential Probability Ratio Test ("SPRT") methodology applied to determine a three-dimensional surface plot characteristic of the operating state of the system under surveillance.

  7. System for surveillance of spectral signals

    DOEpatents

    Gross, Kenneth C.; Wegerich, Stephan W.; Criss-Puszkiewicz, Cynthia; Wilks, Alan D.

    2001-01-01

    A method and system for monitoring at least one of a system, a process and a data source. A method and system have been developed for carrying out surveillance, testing and modification of an ongoing process or other source of data, such as a spectroscopic examination. A signal from the system under surveillance is collected and compared with a reference signal, a frequency domain transformation carried out for the system signal and reference signal, a frequency domain difference function established. The process is then repeated until a full range of data is accumulated over the time domain and a sequential probability ratio test (SPRT) methodology applied to determine a three-dimensional surface plot characteristic of the operating state of the system under surveillance.
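
    A minimal sketch of the Wald sequential probability ratio test at the core of these patents, for the textbook case of detecting a Gaussian mean shift: the thresholds follow from the chosen false- and missed-alarm probabilities, and all parameters are illustrative.

```python
# Wald SPRT for a Gaussian mean shift: accumulate the log-likelihood ratio
# and stop at the first threshold crossing. Data are drawn from the degraded
# state, so an alarm is expected.
import numpy as np

mu0, mu1, sigma = 0.0, 0.5, 1.0              # nominal vs. degraded mean
alpha, beta = 0.01, 0.01                     # false-/missed-alarm probabilities
upper = np.log((1 - beta) / alpha)
lower = np.log(beta / (1 - alpha))

rng = np.random.default_rng(8)
llr = 0.0
for k, x in enumerate(rng.normal(mu1, sigma, size=10000), start=1):
    llr += (mu1 - mu0) / sigma**2 * (x - (mu0 + mu1) / 2)  # Gaussian LLR increment
    if llr >= upper:
        print(f"alarm (accept H1) after {k} samples")
        break
    if llr <= lower:
        print(f"accept H0 (nominal) after {k} samples")
        break
```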

  8. Explanatory Item Response Modeling of Children's Change on a Dynamic Test of Analogical Reasoning

    ERIC Educational Resources Information Center

    Stevenson, Claire E.; Hickendorff, Marian; Resing, Wilma C. M.; Heiser, Willem J.; de Boeck, Paul A. L.

    2013-01-01

    Dynamic testing is an assessment method in which training is incorporated into the procedure with the aim of gauging cognitive potential. Large individual differences are present in children's ability to profit from training in analogical reasoning. The aim of this experiment was to investigate sources of these differences on a dynamic test of…

  9. A New Method for Analyzing Content Validity Data Using Multidimensional Scaling

    ERIC Educational Resources Information Center

    Li, Xueming; Sireci, Stephen G.

    2013-01-01

    Validity evidence based on test content is of essential importance in educational testing. One source for such evidence is an alignment study, which helps evaluate the congruence between tested objectives and those specified in the curriculum. However, the results of an alignment study do not always sufficiently capture the degree to which a test…

  10. A Comparison of Domain-Referenced and Classic Psychometric Test Construction Methods.

    ERIC Educational Resources Information Center

    Willoughby, Lee; And Others

    This study compared a domain-referenced approach with a traditional psychometric approach in the construction of a test. Results of the December 1975 Quarterly Profile Exam (QPE), administered to 400 examinees at a university, were the source of data. The 400-item QPE is a five-alternative multiple-choice test of information a "safe"…

  11. An Introduction to Graphical and Mathematical Methods for Detecting Heteroscedasticity in Linear Regression.

    ERIC Educational Resources Information Center

    Thompson, Russel L.

Homoscedasticity is an important assumption of linear regression. This paper explains what it is and why it is important to the researcher. Graphical and mathematical methods for testing the homoscedasticity assumption are demonstrated. Sources of heteroscedasticity and types of heteroscedasticity are discussed, and methods for correction are…

  12. Test methods for environment-assisted cracking

    NASA Astrophysics Data System (ADS)

    Turnbull, A.

    1992-03-01

The test methods for assessing environment-assisted cracking of metals in aqueous solution are described. The advantages and disadvantages are examined and the interrelationship between results from different test methods is discussed. The differences in susceptibility to cracking occasionally observed between the varied mechanical test methods often arise from the variation in environmental parameters between the different test conditions and from the lack of adequate specification, monitoring, and control of environmental variables. Time is also a significant factor when comparing results from short-term tests with long-exposure tests. In addition to these factors, the intrinsic difference in the important mechanical variables, such as strain rate, associated with the various mechanical test methods can change the apparent sensitivity of the material to stress corrosion cracking. The increasing economic pressure for more accelerated testing is in conflict with the characteristic time dependence of corrosion processes. Unreliable results may be inevitable in some cases, but improved understanding of mechanisms and the development of mechanistically based models of environment-assisted cracking which incorporate the key mechanical, material, and environmental variables can provide the framework for a more realistic interpretation of short-term data.

  13. Leak localization and quantification with a small unmanned aerial system

    NASA Astrophysics Data System (ADS)

    Golston, L.; Zondlo, M. A.; Frish, M. B.; Aubut, N. F.; Yang, S.; Talbot, R. W.

    2017-12-01

Methane emissions from oil and gas facilities are a recognized source of greenhouse gas emissions, requiring cost-effective and reliable monitoring systems to support leak detection and repair programs. We describe a set of methods for locating and quantifying natural gas leaks using a small unmanned aerial system (sUAS) equipped with a path-integrated methane sensor along with ground-based wind measurements. The algorithms are developed as part of a system for continuous well-pad-scale (100 m2 area) monitoring, supported by a series of over 200 methane release trials covering multiple release locations and flow rates. Test measurements include data obtained on a rotating boom platform as well as flight tests on a sUAS. Throughout the trials, the system reliably distinguished between cases with and without a methane release down to 6 scfh (0.032 g/s). Among several methods evaluated for horizontal localization, the location corresponding to the maximum integrated methane reading performed best, with a median error of ±1 m if two or more flights are averaged, or ±1.2 m for individual flights. Additionally, a method of rotating the data around the estimated leak location is developed, with the leak magnitude calculated as the average crosswind-integrated flux in the region near the source location. Validation of these methods will be presented, including blind test results. Sources of error, including GPS uncertainty, meteorological variables, and flight pattern coverage, will be discussed.
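
    As a rough illustration of the two steps named in the abstract (localization by maximum path-integrated reading, quantification by crosswind-integrated flux), here is a hedged sketch; the array names, units, and single-transect simplification are assumptions for illustration, not the authors' algorithm.

        import numpy as np

        def locate_leak(x, y, integrated_ch4):
            # Horizontal localization: take the flight-path point with the
            # maximum path-integrated methane reading as the source estimate.
            i = int(np.argmax(integrated_ch4))
            return x[i], y[i]

        def crosswind_flux(column_density, dy, wind_speed):
            # Quantification: crosswind-integrated flux for one downwind
            # transect. column_density holds vertically integrated methane
            # enhancements (g/m^2, assumed already path-integrated by the
            # sensor), sampled every dy metres across the wind; advection
            # at the mean wind speed (m/s) gives an emission rate in g/s.
            return wind_speed * np.sum(column_density) * dy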

  14. Empirical source noise prediction method with application to subsonic coaxial jet mixing noise

    NASA Technical Reports Server (NTRS)

    Zorumski, W. E.; Weir, D. S.

    1982-01-01

A general empirical method, developed for source noise predictions, uses tensor splines to represent the dependence of the acoustic field on frequency and direction and Taylor's series to represent the dependence on source state parameters. The method is applied to prediction of mixing noise from subsonic circular and coaxial jets. A noise database of 1/3-octave-band sound pressure levels (SPL's) from 540 tests was gathered from three countries: the United States, the United Kingdom, and France. The SPL's depend on seven variables: frequency, polar direction angle, and five source state parameters: inner and outer nozzle pressure ratios, inner and outer stream total temperatures, and nozzle area ratio. A least-squares seven-dimensional curve fit defines a table of constants which is used for the prediction method. The resulting prediction has a mean error of 0 dB and a standard deviation of 1.2 dB. The prediction method is used to search for a coaxial jet which has the greatest coaxial noise benefit as compared with an equivalent single jet. It is found that benefits of about 6 dB are possible.
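
    The fitting idea, a linear least-squares solve for a table of constants over basis functions of frequency, angle, and source state, can be sketched in a few lines; the basis choice and synthetic data below are placeholders, not the NASA spline formulation.

        import numpy as np

        # Synthetic "measurements": SPL as a function of frequency, polar
        # angle, and one source state parameter, plus noise.
        rng = np.random.default_rng(1)
        freq = rng.uniform(0.1, 10.0, 500)       # band-centre frequency
        angle = rng.uniform(30.0, 150.0, 500)    # polar direction angle (deg)
        v = rng.uniform(0.3, 0.9, 500)           # a source state parameter
        spl = (140 - 10 * np.log10(freq) - 0.05 * (angle - 90) ** 2 / 90
               + 30 * v + rng.normal(0, 1.2, 500))

        # Design matrix: low-order basis in each variable plus Taylor terms;
        # the least-squares coefficients play the role of the constants table.
        A = np.column_stack([np.ones_like(freq), np.log10(freq),
                             angle, angle ** 2, v, v ** 2])
        coeffs, *_ = np.linalg.lstsq(A, spl, rcond=None)
        pred = A @ coeffs
        print("std dev of fit residual (dB):", np.round(np.std(spl - pred), 2))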

  15. Hexavalent chromium emissions from aerospace operations: A case study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chaurushia, A.; Bajza, C.

    1994-12-31

Northrop Aircraft Division (NAD) is subject to several air toxics regulations, such as EPA SARA Title III, California Assembly Bill 2588 (AB2588), and Proposition 65, and is a voluntary participant in air toxics emissions reduction programs such as the EPA 33/50 and MERIT programs. To quantify emissions, NAD initially followed regulatory guidelines, which recommend that emission inventories of air toxics be based on engineering assumptions and conservative emission factors in the absence of specific source test data. NAD was concerned that Chromium VI emissions from NAD's spray coating and chemical tank line operations were not representative due to these techniques. More recently, NAD has relied upon information from its ongoing source testing program to determine emission rates of Chromium VI. Based on these source test results, NAD revised emission calculations for use in Chromium VI inventories, impact assessments and control strategies. NAD has been successful in demonstrating a significant difference between emissions calculated utilizing the source test results and emissions based on the traditional mass balance using agency-suggested methods.

  16. A Statistical Review of Alternative Zinc and Copper Extraction from Mineral Fertilizers and Industrial By-Products.

    PubMed

    Cenciani de Souza, Camila Prado; Aparecida de Abreu, Cleide; Coscione, Aline Renée; Alberto de Andrade, Cristiano; Teixeira, Luiz Antonio Junqueira; Consolini, Flavia

    2018-01-01

Rapid, accurate, and low-cost alternative analytical methods for micronutrient quantification in fertilizers are fundamental in quality control. The purpose of this study was to evaluate whether zinc (Zn) and copper (Cu) content in mineral fertilizers and industrial by-products determined by the alternative methods USEPA 3051a, 10% HCl, and 10% H2SO4 is statistically equivalent to the standard method, consisting of hot-plate digestion using concentrated HCl. The Zn and Cu sources commercially marketed in Brazil consisted of oxide, carbonate, and sulfate fertilizers and of by-products consisting of galvanizing ash, galvanizing sludge, brass ash, and brass or scrap slag. The contents of the sources ranged from 15 to 82% for Zn and from 10 to 45% for Cu, as determined by the concentrated HCl method (Table 1). A protocol based on the following criteria was used for the statistical assessment of the methods: the F-test modified by Graybill, a t-test for the mean error, and linear correlation coefficient analysis. By these criteria, 10% HCl extraction was equivalent to the standard method for Zn, and the USEPA 3051a and 10% HCl methods were equivalent to it for Cu. Therefore, these methods can be considered viable alternatives to the standard method for determining Cu and Zn in mineral fertilizers and industrial by-products, pending future research for their complete validation.
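
    The statistical protocol named in the abstract (Graybill-modified F-test, t-test on the mean error, linear correlation) can be sketched as follows; the joint test of intercept 0 and slope 1 shown here is one common formulation of a Graybill-type test and is an assumption of this sketch, not necessarily the authors' exact computation.

        import numpy as np
        from scipy import stats

        def equivalence_tests(standard, alternative):
            # Regress alternative-method results on the standard method and
            # jointly test intercept = 0 and slope = 1 (Graybill-type F-test),
            # plus a paired t-test on the mean error and the correlation.
            x = np.asarray(standard, float)
            y = np.asarray(alternative, float)
            n = len(x)
            b1, b0 = np.polyfit(x, y, 1)                  # slope, intercept
            resid = y - (b0 + b1 * x)
            mse = np.sum(resid ** 2) / (n - 2)
            theta = np.array([b0, b1 - 1.0])
            xtx = np.array([[n, x.sum()], [x.sum(), (x ** 2).sum()]])
            F = theta @ xtx @ theta / (2.0 * mse)
            p_f = stats.f.sf(F, 2, n - 2)                 # H0: b0 = 0 and b1 = 1
            t, p_t = stats.ttest_1samp(y - x, 0.0)        # H0: mean error = 0
            r = np.corrcoef(x, y)[0, 1]
            return {"F": F, "p_F": p_f, "t": t, "p_t": p_t, "r": r}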

  17. Addendum to the Final Audit Report on Procurement of Spare Parts and Supplies

    DTIC Science & Technology

    1993-06-04

    Contraves USA, Simulation and Systems Integration, Tampa, FL Contract No.: N00104-89-G-A050-4009 Awarded: April 20, 1990 Procurement Method...awarded sole source to Contraves . The amplifier is a source-controlled item, and Contraves is the only approved source. There are no commercial...The total contract value was $69,552. Contraves assembled and tested the amplifiers. For the subject procurement, Contraves paid $* per unit for the

  18. LED-based endoscopic light source for spectral imaging

    NASA Astrophysics Data System (ADS)

    Browning, Craig M.; Mayes, Samuel; Favreau, Peter; Rich, Thomas C.; Leavesley, Silas J.

    2016-03-01

Colorectal cancer is the third-leading cancer in death rates in the United States [1]. The current screening for colorectal cancer is an endoscopic procedure using white light endoscopy (WLE). Multiple new methods are being tested to replace WLE, for example narrow-band imaging and autofluorescence imaging [2]; however, these methods do not meet the need for higher specificity or sensitivity. The goal of this project is to modify the presently used endoscope light source to house 16 narrow-wavelength LEDs for real-time spectral imaging with increased sensitivity and specificity. The approach was to take an Olympus CLK-4 light source and replace the lamp and electronics with 16 LEDs and new circuitry, allowing control of the power and intensity of each LED. This required a larger enclosure to house a bracket system for the solid light guide (lightpipe), three new circuit boards, a power source, and National Instruments hardware/software for computer control. The result was a successfully designed retrofit with all the new features: LED testing demonstrated the ability to control each wavelength's intensity, and the intensity measured over the voltage range provides the information needed to couple the camera for imaging. Overall, the modifications to the light source added the controllable LEDs, bringing the research one step closer to the main goal of spectral imaging for early detection of colorectal cancer. Future work will connect the camera and test the imaging process.

  19. The history and development of FETAX (ASTM standard guide, E-1439 on conducting the frog embryo teratogenesis Assay-Xenopus)

    USGS Publications Warehouse

    Dumont, J.N.; Bantle, J.A.; Linder, G.; ,

    2003-01-01

The energy crisis of the 1970's and 1980's prompted the search for alternative sources of fuel. With development of alternate sources of energy, concerns for biological resources potentially adversely impacted by these alternative technologies also heightened. For example, few biological tests were available at the time to study toxic effects of effluents on surface waters likely to serve as receiving streams for energy-production facilities; hence, we began to use Xenopus laevis embryos as test organisms to examine potential toxic effects associated with these effluents upon entering aquatic systems. As studies focused on potential adverse effects on aquatic systems continued, a test procedure was developed that led to the initial standardization of FETAX. Other than a limited number of aquatic toxicity tests that used fathead minnows and cold-water fishes such as rainbow trout, X. laevis represented the only other aquatic vertebrate test system readily available to evaluate complex effluents. With numerous laboratories collaborating, the test with X. laevis was refined, improved, and developed as ASTM E-1439, Standard Guide for Conducting the Frog Embryo Teratogenesis Assay-Xenopus (FETAX). Collaborative work in the 1990s yielded procedural enhancements, for example, development of standard test solutions and exposure methods to handle volatile organics and hydrophobic compounds. As part of the ASTM process, a collaborative interlaboratory study was performed to determine the repeatability and reliability of FETAX. Parallel to these efforts, methods were also developed to test sediments and soils, and in situ test methods were developed to address "lab-to-field extrapolation errors" that could influence the method's use in ecological risk assessments. Additionally, a metabolic activation system composed of rat liver microsomes was developed which made FETAX more relevant to mammalian studies.

  20. Specific Yields Estimated from Gravity Change during Pumping Test

    NASA Astrophysics Data System (ADS)

    Chen, K. H.; Hwang, C.; Chang, L. C.

    2017-12-01

Specific yield (Sy) is the most important parameter for describing available groundwater capacity in an unconfined aquifer. When estimating Sy by a field pumping test, aquifer heterogeneity and well performance cause large uncertainty. In this study, we use a gravity-based method to estimate Sy. During a pumping test, a known mass of groundwater is removed; if the drawdown cone is large and close enough to a high-precision gravimeter, the resulting gravity change can be detected. The gravity-based method uses gravity observations that are independent of traditional flow computations: only the drawdown cone needs to be modeled, using observed head and hydrogeology data. The gravity method can be used in most groundwater field tests, whether locally pumping/injection tests initiated by man-made activity or annual variations due to natural sources. We apply our gravity method at several sites in Taiwan situated over different unconfined aquifers, where pumping tests for Sy determination were also carried out. We will discuss why the gravity method produces results that differ from the traditional pumping test, as well as field designs and limitations of the gravity method.
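
    To first order, the physics behind the gravity-based estimate reduces to a Bouguer-slab relation between the observed gravity change and the dewatered volume; the LaTeX fragment below states that relation under the assumed simplification of a flat, laterally extensive drawdown cone beneath the gravimeter.

        % Gravity change from the water removed in the drawdown cone,
        % approximated as an infinite slab of thickness s (the drawdown)
        % with water fraction S_y (G: gravitational constant, \rho_w:
        % density of water):
        \Delta g = 2\pi G \rho_w S_y \, s
        \quad\Longrightarrow\quad
        S_y = \frac{\Delta g}{2\pi G \rho_w \, s}
            \approx \frac{\Delta g\,[\mu\mathrm{Gal}]}{41.9\; s\,[\mathrm{m}]}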

  1. Method and system using power modulation and velocity modulation producing sputtered thin films with sub-angstrom thickness uniformity or custom thickness gradients

    DOEpatents

    Montcalm, Claude [Livermore, CA; Folta, James Allen [Livermore, CA; Walton, Christopher Charles [Berkeley, CA

    2003-12-23

    A method and system for determining a source flux modulation recipe for achieving a selected thickness profile of a film to be deposited (e.g., with highly uniform or highly accurate custom graded thickness) over a flat or curved substrate (such as concave or convex optics) by exposing the substrate to a vapor deposition source operated with time-varying flux distribution as a function of time. Preferably, the source is operated with time-varying power applied thereto during each sweep of the substrate to achieve the time-varying flux distribution as a function of time. Preferably, the method includes the steps of measuring the source flux distribution (using a test piece held stationary while exposed to the source with the source operated at each of a number of different applied power levels), calculating a set of predicted film thickness profiles, each film thickness profile assuming the measured flux distribution and a different one of a set of source flux modulation recipes, and determining from the predicted film thickness profiles a source flux modulation recipe which is adequate to achieve a predetermined thickness profile. Aspects of the invention include a computer-implemented method employing a graphical user interface to facilitate convenient selection of an optimal or nearly optimal source flux modulation recipe to achieve a desired thickness profile on a substrate. The method enables precise modulation of the deposition flux to which a substrate is exposed to provide a desired coating thickness distribution.

  2. Fatigue crack localization with near-field acoustic emission signals

    NASA Astrophysics Data System (ADS)

    Zhou, Changjiang; Zhang, Yunfeng

    2013-04-01

This paper presents an acoustic emission (AE) source localization technique using near-field AE signals induced by crack growth and propagation. The proposed technique is based on the phase difference between the AE signals measured by two identical AE sensing elements spaced apart at a pre-specified distance. This phase difference results in cancellation of certain frequency contents of the signals, which can be related to the AE source direction. Experimental data from simulated AE sources such as pencil breaks were used along with analytical results from moment tensor analysis. The theoretical predictions, numerical simulations, and experimental test results are in good agreement. Real data from field monitoring of an existing fatigue crack on a bridge were also used to test the system. Results show that the proposed method is fairly effective in determining the AE source direction in the thick plates commonly encountered in civil engineering structures.
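
    A toy version of the two-sensor geometry helps fix ideas: the inter-sensor delay (equivalently, the slope of the cross-spectrum phase that produces the cancelled frequencies) maps to an arrival direction. The function names, plane-wave assumption, and single wave speed are illustrative assumptions of this sketch, not the authors' processing chain.

        import numpy as np

        def delay_from_xcorr(x1, x2, fs):
            # Estimate the inter-sensor delay (s) from the peak of the
            # cross-correlation of the two AE waveforms sampled at fs (Hz).
            xc = np.correlate(x1 - x1.mean(), x2 - x2.mean(), mode="full")
            lag = np.argmax(xc) - (len(x1) - 1)
            return lag / fs

        def source_direction(tau, d, c):
            # Plane-wave arrival at two sensors a distance d (m) apart in a
            # plate with wave speed c (m/s): tau = d * cos(theta) / c, so the
            # direction angle (deg) follows from the measured delay tau (s).
            return np.degrees(np.arccos(np.clip(tau * c / d, -1.0, 1.0)))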

  3. Waveform inversion of volcano-seismic signals for an extended source

    USGS Publications Warehouse

    Nakano, M.; Kumagai, H.; Chouet, B.; Dawson, P.

    2007-01-01

    We propose a method to investigate the dimensions and oscillation characteristics of the source of volcano-seismic signals based on waveform inversion for an extended source. An extended source is realized by a set of point sources distributed on a grid surrounding the centroid of the source in accordance with the source geometry and orientation. The source-time functions for all point sources are estimated simultaneously by waveform inversion carried out in the frequency domain. We apply a smoothing constraint to suppress short-scale noisy fluctuations of source-time functions between adjacent sources. The strength of the smoothing constraint we select is that which minimizes the Akaike Bayesian Information Criterion (ABIC). We perform a series of numerical tests to investigate the capability of our method to recover the dimensions of the source and reconstruct its oscillation characteristics. First, we use synthesized waveforms radiated by a kinematic source model that mimics the radiation from an oscillating crack. Our results demonstrate almost complete recovery of the input source dimensions and source-time function of each point source, but also point to a weaker resolution of the higher modes of crack oscillation. Second, we use synthetic waveforms generated by the acoustic resonance of a fluid-filled crack, and consider two sets of waveforms dominated by the modes with wavelengths 2L/3 and 2W/3, or L and 2L/5, where W and L are the crack width and length, respectively. Results from these tests indicate that the oscillating signature of the 2L/3 and 2W/3 modes are successfully reconstructed. The oscillating signature of the L mode is also well recovered, in contrast to results obtained for a point source for which the moment tensor description is inadequate. However, the oscillating signature of the 2L/5 mode is poorly recovered owing to weaker resolution of short-scale crack wall motions. The triggering excitations of the oscillating cracks are successfully reconstructed. Copyright 2007 by the American Geophysical Union.

  4. Multicompare tests of the performance of different metaheuristics in EEG dipole source localization.

    PubMed

    Escalona-Vargas, Diana Irazú; Lopez-Arevalo, Ivan; Gutiérrez, David

    2014-01-01

We study the use of nonparametric multicompare statistical tests on the performance of simulated annealing (SA), genetic algorithm (GA), particle swarm optimization (PSO), and differential evolution (DE) when used for electroencephalographic (EEG) source localization. Such a task can be posed as an optimization problem for which the referred metaheuristic methods are well suited. Hence, we evaluate localization performance in terms of the metaheuristics' operational parameters and for a fixed number of evaluations of the objective function. In this way, we are able to link the efficiency of the metaheuristics with a common measure of computational cost. Our results did not show significant differences in the metaheuristics' performance for the case of single-source localization. In the case of localizing two correlated sources, we found that PSO (ring and tree topologies) and DE performed the worst, and thus they should not be considered in large-scale EEG source localization problems. Overall, the multicompare tests allowed us to demonstrate the small effect that the selection of a particular metaheuristic and variations in its operational parameters have on this optimization problem.
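
    To make "nonparametric multicompare test" concrete, here is a small, hedged example using a Friedman test over repeated trials, with mean ranks as a post hoc summary; the random data and the choice of the Friedman variant are assumptions for illustration, not the paper's exact procedure.

        import numpy as np
        from scipy import stats

        # Localization errors of four metaheuristics over the same set of
        # simulated trials (rows: trials; columns: SA, GA, PSO, DE).
        # Values are illustrative placeholders, not data from the paper.
        errors = np.random.default_rng(0).gamma(2.0, 1.0, size=(20, 4))

        # Friedman test: a nonparametric multicompare test for repeated
        # measures, asking whether any method's error distribution differs.
        chi2, p = stats.friedmanchisquare(*errors.T)
        print(f"Friedman chi2 = {chi2:.2f}, p = {p:.3f}")

        # Mean rank per method (lower rank = smaller error) as a simple
        # post hoc summary when the omnibus test is significant.
        mean_ranks = stats.rankdata(errors, axis=1).mean(axis=0)
        print("mean ranks (SA, GA, PSO, DE):", np.round(mean_ranks, 2))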

  5. System and method for laser assisted sample transfer to solution for chemical analysis

    DOEpatents

    Van Berkel, Gary J; Kertesz, Vilmos

    2014-01-28

A system and method for laser desorption of an analyte from a specimen and capturing of the analyte in a suspended solvent to form a testing solution are described. The method can include providing a specimen supported by a desorption region of a specimen stage and desorbing an analyte from a target site of the specimen with a laser beam centered at a radiation wavelength (λ). The desorption region is transparent to the radiation wavelength (λ) and the sampling probe and a laser source emitting the laser beam are on opposite sides of a primary surface of the specimen stage. The system can also be arranged where the laser source and the sampling probe are on the same side of a primary surface of the specimen stage. The testing solution can then be analyzed using an analytical instrument or undergo further processing.

  6. Leak Detection by Acoustic Emission Monitoring. Phase 1. Feasibility Study

    DTIC Science & Technology

    1994-05-26

various signal processing and noise discrimination techniques during the Data Processing task. C. TEST DESCRIPTION 1. Laboratory Tests Following normal...success in applying these methods to discriminating between the AE bursts generated by two close AE sources in a section of an aircraft structure

  7. Summary of Comments on Test Methods Amendments Proposed in the Federal Register on August 27, 1997

    EPA Pesticide Factsheets

    (EPA) proposed amendments to 40 CFR Parts 60, 61, and 63 to reflect miscellaneous editorial changes and technical corrections throughout the parts in sections pertaining to source testing or monitoring of emissions and operations and added Performance Spec

  8. 40 CFR 60.52Da - Recordkeeping requirements.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Standards of Performance for Electric Utility... opacity field data sheets; (2) For each performance test conducted using Method 22 of appendix A-4 of this... performance test; (iii) Copies of all visible emission observer opacity field data sheets; and (iv...

  9. A weight-of-evidence approach to assess chemicals: case study on the assessment of persistence of 4,6-substituted phenolic benzotriazoles in the environment.

    PubMed

    Brandt, Marc; Becker, Eva; Jöhncke, Ulrich; Sättler, Daniel; Schulte, Christoph

    2016-01-01

One important purpose of the European REACH Regulation (EC No. 1907/2006) is to promote the use of alternative methods for assessment of hazards of substances in order to avoid animal testing. Experience with environmental hazard assessment under REACH shows that efficient alternative methods are needed in order to assess chemicals when standard test data are missing. One such assessment method is the weight-of-evidence (WoE) approach. In this study, the WoE approach was used to assess the persistence of certain phenolic benzotriazoles, a group that also includes substances of very high concern (SVHC). For phenolic benzotriazoles, assessment of environmental persistence is challenging because standard information, i.e., simulation tests on biodegradation, is not available. Thus, the WoE approach was used: information from many sources was considered, and the individual uncertainties of each source were analysed separately. In a second step, all information was aggregated to give an overall picture of persistence and thereby assess the degradability of the phenolic benzotriazoles under consideration, even though the reliability of the individual sources was incomplete. Overall, the evidence that phenolic benzotriazoles are very persistent in the environment is unambiguous. This was demonstrated by a WoE approach that meets the prerequisites of REACH by combining several limited information sources; the combination enabled a clear overall assessment which can be reliably used for SVHC identification. Finally, it is recommended to include WoE approaches as an important tool in future environmental risk assessments.

  10. A SAS macro for testing differences among three or more independent groups using Kruskal-Wallis and Nemenyi tests.

    PubMed

    Liu, Yuewei; Chen, Weihong

    2012-02-01

As a nonparametric method, the Kruskal-Wallis test is widely used to compare three or more independent groups when an ordinal or interval level of data is available, especially when the assumptions of analysis of variance (ANOVA) are not met. If the Kruskal-Wallis statistic is statistically significant, the Nemenyi test is an appropriate method for further pairwise multiple comparisons to locate the source of significance. Unfortunately, most popular statistical packages do not integrate the Nemenyi test, which is not easy to calculate by hand. We describe the theory and applications of the Kruskal-Wallis and Nemenyi tests and present a flexible SAS macro to implement the two tests. The SAS macro is demonstrated with two examples from our cohort study in occupational epidemiology. It provides a useful tool for SAS users to test differences among three or more independent groups using a nonparametric method.
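
    Since the macro itself is SAS, a Python sketch of the same two-step logic may help readers without SAS: an omnibus Kruskal-Wallis test followed by Nemenyi-type pairwise comparisons of mean ranks. The chi-square approximation used for the pairwise statistic is one common variant and an assumption of this sketch, not the authors' code.

        import itertools
        import numpy as np
        from scipy import stats

        def kruskal_nemenyi(*groups, alpha=0.05):
            # Omnibus test across all groups.
            H, p = stats.kruskal(*groups)
            print(f"Kruskal-Wallis H = {H:.2f}, p = {p:.4f}")
            if p >= alpha:
                return
            # Pool all observations, rank them, and compute each group's
            # mean rank within the pooled ranking.
            k = len(groups)
            sizes = [len(g) for g in groups]
            ranks = stats.rankdata(np.concatenate(groups))
            bounds = np.cumsum([0] + sizes)
            mean_rank = [ranks[bounds[i]:bounds[i + 1]].mean() for i in range(k)]
            # Nemenyi-type pairwise comparisons via a chi-square approximation.
            N = sum(sizes)
            for i, j in itertools.combinations(range(k), 2):
                var = N * (N + 1) / 12.0 * (1.0 / sizes[i] + 1.0 / sizes[j])
                chi2 = (mean_rank[i] - mean_rank[j]) ** 2 / var
                print(f"groups {i} vs {j}: p = {stats.chi2.sf(chi2, k - 1):.4f}")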

  11. Locating hazardous gas leaks in the atmosphere via modified genetic, MCMC and particle swarm optimization algorithms

    NASA Astrophysics Data System (ADS)

    Wang, Ji; Zhang, Ru; Yan, Yuting; Dong, Xiaoqiang; Li, Jun Ming

    2017-05-01

Hazardous gas leaks in the atmosphere can cause significant economic losses in addition to environmental hazards, such as fires and explosions. A three-stage hazardous gas leak source localization method was developed that uses movable and stationary gas concentration sensors. The method calculates a preliminary source inversion with a modified genetic algorithm (MGA) that allows crossover with individuals eliminated from the population, following selection of the best candidate. The method then determines a search zone using Markov Chain Monte Carlo (MCMC) sampling with a partial evaluation strategy. The leak source is finally localized using a modified guaranteed-convergence particle swarm optimization algorithm that retains several poorly performing individuals, following selection of the most successful individual with dynamic updates. The first two stages are based on data collected by the stationary sensors, and the last stage is based on data from mobile robots carrying sensors. Adaptability to measurement error and the effect of the leak source location were analyzed. Test results showed that this three-stage process can localize a leak source to within 1.0 m for different leak source locations, with a measurement error standard deviation smaller than 2.0.

  12. Implementation and Validation of an Impedance Eduction Technique

    NASA Technical Reports Server (NTRS)

    Watson, Willie R.; Jones, Michael G.; Gerhold, Carl H.

    2011-01-01

    Implementation of a pressure gradient method of impedance eduction in two NASA Langley flow ducts is described. The Grazing Flow Impedance Tube only supports plane-wave sources, while the Curved Duct Test Rig supports sources that contain higher-order modes. Multiple exercises are used to validate this new impedance eduction method. First, synthesized data for a hard wall insert and a conventional liner mounted in the Grazing Flow Impedance Tube are used as input to the two impedance eduction methods, the pressure gradient method and a previously validated wall pressure method. Comparisons between the two results are excellent. Next, data measured in the Grazing Flow Impedance Tube are used as input to both methods. Results from the two methods compare quite favorably for sufficiently low Mach numbers but this comparison degrades at Mach 0.5, especially when the hard wall insert is used. Finally, data measured with a hard wall insert mounted in the Curved Duct Test Rig are used as input to the pressure gradient method. Significant deviation from the known solution is observed, which is believed to be largely due to 3-D effects in this flow duct. Potential solutions to this issue are currently being explored.

  13. Network sensitivity solutions for regional moment-tensor inversions

    DOE PAGES

    Ford, Sean R.; Dreger, Douglas S.; Walter, William R.

    2010-09-20

Well-resolved moment-tensor solutions reveal information about the sources of seismic waves. In this paper, we introduce a new way of assessing confidence in the regional full moment-tensor inversion via the network sensitivity solution (NSS). The NSS takes into account the unique station distribution, frequency band, and signal-to-noise ratio of a given event scenario. The NSS compares both a hypothetical pure source (for example, an explosion or an earthquake) and the actual data with several thousand sets of synthetic data from a uniform distribution of all possible sources. The comparison with a hypothetical pure source provides the theoretically best-constrained source-type distribution for a given set of stations; with it, one can determine whether further analysis with the data is warranted. The NSS that employs the actual data gives a direct comparison of all other source types with the best-fit source. In this way, one can choose a threshold level of fit within which the solution is comfortably constrained. The method is tested on the well-recorded nuclear test, JUNCTION, at the Nevada Test Site. Sources that fit comparably well to a hypothetical pure explosion recorded with no noise at the JUNCTION data stations have a large volumetric component and are not described well by a double-couple (DC) source. The NSS using the real data from JUNCTION is even more tightly constrained to an explosion because the data contain some energy that precludes fitting with any type of deviatoric source. We also calculate the NSS for the October 2006 North Korea test and a nearby earthquake, where the station coverage is poor and the event magnitude is small. As a result, the earthquake solution is very well fit by a DC source, and the best-fit solution to the nuclear test (Mw 4.1) is dominantly explosion.

  14. Development of EPA OTM 10 for Landfill Applications

    EPA Science Inventory

    In 2006, the U.S. Environmental Protection Agency posted a new test method on its website called OTM 10 which describes direct measurement of pollutant mass emission flux from area sources using ground-based optical remote sensing. The method has validated application to relative...

  15. MEASUREMENT OF VOCS DESORBED FROM BUILDING MATERIALS--A HIGH TEMPERATURE DYNAMIC CHAMBER METHOD

    EPA Science Inventory

    Mass balance is a commonly used approach for characterizing the source and sink behavior of building materials. Because the traditional sink test methods evaluate the adsorption and desorption of volatile organic compounds (VOC) at ambient temperatures, the desorption process is...

  16. Development and Testing of Novel Canine Fecal Source-Identification Assays

    EPA Science Inventory

    The extent to which dogs contribute to aquatic fecal contamination is unknown despite the potential for zoonotic transfer of harmful human pathogens. Recent method comparison studies have shown that available Bacteroidales 16S rRNA-based methods for the detection of canine fecal ...

  17. An engineering study of hybrid adaptation of wind tunnel walls for three-dimensional testing

    NASA Technical Reports Server (NTRS)

    Brown, Clinton; Kalumuck, Kenneth; Waxman, David

    1987-01-01

Solid-wall tunnels having only upper and lower walls flexing are described. An algorithm for selecting the wall contours for both 2- and 3-dimensional wall flexure is presented, and numerical experiments are used to validate its applicability to the general test case of 3-dimensional lifting aircraft models in rectangular-cross-section wind tunnels. The method requires an initial approximate representation of the model flow field at a given lift with walls absent. The numerical methods utilized are derived by use of Green's source solutions obtained using the method of images; first-order linearized flow theory is employed with Prandtl-Glauert compressibility transformations. Equations are derived for the flexed shape of a simple constant-thickness plate wall under the influence of a finite number of jacks in an axial row along the plate centerline. The Green's source methods are developed to provide estimations of residual flow distortion (interferences) with measured wall pressures and wall flow inclinations as inputs.

  18. Suggestibility and state anxiety: how the two concepts relate in a source identification paradigm.

    PubMed

    Ridley, Anne M; Clifford, Brian R

    2006-01-01

    Source identification tests provide a stringent method for testing the suggestibility of memory because they reduce response bias and experimental demand characteristics. Using the techniques and materials of Maria Zaragoza and her colleagues, we investigated how state anxiety affects the ability of undergraduates to identify correctly the source of misleading post-event information. The results showed that individuals high in state anxiety were less likely to make source misattributions of misleading information, indicating lower levels of suggestibility. This effect was strengthened when forgotten or non-recognised misleading items (for which a source identification task is not possible) were excluded from the analysis. Confidence in the correct attribution of misleading post-event information to its source was significantly less than confidence in source misattributions. Participants who were high in state anxiety tended to be less confident than those lower in state anxiety when they correctly identified the source of both misleading post-event information and non-misled items. The implications of these findings are discussed, drawing on the literature on anxiety and cognition as well as suggestibility.

  19. The Earthquake‐Source Inversion Validation (SIV) Project

    USGS Publications Warehouse

    Mai, P. Martin; Schorlemmer, Danijel; Page, Morgan T.; Ampuero, Jean-Paul; Asano, Kimiyuki; Causse, Mathieu; Custodio, Susana; Fan, Wenyuan; Festa, Gaetano; Galis, Martin; Gallovic, Frantisek; Imperatori, Walter; Käser, Martin; Malytskyy, Dmytro; Okuwaki, Ryo; Pollitz, Fred; Passone, Luca; Razafindrakoto, Hoby N. T.; Sekiguchi, Haruko; Song, Seok Goo; Somala, Surendra N.; Thingbaijam, Kiran K. S.; Twardzik, Cedric; van Driel, Martin; Vyas, Jagdish C.; Wang, Rongjiang; Yagi, Yuji; Zielke, Olaf

    2016-01-01

    Finite‐fault earthquake source inversions infer the (time‐dependent) displacement on the rupture surface from geophysical data. The resulting earthquake source models document the complexity of the rupture process. However, multiple source models for the same earthquake, obtained by different research teams, often exhibit remarkable dissimilarities. To address the uncertainties in earthquake‐source inversion methods and to understand strengths and weaknesses of the various approaches used, the Source Inversion Validation (SIV) project conducts a set of forward‐modeling exercises and inversion benchmarks. In this article, we describe the SIV strategy, the initial benchmarks, and current SIV results. Furthermore, we apply statistical tools for quantitative waveform comparison and for investigating source‐model (dis)similarities that enable us to rank the solutions, and to identify particularly promising source inversion approaches. All SIV exercises (with related data and descriptions) and statistical comparison tools are available via an online collaboration platform, and we encourage source modelers to use the SIV benchmarks for developing and testing new methods. We envision that the SIV efforts will lead to new developments for tackling the earthquake‐source imaging problem.

  20. [Research status and prospects of DNA test on difficult specimens].

    PubMed

    Dang, Hua-Wei; Mao, Jiong; Wang, Hui; Huang, Jiang-Ping; Bai, Xiao-Gang

    2012-02-01

This paper reviews the advances of DNA detection on three types of difficult biological specimens: degraded samples, trace evidence, and mixed samples. The sources of the different samples, processing methods, and points requiring attention are analyzed. New methods, such as mitochondrial test systems, modification of the original experimental conditions, and low-volume PCR amplification, and new technologies of recent years, such as whole genome amplification, laser capture micro-dissection, and mini-STR typing, are introduced.

  1. 40 CFR 61.164 - Test methods and procedures.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... effective date of this subpart for a source that has an initial startup date preceding the effective date; or (2) No later than 90 days after startup for a source that has an initial startup date after the... glass type (i) produced during the 12-month period, as follows: ER17OC00.483 Where: Ti = The theoretical...

  2. 40 CFR 61.164 - Test methods and procedures.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... effective date of this subpart for a source that has an initial startup date preceding the effective date; or (2) No later than 90 days after startup for a source that has an initial startup date after the... glass type (i) produced during the 12-month period, as follows: ER17OC00.483 Where: Ti = The theoretical...

  3. 40 CFR 61.164 - Test methods and procedures.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... effective date of this subpart for a source that has an initial startup date preceding the effective date; or (2) No later than 90 days after startup for a source that has an initial startup date after the... glass type (i) produced during the 12-month period, as follows: ER17OC00.483 Where: Ti = The theoretical...

  4. 40 CFR 61.164 - Test methods and procedures.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... effective date of this subpart for a source that has an initial startup date preceding the effective date; or (2) No later than 90 days after startup for a source that has an initial startup date after the... glass type (i) produced during the 12-month period, as follows: ER17OC00.483 Where: Ti = The theoretical...

  5. An Unsolved Electric Circuit: A Common Misconception

    ERIC Educational Resources Information Center

    Harsha, N. R. Sree; Sreedevi, A.; Prakash, Anupama

    2015-01-01

    Despite a number of theories in circuit analysis, little is known about the behaviour of ideal equal voltage sources in parallel, connected across a resistive load. We neither have any theory that can predict the voltage source that provides the load current, nor is there any method to test it experimentally. In a series of experiments performed…

  6. Modeling and observations of an elevated, moving infrasonic source: Eigenray methods.

    PubMed

    Blom, Philip; Waxler, Roger

    2017-04-01

    The acoustic ray tracing relations are extended by the inclusion of auxiliary parameters describing variations in the spatial ray coordinates and eikonal vector due to changes in the initial conditions. Computation of these parameters allows one to define the geometric spreading factor along individual ray paths and assists in identification of caustic surfaces so that phase shifts can be easily identified. A method is developed leveraging the auxiliary parameters to identify propagation paths connecting specific source-receiver geometries, termed eigenrays. The newly introduced method is found to be highly efficient in cases where propagation is non-planar due to horizontal variations in the propagation medium or the presence of cross winds. The eigenray method is utilized in analysis of infrasonic signals produced by a multi-stage sounding rocket launch with promising results for applications of tracking aeroacoustic sources in the atmosphere and specifically to analysis of motor performance during dynamic tests.

  7. Fluvial sediment fingerprinting: literature review and annotated bibliography

    USGS Publications Warehouse

    Williamson, Joyce E.; Haj, Adel E.; Stamm, John F.; Valder, Joshua F.; Prautzch, Vicki L.

    2014-01-01

The U.S. Geological Survey has evaluated and adopted various field methods for collecting real-time sediment and nutrient data. These methods have proven to be valuable representations of sediment and nutrient concentrations and loads but are not able to accurately identify specific source areas. Recently, more advanced data collection and analysis techniques that show promise in identifying specific source areas have been evaluated. Application of these methods could include studies of the sources of fluvial sediment, otherwise referred to as sediment "fingerprinting." Identifying sediment sources is important, in part, because knowing the primary sediment source areas in watersheds ensures that best management practices are incorporated in areas that maximize reductions in sediment loadings. This report provides a literature review and annotated bibliography of existing methodologies applied in the field of fluvial sediment fingerprinting. The review provides a bibliography of publications where sediment fingerprinting methods have been used, though it is not intended to be exhaustive. Selected publications were categorized by methodology with some additional summary information. The information contained in the summary may help researchers select methods better suited to their particular study or study area and identify methods in need of more testing and application.

  8. Using a topographic index to distribute variable source area runoff predicted with the SCS curve-number equation

    NASA Astrophysics Data System (ADS)

    Lyon, Steve W.; Walter, M. Todd; Gérard-Marchant, Pierre; Steenhuis, Tammo S.

    2004-10-01

    Because the traditional Soil Conservation Service curve-number (SCS-CN) approach continues to be used ubiquitously in water quality models, new application methods are needed that are consistent with variable source area (VSA) hydrological processes in the landscape. We developed and tested a distributed approach for applying the traditional SCS-CN equation to watersheds where VSA hydrology is a dominant process. Predicting the location of source areas is important for watershed planning because restricting potentially polluting activities from runoff source areas is fundamental to controlling non-point-source pollution. The method presented here used the traditional SCS-CN approach to predict runoff volume and spatial extent of saturated areas and a topographic index, like that used in TOPMODEL, to distribute runoff source areas through watersheds. The resulting distributed CN-VSA method was applied to two subwatersheds of the Delaware basin in the Catskill Mountains region of New York State and one watershed in south-eastern Australia to produce runoff-probability maps. Observed saturated area locations in the watersheds agreed with the distributed CN-VSA method. Results showed good agreement with those obtained from the previously validated soil moisture routing (SMR) model. When compared with the traditional SCS-CN method, the distributed CN-VSA method predicted a similar total volume of runoff, but vastly different locations of runoff generation. Thus, the distributed CN-VSA approach provides a physically based method that is simple enough to be incorporated into water quality models, and other tools that currently use the traditional SCS-CN method, while still adhering to the principles of VSA hydrology.
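
    The two ingredients of the distributed CN-VSA method, the traditional SCS-CN runoff depth and its reallocation to the wettest cells by topographic index, can be sketched as follows. The saturated-fraction expression Af = dQ/dP and the quantile cutoff are this sketch's assumptions about the implementation, not the authors' exact code.

        import numpy as np

        def scs_cn_runoff(P_mm, CN, ia_ratio=0.2):
            # Traditional SCS curve-number runoff depth (mm) for a storm of
            # depth P_mm: S is the potential retention, Ia = ia_ratio * S the
            # initial abstraction, and Q the watershed-average runoff.
            S = 25400.0 / CN - 254.0
            Pe = P_mm - ia_ratio * S           # effective rainfall
            return Pe ** 2 / (Pe + S) if Pe > 0 else 0.0

        def runoff_source_cells(twi, P_mm, CN, ia_ratio=0.2):
            # Distribute the runoff: the saturated fraction Af = dQ/dP is
            # assigned to the cells with the highest topographic wetness
            # index (twi), which saturate first under VSA hydrology.
            S = 25400.0 / CN - 254.0
            Pe = max(P_mm - ia_ratio * S, 0.0)
            Af = 1.0 - (S / (Pe + S)) ** 2 if Pe > 0 else 0.0
            cutoff = np.quantile(twi, 1.0 - Af) if Af > 0 else np.inf
            return twi >= cutoff               # boolean runoff-source map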

  9. A Standard Platform for Testing and Comparison of MDAO Architectures

    NASA Technical Reports Server (NTRS)

    Gray, Justin S.; Moore, Kenneth T.; Hearn, Tristan A.; Naylor, Bret A.

    2012-01-01

The Multidisciplinary Design Analysis and Optimization (MDAO) community has developed a multitude of algorithms and techniques, called architectures, for performing optimizations on complex engineering systems that involve coupling between multiple discipline analyses. These architectures seek to efficiently handle optimizations with computationally expensive analyses involving multiple disciplines. We propose a new testing procedure that can provide a quantitative and qualitative means of comparison among architectures. The proposed test procedure is implemented within the open source framework, OpenMDAO, and comparative results are presented for five well-known architectures: MDF, IDF, CO, BLISS, and BLISS-2000. We also demonstrate how using open source software development methods can allow the MDAO community to submit new problems and architectures to keep the test suite relevant.

  10. Cheating in Middle School and High School

    ERIC Educational Resources Information Center

    Strom, Paris S.; Strom, Robert D.

    2007-01-01

    There is increasing concern about cheating in the secondary schools. This article describes the prevalence of dishonesty in testing, motivation for student cheating, new forms of deception using technology tools, initiatives to protect security of tests, methods students use to obtain papers without crediting the original source, tools for…

  11. Source Testing for Particulate Matter.

    ERIC Educational Resources Information Center

    DeVorkin, Howard

    Developed for presentation at the 12th Conference on Methods in Air Pollution and Industrial Hygiene Studies, University of Southern California, April, 1971, this outline covers procedures for the testing of particulate matter. These are: (1) basic requirements, (2) information required, (3) collection of samples, (4) processing of samples, (5)…

  12. NEXT GENERATION LEACHING TESTS FOR EVALUATING LEACHING OF INORGANIC CONSTITUENTS

    EPA Science Inventory

    In the U.S. as in other countries, there is increased interest in using industrial by-products as alternative or secondary materials, helping to conserve virgin or raw materials. The LEAF and associated test methods are being used to develop the source term for leaching or any i...

  13. Student Laptop Use and Scores on Standardized Tests

    ERIC Educational Resources Information Center

    Kposowa, Augustine J.; Valdez, Amanda D.

    2013-01-01

    Objectives: The primary objective of the study was to investigate the relationship between ubiquitous laptop use and academic achievement. It was hypothesized that students with ubiquitous laptops would score on average higher on standardized tests than those without such computers. Methods: Data were obtained from two sources. First, demographic…

  14. 10 CFR 50.66 - Requirements for thermal annealing of the reactor pressure vessel.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... be determined using the same basis as that used for the pre-anneal operating period. (B) The post... Annealing Report must include: a Thermal Annealing Operating Plan; a Requalification Inspection and Test... operation using appropriate test data. (iii) The methods, including heat source, instrumentation and...

  15. 10 CFR 50.66 - Requirements for thermal annealing of the reactor pressure vessel.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... be determined using the same basis as that used for the pre-anneal operating period. (B) The post... Annealing Report must include: a Thermal Annealing Operating Plan; a Requalification Inspection and Test... operation using appropriate test data. (iii) The methods, including heat source, instrumentation and...

  16. 10 CFR 50.66 - Requirements for thermal annealing of the reactor pressure vessel.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... be determined using the same basis as that used for the pre-anneal operating period. (B) The post... Annealing Report must include: a Thermal Annealing Operating Plan; a Requalification Inspection and Test... operation using appropriate test data. (iii) The methods, including heat source, instrumentation and...

  17. 10 CFR 50.66 - Requirements for thermal annealing of the reactor pressure vessel.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... be determined using the same basis as that used for the pre-anneal operating period. (B) The post... Annealing Report must include: a Thermal Annealing Operating Plan; a Requalification Inspection and Test... operation using appropriate test data. (iii) The methods, including heat source, instrumentation and...

  18. Sample Size Estimation: The Easy Way

    ERIC Educational Resources Information Center

    Weller, Susan C.

    2015-01-01

    This article presents a simple approach to making quick sample size estimates for basic hypothesis tests. Although there are many sources available for estimating sample sizes, methods are not often integrated across statistical tests, levels of measurement of variables, or effect sizes. A few parameters are required to estimate sample sizes and…
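
    As one concrete instance of the quick estimates the article describes, the standard normal-approximation formula for comparing two means takes only an effect size, a standard deviation, alpha, and power. This is a textbook formula offered as an illustration, not the article's integrated tables.

        import math
        from scipy import stats

        def n_per_group(delta, sd, alpha=0.05, power=0.80):
            # n = 2 * ((z_{1-alpha/2} + z_{power}) * sd / delta)^2 per group
            # for a two-sided, two-sample comparison of means.
            z_a = stats.norm.ppf(1.0 - alpha / 2.0)
            z_b = stats.norm.ppf(power)
            return math.ceil(2.0 * ((z_a + z_b) * sd / delta) ** 2)

        # Example: detecting a 5-point difference with SD 10 at alpha = 0.05
        # and 80% power requires about 63 subjects per group.
        print(n_per_group(delta=5.0, sd=10.0))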

  19. 40 CFR 60.474 - Test methods and procedures.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... compliance with the mass standard for the blowing still. If the Administrator finds that the facility was in compliance with the mass standard during the performance test but failed to meet the zero opacity standard... (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Standards of Performance for Asphalt...

  20. 40 CFR 60.474 - Test methods and procedures.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... compliance with the mass standard for the blowing still. If the Administrator finds that the facility was in compliance with the mass standard during the performance test but failed to meet the zero opacity standard... (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Standards of Performance for Asphalt...

  1. Statistics of software vulnerability detection in certification testing

    NASA Astrophysics Data System (ADS)

    Barabanov, A. V.; Markov, A. S.; Tsirlov, V. L.

    2018-05-01

The paper discusses practical aspects of introducing methods to detect software vulnerabilities into the day-to-day activities of an accredited testing laboratory. It presents results from trials of the vulnerability detection methods as part of the study of open source software and of software that is a test object of certification tests under information security requirements, including software for communication networks. Results of the study are given showing the distribution of identified vulnerabilities by type of attack, country of origin, programming language used in development, method of detection, etc. The experience of foreign information security certification systems related to the detection of vulnerabilities in certified software is analyzed. The main conclusion of the study is the need to implement secure software development practices in the development life cycle. Conclusions and recommendations for testing laboratories on the implementation of vulnerability analysis methods are laid out.

  2. Characterization of various two-phase materials based on thermal conductivity using modified transient plane source method

    NASA Astrophysics Data System (ADS)

    Jayachandran, S.; Prithiviraajan, R. N.; Reddy, K. S.

    2017-07-01

This paper presents the thermal conductivity of various two-phase materials measured using the modified transient plane source (MTPS) technique. The values are determined using a commercially available C-Therm TCi apparatus, which is designed for testing materials with low to high thermal conductivity in the range of 0.02 to 100 Wm-1K-1 within a temperature range of 223-473 K. The results obtained for the two-phase materials (solids, powders and liquids) have an accuracy better than 5%. The transient method is one of the simplest and least time-consuming methods for determining the thermal conductivity of materials compared with steady-state methods.

  3. Functional Performance of Pyrovalves

    NASA Technical Reports Server (NTRS)

    Bement, Laurence J.

    1996-01-01

Following several flight and ground test failures of spacecraft systems using single-shot, 'normally closed' pyrotechnically actuated valves (pyrovalves), a government/industry cooperative program was initiated to assess the functional performance of five qualified designs. The goal of the program was to improve performance-based requirements for the procurement of pyrovalves. Specific objectives included the demonstration of performance test methods, the measurement of 'blowby' (the passage of gases from the pyrotechnic energy source around the activating piston into the valve's fluid path), and the quantification of functional margins for each design. Experiments were conducted in-house at NASA on several units each of the five valve designs. The test methods used for this program measured the forces and energies required to actuate the valves, as well as the energies and the pressures (where possible) delivered by the pyrotechnic sources. Functional performance ranged widely among the designs. Blowby could not be prevented by o-ring seals; metal-to-metal seals were effective. Functional margin was determined by dividing the energy delivered by the pyrotechnic sources in excess of that required to accomplish the function by the energy required for that function. All but two designs had adequate functional margins with the pyrotechnic cartridges evaluated.

  4. A numerical study of some potential sources of error in side-by-side seismometer evaluations

    USGS Publications Warehouse

    Holcomb, L. Gary

    1990-01-01

This report presents the results of a series of computer simulations of potential errors in test data which might be obtained when conducting side-by-side comparisons of seismometers. These results can be used as guides in estimating the potential sources and magnitudes of errors one might expect when analyzing real test data. First, the derivation of a direct method for calculating the noise levels of two sensors in a side-by-side evaluation is repeated and extended slightly herein. The bulk of this derivation was presented previously (see Holcomb 1989); it is repeated here for easy reference. This method is applied to the analysis of a simulated side-by-side test in which the outputs of both sensors consist of white noise spectra with known signal-to-noise ratios (SNR's). This report extends the analysis to high SNR's to determine the limitations of the direct method for calculating noise levels at signal-to-noise levels much higher than presented previously (see Holcomb 1989). Next, the method is used to analyze a simulated side-by-side test in which the outputs of both sensors consist of band-shaped noise spectra with known signal-to-noise ratios. This is a much more realistic representation of real-world data because the earth's background spectrum is certainly not flat. Finally, the results of the analysis of simulated white and band-shaped side-by-side test data are used to assist in interpreting the analysis of the effects of simulated azimuthal misalignment in side-by-side sensor evaluations. A thorough understanding of azimuthal misalignment errors is important because of the physical impossibility of perfectly aligning two sensors in a real-world situation. The analysis herein indicates that alignment errors place lower limits on the levels of system noise which can be resolved in a side-by-side measurement. It also indicates that alignment errors explain why noise spectra derived from real data tend to follow the earth's background spectra in shape.
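
    The direct method the report builds on can be illustrated with a spectral sketch: for two co-located sensors seeing a common earth signal plus independent instrument noise, the cross-spectrum estimates the coherent power, and each sensor's noise PSD follows by subtraction. The Welch parameters and function names are assumptions of this sketch, not Holcomb's implementation.

        import numpy as np
        from scipy import signal

        def sensor_noise_psd(x1, x2, fs, nperseg=4096):
            # With a common input plus independent instrument noise, the
            # auto-spectra are P11 = Pss + N1 and P22 = Pss + N2 while the
            # cross-spectrum magnitude converges to the coherent power Pss,
            # so each sensor's noise PSD is its auto-spectrum minus |P12|.
            f, P11 = signal.welch(x1, fs, nperseg=nperseg)
            _, P22 = signal.welch(x2, fs, nperseg=nperseg)
            _, P12 = signal.csd(x1, x2, fs, nperseg=nperseg)
            common = np.abs(P12)
            return f, P11 - common, P22 - common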

  5. Anisotropic microseismic focal mechanism inversion by waveform imaging matching

    NASA Astrophysics Data System (ADS)

    Wang, L.; Chang, X.; Wang, Y.; Xue, Z.

    2016-12-01

    The focal mechanism is one of the most important parameters in source inversion, for both natural earthquakes and human-induced seismic events. It has been reported to be useful for understanding stress distribution and evaluating the fracturing effect. The conventional focal mechanism inversion method picks the first-arrival waveform of the P wave. This method assumes a double-couple (DC) source type and isotropic media, which is usually not the case for induced-seismicity focal mechanism inversion. For induced seismic events, an inappropriate source or media model in the inversion processing introduces ambiguity or strong simulation errors and seriously reduces the inversion effectiveness. First, the focal mechanism contains a significant non-DC source component. Generally, the source contains three components, DC, isotropic (ISO) and compensated linear vector dipole (CLVD), which makes focal mechanisms more complicated. Second, the anisotropy of the media affects travel times and waveforms, generating inversion bias. The common way to describe focal mechanism inversion is based on moment tensor (MT) inversion, where the MT can be decomposed into a combination of DC, ISO and CLVD components. There are two ways to achieve MT inversion. The wave-field migration method is applied to achieve moment tensor imaging. This method can construct element images of the MT in 3D space without picking first arrivals, but the retrieved MT values are influenced by the imaging resolution. The full waveform inversion is employed to retrieve the MT. In this method, the source position and MT can be reconstructed simultaneously. However, this method requires vast numerical computation, and the source position and MT also influence each other in the inversion process. In this paper, the waveform imaging matching (WIM) method is proposed, which combines source imaging with waveform inversion for seismic focal mechanism inversion. Our method uses the 3D tilted transverse isotropic (TTI) elastic wave equation to approximate wave propagation in anisotropic media. First, a source imaging procedure is employed to obtain the source position. Second, we refine a waveform inversion algorithm to retrieve the MT. We also use a microseismic data set recorded in a surface acquisition to test our method.
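
    Since the abstract leans on the DC/ISO/CLVD split of a moment tensor, here is a sketch of that decomposition under one common convention (conventions vary across the literature, and this is not necessarily the one used by the WIM method):

    ```python
    import numpy as np

    def decompose_moment_tensor(M):
        """Split a symmetric 3x3 moment tensor into ISO, DC and CLVD parts.

        ISO is the trace part; within the deviator, epsilon = -e_min/|e_max|
        (eigenvalues ranked by absolute size) gives CLVD fraction 2|epsilon|
        and DC fraction 1 - 2|epsilon|.
        """
        iso = np.trace(M) / 3.0
        dev = M - iso * np.eye(3)
        e = np.linalg.eigvalsh(dev)
        e = e[np.argsort(np.abs(e))]        # |e[0]| <= |e[1]| <= |e[2]|
        eps = -e[0] / abs(e[2]) if e[2] != 0 else 0.0
        return {"iso": iso,
                "clvd_frac": 2.0 * abs(eps),
                "dc_frac": 1.0 - 2.0 * abs(eps)}

    # Pure double-couple example: expect dc_frac ~ 1 and clvd_frac ~ 0.
    M_dc = np.array([[0.0, 1.0, 0.0],
                     [1.0, 0.0, 0.0],
                     [0.0, 0.0, 0.0]])
    print(decompose_moment_tensor(M_dc))
    ```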

  6. 40 CFR 60.704 - Test methods and procedures.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Standards of Performance for Volatile Organic Compound Emissions From Synthetic Organic Chemical Manufacturing Industry (SOCMI) Reactor Processes § 60... compound j in ppm, as measured for organics by Method 18 and measured for hydrogen and carbon monoxide by...

  7. 40 CFR 60.704 - Test methods and procedures.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Standards of Performance for Volatile Organic Compound Emissions From Synthetic Organic Chemical Manufacturing Industry (SOCMI) Reactor Processes § 60... basis of compound j in ppm, as measured for organics by Method 18 and measured for hydrogen and carbon...

  8. Time reversal imaging and cross-correlations techniques by normal mode theory

    NASA Astrophysics Data System (ADS)

    Montagner, J.; Fink, M.; Capdeville, Y.; Phung, H.; Larmat, C.

    2007-12-01

    Time-reversal methods were successfully applied in the past to acoustic waves in many fields such as medical imaging, underwater acoustics and nondestructive testing, and recently to seismic waves in seismology for earthquake imaging. The increasing power of computers and numerical methods (such as spectral element methods) enables one to simulate more and more accurately the propagation of seismic waves in heterogeneous media and to develop new applications, in particular time reversal in the three-dimensional Earth. Generalizing the scalar approach of Draeger and Fink (1999), the theoretical understanding of the time-reversal method can be addressed for the 3D elastic Earth by using normal mode theory. It is shown how to relate time-reversal methods, on one hand, with auto-correlation of seismograms for source imaging and, on the other hand, with cross-correlation between receivers for structural imaging and retrieving the Green function. The loss of information is discussed. In the case of source imaging, automatic location in time and space of earthquakes and unknown sources is obtained by the time-reversal technique. In the case of large earthquakes such as the Sumatra-Andaman earthquake of December 2004, we were able to reconstruct the spatio-temporal history of the rupture. We present here some new applications at the global scale of these techniques on synthetic tests and on real data.

  9. 3-D localization of virtual sound sources: effects of visual environment, pointing method, and training.

    PubMed

    Majdak, Piotr; Goupell, Matthew J; Laback, Bernhard

    2010-02-01

    The ability to localize sound sources in three-dimensional space was tested in humans. In Experiment 1, naive subjects listened to noises filtered with subject-specific head-related transfer functions. The tested conditions included the pointing method (head or manual pointing) and the visual environment (VE; darkness or virtual VE). The localization performance was not significantly different between the pointing methods. The virtual VE significantly improved the horizontal precision and reduced the number of front-back confusions. These results show the benefit of using a virtual VE in sound localization tasks. In Experiment 2, subjects were provided with sound localization training. Over the course of training, the performance improved for all subjects, with the largest improvements occurring during the first 400 trials. The improvements beyond the first 400 trials were smaller. After the training, there was still no significant effect of pointing method, showing that the choice of either head- or manual-pointing method plays a minor role in sound localization performance. The results of Experiment 2 reinforce the importance of perceptual training for at least 400 trials in sound localization studies.

  10. Automatic detection of multiple UXO-like targets using magnetic anomaly inversion and self-adaptive fuzzy c-means clustering

    NASA Astrophysics Data System (ADS)

    Yin, Gang; Zhang, Yingtang; Fan, Hongbo; Ren, Guoquan; Li, Zhining

    2017-12-01

    We have developed a method for automatically detecting UXO-like targets based on magnetic anomaly inversion and self-adaptive fuzzy c-means clustering. Magnetic anomaly inversion methods are used to estimate the initial locations of multiple UXO-like sources. Although these initial locations have some errors with respect to the real positions, they form dense clouds around the actual positions of the magnetic sources. We then use the self-adaptive fuzzy c-means clustering algorithm to cluster these initial locations. The estimated number of cluster centroids represents the number of targets, and the cluster centroids are regarded as the locations of the magnetic targets. The effectiveness of the method has been demonstrated using synthetic datasets. Computational results show that the proposed method can be applied to the case of several UXO-like targets randomly scattered within a confined, shallow subsurface volume. A field test was carried out to verify the validity of the proposed method, and the experimental results show that the prearranged magnets can be detected unambiguously and located precisely.
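
    A plain fuzzy c-means iteration illustrates the clustering step; the paper's self-adaptive variant additionally chooses the number of clusters, which this sketch takes as given:

    ```python
    import numpy as np

    def fuzzy_c_means(X, c, m=2.0, n_iter=100, tol=1e-5, seed=0):
        """Plain fuzzy c-means: alternate membership and centroid updates
        until the memberships stop changing."""
        rng = np.random.default_rng(seed)
        U = rng.random((X.shape[0], c))
        U /= U.sum(axis=1, keepdims=True)              # fuzzy memberships
        for _ in range(n_iter):
            Um = U ** m
            centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
            d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
            d = np.maximum(d, 1e-12)
            U_new = 1.0 / d ** (2.0 / (m - 1.0))
            U_new /= U_new.sum(axis=1, keepdims=True)
            if np.abs(U_new - U).max() < tol:
                return centers, U_new
            U = U_new
        return centers, U

    # Two synthetic "initial location" clouds; centroids stand in for targets.
    rng = np.random.default_rng(1)
    pts = np.vstack([rng.normal([0.0, 0.0], 0.3, (50, 2)),
                     rng.normal([5.0, 5.0], 0.3, (50, 2))])
    centers, U = fuzzy_c_means(pts, c=2)
    ```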

  11. Ultra-performance liquid chromatography tandem mass spectrometry for simultaneous determination of natural steroid hormones in sea lamprey (Petromyzon marinus) plasma and tissues.

    PubMed

    Wang, Huiyong; Bussy, Ugo; Chung-Davidson, Yu-Wen; Li, Weiming

    2016-01-15

    This study aims to provide a rapid, sensitive and precise UPLC-MS/MS method for target steroid quantitation in biological matrices. We developed and validated a UPLC-MS/MS method to simultaneously determine 16 steroids in plasma and tissue samples. The ionization sources electrospray ionization (ESI) and atmospheric pressure chemical ionization (APCI) were compared by testing their mass spectrometric performance under the same chromatographic conditions, and the ESI source was found to be up to five times more sensitive than APCI. Different sample preparation techniques were investigated for an optimal extraction of steroids from the biological matrices. The developed method exhibited excellent linearity for all analytes, with regression coefficients higher than 0.99 over broad concentration ranges. The limit of detection (LOD) ranged from 0.003 to 0.1 ng/mL. The method was validated according to FDA guidance and applied to determine steroids in sea lamprey plasma and tissues (fat and testes). Copyright © 2015. Published by Elsevier B.V.

  12. Feature Vector Construction Method for IRIS Recognition

    NASA Astrophysics Data System (ADS)

    Odinokikh, G.; Fartukov, A.; Korobkin, M.; Yoo, J.

    2017-05-01

    One of the basic stages of the iris recognition pipeline is the iris feature vector construction procedure. The procedure extracts the iris texture information relevant to subsequent comparison. Thorough investigation of feature vectors obtained from irises showed that not all the vector elements are equally relevant. There are two characteristics that determine the utility of a vector element: fragility and discriminability. Conventional iris feature extraction methods treat fragility as feature vector instability without regard to the nature of that instability. This work separates the sources of instability into natural and encoding-induced, which makes it possible to investigate each source of instability independently. Based on this separation, a novel approach to iris feature vector construction is proposed. The approach consists of two steps: iris feature extraction using Gabor filtering with optimal parameters, and quantization with separately pre-optimized fragility thresholds. The proposed method has been tested on two different datasets of iris images captured under changing environmental conditions. The testing results show that the proposed method surpasses all the methods considered as prior art in recognition accuracy on both datasets.
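
    A toy illustration of the two steps named here, Gabor filtering followed by quantization with a fragility threshold; the filter parameters and the quantile-based threshold are placeholders, not the optimized values from the paper:

    ```python
    import numpy as np
    from scipy.signal import convolve2d

    def gabor_kernel(ksize=17, sigma=3.0, theta=0.0, lam=8.0):
        """Real part of a 2-D Gabor filter; parameter values are placeholders."""
        half = ksize // 2
        y, x = np.mgrid[-half:half + 1, -half:half + 1]
        xr = x * np.cos(theta) + y * np.sin(theta)
        yr = -x * np.sin(theta) + y * np.cos(theta)
        envelope = np.exp(-(xr**2 + yr**2) / (2.0 * sigma**2))
        return envelope * np.cos(2.0 * np.pi * xr / lam)

    def iris_code(strip, fragile_quantile=0.2):
        """Binarize the Gabor response by sign; flag low-magnitude responses
        as fragile bits, mimicking the fragility-threshold idea (fragile
        bits can then be masked during matching)."""
        resp = convolve2d(strip, gabor_kernel(), mode="same", boundary="symm")
        code = (resp > 0).astype(np.uint8)
        fragile = np.abs(resp) < np.quantile(np.abs(resp), fragile_quantile)
        return code, fragile

    strip = np.random.default_rng(0).random((64, 512))  # stand-in iris strip
    code, fragile = iris_code(strip)
    ```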

  13. Measuring Spatial Variability of Vapor Flux to Characterize Vadose-zone VOC Sources: Flow-cell Experiments

    DOE PAGES

    Mainhagu, Jon; Morrison, C.; Truex, Michael J.; ...

    2014-08-05

    A method termed vapor-phase tomography has recently been proposed to characterize the distribution of volatile organic contaminant mass in vadose-zone source areas, and to measure associated three-dimensional distributions of local contaminant mass discharge. The method is based on measuring the spatial variability of vapor flux, and thus inherent to its effectiveness is the premise that the magnitudes and temporal variability of vapor concentrations measured at different monitoring points within the interrogated area will be a function of the geospatial positions of the points relative to the source location. A series of flow-cell experiments was conducted to evaluate this premise. A well-defined source zone was created by injection and extraction of a non-reactive gas (SF6). Spatial and temporal concentration distributions obtained from the tests were compared to simulations produced with a mathematical model describing advective and diffusive transport. Tests were conducted to characterize both areal and vertical components of the application. Decreases in concentration over time were observed for monitoring points located on the opposite side of the source zone from the local-extraction point, whereas increases were observed for monitoring points located between the local-extraction point and the source zone. The results illustrate that comparison of temporal concentration profiles obtained at various monitoring points gives a general indication of the source location with respect to the extraction and monitoring points.

  14. Method and system for producing sputtered thin films with sub-angstrom thickness uniformity or custom thickness gradients

    DOEpatents

    Folta, James A.; Montcalm, Claude; Walton, Christopher

    2003-01-01

    A method and system for producing a thin film with highly uniform (or highly accurate custom-graded) thickness on a flat or curved substrate (such as concave or convex optics), by sweeping the substrate across a vapor deposition source with controlled (and generally time-varying) velocity. In preferred embodiments, the method includes the steps of measuring the source flux distribution (using a test piece that is held stationary while exposed to the source), calculating a set of predicted film thickness profiles, each assuming the measured flux distribution and a different one of a set of sweep velocity modulation recipes, and determining from the predicted film thickness profiles a sweep velocity modulation recipe which is adequate to achieve a predetermined thickness profile. Aspects of the invention include a practical method of accurately measuring source flux distribution, and a computer-implemented method employing a graphical user interface to facilitate convenient selection of an optimal or nearly optimal sweep velocity modulation recipe to achieve a desired thickness profile on a substrate. Preferably, the computer implements an algorithm in which many sweep velocity function parameters (for example, the speed at which each substrate spins about its center as it sweeps across the source) can be varied or set to zero.
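
    The core prediction step, thickness as dwell-time-weighted flux, can be sketched in one dimension (the patent works in two dimensions and adds substrate spin; all numbers below are illustrative):

    ```python
    import numpy as np

    def predicted_thickness(flux, x_grid, sweep_positions, velocities):
        """1-D sketch of the prediction step: the thickness at a substrate
        point x is the source flux, centered at each sweep position,
        weighted by the dwell time dstep / v spent there."""
        dstep = sweep_positions[1] - sweep_positions[0]
        thickness = np.zeros_like(x_grid)
        for pos, v in zip(sweep_positions, velocities):
            thickness += np.interp(x_grid - pos, x_grid, flux,
                                   left=0.0, right=0.0) * (dstep / v)
        return thickness

    # Gaussian flux; sweeping more slowly near the edges thickens the film there.
    x = np.linspace(-50.0, 50.0, 501)
    flux = np.exp(-x**2 / (2.0 * 10.0**2))
    positions = np.linspace(-40.0, 40.0, 161)
    velocities = 1.0 + 0.5 * np.cos(np.pi * positions / 80.0)  # candidate recipe
    profile = predicted_thickness(flux, x, positions, velocities)
    ```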

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kreuzer, Helen W.; West, Jason B.; Ehleringer, James

    Seeds of the castor plant Ricinus communis, also known as castor beans, are of forensic interest because they are the source of the poison ricin. We have tested whether stable isotope ratios of castor seeds and ricin prepared by various methods can be used as a forensic signature. We collected over 300 castor seed samples from locations around the world and measured the C, N, O, and H stable isotope ratios of the whole seeds, oil, and three types of ricin preparations. Our results demonstrate that N isotope ratios can be used to correlate ricin prepared by any of these methods to source seeds. Further, stable isotope ratios distinguished >99% of crude and purified ricin protein samples in pairwise comparison tests. Stable isotope ratios therefore constitute a valuable forensic signature for ricin preparations.

  16. Sensitivity of a Bayesian atmospheric-transport inversion model to spatio-temporal sensor resolution applied to the 2006 North Korean nuclear test

    NASA Astrophysics Data System (ADS)

    Lundquist, K. A.; Jensen, D. D.; Lucas, D. D.

    2017-12-01

    Atmospheric source reconstruction allows for the probabilistic estimation of the source characteristics of an atmospheric release using observations of the release. Performance of the inversion depends partially on the temporal frequency and spatial scale of the observations. The objective of this study is to quantify the sensitivity of the source reconstruction method to sparse spatial and temporal observations. To this end, simulations of atmospheric transport of noble gases are created for the 2006 nuclear test at the Punggye-ri nuclear test site. Synthetic observations are collected from the simulation and are taken as "ground truth". Data denial techniques are used to progressively coarsen the temporal and spatial resolution of the synthetic observations, while the source reconstruction model seeks to recover the true input parameters from the synthetic observations. Reconstructed parameters considered here are source location, source timing and source quantity. Reconstruction is achieved by running an ensemble of thousands of dispersion model runs that sample from a uniform distribution of the input parameters. Machine learning is used to train a computationally efficient surrogate model from the ensemble simulations. Monte Carlo sampling and Bayesian inversion are then used in conjunction with the surrogate model to quantify the posterior probability density functions of the source input parameters. This research seeks to inform decision makers of the tradeoffs between more expensive, high-frequency observations and less expensive, low-frequency observations.
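
    A compact sketch of the ensemble-surrogate-MCMC chain described here, with a toy one-dimensional "dispersion model" standing in for the real transport code and a random forest standing in for the (unspecified) machine-learned surrogate:

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)

    def forward(src_x, q):
        """Toy 1-D 'dispersion model': three fixed sensors see a Gaussian
        plume of strength q centered at src_x."""
        sensors = np.array([2.0, 5.0, 8.0])
        return q * np.exp(-(sensors - src_x) ** 2 / 4.0)

    # Ensemble of forward runs sampled uniformly over the source parameters.
    theta = np.column_stack([rng.uniform(0.0, 10.0, 2000),   # source location
                             rng.uniform(0.5, 2.0, 2000)])   # source quantity
    Y = np.array([forward(*t) for t in theta])

    # Machine-learned surrogate trained on the ensemble.
    surrogate = RandomForestRegressor(n_estimators=200, random_state=0).fit(theta, Y)

    # Metropolis sampling of the posterior using the cheap surrogate.
    obs = forward(6.0, 1.2) + rng.normal(0.0, 0.01, 3)       # synthetic "truth"

    def log_post(t):
        if not (0.0 <= t[0] <= 10.0 and 0.5 <= t[1] <= 2.0):  # uniform prior
            return -np.inf
        resid = obs - surrogate.predict(t[None, :])[0]
        return -0.5 * np.sum(resid ** 2) / 0.01 ** 2

    cur, chain = np.array([5.0, 1.0]), []
    lp = log_post(cur)
    for _ in range(2000):
        prop = cur + rng.normal(0.0, [0.2, 0.05])
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:
            cur, lp = prop, lp_prop
        chain.append(cur.copy())
    posterior = np.array(chain)   # histograms of columns approximate the PDFs
    ```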

  17. Joint Inversion of Source Location and Source Mechanism of Induced Microseismics

    NASA Astrophysics Data System (ADS)

    Liang, C.

    2014-12-01

    The seismic source mechanism is a useful property for indicating the source physics and the stress and strain distribution at regional, local and micro scales. In this study we jointly invert source mechanisms and locations for microseismic events induced by fluid-fracturing treatments in the oil and gas industry. For events that are big enough to show clear waveforms, quite a few techniques can be applied to invert the source mechanism, including waveform inversion, first-motion polarity inversion and many other methods and variants based on them. However, for events that are too small to identify in seismic traces, such as the microseismic events induced by fluid fracturing, a source scanning algorithm (SSA) with waveform stacking is usually applied. A joint inversion of location and source mechanism is also possible, but at the cost of a high computation budget. The algorithm is thereby called the Source Location and Mechanism Scanning Algorithm (SLMSA). In this approach, for a given velocity structure, all possible combinations of source locations (X, Y and Z) and source mechanisms (strike, dip and rake) are used to compute travel times and waveform polarities. Correcting normal-moveout times and polarities, and stacking all waveforms, the (X, Y, Z, strike, dip, rake) combination that gives the strongest stacked waveform is identified as the solution. To address the high computational cost, CPU-GPU programming is applied. Numerical datasets are used to test the algorithm. The SLMSA has also been applied to a fluid-fracturing dataset and reveals several advantages over the location-only method: (1) for shear sources, the location-only program can hardly detect them because positively and negatively polarized traces cancel out, but the SLMSA method can successfully pick up those events; (2) microseismic locations alone may not be enough to indicate the directionality of micro-fractures; the statistics of source mechanisms can certainly provide more knowledge of the orientation of fractures; (3) in our practice, the joint inversion method almost always yields more events than the location-only method, and for those events that are also picked by the SSA method, the stacking power of SLMSA is always higher than that obtained with SSA.
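
    A toy version of the scan-and-stack idea behind SLMSA (a single candidate grid, with travel times and polarities precomputed and supplied by the caller; the real method derives polarities from the moment-tensor radiation pattern and runs on GPUs because the six-dimensional grid is large):

    ```python
    import numpy as np

    def slmsa_toy(traces, dt, tt, pol):
        """Scan candidate (location, mechanism) pairs: align each trace by
        its predicted travel time, flip it by its predicted polarity, stack,
        and keep the pair with the largest stack power.

        traces : (n_rec, n_samp) waveforms
        tt     : (n_loc, n_rec) predicted travel times per candidate location
        pol    : (n_mech, n_rec) predicted +1/-1 polarities per mechanism
        """
        best, best_power = None, -np.inf
        for i in range(tt.shape[0]):
            shifts = np.rint(tt[i] / dt).astype(int)
            aligned = np.array([np.roll(tr, -s) for tr, s in zip(traces, shifts)])
            for j in range(pol.shape[0]):
                stack = (pol[j][:, None] * aligned).sum(axis=0)
                power = np.max(np.abs(stack))
                if power > best_power:
                    best, best_power = (i, j), power
        return best, best_power

    # Demo: data built from one known location/mechanism is recovered exactly.
    rng = np.random.default_rng(0)
    n_rec, n_samp, dt = 8, 200, 0.01
    tt = rng.uniform(0.2, 1.0, (1, n_rec))          # one trial location
    pol = rng.choice([-1.0, 1.0], (1, n_rec))       # one trial mechanism
    wavelet = np.exp(-np.arange(n_samp) / 5.0)
    traces = np.array([p * np.roll(wavelet, int(round(t / dt)))
                       for p, t in zip(pol[0], tt[0])])
    print(slmsa_toy(traces, dt, tt, pol))           # -> ((0, 0), ~n_rec)
    ```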

  18. Phase contrast imaging simulation and measurements using polychromatic sources with small source-object distances

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Golosio, Bruno; Carpinelli, Massimo; Masala, Giovanni Luca

    Phase contrast imaging is a technique widely used in synchrotron facilities for nondestructive analysis. The technique can also be implemented with microfocus x-ray tube systems. Recently, a relatively new type of compact, quasimonochromatic x-ray source based on Compton backscattering has been proposed for phase contrast imaging applications. In order to plan a phase contrast imaging system setup, to evaluate the system performance and to choose the experimental parameters that optimize the image quality, it is important to have reliable software for phase contrast imaging simulation. Several software tools have been developed and tested against experimental measurements at synchrotron facilities devoted to phase contrast imaging. However, many approximations that are valid in such conditions (e.g., large source-object distance, small transverse size of the object, plane wave approximation, monochromatic beam, and Gaussian-shaped source focal spot) are not generally suitable for x-ray tubes and other compact systems. In this work we describe a general method for the simulation of phase contrast imaging using polychromatic sources, based on a spherical-wave description of the beam and on a double-Gaussian model of the source focal spot. We discuss the validity of some possible approximations, and we test the simulations against experimental measurements using a microfocus x-ray tube on three types of polymers (nylon, poly-ethylene-terephthalate, and poly-methyl-methacrylate) at varying source-object distances. It is shown that, as long as all experimental conditions are described accurately in the simulations, the described method yields results that are in good agreement with experimental measurements.

  19. PhySIC_IST: cleaning source trees to infer more informative supertrees

    PubMed Central

    Scornavacca, Celine; Berry, Vincent; Lefort, Vincent; Douzery, Emmanuel JP; Ranwez, Vincent

    2008-01-01

    Background Supertree methods combine phylogenies with overlapping sets of taxa into a larger one. Topological conflicts frequently arise among source trees for methodological or biological reasons, such as long branch attraction, lateral gene transfers, gene duplication/loss or deep gene coalescence. When topological conflicts occur among source trees, liberal methods infer supertrees containing the most frequent alternative, while veto methods infer supertrees not contradicting any source tree, i.e. discard all conflicting resolutions. When the source trees host a significant number of topological conflicts or have a small taxon overlap, supertree methods of both kinds can propose poorly resolved, hence uninformative, supertrees. Results To overcome this problem, we propose to infer non-plenary supertrees, i.e. supertrees that do not necessarily contain all the taxa present in the source trees, discarding those whose position greatly differs among source trees or for which insufficient information is provided. We detail a variant of the PhySIC veto method called PhySIC_IST that can infer non-plenary supertrees. PhySIC_IST aims at inferring supertrees that satisfy the same appealing theoretical properties as with PhySIC, while being as informative as possible under this constraint. The informativeness of a supertree is estimated using a variation of the CIC (Cladistic Information Content) criterion, which takes into account both the presence of multifurcations and the absence of some taxa. Additionally, we propose a statistical preprocessing step called STC (Source Trees Correction) to correct the source trees prior to the supertree inference. STC is a liberal step that removes the parts of each source tree that significantly conflict with other source trees. Combining STC with a veto method allows an explicit trade-off between veto and liberal approaches, tuned by a single parameter. Performing large-scale simulations, we observe that STC+PhySIC_IST infers much more informative supertrees than PhySIC, while preserving low type I error compared to the well-known MRP method. Two biological case studies on animals confirm that the STC preprocess successfully detects anomalies in the source trees while STC+PhySIC_IST provides well-resolved supertrees agreeing with current knowledge in systematics. Conclusion The paper introduces and tests two new methodologies, PhySIC_IST and STC, that demonstrate the interest in inferring non-plenary supertrees as well as preprocessing the source trees. An implementation of the methods is available at: http://www.atgc-montpellier.fr/physic_ist/. PMID:18834542

  20. PhySIC_IST: cleaning source trees to infer more informative supertrees.

    PubMed

    Scornavacca, Celine; Berry, Vincent; Lefort, Vincent; Douzery, Emmanuel J P; Ranwez, Vincent

    2008-10-04

    Supertree methods combine phylogenies with overlapping sets of taxa into a larger one. Topological conflicts frequently arise among source trees for methodological or biological reasons, such as long branch attraction, lateral gene transfers, gene duplication/loss or deep gene coalescence. When topological conflicts occur among source trees, liberal methods infer supertrees containing the most frequent alternative, while veto methods infer supertrees not contradicting any source tree, i.e. discard all conflicting resolutions. When the source trees host a significant number of topological conflicts or have a small taxon overlap, supertree methods of both kinds can propose poorly resolved, hence uninformative, supertrees. To overcome this problem, we propose to infer non-plenary supertrees, i.e. supertrees that do not necessarily contain all the taxa present in the source trees, discarding those whose position greatly differs among source trees or for which insufficient information is provided. We detail a variant of the PhySIC veto method called PhySIC_IST that can infer non-plenary supertrees. PhySIC_IST aims at inferring supertrees that satisfy the same appealing theoretical properties as with PhySIC, while being as informative as possible under this constraint. The informativeness of a supertree is estimated using a variation of the CIC (Cladistic Information Content) criterion, which takes into account both the presence of multifurcations and the absence of some taxa. Additionally, we propose a statistical preprocessing step called STC (Source Trees Correction) to correct the source trees prior to the supertree inference. STC is a liberal step that removes the parts of each source tree that significantly conflict with other source trees. Combining STC with a veto method allows an explicit trade-off between veto and liberal approaches, tuned by a single parameter. Performing large-scale simulations, we observe that STC+PhySIC_IST infers much more informative supertrees than PhySIC, while preserving low type I error compared to the well-known MRP method. Two biological case studies on animals confirm that the STC preprocess successfully detects anomalies in the source trees while STC+PhySIC_IST provides well-resolved supertrees agreeing with current knowledge in systematics. The paper introduces and tests two new methodologies, PhySIC_IST and STC, that demonstrate the interest in inferring non-plenary supertrees as well as preprocessing the source trees. An implementation of the methods is available at: http://www.atgc-montpellier.fr/physic_ist/.

  1. Simulation of the pulse propagation by the interacting mode parabolic equation method

    NASA Astrophysics Data System (ADS)

    Trofimov, M. Yu.; Kozitskiy, S. B.; Zakharenko, A. D.

    2018-07-01

    Broadband modeling of pulses has been performed using the previously derived interacting mode parabolic equation through Fourier synthesis. Test examples on the wedge with an angle of 2.86° (known as the ASA benchmark) show excellent agreement with the source images method.

  2. DESIGN OF A HIGH COMPRESSION, DIRECT INJECTION, SPARK-IGNITION, METHANOL FUELED RESEARCH ENGINE WITH AN INTEGRAL INJECTOR-IGNITION SOURCE INSERT, SAE PAPER 2001-01-3651

    EPA Science Inventory

    A stratified charge research engine and test stand were designed and built for this work. The primary goal of this project was to evaluate the feasibility of using a removable integral injector-ignition source insert, which allows a convenient method of changing the relative locat...

  3. The Associations between Health Literacy, Reasons for Seeking Health Information, and Information Sources Utilized by Taiwanese Adults

    ERIC Educational Resources Information Center

    Wei, Mi-Hsiu

    2014-01-01

    Objective: To determine the associations between health literacy, the reasons for seeking health information, and the information sources utilized by Taiwanese adults. Method: A cross-sectional survey of 752 adults residing in rural and urban areas of Taiwan was conducted via questionnaires. Chi-squared tests and logistic regression were used for…

  4. 40 CFR Table 9 to Subpart Xxxx of... - Minimum Data for Continuous Compliance With the Emission Limits for Tire Production Affected Sources

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... EMISSION STANDARDS FOR HAZARDOUS AIR POLLUTANTS FOR SOURCE CATEGORIES National Emissions Standards for..., appendix A), or approved alternative method, test results indicating the mass percent of each HAP for each... mass percent of each HAP for each cement and solvent, as purchased.b. The mass of each cement and...

  5. Ultra-portable field transfer radiometer for vicarious calibration of earth imaging sensors

    NASA Astrophysics Data System (ADS)

    Thome, Kurtis; Wenny, Brian; Anderson, Nikolaus; McCorkel, Joel; Czapla-Myers, Jeffrey; Biggar, Stuart

    2018-06-01

    A small portable transfer radiometer has been developed as part of an effort to ensure the quality of upwelling radiance from test sites used for vicarious calibration in the solar reflective regime. The test sites are used to predict top-of-atmosphere reflectance relying on ground-based measurements of the atmosphere and surface. The portable transfer radiometer is designed for one-person operation for on-site field calibration of instrumentation used to determine ground-leaving radiance. The current work describes the detector- and source-based radiometric calibration of the transfer radiometer, highlighting the expected accuracy and SI-traceability. The results indicate differences between the detector-based and source-based results greater than the combined uncertainties of the approaches. Results from recent field deployments of the transfer radiometer using a solar-radiation-based calibration agree with the source-based laboratory calibration within the combined uncertainties of the methods. The detector-based results show a significant difference from the solar-based calibration. The source-based calibration is used as the basis for a radiance-based calibration of the Landsat-8 Operational Land Imager (OLI) that agrees with the OLI calibration to within the uncertainties of the methods.

  6. Radio spectra of bright compact sources at z > 4.5

    NASA Astrophysics Data System (ADS)

    Coppejans, Rocco; van Velzen, Sjoert; Intema, Huib T.; Müller, Cornelia; Frey, Sándor; Coppejans, Deanne L.; Cseh, Dávid; Williams, Wendy L.; Falcke, Heino; Körding, Elmar G.; Orrú, Emanuela; Paragi, Zsolt; Gabányi, Krisztina É.

    2017-05-01

    High-redshift quasars are important for studying galaxy and active galactic nuclei evolution, testing cosmological models and studying supermassive black hole growth. Optical searches for high-redshift sources have been very successful, but radio searches are not hampered by dust obscuration and should be more effective at finding sources at even higher redshifts. Identifying high-redshift sources based on radio data is, however, not trivial. Here we report on new multifrequency Giant Metrewave Radio Telescope observations of eight z > 4.5 sources previously studied at high angular resolution with very long baseline interferometry (VLBI). Combining these observations with those from the literature, we construct broad-band radio spectra of all 30 z > 4.5 sources that have been observed with VLBI. In the sample we found flat, steep and peaked spectra in approximately equal proportions. Despite several selection effects, we conclude that the z > 4.5 VLBI (and likely also non-VLBI) sources have diverse spectra and that only about a quarter of the sources in the sample have flat spectra. Previously, the majority of high-redshift radio sources were identified based on their ultrasteep spectra. Recently, a new method has been proposed to identify these objects based on their megahertz-peaked spectra. No method would have identified more than 18 per cent of the high-redshift sources in this sample. More effective methods are necessary to reliably identify complete samples of high-redshift sources based on radio data.
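
    Spectral classification of this kind ultimately rests on fitting spectral indices; a sketch using the S ∝ ν^α convention with an illustrative flat/steep boundary at α = −0.5 (the paper's exact criteria may differ):

    ```python
    import numpy as np

    def spectral_index(freqs_ghz, flux_mjy):
        """Spectral index alpha in S ∝ nu^alpha from a log-log least-squares
        fit, with a conventional flat/steep split at alpha = -0.5."""
        alpha = np.polyfit(np.log10(freqs_ghz), np.log10(flux_mjy), 1)[0]
        return alpha, ("flat" if alpha > -0.5 else "steep")

    # A peaked spectrum rises below the turnover and falls above it, so
    # opposite-sign slopes in the two halves flag a peaked source.
    nu = np.array([0.15, 0.6, 1.4, 5.0])     # GHz
    s = np.array([20.0, 45.0, 30.0, 12.0])   # mJy
    alpha_lo = np.polyfit(np.log10(nu[:2]), np.log10(s[:2]), 1)[0]
    alpha_hi = np.polyfit(np.log10(nu[2:]), np.log10(s[2:]), 1)[0]
    print(spectral_index(nu, s), "peaked:", alpha_lo > 0 > alpha_hi)
    ```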

  7. After the flood: an evaluation of in-home drinking water treatment with combined flocculent-disinfectant following Tropical Storm Jeanne -- Gonaives, Haiti, 2004.

    PubMed

    Colindres, Romulo E; Jain, Seema; Bowen, Anna; Mintz, Eric; Domond, Polyana

    2007-09-01

    Tropical Storm Jeanne struck Haiti in September 2004, causing widespread flooding which contaminated water sources, displaced thousands of families and killed approximately 2,800 people. Local leaders distributed PūR, a flocculent-disinfectant product for household water treatment, to affected populations. We evaluated knowledge, attitudes, practices, and drinking water quality among a sample of PūR recipients. We interviewed representatives of 100 households in three rural communities who received PūR and PūR-related education. Water sources were tested for fecal contamination and turbidity; stored household water was tested for residual chlorine. All households relied on untreated water sources (springs [66%], wells [15%], community taps [13%], and rivers [6%]). After distribution, PūR was the most common in-home treatment method (58%) followed by chlorination (30%), plant-based flocculation (6%), boiling (5%), and filtration (1%). Seventy-eight percent of respondents correctly answered five questions about how to use PūR; 81% reported PūR easy to use; and 97% reported that PūR-treated water appears, tastes, and smells better than untreated water. Although water sources tested appeared clear, fecal coliform bacteria were detected in all sources (range 1 - >200 cfu/100 ml). Chlorine was present in 10 (45%) of 22 stored drinking water samples in households using PūR. PūR was well-accepted and properly used in remote communities where local leaders helped with distribution and education. This highly effective water purification method can help protect disaster-affected communities from waterborne disease.

  8. Calcium absorbability from milk products, an imitation milk, and calcium carbonate

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Recker, R.R.; Bammi, A.; Barger-Lux, M.J.

    Whole milk, chocolate milk, yogurt, imitation milk (prepared from dairy and nondairy products), cheese, and calcium carbonate were labeled with 45Ca and administered as a series of test meals to 10 healthy postmenopausal women. Carrier Ca content of the test meals was held constant at 250 mg and subjects fasted before each meal. The absorbability of Ca from the six sources was compared by measuring fractional absorption by the double isotope method. The mean absorption values for all six sources were tightly clustered between 21 and 26% and none was significantly different from the others using one-way analysis of variance. We conclude that none of the sources was significantly superior or inferior to the others.

  9. Linking ceragenins to water-treatment membranes to minimize biofouling.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hibbs, Michael R.; Altman, Susan Jeanne; Feng, Yanshu

    Ceragenins were used to create biofouling-resistant water-treatment membranes. Ceragenins are synthetically produced antimicrobial peptide mimics that display broad-spectrum bactericidal activity. While ceragenins have been used on biomedical devices, use of ceragenins on water-treatment membranes is novel. Biofouling impacts membrane separation processes for many industrial applications such as desalination, waste-water treatment, oil and gas extraction, and power generation. Biofouling results in a loss of permeate flux and an increase in energy use. Creation of biofouling-resistant membranes will assist in the creation of clean water with lower energy usage and energy with lower water usage. Five methods of attaching three different ceragenin molecules were conducted and tested. Biofouling reduction was observed in the majority of the tests, indicating that ceragenins are a viable solution to biofouling on water-treatment membranes. Silane direct attachment appears to be the most promising attachment method if a high concentration of CSA-121a is used. Additional refinement of the attachment methods is needed in order to achieve our goal of a several-log reduction in biofilm cell density without impacting the membrane flux. Concurrently, biofilm-forming bacteria were isolated from source waters relevant for water treatment: wastewater, agricultural drainage, river water, seawater, and brackish groundwater. These isolates can be used for future testing of methods to control biofouling. Once isolated, the ability of the isolates to grow biofilms was tested with high-throughput multiwell methods. Based on these tests, the following species were selected for further testing in tube reactors and CDC reactors: Pseudomonas ssp. (wastewater, agricultural drainage, and Colorado River water), Nocardia coeliaca or Rhodococcus spp. (wastewater), Pseudomonas fluorescens and Hydrogenophaga palleronii (agricultural drainage), Sulfitobacter donghicola, Rhodococcus fascians, Rhodobacter katedanii, and Paracoccus marcusii (seawater), and Sphingopyxis spp. (groundwater). The testing demonstrated the ability of these isolates to be used for biofouling-control testing under laboratory conditions. Biofilm-forming bacteria were obtained from all the source water samples.

  10. Measurements of Parameters Controlling the Emissions of Organophosphate Flame Retardants in Indoor Environments.

    PubMed

    Liang, Yirui; Liu, Xiaoyu; Allen, Matthew R

    2018-05-15

    Emission of semivolatile organic compounds (SVOCs) from source materials usually occurs very slowly in indoor environments due to their low volatility. When the SVOC emission process is controlled by external mass transfer, the gas-phase concentration in equilibrium with the material (y0) is used as a key parameter to simplify the source models that are based on solid-phase diffusion. A material-air-material (M-A-M) configured microchamber method was developed to rapidly measure y0 for a polyisocyanurate rigid foam material containing organophosphate flame retardants (OPFRs). The emission test was conducted in 44 mL microchambers for target OPFRs, including tris(2-chloroethyl) phosphate (CASRN: 115-96-8), tris(1-chloro-2-propyl) phosphate (CASRN: 13674-84-5), and tris(1,3-dichloro-2-propyl) phosphate (CASRN: 13674-87-8). In addition to the microchamber emission test, two other types of tests were conducted to determine y0 for the same foam material: OPFR diffusive tube sampling tests from the OPFR source foam using stainless-steel thermal desorption tubes, and sorption tests of OPFRs on an OPFR-free foam in a 53 L small chamber. Comparison of parameters obtained from the three methods suggests that the discrepancy could be caused by a combination of theoretical, experimental, and computational differences. Based on the y0 measurements, a linear relationship between the ratio of y0 to the saturated vapor pressure concentration and material-phase mass fractions has been found for phthalates and OPFRs.

  11. Automated source classification of new transient sources

    NASA Astrophysics Data System (ADS)

    Oertel, M.; Kreikenbohm, A.; Wilms, J.; DeLuca, A.

    2017-10-01

    The EXTraS project harvests the hitherto unexplored temporal-domain information buried in the serendipitous data collected by the European Photon Imaging Camera (EPIC) onboard the ESA XMM-Newton mission since its launch. This includes a search for fast transients missed by standard image analysis, and a search for and characterization of variability in hundreds of thousands of sources. We present an automated classification scheme for new transient sources in the EXTraS project. The method is as follows: source classification features of a training sample are used to train machine learning algorithms (performed in R; randomForest (Breiman, 2001) in supervised mode), which are then tested on a sample of known source classes and used for classification.
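
    The cited pipeline uses R's randomForest; below is a minimal Python analogue of the same supervised train/test workflow, with made-up features and labels standing in for the project's source-classification features:

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import classification_report

    rng = np.random.default_rng(0)
    # Hypothetical features (e.g., variability amplitude, spectral hardness)
    # and stand-in class labels (e.g., AGN / star / cataclysmic variable).
    X = rng.random((500, 4))
    y = rng.integers(0, 3, 500)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                              random_state=0)
    clf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)
    print(classification_report(y_te, clf.predict(X_te)))
    ```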

  12. Search for Spatially Extended Fermi-LAT Sources Using Two Years of Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lande, Joshua; Ackermann, Markus; Allafort, Alice

    2012-07-13

    Spatial extension is an important characteristic for correctly associating γ-ray-emitting sources with their counterparts at other wavelengths and for obtaining an unbiased model of their spectra. We present a new method for quantifying the spatial extension of sources detected by the Large Area Telescope (LAT), the primary science instrument on the Fermi Gamma-ray Space Telescope (Fermi). We perform a series of Monte Carlo simulations to validate this tool and calculate the LAT threshold for detecting the spatial extension of sources. We then test all sources in the second Fermi-LAT catalog (2FGL) for extension. We report the detection of seven new spatially extended sources.

  13. 2D joint inversion of CSAMT and magnetic data based on cross-gradient theory

    NASA Astrophysics Data System (ADS)

    Wang, Kun-Peng; Tan, Han-Dong; Wang, Tao

    2017-06-01

    A two-dimensional forward and inversion algorithm for the controlled-source audio-frequency magnetotelluric (CSAMT) method is developed to invert data in the entire region (near, transition, and far field) and deal with the effects of artificial sources. First, a regularization factor is introduced in the 2D magnetic inversion, and the magnetic susceptibility is updated in logarithmic form so that the inverted magnetic susceptibility is always positive. Second, the joint inversion of the CSAMT and magnetic methods is completed with the introduction of the cross gradient. By searching for the weight of the cross-gradient term in the objective function, the mutual influence between two different physical properties at different locations is avoided. Model tests show that the joint inversion based on cross-gradient theory offers better results than the single-method inversions. The 2D forward and inverse algorithm for CSAMT with a source can effectively deal with artificial sources and ensures the reliability of the final joint inversion algorithm.
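
    The cross-gradient constraint itself is compact; a sketch for two 2-D model grids, where only the out-of-plane component of ∇m1 × ∇m2 survives:

    ```python
    import numpy as np

    def cross_gradient_2d(m1, m2, dx=1.0, dz=1.0):
        """Cross-gradient t = ∇m1 × ∇m2 for two 2-D model grids. t -> 0
        wherever the two models' structural boundaries align, which is the
        condition the joint inversion drives toward."""
        dm1_dz, dm1_dx = np.gradient(m1, dz, dx)
        dm2_dz, dm2_dx = np.gradient(m2, dz, dx)
        return dm1_dx * dm2_dz - dm1_dz * dm2_dx

    # Identical structure (one model a scaled copy of the other) gives t == 0.
    z, x = np.mgrid[0:50, 0:50]
    resistivity = np.where(x > 25, 100.0, 10.0)
    susceptibility = 0.01 * resistivity
    t = cross_gradient_2d(np.log(resistivity), susceptibility)
    print(np.abs(t).max())   # ~0: structurally consistent models
    ```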

  14. NEXT GENERATION LEACHING TESTS FOR EVALUATING ...

    EPA Pesticide Factsheets

    In the U.S., as in other countries, there is increased interest in using industrial by-products as alternative or secondary materials, helping to conserve virgin or raw materials. The LEAF and associated test methods are being used to develop the source term for leaching of any inorganic constituents of potential concern (COPC) in determining what is environmentally acceptable. The leaching test methods include batch equilibrium, percolation column and semi-dynamic mass transport tests for monolithic and compacted granular materials. By testing over a range of values for pH, liquid/solid ratio, and physical form of the material, this approach allows one data set to be used to evaluate a range of management scenarios for a material, representing different environmental conditions (e.g., disposal or beneficial use). The results from these tests may be interpreted individually or integrated to identify a solid material's characteristic leaching behavior. Furthermore, the LEAF approach provides the ability to make meaningful comparisons of leaching between similar and dissimilar materials of national and worldwide origins. The objective is to present EPA's research under SHC to implement validated leaching tests, referred to as the Leaching Environmental Assessment Framework (LEAF). The primary focus will be on the guidance for implementation of LEAF, describing three case studies for developing source terms for evaluating inorganic constituents.

  15. Comparison of methods to estimate water access: a pilot study of a GPS-based approach in low resource settings.

    PubMed

    Pearson, Amber L

    2016-09-20

    Most water access studies involve self-reported measures such as time spent or simple spatial measures such as Euclidean distance from home to source. GPS-based measures of access are often considered actual access and have shown little correlation with self-reported measures. One main obstacle to widespread use of GPS-based measurement of access to water has been technological limitations (e.g., battery life). As such, GPS-based measures have been limited in duration and in sample size. The aim of this pilot study was to develop and test a novel GPS unit (≤4-week battery life, waterproof) to measure access to water. The GPS-based method was pilot-tested to estimate the number of trips per day, time spent, and distance traveled to source for all water collected over a 3-day period in five households in south-western Uganda. This method was then compared to self-reported measures and commonly used spatial measures of access for the same households. Time spent collecting water was significantly overestimated using the self-reported measure, compared to the GPS-based one (p < 0.05). In contrast, both the GIS Euclidean distances to the nearest and to the actual primary source significantly underestimated distances traveled, compared to the GPS-based measurement of actual travel paths to the water source (p < 0.05). Households did not consistently collect water from the source nearest their home. Comparisons between the GPS-based measure and self-reported meters traveled were not made, as respondents did not feel that they could accurately estimate distance. However, there was complete agreement between the self-reported primary source and the GPS-based measure. Reliance on cross-sectional self-reported or simple GIS measures leads to misclassification in water access measurement. This new method offers reductions in such errors and may aid in understanding dynamic measures of access to water for health studies.
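
    The GPS-based distance measure amounts to summing great-circle segment lengths along the recorded track; a sketch with hypothetical coordinates (not study data):

    ```python
    import numpy as np

    def haversine_m(lat1, lon1, lat2, lon2):
        """Great-circle distance in meters between two points (spherical
        Earth approximation, radius 6371 km)."""
        r = 6371000.0
        p1, p2 = np.radians(lat1), np.radians(lat2)
        dp, dl = p2 - p1, np.radians(np.asarray(lon2) - np.asarray(lon1))
        a = np.sin(dp / 2) ** 2 + np.cos(p1) * np.cos(p2) * np.sin(dl / 2) ** 2
        return 2 * r * np.arcsin(np.sqrt(a))

    def track_length_m(lats, lons):
        """Cumulative along-path distance of a GPS trace, the quantity the
        pilot compared against Euclidean home-to-source distance."""
        lats, lons = np.asarray(lats), np.asarray(lons)
        return np.sum(haversine_m(lats[:-1], lons[:-1], lats[1:], lons[1:]))

    # Illustrative fix points (hypothetical coordinates, not study data):
    print(track_length_m([-0.601, -0.602, -0.603], [30.650, 30.651, 30.653]))
    ```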

  16. 40 CFR 435.11 - Specialized definitions.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Extraction Point Source Category,” EPA-821-R-11-004. See paragraph (uu) of this section. (e) Biodegradation... Bottle Biodegradation Test System: Modified ISO 11734:1995,” EPA Method 1647, supplemented with...

  17. 40 CFR 435.11 - Specialized definitions.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... Extraction Point Source Category,” EPA-821-R-11-004. See paragraph (uu) of this section. (e) Biodegradation... Bottle Biodegradation Test System: Modified ISO 11734:1995,” EPA Method 1647, supplemented with...

  18. 40 CFR 435.11 - Specialized definitions.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Extraction Point Source Category,” EPA-821-R-11-004. See paragraph (uu) of this section. (e) Biodegradation... Bottle Biodegradation Test System: Modified ISO 11734:1995,” EPA Method 1647, supplemented with...

  19. 40 CFR 63.1312 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ....111) Owner or operator (§ 63.2) Performance evaluation (§ 63.2) Performance test (§ 63.2) Permitting...-up, shutdown, and malfunction plan (§ 63.101) State (§ 63.2) Stationary Source (§ 63.2) Surge control vessel (§ 63.161) Temperature monitoring device (§ 63.111) Test method (§ 63.2) Treatment process (§ 63...

  20. Comparing Single species Toxicity Tests to Mesocosm Community-Level Responses to Total Dissolved Solids Comprised of Different Major Ions

    EPA Science Inventory

    Total Dissolved Solids (TDS) dosing studies representing different sources of ions were conducted from 2011-2015. Emergence responses in stream mesocosms were compared to single-species exposures using a whole effluent testing (WET) format and an ex-situ method (single species te...

  1. Development and Initial Testing of a Structured Clinical Observation Tool to Assess Pharmacotherapy Competence

    ERIC Educational Resources Information Center

    Young, John Q.; Lieu, Sandra; O'Sullivan, Patricia; Tong, Lowell

    2011-01-01

    Objective: The authors developed and tested the feasibility and utility of a new direct-observation instrument to assess trainee performance of a medication management session. Methods: The Psychopharmacotherapy-Structured Clinical Observation (P-SCO) instrument was developed based on multiple sources of expertise and then implemented in 4…

  2. Independent assessment of source position for gynecological applicator in high-dose-rate brachytherapy

    PubMed Central

    Nakamura, Satoshi; Nishioka, Shie; Iijima, Kotaro; Wakita, Akihisa; Abe, Yukinao; Tohyama, Naoki; Kawamura, Shinji; Minemura, Toshiyuki; Itami, Jun

    2017-01-01

    Purpose The aim of this study is to describe a phantom designed for independent examination of the source position in brachytherapy that is suitable for inclusion in an external auditing program. Material and methods We developed a phantom with a special design and a simple mechanism, capable of firmly fixing a radiochromic film and tandem-ovoid applicators to assess discrepancies in source positions between the measurements and the treatment planning system (TPS). Three tests were conducted: 1) reproducibility of the source positions (n = 5); 2) source movements inside the applicator tube; 3) changing source position by changing the curvature of the transfer tubes. In addition, as a trial study, the phantom was mailed to 12 institutions, and 23 trial data sets were examined. The source displacements ΔX and ΔY (reference = TPS) were expressed according to the coordinates, in which the positive direction on the X-axis corresponds to the external side of the applicator, perpendicular to the source transfer direction (Y-axis). Results Test 1: The 1σ fell within 1 mm irrespective of the dwell positions. Test 2: ΔX was greater around the tip of the applicator owing to the source cable. Test 3: All of the source position changes fell within 1 mm. For the postal audit, the mean and 1.96σ in ΔX were 0.8 and 0.8 mm, respectively. Almost all data were located within the positive region along the X-axis due to the source cable. The mean and 1.96σ in ΔY were 0.3 and 1.6 mm, respectively. The variance in ΔY was greater than that in ΔX, and large uncertainties exist in the determination of the first dwell position. The 95% confidence limit was 2.1 mm. Conclusions In HDR brachytherapy, the effectiveness of independent source position assessment was demonstrated. The 95% confidence limit was 2.1 mm for a tandem-ovoid applicator. PMID:29204169

  3. Monte Carlo Perturbation Theory Estimates of Sensitivities to System Dimensions

    DOE PAGES

    Burke, Timothy P.; Kiedrowski, Brian C.

    2017-12-11

    Here, Monte Carlo methods are developed using adjoint-based perturbation theory and the differential operator method to compute the sensitivities of the k-eigenvalue, linear functions of the flux (reaction rates), and bilinear functions of the forward and adjoint flux (kinetics parameters) to system dimensions for uniform expansions or contractions. The calculation of sensitivities to system dimensions requires computing scattering and fission sources at material interfaces using collisions occurring at the interface, which is a set of events with infinitesimal probability. Kernel density estimators are used to estimate the source at interfaces using collisions occurring near the interface. The methods for computing sensitivities of linear and bilinear ratios are derived using the differential operator method and adjoint-based perturbation theory and are shown to be equivalent to methods previously developed using a collision history-based approach. The methods for determining sensitivities to system dimensions are tested on a series of fast, intermediate, and thermal critical benchmarks as well as a pressurized water reactor benchmark problem with iterated fission probability used for adjoint-weighting. The estimators are shown to agree within 5% and 3σ of reference solutions obtained using direct perturbations with central differences for the majority of test problems.

  4. A novel identification method of the environmental risk sources for surface water pollution accidents in chemical industrial parks.

    PubMed

    Peng, Jianfeng; Song, Yonghui; Yuan, Peng; Xiao, Shuhu; Han, Lu

    2013-07-01

    The chemical industry is a major source of pollution accidents. Improving the management of risk sources for pollution accidents has become an urgent demand in most industrialized countries. In pollution accidents, the released chemicals harm the receptors to an extent depending on their sensitivity or susceptibility. Identifying the potential risk sources from such a large number of chemical enterprises has therefore become a pressing task. Based on simulation of the whole accident process, a novel and expandable identification method for risk sources causing water pollution accidents is presented. The newly developed approach, by analyzing and simulating the whole process of a pollution accident between sources and receptors, can be applied to identify risk sources, especially on a nationwide scale. Three major types of losses (social, economic and ecological) were normalized, analyzed and used for overall consequence modeling. A specific case study area, located in a chemical industry park (CIP) along the Yangtze River in Jiangsu Province, China, was selected to test the potential of the identification method. The results showed that there were four risk sources for pollution accidents in this CIP. Aniline leakage in the HS Chemical Plant would lead to the most serious impact on the surrounding water environment. This potential accident would severely damage the ecosystem up to 3.8 km downstream in the Yangtze River, and lead to pollution over a distance stretching to 73.7 km downstream. The proposed method is easily extended to the nationwide identification of potential risk sources.

  5. CERTS Microgrid Laboratory Test Bed - PIER Final Project Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eto, Joseph H.; Eto, Joseph H.; Lasseter, Robert

    2008-07-25

    The objective of the CERTS Microgrid Laboratory Test Bed project was to enhance the ease of integrating small energy sources into a microgrid. The project accomplished this objective by developing and demonstrating three advanced techniques, collectively referred to as the CERTS Microgrid concept, that significantly reduce the level of custom field engineering needed to operate microgrids consisting of small generating sources. The techniques comprising the CERTS Microgrid concept are: 1) a method for effecting automatic and seamless transitions between grid-connected and islanded modes of operation; 2) an approach to electrical protection within the microgrid that does not depend on high fault currents; and 3) a method for microgrid control that achieves voltage and frequency stability under islanded conditions without requiring high-speed communications. The techniques were demonstrated at a full-scale test bed built near Columbus, Ohio and operated by American Electric Power. The testing fully confirmed earlier research that had been conducted initially through analytical simulations, then through laboratory emulations, and finally through factory acceptance testing of individual microgrid components. The islanding and resynchronization method met all Institute of Electrical and Electronics Engineers 1547 and power quality requirements. The electrical protection system was able to distinguish between normal and faulted operation. The controls were found to be robust under all conditions, including difficult motor starts. The results from these tests are expected to lead to additional testing of enhancements to the basic techniques at the test bed to improve the business case for microgrid technologies, as well as to field demonstrations of microgrids that incorporate one or more of the CERTS Microgrid concepts.

  6. Application of the Bootstrap Statistical Method in Deriving Vibroacoustic Specifications

    NASA Technical Reports Server (NTRS)

    Hughes, William O.; Paez, Thomas L.

    2006-01-01

    This paper discusses the Bootstrap method for deriving vibroacoustic test specifications. Vibroacoustic test specifications are necessary to properly accept or qualify a spacecraft and its components for the acoustic, random vibration and shock environments expected on an expendable launch vehicle. Traditionally, NASA and the U.S. Air Force have employed methods of normal tolerance limits to derive these test levels based upon the amount of data available and the probability and confidence levels desired. The normal tolerance limit method contains inherent assumptions about the distribution of the data. The Bootstrap is a distribution-free statistical subsampling method which uses the measured data themselves to establish estimates of statistical measures of random sources. This is achieved through the computation of large numbers of Bootstrap replicates of a data measure of interest and the use of these replicates to derive test levels consistent with the probability and confidence desired. The comparison of the results of these two methods is illustrated via an example utilizing actual spacecraft vibroacoustic data.
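
    A minimal sketch of the bootstrap idea applied to deriving a level such as P95/50 from measured spectra; the resampling scheme and quantile choices here are illustrative, not the paper's exact procedure:

    ```python
    import numpy as np

    def bootstrap_level(spectra_db, prob=0.95, conf=0.50, n_boot=2000, seed=0):
        """Distribution-free test-level estimate in the bootstrap spirit:
        resample the measured flight spectra with replacement, take the
        `prob` quantile of each replicate, and use the `conf` quantile of
        those replicates as the specification level. Input is per frequency
        bin, shape (n_flights, n_bins)."""
        rng = np.random.default_rng(seed)
        n = spectra_db.shape[0]
        reps = np.empty((n_boot, spectra_db.shape[1]))
        for i in range(n_boot):
            sample = spectra_db[rng.integers(0, n, n), :]
            reps[i] = np.quantile(sample, prob, axis=0)
        return np.quantile(reps, conf, axis=0)   # e.g. P95/50 level per bin

    # Ten hypothetical flight measurements over 30 one-third-octave bins.
    data = 130.0 + 3.0 * np.random.default_rng(1).normal(size=(10, 30))
    spec = bootstrap_level(data)
    ```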

  7. How Big Was It? Getting at Yield

    NASA Astrophysics Data System (ADS)

    Pasyanos, M.; Walter, W. R.; Ford, S. R.

    2013-12-01

    One of the most coveted pieces of information in the wake of a nuclear test is the explosive yield. Determining the yield from remote observations, however, is not trivial. For instance, recorded observations of seismic amplitudes, used to estimate the yield, are significantly modified by the intervening media, which vary widely and must be properly accounted for. Even after correcting for propagation effects such as geometrical spreading, attenuation, and station site terms, getting from the resulting source term to a yield depends on the specifics of the explosion source model, including material properties and depth. Some formulas assume the explosion has a standard depth of burial, and observed amplitudes can vary if the actual test is significantly overburied or underburied. We will consider the complications and challenges of making these determinations using a number of standard, more traditional methods and a more recent method that we have developed using regional waveform envelopes. We will make this comparison for recent declared nuclear tests from the DPRK. We will also compare the methods using older explosions at the Nevada Test Site with announced yields, materials and depths, so that actual performance can be measured. In all cases, we also strive to quantify realistic uncertainties on the yield estimation.
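
    For orientation, the classical magnitude-yield relations the abstract alludes to take the form mb = a + b*log10(Y), with constants that depend on emplacement medium, depth of burial, and regional attenuation. The sketch below simply inverts that relation; the constants are illustrative placeholders only, not values endorsed by the authors.

        def yield_from_mb(mb: float, a: float = 4.45, b: float = 0.75) -> float:
            """Invert mb = a + b*log10(Y) for yield Y in kilotons.
            a and b are placeholders; real values are region-dependent."""
            return 10.0 ** ((mb - a) / b)

        print(round(yield_from_mb(4.9), 1))  # ~4 kt with these constants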

  8. What Is the Reference? An Examination of Alternatives to the Reference Sources Used in IES TM-30-15

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Royer, Michael P.

    A study was undertaken to document the role of the reference illuminant in the IES TM-30-15 method for evaluating color rendition. TM-30-15 relies on a relative reference scheme; that is, the reference illuminant and test source always have the same correlated color temperature (CCT). The reference illuminant is a Planckian radiator, a model of daylight, or a combination of the two, depending on the exact CCT of the test source. Three alternative reference schemes were considered: 1) using either all Planckian radiators or all daylight models; 2) using only one of ten possible illuminants (Planckian, daylight, or equal energy), regardless of the CCT of the test source; 3) using an off-Planckian reference illuminant (i.e., a source with a negative Duv). No reference scheme is inherently superior to another, with differences in metric values largely a result of small differences in gamut shape of the reference alternatives. While using any of the alternative schemes is more reasonable in the TM-30-15 evaluation framework than it was with the CIE CRI framework, the differences still ultimately manifest only as changes in interpretation of the results. References are employed in color rendering measures to provide a familiar point of comparison, not to establish an ideal source.

  9. Cartridge output testing - Methods to overcome closed-bomb shortcomings

    NASA Technical Reports Server (NTRS)

    Bement, Laurence J.; Schimmel, Morry L.

    1991-01-01

    Although the closed-bomb test has achieved virtually universal acceptance for measuring the output performance of pyrotechnic cartridges, there are serious shortcomings in its ability to quantify the performance of cartridges used as energy sources for pyrotechnically activated mechanical devices. This paper presents several examples of cartridges (including the NASA Standard Initiator, NSI) that successfully met closed-bomb performance requirements but resulted in functional failures in mechanisms. To resolve these failures, test methods were developed to demonstrate a functional margin, based on comparing the energy required to accomplish the function with the energy deliverable by the cartridge.

  10. AN OPEN-SOURCE NEUTRINO RADIATION HYDRODYNAMICS CODE FOR CORE-COLLAPSE SUPERNOVAE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O’Connor, Evan, E-mail: evanoconnor@ncsu.edu; CITA, Canadian Institute for Theoretical Astrophysics, Toronto, M5S 3H8

    2015-08-15

    We present an open-source update to the spherically symmetric, general-relativistic hydrodynamics, core-collapse supernova (CCSN) code GR1D. The source code is available at http://www.GR1Dcode.org. We extend its capabilities to include a general-relativistic treatment of neutrino transport based on the moment formalisms of Shibata et al. and Cardall et al. We pay special attention to implementing and testing numerical methods and approximations that lessen the computational demand of the transport scheme by removing the need to invert large matrices. This is especially important for the implementation and development of moment-like transport methods in two and three dimensions. A critical component of neutrino transport calculations is the neutrino–matter interaction coefficients that describe the production, absorption, scattering, and annihilation of neutrinos. In this article we also describe our open-source neutrino interaction library NuLib (available at http://www.nulib.org). We believe that an open-source approach to describing these interactions is one of the major steps needed to progress toward robust models of CCSNe and robust predictions of the neutrino signal. We show, via comparisons to full Boltzmann neutrino-transport simulations of CCSNe, that our neutrino transport code performs remarkably well. Furthermore, we show that the methods and approximations we employ to increase efficiency do not decrease the fidelity of our results. We also test the ability of our general-relativistic transport code to model failed CCSNe by evolving a 40-solar-mass progenitor to the onset of collapse to a black hole.

  11. An optimized inverse modelling method for determining the location and strength of a point source releasing airborne material in urban environment

    NASA Astrophysics Data System (ADS)

    Efthimiou, George C.; Kovalets, Ivan V.; Venetsanos, Alexandros; Andronopoulos, Spyros; Argyropoulos, Christos D.; Kakosimos, Konstantinos

    2017-12-01

    An improved inverse modelling method to estimate the location and the emission rate of an unknown stationary point source of passive atmospheric pollutant in a complex urban geometry is incorporated in the Computational Fluid Dynamics code ADREA-HF and presented in this paper. The key improvement over the previous version of the method lies in a two-step segregated approach. First, only the source coordinates are analysed, using a correlation function of measured and calculated concentrations. In the second step the source rate is identified by minimizing a quadratic cost function. The new algorithm is validated by simulating the MUST wind tunnel experiment. A grid-independent flow field solution is first attained by applying successive refinements of the computational mesh, and the final wind flow is validated against the measurements quantitatively and qualitatively. The old and new versions of the source term estimation method are tested on a coarse and a fine mesh. The new method proved more robust, giving satisfactory estimates of source location and emission rate on both grids. The performance of the old version varied between failure and success and was sensitive to the magnitude of the model error that must be inserted in its quadratic cost function. The performance of the method also depends on the number and placement of the sensors constituting the measurement network. Of significant interest for practical application of the method in urban settings is the number of concentration sensors required to obtain a 'satisfactory' determination of the source. The probability of obtaining a satisfactory solution - according to specified criteria - by the new method has been assessed as a function of the number of sensors that constitute the measurement network.
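
    The two-step idea is simple to state in code: screen candidate locations by the correlation between measured and unit-rate modelled concentrations, then, because a passive pollutant scales linearly with the emission rate, obtain the rate in closed form by least squares. The sketch below assumes precomputed unit-rate fields; all names and numbers are illustrative, and the paper's actual cost function and CFD fields are far richer.

        import numpy as np

        def locate_and_rate(c_meas, c_unit_by_candidate):
            """Step 1: best-correlated candidate location.
            Step 2: rate q minimizing ||c_meas - q*c_unit||^2 (closed form)."""
            best = max(c_unit_by_candidate, key=lambda loc: np.corrcoef(
                c_meas, c_unit_by_candidate[loc])[0, 1])
            c_unit = c_unit_by_candidate[best]
            q = float(c_unit @ c_meas) / float(c_unit @ c_unit)
            return best, q

        # Toy data: four sensors, three candidate source locations
        meas = np.array([1.9, 0.2, 4.1, 0.9])
        cands = {"A": np.array([1.0, 0.1, 2.0, 0.5]),
                 "B": np.array([0.1, 2.0, 0.2, 1.5]),
                 "C": np.array([0.5, 0.4, 0.6, 0.5])}
        print(locate_and_rate(meas, cands))  # expect ("A", q close to 2)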

  12. Multipath interference test method for distributed amplifiers

    NASA Astrophysics Data System (ADS)

    Okada, Takahiro; Aida, Kazuo

    2005-12-01

    A method for testing distributed amplifiers is presented; the multipath interference (MPI) is detected as a beat spectrum between the multipath signal and the direct signal using a binary frequency-shift keying (FSK) test signal. The lightwave source is composed of a DFB-LD that is directly modulated by a pulse stream passing through an equalizer, and emits an FSK signal with a frequency deviation of about 430 MHz at a repetition rate of 80-100 kHz. The receiver consists of a photodiode and an electrical spectrum analyzer (ESA). The base-band power spectrum peak appearing at the FSK frequency deviation can be converted to an amount of MPI using a calibration chart. The test method improves the minimum detectable MPI to as low as -70 dB, compared with -50 dB for the conventional test method. The detailed design and performance of the proposed method are discussed, including the MPI simulator for the calibration procedure, computer simulations for evaluating the error caused by the FSK repetition rate and the length of the fiber under test, and experiments on single-mode fibers and a distributed Raman amplifier.

  13. A new method of field MRTD test

    NASA Astrophysics Data System (ADS)

    Chen, Zhibin; Song, Yan; Liu, Xianhong; Xiao, Wenjian

    2014-09-01

    MRTD is an important indicator of the imaging performance of an infrared camera. In the traditional laboratory test, a blackbody is used as the simulated heat source; it is expensive and bulky and cannot meet the requirements of automated field testing of infrared camera MRTD. To solve this problem, this paper introduces a new MRTD detection device, which uses an LED as the simulated heat source and a four-bar target carved in plated zinc sulfide glass as the simulated target. Using a Cassegrain collimation system adapted to high temperatures, the target is projected at effectively infinite distance so that it can be observed by the human eye for the subjective test, or captured for objective measurement by image processing. The method replaces the blackbody with an LED whose color temperature is calibrated against a thermal imager; a relation curve between the LED temperature-controlling current and the simulated blackbody temperature difference is thereby established, giving accurate temperature control of the infrared target. Experimental results show that the accuracy of the device in field testing of thermal imager MRTD is within 0.1 K, which greatly reduces cost while meeting the project requirements, giving the method wide application value.

  14. Testing biological liquid samples using modified m-line spectroscopy method

    NASA Astrophysics Data System (ADS)

    Augusciuk, Elzbieta; Rybiński, Grzegorz

    2005-09-01

    A non-chemical method for detecting sugar concentration in biological (animal- and plant-source) liquids has been investigated. A simplified setup was built to show how easily the survey can be carried out and to make it easy to gather multiple measurements for error detection and statistics. The method is suggested as an easy and cheap alternative to chemical methods of measuring sugar concentration, though considerable effort is needed to make it precise.

  15. Acoustic Source Localization in Aircraft Interiors Using Microphone Array Technologies

    NASA Technical Reports Server (NTRS)

    Sklanka, Bernard J.; Tuss, Joel R.; Buehrle, Ralph D.; Klos, Jacob; Williams, Earl G.; Valdivia, Nicolas

    2006-01-01

    Using three microphone array configurations at two aircraft body stations on a Boeing 777-300ER flight test, the acoustic radiation characteristics of the sidewall and outboard floor system are investigated by experimental measurement. Analysis of the experimental data is performed using sound intensity calculations for closely spaced microphones, PATCH Inverse Boundary Element Nearfield Acoustic Holography, and Spherical Nearfield Acoustic Holography. The methods are compared by assessing their strengths and weaknesses, evaluating source identification capability for both broadband and narrowband sources, evaluating sources during transient and steady-state conditions, and quantifying field reconstruction continuity using multiple array positions.

  16. Identifying sources of heterogeneity in capture probabilities: An example using the Great Tit Parus major

    USGS Publications Warehouse

    Senar, J.C.; Conroy, M.J.; Carrascal, L.M.; Domenech, J.; Mozetich, I.; Uribe, F.

    1999-01-01

    Heterogeneous capture probabilities are a common problem in many capture-recapture studies. Several methods of detecting the presence of such heterogeneity are currently available, and stratification of data has been suggested as the standard method to avoid its effects. However, few studies have tried to identify sources of heterogeneity, or whether there are interactions among sources. The aim of this paper is to suggest an analytical procedure to identify sources of capture heterogeneity. We use data on the sex and age of Great Tits captured in baited funnel traps at two localities differing in average temperature. We additionally use 'recapture' data obtained by videotaping at a feeder (with no associated trap), where tits ringed with different colours were recorded. This allowed us to test whether individuals in different classes (age, sex and condition) are not trapped because of trap shyness or because of a reduced use of the bait. We used logistic regression analysis of the capture probabilities to test for the effects of age, sex, condition, location and 'recapture' method. The results showed a higher recapture probability in the colder locality. Yearling birds (either males or females) had the highest recapture probabilities, followed by adult males, while adult females had the lowest recapture probabilities. There was no effect of the method of 'recapture' (trap or videotape), which suggests that adult females are less often captured in traps not because of trap-shyness but because of less dependence on supplementary food. The potential use of this methodological approach in other studies is discussed.

  17. True versus Apparent Malaria Infection Prevalence: The Contribution of a Bayesian Approach

    PubMed Central

    Claes, Filip; Van Hong, Nguyen; Torres, Kathy; Mao, Sokny; Van den Eede, Peter; Thi Thinh, Ta; Gamboa, Dioni; Sochantha, Tho; Thang, Ngo Duc; Coosemans, Marc; Büscher, Philippe; D'Alessandro, Umberto; Berkvens, Dirk; Erhart, Annette

    2011-01-01

    Aims To present a new approach for estimating the “true prevalence” of malaria and apply it to datasets from Peru, Vietnam, and Cambodia. Methods Bayesian models were developed for estimating both the malaria prevalence using different diagnostic tests (microscopy, PCR & ELISA), without the need of a gold standard, and the tests' characteristics. Several sources of information, i.e. data, expert opinions and other sources of knowledge, can be integrated into the model. This approach, which results in an optimal and harmonized estimate of malaria infection prevalence with no conflict between the different sources of information, was tested on data from Peru, Vietnam and Cambodia. Results Malaria sero-prevalence was relatively low in all sites, with ELISA showing the highest estimates. The sensitivity of microscopy and ELISA was statistically lower in Vietnam than in the other sites. Similarly, the specificities of microscopy, ELISA and PCR were significantly lower in Vietnam than in the other sites. In Vietnam and Peru, microscopy was closer to the “true” estimate than the other two tests, while as expected ELISA, with its lower specificity, usually overestimated the prevalence. Conclusions Bayesian methods are useful for analyzing prevalence results when no gold standard diagnostic test is available. Though some results are expected, e.g. PCR being more sensitive than microscopy, a standardized and context-independent quantification of the diagnostic tests' characteristics (sensitivity and specificity) and the underlying malaria prevalence may be useful for comparing different sites. Indeed, the use of a single diagnostic technique could strongly bias the prevalence estimation. This limitation can be circumvented by using a Bayesian framework taking into account the imperfect characteristics of the currently available diagnostic tests. As discussed in the paper, this approach may further support global malaria burden estimation initiatives. PMID:21364745
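
    As a point of reference for the Bayesian machinery, the standard frequentist correction for an imperfect test follows from p_apparent = Se*p + (1 - Sp)*(1 - p). The snippet below implements that inversion (the Rogan-Gladen estimator); it is a simpler stand-in for the paper's model, which instead places priors on prevalence, sensitivity and specificity and estimates them jointly. The numbers are made up.

        def rogan_gladen(apparent: float, se: float, sp: float) -> float:
            """Invert p_app = se*p + (1 - sp)*(1 - p) for the true prevalence p,
            clamped to [0, 1]. A frequentist stand-in for the Bayesian model."""
            p = (apparent + sp - 1.0) / (se + sp - 1.0)
            return min(max(p, 0.0), 1.0)

        # e.g. 12% test-positive with assumed sensitivity 0.95, specificity 0.90
        print(round(rogan_gladen(0.12, 0.95, 0.90), 3))  # ~0.024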

  18. Source apportionment of airborne particulate matter using organic compounds as tracers

    NASA Astrophysics Data System (ADS)

    Schauer, James J.; Rogge, Wolfgang F.; Hildemann, Lynn M.; Mazurek, Monica A.; Cass, Glen R.; Simoneit, Bernd R. T.

    A chemical mass balance receptor model based on organic compounds has been developed that relates source contributions to airborne fine particle mass concentrations. Source contributions to the concentrations of specific organic compounds are revealed as well. The model is applied to four air quality monitoring sites in southern California using atmospheric organic compound concentration data and source test data collected specifically for the purpose of testing this model. The contributions of up to nine primary particle source types can be separately identified in ambient samples based on this method, and approximately 85% of the organic fine aerosol is assigned to primary sources on an annual average basis. The model provides information on source contributions to fine mass concentrations, fine organic aerosol concentrations and individual organic compound concentrations. The largest primary source contributors to fine particle mass concentrations in Los Angeles are found to include diesel engine exhaust, paved road dust, gasoline-powered vehicle exhaust, plus emissions from food cooking and wood smoke, with smaller contributions from tire dust, plant fragments, natural gas combustion aerosol, and cigarette smoke. Once these primary aerosol source contributions are added to the secondary sulfates, nitrates and organics present, virtually all of the annual average fine particle mass at Los Angeles area monitoring sites can be assigned to its source.
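
    In outline, a chemical mass balance receptor model solves an overdetermined linear system: ambient tracer concentrations are modelled as source profiles times non-negative source contributions. A minimal sketch of that computation is given below; the profile matrix, tracer labels and values are fabricated for illustration and are not the study's data.

        import numpy as np
        from scipy.optimize import nnls

        # Rows: tracer compounds; columns: candidate sources. Entries are
        # tracer mass emitted per unit mass of fine particles (fabricated).
        A = np.array([[0.020, 0.001, 0.000],   # hopane-like tracer (vehicles)
                      [0.000, 0.015, 0.001],   # levoglucosan-like (wood smoke)
                      [0.001, 0.000, 0.030],   # cholesterol-like (cooking)
                      [0.004, 0.002, 0.002]])  # a less selective tracer
        b = np.array([0.210, 0.160, 0.095, 0.065])  # ambient concentrations

        contrib, resid = nnls(A, b)  # non-negativity keeps sources physical
        print(contrib)               # fine-particle mass from each source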

  19. Tracing catchment fine sediment sources using the new SIFT (SedIment Fingerprinting Tool) open source software.

    PubMed

    Pulley, S; Collins, A L

    2018-09-01

    The mitigation of diffuse sediment pollution requires reliable provenance information so that measures can be targeted. Sediment source fingerprinting represents one approach for supporting these needs, but recent methodological developments have resulted in an increasing complexity of data processing methods rendering the approach less accessible to non-specialists. A comprehensive new software programme (SIFT; SedIment Fingerprinting Tool) has therefore been developed which guides the user through critical data analysis decisions and automates all calculations. Multiple source group configurations and composite fingerprints are identified and tested using multiple methods of uncertainty analysis. This aims to explore the sediment provenance information provided by the tracers more comprehensively than a single model, and allows for model configurations with high uncertainties to be rejected. This paper provides an overview of its application to an agricultural catchment in the UK to determine if the approach used can provide a reduction in uncertainty and increase in precision. Five source group classifications were used; three formed using a k-means cluster analysis containing 2, 3 and 4 clusters, and two a-priori groups based upon catchment geology. Three different composite fingerprints were used for each classification and bi-plots, range tests, tracer variability ratios and virtual mixtures tested the reliability of each model configuration. Some model configurations performed poorly when apportioning the composition of virtual mixtures, and different model configurations could produce different sediment provenance results despite using composite fingerprints able to discriminate robustly between the source groups. Despite this uncertainty, dominant sediment sources were identified, and those in close proximity to each sediment sampling location were found to be of greatest importance. This new software, by integrating recent methodological developments in tracer data processing, guides users through key steps. Critically, by applying multiple model configurations and uncertainty assessment, it delivers more robust solutions for informing catchment management of the sediment problem than many previously used approaches. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.

  20. Assessment of Technologies for the Space Shuttle External Tank Thermal Protection System and Recommendations for Technology Improvement - Part III: Material Property Characterization, Analysis, and Test Methods

    NASA Technical Reports Server (NTRS)

    Gates, Thomas S.; Johnson, Theodore F.; Whitley, Karen S.

    2005-01-01

    The objective of this report is to contribute to the independent assessment of the Space Shuttle External Tank Foam Material. This report specifically addresses material modeling, characterization testing, data reduction methods, and data pedigree. A brief description of the External Tank foam materials, locations, and standard failure modes is provided to develop suitable background information. A review of mechanics based analysis methods from the open literature is used to provide an assessment of the state-of-the-art in material modeling of closed cell foams. Further, this report assesses the existing material property database and investigates sources of material property variability. The report presents identified deficiencies in testing methods and procedures, recommendations for additional testing as required, identification of near-term improvements that should be pursued, and long-term capabilities or enhancements that should be developed.

  1. The Source Inversion Validation (SIV) Initiative: A Collaborative Study on Uncertainty Quantification in Earthquake Source Inversions

    NASA Astrophysics Data System (ADS)

    Mai, P. M.; Schorlemmer, D.; Page, M.

    2012-04-01

    Earthquake source inversions image the spatio-temporal rupture evolution on one or more fault planes using seismic and/or geodetic data. Such studies are critically important for earthquake seismology in general, and for advancing seismic hazard analysis in particular, as they reveal earthquake source complexity and help (i) to investigate earthquake mechanics; (ii) to develop spontaneous dynamic rupture models; (iii) to build models for generating rupture realizations for ground-motion simulations. In applications (i - iii), the underlying finite-fault source models are regarded as "data" (input information), but their uncertainties are essentially unknown. After all, source models are obtained from solving an inherently ill-posed inverse problem to which many a priori assumptions and uncertain observations are applied. The Source Inversion Validation (SIV) project is a collaborative effort to better understand the variability between rupture models for a single earthquake (as manifested in the finite-source rupture model database) and to develop robust uncertainty quantification for earthquake source inversions. The SIV project highlights the need to develop a long-standing and rigorous testing platform to examine the current state-of-the-art in earthquake source inversion, and to develop and test novel source inversion approaches. We will review the current status of the SIV project, and report the findings and conclusions of the recent workshops. We will briefly discuss several source-inversion methods, how they treat uncertainties in data, and assess the posterior model uncertainty. Case studies include initial forward-modeling tests on Green's function calculations, and inversion results for synthetic data from a spontaneous dynamic crack-like strike-slip earthquake on a steeply dipping fault, embedded in a layered crustal velocity-density structure.

  2. Image fusion based on Bandelet and sparse representation

    NASA Astrophysics Data System (ADS)

    Zhang, Jiuxing; Zhang, Wei; Li, Xuzhi

    2018-04-01

    The Bandelet transform can capture geometrically regular directions and geometric flow, while sparse representation can represent signals with as few atoms as possible over an over-complete dictionary; both are useful for image fusion. A new fusion method based on the Bandelet transform and sparse representation is therefore proposed, fusing the Bandelet coefficients of multi-source images to obtain high-quality fusion results. Tests were performed on remote sensing images and simulated multi-focus images; experimental results show that the new method outperforms the compared methods in both objective evaluation indexes and subjective visual effects.

  3. 21 CFR 58.120 - Protocol.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ..., body weight range, sex, source of supply, species, strain, substrain, and age of the test system. (5... methods to be used. (b) All changes in or revisions of an approved protocol and the reasons therefore...

  4. 21 CFR 58.120 - Protocol.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ..., body weight range, sex, source of supply, species, strain, substrain, and age of the test system. (5... methods to be used. (b) All changes in or revisions of an approved protocol and the reasons therefore...

  5. 21 CFR 58.120 - Protocol.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ..., body weight range, sex, source of supply, species, strain, substrain, and age of the test system. (5... methods to be used. (b) All changes in or revisions of an approved protocol and the reasons therefore...

  6. 40 CFR 60.46 - Test methods and procedures.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Standards of Performance for Fossil-Fuel... fossil fuels or fossil fuel and wood residue are fired, the owner or operator (in order to compute the...

  7. 40 CFR 60.46 - Test methods and procedures.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Standards of Performance for Fossil-Fuel... fossil fuels or fossil fuel and wood residue are fired, the owner or operator (in order to compute the...

  8. 40 CFR 60.46 - Test methods and procedures.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Standards of Performance for Fossil-Fuel... fossil fuels or fossil fuel and wood residue are fired, the owner or operator (in order to compute the...

  9. 40 CFR 60.46 - Test methods and procedures.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Standards of Performance for Fossil-Fuel... the results of the four pairs of samples. (c) When combinations of fossil fuels or fossil fuel and...

  10. 40 CFR 60.46 - Test methods and procedures.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Standards of Performance for Fossil-Fuel... the results of the four pairs of samples. (c) When combinations of fossil fuels or fossil fuel and...

  11. Ultra-performance liquid chromatography/tandem mass spectrometric quantification of structurally diverse drug mixtures using an ESI-APCI multimode ionization source.

    PubMed

    Yu, Kate; Di, Li; Kerns, Edward; Li, Susan Q; Alden, Peter; Plumb, Robert S

    2007-01-01

    We report in this paper an ultra-performance liquid chromatography/tandem mass spectrometric (UPLC®/MS/MS) method utilizing an ESI-APCI multimode ionization source to quantify structurally diverse analytes. Eight commercial drugs were used as test compounds. Each LC injection was completed in 1 min using a UPLC system coupled with MS/MS multiple reaction monitoring (MRM) detection. Results from three separate sets of experiments are reported. In the first set of experiments, the eight test compounds were analyzed as a single mixture. The mass spectrometer was switching rapidly among four ionization modes (ESI+, ESI-, APCI-, and APCI+) during an LC run. Approximately 8-10 data points were collected across each LC peak. This was insufficient for quantitative analysis. In the second set of experiments, four compounds were analyzed as a single mixture. The mass spectrometer was switching rapidly among four ionization modes during an LC run. Approximately 15 data points were obtained for each LC peak. Quantification results were obtained with a limit of detection (LOD) as low as 0.01 ng/mL. For the third set of experiments, the eight test compounds were analyzed as a batch. During each LC injection, a single compound was analyzed. The mass spectrometer was detecting at a particular ionization mode during each LC injection. More than 20 data points were obtained for each LC peak. Quantification results were also obtained. This single-compound analytical method was applied to a microsomal stability test. Compared with a typical HPLC method currently used for the microsomal stability test, the injection-to-injection cycle time was reduced to 1.5 min (UPLC method) from 3.5 min (HPLC method). The microsome stability results were comparable with those obtained by traditional HPLC/MS/MS.

  12. Non-Invasive Seismic Methods for Earthquake Site Classification Applied to Ontario Bridge Sites

    NASA Astrophysics Data System (ADS)

    Bilson Darko, A.; Molnar, S.; Sadrekarimi, A.

    2017-12-01

    How a site responds to earthquake shaking, and the corresponding damage, is largely influenced by the underlying ground conditions through which the seismic waves propagate. The effects of site conditions on propagating seismic waves can be predicted from measurements of the shear wave velocity (Vs) of the soil layer(s) and the impedance ratio between bedrock and soil. Currently the seismic design of new buildings and bridges (2015 Canadian building and bridge codes) requires determination of the time-averaged shear-wave velocity of the upper 30 metres (Vs30) of a given site. In this study, two in situ Vs profiling methods, Multichannel Analysis of Surface Waves (MASW) and Ambient Vibration Array (AVA), are used to determine Vs30 at chosen bridge sites in Ontario, Canada. Both active-source (MASW) and passive-source (AVA) surface wave methods are used at each bridge site to obtain Rayleigh-wave phase velocities over a wide frequency bandwidth. The dispersion curve is jointly inverted with each site's amplification function (microtremor horizontal-to-vertical spectral ratio) to obtain shear-wave velocity profile(s). We apply our non-invasive testing at three major infrastructure projects, e.g., five bridge sites along the Rt. Hon. Herb Gray Parkway in Windsor, Ontario. Our non-invasive testing is co-located with previous invasive testing, including Standard Penetration Test (SPT), Cone Penetration Test, and downhole Vs data. Correlations between SPT blowcount and Vs are developed for the different soil types sampled at our Ontario bridge sites. A robust earthquake site classification procedure (reliable Vs30 estimates) for bridge sites across Ontario is evaluated from available combinations of invasive and non-invasive site characterization methods.
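
    The Vs30 quantity the study targets is a harmonic (travel-time) average, not an arithmetic one: Vs30 = 30 / sum(h_i / Vs_i) over the layers in the top 30 m. A small sketch, with a made-up velocity profile:

        def vs30(thicknesses_m, velocities_mps):
            """Time-averaged shear-wave velocity of the upper 30 m."""
            depth = travel_time = 0.0
            for h, vs in zip(thicknesses_m, velocities_mps):
                h = min(h, 30.0 - depth)   # clip the last layer at 30 m
                travel_time += h / vs
                depth += h
                if depth >= 30.0:
                    break
            return 30.0 / travel_time

        # Hypothetical profile: 5 m of 180 m/s fill, 12 m of 300 m/s sand,
        # then stiffer till
        print(round(vs30([5, 12, 40], [180, 300, 600])))  # ~335 m/s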

  13. Gas Production Strategy of Underground Coal Gasification Based on Multiple Gas Sources

    PubMed Central

    Tianhong, Duan; Zuotang, Wang; Limin, Zhou; Dongdong, Li

    2014-01-01

    To lower the stability requirement on gas production in UCG (underground coal gasification), create better room and opportunities for the development of UCG, an emerging sunrise industry, in its initial stage, and reduce the emission of blast furnace gas, converter gas, and coke oven gas, this paper, for the first time, puts forward a new mode of utilization of multiple gas sources, mainly including ground gasifier gas, UCG gas, blast furnace gas, converter gas, and coke oven gas; the new mode was demonstrated by field tests. According to the field tests, existing power generation technology can fully accommodate the high hydrogen content, low calorific value, and output fluctuations of UCG gas in multiple-gas-source power generation, where large fluctuations are tolerable and air can serve as the gasifying agent; by contrast, UCG gas production in the combined power-and-methanol mode based on multiple gas sources has a strict stability requirement. The field tests demonstrated that fluctuations in UCG gas production can be effectively monitored with a quality control chart method. PMID:25114953

  14. Gas production strategy of underground coal gasification based on multiple gas sources.

    PubMed

    Tianhong, Duan; Zuotang, Wang; Limin, Zhou; Dongdong, Li

    2014-01-01

    To lower the stability requirement on gas production in UCG (underground coal gasification), create better room and opportunities for the development of UCG, an emerging sunrise industry, in its initial stage, and reduce the emission of blast furnace gas, converter gas, and coke oven gas, this paper, for the first time, puts forward a new mode of utilization of multiple gas sources, mainly including ground gasifier gas, UCG gas, blast furnace gas, converter gas, and coke oven gas; the new mode was demonstrated by field tests. According to the field tests, existing power generation technology can fully accommodate the high hydrogen content, low calorific value, and output fluctuations of UCG gas in multiple-gas-source power generation, where large fluctuations are tolerable and air can serve as the gasifying agent; by contrast, UCG gas production in the combined power-and-methanol mode based on multiple gas sources has a strict stability requirement. The field tests demonstrated that fluctuations in UCG gas production can be effectively monitored with a quality control chart method.
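
    The quality control chart mentioned in both records is, in essence, an individuals (Shewhart) chart: estimate sigma from the moving range and flag gas-quality readings outside mean +/- 3 sigma. A minimal sketch, with invented calorific values:

        import numpy as np

        def shewhart_limits(values):
            """Individuals-chart limits; sigma from the mean moving range
            divided by d2 = 1.128 (the constant for subgroups of size 2)."""
            x = np.asarray(values, dtype=float)
            sigma = np.abs(np.diff(x)).mean() / 1.128
            center = x.mean()
            return center - 3 * sigma, center, center + 3 * sigma

        # Hypothetical hourly UCG gas calorific values (MJ/Nm^3)
        gas = [5.1, 4.8, 5.3, 5.0, 4.9, 5.2, 4.7, 5.4, 5.0, 4.6]
        lcl, cl, ucl = shewhart_limits(gas)
        print(f"LCL={lcl:.2f}  CL={cl:.2f}  UCL={ucl:.2f}")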

  15. Domain Regeneration for Cross-Database Micro-Expression Recognition

    NASA Astrophysics Data System (ADS)

    Zong, Yuan; Zheng, Wenming; Huang, Xiaohua; Shi, Jingang; Cui, Zhen; Zhao, Guoying

    2018-05-01

    In this paper, we investigate the cross-database micro-expression recognition problem, where the training and testing samples come from two different micro-expression databases. Under this setting, the training and testing samples have different feature distributions, and hence the performance of most existing micro-expression recognition methods may degrade greatly. To solve this problem, we propose a simple yet effective method called the Target Sample Re-Generator (TSRG). Using TSRG, we are able to re-generate the samples from the target micro-expression database such that the re-generated target samples share the same or similar feature distributions as the original source samples. We can then use the classifier learned on the labeled source samples to accurately predict the micro-expression categories of the unlabeled target samples. To evaluate the performance of the proposed TSRG method, extensive cross-database micro-expression recognition experiments based on the SMIC and CASME II databases are conducted. Compared with recent state-of-the-art cross-database emotion recognition methods, the proposed TSRG achieves more promising results.

  16. Potential microbial risk factors related to soil amendments and irrigation water of potato crops.

    PubMed

    Selma, M V; Allende, A; López-Gálvez, F; Elizaquível, P; Aznar, R; Gil, M I

    2007-12-01

    This study assesses the potential microbial risk factors related to the use of soil amendments and irrigation water on potato crops, cultivated in one traditional and two intensive farms during two harvest seasons. The natural microbiota and potentially pathogenic micro-organisms were evaluated in the soil amendment, irrigation water, soil and produce. Uncomposted amendments and residual and creek water samples showed the highest microbial counts. The microbial load of potatoes harvested in spring was similar among the tested farms despite the diverse microbial levels of Listeria spp. and faecal coliforms in the potential risk sources. However, differences in total coliform load of potato were found between farms cultivated in the autumn. Immunochromatographic rapid tests and the BAM's reference method (Bacteriological Analytical Manual; AOAC International) were used to detect Escherichia coli O157:H7 from the potential risk sources and produce. Confirmation of the positive results by polymerase chain reaction procedures showed that the immunochromatographic assay was not reliable as it led to false-positive results. The potentially pathogenic micro-organisms of soil amendment, irrigation water and soil samples changed with the harvest seasons and the use of different agricultural practices. However, the microbial load of the produce was not always influenced by these risk sources. Improvements in environmental sample preparation are needed to avoid interferences in the use of immunochromatographic rapid tests. The potential microbial risk sources of fresh produce should be regularly controlled using reliable detection methods to guarantee their microbial safety.

  17. Application of the Approximate Bayesian Computation methods in the stochastic estimation of atmospheric contamination parameters for mobile sources

    NASA Astrophysics Data System (ADS)

    Kopka, Piotr; Wawrzynczak, Anna; Borysiewicz, Mieczyslaw

    2016-11-01

    In this paper the Bayesian methodology known as Approximate Bayesian Computation (ABC) is applied to the problem of atmospheric contamination source identification. The algorithm's input data are online-arriving concentrations of the released substance registered by the distributed sensor network. This paper presents the Sequential ABC algorithm in detail and tests its efficiency in estimating the probability distributions of the atmospheric release parameters of a mobile contamination source. The developed algorithms are tested using data from the Over-Land Atmospheric Diffusion (OLAD) field tracer experiment. The paper demonstrates estimation of seven parameters characterizing the contamination source, i.e.: the source's starting position (x,y), its direction of motion (d), its velocity (v), the release rate (q), the start time of the release (ts) and its duration (td). The newly arriving concentrations dynamically update the probability distributions of the search parameters. The atmospheric dispersion Second-order Closure Integrated PUFF (SCIPUFF) model is used as the forward model to predict the concentrations at the sensor locations.
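
    The simplest member of the ABC family conveys the idea: draw parameters from the prior, run the forward dispersion model, and keep draws whose simulated sensor readings fall within a tolerance of the observations. The sketch below uses plain rejection ABC and a crude Gaussian stand-in for the forward model; the paper's sequential variant shrinks the tolerance over generations and uses SCIPUFF instead.

        import numpy as np

        rng = np.random.default_rng(1)
        sensors = np.array([2.0, 5.0, 9.0])

        def plume(theta):
            """Toy 1-D forward model: source at x0 with rate q."""
            x0, q = theta
            return q * np.exp(-0.5 * (sensors - x0) ** 2)

        def abc_rejection(observed, n_draws=20000, eps=0.5):
            accepted = []
            for _ in range(n_draws):
                theta = (rng.uniform(0, 10), rng.uniform(0, 5))  # prior draw
                if np.linalg.norm(plume(theta) - observed) < eps:
                    accepted.append(theta)
            return np.array(accepted)

        posterior = abc_rejection(plume((4.0, 3.0)))
        print(posterior.mean(axis=0))  # close to the true (4, 3)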

  18. Searches for point sources in the Galactic Center region

    NASA Astrophysics Data System (ADS)

    di Mauro, Mattia; Fermi-LAT Collaboration

    2017-01-01

    Several groups have demonstrated the existence of an excess in the gamma-ray emission around the Galactic Center (GC) with respect to the predictions from a variety of Galactic Interstellar Emission Models (GIEMs) and point source catalogs. The origin of this excess, peaked at a few GeV, is still under debate. A possible interpretation is that it comes from a population of unresolved Millisecond Pulsars (MSPs) in the Galactic bulge. We investigate the detection of point sources in the GC region using new tools which the Fermi-LAT Collaboration is developing in the context of searches for Dark Matter (DM) signals. These new tools perform very fast scans, iteratively testing for additional point sources at each pixel of the region of interest. We also show how to discriminate between point sources and structural residuals from the GIEM. We apply these methods to the GC region considering different GIEMs and testing the DM and MSP interpretations of the GC excess. Additionally, we create a list of promising MSP candidates that could represent the brightest sources of a bulge MSP population.

  19. Detection of adenoviruses and rotaviruses in drinking water sources used in rural areas of Benin, West Africa.

    PubMed

    Verheyen, Jens; Timmen-Wego, Monika; Laudien, Rainer; Boussaad, Ibrahim; Sen, Sibel; Koc, Aynur; Uesbeck, Alexandra; Mazou, Farouk; Pfister, Herbert

    2009-05-01

    Diseases associated with viruses also found in environmental samples cause major health problems in developing countries. Little is known about the frequency and pattern of viral contamination of drinking water sources in these resource-poor settings. We established a method to analyze 10 liters of water from drinking water sources in a rural area of Benin for the presence of adenoviruses and rotaviruses. Overall, 541 samples from 287 drinking water sources were tested. A total of 12.9% of the sources were positive for adenoviruses and 2.1% of the sources were positive for rotaviruses at least once. Due to the temporary nature of viral contamination in drinking water sources, the probability of virus detection increased with the number of samples taken at one test site over time. No seasonal pattern for viral contamination was found after samples obtained during the dry and wet seasons were compared. Overall, 3 of 15 surface water samples (20%) and 35 of 247 wells (14.2%), but also 2 of 25 pumps (8%), tested positive for adenoviruses or rotaviruses. The presence of latrines within a radius of 50 m of pumps or wells was identified as a risk factor for virus detection. In summary, viral contamination was correlated with the presence of latrines in the vicinity of drinking water sources, indicating the importance of appropriate decision support systems in these socioeconomically prospering regions.

  20. Studies on Beam Formation in an Atomic Beam Source

    NASA Astrophysics Data System (ADS)

    Nass, A.; Stancari, M.; Steffens, E.

    2009-08-01

    Atomic beam sources (ABSs) are widely used workhorses producing polarized atomic beams for polarized gas targets and polarized ion sources. Although they have been used for decades, the understanding of the beam formation processes remains crude. Models have been used with mixed success to describe the measured intensity and beam parameters. ABSs are also foreseen for future experiments such as PAX [1], where an increase of intensity at high polarization would be beneficial. A direct simulation Monte Carlo (DSMC) method [2] was used to describe the beam formation of a hydrogen or deuterium beam in an ABS. For the first time, a simulation of a supersonic gas expansion at the molecular level was performed for this application. Beam profile and time-of-flight measurements confirmed the simulation results. Furthermore, a new method of beam formation was tested: the Carrier Jet method [3], based on an expanded beam surrounded by an over-expanded carrier jet.

  1. Classical-processing and quantum-processing signal separation methods for qubit uncoupling

    NASA Astrophysics Data System (ADS)

    Deville, Yannick; Deville, Alain

    2012-12-01

    The Blind Source Separation problem consists of estimating a set of unknown source signals from their measured combinations. Until now it had been investigated only in a non-quantum framework. We propose its first quantum extensions, thus introducing the Quantum Source Separation field and investigating both its blind and non-blind configurations. More precisely, we show how to retrieve individual quantum bits (qubits) from the global state resulting from their undesired coupling. We consider cylindrical-symmetry Heisenberg coupling, which occurs, e.g., when two electron spins interact through exchange. We first propose several qubit uncoupling methods which typically measure repeatedly the coupled quantum states resulting from individual qubit preparations, and then statistically process the classical data provided by these measurements. Numerical tests prove the effectiveness of these methods. We then derive a combination of quantum gates for performing qubit uncoupling, thus avoiding repeated qubit preparations and irreversible measurements.

  2. Functional performance of pyrovalves

    NASA Technical Reports Server (NTRS)

    Bement, Laurence J.

    1996-01-01

    Following several flight and ground test failures of spacecraft systems using single-shot, 'normally closed' pyrotechnically actuated valves (pyrovalves), a Government/Industry cooperative program was initiated to assess the functional performance of five qualified designs. The goal of the program was to provide information on the functional performance of pyrovalves to allow users the opportunity to improve procurement requirements. Specific objectives included the demonstration of performance test methods, the measurement of 'blowby' (the passage of gases from the pyrotechnic energy source around the activating piston into the valve's fluid path), and the quantification of functional margins for each design. Experiments were conducted at NASA's Langley Research Center on several units of each of the five valve designs. The test methods used for this program measured the forces and energies required to actuate the valves, as well as the energies and the pressures (where possible) delivered by the pyrotechnic sources. Functional performance ranged widely among the designs. Blowby could not be prevented by o-ring seals; metal-to-metal seals were effective. Functional margin was determined by dividing the energy delivered by the pyrotechnic sources in excess of that required to accomplish the function by the energy required for that function. Two of the five designs had inadequate functional margins with the pyrotechnic cartridges evaluated.

  3. Evaluation of Delamination Onset and Growth Characterization Methods under Mode I Fatigue Loading

    NASA Technical Reports Server (NTRS)

    Murri, Gretchen B.

    2013-01-01

    Double-cantilevered beam specimens of IM7/8552 graphite/epoxy from two different manufacturers were tested in static and fatigue to compare the material characterization data and to evaluate a proposed ASTM standard for generating Paris Law equations for delamination growth. Static results were used to generate compliance calibration constants for reducing the fatigue data, and a delamination resistance curve, GIR, for each material. Specimens were tested in fatigue at different initial cyclic GImax levels to determine a delamination onset curve and the delamination growth rate. The delamination onset curve equations were similar for the two sources. Delamination growth rate was calculated by plotting da/dN versus GImax on a log-log scale and fitting a Paris Law. Two different data reduction methods were used to calculate da/dN. To determine the effects of fiber-bridging, growth results were normalized by the delamination resistance curves. Paris Law exponents decreased by 31% to 37% after normalizing the data. Visual data records from the fatigue tests were used to calculate individual compliance constants from the fatigue data. The resulting da/dN versus GImax plots showed improved repeatability for each source, compared to using averaged static data. The Paris Law expressions for the two sources showed the closest agreement using the individually fit compliance data.
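
    The Paris Law fit itself reduces to a straight line in log-log space, log(da/dN) = log C + n*log(G_Imax), so the exponent comes straight out of a linear regression. A minimal sketch with fabricated DCB fatigue data:

        import numpy as np

        def fit_paris_law(g_imax, dadn):
            """Return (C, n) for da/dN = C * G_Imax**n via log-log regression."""
            n, log_c = np.polyfit(np.log10(g_imax), np.log10(dadn), 1)
            return 10.0 ** log_c, n

        # Fabricated data: G_Imax in J/m^2, da/dN in mm/cycle
        g = np.array([60.0, 80.0, 100.0, 130.0, 170.0, 220.0])
        rate = np.array([1.2e-6, 6.1e-6, 2.4e-5, 1.1e-4, 5.0e-4, 2.3e-3])
        c, n = fit_paris_law(g, rate)
        print(f"C = {c:.3e}, n = {n:.2f}")  # n is the Paris Law exponent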

  4. PyHLA: tests for the association between HLA alleles and diseases.

    PubMed

    Fan, Yanhui; Song, You-Qiang

    2017-02-06

    Recently, several tools have been designed for human leukocyte antigen (HLA) typing using single nucleotide polymorphism (SNP) array and next-generation sequencing (NGS) data. These tools provide high-throughput and cost-effective approaches for identifying HLA types. Therefore, tools for downstream association analysis are highly desirable. Although several tools have been designed for multi-allelic marker association analysis, they were designed only for microsatellite markers and do not scale well with increasing data volumes, or they were designed for large-scale data but provide a limited number of tests. We have developed a Python package called PyHLA, which implements several methods for HLA association analysis, to fill this gap. PyHLA is a tailor-made, easy-to-use, and flexible tool designed specifically for the association analysis of HLA types imputed from genome-wide genotyping and NGS data. PyHLA provides functions for association analysis, zygosity tests, and interaction tests between HLA alleles and diseases. Monte Carlo permutation and several methods for multiple-testing correction have also been implemented. PyHLA provides a convenient and powerful tool for HLA analysis: existing methods have been integrated and desired methods have been added. Furthermore, PyHLA is applicable to small and large sample sizes and can finish the analysis in a timely manner on a personal computer across different platforms. PyHLA is implemented in Python and is free, open-source software distributed under the GPLv2 license. The source code, tutorial, and examples are available at https://github.com/felixfan/PyHLA.

  5. Development of the Vertical Electro Magnetic Profiling (VEMP) method

    NASA Astrophysics Data System (ADS)

    Miura, Yasuo; Osato, Kazumi; Takasugi, Shinji; Muraoka, Hirofumi; Yasukawa, Kasumi

    1996-09-01

    As a part of the "Deep-Seated Geothermal Resources Survey (DSGR)" project being undertaken by the New Energy and Industrial Technology Development Organization (NEDO), the "Vertical Electro Magnetic Profiling (VEMP)" method is being developed to accurately resolve deep resistivity structures. The VEMP method takes multi-frequency, three-component magnetic field data in an open-hole well using controlled sources transmitting at the surface (either loop or grounded-wire sources). Numerical simulations using EM3D have demonstrated that phase data from the VEMP method are not only very sensitive to the general resistivity structure but also indicate the presence of deeper anomalies. Forward modelling was used to determine the required transmitter moments for various grounded-wire and loop sources for a field test using the WD-1 well in the Kakkonda geothermal area. VEMP logging of the WD-1 well was carried out in May 1994, and the processed field data match the computer simulations quite well.

  6. Interpretation of Trace Gas Data Using Inverse Methods and Global Chemical Transport Models

    NASA Technical Reports Server (NTRS)

    Prinn, Ronald G.

    1997-01-01

    This is a theoretical research project aimed at: (1) developing, testing, and refining inverse methods for determining regional and global transient source and sink strengths for long-lived gases important in ozone depletion and climate forcing; (2) using these inverse methods, together with the NCAR/Boulder CCM2-T42 3-D model and a global 3-D Model for Atmospheric Transport and Chemistry (MATCH) based on analyzed observed wind fields (developed in collaboration by MIT and NCAR/Boulder), to determine these source/sink strengths; (3) determining global (and perhaps regional) average hydroxyl radical concentrations using inverse methods with multiple titrating gases; and (4) computing the lifetimes and spatially resolved destruction rates of trace gases using 3-D models. Important goals include determination of regional source strengths of methane, nitrous oxide, and other climatically and chemically important biogenic trace gases, and also of halocarbons restricted by the Montreal Protocol and its follow-on agreements and of hydrohalocarbons used as alternatives to the restricted halocarbons.

  7. 40 CFR 63.9621 - What test methods and other procedures must I use to demonstrate initial compliance with the...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... must I use to demonstrate initial compliance with the emission limits for particulate matter? 63.9621... the emission limits for particulate matter? (a) You must conduct each performance test that applies to... source, you must determine compliance with the applicable emission limit for particulate matter in Table...

  8. 40 CFR 63.9621 - What test methods and other procedures must I use to demonstrate initial compliance with the...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... must I use to demonstrate initial compliance with the emission limits for particulate matter? 63.9621... the emission limits for particulate matter? (a) You must conduct each performance test that applies to... source, you must determine compliance with the applicable emission limit for particulate matter in Table...

  9. Formaldehyde emission from particleboard and plywood paneling : measurement, mechanism, and product standards

    Treesearch

    George E. Myers

    1983-01-01

    A number of commercial panel products, primarily particleboard and hardwood plywood, were tested for their formaldehyde emission behavior using desiccator, perforator, and dynamic chamber methods. The results were analyzed in terms of the source of formaldehyde observed in the tests (free vs. hydrolytically produced) and the potential utility of the tests as product...

  10. Automated lung sound analysis for detecting pulmonary abnormalities.

    PubMed

    Datta, Shreyasi; Dutta Choudhury, Anirban; Deshpande, Parijat; Bhattacharya, Sakyajit; Pal, Arpan

    2017-07-01

    Identification of pulmonary diseases requires accurate auscultation as well as elaborate and expensive pulmonary function tests. Prior art has shown that pulmonary diseases lead to abnormal lung sounds such as wheezes and crackles. This paper introduces novel spectral and spectrogram features, which are further refined by the Maximal Information Coefficient, leading to the classification of healthy and abnormal lung sounds. A balanced lung sound dataset, consisting of publicly available data and data collected with a low-cost in-house digital stethoscope, is used. The performance of the classifier is validated over several randomly selected, non-overlapping training and validation samples and tested on separate subjects for two test cases: (a) overlapping and (b) non-overlapping data sources in training and testing. The results reveal that the proposed method sustains an accuracy of 80% even for non-overlapping data sources in training and testing.
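
    A skeletal version of the spectrogram-to-features step might look like the fragment below: average log power in a few frequency bands, yielding a vector a classifier can consume. The band edges, sampling rate and synthetic signals are invented, and the paper's actual features (plus its Maximal Information Coefficient selection step) are considerably richer.

        import numpy as np
        from scipy.signal import spectrogram

        def band_power_features(audio, fs=4000,
                                bands=((100, 400), (400, 800), (800, 1600))):
            """Mean log power of the spectrogram in each frequency band."""
            f, _, sxx = spectrogram(audio, fs=fs, nperseg=256)
            return np.array([np.log(sxx[(f >= lo) & (f < hi)].mean() + 1e-12)
                             for lo, hi in bands])

        # Synthetic stand-ins: a wheeze-like 450 Hz tone vs. broadband noise
        fs = 4000
        t = np.arange(fs) / fs
        wheeze = np.sin(2 * np.pi * 450 * t) + 0.1 * np.random.randn(fs)
        normal = np.random.randn(fs)
        print(band_power_features(wheeze), band_power_features(normal))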

  11. Aeronautic Instruments. Section V : Power Plant Instruments

    NASA Technical Reports Server (NTRS)

    Washburn, G E; Sylvander, R C; Mueller, E F; Wilhelm, R M; Eaton, H N; Warner, John A C

    1923-01-01

    Part 1 gives a general discussion of the uses, principles, construction, and operation of airplane tachometers. Detailed description of all available instruments, both foreign and domestic, are given. Part 2 describes methods of tests and effect of various conditions encountered in airplane flight such as change of temperature, vibration, tilting, and reduced air pressure. Part 3 describes the principal types of distance reading thermometers for aircraft engines, including an explanation of the physical principles involved in the functioning of the instruments and proper filling of the bulbs. Performance requirements and testing methods are given and a discussion of the source of error and results of tests. Part 4 gives methods of tests and calibration, also requirements of gauges of this type for the pressure measurement of the air pressure in gasoline tanks and the engine oil pressure on airplanes. Part 5 describes two types of gasoline gauges, the float type and the pressure type. Methods of testing and calibrating gasoline depth gauges are given. The Schroeder, R. A. E., and the Mark II flowmeters are described.

  12. Methodology to improve design of accelerated life tests in civil engineering projects.

    PubMed

    Lin, Jing; Yuan, Yongbo; Zhou, Jilai; Gao, Jie

    2014-01-01

    For reliability testing, an Energy Expansion Tree (EET) and a companion Energy Function Model (EFM) are proposed and described in this paper. Different from conventional approaches, the EET provides a more comprehensive and objective way to systematically identify external energy factors affecting reliability. The EFM introduces energy loss into a traditional Function Model to identify internal energy sources affecting reliability. The combination creates a sound way to enumerate the energies to which a system may be exposed during its lifetime. We input these energies into the planning of an accelerated life test, a Multi Environment Over Stress Test, whose objective is to discover weak links and interactions among the system and the energies to which it is exposed, and design them out. As an example, the methods are applied to pipe in a subsea pipeline, but they can be widely used in other civil engineering industries as well. The proposed method is compared with current methods.

  13. Constraints on Galactic Neutrino Emission with Seven Years of IceCube Data

    NASA Astrophysics Data System (ADS)

    Aartsen, M. G.; Ackermann, M.; Adams, J.; Aguilar, J. A.; Ahlers, M.; Ahrens, M.; Samarai, I. Al; Altmann, D.; Andeen, K.; Anderson, T.; Ansseau, I.; Anton, G.; Argüelles, C.; Auffenberg, J.; Axani, S.; Bagherpour, H.; Bai, X.; Barron, J. P.; Barwick, S. W.; Baum, V.; Bay, R.; Beatty, J. J.; Becker Tjus, J.; Becker, K.-H.; BenZvi, S.; Berley, D.; Bernardini, E.; Besson, D. Z.; Binder, G.; Bindig, D.; Blaufuss, E.; Blot, S.; Bohm, C.; Börner, M.; Bos, F.; Bose, D.; Böser, S.; Botner, O.; Bourbeau, J.; Bradascio, F.; Braun, J.; Brayeur, L.; Brenzke, M.; Bretz, H.-P.; Bron, S.; Burgman, A.; Carver, T.; Casey, J.; Casier, M.; Cheung, E.; Chirkin, D.; Christov, A.; Clark, K.; Classen, L.; Coenders, S.; Collin, G. H.; Conrad, J. M.; Cowen, D. F.; Cross, R.; Day, M.; de André, J. P. A. M.; De Clercq, C.; DeLaunay, J. J.; Dembinski, H.; De Ridder, S.; Desiati, P.; de Vries, K. D.; de Wasseige, G.; de With, M.; DeYoung, T.; Díaz-Vélez, J. C.; di Lorenzo, V.; Dujmovic, H.; Dumm, J. P.; Dunkman, M.; Eberhardt, B.; Ehrhardt, T.; Eichmann, B.; Eller, P.; Evenson, P. A.; Fahey, S.; Fazely, A. R.; Felde, J.; Filimonov, K.; Finley, C.; Flis, S.; Franckowiak, A.; Friedman, E.; Fuchs, T.; Gaisser, T. K.; Gallagher, J.; Gerhardt, L.; Ghorbani, K.; Giang, W.; Glauch, T.; Glüsenkamp, T.; Goldschmidt, A.; Gonzalez, J. G.; Grant, D.; Griffith, Z.; Haack, C.; Hallgren, A.; Halzen, F.; Hanson, K.; Hebecker, D.; Heereman, D.; Helbing, K.; Hellauer, R.; Hickford, S.; Hignight, J.; Hill, G. C.; Hoffman, K. D.; Hoffmann, R.; Hokanson-Fasig, B.; Hoshina, K.; Huang, F.; Huber, M.; Hultqvist, K.; In, S.; Ishihara, A.; Jacobi, E.; Japaridze, G. S.; Jeong, M.; Jero, K.; Jones, B. J. P.; Kalacynski, P.; Kang, W.; Kappes, A.; Karg, T.; Karle, A.; Katz, U.; Kauer, M.; Keivani, A.; Kelley, J. L.; Kheirandish, A.; Kim, J.; Kim, M.; Kintscher, T.; Kiryluk, J.; Kittler, T.; Klein, S. R.; Kohnen, G.; Koirala, R.; Kolanoski, H.; Köpke, L.; Kopper, C.; Kopper, S.; Koschinsky, J. P.; Koskinen, D. J.; Kowalski, M.; Krings, K.; Kroll, M.; Krückl, G.; Kunnen, J.; Kunwar, S.; Kurahashi, N.; Kuwabara, T.; Kyriacou, A.; Labare, M.; Lanfranchi, J. L.; Larson, M. J.; Lauber, F.; Lennarz, D.; Lesiak-Bzdak, M.; Leuermann, M.; Liu, Q. R.; Lu, L.; Lünemann, J.; Luszczak, W.; Madsen, J.; Maggi, G.; Mahn, K. B. M.; Mancina, S.; Maruyama, R.; Mase, K.; Maunu, R.; McNally, F.; Meagher, K.; Medici, M.; Meier, M.; Menne, T.; Merino, G.; Meures, T.; Miarecki, S.; Micallef, J.; Momenté, G.; Montaruli, T.; Moore, R. W.; Moulai, M.; Nahnhauer, R.; Nakarmi, P.; Naumann, U.; Neer, G.; Niederhausen, H.; Nowicki, S. C.; Nygren, D. R.; Obertacke Pollmann, A.; Olivas, A.; O'Murchadha, A.; Palczewski, T.; Pandya, H.; Pankova, D. V.; Peiffer, P.; Pepper, J. A.; Pérez de los Heros, C.; Pieloth, D.; Pinat, E.; Plum, M.; Price, P. B.; Przybylski, G. T.; Raab, C.; Rädel, L.; Rameez, M.; Rawlins, K.; Reimann, R.; Relethford, B.; Relich, M.; Resconi, E.; Rhode, W.; Richman, M.; Robertson, S.; Rongen, M.; Rott, C.; Ruhe, T.; Ryckbosch, D.; Rysewyk, D.; Sälzer, T.; Sanchez Herrera, S. E.; Sandrock, A.; Sandroos, J.; Sarkar, S.; Sarkar, S.; Satalecka, K.; Schlunder, P.; Schmidt, T.; Schneider, A.; Schoenen, S.; Schöneberg, S.; Schumacher, L.; Seckel, D.; Seunarine, S.; Soldin, D.; Song, M.; Spiczak, G. M.; Spiering, C.; Stachurska, J.; Stanev, T.; Stasik, A.; Stettner, J.; Steuer, A.; Stezelberger, T.; Stokstad, R. G.; Stößl, A.; Strotjohann, N. L.; Sullivan, G. 
W.; Sutherland, M.; Taboada, I.; Tatar, J.; Tenholt, F.; Ter-Antonyan, S.; Terliuk, A.; Tešić, G.; Tilav, S.; Toale, P. A.; Tobin, M. N.; Toscano, S.; Tosi, D.; Tselengidou, M.; Tung, C. F.; Turcati, A.; Turley, C. F.; Ty, B.; Unger, E.; Usner, M.; Vandenbroucke, J.; Van Driessche, W.; van Eijndhoven, N.; Vanheule, S.; van Santen, J.; Vehring, M.; Vogel, E.; Vraeghe, M.; Walck, C.; Wallace, A.; Wallraff, M.; Wandler, F. D.; Wandkowsky, N.; Waza, A.; Weaver, C.; Weiss, M. J.; Wendt, C.; Westerhoff, S.; Whelan, B. J.; Wickmann, S.; Wiebe, K.; Wiebusch, C. H.; Wille, L.; Williams, D. R.; Wills, L.; Wolf, M.; Wood, J.; Wood, T. R.; Woolsey, E.; Woschnagg, K.; Xu, D. L.; Xu, X. W.; Xu, Y.; Yanez, J. P.; Yodh, G.; Yoshida, S.; Yuan, T.; Zoll, M.; IceCube Collaboration

    2017-11-01

    The origins of high-energy astrophysical neutrinos remain a mystery despite extensive searches for their sources. We present constraints from seven years of IceCube Neutrino Observatory muon data on the neutrino flux coming from the Galactic plane. This flux is expected from cosmic-ray interactions with the interstellar medium or near localized sources. Two methods were developed to test for a spatially extended flux from the entire plane, both of which are maximum likelihood fits but with different signal and background modeling techniques. We consider three templates for Galactic neutrino emission based primarily on gamma-ray observations and models that cover a wide range of possibilities. Based on these templates and in the benchmark case of an unbroken E^-2.5 power-law energy spectrum, we set 90% confidence level upper limits, constraining the possible Galactic contribution to the diffuse neutrino flux to be relatively small, less than 14% of the flux reported in Aartsen et al. above 1 TeV. A stacking method is also used to test catalogs of known high-energy Galactic gamma-ray sources.
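
    A minimal sketch of the kind of binned template likelihood test described above, with toy signal and background templates rather than IceCube data: the observed counts are modeled as n_s * s_i + b_i, and a one-sided 90% confidence level upper limit on the signal normalization is obtained from a likelihood-ratio scan. All inputs are illustrative assumptions.

      import numpy as np
      from scipy.stats import poisson
      from scipy.optimize import minimize_scalar

      def neg_log_like(ns, n, s, b):
          """Binned Poisson negative log-likelihood for signal strength ns."""
          mu = ns * s + b
          return -poisson.logpmf(n, mu).sum()

      def upper_limit_90(n, s, b, ns_max=1000.0):
          # best-fit signal normalization (bounded at zero)
          fit = minimize_scalar(neg_log_like, bounds=(0, ns_max), args=(n, s, b),
                                method="bounded")
          ns_hat, nll_hat = fit.x, fit.fun
          # scan upward until the log-likelihood ratio crosses the one-sided
          # 90% value (2*delta(lnL) = 1.64 for one parameter)
          for ns in np.linspace(ns_hat, ns_max, 5000):
              if neg_log_like(ns, n, s, b) - nll_hat > 1.64 / 2:
                  return ns
          return ns_max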

  14. Three dimensional volcano-acoustic source localization at Karymsky Volcano, Kamchatka, Russia

    NASA Astrophysics Data System (ADS)

    Rowell, Colin

    We test two methods of 3-D acoustic source localization on volcanic explosions and small-scale jetting events at Karymsky Volcano, Kamchatka, Russia. Recent infrasound studies have provided evidence that volcanic jets produce low-frequency aerodynamic sound (jet noise) similar to that from man-made jet engines. Man-made jets are known to produce sound through turbulence along the jet axis, but discrimination of sources along the axis of a volcanic jet requires a network of sufficient topographic relief to attain resolution in the vertical dimension. At Karymsky Volcano, the topography of an eroded edifice adjacent to the active cone provided a platform for the atypical deployment of five infrasound sensors with intra-network relief of ~600 m in July 2012. A novel 3-D inverse localization method, srcLoc, is tested and compared against a more common grid-search semblance technique. Simulations using synthetic signals indicate that srcLoc is capable of determining vertical source locations for this network configuration to within ±150 m or better. However, srcLoc locations for explosions and jetting at Karymsky Volcano show a persistent overestimation of source elevation and underestimation of sound speed by an average of ~330 m and 25 m/s, respectively. The semblance method is able to produce more realistic source locations by fixing the sound speed to expected values of 335-340 m/s. The consistency of location errors for both explosions and jetting activity over a wide range of wind and temperature conditions points to the influence of topography. Explosion waveforms exhibit amplitude relationships and waveform distortion strikingly similar to those theorized by modeling studies of wave diffraction around the crater rim. We suggest delay of signals and apparent elevated source locations are due to altered raypaths and/or crater diffraction effects. Our results suggest the influence of topography in the vent region must be accounted for when attempting 3-D volcano acoustic source localization. Though the data presented here are insufficient to resolve noise sources for these jets, which are much smaller in scale than those of previous volcanic jet noise studies, similar techniques may be successfully applied to large volcanic jets in the future.
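
    The grid-search semblance technique mentioned above can be illustrated with a toy implementation: for each candidate source node, traces are time-shifted by straight-ray travel times and their coherence (semblance) is computed. The geometry, the fixed sound speed, and the absence of topographic corrections are simplifying assumptions of this sketch.

      import numpy as np

      def semblance_map(traces, fs, sensors, nodes, c=340.0):
          """traces: (n_sens, n_samp); sensors, nodes: (..., 3) coordinates in m."""
          n_sens, n_samp = traces.shape
          score = np.zeros(len(nodes))
          for k, node in enumerate(nodes):
              delays = np.linalg.norm(sensors - node, axis=1) / c   # travel times (s)
              shifts = np.round(delays * fs).astype(int)
              # circular shift is acceptable for a toy example
              aligned = np.array([np.roll(tr, -s) for tr, s in zip(traces, shifts)])
              num = aligned.sum(axis=0) ** 2
              den = n_sens * (aligned ** 2).sum(axis=0)
              score[k] = num.sum() / max(den.sum(), 1e-12)
          return score  # nodes[np.argmax(score)] is the source estimate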

  15. Earthquake Source Inversion Blindtest: Initial Results and Further Developments

    NASA Astrophysics Data System (ADS)

    Mai, P.; Burjanek, J.; Delouis, B.; Festa, G.; Francois-Holden, C.; Monelli, D.; Uchide, T.; Zahradnik, J.

    2007-12-01

    Images of earthquake ruptures, obtained from modelling/inverting seismic and/or geodetic data, exhibit a high degree of spatial complexity. This earthquake source heterogeneity controls seismic radiation and is determined by the details of the dynamic rupture process. In turn, such rupture models are used for studying source dynamics and for ground-motion prediction. But how reliable and trustworthy are these earthquake source inversions? Rupture models for a given earthquake, obtained by different research teams, often display striking disparities (see http://www.seismo.ethz.ch/srcmod). However, well-resolved, robust, and hence reliable source-rupture models are an integral part of better understanding earthquake source physics and improving seismic hazard assessment. It is therefore timely to conduct a large-scale validation exercise for comparing the methods, parameterization, and data handling in earthquake source inversions. We recently started a blind test in which several research groups derive a kinematic rupture model from synthetic seismograms calculated for an input model unknown to the source modelers. The first results, for an input rupture model with heterogeneous slip but constant rise time and rupture velocity, reveal large differences between the input and inverted model in some cases, while a few studies achieve high correlation between the input and inferred model. Here we report on the statistical assessment of the set of inverted rupture models to quantitatively investigate their degree of (dis-)similarity. We briefly discuss the different inversion approaches, their possible strengths and weaknesses, and the use of appropriate misfit criteria. Finally, we present new blind-test models, with increasing source complexity and ambient noise on the synthetics. The goal is to attract a large group of source modelers to join this source-inversion blind test in order to conduct a large-scale validation exercise that rigorously assesses the performance and reliability of current inversion methods and to discuss future developments.

  16. Time reversal imaging, Inverse problems and Adjoint Tomography

    NASA Astrophysics Data System (ADS)

    Montagner, J.; Larmat, C. S.; Capdeville, Y.; Kawakatsu, H.; Fink, M.

    2010-12-01

    With the increasing power of computers and numerical techniques (such as spectral element methods), it is possible to address a new class of seismological problems. The propagation of seismic waves in heterogeneous media is simulated more and more accurately, and new applications have been developed, in particular time-reversal methods and adjoint tomography in the three-dimensional Earth. Since the pioneering work of J. Claerbout, theorized by A. Tarantola, many similarities have been found between time-reversal methods, cross-correlation techniques, inverse problems, and adjoint tomography. By using normal mode theory, we generalize the scalar approach of Draeger and Fink (1999) and Lobkis and Weaver (2001) to the 3D elastic Earth, in order to understand the time-reversal method theoretically on the global scale. It is shown how to relate time-reversal methods, on one hand, with auto-correlations of seismograms for source imaging and, on the other hand, with cross-correlations between receivers for structural imaging and retrieval of the Green function. Time-reversal methods were successfully applied in the past to acoustic waves in many fields such as medical imaging, underwater acoustics, and non-destructive testing, and to seismic waves in seismology for earthquake imaging. In the case of source imaging, time-reversal techniques make possible an automatic location in time and space as well as the retrieval of the focal mechanism of earthquakes or unknown environmental sources. We present here some applications of these techniques at the global scale on synthetic tests and on real data, such as the Sumatra-Andaman (Dec. 2004) and Haiti (Jan. 2010) earthquakes, as well as glacial earthquakes and seismic hum.
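
    The correlation view of time reversal sketched above can be illustrated by the standard recipe for retrieving an empirical Green function from cross-correlations of diffuse noise records at two receivers. The window length and averaging below are arbitrary choices, not a prescription from the paper.

      import numpy as np

      def noise_correlation(xa, xb, fs, nwin=2048):
          """Stack windowed cross-correlations of two noise records (1-D arrays)."""
          nseg = min(len(xa), len(xb)) // nwin
          acc = np.zeros(2 * nwin - 1)
          for i in range(nseg):
              a = xa[i * nwin:(i + 1) * nwin]
              b = xb[i * nwin:(i + 1) * nwin]
              acc += np.correlate(a, b, mode="full")   # all lags of this window
          lags = np.arange(-(nwin - 1), nwin) / fs     # lag axis in seconds
          return lags, acc / nseg                      # ~ empirical Green function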

  17. Remote sensing as a source of land cover information utilized in the universal soil loss equation

    NASA Technical Reports Server (NTRS)

    Morris-Jones, D. R.; Morgan, K. M.; Kiefer, R. W.; Scarpace, F. L.

    1979-01-01

    In this study, methods for gathering the land use/land cover information required by the USLE were investigated with medium altitude, multi-date color and color infrared 70-mm positive transparencies using human and computer-based interpretation techniques. Successful results, which compare favorably with traditional field study methods, were obtained within the test site watershed with airphoto data sources and human airphoto interpretation techniques. Computer-based interpretation techniques were not capable of identifying soil conservation practices but were successful to varying degrees in gathering other types of desired land use/land cover information.

  18. Auralization of vibroacoustic models in engineering using Wave Field Synthesis: Application to plates and transmission loss

    NASA Astrophysics Data System (ADS)

    Bolduc, A.; Gauthier, P.-A.; Berry, A.

    2017-12-01

    While perceptual evaluation and sound quality testing with juries are now recognized as essential parts of acoustical product development, they are rarely implemented with spatial sound field reproduction. Instead, monophonic, stereophonic, or binaural presentations are used. This paper investigates the workability and interest of a method for using complete vibroacoustic engineering models for auralization based on 2.5D Wave Field Synthesis (WFS). This method is proposed so that spatial characteristics such as directivity patterns and direction-of-arrival are part of the reproduced sound field while preserving the model's complete formulation, which coherently combines frequency and spatial responses. Modifications to the standard 2.5D WFS operators are proposed for extended primary sources, affecting the reference line definition and compensating for out-of-plane elementary primary sources. Reported simulations and experiments on reproductions of two physically accurate vibroacoustic models of thin plates show that the proposed method allows for an effective reproduction in the horizontal plane: spatial and frequency domain features are recreated. Application of the method to the sound rendering of a virtual transmission loss measurement setup shows the potential of the method for use in virtual acoustical prototyping for jury testing.

  19. Recent Development of IMP LECR3 Ion Source

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Z.M.; Zhao, H.W.; Li, J.Y.

    2005-03-15

    18 GHz microwave power has been fed to the LECR3 ion source to produce intense highly charged ion beams, although the source was designed for 14.5 GHz. In this configuration, 1.1 emA of Ar8+ and 325 eμA of Ar11+ were obtained at 18 GHz. While the source was running for atomic physics experiments, some higher charge state ion beams such as Ar17+ and Ar18+ were detected and validated by atomic physics methods. Furthermore, a few special gases, e.g., SiH4 and SF6, were tested on the LECR3 ion source to produce the ion beams required by atomic physics experiments.

  20. Source apportionment of airborne particulate matter using organic compounds as tracers

    NASA Astrophysics Data System (ADS)

    Schauer, James J.; Rogge, Wolfgang F.; Hildemann, Lynn M.; Mazurek, Monica A.; Cass, Glen R.; Simoneit, Bernd R. T.

    A chemical mass balance receptor model based on organic compounds has been developed that relates source contributions to airborne fine particle mass concentrations. Source contributions to the concentrations of specific organic compounds are revealed as well. The model is applied to four air quality monitoring sites in southern California using atmospheric organic compound concentration data and source test data collected specifically for the purpose of testing this model. The contributions of up to nine primary particle source types can be separately identified in ambient samples based on this method, and approximately 85% of the organic fine aerosol is assigned to primary sources on an annual average basis. The model provides information on source contributions to fine mass concentrations, fine organic aerosol concentrations and individual organic compound concentrations. The largest primary source contributors to fine particle mass concentrations in Los Angeles are found to include diesel engine exhaust, paved road dust, gasoline-powered vehicle exhaust, plus emissions from food cooking and wood smoke, with smaller contributions from tire dust, plant fragments, natural gas combustion aerosol, and cigarette smoke. Once these primary aerosol source contributions are added to the secondary sulfates, nitrates and organics present, virtually all of the annual average fine particle mass at Los Angeles area monitoring sites can be assigned to its source.
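
    At its core, the receptor model described above is a nonnegative least-squares problem: ambient tracer concentrations are modeled as a linear mixture of source profiles. The sketch below illustrates this with placeholder profiles and concentrations, not the study's measured data.

      import numpy as np
      from scipy.optimize import nnls

      # rows = tracer compounds, columns = source types (placeholder values)
      F = np.array([[0.80, 0.05, 0.01],
                    [0.10, 0.60, 0.05],
                    [0.02, 0.10, 0.70]])      # ug tracer per ug fine mass emitted
      c = np.array([0.42, 0.18, 0.09])        # ambient tracer concentrations, ug/m3

      s, resid = nnls(F, c)                   # nonnegative source contributions, ug/m3
      print(dict(zip(["diesel", "road_dust", "wood_smoke"], s.round(3))))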

  1. 40 CFR 60.754 - Test methods and procedures.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... calculating emissions for PSD purposes, the owner or operator of each MSW landfill subject to the provisions of this subpart shall estimate the NMOC emission rate for comparison to the PSD major source and...

  2. 40 CFR 60.754 - Test methods and procedures.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... calculating emissions for PSD purposes, the owner or operator of each MSW landfill subject to the provisions of this subpart shall estimate the NMOC emission rate for comparison to the PSD major source and...

  3. 40 CFR 60.754 - Test methods and procedures.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... calculating emissions for PSD purposes, the owner or operator of each MSW landfill subject to the provisions of this subpart shall estimate the NMOC emission rate for comparison to the PSD major source and...

  4. 40 CFR 60.754 - Test methods and procedures.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... calculating emissions for PSD purposes, the owner or operator of each MSW landfill subject to the provisions of this subpart shall estimate the NMOC emission rate for comparison to the PSD major source and...

  5. 40 CFR 60.754 - Test methods and procedures.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... calculating emissions for PSD purposes, the owner or operator of each MSW landfill subject to the provisions of this subpart shall estimate the NMOC emission rate for comparison to the PSD major source and...

  6. 40 CFR 60.66 - Delegation of authority.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Standards of Performance for Portland Cement Plants § 60... alternative to any non-opacity emissions standard. (2) Approval of a major change to test methods under § 60.8...

  7. Technique to determine location of radio sources from measurements taken on spinning spacecraft

    NASA Technical Reports Server (NTRS)

    Fainberg, J.

    1979-01-01

    The procedure developed to extract average source direction and average source size from spin-modulated radio astronomy data measured on the IMP-6 spacecraft is described. Because all measurements are used, rather than just finding maxima or minima in the data, the method is very sensitive, even in the presence of large amounts of noise. The technique is applicable to all experiments with directivity characteristics. It is suitable for onboard processing on satellites to reduce the data flow to Earth. The application to spin-modulated nonpolarized radio astronomy data is made and includes the effects of noise, background, and second source interference. The analysis was tested with computer simulated data and the results agree with analytic predictions. Applications of this method with IMP-6 radio data have led to: (1) determination of source positions of traveling solar radio bursts at large distances from the Sun; (2) mapping of magnetospheric radio emissions by radio triangulation; and (3) detection of low frequency radio emissions from Jupiter and Saturn.
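
    The core of such a spin-modulation analysis can be illustrated as a linear least-squares fit that uses every sample rather than only the extrema: the intensity is modeled as I(phi) = A + B*cos(phi - phi0), linear in (A, B*cos(phi0), B*sin(phi0)). The model, synthetic data, and noise level below are illustrative assumptions, not the IMP-6 processing.

      import numpy as np

      def fit_spin_modulation(phi, intensity):
          """phi: spin phase (rad); returns offset A, depth B, source azimuth phi0."""
          G = np.column_stack([np.ones_like(phi), np.cos(phi), np.sin(phi)])
          (a, bc, bs), *_ = np.linalg.lstsq(G, intensity, rcond=None)
          return a, np.hypot(bc, bs), np.arctan2(bs, bc)

      rng = np.random.default_rng(0)
      phi = np.linspace(0, 20 * np.pi, 2000)               # many spin periods
      data = 5 + 2 * np.cos(phi - 1.2) + rng.normal(0, 1.0, phi.size)
      print(fit_spin_modulation(phi, data))                # phi0 ~ 1.2 rad despite noise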

  8. Localization of incipient tip vortex cavitation using ray based matched field inversion method

    NASA Astrophysics Data System (ADS)

    Kim, Dongho; Seong, Woojae; Choo, Youngmin; Lee, Jeunghoon

    2015-10-01

    Cavitation of a marine propeller is one of the main contributing factors to broadband radiated ship noise. In this research, an algorithm for the source localization of incipient vortex cavitation is suggested. Incipient cavitation is modeled as a monopole-type source, and a matched-field inversion method is applied to find the source position by comparing the spatial correlation between measured and replicated pressure fields at the receiver array. The accuracy of source localization is improved by a broadband matched-field inversion technique that enhances correlation by incoherently averaging the correlations of individual frequencies. The suggested localization algorithm is verified with a known virtual source and through a model test conducted in the Samsung ship model basin cavitation tunnel. It is found that the suggested algorithm enables efficient localization of incipient tip vortex cavitation using a few pressure measurements on the outer hull above the propeller and is practically applicable to the model-scale experiments typically performed in a cavitation tunnel at the early design stage.
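
    A toy version of the broadband matched-field processor described above: monopole replicas are correlated with the measured field at each frequency, and the correlations are averaged incoherently across frequencies. The array geometry, sound speed, and Bartlett-style processor are illustrative assumptions, not the authors' exact formulation.

      import numpy as np

      def bartlett_broadband(d, freqs, sensors, nodes, c=1500.0):
          """d: (n_freq, n_sens) measured complex pressures; returns (n_nodes,) power."""
          out = np.zeros(len(nodes))
          for f, df in zip(freqs, d):
              k = 2 * np.pi * f / c
              r = np.linalg.norm(nodes[:, None, :] - sensors[None, :, :], axis=2)
              w = np.exp(-1j * k * r) / r                   # monopole replica fields
              w /= np.linalg.norm(w, axis=1, keepdims=True)
              dn = df / np.linalg.norm(df)
              out += np.abs(w.conj() @ dn) ** 2             # per-frequency correlation
          return out / len(freqs)  # peak over nodes = estimated cavitation position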

  9. Current and future trends in fecal source tracking and deployment in the Lake Taihu Region of China

    NASA Astrophysics Data System (ADS)

    Hagedorn, Charles; Liang, Xinqiang

    The emerging discipline of microbial and/or chemical source tracking (collectively termed fecal source tracking (FST)) is being used to identify origins of fecal contamination in polluted waters in many countries around the world. FST has developed rapidly because standard methods of measuring contamination in water by enumerating fecal indicator bacteria (FIB) such as fecal coliforms and enterococci do not identify the sources of the contamination. FST is an active area of research and development in both the academic and private sectors and includes: Developing and testing new microbial and chemical FST methods. Determining the geographic application and animal host ranges of existing and emerging FST techniques. Conducting experimental comparisons of FST techniques. Combining direct monitoring of human pathogens associated with waterborne outbreaks and zoonotic pathogens responsible for infections among people, wildlife, or domesticated animals with the use of FST techniques. Applying FST to watershed analysis and coastal environments. Designing appropriate statistical and probability analysis of FST data and developing models for mass loadings of host-specific fecal contamination. This paper includes a critical review of FST with emphasis on the extent to which methods have been tested (especially in comparison with other methods and/or with blind samples), which methods are applicable to different situations, their shortcomings, and their usefulness in predicting public health risk or pathogen occurrence. In addition, the paper addresses the broader question of whether FST and fecal indicator monitoring is the best approach to regulate water quality and protect human health. Many FST methods have only been tested against sewage or fecal samples or isolates in laboratory studies (proof of concept testing) and/or applied in field studies where the “real” answer is not known, so their comparative performance and accuracy cannot be assessed. For FST to be quantitative, stability of ratios between host-specific markers in the environment must be established. In addition, research is needed on the correlation between host-specific markers and pathogens, and survival of markers after waste treatments. As a result of the exclusive emphasis on FIB by regulatory agencies, monitoring and FST development has concentrated on FIB rather than the actual pathogens. A more rational approach to regulating water quality might be to use available epidemiological data to identify pathogens of concern in a particular water body, and then use targeted pathogen monitoring coupled with very specific FST approaches to control the pathogens. Baseline monitoring of FIB would be just one tool among many in this example.

  10. STATCONT: A statistical continuum level determination method for line-rich sources

    NASA Astrophysics Data System (ADS)

    Sánchez-Monge, Á.; Schilke, P.; Ginsburg, A.; Cesaroni, R.; Schmiedeke, A.

    2018-01-01

    STATCONT is a Python-based tool designed to determine the continuum emission level in spectral data, in particular for sources with line-rich spectra. The tool inspects the intensity distribution of a given spectrum and automatically determines the continuum level by using different statistical approaches. The different methods included in STATCONT are tested against synthetic data. We conclude that the sigma-clipping algorithm provides the most accurate continuum level determination, together with information on the uncertainty in its determination. This uncertainty can be used to correct the final continuum emission level, resulting in what is here called the `corrected sigma-clipping method' or c-SCM. The c-SCM has been tested against more than 750 different synthetic spectra reproducing typical conditions found towards astronomical sources. The continuum level is determined with a discrepancy of less than 1% in 50% of the cases, and less than 5% in 90% of the cases, provided at least 10% of the channels are line free. The main products of STATCONT are the continuum emission level, together with a conservative value of its uncertainty, and datacubes containing only spectral line emission, i.e., continuum-subtracted datacubes. STATCONT also includes the option to estimate the spectral index, when different files covering different frequency ranges are provided.
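
    The basic sigma-clipping idea behind this approach can be sketched in a few lines: channels far from the running mean are iteratively discarded until the statistics of the remaining (presumably line-free) channels converge. The clipping threshold and iteration limit below are illustrative, not STATCONT's defaults, and the correction step of the c-SCM is omitted.

      import numpy as np

      def sigma_clip_continuum(spectrum, kappa=2.0, max_iter=25):
          """Estimate the continuum level of a 1-D spectrum by sigma clipping."""
          x = np.asarray(spectrum, float)
          for _ in range(max_iter):
              mu, sigma = x.mean(), x.std()
              keep = np.abs(x - mu) < kappa * sigma   # drop line channels
              if keep.all():
                  break                               # converged
              x = x[keep]
          return x.mean(), x.std()   # continuum level and its uncertainty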

  11. Source Stacking for Numerical Wavefield Computations - Application to Global Scale Seismic Mantle Tomography

    NASA Astrophysics Data System (ADS)

    MacLean, L. S.; Romanowicz, B. A.; French, S.

    2015-12-01

    Seismic wavefield computations using the Spectral Element Method are now regularly used to recover tomographic images of the upper mantle and crust at the local, regional, and global scales (e.g. Fichtner et al., GJI, 2009; Tape et al., Science, 2010; Lekic and Romanowicz, GJI, 2011; French and Romanowicz, GJI, 2014). However, the computational cost remains a challenge and contributes to limiting the resolution of the produced images. Source stacking, as suggested by Capdeville et al. (GJI, 2005), can considerably speed up the process by reducing the wavefield computation to a single run per set of N sources. This method was demonstrated through synthetic tests on low-frequency datasets, and therefore should work for global mantle tomography. However, the large amplitudes of surface waves dominate the stacked seismograms, and the contributions of individual events can no longer be separated by windowing in the time domain. We have developed a processing approach that helps address this issue and demonstrate its usefulness through a series of synthetic tests performed at long periods (T > 60 s) on toy upper mantle models. The summed synthetics are computed using the CSEM code (Capdeville et al., 2002). For the inverse part of the procedure, we use a quasi-Newton method, computing Frechet derivatives and the Hessian using normal mode perturbation theory.

  12. True Randomness from Big Data.

    PubMed

    Papakonstantinou, Periklis A; Woodruff, David P; Yang, Guang

    2016-09-26

    Generating random bits is a difficult task, which is important for physical systems simulation, cryptography, and many applications that rely on high-quality random bits. Our contribution is to show how to generate provably random bits from uncertain events whose outcomes are routinely recorded in the form of massive data sets. These include scientific data sets, such as in astronomics, genomics, as well as data produced by individuals, such as internet search logs, sensor networks, and social network feeds. We view the generation of such data as the sampling process from a big source, which is a random variable of size at least a few gigabytes. Our view initiates the study of big sources in the randomness extraction literature. Previous approaches for big sources rely on statistical assumptions about the samples. We introduce a general method that provably extracts almost-uniform random bits from big sources and extensively validate it empirically on real data sets. The experimental findings indicate that our method is efficient enough to handle large enough sources, while previous extractor constructions are not efficient enough to be practical. Quality-wise, our method at least matches quantum randomness expanders and classical world empirical extractors as measured by standardized tests.
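
    The task described above can be illustrated with the textbook seeded-extractor construction based on 2-universal hashing, here a shift-structured binary matrix built from a single random seed (a Toeplitz-type hash up to bit reordering). This is a generic sketch of randomness extraction, not the paper's construction, and the number of output bits must stay well below the source's min-entropy for the guarantee to hold.

      import numpy as np

      def hash_extract(raw_bits, m, rng):
          """Condense n weakly random bits (0/1 array) down to m nearly uniform bits."""
          n = len(raw_bits)
          seed = rng.integers(0, 2, n + m - 1)                 # public random seed
          rows = np.array([seed[i:i + n] for i in range(m)])   # shifted seed windows
          return rows @ raw_bits % 2                           # GF(2) matrix product

      rng = np.random.default_rng(1)
      biased = (rng.random(4096) < 0.8).astype(int)   # heavily biased toy source
      out = hash_extract(biased, 64, rng)             # 64 nearly uniform output bits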

  13. True Randomness from Big Data

    NASA Astrophysics Data System (ADS)

    Papakonstantinou, Periklis A.; Woodruff, David P.; Yang, Guang

    2016-09-01

    Generating random bits is a difficult task, which is important for physical systems simulation, cryptography, and many applications that rely on high-quality random bits. Our contribution is to show how to generate provably random bits from uncertain events whose outcomes are routinely recorded in the form of massive data sets. These include scientific data sets, such as in astronomics, genomics, as well as data produced by individuals, such as internet search logs, sensor networks, and social network feeds. We view the generation of such data as the sampling process from a big source, which is a random variable of size at least a few gigabytes. Our view initiates the study of big sources in the randomness extraction literature. Previous approaches for big sources rely on statistical assumptions about the samples. We introduce a general method that provably extracts almost-uniform random bits from big sources and extensively validate it empirically on real data sets. The experimental findings indicate that our method is efficient enough to handle large enough sources, while previous extractor constructions are not efficient enough to be practical. Quality-wise, our method at least matches quantum randomness expanders and classical world empirical extractors as measured by standardized tests.

  14. True Randomness from Big Data

    PubMed Central

    Papakonstantinou, Periklis A.; Woodruff, David P.; Yang, Guang

    2016-01-01

    Generating random bits is a difficult task, which is important for physical systems simulation, cryptography, and many applications that rely on high-quality random bits. Our contribution is to show how to generate provably random bits from uncertain events whose outcomes are routinely recorded in the form of massive data sets. These include scientific data sets, such as in astronomics, genomics, as well as data produced by individuals, such as internet search logs, sensor networks, and social network feeds. We view the generation of such data as the sampling process from a big source, which is a random variable of size at least a few gigabytes. Our view initiates the study of big sources in the randomness extraction literature. Previous approaches for big sources rely on statistical assumptions about the samples. We introduce a general method that provably extracts almost-uniform random bits from big sources and extensively validate it empirically on real data sets. The experimental findings indicate that our method is efficient enough to handle large enough sources, while previous extractor constructions are not efficient enough to be practical. Quality-wise, our method at least matches quantum randomness expanders and classical world empirical extractors as measured by standardized tests. PMID:27666514

  15. A new method and device of aligning patient setup lasers in radiation therapy.

    PubMed

    Hwang, Ui-Jung; Jo, Kwanghyun; Lim, Young Kyung; Kwak, Jung Won; Choi, Sang Hyuon; Jeong, Chiyoung; Kim, Mi Young; Jeong, Jong Hwi; Shin, Dongho; Lee, Se Byeong; Park, Jeong-Hoon; Park, Sung Yong; Kim, Siyong

    2016-01-08

    The aim of this study is to develop a new method to align the patient setup lasers in a radiation therapy treatment room and to examine its validity and efficiency. The new laser alignment method is realized by a device composed of a metallic base plate and a few acrylic transparent plates. Except for one, every plate has either a crosshair line (CHL) or a single vertical line that is used for alignment. Two holders for radiochromic film insertion are provided in the device to find the radiation isocenter. The correct laser positions can be found optically by matching the shadows of all the CHLs in the gantry head and the device. The reproducibility, accuracy, and efficiency of laser alignment and the dependency on the position error of the light source were evaluated by comparing the means and standard deviations of the measured laser positions. After the optical alignment of the lasers, the radiation isocenter was found by gantry and collimator star shots, and the lasers were then translated parallel to the isocenter. In the laser position reproducibility test, the mean and standard deviation on the wall of the treatment room were 32.3 ± 0.93 mm for the new method, whereas they were 33.4 ± 1.49 mm for the conventional method. The mean alignment accuracy on the walls was 1.4 mm for the new method and 2.1 mm for the conventional method. In the test of the dependency on the light source position error, the mean laser position shifted by an amount similar to the shift of the light source in the new method, but the shift was greatly magnified in the conventional method. In this study, a new laser alignment method was devised and evaluated successfully. The new method provided more accurate, more reproducible, and faster alignment of the lasers than the conventional method.

  16. Borehole prototype for seismic high-resolution exploration

    NASA Astrophysics Data System (ADS)

    Giese, Rüdiger; Jaksch, Katrin; Krauß, Felix; Krüger, Kay; Groh, Marco; Jurczyk, Andreas

    2014-05-01

    Target reservoirs for the exploitation of hydrocarbons or of hot water for geothermal energy supply can comprise small layered structures, for instance thin layers or faults. The resolution of 2D and 3D surface seismic methods is often not sufficient to determine and locate these structures. Borehole seismic methods like vertical seismic profiling (VSP) and seismic while drilling (SWD) use either receivers or sources within the borehole. Thus, the distance to the target horizon is reduced and higher-resolution images of the geological structures can be achieved. Yet even these methods are limited in their resolution capabilities with increasing target depth. To localize structures more accurately, methods with a resolution in the range of meters are necessary. The project SPWD (Seismic Prediction While Drilling) aims at the development of a borehole prototype which combines seismic sources and receivers in one device to improve the seismic resolution. Within SPWD such a prototype has been designed, manufactured and tested. The SPWD-wireline prototype is divided into three main parts. The upper section comprises the electronic unit. The middle section includes the upper receiver, the upper clamping unit, the source unit and the lower clamping unit. The lower section consists of the lower receiver unit and the hydraulic unit. The total length of the prototype is nearly seven meters and its weight is about 750 kg. To focus the seismic waves in predefined directions along the borehole axis, the phased-array method is used. The source unit is equipped with four magnetostrictive vibrators, each of which can be controlled independently to form a common wave front in the desired direction of exploration. Source signal frequencies up to 5000 Hz are used, which allows resolutions down to one meter. In May and September 2013, field tests with the SPWD-wireline prototype were carried out at the KTB Deep Crustal Lab in Windischeschenbach (Bavaria). The aim was to prove the pressure-tightness and the functionality of the hydraulic system components of the borehole device. To monitor the prototype, four cameras and several moisture sensors were installed along the source and receiver units close to the extendable coupling stamps, where an infiltration of fluid is most probable. The tests lasted about 48 hours each. It was possible to extend and retract the coupling stamps of the prototype down to a depth of 2100 m. No infiltration of borehole fluids into the SPWD tool was observed. In preparation for the acoustic calibration measurements in the research and education mine of the TU Bergakademie Freiberg, seismic sources and receivers as well as the recording electronics were installed in the SPWD-wireline prototype at the GFZ. Afterwards, the SPWD borehole device was transported to the GFZ Underground Lab, and preliminary test measurements to characterize the radiation pattern were carried out in the newly drilled vertical borehole in December 2013. Previous measurements with a laboratory borehole prototype demonstrated a dependency of the radiated seismic energy on the predefined amplification direction, the wave type and the signal frequencies. SPWD is funded by the German Federal Environment Ministry.
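
    The phased-array focusing mentioned above amounts to firing each vibrator with a delay chosen so that the elementary wavefronts align along a chosen steering direction. The spacing, wave speed, and steering angle below are illustrative assumptions, not SPWD design values.

      import numpy as np

      def steering_delays(n_src=4, spacing=1.0, c=3000.0, theta_deg=30.0):
          """Plane-wave steering delays (s) for sources spaced along the borehole axis."""
          z = np.arange(n_src) * spacing                 # source positions (m)
          return z * np.cos(np.radians(theta_deg)) / c   # fire source i after delay[i]

      print(steering_delays())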

  17. Monitoring of diesel engine combustions based on the acoustic source characterisation of the exhaust system

    NASA Astrophysics Data System (ADS)

    Jiang, J.; Gu, F.; Gennish, R.; Moore, D. J.; Harris, G.; Ball, A. D.

    2008-08-01

    Acoustic methods are among the most useful techniques for monitoring the condition of machines. However, the influence of background noise is a major issue in implementing this method. This paper introduces an effective monitoring approach to diesel engine combustion based on acoustic one-port source theory and exhaust acoustic measurements. It has been found that the strength, in terms of pressure, of the engine acoustic source is able to provide a more accurate representation of the engine combustion because it is obtained by minimising the reflection effects in the exhaust system. A multi-load acoustic method was then developed to determine the pressure signal when a four-cylinder diesel engine was tested with faults in the fuel injector and exhaust valve. From the experimental results, it is shown that a two-load acoustic method is sufficient to permit the detection and diagnosis of abnormalities in the pressure signal, caused by the faults. This then provides a novel and yet reliable method to achieve condition monitoring of diesel engines even if they operate in high noise environments such as standby power stations and vessel chambers.

  18. Automated detection of ice cliffs within supraglacial debris cover

    NASA Astrophysics Data System (ADS)

    Herreid, Sam; Pellicciotti, Francesca

    2018-05-01

    Ice cliffs within a supraglacial debris cover have been identified as a source of high ablation relative to the surrounding debris-covered area. Due to their small relative size and steep orientation, ice cliffs are difficult to detect using nadir-looking spaceborne sensors. The method presented here uses surface slopes calculated from digital elevation model (DEM) data to map ice cliff geometry and produce an ice cliff probability map. Surface slope thresholds, which can be sensitive to geographic location and/or data quality, are selected automatically. The method also attempts to include area at the (often narrowing) ends of ice cliffs which could otherwise be neglected due to signal saturation in surface slope data. The method was calibrated in the eastern Alaska Range, Alaska, USA, against a control ice cliff dataset derived from high-resolution visible and thermal data. Using the same input parameter set that performed best in Alaska, the method was tested against ice cliffs manually mapped in the Khumbu Himal, Nepal. Our results suggest the method can accommodate different glaciological settings and different DEM data sources without a data-intensive (high-resolution, multi-data source) recalibration.
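
    A minimal sketch of the slope-based mapping described above: the DEM slope is computed, thresholded, and softened into a probability map. The grid spacing and the two slope thresholds stand in for the automatically selected values of the actual method.

      import numpy as np

      def ice_cliff_probability(dem, cell=30.0, slope_lo=20.0, slope_hi=35.0):
          """dem: 2-D elevation array (m); cell: grid spacing (m); returns [0,1] map."""
          dz_dy, dz_dx = np.gradient(dem, cell)
          slope = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))
          # linear ramp: 0 below slope_lo, 1 above slope_hi
          return np.clip((slope - slope_lo) / (slope_hi - slope_lo), 0.0, 1.0)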

  19. Estimation of Deeper Structure at the Soultz Hot Dry Rock Field by Means of Reflection Method Using 3C AE as Wave Source

    NASA Astrophysics Data System (ADS)

    Soma, N.; Niitsuma, H.; Baria, R.

    1997-12-01

    We investigate the deep subsurface structure below the artificial reservoir at the Soultz Hot Dry Rock (HDR) site in France by a reflection method which uses acoustic emission (AE) as a wave source. In this method, we can detect reflected waves by examining the linearity of a three-dimensional hodogram. Additionally, for imaging a deep subsurface structure, we employ a three-dimensional inversion with a restriction on wave polarization angles and with a compensation for the heterogeneous source distribution. We analyzed 101 AE waveforms observed at the Soultz site during the hydraulic testing in 1993. Some deep reflectors were revealed by this method. The bottom of the artificial reservoir, as presumed from all of the AE locations in 1993, was delineated at a depth of about 3900 m as a reflector. Other, deeper reflectors were detected below the reservoir, which would not have been detected using conventional methods. Furthermore, these reflectors agree with the results of the tri-axial drill-bit VSP (Asanuma et al., 1996).

  20. Waste Characterization Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vigil-Holterman, Luciana R.; Naranjo, Felicia Danielle

    2016-02-02

    This report discusses ways to classify waste as outlined by LANL. Waste Generators must make a waste determination and characterize regulated waste by appropriate analytical testing or use of acceptable knowledge (AK). Use of AK for characterization requires several source documents. Waste characterization documentation must be accurate, sufficient, and current (i.e., updated); relevant and traceable to the waste stream’s generation, characterization, and management; and not merely a list of information sources.

  1. 40 CFR 63.1450 - What test methods and other procedures must I use to demonstrate initial compliance with the...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... emissions from an affected source and subject to operating limits in § 63.1444(g) or § 63.1446(d) for... applied to emissions from an affected source and subject to site-specific operating limit(s) in § 63.1444... two or more segments performed on the same day or on different days if conditions prevent the required...

  2. 40 CFR 63.1450 - What test methods and other procedures must I use to demonstrate initial compliance with the...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... emissions from an affected source and subject to operating limits in § 63.1444(g) or § 63.1446(d) for... applied to emissions from an affected source and subject to site-specific operating limit(s) in § 63.1444... two or more segments performed on the same day or on different days if conditions prevent the required...

  3. Methods for detection, identification and specification of listerias

    DOEpatents

    Bochner, Barry

    1992-01-01

    The present invention relates generally to differential carbon source metabolism in the genus Listeria; to metabolic, biochemical, immunological, and genetic procedures to measure said differential carbon source metabolism; and to the use of these procedures to detect, isolate and/or distinguish species of the genus Listeria, as well as to detect, isolate and/or distinguish strains of species of Listeria. The present invention also contemplates test kits and enrichment media to facilitate these procedures.

  4. A new method of optimal capacitor switching based on minimum spanning tree theory in distribution systems

    NASA Astrophysics Data System (ADS)

    Li, H. W.; Pan, Z. Y.; Ren, Y. B.; Wang, J.; Gan, Y. L.; Zheng, Z. Z.; Wang, W.

    2018-03-01

    Based on the radial operating characteristics of distribution systems, this paper proposes a new method for optimal capacitor switching built on minimum spanning trees. First, taking minimal active power loss as the objective function and ignoring the capacity constraints of the capacitors and the source, the paper applies the Prim algorithm, one of the minimum spanning tree algorithms, to obtain the power supply ranges of the capacitors and the source. Then, with the capacity constraints of the capacitors considered, the capacitors are ranked by breadth-first search. In order of that ranking, from high to low, the compensation capacity of each capacitor is calculated based on its power supply range. Finally, the IEEE 69-bus system is used to test the accuracy and practicality of the proposed algorithm.
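
    The first step of the method, growing a minimum spanning tree from a root bus with the Prim algorithm, can be sketched as follows. The toy feeder graph and its branch weights (standing in for, e.g., branch impedance magnitudes) are illustrative assumptions.

      import heapq

      def prim(adj, root):
          """adj: {node: [(weight, neighbor), ...]}; returns spanning tree edges."""
          visited = {root}
          heap = [(w, root, v) for w, v in adj[root]]
          heapq.heapify(heap)
          tree = []
          while heap:
              w, u, v = heapq.heappop(heap)      # cheapest edge leaving the tree
              if v in visited:
                  continue
              visited.add(v)
              tree.append((u, v, w))
              for w2, x in adj[v]:
                  if x not in visited:
                      heapq.heappush(heap, (w2, v, x))
          return tree

      # toy 5-bus feeder rooted at the source bus 0
      g = {0: [(1.0, 1), (4.0, 2)], 1: [(1.0, 0), (2.0, 2), (6.0, 3)],
           2: [(4.0, 0), (2.0, 1), (3.0, 4)], 3: [(6.0, 1)], 4: [(3.0, 2)]}
      print(prim(g, root=0))   # radial supply tree as (from, to, weight) edges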

  5. Small Hot Jet Acoustic Rig Validation

    NASA Technical Reports Server (NTRS)

    Brown, Cliff; Bridges, James

    2006-01-01

    The Small Hot Jet Acoustic Rig (SHJAR), located in the Aeroacoustic Propulsion Laboratory (AAPL) at the NASA Glenn Research Center in Cleveland, Ohio, was commissioned in 2001 to test jet noise reduction concepts at low technology readiness levels (TRL 1-3) and develop advanced measurement techniques. The first series of tests on the SHJAR were designed to prove its capabilities and establish the quality of the jet noise data produced. Towards this goal, a methodology was employed dividing all noise sources into three categories: background noise, jet noise, and rig noise. Background noise was directly measured. Jet noise and rig noise were separated by using the distance and velocity scaling properties of jet noise. Effectively, any noise source that did not follow these rules of jet noise was labeled as rig noise. This method led to the identification of a high frequency noise source related to the Reynolds number. Experiments using boundary layer treatment and hot wire probes documented this noise source and its removal, allowing clean testing of low Reynolds number jets. Other tests performed characterized the amplitude and frequency of the valve noise, confirmed the location of the acoustic far field, and documented the background noise levels under several conditions. Finally, a full set of baseline data was acquired. This paper contains the methodology and test results used to verify the quality of the SHJAR rig.

  6. A Recording-Based Method for Auralization of Rotorcraft Flyover Noise

    NASA Technical Reports Server (NTRS)

    Pera, Nicholas M.; Rizzi, Stephen A.; Krishnamurthy, Siddhartha; Fuller, Christopher R.; Christian, Andrew

    2018-01-01

    Rotorcraft noise is an active field of study as the sound produced by these vehicles is often found to be annoying. A means to auralize rotorcraft flyover noise is sought to help understand the factors leading to annoyance. Previous work by the authors focused on auralization of rotorcraft fly-in noise, in which a simplification was made that enabled the source noise synthesis to be based on a single emission angle. Here, the goal is to auralize a complete flyover event, so the source noise synthesis must be capable of traversing a range of emission angles. The synthesis uses a source noise definition process that yields periodic and aperiodic (modulation) components at a set of discrete emission angles. In this work, only the periodic components are used for the source noise synthesis for the flyover; the inclusion of modulation components is the subject of ongoing research. Propagation of the synthesized source noise to a ground observer is performed using the NASA Auralization Framework. The method is demonstrated using ground recordings from a flight test of the AS350 helicopter for the source noise definition.

  7. Acoustic Emission Source Location Using a Distributed Feedback Fiber Laser Rosette

    PubMed Central

    Huang, Wenzhu; Zhang, Wentao; Li, Fang

    2013-01-01

    This paper proposes an approach for acoustic emission (AE) source localization in a large marble stone using distributed feedback (DFB) fiber lasers. The aim of this study is to detect damage in structures such as those found in civil applications. The directional sensitivity of the DFB fiber laser is investigated by calculating a location coefficient using digital signal analysis: autocorrelation is used to extract the location coefficient from periodic AE signals, and wavelet packet energy is calculated to obtain the location coefficient of a burst AE source. Normalization is applied to eliminate the influence of the distance and intensity of the AE source. A new location algorithm based on the location coefficient is then presented and tested to determine the location of an AE source using a Delta (Δ) DFB fiber laser rosette configuration. The advantages of the proposed algorithm over traditional methods based on fiber Bragg gratings (FBGs) include higher strain resolution for AE detection and the ability to account for two different types of AE source in the localization. PMID:24141266

  8. A generic high-dose rate ¹⁹²Ir brachytherapy source for evaluation of model-based dose calculations beyond the TG-43 formalism

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ballester, Facundo, E-mail: Facundo.Ballester@uv.es; Carlsson Tedgren, Åsa; Granero, Domingo

    Purpose: In order to facilitate a smooth transition for brachytherapy dose calculations from the American Association of Physicists in Medicine (AAPM) Task Group No. 43 (TG-43) formalism to model-based dose calculation algorithms (MBDCAs), treatment planning systems (TPSs) using a MBDCA require a set of well-defined test case plans characterized by Monte Carlo (MC) methods. This also permits direct dose comparison to TG-43 reference data. Such test case plans should be made available for use in the software commissioning process performed by clinical end users. To this end, a hypothetical, generic high-dose rate (HDR) ¹⁹²Ir source and a virtual water phantom were designed, which can be imported into a TPS. Methods: A hypothetical, generic HDR ¹⁹²Ir source was designed based on commercially available sources, as well as a virtual, cubic water phantom that can be imported into any TPS in DICOM format. The dose distribution of the generic ¹⁹²Ir source when placed at the center of the cubic phantom, and away from the center under altered scatter conditions, was evaluated using two commercial MBDCAs [Oncentra® Brachy with advanced collapsed-cone engine (ACE) and BrachyVision ACUROS™]. Dose comparisons were performed using state-of-the-art MC codes for radiation transport, including ALGEBRA, BrachyDose, GEANT4, MCNP5, MCNP6, and PENELOPE2008. The methodologies adhered to recommendations in the AAPM TG-229 report on high-energy brachytherapy source dosimetry. TG-43 dosimetry parameters, an along-away dose-rate table, and primary and scatter separated (PSS) data were obtained. The virtual water phantom of 201³ voxels (1 mm sides) was used to evaluate the calculated dose distributions. Two test case plans involving a single position of the generic HDR ¹⁹²Ir source in this phantom were prepared: (i) source centered in the phantom and (ii) source displaced 7 cm laterally from the center. Datasets were independently produced by different investigators. MC results were then compared against dose calculated using TG-43 and MBDCA methods. Results: TG-43 and PSS datasets were generated for the generic source, the PSS data for use with the ACE algorithm. The dose-rate constant values obtained from seven MC simulations, performed independently using different codes, were in excellent agreement, yielding an average of 1.1109 ± 0.0004 cGy/(h U) (k = 1, Type A uncertainty). MC calculated dose-rate distributions for the two plans were also found to be in excellent agreement, with differences within Type A uncertainties. Differences between commercial MBDCA and MC results were test, position, and calculation parameter dependent. On average, however, these differences were within 1% for ACUROS and 2% for ACE at clinically relevant distances. Conclusions: A hypothetical, generic HDR ¹⁹²Ir source was designed and implemented in two commercially available TPSs employing different MBDCAs. Reference dose distributions for this source were benchmarked and used for the evaluation of MBDCA calculations employing a virtual, cubic water phantom in the form of a CT DICOM image series. The implementation of a generic source of identical design in all TPSs using MBDCAs is an important step toward supporting univocal commissioning procedures and direct comparisons between TPSs.

  9. Bayesian statistics applied to the location of the source of explosions at Stromboli Volcano, Italy

    USGS Publications Warehouse

    Saccorotti, G.; Chouet, B.; Martini, M.; Scarpa, R.

    1998-01-01

    We present a method for determining the location and spatial extent of the source of explosions at Stromboli Volcano, Italy, based on a Bayesian inversion of the slowness vector derived from frequency-slowness analyses of array data. The method searches for source locations that minimize the error between the expected and observed slowness vectors. For a given set of model parameters, the conditional probability density function of slowness vectors is approximated by a Gaussian distribution of expected errors. The method is tested with synthetics using a five-layer velocity model derived for the north flank of Stromboli and a smoothed velocity model derived from a power-law approximation of the layered structure. Application to data from Stromboli allows for a detailed examination of uncertainties in source location due to experimental errors and incomplete knowledge of the Earth model. Although the solutions are not constrained in the radial direction, excellent resolution is achieved in both transverse and depth directions. Under the assumption that the horizontal extent of the source does not exceed the crater dimension, the 90% confidence region in the estimate of the explosive source location corresponds to a small volume extending from a depth of about 100 m to a maximum depth of about 300 m beneath the active vents, with a maximum likelihood source region located in the 120- to 180-m-depth interval.

  10. Antecedents and Consequences of Supplier Performance Evaluation Efficacy

    DTIC Science & Technology

    2016-06-30

    forming groups of high and low values. These tests are contingent on the reliable and valid measure of high and low rating inflation and high and...year)? Future research could deploy a SPM system as a test case on a limited set of transactions. Using a quasi-experimental design, comparisons...single source, common method bias must be of concern. Harmon's one-factor test showed that when latent-indicator items were forced onto a single

  11. Network Algorithms for Detection of Radiation Sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rao, Nageswara S; Brooks, Richard R; Wu, Qishi

    In support of national defense, the Domestic Nuclear Detection Office's (DNDO) Intelligent Radiation Sensor Systems (IRSS) program supported the development of networks of radiation counters for detecting, localizing and identifying low-level, hazardous radiation sources. Industry teams developed the first generation of such networks with tens of counters, and demonstrated several of their capabilities in indoor and outdoor characterization tests. Subsequently, these test measurements have been used in algorithm replays using various sub-networks of counters. Test measurements combined with algorithm outputs are used to extract Key Measurements and Benchmark (KMB) datasets. We present two selective analyses of these datasets: (a) a notional border monitoring scenario that highlights the benefits of a network of counters compared to individual detectors, and (b) new insights into the Sequential Probability Ratio Test (SPRT) detection method, which lead to its adaptations for improved detection. Using KMB datasets from an outdoor test, we construct a notional border monitoring scenario, wherein twelve 2×2 NaI detectors are deployed on the periphery of a 21 × 21 meter square region. A Cs-137 (175 μCi) source is moved across this region, starting several meters outside and finally moving away. The measurements from individual counters and the network were processed using replays of a particle filter algorithm developed under the IRSS program. The algorithm outputs from KMB datasets clearly illustrate the benefits of combining measurements from all networked counters: the source was detected before it entered the region, during its trajectory inside, and until it moved several meters away. When individual counters are used for detection, the source was detected for much shorter durations, and sometimes was missed in the interior region. The application of SPRT for detecting radiation sources requires choosing the detection threshold, which in turn requires a source strength estimate, typically specified as a multiplier of the background radiation level. A judicious selection of this source multiplier is essential to achieve optimal detection probability at a specified false alarm rate. Typically, this threshold is chosen from the Receiver Operating Characteristic (ROC) by varying the source multiplier estimate. The ROC is expected to have a monotonically increasing profile between the detection probability and false alarm rate. We derived ROCs for multiple indoor tests using KMB datasets, which revealed an unexpected loop shape: as the multiplier increases, detection probability and false alarm rate both increase until a limit, and then both contract. Consequently, two detection probabilities correspond to the same false alarm rate, and the higher is achieved at a lower multiplier, which is the desired operating point. Using Chebyshev's inequality, we analytically confirm this shape. Then, we present two improved network-SPRT methods by (a) using the threshold offset as a weighting factor for the binary decisions from individual detectors in a weighted majority voting fusion rule, and (b) applying a composite SPRT derived using measurements from all counters.
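
    The count-rate SPRT discussed above can be sketched as a cumulative Poisson log-likelihood ratio between the background-only hypothesis (rate b) and the background-plus-source hypothesis (rate b*(1+m), with source multiplier m), compared against the Wald thresholds. The rates, multiplier, and error targets below are illustrative assumptions, not IRSS settings.

      import numpy as np

      def sprt(counts, b, m, alpha=1e-3, beta=0.05):
          """counts: per-interval gross counts; returns (decision, stopping step)."""
          s = b * (1.0 + m)                       # hypothesized rate with source
          A = np.log((1 - beta) / alpha)          # upper (alarm) threshold
          B = np.log(beta / (1 - alpha))          # lower (clear) threshold
          llr = 0.0
          for i, n in enumerate(counts):
              llr += n * np.log(s / b) - (s - b)  # Poisson log-likelihood increment
              if llr >= A:
                  return "source", i
              if llr <= B:
                  return "background", i
          return "continue", len(counts)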

  12. Development and testing of real-time PCR assays for determining fecal loading and source identification (cattle, human, etc.) in surface water and groundwater

    NASA Astrophysics Data System (ADS)

    McKay, L. D.; Layton, A.; Gentry, R.

    2004-12-01

    A multi-disciplinary group of researchers at the University of Tennessee is developing and testing a series of microbial assay methods based on real-time PCR to detect fecal bacterial concentrations and host sources in water samples. Real-time PCR is an enumeration technique based on the unique and conserved nucleic acid sequences present in all organisms. The first research task was development of an assay (AllBac) to detect the total amount of Bacteroides, which represents up to 30 percent of fecal mass. Subsequent assays were developed to detect Bacteroides from cattle (BoBac) and humans (HuBac) using 16S rRNA genes, based on DNA sequences in the national GenBank database as well as sequences from local fecal samples. The assays potentially have significant advantages over conventional bacterial source tracking methods because: 1. unlike traditional enumeration methods, they do not require bacterial cultivation; 2. there are no known non-fecal sources of Bacteroides; 3. the assays are quantitative, with results for total concentration and for each species expressed in mg/l; and 4. they show little regional variation within host species, meaning that they do not require development of extensive local gene libraries. The AllBac and BoBac assays have been used in a study of fecal contamination in a small rural watershed (Stock Creek) near Knoxville, TN, and have proven useful in identifying areas where cattle represent a significant fecal input and in developing BMPs. It is expected that these types of assays (and future assays for birds, hogs, etc.) could have broad applications in monitoring fecal impacts from Animal Feeding Operations, as well as from wildlife and human sources.
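    Quantification in such assays rests on a standard curve relating the threshold cycle (Ct) to the log10 of the starting target quantity. The sketch below shows that arithmetic with illustrative numbers; it is not the AllBac/BoBac calibration itself, and all values are made up.

        def fit_standard_curve(log10_qty, ct):
            """Least-squares fit of Ct = slope * log10(quantity) + intercept."""
            n = len(ct)
            mean_x = sum(log10_qty) / n
            mean_y = sum(ct) / n
            sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(log10_qty, ct))
            sxx = sum((x - mean_x) ** 2 for x in log10_qty)
            slope = sxy / sxx
            return slope, mean_y - slope * mean_x

        def quantify(ct_sample, slope, intercept):
            """Invert the standard curve to estimate the starting quantity."""
            return 10.0 ** ((ct_sample - intercept) / slope)

        # Ten-fold dilution series (10^2..10^6 target copies) with made-up Cts.
        slope, intercept = fit_standard_curve([2, 3, 4, 5, 6],
                                              [31.1, 27.8, 24.4, 21.1, 17.7])
        copies = quantify(25.0, slope, intercept)       # estimated copies in sample
        efficiency = 10.0 ** (-1.0 / slope) - 1.0       # ~1.0 means perfect doubling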

  13. Research on Horizontal Accuracy Method of High Spatial Resolution Remotely Sensed Orthophoto Image

    NASA Astrophysics Data System (ADS)

    Xu, Y. M.; Zhang, J. X.; Yu, F.; Dong, S.

    2018-04-01

    At present, in the inspection and acceptance of high-spatial-resolution remotely sensed orthophoto images, horizontal accuracy detection tests and evaluates the accuracy of the images, mostly based on a set of testing points with the same accuracy and reliability. However, in areas where field measurement is difficult and high-accuracy reference data are scarce, such a set of testing points is hard to obtain, so the horizontal accuracy of the orthophoto image is difficult to test and evaluate. This uncertainty in horizontal accuracy has become a bottleneck for the application of satellite-borne high-resolution remote sensing imagery and for expanding its scope of service. Therefore, this paper proposes a new method to test the horizontal accuracy of orthophoto images that uses testing points with different accuracies and reliabilities, drawn from both high-accuracy reference data and field measurements. The new method solves horizontal accuracy detection for orthophoto images in difficult areas and provides a basis for delivering reliable orthophoto images to users.
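    One natural way to pool check points of unequal quality is to weight each point's squared residual by the inverse variance of its reference coordinate. The abstract does not give the authors' exact weighting scheme, so the sketch below is an illustration of the idea rather than their formula.

        import math

        def weighted_rmse(residuals, sigmas):
            """Horizontal accuracy from check points of unequal reliability.

            residuals: horizontal error at each check point (image minus reference), m
            sigmas:    a-priori standard error of each point's reference source, m
            """
            weights = [1.0 / s ** 2 for s in sigmas]
            wsum = sum(weights)
            return math.sqrt(sum(w * r ** 2 for w, r in zip(weights, residuals)) / wsum)

        # Field-surveyed points (sigma 0.1 m) dominate points digitised from
        # lower-accuracy reference data (sigma 0.5 m).
        print(weighted_rmse([0.4, 0.6, 1.1, 0.9], [0.1, 0.1, 0.5, 0.5]))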

  14. A hybrid phase-space and histogram source model for GPU-based Monte Carlo radiotherapy dose calculation

    NASA Astrophysics Data System (ADS)

    Townson, Reid W.; Zavgorodni, Sergei

    2014-12-01

    In GPU-based Monte Carlo simulations for radiotherapy dose calculation, source modelling from a phase-space source can be an efficiency bottleneck. Previously, this has been addressed using phase-space-let (PSL) sources, which provided significant efficiency enhancement. We propose that additional speed-up can be achieved through the use of a hybrid primary photon point source model combined with a secondary PSL source. A novel phase-space-derived and histogram-based implementation of this model has been integrated into gDPM v3.0. Additionally, a simple method for approximately deriving target photon source characteristics from a phase-space that does not contain inheritable particle history variables (LATCH) has been demonstrated to select over 99% of the true target photons with only ~0.3% contamination (for a Varian 21EX 18 MV machine). The hybrid source model was tested using an array of open fields for various Varian 21EX and TrueBeam energies, and all cases achieved greater than 97% chi-test agreement (the mean was 99%) above the 2% isodose with 1%/1 mm criteria. The root mean square deviations (RMSDs) were less than 1%, with a mean of 0.5%, and source generation was 4-5 times faster. A seven-field intensity-modulated radiation therapy patient treatment achieved 95% chi-test agreement above the 10% isodose with 1%/1 mm criteria, 99.8% for 2%/2 mm, an RMSD of 0.8%, and a source generation speed-up factor of 2.5. Presented as part of the International Workshop on Monte Carlo Techniques in Medical Physics.
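    The histogram half of such a source model comes down to inverse-CDF sampling from binned phase-space tallies. A minimal sketch for a 1-D energy histogram follows; the actual gDPM implementation also bins position and direction, and these names and values are illustrative.

        import bisect
        import random

        def build_cdf(bin_counts):
            """Cumulative distribution over histogram bins (unnormalised counts in)."""
            total = float(sum(bin_counts))
            cdf, running = [], 0.0
            for c in bin_counts:
                running += c / total
                cdf.append(running)
            return cdf

        def sample_energy(bin_edges, cdf):
            """Inverse-CDF draw: pick a bin, then sample uniformly within it."""
            i = bisect.bisect_left(cdf, random.random())
            lo, hi = bin_edges[i], bin_edges[i + 1]
            return lo + (hi - lo) * random.random()

        edges = [0.0, 2.0, 4.0, 6.0, 10.0, 18.0]   # MeV bin edges (illustrative)
        cdf = build_cdf([50, 120, 90, 40, 10])     # phase-space tallies per bin
        print(sample_energy(edges, cdf))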

  15. [Measuring the effect of eyeglasses on determination of squint angle with Purkinje reflexes and the prism cover test].

    PubMed

    Barry, J C; Backes, A

    1998-04-01

    The alternating prism and cover test is the conventional test for the measurement of the angle of strabismus. The error induced by the prismatic effect of glasses is typically about 27-30%/10 D. Alternatively, the angle of strabismus can be measured with methods based on Purkinje reflex positions. This study examines the differences between three such options, taking into account the influence of glasses. The studied system comprised the eyes with or without glasses, a fixation object, and a device for recording the eye position: in the case of the alternate prism and cover test, a prism bar was required; in the case of a Purkinje-reflex-based device, light sources for generating the reflexes and a camera for documenting the reflex positions were used. Measurements performed on model eyes and computer ray traces were used to analyze and compare the options. When a single corneal reflex is used, the misalignment of the corneal axis can be measured; the error in this measurement due to the prismatic effect of glasses was 7.6%/10 D, the smallest found in this study. The individual Hirschberg ratio can be determined by monocular measurements in three gaze directions. The angle of strabismus can be measured with Purkinje-reflex-based methods, provided that the fundamental differences between these methods and the alternate prism and cover test are taken into account, along with the influence of glasses and other sources of error.
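    The ~27-30%/10 D figure is consistent with a standard thin-lens approximation: an angle viewed through a spectacle lens of power F at distance d from the eye's centre of rotation is scaled by roughly 1/(1 − d·F), i.e. inflated by about d·F for moderate powers. The sketch below is a back-of-envelope illustration of that reading, not the paper's model; the 27 mm distance is an assumed typical value.

        def cover_test_error_fraction(lens_power_d, distance_m=0.027):
            """Approximate fractional inflation of an angle measured through glasses.

            Thin-lens approximation: scale factor ~ 1 + d * F,
            so the fractional error is ~ d * F.
            """
            return distance_m * lens_power_d

        print(f"{cover_test_error_fraction(10.0):.0%} per 10 D")  # -> 27% per 10 D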

  16. Quantitative NDA of isotopic neutron sources.

    PubMed

    Lakosi, L; Nguyen, C T; Bagi, J

    2005-01-01

    A non-destructive method for assaying transuranic neutron sources was developed, using a combination of gamma-spectrometry and neutron correlation techniques. The source strength or actinide content of a number of PuBe, AmBe, AmLi, ²⁴⁴Cm, and ²⁵²Cf sources was assessed, both as a safety issue and with respect to combating illicit trafficking. A passive neutron coincidence collar was designed with ³He counters embedded in a polyethylene moderator (lined with Cd) surrounding the sources to be measured. The electronics consist of independent channels of pulse amplifiers and discriminators, as well as a shift register for coincidence counting. The neutron output of the sources was determined by gross neutron counting, and the actinide content was determined by adopting specific spontaneous fission and (α,n) reaction yields of individual isotopes from the literature. Identification of an unknown source type and its constituents can be made by gamma-spectrometry. The coincidences are due to spontaneous fission in the case of Cm and Cf sources, while they are mostly due to neutron-induced fission of the Pu isotopes (i.e. self-multiplication) and the ⁹Be(n,2n)⁸Be reaction in Be-containing sources. Recording the coincidence rate offers a potential for calibration, exploiting a correlation between the Pu amount and the coincidence-to-total ratio. The method and the equipment were tested in an in-field demonstration exercise, with the participation of national public authorities and foreign observers. Seizure of an illicitly transported PuBe source was simulated in the exercise, and the Pu content of the source was determined. It is expected that the method could be used for identification and assay of illicit, found, or undocumented neutron sources.
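    Numerically, the assay reduces to two observables, a totals rate and a coincidence (reals) rate. The sketch below shows the two steps described above with hypothetical calibration constants, since the abstract does not give the fitted values.

        def emission_rate(totals_cps, efficiency):
            """Gross neutron counting: source emission rate from the totals rate."""
            return totals_cps / efficiency

        def pu_mass_from_ratio(reals_cps, totals_cps, a, b):
            """Illustrative linear calibration on the coincidence-to-totals ratio.

            a and b stand in for empirically fitted constants, which depend on
            the source type and counting geometry.
            """
            return a + b * (reals_cps / totals_cps)

        # Made-up numbers: 12% detection efficiency, fitted constants a, b.
        print(emission_rate(totals_cps=480.0, efficiency=0.12))    # ~4000 n/s
        print(pu_mass_from_ratio(36.0, 480.0, a=0.0, b=40.0))      # grams, illustrative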

  17. Development and evaluation of modified envelope correlation method for deep tectonic tremor

    NASA Astrophysics Data System (ADS)

    Mizuno, N.; Ide, S.

    2017-12-01

    We develop a new location method for deep tectonic tremors, as an improvement of the widely used envelope correlation method, and apply it to construct a tremor catalog in western Japan. Using cross-correlation functions as objective functions and weighting data components by the inverse of their error variances, the envelope cross-correlation method is redefined as a maximum likelihood method. The method is also capable of multiple-source detection, because events that occur almost simultaneously appear as distinct local maxima of the likelihood. The average of the weighted cross-correlation functions, defined as ACC, is a nonlinear function of the candidate tremor position. The optimization proceeds in two steps. First, we fix the source depth to 30 km and use a grid search with 0.2-degree intervals to find the maxima of ACC, which serve as candidate event locations. Then, using each candidate location as an initial value, we apply a gradient method to determine the horizontal and vertical components of the hypocenter. Several source locations are sometimes determined within a single 5-minute time window. We estimate the resolution, defined as the minimum distance at which the method can detect two sources separately, to be about 100 km. The validity of this estimate is confirmed by a numerical test using synthetic waveforms. Applied to over 10 years of continuous seismograms in western Japan, the new method detected 27% more tremors than the previous method, owing to multiple-source detection and the improved accuracy afforded by the weighting scheme.
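    A minimal sketch of the redefined objective follows: ACC is the error-weighted average of station-pair envelope cross-correlations, evaluated at lags equal to the differential travel times predicted from a candidate source position. The homogeneous S-wave speed, the function names, and the weighting dictionary are all illustrative assumptions, not the authors' code.

        import itertools
        import math
        import numpy as np

        def acc(source_xy, stations, envelopes, weights, vs=3.5, dt=1.0):
            """Weighted average cross-correlation for one candidate source.

            stations:  list of (x, y) positions in km
            envelopes: equal-length 1-D arrays of smoothed amplitude, one per station
            weights:   dict keyed by station pair (i, j), e.g. inverse error variances
            vs:        assumed S-wave speed, km/s; dt: sample interval, s
            """
            # Predicted travel time from the candidate source to each station.
            tt = [math.hypot(sx - source_xy[0], sy - source_xy[1]) / vs
                  for sx, sy in stations]
            total, wsum = 0.0, 0.0
            for i, j in itertools.combinations(range(len(stations)), 2):
                lag = int(round((tt[j] - tt[i]) / dt))  # differential travel time, samples
                a = envelopes[i]
                b = np.roll(envelopes[j], -lag)         # circular shift: fine for a sketch
                r = np.dot(a - a.mean(), b - b.mean()) / (len(a) * a.std() * b.std())
                total += weights[(i, j)] * r
                wsum += weights[(i, j)]
            return total / wsum

    The grid search then evaluates acc() at 0.2-degree spacing, keeps the local maxima as candidate locations, and hands each one to the gradient refinement the abstract describes.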

  18. Next Generation of Leaching Tests

    EPA Science Inventory

    A corresponding abstract has been cleared for this presentation. The four methods comprising the Leaching Environmental Assessment Framework are described along with the tools to support implementation of the more rigorous and accurate source terms that are developed using LEAF ...

  19. Antibiotic Conditioned Growth Medium of Pseudomonas aeruginosa

    ERIC Educational Resources Information Center

    Benathen, Isaiah A.; Cazeau, Barbara; Joseph, Njeri

    2004-01-01

    A simple method to study the consequences of bacterial antibiosis after interspecific competition between microorganisms is presented. Common microorganisms are used as the test organisms, and Pseudomonas aeruginosa is used as the source of the inhibitory agents.

  20. Polyphasic approach for differentiating Penicillium nordicum from Penicillium verrucosum.

    PubMed

    Berni, E; Degola, F; Cacchioli, C; Restivo, F M; Spotti, E

    2011-04-01

    The aim of this research was to use a polyphasic approach to differentiate Penicillium verrucosum from Penicillium nordicum, to compare different techniques, and to select the most suitable for industrial use. In particular, four approaches were used: (1) a cultural technique with two substrates selective for these species; (2) a recently developed molecular diagnostic test and a RAPD procedure derived from it; (3) an RP-HPLC analysis to quantify ochratoxin A (OTA) production; and (4) an automated system based on fungal carbon-source utilisation (Biolog Microstation™). Thirty strains isolated from meat products and originally identified as P. verrucosum by morphological methods were re-examined by the newer cultural tests and by PCR methods; all were found to belong to P. nordicum. Their biochemical and chemical characterisation supported the results obtained by the cultural and molecular techniques and showed that P. verrucosum and P. nordicum differ in their ability to metabolise carbon sources and in the concentrations of OTA they produce.
