Sample records for equivalent source technique

  1. Identification of active sources inside cavities using the equivalent source method-based free-field recovery technique

    NASA Astrophysics Data System (ADS)

    Bi, Chuan-Xing; Hu, Ding-Yu; Zhang, Yong-Bin; Jing, Wen-Qian

    2015-06-01

    In previous studies, an equivalent source method (ESM)-based technique for recovering the free sound field in a noisy environment has been successfully applied to exterior problems. In order to evaluate its performance when applied to a more general noisy environment, that technique is used to identify active sources inside cavities where the sound field is composed of the field radiated by active sources and that reflected by walls. A patch approach with two semi-closed surfaces covering the target active sources is presented to perform the measurements, and the field that would be radiated by these target active sources into free space is extracted from the mixed field by using the proposed technique, which will be further used as the input of nearfield acoustic holography for source identification. Simulation and experimental results validate the effectiveness of the proposed technique for source identification in cavities, and show the feasibility of performing the measurements with a double layer planar array.

  2. Spherical earth gravity and magnetic anomaly analysis by equivalent point source inversion

    NASA Technical Reports Server (NTRS)

    Von Frese, R. R. B.; Hinze, W. J.; Braile, L. W.

    1981-01-01

To facilitate geologic interpretation of satellite elevation potential field data, analysis techniques are developed and verified in the spherical domain that are commensurate with conventional flat-earth methods of potential field interpretation. A powerful approach to the spherical earth problem relates potential field anomalies to a distribution of equivalent point sources by least squares matrix inversion. Linear transformations of the equivalent source field lead to corresponding geoidal anomalies, pseudo-anomalies, vector anomaly components, spatial derivatives, continuations, and differential magnetic pole reductions. A number of examples using 1 deg-averaged surface free-air gravity anomalies and POGO satellite magnetometer data for the United States, Mexico, and Central America illustrate the capabilities of the method.
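The least-squares equivalent point source step described above can be sketched as follows (a minimal illustration with a simple inverse-square kernel and synthetic station/source geometry; all names and values here are assumptions, not the paper's setup):

```python
import numpy as np

# Observed anomalies g are modeled as A @ m, where A[i, j] is the field at
# station i from a unit point source j, and the source strengths m are
# recovered by least-squares matrix inversion.

def point_source_kernel(stations, sources):
    """A[i, j] = 1 / r_ij**2, a simple inverse-square kernel (illustrative)."""
    d = stations[:, None, :] - sources[None, :, :]      # (Nobs, Nsrc, 3)
    return 1.0 / np.sum(d * d, axis=-1)

rng = np.random.default_rng(0)
sources = rng.uniform(-1, 1, size=(5, 3))
sources[:, 2] = -1.0                                    # buried at unit depth
stations = rng.uniform(-1, 1, size=(40, 3))
stations[:, 2] = 0.0                                    # observation surface

m_true = rng.normal(size=5)
A = point_source_kernel(stations, sources)
g_obs = A @ m_true                                      # synthetic anomaly data

m_est, *_ = np.linalg.lstsq(A, g_obs, rcond=None)       # least-squares inversion
print(np.allclose(m_est, m_true, atol=1e-5))
```

Once the strengths are in hand, any linear transformation of the equivalent source field (continuation, derivatives, pole reduction) reduces to applying a different kernel matrix to the same `m_est`.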

  3. Simulation of scattered fields: Some guidelines for the equivalent source method

    NASA Astrophysics Data System (ADS)

    Gounot, Yves J. R.; Musafir, Ricardo E.

    2011-07-01

Three different approaches to the equivalent source method for simulating scattered fields are compared: two of them deal with monopole sets, the other with multipole expansions. In the first monopole approach, the sources have fixed positions given by specific rules, while in the second (ESGA), the optimal positions are determined via a genetic algorithm. The pros and cons of each of these approaches are discussed with the aim of providing practical guidelines for the user. It is shown that while both monopole techniques furnish quite good pressure field reconstructions with simple source arrangements, ESGA requires significantly fewer monopoles and, for an equal number of sources, yields better precision. As for the multipole technique, the main advantage is that in principle any precision can be reached, provided the source order is sufficiently high. On the other hand, the results point out that the lack of rules for determining the proper multipole order necessary for a desired precision may constitute a handicap for the user.

  4. An equivalent source model of the satellite-altitude magnetic anomaly field over Australia

    NASA Technical Reports Server (NTRS)

    Mayhew, M. A.; Johnson, B. D.; Langel, R. A.

    1980-01-01

The low-amplitude, long-wavelength magnetic anomaly field measured between 400 and 700 km elevation over Australia by the POGO satellites is modeled by means of the equivalent source technique. Magnetic dipole moments are computed for a latitude-longitude array of dipole sources on the earth's surface such that the dipoles collectively give rise to a field which makes a least squares best fit to that observed. The distribution of magnetic moments is converted to a model of apparent magnetization contrast in a layer of constant (40 km) thickness, which contains information equivalent to the lateral variation in the vertical integral of magnetization down to the Curie isotherm and can be transformed to a model of variable thickness magnetization. It is noted that the closest equivalent source spacing giving a stable solution is about 2.5 deg, corresponding to about half the mean data elevation, and that the magnetization distribution correlates well with some of the principal tectonic elements of Australia.

  5. Efficient techniques for wave-based sound propagation in interactive applications

    NASA Astrophysics Data System (ADS)

    Mehra, Ravish

Sound propagation techniques model the effect of the environment on sound waves and predict their behavior from the point of emission at the source to the final point of arrival at the listener. Sound is a pressure wave produced by mechanical vibration of a surface that propagates through a medium such as air or water, and the problem of sound propagation can be formulated mathematically as a second-order partial differential equation called the wave equation. Accurate techniques based on solving the wave equation, also called wave-based techniques, are expensive in both computation and memory. Therefore, these techniques face many challenges in interactive applications, including sound propagation in large environments, time-varying source and listener directivity, and high simulation cost for mid-frequencies. In this dissertation, we propose a set of efficient wave-based sound propagation techniques that solve these three challenges and enable the use of wave-based sound propagation in interactive applications. Firstly, we propose a novel equivalent source technique for interactive wave-based sound propagation in large scenes spanning hundreds of meters. It is based on the equivalent source theory used for solving radiation and scattering problems in acoustics and electromagnetics. Instead of using a volumetric or surface-based approach, this technique takes an object-centric approach to sound propagation. The proposed equivalent source technique generates realistic acoustic effects and requires orders of magnitude less runtime memory than prior wave-based techniques. Secondly, we present an efficient framework for handling time-varying source and listener directivity for interactive wave-based sound propagation. The source directivity is represented as a linear combination of elementary spherical harmonic sources. 
This spherical harmonic-based representation of source directivity can support analytical, data-driven, rotating or time-varying directivity functions at runtime. Unlike previous approaches, the listener directivity approach can be used to compute spatial audio (3D audio) for a moving, rotating listener at interactive rates. Lastly, we propose an efficient GPU-based time-domain solver for the wave equation that enables wave simulation up to the mid-frequency range in tens of minutes on a desktop computer. It is demonstrated that by carefully mapping all the components of the wave simulator to match the parallel processing capabilities of the graphics processors, significant improvement in performance can be achieved compared to CPU-based simulators, while maintaining numerical accuracy. We validate these techniques with offline numerical simulations and measured data recorded in an outdoor scene. We present results of preliminary user evaluations conducted to study the impact of these techniques on users' immersion in virtual environments. We have integrated these techniques with the Half-Life 2 game engine, Oculus Rift head-mounted display, and Xbox game controller to enable users to experience high-quality acoustic effects and spatial audio in the virtual environment.

  6. Identifying equivalent sound sources from aeroacoustic simulations using a numerical phased array

    NASA Astrophysics Data System (ADS)

    Pignier, Nicolas J.; O'Reilly, Ciarán J.; Boij, Susann

    2017-04-01

    An application of phased array methods to numerical data is presented, aimed at identifying equivalent flow sound sources from aeroacoustic simulations. Based on phased array data extracted from compressible flow simulations, sound source strengths are computed on a set of points in the source region using phased array techniques assuming monopole propagation. Two phased array techniques are used to compute the source strengths: an approach using a Moore-Penrose pseudo-inverse and a beamforming approach using dual linear programming (dual-LP) deconvolution. The first approach gives a model of correlated sources for the acoustic field generated from the flow expressed in a matrix of cross- and auto-power spectral values, whereas the second approach results in a model of uncorrelated sources expressed in a vector of auto-power spectral values. The accuracy of the equivalent source model is estimated by computing the acoustic spectrum at a far-field observer. The approach is tested first on an analytical case with known point sources. It is then applied to the example of the flow around a submerged air inlet. The far-field spectra obtained from the source models for two different flow conditions are in good agreement with the spectra obtained with a Ffowcs Williams-Hawkings integral, showing the accuracy of the source model from the observer's standpoint. Various configurations for the phased array and for the sources are used. The dual-LP beamforming approach shows better robustness to changes in the number of probes and sources than the pseudo-inverse approach. The good results obtained with this simulation case demonstrate the potential of the phased array approach as a modelling tool for aeroacoustic simulations.
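The pseudo-inverse step described above can be sketched as follows (a hedged illustration assuming free-field monopole propagation and a synthetic probe geometry; the `monopole_green` kernel and all values are our assumptions, not the paper's configuration):

```python
import numpy as np

# Probe pressures p are modeled as G @ q, where G holds monopole Green's
# functions from candidate source points to the probes; the source
# strengths q are recovered with the Moore-Penrose pseudo-inverse, giving
# a correlated-source model via the cross-spectral matrix q q^H.

def monopole_green(mics, srcs, k):
    r = np.linalg.norm(mics[:, None, :] - srcs[None, :, :], axis=-1)
    return np.exp(-1j * k * r) / (4 * np.pi * r)

k = 2 * np.pi * 1000 / 343.0                      # wavenumber at 1 kHz in air
rng = np.random.default_rng(1)
srcs = rng.uniform(-0.2, 0.2, size=(4, 3))        # candidate source points
mics = rng.uniform(-1.0, 1.0, size=(24, 3)) + np.array([0.0, 0.0, 2.0])

q_true = rng.normal(size=4) + 1j * rng.normal(size=4)
G = monopole_green(mics, srcs, k)
p = G @ q_true                                    # simulated array pressures

q_est = np.linalg.pinv(G) @ p                     # pseudo-inverse inversion
cross_spectra = np.outer(q_est, q_est.conj())     # correlated-source model
print(np.allclose(q_est, q_true, atol=1e-6))
```

The dual-LP deconvolution route in the paper instead yields only the auto-power diagonal of `cross_spectra`, i.e. an uncorrelated-source model.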

  7. Particle swarm optimization and its application in MEG source localization using single time sliced data

    NASA Astrophysics Data System (ADS)

    Lin, Juan; Liu, Chenglian; Guo, Yongning

    2014-10-01

The estimation of neural active sources from magnetoencephalography (MEG) data is a critical issue for both clinical neurology and brain function research. A widely accepted source-modeling technique for MEG involves calculating a set of equivalent current dipoles (ECDs). Source depth in the brain is one of the difficulties in MEG source localization. Particle swarm optimization (PSO) is widely used to solve various optimization problems. In this paper we discuss its ability and robustness in finding the global optimum at different depths in the brain when using the single equivalent current dipole (sECD) model and single time sliced data. The results show that PSO is an effective global optimization method for MEG source localization when given one dipole at different depths.
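A minimal global-best PSO sketch for source localization looks like the following (our toy example: a simple 1/r "lead field" stands in for the real MEG forward model, and all parameters are assumptions):

```python
import numpy as np

# Particles search for the source position that best explains the sensor
# readings; the cost is the squared misfit of a toy 1/r forward model.

rng = np.random.default_rng(2)
sensors = rng.uniform(-1, 1, size=(30, 3))
sensors /= np.linalg.norm(sensors, axis=1, keepdims=True)   # unit "helmet"
true_pos = np.array([0.1, -0.2, 0.3])                       # deep source

def forward(pos):
    return 1.0 / np.linalg.norm(sensors - pos, axis=1)

data = forward(true_pos)
cost = lambda pos: np.sum((forward(pos) - data) ** 2)

# Standard update: v <- w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x)
n, w, c1, c2 = 40, 0.7, 1.5, 1.5
x = rng.uniform(-0.5, 0.5, size=(n, 3))
v = np.zeros_like(x)
pbest, pcost = x.copy(), np.array([cost(p) for p in x])
gbest = pbest[np.argmin(pcost)]

for _ in range(200):
    r1, r2 = rng.random((n, 1)), rng.random((n, 1))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = x + v
    c = np.array([cost(p) for p in x])
    better = c < pcost
    pbest[better], pcost[better] = x[better], c[better]
    gbest = pbest[np.argmin(pcost)]

print(np.linalg.norm(gbest - true_pos))           # localization error
```

Depth sensitivity can then be probed by repeating the run with `true_pos` moved deeper, which is essentially the experiment the abstract describes.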

  8. Implementation issues of the nearfield equivalent source imaging microphone array

    NASA Astrophysics Data System (ADS)

    Bai, Mingsian R.; Lin, Jia-Hong; Tseng, Chih-Wen

    2011-01-01

This paper revisits a nearfield microphone array technique termed nearfield equivalent source imaging (NESI) proposed previously. In particular, various issues concerning the implementation of the NESI algorithm are examined. NESI can be implemented in both the time domain and the frequency domain. Acoustical variables including sound pressure, particle velocity, active intensity and sound power are calculated by using multichannel inverse filters. Issues concerning sensor deployment are also investigated for the nearfield array. The uniform array outperformed a random array previously optimized for far-field imaging, which contradicts the conventional wisdom in far-field arrays. For applications in which only a patch array with scarce sensors is available, a virtual microphone approach is employed to ameliorate edge effects using extrapolation and to improve imaging resolution using interpolation. To enhance the processing efficiency of the time-domain NESI, an eigensystem realization algorithm (ERA) is developed. Several filtering methods are compared in terms of computational complexity. Significant savings in computation can be achieved using ERA and the frequency-domain NESI, as compared to the traditional method. The NESI technique was also experimentally validated using practical sources, including a 125 cc scooter and a wooden box model with a loudspeaker fitted inside, and proved effective in identifying the broadband and non-stationary sound fields produced by these sources.

  9. Spherical-earth Gravity and Magnetic Anomaly Modeling by Gauss-legendre Quadrature Integration

    NASA Technical Reports Server (NTRS)

Von Frese, R. R. B.; Hinze, W. J.; Braile, L. W.; Luca, A. J. (Principal Investigator)

    1981-01-01

    The anomalous potential of gravity and magnetic fields and their spatial derivatives on a spherical Earth for an arbitrary body represented by an equivalent point source distribution of gravity poles or magnetic dipoles were calculated. The distribution of equivalent point sources was determined directly from the coordinate limits of the source volume. Variable integration limits for an arbitrarily shaped body are derived from interpolation of points which approximate the body's surface envelope. The versatility of the method is enhanced by the ability to treat physical property variations within the source volume and to consider variable magnetic fields over the source and observation surface. A number of examples verify and illustrate the capabilities of the technique, including preliminary modeling of potential field signatures for Mississippi embayment crustal structure at satellite elevations.

  10. Multispectral data compression through transform coding and block quantization

    NASA Technical Reports Server (NTRS)

    Ready, P. J.; Wintz, P. A.

    1972-01-01

Transform coding and block quantization techniques are applied to multispectral aircraft scanner data and digitized satellite imagery. The multispectral source is defined and an appropriate mathematical model proposed. The Karhunen-Loeve, Fourier, and Hadamard encoders are considered and are compared to the rate distortion function for the equivalent Gaussian source and to the performance of the single sample PCM encoder.
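The decorrelating role of the Karhunen-Loeve encoder can be shown with a minimal sketch (synthetic band data of our own; not the authors' encoder):

```python
import numpy as np

# Correlated spectral "bands" are rotated onto the eigenvectors of their
# covariance matrix (the KLT), which decorrelates the coefficients so that
# each one can be block-quantized independently with bits allocated by its
# variance (the eigenvalue).

rng = np.random.default_rng(3)
mix = rng.normal(size=(4, 4))                     # correlates 4 bands
pixels = rng.normal(size=(10000, 4)) @ mix.T      # multispectral samples

cov = np.cov(pixels, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
coeffs = pixels @ eigvecs                         # KLT coefficients

cov_coeffs = np.cov(coeffs, rowvar=False)
off_diag = cov_coeffs - np.diag(np.diag(cov_coeffs))
print(np.max(np.abs(off_diag)))                   # near zero: decorrelated
```

The Fourier and Hadamard encoders mentioned in the abstract approximate this optimal rotation with fixed, signal-independent transforms.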

  11. Spherical-earth gravity and magnetic anomaly modeling by Gauss-Legendre quadrature integration

    NASA Technical Reports Server (NTRS)

    Von Frese, R. R. B.; Hinze, W. J.; Braile, L. W.; Luca, A. J.

    1981-01-01

    Gauss-Legendre quadrature integration is used to calculate the anomalous potential of gravity and magnetic fields and their spatial derivatives on a spherical earth. The procedure involves representation of the anomalous source as a distribution of equivalent point gravity poles or point magnetic dipoles. The distribution of equivalent point sources is determined directly from the volume limits of the anomalous body. The variable limits of integration for an arbitrarily shaped body are obtained from interpolations performed on a set of body points which approximate the body's surface envelope. The versatility of the method is shown by its ability to treat physical property variations within the source volume as well as variable magnetic fields over the source and observation surface. Examples are provided which illustrate the capabilities of the technique, including a preliminary modeling of potential field signatures for the Mississippi embayment crustal structure at 450 km.
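The quadrature idea can be sketched in miniature (our illustration with an assumed unit-density cubic body; not the authors' code): the volume integral is evaluated as a weighted sum over Gauss-Legendre nodes, each node acting as an equivalent point source.

```python
import numpy as np

nodes, weights = np.polynomial.legendre.leggauss(8)     # nodes on [-1, 1]

def gl_volume_integral(f, bounds):
    """Integrate f(x, y, z) over a box by tensor-product Gauss-Legendre."""
    def scale(a, b):
        return 0.5 * (b - a) * nodes + 0.5 * (b + a), 0.5 * (b - a) * weights
    (xs, wx), (ys, wy), (zs, wz) = (scale(a, b) for a, b in bounds)
    X, Y, Z = np.meshgrid(xs, ys, zs, indexing="ij")
    W = wx[:, None, None] * wy[None, :, None] * wz[None, None, :]
    return np.sum(W * f(X, Y, Z))

# 1/r potential of a unit-density unit cube, observed on the z-axis at r = 3;
# far from the cube this approaches mass / r = 1/3
obs = np.array([0.0, 0.0, 3.0])
f = lambda x, y, z: 1.0 / np.sqrt(
    (x - obs[0]) ** 2 + (y - obs[1]) ** 2 + (z - obs[2]) ** 2)
U = gl_volume_integral(f, [(-0.5, 0.5)] * 3)
print(abs(U - 1.0 / 3.0) < 1e-3)
```

Arbitrarily shaped bodies, as in the paper, replace the fixed box bounds with variable integration limits interpolated from points on the body's surface envelope.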

  12. A model for jet-noise analysis using pressure-gradient correlations on an imaginary cone

    NASA Technical Reports Server (NTRS)

    Norum, T. D.

    1974-01-01

    The technique for determining the near and far acoustic field of a jet through measurements of pressure-gradient correlations on an imaginary conical surface surrounding the jet is discussed. The necessary analytical developments are presented, and their feasibility is checked by using a point source as the sound generator. The distribution of the apparent sources on the cone, equivalent to the point source, is determined in terms of the pressure-gradient correlations.

  13. PSII as an in vivo molecular catalyst for the production of energy rich hydroquinones - A new approach in renewable energy.

    PubMed

    Das, Sai; Maiti, Soumen K

    2018-03-01

One of the pertinent issues in the field of energy science today is the quest for an abundant source of hydrogen or hydrogen equivalents. In this study, phenyl-p-benzoquinone (pPBQ) has been used to generate a molecular store of hydrogen equivalents (phenyl-p-hydroquinone; pPBQH2) from the in vivo splitting of water by photosystem II of the marine cyanobacterium Synechococcus elongatus BDU 70542. Using this technique, 10.8 μmol of pPBQH2 per mg chlorophyll a can be extracted per minute, an efficiency that is orders of magnitude higher than the techniques reported in the current literature. Moreover, the photo-reduction process was stable when tested over longer periods of time. Addition of phenyl-p-benzoquinone on an intermittent basis resulted in the precipitation of phenyl-p-hydroquinone, obviating the need for costly downstream processing units for product recovery. Phenyl-p-hydroquinone so obtained is a molecular store of free energy preserved through the light driven photolysis of water and can be used as a cheap and renewable source of hydrogen equivalents by employing transition metal catalysts or fuel cells with the concomitant regeneration of phenyl-p-benzoquinone. The cyclic nature of this technique makes it an ideal candidate to be utilized in mankind's transition from fossil fuels to solar fuels. Copyright © 2018 Elsevier B.V. All rights reserved.

  14. Murine cytomegalovirus: detection of latent infection by nucleic acid hybridization technique.

    PubMed Central

    Cheung, K S; Huang, E S; Lang, D J

    1980-01-01

The technique of nucleic acid hybridization was used to detect the presence of murine cytomegalovirus (MCMV)-specific deoxyribonucleic acid (DNA) in cell cultures and salivary gland tissues. The presence of approximately 4.5 and 0.2 genome equivalents per cell of MCMV-specific DNA was identified in cultures of salivary (ISG2) and prostate gland (IP) cells, respectively. These cells, derived from animals with experimentally induced latent infections, were negative for virus-specific antigens by immunofluorescence and on electron microscopy revealed no visible evidence of the presence of herpesviruses. A cell line derived from the salivary gland of an uninoculated animal (NSG2) was also found to possess MCMV-specific DNA (0.2 genome equivalents per cell). For this reason, salivary gland tissues from uninoculated animals supplied as "specific pathogen-free" mice by three commercial sources were tested upon arrival for the presence of MCMV-specific DNA. MCMV-specific DNA was detectable in pooled salivary gland extracts from uninoculated animals derived from two commercial sources. All of these animals were seronegative and virus negative by conventional infectivity assays. PMID:6247281

  15. Enhanced performance for the analysis of prostaglandins and thromboxanes by liquid chromatography-tandem mass spectrometry using a new atmospheric pressure ionization source.

    PubMed

    Lubin, Arnaud; Geerinckx, Suzy; Bajic, Steve; Cabooter, Deirdre; Augustijns, Patrick; Cuyckens, Filip; Vreeken, Rob J

    2016-04-01

Eicosanoids, including prostaglandins and thromboxanes, are lipid mediators synthesized from polyunsaturated fatty acids. They play an important role in cell signaling and are often reported as inflammatory markers. LC-MS/MS is the technique of choice for the analysis of these compounds, often in combination with advanced sample preparation techniques. Here we report a head-to-head comparison between an electrospray ionization source (ESI) and a new atmospheric pressure ionization source (UniSpray). The performance of both interfaces was evaluated in various matrices such as human plasma, pig colon and mouse colon. The UniSpray source shows an increase in method sensitivity of up to a factor of 5. Equivalent or better linearity and repeatability in the various matrices, as well as increased signal intensity, were observed in comparison to ESI. Copyright © 2016 Elsevier B.V. All rights reserved.

  16. Active room compensation for sound reinforcement using sound field separation techniques.

    PubMed

    Heuchel, Franz M; Fernandez-Grande, Efren; Agerkvist, Finn T; Shabalina, Elena

    2018-03-01

This work investigates how the sound field created by a sound reinforcement system can be controlled at low frequencies. An indoor control method is proposed which actively absorbs the sound incident on a reflecting boundary using an array of secondary sources. The sound field is separated into incident and reflected components by a microphone array close to the secondary sources, enabling the minimization of reflected components by means of optimal signals for the secondary sources. The method is purely feed-forward and assumes constant room conditions. Three different sound field separation techniques for the modeling of the reflections are investigated, based on plane wave decomposition, equivalent sources, and the spatial Fourier transform. Simulations and an experimental validation are presented, showing that the control method performs similarly well at enhancing low frequency responses with the three sound separation techniques. Resonances in the entire room are reduced, although the microphone array and secondary sources are confined to a small region close to the reflecting wall. Unlike previous control methods based on the creation of a plane wave sound field, the investigated method works in arbitrary room geometries and primary source positions.

  17. Effects of finite ground plane on the radiation characteristics of a circular patch antenna

    NASA Astrophysics Data System (ADS)

    Bhattacharyya, Arun K.

    1990-02-01

    An analytical technique to determine the effects of finite ground plane on the radiation characteristics of a microstrip antenna is presented. The induced currents on the ground plane and on the upper surface of the patch are determined from the discontinuity of the near field produced by the equivalent magnetic current source on the physical aperture of the patch. The radiated fields contributed by the induced current on the ground plane and the equivalent sources on the physical aperture yield the radiation pattern of the antenna. Radiation patterns of the circular patch with finite ground plane size are computed and compared with the experimental data, and the agreement is found to be good. The radiation pattern, directive gain, and input impedance are found to vary widely with the ground plane size.

  18. Noise analysis for CCD-based ultraviolet and visible spectrophotometry.

    PubMed

    Davenport, John J; Hodgkinson, Jane; Saffell, John R; Tatam, Ralph P

    2015-09-20

We present the results of a detailed analysis of the noise behavior of two CCD spectrometers in common use, an AvaSpec-3648 CCD UV spectrometer and an Ocean Optics S2000 Vis spectrometer. Light sources used include a deuterium UV/Vis lamp and UV and visible LEDs. Common noise phenomena include source fluctuation noise, photoresponse nonuniformity, dark current noise, fixed pattern noise, and read noise. These were identified and characterized by varying light source, spectrometer settings, or temperature. A number of noise-limiting techniques are proposed, demonstrating a best-case spectroscopic noise equivalent absorbance of 3.5×10⁻⁴ AU for the AvaSpec-3648 and 5.6×10⁻⁴ AU for the Ocean Optics S2000 over a 30 s integration period. These techniques can be used on other CCD spectrometers to optimize performance.
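The noise equivalent absorbance figure can be illustrated with a toy calculation (our own simplified definition for the demo: the standard deviation of the absorbance measured on a blank, with assumed intensity and noise values):

```python
import numpy as np

# Absorbance A = -log10(I / I0), so intensity noise on I maps into a noise
# floor on A; averaging reads over a longer integration period lowers it.

rng = np.random.default_rng(4)
I0 = 10000.0                                     # reference intensity (counts)
reads = I0 + rng.normal(0.0, 20.0, size=3000)    # repeated blank readings

A = -np.log10(reads / I0)
nea_single = np.std(A)                           # single-read noise floor

# Averaging groups of 30 reads reduces the noise roughly as 1/sqrt(30)
A_avg = -np.log10(reads.reshape(-1, 30).mean(axis=1) / I0)
nea_avg = np.std(A_avg)
print(nea_avg < nea_single)
```

With these assumed numbers the single-read floor is of order 10⁻³ AU, the same order as the figures quoted in the abstract.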

  19. Four-point probe measurements using current probes with voltage feedback to measure electric potentials

    NASA Astrophysics Data System (ADS)

    Lüpke, Felix; Cuma, David; Korte, Stefan; Cherepanov, Vasily; Voigtländer, Bert

    2018-02-01

We present a four-point probe resistance measurement technique which uses four equivalent current measuring units, resulting in minimal hardware requirements and corresponding sources of noise. Local sample potentials are measured by a software feedback loop which adjusts the corresponding tip voltage such that no current flows to the sample. The resulting tip voltage is then equivalent to the sample potential at the tip position. We implement this measurement method into a multi-tip scanning tunneling microscope setup such that potentials can also be measured in tunneling contact, allowing in principle truly non-invasive four-probe measurements. The resulting measurement capabilities are demonstrated for …
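The voltage-feedback idea can be sketched as a toy loop (our illustration: a linear tip-sample conductance stands in for the real tunneling junction, and all values are assumed):

```python
# The tip voltage is stepped against the measured current until no current
# flows; at that point the tip voltage equals the local sample potential.

sample_potential = 0.137            # "unknown" local potential (V)
conductance = 1e-6                  # assumed tip-sample conductance (S)

def tip_current(v_tip):
    return conductance * (v_tip - sample_potential)

v, gain = 0.0, 1e5                  # simple integral feedback on the current
for _ in range(1000):
    v -= gain * tip_current(v)

print(abs(v - sample_potential) < 1e-9)
```

Because the converged tip draws no current, the probe does not perturb the potential it measures, which is the non-invasiveness the abstract emphasizes.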

  20. Separation of non-stationary multi-source sound field based on the interpolated time-domain equivalent source method

    NASA Astrophysics Data System (ADS)

    Bi, Chuan-Xing; Geng, Lin; Zhang, Xiao-Zheng

    2016-05-01

    In the sound field with multiple non-stationary sources, the measured pressure is the sum of the pressures generated by all sources, and thus cannot be used directly for studying the vibration and sound radiation characteristics of every source alone. This paper proposes a separation model based on the interpolated time-domain equivalent source method (ITDESM) to separate the pressure field belonging to every source from the non-stationary multi-source sound field. In the proposed method, ITDESM is first extended to establish the relationship between the mixed time-dependent pressure and all the equivalent sources distributed on every source with known location and geometry information, and all the equivalent source strengths at each time step are solved by an iterative solving process; then, the corresponding equivalent source strengths of one interested source are used to calculate the pressure field generated by that source alone. Numerical simulation of two baffled circular pistons demonstrates that the proposed method can be effective in separating the non-stationary pressure generated by every source alone in both time and space domains. An experiment with two speakers in a semi-anechoic chamber further evidences the effectiveness of the proposed method.
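The separation idea can be sketched in a static single-snapshot analogue (our simplification: a real-valued 1/(4πr) kernel instead of the interpolated time-domain solver, with assumed geometry and values): equivalent sources are placed on both known source geometries, their strengths are solved jointly from the mixed field, and the strengths belonging to one source then reconstruct its field alone.

```python
import numpy as np

def kernel(field_pts, src_pts):                   # simple 1/(4*pi*r) kernel
    r = np.linalg.norm(field_pts[:, None, :] - src_pts[None, :, :], axis=-1)
    return 1.0 / (4 * np.pi * r)

rng = np.random.default_rng(5)
src1 = rng.uniform(-0.2, 0.2, size=(6, 3)) + np.array([-0.5, 0.0, 0.0])
src2 = rng.uniform(-0.2, 0.2, size=(6, 3)) + np.array([+0.5, 0.0, 0.0])
mics = rng.uniform(-1.0, 1.0, size=(60, 3))       # surrounding probe points

q1, q2 = rng.normal(size=6), rng.normal(size=6)
p_mixed = kernel(mics, src1) @ q1 + kernel(mics, src2) @ q2

# Joint inversion over the equivalent sources of both geometries
G = np.hstack([kernel(mics, src1), kernel(mics, src2)])
q, *_ = np.linalg.lstsq(G, p_mixed, rcond=None)

p1_separated = kernel(mics, src1) @ q[:6]         # field of source 1 alone
p1_true = kernel(mics, src1) @ q1
err = np.linalg.norm(p1_separated - p1_true) / np.linalg.norm(p1_true)
print(err < 1e-4)
```

The paper's method repeats this joint solve at every time step with the convective time-domain kernel, which is where the iterative solving process comes in.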

  1. Measurement of the ambient gamma dose equivalent and kerma from the small 252Cf source at 1 meter and the small 60Co source at 2 meters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carl, W. F.

NASA Langley Research Center requested a measurement and determination of the ambient gamma dose equivalent rate and kerma at 100 cm from the 252Cf source and determination of the ambient gamma dose equivalent rate and kerma at 200 cm from the 60Co source for the Radiation Budget Instrument Experiment (Rad-X). An Exradin A6 ion chamber with Shonka air-equivalent plastic walls, in combination with a Supermax electrometer, was used to measure the exposure rate and free-in-air kerma rate of the two sources at the requested distances. The measured gamma exposure, kerma, and dose equivalent rates are tabulated.

  2. Sound source identification and sound radiation modeling in a moving medium using the time-domain equivalent source method.

    PubMed

    Zhang, Xiao-Zheng; Bi, Chuan-Xing; Zhang, Yong-Bin; Xu, Liang

    2015-05-01

    Planar near-field acoustic holography has been successfully extended to reconstruct the sound field in a moving medium, however, the reconstructed field still contains the convection effect that might lead to the wrong identification of sound sources. In order to accurately identify sound sources in a moving medium, a time-domain equivalent source method is developed. In the method, the real source is replaced by a series of time-domain equivalent sources whose strengths are solved iteratively by utilizing the measured pressure and the known convective time-domain Green's function, and time averaging is used to reduce the instability in the iterative solving process. Since these solved equivalent source strengths are independent of the convection effect, they can be used not only to identify sound sources but also to model sound radiations in both moving and static media. Numerical simulations are performed to investigate the influence of noise on the solved equivalent source strengths and the effect of time averaging on reducing the instability, and to demonstrate the advantages of the proposed method on the source identification and sound radiation modeling.

  3. 40 CFR 63.55 - Maximum achievable control technology (MACT) determinations for affected sources subject to case...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... (MACT) determinations for affected sources subject to case-by-case determination of equivalent emission... sources subject to case-by-case determination of equivalent emission limitations. (a) Requirements for... hazardous air pollutant emissions limitations equivalent to the limitations that would apply if an emission...

  4. Functional equivalency inferred from "authoritative sources" in networks of homologous proteins.

    PubMed

    Natarajan, Shreedhar; Jakobsson, Eric

    2009-06-12

A one-on-one mapping of protein functionality across different species is a critical component of comparative analysis. This paper presents a heuristic algorithm for discovering the Most Likely Functional Counterparts (MoLFunCs) of a protein, based on simple concepts from network theory. A key feature of our algorithm is utilization of the user's knowledge to assign high confidence to selected functional identification. We show use of the algorithm to retrieve functional equivalents for 7 membrane proteins, from an exploration of almost 40 genomes from multiple online resources. We verify the functional equivalency of our dataset through a series of tests that include sequence, structure and function comparisons. Comparison is made to the OMA methodology, which also identifies one-on-one mapping between proteins from different species. Based on that comparison, we believe that incorporation of the user's knowledge as a key aspect of the technique adds value to purely statistical formal methods.

  5. Regression Verification Using Impact Summaries

    NASA Technical Reports Server (NTRS)

    Backes, John; Person, Suzette J.; Rungta, Neha; Thachuk, Oksana

    2013-01-01

Regression verification techniques are used to prove equivalence of syntactically similar programs. Checking equivalence of large programs, however, can be computationally expensive. Existing regression verification techniques rely on abstraction and decomposition techniques to reduce the computational effort of checking equivalence of the entire program. These techniques are sound but not complete. In this work, we propose a novel approach to improve scalability of regression verification by classifying the program behaviors generated during symbolic execution as either impacted or unimpacted. Our technique uses a combination of static analysis and symbolic execution to generate summaries of impacted program behaviors. The impact summaries are then checked for equivalence using an off-the-shelf decision procedure. We prove that our approach is both sound and complete for sequential programs, with respect to the depth bound of symbolic execution. Our evaluation on a set of sequential C artifacts shows that reducing the size of the summaries can help reduce the cost of software equivalence checking. Various reduction, abstraction, and compositional techniques have been developed to help scale software verification techniques to industrial-sized systems. Although such techniques have greatly increased the size and complexity of systems that can be checked, analysis of large software systems remains costly. Regression analysis techniques, e.g., regression testing [16], regression model checking [22], and regression verification [19], restrict the scope of the analysis by leveraging the differences between program versions. These techniques are based on the idea that if code is checked early in development, then subsequent versions can be checked against a prior (checked) version, leveraging the results of the previous analysis to reduce the analysis cost of the current version. 
Regression verification addresses the problem of proving equivalence of closely related program versions [19]. These techniques compare two programs with a large degree of syntactic similarity to prove that portions of one program version are equivalent to the other. Regression verification can be used for guaranteeing backward compatibility, and for showing behavioral equivalence in programs with syntactic differences, e.g., when a program is refactored to improve its performance, maintainability, or readability. Existing regression verification techniques leverage similarities between program versions by using abstraction and decomposition techniques to improve scalability of the analysis [10, 12, 19]. The abstractions and decomposition in these techniques, e.g., summaries of unchanged code [12] or semantically equivalent methods [19], compute an over-approximation of the program behaviors. The equivalence checking results of these techniques are sound but not complete: they may characterize programs as not functionally equivalent when, in fact, they are equivalent. In this work we describe a novel approach that leverages the impact of the differences between two programs for scaling regression verification. We partition program behaviors of each version into (a) behaviors impacted by the changes and (b) behaviors not impacted (unimpacted) by the changes. Only the impacted program behaviors are used during equivalence checking. We then prove that checking equivalence of the impacted program behaviors is equivalent to checking equivalence of all program behaviors for a given depth bound. In this work we use symbolic execution to generate the program behaviors and leverage control- and data-dependence information to facilitate the partitioning of program behaviors. The impacted program behaviors are termed impact summaries.
The dependence analyses that facilitate the generation of the impact summaries, we believe, could be used in conjunction with other abstraction- and decomposition-based approaches [10, 12] as a complementary reduction technique. An evaluation of our regression verification technique shows that our approach is capable of leveraging similarities between program versions to reduce the size of the queries and the time required to check for logical equivalence. The main contributions of this work are:
- A regression verification technique to generate impact summaries that can be checked for functional equivalence using an off-the-shelf decision procedure.
- A proof that our approach is sound and complete with respect to the depth bound of symbolic execution.
- An implementation of our technique using the LLVM compiler infrastructure, the klee Symbolic Virtual Machine [4], and a variety of Satisfiability Modulo Theory (SMT) solvers, e.g., STP [7] and Z3 [6].
- An empirical evaluation on a set of C artifacts which shows that the use of impact summaries can reduce the cost of regression verification.
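The partitioning idea above can be sketched in a few lines. This is a toy stand-in, not the paper's tool: behaviors are (variable-set, function) pairs instead of symbolic path conditions, name overlap replaces real control/data-dependence analysis, and concrete-input comparison stands in for the SMT query.

```python
# Sketch: partition program "behaviors" into impacted / unimpacted sets
# given the variables touched by a change, then check equivalence only
# on the impacted partition.

def partition(behaviors, changed_vars):
    """A behavior is impacted if it mentions a variable affected by the
    change (simple name overlap in place of dependence analysis)."""
    impacted, unimpacted = [], []
    for cond_vars, fn in behaviors:
        (impacted if cond_vars & changed_vars else unimpacted).append(fn)
    return impacted, unimpacted

def equivalent(impacted_v1, impacted_v2, inputs):
    """Stand-in for an off-the-shelf decision procedure: compare the
    impacted summaries on a set of concrete inputs."""
    out = lambda fns, x: sorted(f(x) for f in fns)
    return all(out(impacted_v1, x) == out(impacted_v2, x) for x in inputs)

# Two versions differing only in how they compute the y-dependent path.
v1 = [({"x"}, lambda x: x + 1), ({"y"}, lambda x: 2 * x)]
v2 = [({"x"}, lambda x: x + 1), ({"y"}, lambda x: x * 2)]

imp1, _ = partition(v1, {"y"})
imp2, _ = partition(v2, {"y"})
print(equivalent(imp1, imp2, range(10)))   # only one behavior per side checked
```

Only the single impacted behavior of each version reaches the (stand-in) decision procedure; the unimpacted `x + 1` behavior is excluded from the check, which is the source of the scalability gain.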

  6. Radiation exposure from consumer products and miscellaneous sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1977-01-01

This review of the literature indicates that there is a variety of consumer products and miscellaneous sources of radiation that result in exposure to the U.S. population. A summary of the number of people exposed to each such source, an estimate of the resulting dose equivalents to the exposed population, and an estimate of the average annual population dose equivalent are tabulated. A review of the data in this table shows that the total average annual contribution to the whole-body dose equivalent of the U.S. population from consumer products is less than 5 mrem; about 70 percent of this arises from the presence of naturally-occurring radionuclides in building materials. Some of the consumer product sources contribute exposure mainly to localized tissues or organs. Such localized estimates include: 0.5 to 1 mrem to the average annual population lung dose equivalent (generalized); 2 rem to the average annual population bronchial epithelial dose equivalent (localized); and 10 to 15 rem to the average annual population basal mucosal dose equivalent (basal mucosa of the gum). Based on these estimates, these sources may be grouped or classified as those that involve many people and a relatively large dose equivalent, those that involve many people but a relatively small dose equivalent, or those with a relatively large dose equivalent but a small number of people involved.

  7. Development of an Efficient Binaural Simulation for the Analysis of Structural Acoustic Data

    NASA Technical Reports Server (NTRS)

    Johnson, Marty E.; Lalime, Aimee L.; Grosveld, Ferdinand W.; Rizzi, Stephen A.; Sullivan, Brenda M.

    2003-01-01

Applying binaural simulation techniques to structural acoustic data can be very computationally intensive, as the number of discrete noise sources can be very large. Typically, Head Related Transfer Functions (HRTFs) are used to individually filter the signals from each of the sources in the acoustic field. Therefore, creating a binaural simulation implies the use of potentially hundreds of real-time filters. This paper details two methods of reducing the number of real-time computations required: (i) using the singular value decomposition (SVD) to reduce the complexity of the HRTFs by breaking them into dominant singular values and vectors, and (ii) using equivalent source reduction (ESR) to reduce the number of sources to be analyzed in real time by replacing sources on the scale of a structural wavelength with sources on the scale of an acoustic wavelength. The ESR and SVD reduction methods can be combined to provide an estimated computation time reduction of 99.4% for the structural acoustic data tested. In addition, preliminary tests have shown that there is a 97% correlation between the results of the combined reduction methods and the results found with the current binaural simulation techniques.

  8. On the optimality of a universal noiseless coder

    NASA Technical Reports Server (NTRS)

    Yeh, Pen-Shu; Rice, Robert F.; Miller, Warner H.

    1993-01-01

Rice developed a universal noiseless coding structure that provides efficient performance over an extremely broad range of source entropy. This is accomplished by adaptively selecting the best of several easily implemented variable length coding algorithms. Variations of such noiseless coders have been used in many NASA applications. Custom VLSI coder and decoder modules capable of processing over 50 million samples per second have been fabricated and tested. In this study, the first of the code options used in this module development is shown to be equivalent to a class of Huffman code under the Humblet condition, for source symbol sets having a Laplacian distribution. Except for the default option, other options are shown to be equivalent to the Huffman codes of a modified Laplacian symbol set, at specified symbol entropy values. Simulation results are obtained on actual aerial imagery over a wide entropy range, and they confirm the optimality of the scheme. Comparisons with other known techniques are performed on several widely used images, and the results further validate the coder's optimality.
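The adaptive selection among easily implemented variable-length codes can be illustrated with a Rice (Golomb power-of-two) coder: each nonnegative sample is split into a unary quotient and k low-order bits, and the coder picks, per block, the parameter k that minimizes total code length. This is a generic sketch of the coding family, not the specific option set of the NASA modules.

```python
# Rice / Golomb power-of-two coding with per-block adaptive parameter k.

def rice_encode(n, k):
    """Unary quotient (q ones, then a zero), followed by k remainder bits."""
    q, r = n >> k, n & ((1 << k) - 1)
    return "1" * q + "0" + format(r, "b").zfill(k) if k else "1" * q + "0"

def rice_decode(bits, k):
    q = bits.index("0")                       # length of the unary prefix
    r = int(bits[q + 1:q + 1 + k], 2) if k else 0
    return (q << k) | r

def best_k(block, k_max=8):
    """Adaptive option selection: choose k minimizing the block's length."""
    cost = lambda k: sum((n >> k) + 1 + k for n in block)
    return min(range(k_max + 1), key=cost)

samples = [3, 0, 7, 2, 12, 1]
k = best_k(samples)
coded = [rice_encode(n, k) for n in samples]
assert all(rice_decode(c, k) == n for c, n in zip(coded, samples))
```

Low-entropy blocks select a small k (short remainders, cheap unary parts); high-entropy blocks select a larger k, which is what lets one structure track a broad entropy range.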

  9. Part II: Biomechanical assessment for a footprint-restoring transosseous-equivalent rotator cuff repair technique compared with a double-row repair technique.

    PubMed

    Park, Maxwell C; Tibone, James E; ElAttrache, Neal S; Ahmad, Christopher S; Jun, Bong-Jae; Lee, Thay Q

    2007-01-01

    We hypothesized that a transosseous-equivalent repair would demonstrate improved tensile strength and gap formation between the tendon and tuberosity when compared with a double-row technique. In 6 fresh-frozen human shoulders, a transosseous-equivalent rotator cuff repair was performed: a suture limb from each of two medial anchors was bridged over the tendon and fixed laterally with an interference screw. In 6 contralateral matched-pair specimens, a double-row repair was performed. For all repairs, a materials testing machine was used to load each repair cyclically from 10 N to 180 N for 30 cycles; each repair underwent tensile testing to measure failure loads at a deformation rate of 1 mm/sec. Gap formation between the tendon edge and insertion was measured with a video digitizing system. The mean ultimate load to failure was significantly greater for the transosseous-equivalent technique (443.0 +/- 87.8 N) compared with the double-row technique (299.2 +/- 52.5 N) (P = .043). Gap formation during cyclic loading was not significantly different between the transosseous-equivalent and double-row techniques, with mean values of 3.74 +/- 1.51 mm and 3.79 +/- 0.68 mm, respectively (P = .95). Stiffness for all cycles was not statistically different between the two constructs (P > .40). The transosseous-equivalent rotator cuff repair technique improves ultimate failure loads when compared with a double-row technique. Gap formation is similar for both techniques. A transosseous-equivalent repair helps restore footprint dimensions and provides a stronger repair than the double-row technique, which may help optimize healing biology.

  10. Theoretical considerations and a simple method for measuring alkalinity and acidity in low-pH waters by Gran titration

    USGS Publications Warehouse

    Barringer, J.L.; Johnsson, P.A.

    1996-01-01

    Titrations for alkalinity and acidity using the technique described by Gran (1952, Determination of the equivalence point in potentiometric titrations, Part II: The Analyst, v. 77, p. 661-671) have been employed in the analysis of low-pH natural waters. This report includes a synopsis of the theory and calculations associated with Gran's technique and presents a simple and inexpensive method for performing alkalinity and acidity determinations. However, potential sources of error introduced by the chemical character of some waters may limit the utility of Gran's technique. Therefore, the cost- and time-efficient method for performing alkalinity and acidity determinations described in this report is useful for exploring the suitability of Gran's technique in studies of water chemistry.
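Gran's linearization can be sketched numerically: past the equivalence point of a strong-acid titration, the Gran function F = (V0 + V)·10^(-pH) grows linearly with titrant volume V, so a least-squares line extrapolated to F = 0 recovers the equivalence volume. The data below are synthetic, constructed to be exactly consistent with that model; they are not from the report.

```python
# Gran titration: fit F = (V0 + V) * 10**(-pH) vs. V past the
# equivalence point and extrapolate the line to F = 0.

import math

def gran_equivalence(V0, volumes, pHs):
    F = [(V0 + v) * 10 ** (-pH) for v, pH in zip(volumes, pHs)]
    n = len(volumes)
    mx, my = sum(volumes) / n, sum(F) / n
    slope = sum((v - mx) * (f - my) for v, f in zip(volumes, F)) / \
            sum((v - mx) ** 2 for v in volumes)
    return mx - my / slope        # V where the fitted line crosses F = 0

# Synthetic post-equivalence data: V0 = 50 mL sample, true Ve = 2.0 mL,
# 0.1 mol/L titrant, so [H+] = 0.1 * (V - Ve) / (V0 + V).
V0, Ve, Ct = 50.0, 2.0, 0.1
vols = [2.5, 3.0, 3.5, 4.0]
pHs = [-math.log10(Ct * (v - Ve) / (V0 + v)) for v in vols]
print(round(gran_equivalence(V0, vols, pHs), 3))   # recovers 2.0
```

The alkalinity then follows from the recovered equivalence volume as Ve·Ct/V0 equivalents per liter; acidity determinations use the analogous linearization on the base-titration side.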

  11. Advanced analysis technique for the evaluation of linear alternators and linear motors

    NASA Technical Reports Server (NTRS)

    Holliday, Jeffrey C.

    1995-01-01

    A method for the mathematical analysis of linear alternator and linear motor devices and designs is described, and an example of its use is included. The technique seeks to surpass other methods of analysis by including more rigorous treatment of phenomena normally omitted or coarsely approximated such as eddy braking, non-linear material properties, and power losses generated within structures surrounding the device. The technique is broadly applicable to linear alternators and linear motors involving iron yoke structures and moving permanent magnets. The technique involves the application of Amperian current equivalents to the modeling of the moving permanent magnet components within a finite element formulation. The resulting steady state and transient mode field solutions can simultaneously account for the moving and static field sources within and around the device.

  12. Comparison of sound reproduction using higher order loudspeakers and equivalent line arrays in free-field conditions.

    PubMed

    Poletti, Mark A; Betlehem, Terence; Abhayapala, Thushara D

    2014-07-01

Higher order sound sources of Nth order can radiate sound with 2N + 1 orthogonal radiation patterns, which can be represented as phase modes or, equivalently, amplitude modes. This paper shows that each phase mode response produces a spiral wave front with a different spiral rate, and therefore a different direction of arrival of sound. Hence, for a given receiver position a higher order source is equivalent to a linear array of 2N + 1 monopole sources. This interpretation suggests performance similar to a circular array of higher order sources can be produced by an array of sources, each of which consists of a line array having monopoles at the apparent source locations of the corresponding phase modes. Simulations of higher order arrays and arrays of equivalent line sources are presented. It is shown that the interior fields produced by the two arrays are essentially the same, but that the exterior fields differ because the higher order sources produce different equivalent source locations for field positions outside the array. This work provides an explanation of the fact that an array of L Nth-order sources can reproduce sound fields whose accuracy approaches the performance of (2N + 1)L monopoles.
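The equivalent-line-array picture is easy to evaluate numerically: the field of one Nth-order source is approximated by superposing the free-space Green's function exp(ikr)/(4πr) over 2N + 1 monopoles. The spacing and unit weights below are illustrative placeholders, not the paper's derived apparent-source locations or phase-mode weights.

```python
# Complex pressure at a receiver from a line of 2N + 1 monopoles,
# each contributing the free-space Green's function exp(ikr)/(4*pi*r).

import cmath, math

def monopole_field(sources, x, y, k):
    """Sum of monopole contributions from (xs, ys, weight) triples."""
    p = 0j
    for xs, ys, w in sources:
        r = math.hypot(x - xs, y - ys)
        p += w * cmath.exp(1j * k * r) / (4 * math.pi * r)
    return p

k = 2 * math.pi * 500 / 343        # wavenumber at 500 Hz, c = 343 m/s
N = 2                              # source order -> 2N + 1 monopoles
line = [(3.0, 0.05 * m, 1.0) for m in range(-N, N + 1)]  # illustrative geometry
p = monopole_field(line, 0.0, 0.0, k)
```

Sweeping the receiver over interior and exterior positions and comparing against a true higher-order source would reproduce the paper's observation that the two fields agree inside the array but diverge outside.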

  13. Solving fully fuzzy transportation problem using pentagonal fuzzy numbers

    NASA Astrophysics Data System (ADS)

    Maheswari, P. Uma; Ganesan, K.

    2018-04-01

In this paper, we propose a simple approach for the solution of the fuzzy transportation problem under a fuzzy environment in which the transportation costs, supplies at sources and demands at destinations are represented by pentagonal fuzzy numbers. The fuzzy transportation problem is solved without converting to its equivalent crisp form, using a robust ranking technique and a new fuzzy arithmetic on pentagonal fuzzy numbers. To illustrate the proposed approach, a numerical example is provided.
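A minimal sketch of the two ingredients, under simplifying assumptions: a pentagonal fuzzy number is a 5-tuple (a1, ..., a5), component-wise addition stands in for the paper's fuzzy arithmetic, and a plain average of the five points stands in for the robust ranking technique (the paper's actual ranking rule is not reproduced here).

```python
# Pentagonal fuzzy numbers: illustrative ranking and addition rules.

def rank(p):
    """Crisp representative of a pentagonal fuzzy number (simple mean)."""
    return sum(p) / 5.0

def fuzzy_add(p, q):
    """Component-wise addition of two pentagonal fuzzy numbers."""
    return tuple(a + b for a, b in zip(p, q))

fuzzy_costs = {("S1", "D1"): (1, 2, 3, 4, 5),
               ("S1", "D2"): (2, 4, 6, 8, 10)}
crisp = {route: rank(p) for route, p in fuzzy_costs.items()}
print(crisp)   # {('S1', 'D1'): 3.0, ('S1', 'D2'): 6.0}
```

Because this ranking is linear, rank(fuzzy_add(p, q)) = rank(p) + rank(q), so comparisons of total fuzzy cost are consistent whether ranking is applied before or after summing along a transportation plan.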

  14. Assessment of normal tissue complications following prostate cancer irradiation: Comparison of radiation treatment modalities using NTCP models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Takam, Rungdham; Bezak, Eva; Yeoh, Eric E.

    2010-09-15

Purpose: Normal tissue complication probability (NTCP) of the rectum, bladder, urethra, and femoral heads following several techniques for radiation treatment of prostate cancer was evaluated applying the relative seriality and Lyman models. Methods: Model parameters from the literature were used in this evaluation. The treatment techniques included external (standard fractionated, hypofractionated, and dose-escalated) three-dimensional conformal radiotherapy (3D-CRT), low-dose-rate (LDR) brachytherapy (I-125 seeds), and high-dose-rate (HDR) brachytherapy (Ir-192 source). Dose-volume histograms (DVHs) of the rectum, bladder, and urethra retrieved from corresponding treatment planning systems were converted to biological effective dose-based and equivalent dose-based DVHs, respectively, in order to account for differences in radiation treatment modality and fractionation schedule. Results: Results indicated that with hypofractionated 3D-CRT (20 fractions of 2.75 Gy/fraction delivered five times/week to a total dose of 55 Gy), NTCPs of the rectum, bladder, and urethra were less than those for standard fractionated 3D-CRT using a four-field technique (32 fractions of 2 Gy/fraction delivered five times/week to a total dose of 64 Gy) and dose-escalated 3D-CRT. Rectal and bladder NTCPs (5.2% and 6.6%, respectively) following the dose-escalated four-field 3D-CRT (2 Gy/fraction to a total dose of 74 Gy) were the highest among the analyzed treatment techniques. The average NTCPs for the rectum and urethra were 0.6% and 24.7% for LDR-BT and 0.5% and 11.2% for HDR-BT. Conclusions: Although brachytherapy techniques resulted in delivering larger equivalent doses to normal tissues, the corresponding NTCPs (other than for the urethra) were lower than those of the external beam techniques because of the much smaller volumes irradiated to higher doses.
Among the analyzed normal tissues, the femoral heads were found to have the lowest probability of complications, as most of their volume was irradiated to lower equivalent doses compared to other tissues.
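The Lyman model named above reduces, for a uniform (or equivalent uniform) dose, to a probit curve: NTCP = Φ((EUD − TD50)/(m·TD50)), with Φ the standard normal CDF. The sketch below uses illustrative rectum-like parameters (TD50 = 76.9 Gy, m = 0.13), which are common literature values but not necessarily those used in this study.

```python
# Lyman NTCP for an equivalent uniform dose EUD, via the standard
# normal CDF expressed with math.erf.

import math

def lyman_ntcp(eud, td50, m):
    t = (eud - td50) / (m * td50)
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

print(round(lyman_ntcp(76.9, 76.9, 0.13), 3))   # 0.5 at EUD = TD50
print(round(lyman_ntcp(60.0, 76.9, 0.13), 3))   # low complication probability
```

TD50 is the uniform dose giving a 50% complication rate and m sets the slope; the full Lyman-Kutcher-Burman pipeline first collapses the DVH to an EUD with a volume parameter n before applying this curve.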

  15. Improving the geological interpretation of magnetic and gravity satellite anomalies

    NASA Technical Reports Server (NTRS)

    Hinze, William J.; Braile, Lawrence W.; Von Frese, Ralph R. B.

    1987-01-01

    Quantitative analysis of the geologic component of observed satellite magnetic and gravity fields requires accurate isolation of the geologic component of the observations, theoretically sound and viable inversion techniques, and integration of collateral, constraining geologic and geophysical data. A number of significant contributions were made which make quantitative analysis more accurate. These include procedures for: screening and processing orbital data for lithospheric signals based on signal repeatability and wavelength analysis; producing accurate gridded anomaly values at constant elevations from the orbital data by three-dimensional least squares collocation; increasing the stability of equivalent point source inversion and criteria for the selection of the optimum damping parameter; enhancing inversion techniques through an iterative procedure based on the superposition theorem of potential fields; and modeling efficiently regional-scale lithospheric sources of satellite magnetic anomalies. In addition, these techniques were utilized to investigate regional anomaly sources of North and South America and India and to provide constraints to continental reconstruction. Since the inception of this research study, eleven papers were presented with associated published abstracts, three theses were completed, four papers were published or accepted for publication, and an additional manuscript was submitted for publication.
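The damped equivalent point source inversion mentioned above amounts to ridge-regularized least squares: solve (AᵀA + λI)m = Aᵀb, where A maps source strengths to observed anomaly values and the damping parameter λ stabilizes the inversion. The kernel and geometry below are a toy 1-D illustration, not the spherical-earth formulation.

```python
# Damped (ridge) least-squares inversion for equivalent point sources.

def solve(M, v):
    """Gaussian elimination with partial pivoting for a small system."""
    n = len(v)
    A = [row[:] + [v[i]] for i, row in enumerate(M)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        for r in range(c + 1, n):
            f = A[r][c] / A[c][c]
            A[r] = [x - f * y for x, y in zip(A[r], A[c])]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (A[r][n] - sum(A[r][c] * x[c] for c in range(r + 1, n))) / A[r][r]
    return x

def damped_inversion(A, b, lam):
    """Solve (A^T A + lam*I) m = A^T b."""
    n = len(A[0])
    AtA = [[sum(A[k][i] * A[k][j] for k in range(len(A))) +
            (lam if i == j else 0.0) for j in range(n)] for i in range(n)]
    Atb = [sum(A[k][i] * b[k] for k in range(len(A))) for i in range(n)]
    return solve(AtA, Atb)

# Toy kernel: anomaly at station x from a source at xs ~ 1/(1+(x-xs)^2).
stations, sources = [0.0, 1.0, 2.0, 3.0], [0.5, 2.5]
A = [[1.0 / (1.0 + (x - xs) ** 2) for xs in sources] for x in stations]
true_m = [2.0, -1.0]
b = [sum(a * m for a, m in zip(row, true_m)) for row in A]
m = damped_inversion(A, b, lam=1e-6)   # recovers approximately [2, -1]
```

Selecting the optimum λ, one of the contributions listed above, trades off fit to the noisy observations against the stability (norm) of the recovered source strengths.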

  16. 78 FR 73128 - Dividend Equivalents From Sources Within the United States

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-05

    ... Dividend Equivalents From Sources Within the United States AGENCY: Internal Revenue Service (IRS), Treasury... dividends, and the amount of the dividend equivalents. This information is required to establish whether a... valid control number assigned by the Office of Management and Budget. Books or records relating to a...

  17. Occam's Quantum Strop: Synchronizing and Compressing Classical Cryptic Processes via a Quantum Channel (Open Source)

    DTIC Science & Technology

    2016-02-15

    do not quote them here. A sequel details a yet more efficient analytic technique based on holomorphic functions of the internal-state Markov chain... required, though, when synchronizing over a quantum channel? Recent work demonstrated that representing causal similarity as quantum state... minimal, unifilar predictor. The ε-machine's causal states σ are defined by the equivalence relation that groups all histories x_−∞:0 that

  18. Vibration Method for Tracking the Resonant Mode and Impedance of a Microwave Cavity

    NASA Technical Reports Server (NTRS)

    Barmatz, M.; Iny, O.; Yiin, T.; Khan, I.

    1995-01-01

    A vibration technique has been developed to continuously maintain mode resonance and impedance match between a constant-frequency magnetron source and a resonant cavity. This method uses a vibrating metal rod to modulate the volume of the cavity in a manner equivalent to modulating an adjustable plunger. A similar vibrating metal rod attached to a stub tuner modulates the waveguide volume between the source and cavity. A phase-sensitive detection scheme determines the optimum position of the adjustable plunger and stub tuner during processing. The improved power transfer during the heating of a 99.8% pure alumina rod was demonstrated using this new technique. Temperature-time and reflected power-time heating curves are presented for the cases of no tracking, impedance tracking only, mode tracking only, and simultaneous impedance and mode tracking. Controlled internal melting of an alumina rod near 2000 C using both tracking units was also demonstrated.

  19. Determination of instream metal loads using tracer-injection and synoptic-sampling techniques in Wightman Fork, southwestern Colorado, September 1997

    USGS Publications Warehouse

    Ortiz, Roderick F.; Bencala, Kenneth E.

    2001-01-01

    Spatial determinations of the metal loads in Wightman Fork can be used to identify potential source areas to the stream. In September 1997, a chloride tracer-injection study was done concurrently with synoptic water-quality sampling in Wightman Fork near the Summitville Mine site. Discharge was determined, and metal concentrations at 38 sites were used to generate mass-load profiles for dissolved aluminum, copper, iron, manganese, and zinc. The U.S. Environmental Protection Agency had previously identified these metals as contaminants of concern. Metal loads increased substantially in Wightman Fork near the Summitville Mine. A large increase occurred along a 60-meter reach that is north of the North Waste Dump and generally corresponds to a region of radial faults. Metal loading from this reach was equivalent to 50 percent or more of the dissolved aluminum, copper, iron, manganese, and zinc load upstream from the outfall of the Summitville Water Treatment Facility (SWTF). Overall, sources along the entire reach upstream from the SWTF were equivalent to 15 percent of the iron, 33 percent of the copper and manganese, 58 percent of the zinc, and 66 percent of the aluminum load leaving the mine site. The largest increases in metal loading to Wightman Fork occurred as a result of inflow from Cropsy Creek. Aluminum, iron, manganese, and zinc loads from Cropsy Creek were equivalent to about 40 percent of the specific metal load leaving the mine site. Copper, iron, and manganese loads from Cropsy Creek were nearly as large as or larger than the load from sources upstream from the SWTF.
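The underlying tracer-dilution arithmetic is simple: injecting a conservative tracer at rate Q_inj and concentration C_inj, the stream discharge at a downstream site follows from mass balance as Q = Q_inj·(C_inj − C_bg)/(C_plateau − C_bg), and the instream metal load at a synoptic site is Q times the metal concentration. The numbers below are illustrative, not the Wightman Fork data.

```python
# Tracer-dilution discharge and instream load computation.

def tracer_discharge(q_inj, c_inj, c_plateau, c_bg):
    """Stream discharge from conservative-tracer mass balance."""
    return q_inj * (c_inj - c_bg) / (c_plateau - c_bg)

def metal_load(q, c_metal):
    """Instantaneous load, e.g. (L/s) * (mg/L) -> mg/s."""
    return q * c_metal

# Illustrative values: 0.02 L/s injection of 80 g/L chloride into a
# stream with 3 mg/L background and a 35 mg/L downstream plateau.
q = tracer_discharge(q_inj=0.02, c_inj=80000.0, c_plateau=35.0, c_bg=3.0)
load = metal_load(q, c_metal=0.5)    # dissolved metal at 0.5 mg/L
```

Differencing loads between successive synoptic sites localizes the source reaches, which is how the 60-meter reach and the Cropsy Creek inflow above were identified as dominant contributors.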

  20. Double row equivalent for rotator cuff repair: A biomechanical analysis of a new technique.

    PubMed

    Robinson, Sean; Krigbaum, Henry; Kramer, Jon; Purviance, Connor; Parrish, Robin; Donahue, Joseph

    2018-06-01

    There are numerous configurations of double-row fixation for rotator cuff tears; however, there is no consensus on the best method. In this study, we evaluated three different double-row configurations, including a new method. Our primary question is whether the new anchor and technique compare in biomechanical strength to standard double-row techniques. Eighteen prepared fresh-frozen bovine infraspinatus tendons were randomized to one of three groups: the New Double Row Equivalent, the Arthrex Speedbridge, and a transosseous equivalent using standard Stabilynx anchors. Biomechanical testing was performed on humeral sawbones, and ultimate load, strain, yield strength, contact area, contact pressure, and survival plots were evaluated. The New Double Row Equivalent method demonstrated increased survival as well as ultimate strength, at 415 N, compared with the remaining test groups, as well as contact area and pressure equivalent to standard double-row techniques. This new anchor system and technique demonstrated higher survival rates and loads to failure than standard double-row techniques. These data provide a new method of rotator cuff fixation that should be further evaluated in the clinical setting. Basic science biomechanical study.

  1. Distribution and sources of polycyclic aromatic hydrocarbons in size-differentiated re-suspended dust on building surfaces in an oilfield city, China

    NASA Astrophysics Data System (ADS)

    Kong, Shaofei; Lu, Bing; Ji, Yaqin; Bai, Zhipeng; Xu, Yonghai; Liu, Yong; Jiang, Hua

    2012-08-01

    Thirty re-suspended dust samples were collected from building surfaces in an oilfield city, re-suspended and sampled through PM2.5, PM10 and PM100 inlets, and analyzed for 18 PAHs by a GC-MS technique. PAH concentrations, toxicity, and profile characteristics for different districts and size fractions were studied. PAH sources were identified by diagnostic ratios and principal component analysis. Results showed that the total amounts of analyzed PAHs in re-suspended dust in Dongying were 45.29, 23.79 and 11.41 μg g-1 for PM2.5, PM10 and PM100, respectively. PAHs tended to concentrate in finer particles, with mass ratios of PM2.5/PM10 and PM10/PM100 of 1.96 ± 0.86 and 2.53 ± 1.57. The old district, with more human activities and a long oil-exploitation history, exhibited higher concentrations of PAHs from both combustion and non-combustion sources. The BaP-based toxic equivalent factor and BaP-based equivalent carcinogenic power exhibited a decreasing sequence of PM2.5 > PM10 > PM100, suggesting that the finer the particles, the more toxic the dust. NaP, Phe, Flu, Pyr, BbF and BghiP were the abundant species. Coefficient-of-divergence analysis implied that PAHs in different districts and size fractions had common sources. Coal combustion, industrial sources, vehicle emission and petroleum were probably the main contributors according to the principal component analysis result.
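The BaP-based toxic equivalent computation referenced above weights each PAH concentration by its toxic equivalency factor (TEF) relative to benzo[a]pyrene: TEQ = Σ Cᵢ·TEFᵢ. The TEFs below follow the commonly used Nisbet and LaGoy values; the concentrations are made up for illustration, not taken from the study.

```python
# BaP-equivalent toxicity (TEQ) of a PAH mixture.

TEF = {"BaP": 1.0, "BbF": 0.1, "Phe": 0.001, "Pyr": 0.001, "NaP": 0.001}

def bap_teq(conc_ug_per_g):
    """Sum of concentration * TEF over the measured PAH species."""
    return sum(c * TEF[p] for p, c in conc_ug_per_g.items())

# Illustrative PM2.5 concentrations in ug/g (not the paper's data).
pm25 = {"BaP": 1.2, "BbF": 3.0, "Phe": 5.5, "Pyr": 4.0, "NaP": 6.0}
print(round(bap_teq(pm25), 4))
```

Because the TEFs span three orders of magnitude, a few carcinogenic species (BaP, BbF) dominate the TEQ even when lighter PAHs such as NaP and Phe dominate the raw mass, which is why TEQ and total-PAH rankings of the size fractions can differ.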

  2. Generation of optimal artificial neural networks using a pattern search algorithm: application to approximation of chemical systems.

    PubMed

    Ihme, Matthias; Marsden, Alison L; Pitsch, Heinz

    2008-02-01

    A pattern search optimization method is applied to the generation of optimal artificial neural networks (ANNs). Optimization is performed using a mixed variable extension to the generalized pattern search method. This method offers the advantage that categorical variables, such as neural transfer functions and nodal connectivities, can be used as parameters in optimization. When used together with a surrogate, the resulting algorithm is highly efficient for expensive objective functions. Results demonstrate the effectiveness of this method in optimizing an ANN for the number of neurons, the type of transfer function, and the connectivity among neurons. The optimization method is applied to a chemistry approximation of practical relevance. In this application, temperature and a chemical source term are approximated as functions of two independent parameters using optimal ANNs. Comparison of the performance of optimal ANNs with conventional tabulation methods demonstrates equivalent accuracy with considerable savings in memory storage. The architecture of the optimal ANN for the approximation of the chemical source term consists of a fully connected feedforward network having four nonlinear hidden layers and 117 synaptic weights. An equivalent representation of the chemical source term using tabulation techniques would require a 500 x 500 grid point discretization of the parameter space.
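The continuous core of pattern search is a compass search: poll the objective at ± step along each coordinate, move to any improving point, and halve the step when no poll improves. The paper's mixed-variable extension additionally polls categorical choices (transfer function, connectivity); the sketch below covers only the continuous part, on a toy quadratic objective.

```python
# Basic compass / pattern search on a continuous objective.

def pattern_search(f, x0, step=1.0, tol=1e-6, max_iter=1000):
    x, fx = list(x0), f(x0)
    for _ in range(max_iter):
        if step < tol:
            break
        improved = False
        for i in range(len(x)):          # poll +/- step in each coordinate
            for d in (step, -step):
                y = x[:]
                y[i] += d
                fy = f(y)
                if fy < fx:
                    x, fx, improved = y, fy, True
        if not improved:                 # unsuccessful poll: shrink the mesh
            step *= 0.5
    return x, fx

xmin, fmin = pattern_search(lambda v: (v[0] - 3) ** 2 + (v[1] + 1) ** 2,
                            [0.0, 0.0])
# converges to the minimizer near (3, -1)
```

No gradients are needed, which is what makes the method usable when each objective evaluation is an expensive ANN training run and motivates pairing it with a cheap surrogate.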

  3. Numerical characterization of landing gear aeroacoustics using advanced simulation and analysis techniques

    NASA Astrophysics Data System (ADS)

    Redonnet, S.; Ben Khelil, S.; Bulté, J.; Cunha, G.

    2017-09-01

    With the objective of aircraft noise mitigation, we here address the numerical characterization of the aeroacoustics of a simplified nose landing gear (NLG), through the use of advanced simulation and signal processing techniques. To this end, the NLG noise physics is first simulated through an advanced hybrid approach, which relies on Computational Fluid Dynamics (CFD) and Computational AeroAcoustics (CAA) calculations. Compared to more traditional hybrid methods (e.g. those relying on the use of an Acoustic Analogy), and although it is used here with some approximations made (e.g. design of the CFD-CAA interface), the present approach does not rely on restrictive assumptions (e.g. equivalent noise source, homogeneous propagation medium), which allows more realism to be incorporated into the prediction. In a second step, the outputs coming from such CFD-CAA hybrid calculations are processed through both traditional and advanced post-processing techniques, thus offering to further investigate the NLG's noise source mechanisms. Among other things, this work highlights how advanced computational methodologies are now mature enough to not only simulate realistic problems of airframe noise emission, but also to investigate their underlying physics.

  4. SQUID (superconducting quantum interference device) arrays for simultaneous magnetic measurements: Calibration and source localization performance

    NASA Astrophysics Data System (ADS)

    Kaufman, Lloyd; Williamson, Samuel J.; Costaribeiro, P.

    1988-02-01

    Recently developed small arrays of SQUID-based magnetic sensors can, if appropriately placed, locate the position of a confined biomagnetic source without moving the array. The authors present a technique with a relative accuracy of about 2 percent for calibrating such sensors having detection coils with the geometry of a second-order gradiometer. The effects of calibration error and magnetic noise on the accuracy of locating an equivalent current dipole source in the human brain are investigated for 5- and 7-sensor probes and for a pair of 7-sensor probes. With a noise level of 5 percent of peak signal, uncertainties of about 20 percent in source strength and depth for a 5-sensor probe are reduced to 8 percent for a pair of 7-sensor probes, and uncertainties of about 15 mm in lateral position are reduced to 1 mm, for the configuration considered.

  5. Developing a hybrid dictionary-based bio-entity recognition technique.

    PubMed

    Song, Min; Yu, Hwanjo; Han, Wook-Shin

    2015-01-01

    Bio-entity extraction is a pivotal component for information extraction from biomedical literature. The dictionary-based bio-entity extraction is the first generation of Named Entity Recognition (NER) techniques. This paper presents a hybrid dictionary-based bio-entity extraction technique. The approach expands the bio-entity dictionary by combining different data sources and improves the recall rate through the shortest path edit distance algorithm. In addition, the proposed technique adopts text mining techniques in the merging stage of similar entities such as Part of Speech (POS) expansion, stemming, and the exploitation of contextual cues to further improve the performance. The experimental results show that the proposed technique achieves the best or at least equivalent performance among compared techniques, GENIA, MESH, UMLS, and combinations of these three resources in F-measure. The results imply that the performance of dictionary-based extraction techniques is largely influenced by the information resources used to build the dictionary. In addition, the edit distance algorithm shows steady performance with three different dictionaries in precision whereas the context-only technique achieves a high-end performance with three different dictionaries in recall.
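The edit-distance matching behind such dictionary lookup can be sketched with a standard Levenshtein distance plus a length-normalized acceptance threshold, so near spelling variants of a dictionary entry are accepted as the same bio-entity. The threshold and dictionary entries below are illustrative, not the paper's shortest-path variant or resources.

```python
# Levenshtein distance and normalized dictionary matching.

def levenshtein(a, b):
    """Classic dynamic-programming edit distance, one row at a time."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[-1] + 1,          # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def match(mention, dictionary, threshold=0.2):
    """Accept the closest entry if its normalized distance is small."""
    best = min(dictionary, key=lambda e: levenshtein(mention, e))
    score = levenshtein(mention, best) / max(len(mention), len(best))
    return best if score <= threshold else None

dictionary = ["interleukin-2", "interferon gamma", "p53"]
print(match("interleukin 2", dictionary))   # 'interleukin-2'
```

Normalizing by string length keeps the threshold meaningful across short symbols like "p53" and long multi-word entity names.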

  6. Developing a hybrid dictionary-based bio-entity recognition technique

    PubMed Central

    2015-01-01

    Background Bio-entity extraction is a pivotal component for information extraction from biomedical literature. The dictionary-based bio-entity extraction is the first generation of Named Entity Recognition (NER) techniques. Methods This paper presents a hybrid dictionary-based bio-entity extraction technique. The approach expands the bio-entity dictionary by combining different data sources and improves the recall rate through the shortest path edit distance algorithm. In addition, the proposed technique adopts text mining techniques in the merging stage of similar entities such as Part of Speech (POS) expansion, stemming, and the exploitation of contextual cues to further improve the performance. Results The experimental results show that the proposed technique achieves the best or at least equivalent performance among compared techniques, GENIA, MESH, UMLS, and combinations of these three resources in F-measure. Conclusions The results imply that the performance of dictionary-based extraction techniques is largely influenced by the information resources used to build the dictionary. In addition, the edit distance algorithm shows steady performance with three different dictionaries in precision whereas the context-only technique achieves a high-end performance with three different dictionaries in recall. PMID:26043907

  7. Evidence for Different Reaction Pathways for Liquid and Granular Micronutrients in a Calcareous Soil

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hettiarachchi, Ganga M.; McLaughlin, Mike J.; Scheckel, Kirk G.

    2008-06-16

    The benefits of Mn and Zn fluid fertilizers over conventional granular products in calcareous sandy loam soils have been agronomically demonstrated. We hypothesized that the differences in effectiveness between granular and fluid Mn and Zn fertilizers are due to different Mn and Zn reaction processes in and around fertilizer granules and fluid fertilizer bands. We used a combination of several synchrotron-based x-ray techniques, namely, spatially resolved micro-x-ray fluorescence (μ-XRF), micro-x-ray absorption near edge structure spectroscopy (μ-XANES), and bulk-XANES and -extended x-ray absorption fine structure (EXAFS) spectroscopy, along with several laboratory-based x-ray techniques, to speciate different fertilizer-derived Mn and Zn species in highly calcareous soils and to understand the chemistry underlying the observed differential behavior of fluid and granular micronutrient forms. Micro-XRF mapping of soil-fertilizer reaction zones indicated that the mobility of Mn and Zn from liquid fertilizer was greater than that observed for equivalent granular sources of these micronutrients in soil. After application of these micronutrient fertilizers to soil, Mn and Zn from liquid fertilizers were found to remain in comparatively more soluble solid forms, such as hydrated Mn phosphate-like, Mn calcite-like, adsorbed Zn-like, and Zn silicate-like phases, whereas Mn and Zn from equivalent granular sources tended to transform into comparatively less soluble solid forms such as Mn oxide-like, Mn carbonate-like, and Zn phosphate-like phases.

  8. Equivalent radiation source of 3D package for electromagnetic characteristics analysis

    NASA Astrophysics Data System (ADS)

    Li, Jun; Wei, Xingchang; Shu, Yufei

    2017-10-01

    An equivalent radiation source method is proposed in this paper to characterize the electromagnetic emission and interference of complex three-dimensional integrated circuits (ICs). The method utilizes amplitude-only near-field scanning data to reconstruct an equivalent magnetic dipole array, and a differential evolution optimization algorithm is used to extract the locations, orientations, and moments of those dipoles. By importing the equivalent dipole model into a 3D full-wave simulator together with the victim circuit model, electromagnetic interference issues in mixed RF/digital systems can be well predicted. A commercial IC is used to validate the accuracy and efficiency of the proposed method. The coupled power at the victim antenna port calculated from the equivalent radiation source is compared with measured data. Good consistency is obtained, which confirms the validity and efficiency of the method. Project supported by the National Nature Science Foundation of China (No. 61274110).
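    The reconstruction step can be sketched in miniature: a toy scalar near-field model (field magnitude proportional to m/r^3 for a single dipole) and SciPy's differential_evolution standing in for the paper's optimizer. All positions, moments, and bounds below are made-up values for illustration; the actual method also resolves dipole orientation.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Toy forward model: scalar near-field magnitude ~ m / r^3 for one dipole.
def field_mag(src_xy, moment, probe_pts):
    r = np.linalg.norm(probe_pts - src_xy, axis=1)
    return moment / r**3

# Synthetic amplitude-only "scan" of a dipole at (1.0, 2.0) with moment 5.0.
rng = np.random.default_rng(0)
probes = rng.uniform(-5.0, 5.0, size=(60, 2))
probes = probes[np.linalg.norm(probes - np.array([1.0, 2.0]), axis=1) > 1.0]
measured = field_mag(np.array([1.0, 2.0]), 5.0, probes)

# Global search over location and moment; amplitude-only data give a
# non-convex cost surface, hence a global optimizer.
def cost(p):
    return np.sum((field_mag(np.array(p[:2]), p[2], probes) - measured) ** 2)

result = differential_evolution(cost, bounds=[(-5, 5), (-5, 5), (0.1, 10)],
                                seed=1, tol=1e-8)
x, y, m = result.x
```

    With noiseless synthetic data the optimizer recovers the dipole location and moment essentially exactly; with real scans the fit quality degrades gracefully with measurement noise.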

  9. A study of the transmission characteristics of suppressor nozzles

    NASA Technical Reports Server (NTRS)

    Ahuja, K. K.; Salikuddin, M.; Burrin, R. H.; Plumbee, H. E., Jr.

    1980-01-01

    The internal noise radiation characteristics of a single-stream 12-lobe, 24-tube suppressor nozzle and of a dual-stream 36-chute suppressor nozzle were investigated. An equivalent single round conical nozzle and an equivalent coannular nozzle system were also tested to provide a reference for the two suppressors. The technique utilized a high-voltage spark discharge as a noise source within the test duct, which permitted separation of the incident, reflected, and transmitted signals in the time domain. These signals were then Fourier transformed to obtain the nozzle transmission coefficient and the power transfer function. These transmission parameters for the 12-lobe, 24-tube suppressor nozzle and the reference conical nozzle are presented as a function of jet Mach number, duct Mach number, polar angle, and temperature. Effects of simulated forward flight are also considered for this nozzle. For the dual-stream, 36-chute suppressor, the transmission parameters are presented as a function of velocity ratio and temperature ratio. Data for the equivalent coaxial nozzle are also presented. Jet noise suppression by these nozzles is also discussed.

  10. Applicability of the single equivalent point dipole model to represent a spatially distributed bio-electrical source

    NASA Technical Reports Server (NTRS)

    Armoundas, A. A.; Feldman, A. B.; Sherman, D. A.; Cohen, R. J.

    2001-01-01

    Although the single equivalent point dipole model has been used to represent well-localised bio-electrical sources, in realistic situations the source is distributed. Consequently, position estimates of point dipoles determined by inverse algorithms suffer from systematic error due to the non-exact applicability of the inverse model. In realistic situations, this systematic error cannot be avoided, a limitation that is independent of the complexity of the torso model used. This study quantitatively investigates the intrinsic limitations in the assignment of a location to the equivalent dipole due to the distributed nature of the electrical source. To simulate arrhythmic activity in the heart, a model of a wave of depolarisation spreading from a focal source over the surface of a spherical shell is used. The activity is represented by a sequence of concentric belt sources (obtained by slicing the shell with a sequence of parallel plane pairs), with constant dipole moment per unit length (circumferentially) directed parallel to the propagation direction. The distributed source is represented by N dipoles at equal arc lengths along the belt. The sum of the dipole potentials is calculated at predefined electrode locations. The inverse problem involves finding a single equivalent point dipole that best reproduces the electrode potentials due to the distributed source. The inverse problem is implemented by minimising the chi-square per degree of freedom. It is found that the trajectory traced by the equivalent dipole is sensitive to the location of the spherical shell relative to the fixed electrodes. It is shown that this trajectory does not coincide with the sequence of geometrical centres of the consecutive belt sources. For distributed sources within a bounded spherical medium, displaced from the sphere's centre by 40% of the sphere's radius, it is found that the error in the equivalent dipole location varies from 3 to 20% for sources with size between 5 and 50% of the sphere's radius. 
Finally, a method is devised to obtain the size of the distributed source during the cardiac cycle.
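    The inverse fit described above can be sketched for an unbounded homogeneous medium (the study itself uses a bounded sphere): a short belt of dipoles generates the electrode potentials, and a single equivalent dipole is then fitted by least squares. The electrode layout and source values are invented for the illustration.

```python
import numpy as np
from scipy.optimize import least_squares

# Potential of a point current dipole in an unbounded homogeneous medium:
# V = p.r / (4*pi*sigma*|r|^3).
def dipole_potential(pos, p, electrodes, sigma=1.0):
    r = electrodes - pos
    d = np.linalg.norm(r, axis=1)
    return (r @ p) / (4 * np.pi * sigma * d**3)

# 32 electrodes on a ring of radius 10 (hypothetical layout, arbitrary units).
th = np.linspace(0, 2 * np.pi, 32, endpoint=False)
electrodes = np.c_[10 * np.cos(th), 10 * np.sin(th), np.zeros_like(th)]

# Distributed source: a short belt of 5 unit dipoles along +x, centred at (0, 1, 0).
belt = np.c_[np.linspace(-0.5, 0.5, 5), np.ones(5), np.zeros(5)]
v_meas = sum(dipole_potential(b, np.array([1.0, 0.0, 0.0]), electrodes) for b in belt)

# Single equivalent dipole: minimise the squared misfit over position and moment.
def residuals(params):
    return dipole_potential(params[:3], params[3:], electrodes) - v_meas

fit = least_squares(residuals, x0=[0.1, 0.1, 0.1, 0.1, 0.1, 0.1])
fit_pos, fit_moment = fit.x[:3], fit.x[3:]
```

    The fitted dipole lands near the belt's centroid with moment close to the summed belt moment; as the abstract notes, for larger belts the equivalent location systematically departs from the geometrical centre.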

  11. Comparing primary energy attributed to renewable energy with primary energy equivalent to determine carbon abatement in a national context.

    PubMed

    Gallachóir, Brian P O; O'Leary, Fergal; Bazilian, Morgan; Howley, Martin; McKeogh, Eamon J

    2006-01-01

    The current conventional approach to determining the primary energy associated with non-combustible renewable energy (RE) sources such as wind energy and hydro power is to equate the electricity generated from these sources with the primary energy supply. This paper compares this with an approach that was formerly used by the IEA, in which the primary energy equivalent attributed to renewable energy was equated with the fossil fuel energy it displaces. Difficulties with implementing this approach in a meaningful way for international comparisons led to most international organisations abandoning the primary energy equivalent methodology. It has recently re-emerged in prominence, however, as efforts grow to develop baseline procedures for quantifying the greenhouse gas (GHG) emissions avoided by renewable energy within the context of the Kyoto Protocol credit trading mechanisms. This paper discusses the primary energy equivalent approach and in particular the distinctions between displacing fossil fuel energy in existing plant or in new plant. The approach is then extended to provide insight into future primary energy displacement by renewable energy and to quantify the amount of CO2 emissions avoided by renewable energy. The usefulness of this approach in quantifying the benefits of renewable energy is also discussed in an energy policy context, with regard to increasing security of energy supply as well as reducing energy-related GHG (and other) emissions. The approach is applied in a national context, and Ireland is the case study country selected for this research. The choice of Ireland is interesting in two respects. The first relates to the high proportion of electricity-only fossil fuel plants in Ireland, resulting in a significant variation between primary energy and primary energy equivalent. 
The second concerns Ireland's poor performance to date in limiting GHG emissions in line with its Kyoto target and points to the need for techniques to quantify the potential contribution of renewable energy in achieving the target set.
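    As a worked illustration of the two accounting conventions, with purely hypothetical plant-efficiency and emission-factor values (not figures from the paper):

```python
# Hypothetical illustrative values, not figures from the paper.
wind_elec_twh = 1.0      # renewable electricity generated, TWh
eff_displaced = 0.40     # efficiency of the displaced fossil-fuel plant
emission_factor = 0.34   # t CO2 per MWh of displaced fuel input

primary_conventional_twh = wind_elec_twh                # electricity counted as primary
primary_equivalent_twh = wind_elec_twh / eff_displaced  # fuel input it displaces
co2_avoided_mt = primary_equivalent_twh * 1e6 * emission_factor / 1e6  # Mt CO2
```

    With these numbers, 1 TWh of wind displaces 2.5 TWh of fuel input and avoids 0.85 Mt CO2, whereas the conventional convention would record only 1 TWh of primary energy; this is the gap the paper exploits for emission-abatement accounting.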

  12. Nonevaporable getter coating chambers for extreme high vacuum

    DOE PAGES

    Stutzman, Marcy L.; Adderley, Philip A.; Mamun, Md Abdullah Al; ...

    2018-03-01

    Techniques for NEG coating a large-diameter chamber are presented along with vacuum measurements in the chamber using several pumping configurations, with base pressure as low as 1.56 x 10^-12 Torr (N2 equivalent) with only a NEG coating and a small ion pump. We then describe modifications to the NEG coating process to coat complex-geometry chambers for ultra-cold atom trap experiments. Surface analysis of NEG coated samples is used to measure the composition and morphology of the thin films. Finally, pressure measurements are compared for two NEG coated polarized electron source chambers: the 130 kV polarized electron source at Jefferson Lab and the upgraded 350 kV polarized electron source, both of which are approaching or within the extreme high vacuum (XHV) range, defined as P < 7.5 x 10^-13 Torr.

  13. Nonevaporable getter coating chambers for extreme high vacuum

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stutzman, Marcy L.; Adderley, Philip A.; Mamun, Md Abdullah Al

    Techniques for NEG coating a large-diameter chamber are presented along with vacuum measurements in the chamber using several pumping configurations, with base pressure as low as 1.56 x 10^-12 Torr (N2 equivalent) with only a NEG coating and a small ion pump. We then describe modifications to the NEG coating process to coat complex-geometry chambers for ultra-cold atom trap experiments. Surface analysis of NEG coated samples is used to measure the composition and morphology of the thin films. Finally, pressure measurements are compared for two NEG coated polarized electron source chambers: the 130 kV polarized electron source at Jefferson Lab and the upgraded 350 kV polarized electron source, both of which are approaching or within the extreme high vacuum (XHV) range, defined as P < 7.5 x 10^-13 Torr.

  14. A sparse equivalent source method for near-field acoustic holography.

    PubMed

    Fernandez-Grande, Efren; Xenaki, Angeliki; Gerstoft, Peter

    2017-01-01

    This study examines a near-field acoustic holography method consisting of a sparse formulation of the equivalent source method, based on the compressive sensing (CS) framework. The method, denoted Compressive-Equivalent Source Method (C-ESM), encourages spatially sparse solutions (based on the superposition of few waves) that are accurate when the acoustic sources are spatially localized. The importance of obtaining a non-redundant representation, i.e., a sensing matrix with low column coherence, and the inherent ill-conditioning of near-field reconstruction problems is addressed. Numerical and experimental results on a classical guitar and on a highly reactive dipole-like source are presented. C-ESM is valid beyond the conventional sampling limits, making wide-band reconstruction possible. Spatially extended sources can also be addressed with C-ESM, although in this case the obtained solution does not recover the spatial extent of the source.
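    The sparse formulation can be sketched with a generic l1-regularised least-squares solver; here a plain ISTA iteration recovers a sparse coefficient vector from an underdetermined random "transfer matrix". This is the generic compressive-sensing machinery under stated assumptions, not C-ESM's actual equivalent-source transfer matrices.

```python
import numpy as np

def ista(A, b, lam=0.01, iters=3000):
    # Iterative soft-thresholding for min ||Ax - b||^2 + lam * ||x||_1.
    L = np.linalg.norm(A, 2) ** 2        # ||A||_2^2; gradient Lipschitz const is 2L
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - (A.T @ (A @ x - b)) / L  # gradient step of size 1/(2L)
        x = np.sign(g) * np.maximum(np.abs(g) - lam / (2 * L), 0.0)
    return x

rng = np.random.default_rng(3)
A = rng.standard_normal((20, 60)) / np.sqrt(20)  # 20 "microphones", 60 sources
x_true = np.zeros(60)
x_true[[5, 17, 42]] = [1.0, -0.8, 0.6]           # three active equivalent sources
b = A @ x_true                                    # noiseless "measurements"
x_hat = ista(A, b)
```

    Even with three times more unknowns than measurements, the l1 penalty drives all inactive coefficients toward zero and recovers the true support, which is why the sparse formulation can work beyond conventional sampling limits for spatially localized sources.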

  15. Localization from near-source quasi-static electromagnetic fields

    NASA Astrophysics Data System (ADS)

    Mosher, J. C.

    1993-09-01

    A wide range of research has been published on the problem of estimating the parameters of electromagnetic and acoustical sources from measurements of signals measured at an array of sensors. In the quasi-static electromagnetic cases examined here, the signal variation from a point source is relatively slow with respect to the signal propagation and the spacing of the array of sensors. As such, the location of the point sources can only be determined from the spatial diversity of the received signal across the array. The inverse source localization problem is complicated by unknown model order and strong local minima. The nonlinear optimization problem is posed for solving for the parameters of the quasi-static source model. The transient nature of the sources can be exploited to allow subspace approaches to separate out the signal portion of the spatial correlation matrix. Decomposition techniques are examined for improved processing, and an adaptation of MUltiple SIgnal Classification (MUSIC) is presented for solving the source localization problem. Recent results on calculating the Cramer-Rao error lower bounds are extended to the multidimensional problem here. This thesis focuses on the problem of source localization in magnetoencephalography (MEG), with a secondary application to thunderstorm source localization. Comparisons are also made between MEG and its electrical equivalent, electroencephalography (EEG). The error lower bounds are examined in detail for several MEG and EEG configurations, as well as localizing thunderstorm cells over Cape Canaveral and Kennedy Space Center. Time-eigenspectrum is introduced as a parsing technique for improving the performance of the optimization problem.
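    The MUSIC adaptation builds on the standard narrowband array-processing algorithm, which can be sketched for a uniform linear array (the MEG/EEG version replaces steering vectors with dipole gain vectors; all scenario values here are invented):

```python
import numpy as np

# ULA steering vector, element spacing in wavelengths.
def steering(theta, m, spacing=0.5):
    return np.exp(2j * np.pi * spacing * np.arange(m) * np.sin(theta))

m, snapshots = 8, 200
rng = np.random.default_rng(7)
true_deg = np.array([-20.0, 35.0])
A = np.stack([steering(np.deg2rad(a), m) for a in true_deg], axis=1)
S = rng.standard_normal((2, snapshots)) + 1j * rng.standard_normal((2, snapshots))
noise = 0.05 * (rng.standard_normal((m, snapshots))
                + 1j * rng.standard_normal((m, snapshots)))
X = A @ S + noise                        # sensor data, two sources plus noise

R = X @ X.conj().T / snapshots           # spatial correlation matrix
eigvals, V = np.linalg.eigh(R)           # eigenvalues in ascending order
En = V[:, :m - 2]                        # noise subspace (model order 2 assumed)

# MUSIC pseudo-spectrum: peaks where the steering vector is orthogonal
# to the noise subspace.
grid = np.deg2rad(np.arange(-90.0, 90.0, 0.05))
spec = np.array([1.0 / np.linalg.norm(En.conj().T @ steering(t, m)) ** 2
                 for t in grid])
peaks = [i for i in range(1, len(grid) - 1)
         if spec[i] > spec[i - 1] and spec[i] > spec[i + 1]]
top2 = sorted(sorted(peaks, key=lambda i: spec[i])[-2:])
est_deg = np.rad2deg(grid[top2])
```

    The abstract's point about unknown model order shows up here as the choice of noise-subspace dimension; picking it wrong smears or splits the spectrum peaks.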

  16. Localization from near-source quasi-static electromagnetic fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mosher, John Compton

    1993-09-01

    A wide range of research has been published on the problem of estimating the parameters of electromagnetic and acoustical sources from measurements of signals measured at an array of sensors. In the quasi-static electromagnetic cases examined here, the signal variation from a point source is relatively slow with respect to the signal propagation and the spacing of the array of sensors. As such, the location of the point sources can only be determined from the spatial diversity of the received signal across the array. The inverse source localization problem is complicated by unknown model order and strong local minima. The nonlinear optimization problem is posed for solving for the parameters of the quasi-static source model. The transient nature of the sources can be exploited to allow subspace approaches to separate out the signal portion of the spatial correlation matrix. Decomposition techniques are examined for improved processing, and an adaptation of MUltiple SIgnal Classification (MUSIC) is presented for solving the source localization problem. Recent results on calculating the Cramer-Rao error lower bounds are extended to the multidimensional problem here. This thesis focuses on the problem of source localization in magnetoencephalography (MEG), with a secondary application to thunderstorm source localization. Comparisons are also made between MEG and its electrical equivalent, electroencephalography (EEG). The error lower bounds are examined in detail for several MEG and EEG configurations, as well as localizing thunderstorm cells over Cape Canaveral and Kennedy Space Center. Time-eigenspectrum is introduced as a parsing technique for improving the performance of the optimization problem.

  17. Alternative Fuels Data Center: Iowa Transportation Data for Alternative

    Science.gov Websites

    State transportation and energy data for Iowa: fuel consumption converted to gasoline gallon equivalents (from the State Energy Data System), 41 renewable power plants with 3,807 MW of nameplate capacity, and average fuel prices quoted per gasoline gallon equivalent (GGE).

  18. Alternative Fuels Data Center: South Carolina Transportation Data for

    Science.gov Websites

    State transportation and energy data for South Carolina: fuel consumption converted to gasoline gallon equivalents (from the State Energy Data System), 31 renewable power plants with 3,396 MW of nameplate capacity, and average fuel prices per gasoline gallon equivalent (GGE) for the Lower Atlantic PADD.

  19. [Comparison between rapid detection method of enzyme substrate technique and multiple-tube fermentation technique in water coliform bacteria detection].

    PubMed

    Sun, Zong-ke; Wu, Rong; Ding, Pei; Xue, Jin-Rong

    2006-07-01

    To compare a rapid detection method, the enzyme substrate technique, with the multiple-tube fermentation technique for detecting coliform bacteria in water. Inoculated and real water samples were used to compare the equivalence and false positive rates of the two methods. The results demonstrate that the enzyme substrate technique is equivalent to the multiple-tube fermentation technique (P = 0.059), and the difference in false positive rate between the two methods is not statistically significant. These results suggest that the enzyme substrate technique can be used as a standard method for evaluating the microbiological safety of water.

  20. Wireless Power Transfer for Space Applications

    NASA Technical Reports Server (NTRS)

    Ramos, Gabriel Vazquez; Yuan, Jiann-Shiun

    2011-01-01

    This paper introduces an implementation of magnetic resonance wireless power transfer for space applications. The analysis includes an equivalent impedance study, loop material characterization, a source/load resonance coupling technique, and system response behavior under load variability. System characterization is accomplished by executing circuit designs from analytical equations and simulations using Matlab and SPICE. The theory was validated by a combination of experiments covering loop material considerations, resonance coupling circuit considerations, electric load considerations, and a small-scale proof-of-concept prototype. Experimental results show successful wireless power transfer for all the cases studied. The prototype provided about 4.5 W of power to the load at a separation of about 5 cm from the source using a power amplifier rated for 7 W.
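    The resonance coupling technique can be sketched with two magnetically coupled series RLC loops solved from their mesh equations; the component values below are hypothetical, not those of the prototype.

```python
import numpy as np

# Two magnetically coupled series RLC loops (hypothetical component values).
L1 = L2 = 10e-6          # loop inductance, H
C1 = C2 = 100e-12        # resonating capacitance, F
R1, R2 = 0.5, 0.5        # loop losses, ohm
Rload = 50.0             # load in the receiving loop, ohm
k = 0.05                 # coupling coefficient
M = k * np.sqrt(L1 * L2)

f0 = 1 / (2 * np.pi * np.sqrt(L1 * C1))   # both loops tuned to f0

def load_power(f, Vs=10.0):
    w = 2 * np.pi * f
    Z1 = R1 + 1j * w * L1 + 1 / (1j * w * C1)
    Z2 = R2 + Rload + 1j * w * L2 + 1 / (1j * w * C2)
    # Mesh equations: Vs = Z1*I1 - jwM*I2 ; 0 = Z2*I2 - jwM*I1
    I2 = 1j * w * M * Vs / (Z1 * Z2 + (w * M) ** 2)
    return 0.5 * np.abs(I2) ** 2 * Rload

freqs = np.linspace(0.5 * f0, 1.5 * f0, 2001)
p = np.array([load_power(f) for f in freqs])
```

    With both loops tuned to the same f0, power transfer peaks near resonance (splitting into two nearby peaks when the loops are overcoupled), which is the behavior the source/load coupling study characterizes.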

  1. Investigations of medium wavelength magnetic anomalies in the eastern Pacific using MAGSAT data

    NASA Technical Reports Server (NTRS)

    Harrison, C. G. A. (Principal Investigator)

    1981-01-01

    The suitability of using magnetic field measurements obtained by MAGSAT is discussed with regard to resolving the medium wavelength anomaly problem. A procedure for removing the external field component from the measured field is outlined. Various methods of determining crustal magnetizations are examined in light of satellite orbital parameters, resulting in the selection of the equivalent source technique for evaluating scalar measurements. A matrix inversion of the vector components is suggested as a method for arriving at a scalar potential representation of the field.
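    The equivalent source technique selected here reduces, in miniature, to a linear least-squares problem. Below is a toy 1-D version with monopole (1/r) kernels in place of the dipole sources on a spherical shell used in the Magsat work: fit an equivalent layer to anomaly data at one altitude, then use it to continue the field upward.

```python
import numpy as np

# 1/r kernel between observation and source points (monopoles; the Magsat
# analyses use dipoles on a spherical shell, but the algebra is the same).
def greens(obs, src):
    d = np.linalg.norm(obs[:, None, :] - src[None, :, :], axis=2)
    return 1.0 / d

src = np.c_[np.linspace(0, 10, 21), np.zeros(21)]      # equivalent layer, z = 0
obs = np.c_[np.linspace(0, 10, 21), np.full(21, 1.0)]  # data altitude, z = 1
true_src = np.array([[4.0, -0.5]])                     # actual buried source
data = greens(obs, true_src) @ np.array([3.0])         # observed anomaly

# Damped least-squares inversion for the equivalent-source strengths.
G = greens(obs, src)
m = np.linalg.solve(G.T @ G + 1e-6 * np.eye(21), G.T @ data)

# Linear transformation of the fitted layer: upward continuation to z = 2.
up = np.c_[np.linspace(0, 10, 21), np.full(21, 2.0)]
predicted = greens(up, src) @ m
reference = greens(up, true_src) @ np.array([3.0])
```

    Once the equivalent sources are fitted, any linear functional of the field (continuation, derivatives, component transformations) follows by applying the corresponding kernel to the same source strengths.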

  2. Measured neutron and gamma spectra from californium-252 in a tissue-equivalent medium.

    PubMed

    Elson, H R; Stupar, T A; Shapiro, A; Kereiakes, J G

    1979-01-01

    A method of experimentally obtaining both neutron and gamma-ray spectra in a scattering medium is described. The method utilizes a liquid-organic scintillator (NE-213) coupled with a pulse-shape discrimination circuit. This allows the separation of the neutron-induced pulse-height data from the gamma-ray pulse-height data. Using mathematical unfolding techniques, the two sets of pulse-height data were transformed to obtain the neutron and gamma-ray energy spectra. A small spherical detector was designed and constructed to reduce the errors incurred by attempting spectral measurements in a scattering medium. Demonstration of the utility of the system to obtain the neutron and gamma-ray spectra in a scattering medium was performed by characterizing the neutron and gamma-ray spectra at various sites about a 3.7-microgram (1.5 cm active length) californium-252 source in a tissue-equivalent medium.

  3. A novel technique to measure interface trap density in a GaAs MOS capacitor using time-varying magnetic fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Choudhury, Aditya N. Roy, E-mail: aditya@physics.iisc.ernet.in; Venkataraman, V.

    Interface trap density (D{sub it}) in a GaAs metal-oxide-semiconductor (MOS) capacitor can be measured electrically by measuring its impedance, i.e., by exciting it with a small-signal voltage source and measuring the resulting current through the circuit. We propose a new method of measuring D{sub it} in which the MOS capacitor is instead subjected to a time-varying magnetic field, which produces an effect equivalent to a time-varying voltage drop across the sample. This happens because the electron chemical potential of GaAs changes with a change in an externally applied magnetic field (unlike that of the gate metal); this is not the voltage induced by Faraday's law of electromagnetic induction. So, by measuring the current through the MOS capacitor, D{sub it} can be found similarly. Energy band diagrams and equivalent circuits of a MOS capacitor in the presence of a magnetic field are drawn and analyzed. The way in which a magnetic field affects a MOS structure is shown to be fundamentally different from that of an electrical voltage source.

  4. Sound field separation with sound pressure and particle velocity measurements.

    PubMed

    Fernandez-Grande, Efren; Jacobsen, Finn; Leclère, Quentin

    2012-12-01

    In conventional near-field acoustic holography (NAH) it is not possible to distinguish between sound from the two sides of the array, thus, it is a requirement that all the sources are confined to only one side and radiate into a free field. When this requirement cannot be fulfilled, sound field separation techniques make it possible to distinguish between outgoing and incoming waves from the two sides, and thus NAH can be applied. In this paper, a separation method based on the measurement of the particle velocity in two layers and another method based on the measurement of the pressure and the velocity in a single layer are proposed. The two methods use an equivalent source formulation with separate transfer matrices for the outgoing and incoming waves, so that the sound from the two sides of the array can be modeled independently. A weighting scheme is proposed to account for the distance between the equivalent sources and measurement surfaces and for the difference in magnitude between pressure and velocity. Experimental and numerical studies have been conducted to examine the methods. The double layer velocity method seems to be more robust to noise and flanking sound than the combined pressure-velocity method, although it requires an additional measurement surface. On the whole, the separation methods can be useful when the disturbance of the incoming field is significant. Otherwise the direct reconstruction is more accurate and straightforward.
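    The principle behind the combined pressure-velocity method can be seen in a 1-D plane-wave sketch: with pressure and particle velocity known at one point, the outgoing and incoming components follow directly (the actual method does this per wave function via two separate transfer matrices).

```python
# 1-D plane-wave separation: p = p_out + p_in and rho*c*u = p_out - p_in,
# so the two components follow from a sum and a difference.
rho_c = 413.0                                    # characteristic impedance of air
p_out_true, p_in_true = 1.0 + 0.3j, 0.4 - 0.2j   # synthetic complex amplitudes
p = p_out_true + p_in_true                       # "measured" pressure
u = (p_out_true - p_in_true) / rho_c             # "measured" particle velocity

p_out = (p + rho_c * u) / 2                      # outgoing component
p_in = (p - rho_c * u) / 2                       # incoming component
```

    In the full 3-D method the scalar impedance relation is replaced by equivalent-source transfer matrices for each side of the array, and the weighting scheme in the paper compensates for the magnitude difference between pressure and velocity visible even in this sketch.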

  5. Time-dependent polar distribution of outgassing from a spacecraft

    NASA Technical Reports Server (NTRS)

    Scialdone, J. J.

    1974-01-01

    A technique has been developed to obtain a characterization of the self-generated environment of a spacecraft and its variation with time, angular position, and distance. The density, pressure, outgassing flux, total weight loss, and other important parameters were obtained from data provided by two mass-measuring crystal microbalances, mounted back to back, at a distance of 1 m from the spacecraft equivalent surface. A major outgassing source existed at an angular position of 300 deg to 340 deg, near the rocket motor, while the weakest source was at the antennas. The strongest source appeared to be caused by a material diffusion process which produced a directional density at 1 m distance of about 1.6 x 10^11 molecules/cu cm after 1 hr in vacuum, decaying to 1.6 x 10^9 molecules/cu cm after 200 hr. The total average outgassing flux at the same distance and during the same time span changed from 1.2 x 10^-7 to 1.4 x 10^-10 g/sq cm/s. These values are three times as large at the spacecraft surface. Total weight loss was 537 g after 10 hr and about 833 g after 200 hr. Self-contamination of the spacecraft was equivalent to that in orbit at about 300-km altitude.

  6. Compact lumped circuit model of discharges in DC accelerator using partial element equivalent circuit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Banerjee, Srutarshi; Rajan, Rehim N.; Singh, Sandeep K.

    2014-07-01

    DC accelerators undergo different types of discharges during operation. A model depicting the discharges has been simulated to study the different transient conditions. The paper presents a physics-based approach to developing a compact circuit model of a DC accelerator using the Partial Element Equivalent Circuit (PEEC) technique. The equivalent RLC model aids in analyzing the transient behavior of the system and predicting anomalies in it. The electrical discharges and their properties prevailing in the accelerator can be evaluated with this equivalent model. A parallel-coupled voltage multiplier structure is simulated in small scale using a few stages of corona guards, and the theoretical and practical results are compared. The PEEC technique leads to a simple model for studying fault conditions in accelerator systems. Compared with finite element techniques, this technique gives a circuit representation. The lumped components of the PEEC are used to obtain the input impedance, and the result is also compared with that of the FEM technique for a frequency range of 0-200 MHz.

  7. Crustal structure of the Churchill-Superior boundary zone between 80 and 98 deg W longitude from Magsat anomaly maps and stacked passes

    NASA Technical Reports Server (NTRS)

    Hall, D. H.; Millar, T. W.; Noble, I. A.

    1985-01-01

    A modeling technique using spherical shell elements and equivalent dipole sources has been applied to Magsat signatures at the Churchill-Superior boundary in Manitoba, Ontario, and Ungava. A large satellite magnetic anomaly (12 nT amplitude) on POGO and Magsat maps near the Churchill-Superior boundary was found to be related to the Richmond Gulf aulacogen. The averaged crustal magnetization in the source region is 5.2 A/m. Stacking of the magnetic traces from Magsat passes reveals a magnetic signature (10 nT amplitude) at the Churchill-Superior boundary in an area studied between 80 deg W and 98 deg W. Modeling suggests a steplike thickening of the crust on the Churchill side of the boundary in a layer with a magnetization of 5 A/m. Signatures on aeromagnetic maps are also found in the source areas for both of these satellite anomalies.

  8. End-to-end system test for solid-state microdosemeters.

    PubMed

    Pisacane, V L; Dolecek, Q E; Malak, H; Dicello, J F

    2010-08-01

    The gold standard in microdosemeters has been the tissue equivalent proportional counter (TEPC) that utilises a gas cavity. An alternative is the solid-state microdosemeter that replaces the gas with a condensed phase (silicon) detector with microscopic sensitive volumes. Calibrations of gas and solid-state microdosemeters are generally carried out using radiation sources built into the detector that impose restrictions on their handling, transportation and licensing in accordance with the regulations from international, national and local nuclear regulatory bodies. Here a novel method is presented for carrying out a calibration and end-to-end system test of a microdosemeter using low-energy photons as the initiating energy source, thus obviating the need for a regulated ionising radiation source. This technique may be utilised to calibrate both a solid-state microdosemeter and, with modification, a TEPC with the higher average ionisation energy of a gas.

  9. Deflection Measurements of a Thermally Simulated Nuclear Core Using a High-Resolution CCD-Camera

    NASA Technical Reports Server (NTRS)

    Stanojev, B. J.; Houts, M.

    2004-01-01

    Space fission systems under consideration for near-term missions all use compact, fast-spectrum reactor cores. Reactor dimensional change with increasing temperature, which affects neutron leakage, is the dominant source of reactivity feedback in these systems. Accurately measuring core dimensional changes during realistic non-nuclear testing is therefore necessary for predicting the system's nuclear-equivalent behavior. This paper discusses one key technique being evaluated for measuring such changes. The proposed technique is to use a Charge-Coupled Device (CCD) sensor to obtain deformation readings of an electrically heated, prototypic reactor core geometry. This paper introduces a technique by which a single high-spatial-resolution CCD camera is used to measure core deformation in real time (RT). Initial system checkout results are presented along with a discussion of how additional cameras could be used to achieve a three-dimensional deformation profile of the core during testing.

  10. Detailing the equivalence between real equiangular tight frames and certain strongly regular graphs

    NASA Astrophysics Data System (ADS)

    Fickus, Matthew; Watson, Cody E.

    2015-08-01

    An equiangular tight frame (ETF) is a set of unit vectors whose coherence achieves the Welch bound, and so is as incoherent as possible. They arise in numerous applications. It is well known that real ETFs are equivalent to a certain subclass of strongly regular graphs. In this note, we give some alternative techniques for understanding this equivalence. In a later document, we will use these techniques to further generalize this theory.

  11. MEMS 3-DoF gyroscope design, modeling and simulation through equivalent circuit lumped parameter model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mian, Muhammad Umer, E-mail: umermian@gmail.com; Khir, M. H. Md.; Tang, T. B.

    Pre-fabrication behavioural and performance analysis with computer-aided design (CAD) tools is a common and cost-effective practice. In light of this, we present a simulation methodology for a dual-mass-oscillator-based 3 Degree of Freedom (3-DoF) MEMS gyroscope. The 3-DoF gyroscope is modeled through lumped parameter models using equivalent circuit elements. These equivalent circuits consist of elementary components that are the counterparts of the respective mechanical components used to design and fabricate the 3-DoF MEMS gyroscope. The complete design of the equivalent circuit model, mathematical modeling, and simulation are presented in this paper. Behaviors of the equivalent lumped models derived for the proposed device design are simulated in MEMSPRO T-SPICE software. Simulations are carried out with design specifications following the design rules of the MetalMUMPS fabrication process. Drive-mass resonant frequencies simulated by this technique are 1.59 kHz and 2.05 kHz, respectively, which are close to the resonant frequencies found by the analytical formulation of the gyroscope. The lumped equivalent circuit modeling technique proved to be a time-efficient technique for the analysis of complex MEMS devices like 3-DoF gyroscopes. The technique is an alternative to the complex and time-consuming coupled-field Finite Element Analysis (FEA) used previously.
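    The mechanical-to-electrical mapping underlying such lumped models is mass to inductance, compliance (1/k) to capacitance, and damping to resistance, so the electrical resonance reproduces the mechanical one by construction. A minimal sketch with hypothetical element values (chosen so the resonance lands near the 1.59 kHz drive mode quoted above; these are not the paper's extracted parameters):

```python
import math

# Hypothetical mechanical parameters of a drive-mode oscillator.
m = 2.0e-9     # proof mass, kg
k = 0.2        # spring stiffness, N/m
b = 1.0e-7     # damping, N*s/m

# Series RLC equivalents under the standard analogy.
L, C, R = m, 1.0 / k, b

f_mech = math.sqrt(k / m) / (2 * math.pi)        # mechanical resonance, Hz
f_elec = 1 / (2 * math.pi * math.sqrt(L * C))    # electrical resonance, Hz
Q = math.sqrt(k * m) / b                         # quality factor
```

    Because the two resonance formulas are algebraically identical under the analogy, a SPICE transient or AC sweep of the RLC network directly yields the oscillator's frequency response.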

  12. Summer fluxes of methane and carbon dioxide from a pond and floating mat in a continental Canadian peatland

    NASA Astrophysics Data System (ADS)

    Burger, Magdalena; Berger, Sina; Spangenberg, Ines; Blodau, Christian

    2016-06-01

    Ponds smaller than 10 000 m2 likely account for about one-third of the global lake perimeter. The release of methane (CH4) and carbon dioxide (CO2) from these ponds is often high and significant on the landscape scale. We measured CO2 and CH4 fluxes in a temperate peatland in southern Ontario, Canada, in summer 2014 along a transect from the open water of a small pond (847 m2) towards the surrounding floating mat (5993 m2) and in a peatland reference area. We used a high-frequency closed chamber technique and distinguished between diffusive and ebullitive CH4 fluxes. CH4 fluxes and CH4 bubble frequency increased from a median of 0.14 (0.00 to 0.43) mmol m-2 h-1 and 4 events m-2 h-1 on the open water to a median of 0.80 (0.20 to 14.97) mmol m-2 h-1 and 168 events m-2 h-1 on the floating mat. The mat was a summer hot spot of CH4 emissions. Fluxes were 1 order of magnitude higher than at an adjacent peatland site. During daytime the pond was a net source of CO2 equivalents to the atmosphere amounting to 0.13 (-0.02 to 1.06) g CO2 equivalents m-2 h-1, whereas the adjacent peatland site acted as a sink of -0.78 (-1.54 to 0.29) g CO2 equivalents m-2 h-1. The photosynthetic CO2 uptake on the floating mat did not counterbalance the high CH4 emissions, which turned the floating mat into a strong net source of 0.21 (-0.11 to 2.12) g CO2 equivalents m-2 h-1. This study highlights the large small-scale variability of CH4 fluxes and CH4 bubble frequency at the peatland-pond interface and the importance of the often large ecotone areas surrounding small ponds as a source of greenhouse gases to the atmosphere.
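The "CO2 equivalents" bookkeeping above combines the two gases via a global warming potential (GWP). The sketch below converts the median mat CH4 flux from the abstract into CO2 equivalents; the GWP value is an assumption here (IPCC AR5 100-year GWP of about 28), since the paper may use a different metric or horizon:

```python
# Converting a CH4 flux to CO2 equivalents, as done when summing pond
# greenhouse-gas budgets. GWP value assumed (IPCC AR5, 100-year horizon).
M_CH4 = 16.04        # g/mol, molar mass of CH4
GWP_CH4 = 28.0       # g CO2-eq per g CH4 (assumption)

flux_ch4 = 0.80e-3                 # mol m-2 h-1, median mat flux from the abstract
flux_g = flux_ch4 * M_CH4          # g CH4 m-2 h-1
flux_co2eq = flux_g * GWP_CH4      # g CO2-eq m-2 h-1
print(round(flux_co2eq, 2))        # ~0.36 g CO2-eq m-2 h-1 from CH4 alone
```

Under this assumed GWP, CH4 alone contributes roughly 0.36 g CO2-eq m-2 h-1, so the reported net mat source of 0.21 g CO2-eq m-2 h-1 is consistent with photosynthetic CO2 uptake offsetting only part of the CH4 burden.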

  14. Little Green Lies: Dissecting the Hype of Renewables

    DTIC Science & Technology

    2011-05-11

Sources: 2009 BP Statistical Energy Analysis, US Energy Information Administration. Per Capita Energy Use (kg Oil Equivalent): World 1,819; USA 7,766 ... Energy Trends. Sources: 2006 BP Statistical Energy Analysis: Oil 37%, Nuclear 6%, Coal 25%, Gas 23%, Biomass 4%, Hydro 3%, Wind ...

  15. Cost-effectiveness Analysis with Influence Diagrams.

    PubMed

    Arias, M; Díez, F J

    2015-01-01

Cost-effectiveness analysis (CEA) is used increasingly in medicine to determine whether the health benefit of an intervention is worth its economic cost. Decision trees, the standard decision modeling technique for non-temporal domains, can only perform CEA for very small problems. Our objective was to develop a method for CEA in problems involving several dozen variables. We explain how to build influence diagrams (IDs) that explicitly represent cost and effectiveness, and we propose an algorithm for evaluating cost-effectiveness IDs directly, i.e., without expanding an equivalent decision tree. The evaluation of an ID returns a set of intervals for the willingness to pay, separated by cost-effectiveness thresholds, and, for each interval, the cost, the effectiveness, and the optimal intervention. The algorithm that evaluates the ID directly is in general much more efficient than the brute-force method, which is in turn more efficient than the expansion of an equivalent decision tree. Using OpenMarkov, an open-source software tool that implements this algorithm, we have been able to perform CEAs on several IDs whose equivalent decision trees contain millions of branches. IDs can thus perform CEA on large problems that cannot be analyzed with decision trees.
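The interval output described above can be sketched with the usual net-monetary-benefit formulation: for each willingness-to-pay value, the optimal intervention maximizes NMB = lambda * effectiveness - cost, and the thresholds where the optimum switches are the ICERs between adjacent non-dominated options. The intervention data below are invented for illustration:

```python
# Deterministic CEA sketch: optimal intervention per willingness-to-pay
# (lambda) interval. Costs/effectiveness values are hypothetical.
interventions = {            # name: (cost in $, effectiveness in QALYs)
    "no therapy": (0.0,     1.0),
    "drug A":     (10000.0, 1.5),
    "drug B":     (25000.0, 1.8),
}

def optimal(lam):
    # maximize net monetary benefit NMB = lam * effectiveness - cost
    return max(interventions,
               key=lambda n: lam * interventions[n][1] - interventions[n][0])

# ICERs between adjacent non-dominated options give the thresholds:
#   A vs no therapy: 10000 / 0.5 = 20000 $/QALY;  B vs A: 15000 / 0.3 = 50000 $/QALY
for lam in (10000.0, 30000.0, 60000.0):
    print(lam, optimal(lam))   # no therapy, drug A, drug B respectively
```

The paper's contribution is obtaining this same interval structure directly from an ID with dozens of variables, where enumerating strategies as in this toy example would be infeasible.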

  16. Evaluation of Aqueous and Powder Processing Techniques for Production of Pu-238-Fueled General Purpose Heat Sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    2008-06-01

This report evaluates alternative processes that could be used to produce Pu-238 fueled General Purpose Heat Sources (GPHS) for radioisotope thermoelectric generators (RTGs). The current GPHS fabrication process has remained essentially unchanged since its development in the 1970s; meanwhile, 30 years of technological advancements have been made in the fields of chemistry, manufacturing, ceramics, and control systems. At the Department of Energy's request, alternate manufacturing methods were compared to current methods to determine if alternative fabrication processes could reduce the hazards, especially the production of respirable fines, while producing an equivalent GPHS product. An expert committee performed the evaluation with input from four national laboratories experienced in Pu-238 handling.

  17. Quiet PROPELLER MRI techniques match the quality of conventional PROPELLER brain imaging techniques.

    PubMed

    Corcuera-Solano, I; Doshi, A; Pawha, P S; Gui, D; Gaddipati, A; Tanenbaum, L

    2015-06-01

Switching of magnetic field gradients is the primary source of acoustic noise in MR imaging. Sound pressure levels can run as high as 120 dB, capable of producing physical discomfort and at least temporary hearing loss, mandating hearing protection. New technology has made feasible quieter techniques ranging from as low as 80 dB to nearly silent. The purpose of this study was to evaluate the image quality of new commercially available quiet T2 and quiet FLAIR fast spin-echo PROPELLER acquisitions in comparison with equivalent conventional PROPELLER techniques in current day-to-day practice in imaging of the brain. Thirty-four consecutive patients were prospectively scanned with quiet T2 and quiet T2 FLAIR PROPELLER, in addition to spatial-resolution-matched conventional T2 and T2 FLAIR PROPELLER imaging sequences, on a clinical 1.5T MR imaging scanner. Measurement of sound pressure levels and qualitative evaluation of relative image quality were performed. Quiet T2 and quiet T2 FLAIR were comparable in image quality with conventional acquisitions, with sound levels of approximately 75 dB, a reduction in average sound pressure levels of up to 28.5 dB, and no significant trade-offs aside from longer scan times. Quiet FSE provides equivalent image quality at comfortable sound pressure levels at the cost of slightly longer scan times. The significant reduction in potentially injurious noise is particularly important in vulnerable populations such as children, the elderly, and the debilitated. Quiet techniques should be considered for routine use in clinical practice in these special situations. © 2015 by American Journal of Neuroradiology.

  18. Seismic equivalents of volcanic jet scaling laws and multipoles in acoustics

    NASA Astrophysics Data System (ADS)

    Haney, Matthew M.; Matoza, Robin S.; Fee, David; Aldridge, David F.

    2018-04-01

    We establish analogies between equivalent source theory in seismology (moment-tensor and single-force sources) and acoustics (monopoles, dipoles and quadrupoles) in the context of volcanic eruption signals. Although infrasound (acoustic waves < 20 Hz) from volcanic eruptions may be more complex than a simple monopole, dipole or quadrupole assumption, these elementary acoustic sources are a logical place to begin exploring relations with seismic sources. By considering the radiated power of a harmonic force source at the surface of an elastic half-space, we show that a volcanic jet or plume modelled as a seismic force has similar scaling with respect to eruption parameters (e.g. exit velocity and vent area) as an acoustic dipole. We support this by demonstrating, from first principles, a fundamental relationship that ties together explosion, torque and force sources in seismology and highlights the underlying dipole nature of seismic forces. This forges a connection between the multipole expansion of equivalent sources in acoustics and the use of forces and moments as equivalent sources in seismology. We further show that volcanic infrasound monopole and quadrupole sources exhibit scalings similar to seismicity radiated by volume injection and moment sources, respectively. We describe a scaling theory for seismic tremor during volcanic eruptions that agrees with observations showing a linear relation between radiated power of tremor and eruption rate. Volcanic tremor over the first 17 hr of the 2016 eruption at Pavlof Volcano, Alaska, obeyed the linear relation. Subsequent tremor during the main phase of the eruption did not obey the linear relation and demonstrates that volcanic eruption tremor can exhibit other scalings even during the same eruption.
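The monopole/dipole/quadrupole scalings invoked above are the classical low-Mach-number results (culminating in Lighthill's U^8 jet-noise law for quadrupoles). The sketch below only checks the velocity exponents, with all dimensionless constants omitted; it is background material, not the paper's derivation:

```python
import numpy as np

# Classical acoustic radiated-power scalings with flow velocity U for a
# compact source at low Mach number:
#   monopole ~ U^4, dipole ~ U^6, quadrupole ~ U^8 (Lighthill).
# The abstract's analogy pairs a seismic force with the dipole scaling.
def radiated_power(U, order, rho=1.2, c=340.0, A=1.0):
    # order: 0 = monopole, 1 = dipole, 2 = quadrupole
    n = 4 + 2 * order
    return rho * A * U**n / c**(n - 3)

# Doubling the exit velocity U raises the power by 2**n:
for order in (0, 1, 2):
    ratio = radiated_power(2.0, order) / radiated_power(1.0, order)
    print(order, ratio)   # 16, 64, 256
```

The steepening exponent with multipole order is why identifying the effective source order of a volcanic jet constrains how its radiated power responds to changes in exit velocity and vent area.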

  19. 77 FR 13968 - Dividend Equivalents From Sources Within the United States; Correction

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-08

    ...--INCOME TAXES 0 Paragraph 1. The authority citation for part 1 continues to read in part as follows... temporary regulations (TD 9572), relating to dividend equivalents from sources within the United States.... List of Subjects in 26 CFR Part 1 Income taxes, Reporting and recordkeeping requirements. Correction of...

  20. Method for obtaining electron energy-density functions from Langmuir-probe data using a card-programmable calculator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Longhurst, G.R.

    This paper presents a method for obtaining electron energy density functions from Langmuir probe data taken in cool, dense plasmas where thin-sheath criteria apply and where magnetic effects are not severe. Noise is filtered out by using regression of orthogonal polynomials. The method requires only a programmable calculator (TI-59 or equivalent) to implement and can be used for the most general, nonequilibrium electron energy distribution plasmas. Data from a mercury ion source analyzed using this method are presented and compared with results for the same data using standard numerical techniques.
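The approach above can be sketched in modern terms: fit the noisy I-V trace with orthogonal polynomials (the noise-filtering regression) and apply the Druyvesteyn relation, in which the electron energy distribution is proportional to the second derivative of probe current with respect to bias. The trace below is synthetic, not probe data from the paper:

```python
import numpy as np

# Smooth a noisy Langmuir-probe I-V trace by least-squares regression on
# orthogonal (Legendre) polynomials, then apply the Druyvesteyn relation
# EEDF(E) ~ d^2 I / dV^2, with E measured from the plasma potential.
rng = np.random.default_rng(0)
V = np.linspace(-10.0, 0.0, 200)      # bias relative to plasma potential, volts
I_true = np.exp(V / 2.0)              # idealized Maxwellian electron current (Te = 2 eV)
I_meas = I_true + rng.normal(0.0, 0.001, V.size)

fit = np.polynomial.Legendre.fit(V, I_meas, deg=8)   # noise-filtering regression
d2I = fit.deriv(2)                                   # second derivative, still a series

# For this Maxwellian model, d2I/dV2 = exp(V/2)/4; compare at an interior point:
print(d2I(-2.0), np.exp(-1.0) / 4.0)
```

Fitting a low-degree orthogonal-polynomial basis before differentiating is what keeps the twice-differentiated result from being swamped by measurement noise, which is the same role the regression played on the card-programmable calculator.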

  1. Toward quantitative estimation of material properties with dynamic mode atomic force microscopy: a comparative study.

    PubMed

    Ghosal, Sayan; Gannepalli, Anil; Salapaka, Murti

    2017-08-11

In this article, we explore methods that enable estimation of material properties with dynamic mode atomic force microscopy suitable for soft-matter investigation. The article presents the viewpoint of casting the system, comprising a flexure probe interacting with the sample, as an equivalent cantilever system, and compares a steady-state-analysis-based method with a recursive estimation technique for determining the parameters of the equivalent cantilever system in real time. The steady-state analysis of the equivalent cantilever model, which has been implicitly assumed in studies on material property determination, is validated analytically and experimentally. We show that the steady-state-based technique yields results that quantitatively agree with the recursive method in the domain of its validity. The steady-state technique is considerably simpler to implement, but slower than the recursive technique. The parameters of the equivalent system are utilized to interpret the storage and dissipative properties of the sample. Finally, the article identifies key pitfalls that need to be avoided in the quantitative estimation of material properties.
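The steady-state viewpoint reduces to textbook driven-oscillator algebra: the measured amplitude and phase at the drive frequency determine an equivalent stiffness and damping. The sketch below uses that standard algebra with illustrative cantilever-scale parameters, not the paper's method or data:

```python
import numpy as np

# Steady-state inversion for an equivalent cantilever: from the amplitude A
# and phase phi of m*x'' + c*x' + k*x = F0*cos(w*t), with
# x = A*cos(w*t - phi), recover the equivalent stiffness and damping.
def forward(m, c, k, F0, w):
    A = F0 / np.hypot(k - m * w**2, c * w)
    phi = np.arctan2(c * w, k - m * w**2)
    return A, phi

def invert(m, F0, w, A, phi):
    k_eq = m * w**2 + (F0 / A) * np.cos(phi)   # (F0/A)cos(phi) = k - m*w^2
    c_eq = (F0 / A) * np.sin(phi) / w          # (F0/A)sin(phi) = c*w
    return k_eq, c_eq

m, c, k = 5e-11, 2e-9, 10.0           # kg, N*s/m, N/m (typical cantilever scale)
F0, w = 1e-9, 0.9 * np.sqrt(k / m)    # drive slightly below resonance

A, phi = forward(m, c, k, F0, w)      # the "measured" steady-state response
print(invert(m, F0, w, A, phi))       # ~ (10.0, 2e-09)
```

When the tip interacts with a sample, shifts in the recovered k_eq and c_eq are what get interpreted as the sample's storage and dissipative properties; the recursive estimator tracks the same two parameters without waiting for steady state.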

  2. Optical coherence tomography detection of shear wave propagation in inhomogeneous tissue equivalent phantoms and ex-vivo carotid artery samples

    PubMed Central

    Razani, Marjan; Luk, Timothy W.H.; Mariampillai, Adrian; Siegler, Peter; Kiehl, Tim-Rasmus; Kolios, Michael C.; Yang, Victor X.D.

    2014-01-01

    In this work, we explored the potential of measuring shear wave propagation using optical coherence elastography (OCE) in an inhomogeneous phantom and carotid artery samples based on a swept-source optical coherence tomography (OCT) system. Shear waves were generated using a piezoelectric transducer transmitting sine-wave bursts of 400 μs duration, applying acoustic radiation force (ARF) to inhomogeneous phantoms and carotid artery samples, synchronized with a swept-source OCT (SS-OCT) imaging system. The phantoms were composed of gelatin and titanium dioxide whereas the carotid artery samples were embedded in gel. Differential OCT phase maps, measured with and without the ARF, detected the microscopic displacement generated by shear wave propagation in these phantoms and samples of different stiffness. We present the technique for calculating tissue mechanical properties by propagating shear waves in inhomogeneous tissue equivalent phantoms and carotid artery samples using the ARF of an ultrasound transducer, and measuring the shear wave speed and its associated properties in the different layers with OCT phase maps. This method lays the foundation for future in-vitro and in-vivo studies of mechanical property measurements of biological tissues such as vascular tissues, where normal and pathological structures may exhibit significant contrast in the shear modulus. PMID:24688822
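The step from measured shear-wave speed to mechanical properties is the standard elastodynamic relation G = rho * c_s^2 (with E of roughly 3G for nearly incompressible soft tissue). The numbers below are illustrative, not measurements from the paper:

```python
# From an OCE-measured shear-wave speed in a layer to its moduli.
rho = 1000.0   # kg/m^3, tissue-like density
c_s = 2.5      # m/s, example shear-wave speed from the phase maps

G = rho * c_s**2   # shear modulus, Pa
E = 3.0 * G        # Young's modulus under the incompressibility approximation
print(G, E)        # 6250.0 Pa, 18750.0 Pa
```

Because G grows with the square of c_s, even modest speed contrasts between the gel layers, or between normal and pathological arterial tissue, translate into large shear-modulus contrast, which is what makes the technique promising for vascular imaging.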

  3. Blind source separation and localization using microphone arrays

    NASA Astrophysics Data System (ADS)

    Sun, Longji

    The blind source separation and localization problem for audio signals is studied using microphone arrays. Pure delay mixtures of source signals typically encountered in outdoor environments are considered. Our proposed approach utilizes the subspace methods, including multiple signal classification (MUSIC) and estimation of signal parameters via rotational invariance techniques (ESPRIT) algorithms, to estimate the directions of arrival (DOAs) of the sources from the collected mixtures. Since audio signals are generally considered broadband, the DOA estimates at frequencies with the large sum of squared amplitude values are combined to obtain the final DOA estimates. Using the estimated DOAs, the corresponding mixing and demixing matrices are computed, and the source signals are recovered using the inverse short time Fourier transform. Subspace methods take advantage of the spatial covariance matrix of the collected mixtures to achieve robustness to noise. While the subspace methods have been studied for localizing radio frequency signals, audio signals have their special properties. For instance, they are nonstationary, naturally broadband and analog. All of these make the separation and localization for the audio signals more challenging. Moreover, our algorithm is essentially equivalent to the beamforming technique, which suppresses the signals in unwanted directions and only recovers the signals in the estimated DOAs. Several crucial issues related to our algorithm and their solutions have been discussed, including source number estimation, spatial aliasing, artifact filtering, different ways of mixture generation, and source coordinate estimation using multiple arrays. Additionally, comprehensive simulations and experiments have been conducted to examine various aspects of the algorithm. 
Unlike the existing blind source separation and localization methods, which are generally time consuming, our algorithm needs signal mixtures of only a short duration and therefore supports real-time implementation.
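A minimal narrowband MUSIC estimator for a uniform linear array captures the subspace step described above; the broadband audio pipeline applies this kind of estimator per frequency bin and combines the results. The geometry, signals and noise below are synthetic, not the paper's microphone data:

```python
import numpy as np

# Narrowband MUSIC sketch on a uniform linear array (ULA).
rng = np.random.default_rng(1)
M, d = 8, 0.5                                    # sensors, spacing in wavelengths
true_doas = np.deg2rad(np.array([-20.0, 35.0]))
K = true_doas.size                               # source count (assumed known)

def steering(theta):
    theta = np.atleast_1d(theta)
    return np.exp(-2j * np.pi * d * np.arange(M)[:, None] * np.sin(theta))

snap = 500
S = rng.standard_normal((K, snap)) + 1j * rng.standard_normal((K, snap))
X = steering(true_doas) @ S                      # clean mixtures at the array
X += 0.05 * (rng.standard_normal(X.shape) + 1j * rng.standard_normal(X.shape))

Rxx = X @ X.conj().T / snap                      # spatial covariance matrix
_, V = np.linalg.eigh(Rxx)                       # eigenvalues ascending
En = V[:, : M - K]                               # noise subspace

grid = np.deg2rad(np.linspace(-90.0, 90.0, 1801))
P = 1.0 / np.linalg.norm(En.conj().T @ steering(grid), axis=0) ** 2

# Pick the K strongest local maxima of the pseudospectrum:
peaks = [i for i in range(1, grid.size - 1) if P[i - 1] < P[i] > P[i + 1]]
best = sorted(peaks, key=lambda i: P[i])[-K:]
est = np.sort(np.rad2deg(grid[best]))
print(est)                                       # close to [-20, 35]
```

The noise-subspace projection is what gives MUSIC its robustness to noise via the spatial covariance matrix; the audio-specific complications the abstract lists (nonstationarity, wideband spectra, spatial aliasing) sit on top of this core step.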

  4. 10 CFR 35.49 - Suppliers for sealed sources or devices for medical use.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... accordance with a license issued under 10 CFR part 30 and 10 CFR 32.74 of this chapter or equivalent requirements of an Agreement State; (b) Sealed sources or devices non-commercially transferred from a Part 35... in accordance with a license issued under 10 CFR part 30 or the equivalent requirements of an...

  5. 10 CFR 35.49 - Suppliers for sealed sources or devices for medical use.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... accordance with a license issued under 10 CFR Part 30 and 10 CFR 32.74 of this chapter or equivalent requirements of an Agreement State; (b) Sealed sources or devices non-commercially transferred from a Part 35... in accordance with a license issued under 10 CFR Part 30 or the equivalent requirements of an...

  6. 10 CFR 35.49 - Suppliers for sealed sources or devices for medical use.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... accordance with a license issued under 10 CFR Part 30 and 10 CFR 32.74 of this chapter or equivalent requirements of an Agreement State; (b) Sealed sources or devices non-commercially transferred from a Part 35... in accordance with a license issued under 10 CFR Part 30 or the equivalent requirements of an...

  7. 10 CFR 35.49 - Suppliers for sealed sources or devices for medical use.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... accordance with a license issued under 10 CFR Part 30 and 10 CFR 32.74 of this chapter or equivalent requirements of an Agreement State; (b) Sealed sources or devices non-commercially transferred from a Part 35... in accordance with a license issued under 10 CFR Part 30 or the equivalent requirements of an...

  8. 10 CFR 35.49 - Suppliers for sealed sources or devices for medical use.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... accordance with a license issued under 10 CFR Part 30 and 10 CFR 32.74 of this chapter or equivalent requirements of an Agreement State; (b) Sealed sources or devices non-commercially transferred from a Part 35... in accordance with a license issued under 10 CFR Part 30 or the equivalent requirements of an...

  9. Feasibility of Equivalent Dipole Models for Electroencephalogram-Based Brain Computer Interfaces.

    PubMed

    Schimpf, Paul H

    2017-09-15

    This article examines the localization errors of equivalent dipolar sources inverted from the surface electroencephalogram in order to determine the feasibility of using their location as classification parameters for non-invasive brain computer interfaces. Inverse localization errors are examined for two head models: a model represented by four concentric spheres and a realistic model based on medical imagery. It is shown that the spherical model results in localization ambiguity such that a number of dipolar sources, with different azimuths and varying orientations, provide a near match to the electroencephalogram of the best equivalent source. No such ambiguity exists for the elevation of inverted sources, indicating that for spherical head models, only the elevation of inverted sources (and not the azimuth) can be expected to provide meaningful classification parameters for brain-computer interfaces. In a realistic head model, all three parameters of the inverted source location are found to be reliable, providing a more robust set of parameters. In both cases, the residual error hypersurfaces demonstrate local minima, indicating that a search for the best-matching sources should be global. Source localization error vs. signal-to-noise ratio is also demonstrated for both head models.

  10. TH-AB-209-02: Gadolinium Measurements in Human Bone Using in Vivo K X-Ray Fluorescence (KXRF) Technique

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mostafaei, F; Nie, L

Purpose: Improvement of an in vivo K x-ray fluorescence (KXRF) system, based on a 109Cd source, for the detection of gadolinium (Gd) in bone has been investigated. A series of improvements to the method is described. Gd is of interest because of the extensive use of Gd-based contrast agents in MR imaging and the potential toxicity of Gd exposure. Methods: A set of seven bone-equivalent phantoms with different Gd concentrations (from 0-100 ppm) was developed. Soft-tissue-equivalent plastic plates were used to simulate the soft tissue overlying the tibia bone in an in vivo measurement. A new 5 GBq 109Cd source was used to improve the source activity in comparison to the previous study (0.17 GBq). An improved spectral fitting program was utilized for data analysis. Results: The previously published minimum detection limit (MDL) for Gd-doped phantom measurements using the KXRF system was 3.3 ppm. In this study the MDL for bare bone phantoms was found to be 0.8 ppm. Our previous study used only three layers of plastic (0.32, 0.64 and 0.96 mm) as soft-tissue-equivalent materials and obtained an MDL of 4-4.8 ppm. In this study, plastic plates with more realistic thicknesses to simulate the soft tissue covering the tibia bone (nine thicknesses ranging from 0.61-6.13 mm) were used, and the MDLs for these phantoms were determined to be 1.8-3.5 ppm. Conclusion: With the improvements made to the technology (stronger source, improved data analysis algorithm, realistic soft tissue thicknesses), the MDL of the KXRF system for measuring Gd in bare bone was improved by a factor of 4.1. The MDL is at the level of the bone Gd concentration reported in the literature. Hence, the system is ready to be tested on human subjects to investigate the use of bone Gd as a biomarker for Gd toxicity.
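The reported improvement factor follows directly from the two detection limits, and the benefit of the stronger source can be bounded from counting statistics alone. The MDL convention in the sketch (twice the blank's counting uncertainty over the calibration slope) and the calibration numbers are assumptions for illustration, not taken from the abstract:

```python
import math

# A common XRF detection-limit convention (assumed here, not from the paper):
def mdl(sigma_blank, slope):
    return 2.0 * sigma_blank / slope

sigma_blank, slope = 20.0, 50.0       # counts, counts per ppm (invented values)
print(mdl(sigma_blank, slope))        # 0.8 ppm with these invented numbers

old_mdl, new_mdl = 3.3, 0.8           # ppm, bare-bone phantoms (from the abstract)
print(round(old_mdl / new_mdl, 1))    # 4.1, the reported improvement factor

# Counting statistics alone: for a background of B counts, sigma_blank ~ sqrt(B),
# so a 0.17 -> 5 GBq activity increase (~29x) would cut the statistical part
# of the MDL by about sqrt(29) ~ 5.4x; the realized 4.1x is of that order.
print(round(math.sqrt(5.0 / 0.17), 1))
```

The gap between the square-root estimate and the realized factor is consistent with the other changes (fitting algorithm, phantom thicknesses) and non-statistical contributions to the MDL.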

  11. Studies on new neutron-sensitive dosimeters using an optically stimulated luminescence technique

    NASA Astrophysics Data System (ADS)

    Kulkarni, M. S.; Luszik-Bhadra, M.; Behrens, R.; Muthe, K. P.; Rawat, N. S.; Gupta, S. K.; Sharma, D. N.

    2011-07-01

The neutron response of detectors prepared using α-Al2O3:C phosphor developed using a melt processing technique and mixed with neutron converters was studied in monoenergetic neutron fields. The detector pellets were arranged in two different pairs: α-Al2O3:C + 6LiF/α-Al2O3:C + 7LiF and α-Al2O3:C + high-density polyethylene/α-Al2O3:C + Teflon, for neutron dosimetry using albedo and recoil-proton techniques. The optically stimulated luminescence response of the α-Al2O3:C + 6,7LiF dosimeter to radiation from a 252Cf source was 0.21, in terms of personal dose equivalent Hp(10) and relative to radiation from a 137Cs source. This was comparable to results obtained with similar detectors prepared using commercially available α-Al2O3:C phosphor. The Hp(10) response of the α-Al2O3:C + 6,7LiF dosimeters was found to decrease by more than two orders of magnitude with increasing neutron energy, as expected for albedo dosimeters. The response of the α-Al2O3:C + high-density polyethylene/α-Al2O3:C + Teflon dosimeters was small, of the order of 1% to 2% in terms of Hp(10) and relative to radiation from a 137Cs source, for neutron energies greater than 1 MeV.

  12. 3D reconstruction of internal structure of animal body using near-infrared light

    NASA Astrophysics Data System (ADS)

    Tran, Trung Nghia; Yamamoto, Kohei; Namita, Takeshi; Kato, Yuji; Shimizu, Koichi

    2014-03-01

To realize three-dimensional (3D) optical imaging of the internal structure of an animal body, we have developed a new technique to reconstruct CT images from two-dimensional (2D) transillumination images. In transillumination imaging, the image is blurred by the strong scattering in tissue. We had previously developed a scattering-suppression technique using the point spread function (PSF) for a fluorescent light source in the body. In this study, we newly propose a technique to apply this PSF to the image of an unknown light-absorbing structure. The effectiveness of the proposed technique was examined in experiments with a model phantom and a mouse. In the phantom experiment, absorbers were placed in a tissue-equivalent medium to simulate the light-absorbing organs of a mouse body. Near-infrared light illuminated one side of the phantom, and the image was recorded with a CMOS camera from the other side. Using the proposed technique, the scattering effect was efficiently suppressed and the absorbing structure could be visualized in the 2D transillumination image. Using the 2D images obtained in many different orientations, we could reconstruct the 3D image. In the mouse experiment, an anesthetized mouse was held in an acrylic cylindrical holder. We could visualize internal organs such as the kidneys through the mouse's abdomen using the proposed technique, and the 3D image of the kidneys and a part of the liver was reconstructed. Through these experimental studies, the feasibility of practical 3D imaging of the internal light-absorbing structure of a small animal was verified.
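The scattering-suppression step amounts to deconvolving the recorded image with the PSF. The toy 1-D sketch below uses a Gaussian PSF and a Wiener filter as stand-ins for the paper's depth-dependent PSF model; all profiles and parameters are synthetic:

```python
import numpy as np

# Toy 1-D deconvolution: a sharp absorber profile blurred by a Gaussian PSF
# (standing in for scattering) and restored by Wiener filtering.
n = 256
x = np.arange(n)
obj = np.zeros(n)
obj[100:110] = 1.0          # "absorber" 1
obj[160:170] = 0.6          # "absorber" 2

sigma = 8.0                 # assumed blur width
psf = np.exp(-0.5 * ((x - n // 2) / sigma) ** 2)
psf /= psf.sum()

H = np.fft.fft(np.fft.ifftshift(psf))             # PSF centered at index 0
blurred = np.real(np.fft.ifft(np.fft.fft(obj) * H))

nsr = 1e-3                                        # assumed noise-to-signal ratio
W = np.conj(H) / (np.abs(H) ** 2 + nsr)           # Wiener deconvolution filter
restored = np.real(np.fft.ifft(np.fft.fft(blurred) * W))

print(blurred.max(), restored.max())              # restoration sharpens the peaks
```

In the actual technique the PSF depends on the depth of the structure in the scattering medium, which is why estimating an appropriate PSF for an unknown absorbing structure is the paper's key extension over the fluorescent-source case.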

  13. 40 CFR 63.41 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... existing equipment will be equivalent to that level of control currently achieved by other well-controlled similar sources (i.e., equivalent to the level of control that would be provided by a current BACT, LAER... control equipment will be equivalent to the percent control efficiency provided by the control equipment...

  14. 40 CFR 63.41 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... existing equipment will be equivalent to that level of control currently achieved by other well-controlled similar sources (i.e., equivalent to the level of control that would be provided by a current BACT, LAER... control equipment will be equivalent to the percent control efficiency provided by the control equipment...

  15. 40 CFR 63.41 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... existing equipment will be equivalent to that level of control currently achieved by other well-controlled similar sources (i.e., equivalent to the level of control that would be provided by a current BACT, LAER... control equipment will be equivalent to the percent control efficiency provided by the control equipment...

  16. 40 CFR 63.41 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... existing equipment will be equivalent to that level of control currently achieved by other well-controlled similar sources (i.e., equivalent to the level of control that would be provided by a current BACT, LAER... control equipment will be equivalent to the percent control efficiency provided by the control equipment...

  17. 40 CFR 63.41 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... existing equipment will be equivalent to that level of control currently achieved by other well-controlled similar sources (i.e., equivalent to the level of control that would be provided by a current BACT, LAER... control equipment will be equivalent to the percent control efficiency provided by the control equipment...

  18. Material and physical model for evaluation of deep brain activity contribution to EEG recordings

    NASA Astrophysics Data System (ADS)

    Ye, Yan; Li, Xiaoping; Wu, Tiecheng; Li, Zhe; Xie, Wenwen

    2015-12-01

    Deep brain activity is conventionally recorded with surgical implantation of electrodes. During the neurosurgery, brain tissue damage and the consequent side effects to patients are inevitably incurred. In order to eliminate undesired risks, we propose that deep brain activity should be measured using the noninvasive scalp electroencephalography (EEG) technique. However, the deeper the neuronal activity is located, the noisier the corresponding scalp EEG signals are. Thus, the present study aims to evaluate whether deep brain activity could be observed from EEG recordings. In the experiment, a three-layer cylindrical head model was constructed to mimic a human head. A single dipole source (sine wave, 10 Hz, altering amplitudes) was embedded inside the model to simulate neuronal activity. When the dipole source was activated, surface potential was measured via electrodes attached on the top surface of the model and raw data were recorded for signal analysis. Results show that the dipole source activity positioned at 66 mm depth in the model, equivalent to the depth of deep brain structures, is clearly observed from surface potential recordings. Therefore, it is highly possible that deep brain activity could be observed from EEG recordings and deep brain activity could be measured using the noninvasive scalp EEG technique.
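A rough sense of why depth matters comes from the unbounded-homogeneous-conductor approximation, in which a current dipole's potential falls off as the inverse square of distance. The sketch below is a back-of-envelope estimate only; the layered phantom and real head modify these numbers:

```python
from math import pi

# Potential directly above a radial current dipole in an unbounded
# homogeneous conductor: V = p / (4 * pi * sigma * r^2).
def dipole_potential(p, r, sigma=0.33):
    # p in A*m, r in m, sigma in S/m (brain-tissue-like conductivity)
    return p / (4 * pi * sigma * r**2)

shallow = dipole_potential(1e-8, 0.020)   # 10 nA*m source at 20 mm depth
deep = dipole_potential(1e-8, 0.066)      # same source at the 66 mm depth studied
print(round(shallow / deep, 1))           # ~10.9x weaker overhead signal
```

An order-of-magnitude amplitude penalty is why observing the 66 mm source from surface recordings, as the experiment demonstrates, is a meaningful feasibility result.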

  19. Investigations on landmine detection by neutron-based techniques.

    PubMed

    Csikai, J; Dóczi, R; Király, B

    2004-07-01

Principles and techniques of some neutron-based methods used to identify antipersonnel landmines (APMs) are discussed. New results have been achieved in the field of neutron reflection, transmission, scattering and reaction techniques. Some conclusions are as follows: the neutron hand-held detector is suitable for the observation of the anomaly caused by a DLM2-like sample in different soils with a scanning speed of 1 m2/1.5 min; the reflection cross section of thermal neutrons rendered the determination of the equivalent thickness of different soil components possible; a simple method was developed for the determination of the thermal-neutron flux perturbation factor needed for multi-elemental analysis of bulky samples; unfolded spectra of elastically backscattered neutrons using broad-spectrum sources render the identification of APMs possible; knowledge of the leakage spectra of different source neutrons is indispensable for the determination of the differential and integrated reaction rates and, through them, the dimension of the interrogated volume; the precise determination of the C/O atom fraction requires investigation of the angular distribution of the 6.13 MeV gamma ray emitted in the 16O(n,n'γ) reaction. These results, in addition to the identification of landmines, render the improvement of non-intrusive neutron methods possible.

  20. 10 CFR 835.702 - Individual monitoring records.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... emergency exposures. (b) Recording of the non-uniform equivalent dose to the skin is not required if the... internal dose (committed effective dose or committed equivalent dose) is not required for any monitoring...: (i) The effective dose from external sources of radiation (equivalent dose to the whole body may be...

  1. 10 CFR 835.702 - Individual monitoring records.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... emergency exposures. (b) Recording of the non-uniform equivalent dose to the skin is not required if the... internal dose (committed effective dose or committed equivalent dose) is not required for any monitoring...: (i) The effective dose from external sources of radiation (equivalent dose to the whole body may be...

  2. 10 CFR 835.702 - Individual monitoring records.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... emergency exposures. (b) Recording of the non-uniform equivalent dose to the skin is not required if the... internal dose (committed effective dose or committed equivalent dose) is not required for any monitoring...: (i) The effective dose from external sources of radiation (equivalent dose to the whole body may be...

  3. 10 CFR 835.702 - Individual monitoring records.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... emergency exposures. (b) Recording of the non-uniform equivalent dose to the skin is not required if the... internal dose (committed effective dose or committed equivalent dose) is not required for any monitoring...: (i) The effective dose from external sources of radiation (equivalent dose to the whole body may be...

  4. 10 CFR 835.702 - Individual monitoring records.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... emergency exposures. (b) Recording of the non-uniform equivalent dose to the skin is not required if the... internal dose (committed effective dose or committed equivalent dose) is not required for any monitoring...: (i) The effective dose from external sources of radiation (equivalent dose to the whole body may be...

  5. Bi-Frequency Modulated Quasi-Resonant Converters: Theory and Applications

    NASA Astrophysics Data System (ADS)

    Zhang, Yuefeng

    1995-01-01

    To avoid the variable-frequency operation of quasi-resonant converters, many soft-switching PWM converters have been proposed; all of them require an auxiliary switch, which increases the cost and complexity of the power supply system. In this thesis, a new technique for quasi-resonant converters is proposed, called the bi-frequency modulation (BFM) technique. By operating quasi-resonant converters at two switching frequencies, this technique enables them to achieve soft switching, at fixed switching frequencies, without an auxiliary switch. The steady-state analysis of four commonly used quasi-resonant converters, namely the ZVS buck, ZCS buck, ZVS boost, and ZCS boost converters, is presented. Using the concepts of equivalent sources, equivalent sinks, and the resonant tank, large-signal models of these four quasi-resonant converters were developed. Based on these models, the steady-state control characteristics of the BFM ZVS buck, BFM ZCS buck, BFM ZVS boost, and BFM ZCS boost converters were derived. The functional blocks and design considerations of the bi-frequency controller are presented, one implementation of the controller is given, and a complete design example is provided. Both computer simulations and experimental results verify that bi-frequency modulated quasi-resonant converters can achieve soft switching, at fixed switching frequencies, without an auxiliary switch. One application of the BFM technique is EMI reduction. The basic principle of using the BFM technique for EMI reduction is introduced. Based on spectral analysis, the EMI performance of PWM, variable-frequency, and bi-frequency modulated control signals was evaluated, and the BFM control signals show the lowest EMI emission. The BFM technique has also been applied to power factor correction: a BFM zero-current switching boost converter was designed for power factor correction, and simulation results show that the power factor is improved.

  6. An experimental comparison of various methods of nearfield acoustic holography

    DOE PAGES

    Chelliah, Kanthasamy; Raman, Ganesh; Muehleisen, Ralph T.

    2017-05-19

    An experimental comparison of four different methods of nearfield acoustic holography (NAH) is presented in this study for planar acoustic sources. The four NAH methods considered are based on: (1) the spatial Fourier transform, (2) the equivalent sources model, (3) boundary element methods, and (4) statistically optimized NAH. Two-dimensional measurements were obtained at different distances in front of a tonal sound source, and the NAH methods were used to reconstruct the sound field at the source surface. The reconstructed particle velocity and acoustic pressure fields showed that the equivalent-sources-model-based algorithm with Tikhonov regularization provided the best localization of the sources. Reconstruction errors were found to be smaller for the equivalent-sources-model-based algorithm and the statistically optimized NAH algorithm. The effect of hologram distance on the performance of the various algorithms is discussed in detail, and the computational time required by each algorithm is also compared. Four different regularization parameter choice methods were compared. The L-curve method provided more accurate reconstructions than generalized cross validation and the Morozov discrepancy principle. Finally, the performance of fixed-parameter regularization was comparable to that of the L-curve method.
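    The Tikhonov-regularized equivalent-sources reconstruction evaluated in this record can be sketched in a few lines. The following is a minimal illustration under assumptions, not the authors' implementation: equivalent monopoles are placed on a retreated surface behind the source plane, their strengths are fitted to the hologram pressures by regularized least squares, and the fitted sources are then propagated to the reconstruction surface. All geometry and function names are hypothetical.

```python
import numpy as np

def greens_matrix(field_pts, src_pts, k):
    """Free-space Green's function G = exp(-j*k*r) / (4*pi*r) between two point sets."""
    r = np.linalg.norm(field_pts[:, None, :] - src_pts[None, :, :], axis=-1)
    return np.exp(-1j * k * r) / (4 * np.pi * r)

def esm_nah_reconstruct(p_holo, holo_pts, src_pts, recon_pts, k, alpha):
    """Tikhonov-regularized equivalent-source NAH reconstruction.

    p_holo : complex pressures measured at holo_pts
    alpha  : regularization parameter (chosen e.g. by the L-curve)
    """
    G_h = greens_matrix(holo_pts, src_pts, k)        # equivalent sources -> hologram
    # Solve (G^H G + alpha*I) q = G^H p for the equivalent source strengths q
    A = G_h.conj().T @ G_h + alpha * np.eye(G_h.shape[1])
    q = np.linalg.solve(A, G_h.conj().T @ p_holo)
    G_r = greens_matrix(recon_pts, src_pts, k)       # equivalent sources -> reconstruction
    return G_r @ q
```

    In practice the reconstruction surface lies closer to the source than the hologram, and alpha trades off fit against noise amplification, which is where the parameter-choice methods compared in the study come in.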

  8. A Technique of Teaching the Principle of Equivalence at Ground Level

    ERIC Educational Resources Information Center

    Lubrica, Joel V.

    2016-01-01

    This paper presents one way of demonstrating the Principle of Equivalence in the classroom. Teaching the Principle of Equivalence involves someone experiencing acceleration through empty space, juxtaposed with the daily encounter with gravity. This classroom activity is demonstrated with a water-filled bottle containing glass marbles and…

  9. Estimation of neutron dose equivalent at the mezzanine of the Advanced Light Source and the laboratory boundary using the ORNL program MORSE.

    PubMed

    Sun, R K

    1990-12-01

    To investigate the radiation effect of neutrons near the Advanced Light Source (ALS) at Lawrence Berkeley Laboratory (LBL) with respect to the neutron dose equivalents in nearby occupied areas and at the site boundary, the neutron transport code MORSE, from Oak Ridge National Laboratory (ORNL), was used. These dose equivalents result from both skyshine neutrons transported by air scattering and direct neutrons penetrating the shielding. The ALS neutron sources are a 50-MeV linear accelerator and its transfer line, a 1.5-GeV booster, a beam extraction line, and a 1.9-GeV storage ring. The most conservative total occupational-dose-equivalent rate in the center of the ALS mezzanine, 39 m from the ALS center, was found to be 1.14 × 10⁻³ Sv for a 2000-h "occupational" year, and the total environmental-dose-equivalent rate at the ALS boundary, 125 m from the ALS center, was found to be 3.02 × 10⁻⁴ Sv for an 8760-h calendar year. More realistic dose-equivalent rates, using the nominal (expected) storage-ring current, were calculated to be 1.0 × 10⁻⁴ Sv and 2.65 × 10⁻⁵ Sv for the occupational year and calendar year, respectively, which are much lower than the DOE reporting levels.

  10. The use of short and wide x-ray pulses for time-of-flight x-ray Compton Scatter Imaging in cargo security

    NASA Astrophysics Data System (ADS)

    Calvert, Nick; Betcke, Marta M.; Cresswell, John R.; Deacon, Alick N.; Gleeson, Anthony J.; Judson, Daniel S.; Mason, Peter; McIntosh, Peter A.; Morton, Edward J.; Nolan, Paul J.; Ollier, James; Procter, Mark G.; Speller, Robert D.

    2015-05-01

    Using a short pulse width x-ray source and measuring the time-of-flight of photons that scatter from an object under inspection allows for the point of interaction to be determined, and a profile of the object to be sampled along the path of the beam. A three dimensional image can be formed by interrogating the entire object. Using high energy x rays enables the inspection of cargo containers with steel walls, in the search for concealed items. A longer pulse width x-ray source can also be used with deconvolution techniques to determine the points of interaction. We present time-of-flight results from both short (picosecond) width and long (hundreds of nanoseconds) width x-ray sources, and show that the position of scatter can be localised with a resolution of 2 ns, equivalent to 30 cm, for a 3 cm thick plastic test object.
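    The quoted 2 ns / 30 cm correspondence follows directly from the round-trip geometry: shifting the scatter point by dx along the beam changes the total source-to-scatter-to-detector path by roughly 2*dx, so the position resolution is c*dt/2. A quick numerical check:

```python
c = 299_792_458.0   # speed of light in vacuum, m/s
dt = 2e-9           # timing resolution, s
# A shift dx of the scatter point changes the round-trip path by about 2*dx,
# so the achievable position resolution along the beam is c*dt/2.
dx = c * dt / 2
print(round(dx, 3))  # 0.3 m, i.e. about 30 cm
```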

  11. Blind Channel Equalization with Colored Source Based on Constrained Optimization Methods

    NASA Astrophysics Data System (ADS)

    Wang, Yunhua; DeBrunner, Linda; DeBrunner, Victor; Zhou, Dayong

    2008-12-01

    Tsatsanis and Xu have applied the constrained minimum output variance (CMOV) principle to directly blind equalize a linear channel—a technique that has proven effective with white inputs. It is generally assumed in the literature that their CMOV method can also effectively equalize a linear channel with a colored source. In this paper, we prove that colored inputs will cause the equalizer to incorrectly converge due to inadequate constraints. We also introduce a new blind channel equalizer algorithm that is based on the CMOV principle, but with a different constraint that will correctly handle colored sources. Our proposed algorithm works for channels with either white or colored inputs and performs equivalently to the trained minimum mean-square error (MMSE) equalizer under high SNR. Thus, our proposed algorithm may be regarded as an extension of the CMOV algorithm proposed by Tsatsanis and Xu. We also introduce several methods to improve the performance of our introduced algorithm in the low SNR condition. Simulation results show the superior performance of our proposed methods.

  12. Water equivalency evaluation of PRESAGE® dosimeters for dosimetry of Cs-137 and Ir-192 brachytherapy sources

    NASA Astrophysics Data System (ADS)

    Gorjiara, Tina; Hill, Robin; Kuncic, Zdenka; Baldock, Clive

    2010-11-01

    A major challenge in brachytherapy dosimetry is the measurement of steep dose gradients, which can be achieved with a high-spatial-resolution three-dimensional (3D) dosimeter. PRESAGE® is a polyurethane-based dosimeter suitable for 3D dosimetry. Since an ideal dosimeter is radiologically water equivalent, we have investigated the relative dose response of three different PRESAGE® formulations, two with a lower chloride and bromide content than the original one, for Cs-137 and Ir-192 brachytherapy sources. Doses were calculated using the EGSnrc Monte Carlo package. Our results indicate that PRESAGE® dosimeters are suitable for relative dose measurements of Cs-137 and Ir-192 brachytherapy sources, and that the lower-halogen-content PRESAGE® dosimeters are more water equivalent than the original formulation.

  13. Equivalent source modeling of the main field using MAGSAT data

    NASA Technical Reports Server (NTRS)

    1980-01-01

    The software was considerably enhanced to accommodate a more comprehensive examination of the data available for field modeling using the equivalent source method by (1) implementing a dynamic core allocation capability in the software system for automatic dimensioning of the normal matrix; (2) implementing a time-dependent model for the dipoles; (3) incorporating the capability to input specialized data formats in a fashion similar to models in spherical harmonics; and (4) implementing the optional ability to simultaneously estimate observatory anomaly biases where annual-means data are utilized. The time-dependence capability was demonstrated by estimating a component model of 21-deg resolution using the 14-day MAGSAT data set of Goddard's MGST (12/80). The equivalent source model reproduced both the constant and the secular variation found in MGST (12/80).
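    The least-squares equivalent-source idea underlying this record can be illustrated with a toy planar version: relate observed anomalies to a grid of buried point sources through a linear kernel, invert by least squares, and then evaluate the fitted source field elsewhere (the "continuation" transformations the method supports). The inverse-square kernel below is a deliberate simplification of the spherical-earth dipole formulation used in the actual work; all names are illustrative.

```python
import numpy as np

def design_matrix(obs_pts, src_pts):
    """Simplified point-source kernel: inverse-square distance attraction
    per unit source strength (a stand-in for the real gravity/magnetic kernel)."""
    r = np.linalg.norm(obs_pts[:, None, :] - src_pts[None, :, :], axis=-1)
    return 1.0 / r**2

def invert_equivalent_sources(anomaly, obs_pts, src_pts):
    """Fit equivalent point-source strengths to observed anomalies by least squares."""
    A = design_matrix(obs_pts, src_pts)
    m, *_ = np.linalg.lstsq(A, anomaly, rcond=None)
    return m

def continue_field(m, new_pts, src_pts):
    """Linear transformation of the fitted source field, e.g. continuation
    of the anomaly to a different observation surface."""
    return design_matrix(new_pts, src_pts) @ m
```

    Once the source strengths are fitted, any linear functional of the field (continuations, derivatives, component transformations) reduces to another matrix-vector product against the same sources.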

  14. Antioxidant Properties of “Natchez” and “Triple Crown” Blackberries Using Korean Traditional Winemaking Techniques

    PubMed Central

    Maness, Niels; McGlynn, William

    2017-01-01

    This research evaluated blackberries grown in Oklahoma and wines produced using a modified traditional Korean technique employing relatively oxygen-permeable earthenware fermentation vessels. The fermentation variables were temperature (21.6°C versus 26.6°C) and yeast inoculation versus wild fermentation. Wild fermented wines had higher total phenolic concentration than yeast fermented wines. Overall, wines had a relatively high concentration of anthocyanin (85–320 mg L−1 malvidin-3-monoglucoside) and antioxidant capacity (9776–37845 µmol Trolox equivalent g−1). “Natchez” berries had a higher anthocyanin concentration than “Triple Crown” berries. Higher fermentation temperature at the start of the winemaking process followed by the use of lower fermentation/storage temperature for aging wine samples maximized phenolic compound extraction/retention. The Korean winemaking technique used in this study produced blackberry wines that were excellent sources of polyphenolic compounds as well as being high in antioxidant capacity as measured by the Oxygen Radical Absorbance Capacity (ORAC) test. PMID:28713820

  15. A Comparison of seismic instrument noise coherence analysis techniques

    USGS Publications Warehouse

    Ringler, A.T.; Hutt, C.R.; Evans, J.R.; Sandoval, L.D.

    2011-01-01

    The self-noise of a seismic instrument is a fundamental characteristic used to evaluate the quality of the instrument. It is important to be able to measure this self-noise robustly, to understand how differences among test configurations affect the tests, and to understand how different processing techniques and isolation methods (from nonseismic sources) can contribute to differences in results. We compare two popular coherence methods used for calculating incoherent noise, which is widely used as an estimate of instrument self-noise (incoherent noise and self-noise are not strictly identical but in observatory practice are approximately equivalent; Holcomb, 1989; Sleeman et al., 2006). Beyond directly comparing these two coherence methods on similar models of seismometers, we compare how small changes in test conditions can contribute to incoherent-noise estimates. These conditions include timing errors, signal-to-noise ratio changes (ratios between background noise and instrument incoherent noise), relative sensor locations, misalignment errors, processing techniques, and different configurations of sensor types.

  16. Backward renormalization-group inference of cortical dipole sources and neural connectivity efficacy

    NASA Astrophysics Data System (ADS)

    Amaral, Selene da Rocha; Baccalá, Luiz A.; Barbosa, Leonardo S.; Caticha, Nestor

    2017-06-01

    Proper neural connectivity inference has become essential for understanding cognitive processes associated with human brain function. Its efficacy is often hampered by the curse of dimensionality. In the electroencephalogram case, which is a noninvasive electrophysiological monitoring technique to record electrical activity of the brain, a possible way around this is to replace multichannel electrode information with dipole reconstructed data. We use a method based on maximum entropy and the renormalization group to infer the position of the sources, whose success hinges on transmitting information from low- to high-resolution representations of the cortex. The performance of this method compares favorably to other available source inference algorithms, which are ranked here in terms of their performance with respect to directed connectivity inference by using artificially generated dynamic data. We examine some representative scenarios comprising different numbers of dynamically connected dipoles over distinct cortical surface positions and under different sensor noise impairment levels. The overall conclusion is that inverse problem solutions do not affect the correct inference of the direction of the flow of information as long as the equivalent dipole sources are correctly found.

  17. An equivalent body surface charge model representing three-dimensional bioelectrical activity

    NASA Technical Reports Server (NTRS)

    He, B.; Chernyak, Y. B.; Cohen, R. J.

    1995-01-01

    A new surface-source model has been developed to account for the bioelectrical potential on the body surface. A single-layer surface-charge model on the body surface has been developed to equivalently represent bioelectrical sources inside the body. The boundary conditions on the body surface are discussed in relation to the surface-charge in a half-space conductive medium. The equivalent body surface-charge is shown to be proportional to the normal component of the electric field on the body surface just outside the body. The spatial resolution of the equivalent surface-charge distribution appears intermediate between those of the body surface potential distribution and the body surface Laplacian distribution. An analytic relationship between the equivalent surface-charge and the surface Laplacian of the potential was found for a half-space conductive medium. The effects of finite spatial sampling and noise on the reconstruction of the equivalent surface-charge were evaluated by computer simulations. It was found through computer simulations that the reconstruction of the equivalent body surface-charge from the body surface Laplacian distribution is very stable against noise and finite spatial sampling. The present results suggest that the equivalent body surface-charge model may provide an additional insight to our understanding of bioelectric phenomena.

  18. 42 CFR 81.4 - Definition of terms used in this part.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ...]. (e) Equivalent dose means the absorbed dose in a tissue or organ multiplied by a radiation weighting... dose means the portion of the equivalent dose that is received from radiation sources outside of the... pattern and level of radiation exposure. (h) Internal dose means the portion of the equivalent dose that...

  19. A boundary condition to the Khokhlov-Zabolotskaya equation for modeling strongly focused nonlinear ultrasound fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rosnitskiy, P., E-mail: pavrosni@yandex.ru; Yuldashev, P., E-mail: petr@acs366.phys.msu.ru; Khokhlova, V., E-mail: vera@acs366.phys.msu.ru

    2015-10-28

    An equivalent source model was proposed as a boundary condition to the nonlinear parabolic Khokhlov-Zabolotskaya (KZ) equation to simulate high intensity focused ultrasound (HIFU) fields generated by medical ultrasound transducers with the shape of a spherical shell. The boundary condition was set in the initial plane; the aperture, the focal distance, and the initial pressure of the source were chosen based on the best match of the axial pressure amplitude and phase distributions in the Rayleigh integral analytic solution for a spherical transducer and the linear parabolic approximation solution for the equivalent source. Analytic expressions for the equivalent source parameters were derived. It was shown that the proposed approach allowed us to transfer the boundary condition from the spherical surface to the plane and to achieve a very good match between the linear field solutions of the parabolic and full diffraction models even for highly focused sources with F-number less than unity. The proposed method can be further used to expand the capabilities of the KZ nonlinear parabolic equation for efficient modeling of HIFU fields generated by strongly focused sources.

  20. Estimating the sources of global sea level rise with data assimilation techniques.

    PubMed

    Hay, Carling C; Morrow, Eric; Kopp, Robert E; Mitrovica, Jerry X

    2013-02-26

    A rapidly melting ice sheet produces a distinctive geometry, or fingerprint, of sea level (SL) change. Thus, a network of SL observations may, in principle, be used to infer sources of meltwater flux. We outline a formalism, based on a modified Kalman smoother, for using tide gauge observations to estimate the individual sources of global SL change. We also report on a series of detection experiments based on synthetic SL data that explore the feasibility of extracting source information from SL records. The Kalman smoother technique iteratively calculates the maximum-likelihood estimate of Greenland ice sheet (GIS) and West Antarctic ice sheet (WAIS) melt at each time step, and it accommodates data gaps while also permitting the estimation of nonlinear trends. Our synthetic tests indicate that when all tide gauge records are used in the analysis, it should be possible to estimate GIS and WAIS melt rates greater than ∼0.3 and ∼0.4 mm of equivalent eustatic sea level rise per year, respectively. We have also implemented a multimodel Kalman filter that allows us to account rigorously for additional contributions to SL changes and their associated uncertainty. The multimodel filter uses 72 glacial isostatic adjustment models and 3 ocean dynamic models to estimate the most likely models for these processes given the synthetic observations. We conclude that our modified Kalman smoother procedure provides a powerful method for inferring melt rates in a warming world.
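    The record's Kalman machinery is considerably richer (smoothing, multimodel averaging, glacial isostatic adjustment corrections), but the core update that extracts source rates from fingerprint-weighted gauge data can be sketched as a plain linear Kalman filter. The fingerprint matrix, noise levels, and random-walk state model below are made-up illustrations, not the authors' configuration.

```python
import numpy as np

def kalman_filter(y, F, Q, R, m0, P0):
    """Linear Kalman filter for a random-walk state m_t (e.g. GIS/WAIS melt rates)
    observed through fingerprint-weighted gauges: y_t = F @ m_t + noise."""
    m, P = m0.copy(), P0.copy()
    I = np.eye(len(m0))
    est = []
    for yt in y:
        P = P + Q                        # predict: m_t = m_{t-1} + process noise
        S = F @ P @ F.T + R              # innovation covariance
        K = P @ F.T @ np.linalg.inv(S)   # Kalman gain
        m = m + K @ (yt - F @ m)         # measurement update
        P = (I - K @ F) @ P
        est.append(m.copy())
    return np.array(est)
```

    A smoother, as used in the paper, would add a backward pass over these stored estimates; the filter alone already shows how fingerprints let a gauge network separate the two melt sources.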

  2. Obtaining source current density related to irregularly structured electromagnetic target field inside human body using hybrid inverse/FDTD method.

    PubMed

    Han, Jijun; Yang, Deqiang; Sun, Houjun; Xin, Sherman Xuegang

    2017-01-01

    The inverse method is inherently suitable for calculating the distribution of source current density related to an irregularly structured electromagnetic target field. However, the present form of the inverse method cannot calculate complex field-tissue interactions. A novel hybrid inverse/finite-difference time domain (FDTD) method is proposed that can calculate the complex field-tissue interactions for the inverse design of source current density related to an irregularly structured electromagnetic target field. A Huygens' equivalent surface is established as a bridge combining the inverse and FDTD methods. The distribution of the radiofrequency (RF) magnetic field on the Huygens' equivalent surface is obtained using the FDTD method, taking into account the complex field-tissue interactions within the human body model. The magnetic field distributed on the Huygens' equivalent surface is then regarded as the next target, and the current density on the designated source surface is derived using the inverse method. The homogeneity of the target magnetic field and the specific energy absorption rate are calculated to verify the proposed method.

  3. Design of bent waveguide semiconductor lasers using nonlinear equivalent chirp

    NASA Astrophysics Data System (ADS)

    Li, Lianyan; Shi, Yuechun; Zhang, Yunshan; Chen, Xiangfei

    2018-01-01

    The reconstruction equivalent chirp (REC) technique is widely used in the design and fabrication of semiconductor laser arrays and tunable lasers with low cost and high wavelength accuracy. A bent waveguide is a promising way to suppress the zeroth-order resonance, an intrinsic problem of the REC technique. However, it may introduce a basic grating chirp and deteriorate the single-longitudinal-mode (SLM) property of the laser. A nonlinear equivalent chirp pattern is proposed in this paper to compensate the grating chirp and improve the SLM property. It will benefit the realization of low-cost distributed feedback (DFB) semiconductor laser arrays with accurate lasing wavelengths.

  4. Riemann-Hilbert technique scattering analysis of metamaterial-based asymmetric 2D open resonators

    NASA Astrophysics Data System (ADS)

    Kamiński, Piotr M.; Ziolkowski, Richard W.; Arslanagić, Samel

    2017-12-01

    The scattering properties of metamaterial-based asymmetric two-dimensional open resonators excited by an electric line source are investigated analytically. The resonators are, in general, composed of two infinite and concentric cylindrical layers covered with an infinitely thin, perfect conducting shell that has an infinite axial aperture. The line source is oriented parallel to the cylinder axis. An exact analytical solution of this problem is derived. It is based on the dual-series approach and its transformation to the equivalent Riemann-Hilbert problem. Asymmetric metamaterial-based configurations are found to lead simultaneously to large enhancements of the radiated power and to highly steerable Huygens-like directivity patterns; properties not attainable with the corresponding structurally symmetric resonators. The presented open resonator designs are thus interesting candidates for many scientific and engineering applications where enhanced directional near- and far-field responses, tailored with beam shaping and steering capabilities, are highly desired.

  5. A rapid compatibility analysis of potential offshore sand sources for beaches of the Santa Barbara Littoral Cell

    USGS Publications Warehouse

    Mustain, N.; Griggs, G.; Barnard, P.L.

    2007-01-01

    The beaches of the Santa Barbara Littoral Cell, which are narrow as a result of natural and/or anthropogenic factors, may benefit from nourishment. Sand compatibility is fundamental to beach nourishment success, and grain size is the parameter most often used to evaluate equivalence. Only after understanding which sand sizes naturally compose the beaches in a specific cell, especially the smallest size that remains on the beach, can the potential compatibility of source areas, such as offshore borrow sites, be accurately assessed. This study examines sediments on the beach and in the nearshore (5-20 m depth) for the entire Santa Barbara Littoral Cell east of Point Conception. A digital bed-sediment camera (the Eyeball) and a spatial autocorrelation technique were used to determine sediment grain size. Here we report on whether nearshore sediments are comparable and compatible with the beach sands of the Santa Barbara Littoral Cell. © 2007 ASCE.

  6. 42 CFR 82.5 - Definition of terms used in this part.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Illness Compensation Program Act of 2000, 42 U.S.C. 7384-7385 [1994, supp. 2001]. (i) Equivalent dose is... equivalent dose that is received from radiation sources outside of the body. (k) Internal dose means that portion of the equivalent dose that is received from radioactive materials taken into the body. (l) NIOSH...

  7. Using a fast dual-wavelength imaging ellipsometric system to measure the flow thickness profile of an oil thin film

    NASA Astrophysics Data System (ADS)

    Kuo, Chih-Wei; Han, Chien-Yuan; Jhou, Jhe-Yi; Peng, Zeng-Yi

    2017-11-01

    Dual-wavelength light sources with a stroboscopic illumination technique were applied in a photoelastic-modulated ellipsometry process to retrieve two-dimensional ellipsometric parameters of thin films on a silicon substrate. Two laser diodes were alternately switched on and modulated by a programmable pulse generator to generate four short pulses at specific temporal phase angles within a modulation cycle; these short pulses were used to freeze the intensity variation of the PEM-modulated signal, allowing ellipsometric images to be captured by a charge-coupled device. Although the phase retardation of a photoelastic modulator depends on the light wavelength, we employed an equivalent phase retardation technique to avoid any adjustment of the photoelastic modulator. As a result, the ellipsometric parameters at different wavelengths can be obtained rapidly with this dual-wavelength ellipsometric system every 4 s. Both static and dynamic experiments are demonstrated in this work.

  8. Rotational and Translational Components of Motion Parallax: Observers' Sensitivity and Implications for Three-Dimensional Computer Graphics

    NASA Technical Reports Server (NTRS)

    Kaiser, Mary K.; Montegut, Michael J.; Proffitt, Dennis R.

    1995-01-01

    The motion of objects during motion parallax can be decomposed into 2 observer-relative components: translation and rotation. The depth ratio of objects in the visual field is specified by the inverse ratio of their angular displacement (from translation) or equivalently by the inverse ratio of their rotations. Despite the equal mathematical status of these 2 information sources, it was predicted that observers would be far more sensitive to the translational than rotational component. Such a differential sensitivity is implicitly assumed by the computer graphics technique billboarding, in which 3-dimensional (3-D) objects are drawn as planar forms (i.e., billboards) maintained normal to the line of sight. In 3 experiments, observers were found to be consistently less sensitive to rotational anomalies. The implications of these findings for kinetic depth effect displays and billboarding techniques are discussed.
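    The inverse-ratio relation in this record is easy to verify numerically: for a small lateral translation T, an object at distance d is angularly displaced by approximately T/d, so the depth ratio of two objects equals the inverse ratio of their angular displacements. A toy check with made-up numbers:

```python
import math

T = 0.1                      # lateral observer translation, m
d1, d2 = 2.0, 4.0            # distances of near and far objects, m
theta1 = math.atan(T / d1)   # angular displacement of the near object, rad
theta2 = math.atan(T / d2)   # angular displacement of the far object, rad
# Depth ratio d1/d2 matches the inverse ratio of angular displacements
# up to the small-angle approximation.
print(d1 / d2, theta2 / theta1)
```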

  9. Identification of Low Order Equivalent System Models From Flight Test Data

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    2000-01-01

    Identification of low order equivalent system dynamic models from flight test data was studied. Inputs were pilot control deflections, and outputs were aircraft responses, so the models characterized the total aircraft response including bare airframe and flight control system. Theoretical investigations were conducted and related to results found in the literature. Low order equivalent system modeling techniques using output error and equation error parameter estimation in the frequency domain were developed and validated on simulation data. It was found that some common difficulties encountered in identifying closed loop low order equivalent system models from flight test data could be overcome using the developed techniques. Implications for data requirements and experiment design were discussed. The developed methods were demonstrated using realistic simulation cases, then applied to closed loop flight test data from the NASA F-18 High Alpha Research Vehicle.
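
The equation-error estimation mentioned in the abstract can be sketched as follows, assuming a first-order low order equivalent system G(s) = K/(s + a); the frequency-response "data" are generated from known parameters so the fit should recover them. This is an illustrative sketch, not the method as implemented in the study.

```python
import numpy as np

# Generate noiseless frequency-response data from assumed true parameters.
K_true, a_true = 2.0, 1.5
w = np.linspace(0.1, 10.0, 50)          # rad/s
s = 1j * w
G = K_true / (s + a_true)               # "measured" frequency response

# Equation error: G*(s + a) = K  ->  K - a*G = G*s, linear in (K, a).
A = np.column_stack([np.ones_like(s), -G])
b = G * s
# Stack real and imaginary parts so the least-squares problem is real.
A_ri = np.vstack([A.real, A.imag])
b_ri = np.concatenate([b.real, b.imag])
params, *_ = np.linalg.lstsq(A_ri, b_ri, rcond=None)
K_est, a_est = params
```

With noiseless data the linear least-squares solution recovers the true gain and pole exactly.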

  10. Calculated organ doses using Monte Carlo simulations in a reference male phantom undergoing HDR brachytherapy applied to localized prostate carcinoma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Candela-Juan, Cristian; Perez-Calatayud, Jose; Ballester, Facundo

    Purpose: The aim of this study was to obtain equivalent doses in radiosensitive organs (aside from the bladder and rectum) when applying high-dose-rate (HDR) brachytherapy to a localized prostate carcinoma using {sup 60}Co or {sup 192}Ir sources. These data are compared with results in a water phantom and with expected values in an infinite water medium. A comparison with reported values from proton therapy and intensity-modulated radiation therapy (IMRT) is also provided. Methods: Monte Carlo simulations in Geant4 were performed using a voxelized phantom described in International Commission on Radiological Protection (ICRP) Publication 110, which reproduces masses and shapes from an adult reference man defined in ICRP Publication 89. Point sources of {sup 60}Co or {sup 192}Ir with photon energy spectra corresponding to those exiting their capsules were placed in the center of the prostate, and equivalent doses per clinical absorbed dose in this target organ were obtained in several radiosensitive organs. Values were corrected to account for clinical circumstances with the source located at various positions with differing dwell times throughout the prostate. This was repeated for a homogeneous water phantom. Results: For the nearest organs considered (bladder, rectum, testes, small intestine, and colon), equivalent doses given by the {sup 60}Co source were smaller (8%-19%) than those from {sup 192}Ir. However, as the distance increases, the more penetrating gamma rays produced by {sup 60}Co deliver higher organ equivalent doses. The overall result is that the effective dose per clinical absorbed dose from a {sup 60}Co source (11.1 mSv/Gy) is lower than from a {sup 192}Ir source (13.2 mSv/Gy). On the other hand, equivalent doses were the same in the tissue phantom and the homogeneous water phantom for those soft tissues closer to the prostate than about 30 cm. 
As the distance increased, differences in the photoelectric effect between water and soft tissue, and the appearance of other materials such as air, bone, or lungs, produced variations between the two phantoms of at most 35% in the considered organ equivalent doses. Finally, effective doses per clinical absorbed dose from IMRT and proton therapy were comparable to those from both brachytherapy sources, with brachytherapy being advantageous over external beam radiation therapy for the furthest organs. Conclusions: A database of organ equivalent doses when applying HDR brachytherapy to the prostate with either {sup 60}Co or {sup 192}Ir is provided. According to physical considerations, {sup 192}Ir is dosimetrically advantageous over {sup 60}Co sources at large distances, but not in the closest organs. Damage to distant healthy organs per clinical absorbed dose is lower with brachytherapy than with IMRT or protons, although the overall effective dose per Gy given to the prostate seems very similar. Given that there are several possible fractionation schemes, which result in different total amounts of therapeutic absorbed dose, the advantage of a given radiation treatment (in terms of equivalent dose to healthy organs) is treatment and facility dependent.
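
The effective dose figures quoted above follow from a tissue-weighted sum of organ equivalent doses. A minimal sketch, using ICRP-style tissue weighting factors; both the weights shown and the organ doses are illustrative assumptions, not values from this study:

```python
# Illustrative tissue weighting factors (w_T) in the style of ICRP 103.
tissue_weights = {"bladder": 0.04, "colon": 0.12, "lung": 0.12, "stomach": 0.12}
# Assumed organ equivalent doses per clinical absorbed dose, in mSv/Gy.
organ_equiv_dose_per_gy = {"bladder": 2.1, "colon": 1.4, "lung": 0.3, "stomach": 0.6}

# Effective dose per Gy delivered to the target: E = sum_T w_T * H_T.
effective_dose_per_gy = sum(tissue_weights[t] * organ_equiv_dose_per_gy[t]
                            for t in tissue_weights)
```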

  11. Stoichiometry of Reducing Equivalents and Splitting of Water in the Citric Acid Cycle.

    ERIC Educational Resources Information Center

    Madeira, Vitor M. C.

    1988-01-01

    Presents a solution to the problem of finding the source of extra reducing equivalents, and accomplishing the stoichiometry of glucose oxidation reactions. Discusses the citric acid cycle and glycolysis. (CW)

  12. Measurements of the neutron dose equivalent for various radiation qualities, treatment machines and delivery techniques in radiation therapy

    NASA Astrophysics Data System (ADS)

    Hälg, R. A.; Besserer, J.; Boschung, M.; Mayer, S.; Lomax, A. J.; Schneider, U.

    2014-05-01

    In radiation therapy, high energy photon and proton beams cause the production of secondary neutrons. This leads to an unwanted dose contribution to tissues outside the target volume, which can be considerable for the long-term health of cancer patients. Because neutrons have a high biological effectiveness with regard to cancer induction, even small neutron doses can be important. This study quantified the neutron doses for different radiation therapy modalities. Most of the reports in the literature used neutron dose measurements free in air or on the surface of phantoms to estimate the amount of neutron dose to the patient. In this study, dose measurements were performed in terms of neutron dose equivalent inside an anthropomorphic phantom. The neutron dose equivalent was determined using track etch detectors as a function of the distance to the isocenter, as well as for radiation sensitive organs. The dose distributions were compared with respect to treatment techniques (3D-conformal, volumetric modulated arc therapy and intensity-modulated radiation therapy for photons; spot scanning and passive scattering for protons), therapy machines (Varian, Elekta and Siemens linear accelerators) and radiation quality (photons and protons). The neutron dose equivalent varied between 0.002 and 3 mSv per treatment gray over all measurements. Only small differences were found when comparing treatment techniques, but substantial differences were observed between the linear accelerator models. The neutron dose equivalent for proton therapy was higher than for photons in general, and in particular for double-scattered protons. The overall neutron dose equivalent measured in this study was an order of magnitude lower than the stray dose of a treatment using 6 MV photons, suggesting that the contribution of the secondary neutron dose equivalent to the integral dose of a radiotherapy patient is small.

  13. Measurements of the neutron dose equivalent for various radiation qualities, treatment machines and delivery techniques in radiation therapy.

    PubMed

    Hälg, R A; Besserer, J; Boschung, M; Mayer, S; Lomax, A J; Schneider, U

    2014-05-21

    In radiation therapy, high energy photon and proton beams cause the production of secondary neutrons. This leads to an unwanted dose contribution to tissues outside the target volume, which can be considerable for the long-term health of cancer patients. Because neutrons have a high biological effectiveness with regard to cancer induction, even small neutron doses can be important. This study quantified the neutron doses for different radiation therapy modalities. Most of the reports in the literature used neutron dose measurements free in air or on the surface of phantoms to estimate the amount of neutron dose to the patient. In this study, dose measurements were performed in terms of neutron dose equivalent inside an anthropomorphic phantom. The neutron dose equivalent was determined using track etch detectors as a function of the distance to the isocenter, as well as for radiation sensitive organs. The dose distributions were compared with respect to treatment techniques (3D-conformal, volumetric modulated arc therapy and intensity-modulated radiation therapy for photons; spot scanning and passive scattering for protons), therapy machines (Varian, Elekta and Siemens linear accelerators) and radiation quality (photons and protons). The neutron dose equivalent varied between 0.002 and 3 mSv per treatment gray over all measurements. Only small differences were found when comparing treatment techniques, but substantial differences were observed between the linear accelerator models. The neutron dose equivalent for proton therapy was higher than for photons in general, and in particular for double-scattered protons. The overall neutron dose equivalent measured in this study was an order of magnitude lower than the stray dose of a treatment using 6 MV photons, suggesting that the contribution of the secondary neutron dose equivalent to the integral dose of a radiotherapy patient is small.

  14. On the nature of the unidentified high latitude UHURU sources

    NASA Technical Reports Server (NTRS)

    Holt, S. S.; Boldt, E. A.; Serlemitsos, P. J.; Murray, S. S.; Giacconi, R.; Kellogg, E. M.; Matilsky, T. A.

    1973-01-01

    It is found that the unidentified high latitude UHURU sources can have either of two very different explanations. They must either reside at great distances with luminosities equivalent to or greater than 10^46 erg/s, or be contained in the galaxy with luminosities equivalent to or less than 10^34 erg/s. The two possibilities are indistinguishable with the available data.
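
The distance-luminosity degeneracy behind the two interpretations follows directly from the inverse-square law, L = 4*pi*d^2*F: the same observed flux implies vastly different luminosities at galactic versus cosmological distances. A minimal sketch with an assumed flux value:

```python
import math

def luminosity(flux_erg_cm2_s, distance_cm):
    """Isotropic luminosity implied by an observed flux at a given distance."""
    return 4.0 * math.pi * distance_cm**2 * flux_erg_cm2_s

KPC_CM = 3.086e21                 # centimeters per kiloparsec

# Assumed flux of 1e-10 erg/cm^2/s: placing the source at 1 kpc (galactic)
# versus 1 Gpc (cosmological) changes the implied luminosity by 12 orders
# of magnitude, mirroring the two explanations in the abstract.
L_galactic = luminosity(1e-10, 1.0 * KPC_CM)
L_distant = luminosity(1e-10, 1.0e6 * KPC_CM)
```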

  15. Effect of Arctic Amplification on Design Snow Loads in Alaska

    DTIC Science & Technology

    2016-09-01

    snow water equivalent UFC Unified Facilities Criteria UTC Coordinated Universal Time Keywords: Alaska, Arctic amplification, climate change... extreme value analysis, snow loads, snow water equivalent, SWE Acknowledgements: This work was conducted with support from the Strategic... equivalent (SWE) of the snowpack. We acquired SWE data from a number of sources that provide automatic or manual observations, reanalysis data, or

  16. Fast simulation techniques for switching converters

    NASA Technical Reports Server (NTRS)

    King, Roger J.

    1987-01-01

    Techniques for simulating a switching converter are examined. The state equations for the equivalent circuits, which represent the switching converter, are presented and explained. The uses of the Newton-Raphson iteration, low ripple approximation, half-cycle symmetry, and discrete time equations to compute the interval durations are described. An example is presented in which these methods are illustrated by applying them to a parallel-loaded resonant inverter with three equivalent circuits for its continuous mode of operation.
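
The discrete time equations mentioned above advance each equivalent circuit exactly over a switching interval rather than taking many small integration steps. A minimal sketch on an assumed first-order RC example (all component values are illustrative, not from the paper): for dv/dt = a*v + b, the exact step is v[k+1] = e^(a*h)*v[k] + ((e^(a*h) - 1)/a)*b.

```python
import math

# Assumed circuit: RC network alternately driven (switch on) and freewheeling
# (switch off), stepped with the exact discrete-time solution per interval.
R, C, Vin, h = 1.0, 1e-3, 5.0, 1e-4

a = -1.0 / (R * C)                          # continuous-time pole
phi = math.exp(a * h)                       # state-transition factor over h
gamma_on = (phi - 1.0) / a * (Vin / (R * C))  # forced response, driven interval

v = 0.0
for k in range(200):                        # alternate on/off intervals
    if k % 2 == 0:
        v = phi * v + gamma_on              # switch on: driven interval
    else:
        v = phi * v                         # switch off: free decay
```

After many cycles the state settles to the periodic steady state of the on/off pair, which is what half-cycle symmetry arguments exploit.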

  17. Dose estimation and dating of pottery from Turkey

    NASA Astrophysics Data System (ADS)

    Altay Atlıhan, M.; Şahiner, Eren; Soykal Alanyalı, Feriştah

    2012-06-01

    The luminescence method is widely used for environmental dosimetry and for dating archaeological and geological materials. In this study, the equivalent dose (ED) and annual dose rate (AD) of an archaeological sample were measured. The age of the material was calculated as the equivalent dose divided by the annual dose rate. The archaeological sample was taken from Antalya, Turkey. Samples were prepared by the fine grain technique, and the equivalent dose was found using the multiple-aliquot-additive-dose (MAAD) and single aliquot regeneration (SAR) techniques. Short-shine and long-shine normalization MAAD were also applied, and the results of the methods were compared with each other. The optimal preheat temperature was found to be 200 °C for 10 min. The concentrations of the major radioactive isotopes, from which the annual dose was computed, were determined using a high-purity germanium detector and a low-level alpha counter. The age of the sample was found to be 510±40 years.
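
The age equation is simply ED divided by AD. A minimal sketch with assumed values chosen to be consistent with the reported 510-year age (the actual ED and AD of the sample are not given here):

```python
# Luminescence age = equivalent dose (Gy) / annual dose rate (Gy/year).
equivalent_dose_gy = 1.53        # assumed ED for illustration
annual_dose_gy_per_yr = 0.003    # assumed AD for illustration

age_years = equivalent_dose_gy / annual_dose_gy_per_yr
```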

  18. USEPA PATHOGEN EQUIVALENCY COMMITTEE RETREAT

    EPA Science Inventory

    The Pathogen Equivalency Committee held its retreat from September 20-21, 2005 at Hueston Woods State Park in College Corner, Ohio. This presentation will update the PEC’s membership on emerging pathogens, analytical methods, disinfection techniques, risk analysis, preparat...

  19. Antioxidant activity, phenolic content, and peroxide value of essential oil and extracts of some medicinal and aromatic plants used as condiments and herbal teas in Turkey.

    PubMed

    Ozcan, Mehmet Musa; Erel, Ozcan; Herken, Emine Etöz

    2009-02-01

    The antioxidant activity, total peroxide values, and total phenol contents of several medicinal and aromatic plant essential oils and extracts from Turkey were examined. Total phenolic contents were determined using a spectrophotometric technique and calculated as gallic acid equivalents. The total antioxidant activity of essential oils and extracts varied from 0.6853 to 1.3113 and 0.3189 to 0.6119 micromol of Trolox equivalents/g, respectively. The total phenolic content of the essential oils ranged from 0.0871 to 0.5919 mg of gallic acid/g dry weight. However, the total phenolic contents of the extracts were found to be higher than those of the essential oils. The total peroxide values of the oils varied from 7.31 (pickling herb) to 58.23 (bitter fennel flower) micromol of H(2)O(2)/g. As a result, it is shown that medicinal plant derivatives such as extracts and essential oils can be useful as potential sources of phenols, peroxides, and antioxidant capacity for the protection of processed foods.
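
Expressing phenolic content as gallic acid equivalents (GAE) amounts to fitting a calibration curve of absorbance against known gallic acid standards and inverting it for the sample. A sketch with idealized, assumed absorbance readings (not data from the study):

```python
# Assumed gallic acid standards (mg/mL) and their absorbances.
standards_mg_per_ml = [0.0, 0.1, 0.2, 0.4]
absorbance = [0.00, 0.12, 0.24, 0.48]    # idealized linear response

# Least-squares slope of a line through the origin: slope = sum(xy)/sum(x^2).
slope = (sum(c * a for c, a in zip(standards_mg_per_ml, absorbance))
         / sum(c * c for c in standards_mg_per_ml))

# Invert the calibration for an assumed sample absorbance.
sample_absorbance = 0.30
gae_mg_per_ml = sample_absorbance / slope
```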

  20. Sound field reproduction as an equivalent acoustical scattering problem.

    PubMed

    Fazi, Filippo Maria; Nelson, Philip A

    2013-11-01

    Given a continuous distribution of acoustic sources, the determination of the source strength that ensures the synthesis of a desired sound field is shown to be identical to the solution of an equivalent acoustic scattering problem. The paper begins with the presentation of the general theory that underpins sound field reproduction with secondary sources continuously arranged on the boundary of the reproduction region. The process of reproduction by a continuous source distribution is modeled by means of an integral operator (the single layer potential). It is then shown how the solution of the sound reproduction problem corresponds to that of an equivalent scattering problem. Analytical solutions are computed for two specific instances of this problem, involving, respectively, the use of a secondary source distribution in spherical and planar geometries. The results are shown to be the same as those obtained with analyses based on High Order Ambisonics and Wave Field Synthesis, respectively, thus bringing to light a fundamental analogy between these two methods of sound reproduction. Finally, it is shown how the physical optics (Kirchhoff) approximation enables the derivation of a high-frequency simplification for the problem under consideration, this in turn being related to the secondary source selection criterion reported in the literature on Wave Field Synthesis.

  1. Radiant Temperature Nulling Radiometer

    NASA Technical Reports Server (NTRS)

    Ryan, Robert (Inventor)

    2003-01-01

    A self-calibrating nulling radiometer for non-contact temperature measurement of an object, such as a body of water, employs a black body source as a temperature reference, an optomechanical mechanism, e.g., a chopper, to switch back and forth between measuring the temperature of the black body source and that of a test source, and an infrared detection technique. The radiometer functions by measuring radiance of both the test and the reference black body sources; adjusting the temperature of the reference black body so that its radiance is equivalent to the test source; and measuring the temperature of the reference black body at this point using a precision contact-type temperature sensor, to determine the radiative temperature of the test source. The radiation from both sources is detected by an infrared detector that converts the detected radiation to an electrical signal that is fed with a chopper reference signal to an error signal generator, such as a synchronous detector, that creates a precision rectified signal that is approximately proportional to the difference between the temperature of the reference black body and that of the test infrared source. This error signal is then used in a feedback loop to adjust the reference black body temperature until it equals that of the test source, at which point the error signal is nulled to zero. The chopper mechanism operates at one or more hertz, allowing minimization of 1/f noise. It also provides pure chopping between the black body and the test source and allows continuous measurements.
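
The nulling feedback loop can be sketched as a simple iteration: drive the reference temperature until the radiance difference (the error signal) vanishes, then read the contact sensor. The linearized radiance model, the gain, and all temperatures below are illustrative assumptions, not values from the patent.

```python
def radiance(temp_k):
    # Linearized radiance around the operating point (assumed slope).
    return 0.02 * temp_k

t_test = 293.0    # unknown test-source temperature (K)
t_ref = 280.0     # adjustable reference black-body temperature (K)
gain = 0.5        # assumed loop gain

for _ in range(100):
    error = radiance(t_ref) - radiance(t_test)  # synchronous-detector output
    t_ref -= gain * error / 0.02                # adjust reference toward null

# At the null, the contact sensor on the reference reads the test temperature.
measured_temperature = t_ref
```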

  2. Reconstruction of instantaneous surface normal velocity of a vibrating structure using interpolated time-domain equivalent source method

    NASA Astrophysics Data System (ADS)

    Geng, Lin; Bi, Chuan-Xing; Xie, Feng; Zhang, Xiao-Zheng

    2018-07-01

    Interpolated time-domain equivalent source method is extended to reconstruct the instantaneous surface normal velocity of a vibrating structure by using the time-evolving particle velocity as the input, which provides a non-contact way to obtain an overall picture of the instantaneous vibration behavior of the structure. In this method, the time-evolving particle velocity in the near field is first modeled by a set of equivalent sources positioned inside the vibrating structure, and then the integrals of equivalent source strengths are solved by an iterative solving process and are further used to calculate the instantaneous surface normal velocity. An experiment of a semi-cylindrical steel plate impacted by a steel ball is investigated to examine the ability of the extended method, where the time-evolving normal particle velocity and pressure on the hologram surface measured by a Microflown pressure-velocity probe are used as the inputs of the extended method and the method based on pressure measurements, respectively, and the instantaneous surface normal velocity of the plate measured by a laser Doppler vibrometer is used as the reference for comparison. The experimental results demonstrate that the extended method is a powerful tool to visualize the instantaneous surface normal velocity of a vibrating structure in both time and space domains and can obtain more accurate results than the method based on pressure measurements.

  3. A linear spectral matching technique for retrieving equivalent water thickness and biochemical constituents of green vegetation

    NASA Technical Reports Server (NTRS)

    Gao, Bo-Cai; Goetz, Alexander F. H.

    1992-01-01

    Over the last decade, technological advances in airborne imaging spectrometers, having spectral resolution comparable with laboratory spectrometers, have made it possible to estimate biochemical constituents of vegetation canopies. Wessman estimated lignin concentration from data acquired with NASA's Airborne Imaging Spectrometer (AIS) over Blackhawk Island in Wisconsin. A stepwise linear regression technique was used to determine the single spectral channel or channels in the AIS data that best correlated with measured lignin contents using chemical methods. The regression technique does not take advantage of the spectral shape of the lignin reflectance feature as a diagnostic tool nor the increased discrimination among other leaf components with overlapping spectral features. A nonlinear least squares spectral matching technique was recently reported for deriving both the equivalent water thicknesses of surface vegetation and the amounts of water vapor in the atmosphere from contiguous spectra measured with the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS). The same technique was applied to a laboratory reflectance spectrum of fresh, green leaves. The result demonstrates that the fresh leaf spectrum in the 1.0-2.5 microns region consists of spectral components of dry leaves and the spectral component of liquid water. A linear least squares spectral matching technique for retrieving equivalent water thickness and biochemical components of green vegetation is described.
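
The linear spectral matching described above can be sketched as an ordinary least-squares mixture fit: model the measured spectrum as a weighted sum of component spectra and solve for the weights. The endmember spectra below are synthetic assumptions, not AVIRIS or laboratory data.

```python
import numpy as np

# Synthetic component spectra over the 1.0-2.5 micron region.
wavelengths = np.linspace(1.0, 2.5, 50)
dry_leaf = 0.4 + 0.1 * np.sin(wavelengths)   # assumed dry-leaf endmember
water = np.exp(-wavelengths)                 # assumed liquid-water endmember

# "Measured" fresh-leaf spectrum: a known linear mix of the components.
true_weights = np.array([0.7, 0.3])
measured = true_weights[0] * dry_leaf + true_weights[1] * water

# Linear least-squares spectral matching: solve A @ w ~= measured for w.
A = np.column_stack([dry_leaf, water])
weights, *_ = np.linalg.lstsq(A, measured, rcond=None)
```

The recovered water weight plays the role of the equivalent water thickness term; with real spectra, additional endmembers (e.g., lignin, cellulose) are appended as extra columns of A.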

  4. Modelling and Characterization of Effective Thermal Conductivity of Single Hollow Glass Microsphere and Its Powder.

    PubMed

    Liu, Bing; Wang, Hui; Qin, Qing-Hua

    2018-01-14

    The tiny hollow glass microsphere (HGM) can be used to design new lightweight and thermally insulating composites as a high-strength core, owing to its hollow structure. However, little work has studied its own overall thermal conductivity independent of any matrix, which generally cannot be measured or evaluated directly. In this study, the overall thermal conductivity of the HGM is investigated experimentally and numerically. The experimental investigation of the thermal conductivity of HGM powder is performed by the transient plane source (TPS) technique to provide a reference for the numerical results, which are obtained by a developed three-dimensional two-step hierarchical computational method. In the present method, three heterogeneous HGM stacking elements representing different distributions of HGMs in the powder are assumed. Each stacking element and its equivalent homogeneous solid counterpart are, respectively, embedded into a fictitious matrix material as fillers to form two equivalent composite systems at different levels, and then the overall thermal conductivity of each stacking element can be numerically determined through the equivalence of the two systems. The comparison of experimental and computational results indicates that the present computational modeling can effectively predict the overall thermal conductivity of a single HGM and its powder in a flexible way. Besides, it should be noted that the influence of thermal interfacial resistance cannot be removed from the experimental results in the TPS measurement.

  5. Measurements of the cesium flow from a surface-plasma H/sup -/ ion source

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, H.V.; Allison, P.W.

    1979-01-01

    A surface ionization gauge (SIG) was constructed and used to measure the Cs/sup 0/ flow rate through the emission slit of a surface-plasma source (SPS) of H/sup -/ ions with Penning geometry. The equivalent cesium density in the SPS discharge is deduced from these flow measurements. For dc operation the optimum H/sup -/ current occurs at an equivalent cesium density of approx. 7 x 10/sup 12/ cm/sup -3/ (corresponding to an average cesium consumption rate of 0.5 mg/h). For pulsed operation the optimum H/sup -/ current occurs at an equivalent cesium density of approx. 2 x 10/sup 13/ cm/sup -3/more » (1-mg/h average cesium consumption rate). Cesium trapping by the SPS discharge was observed for both dc and pulsed operation. A cesium energy of approx. 0.1 eV is deduced from the observed time of flight to the SIG. In addition to providing information on the physics of the source, the SIG is a useful diagnostic tool for source startup and operation.« less

  6. On-road and wind-tunnel measurement of motorcycle helmet noise.

    PubMed

    Kennedy, J; Carley, M; Walker, I; Holt, N

    2013-09-01

    The noise source mechanisms involved in motorcycling include various aerodynamic sources and engine noise. The problem of noise source identification requires extensive data acquisition of a type and level that have not previously been applied. Data acquisition on track and on road is problematic due to rider safety constraints and the portability of appropriate instrumentation. One way to address this problem is the use of data from wind tunnel tests. The validity of these measurements for noise source identification must first be demonstrated. To achieve this, extensive wind tunnel tests were conducted and compared with the results from on-track measurements. Sound pressure levels as a function of speed were compared between on track and wind tunnel tests and were found to be comparable. Spectral conditioning techniques were applied to separate engine and wind tunnel noise from aerodynamic noise and showed that the aerodynamic components were equivalent in both cases. The spectral conditioning of on-track data showed that the contribution of engine noise to the overall noise is a function of speed and is more significant than had previously been thought. These procedures form a basis for accurate experimental measurements of motorcycle noise.
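
One common form of spectral conditioning removes the part of a measured signal that is coherent with a reference channel, leaving the incoherent residual. A sketch of that idea on synthetic signals (not the paper's data; `fs` and `nperseg` are arbitrary choices), using the standard coherent-output-power relation G_residual = G_yy * (1 - coherence):

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
fs, n = 1024, 16384
engine = rng.standard_normal(n)      # reference channel (engine noise proxy)
aero = rng.standard_normal(n)        # independent "aerodynamic" noise
mic = engine + aero                  # microphone hears both contributions

# Magnitude-squared coherence between reference and microphone, and the
# microphone auto-spectrum, both via Welch averaging.
f, coh = signal.coherence(engine, mic, fs=fs, nperseg=512)
f, g_mic = signal.welch(mic, fs=fs, nperseg=512)

# Conditioned spectrum: the power incoherent with the engine reference.
residual_psd = g_mic * (1.0 - coh)
```

With equal-variance independent components, the coherence hovers near 0.5, so roughly half the microphone power is attributed to the engine and removed.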

  7. Constraints on primary and secondary particulate carbon sources using chemical tracer and 14C methods during CalNex-Bakersfield

    NASA Astrophysics Data System (ADS)

    Sheesley, Rebecca J.; Nallathamby, Punith Dev; Surratt, Jason D.; Lee, Anita; Lewandowski, Michael; Offenberg, John H.; Jaoui, Mohammed; Kleindienst, Tadeusz E.

    2017-10-01

    The present study investigates primary and secondary sources of organic carbon for Bakersfield, CA, USA as part of the 2010 CalNex study. The method used here involves integrated sampling that is designed to allow for detailed and specific chemical analysis of particulate matter (PM) in the Bakersfield airshed. To achieve this objective, filter samples were taken during thirty-four 23-hr periods between 19 May and 26 June 2010 and analyzed for organic tracers by gas chromatography - mass spectrometry (GC-MS). Contributions to organic carbon (OC) were determined by two organic tracer-based techniques: primary OC by chemical mass balance and secondary OC by a mass fraction method. Radiocarbon (14C) measurements of the total organic carbon were also made to determine the split between the modern and fossil carbon and thereby constrain unknown sources of OC not accounted for by either tracer-based attribution technique. From the analysis, OC contributions from four primary sources and four secondary sources were determined, which comprised three sources of modern carbon and five sources of fossil carbon. The major primary sources of OC were from vegetative detritus (9.8%), diesel (2.3%), gasoline (<1.0%), and lubricating oil impacted motor vehicle exhaust (30%); measured secondary sources resulted from isoprene (1.5%), α-pinene (<1.0%), toluene (<1.0%), and naphthalene (<1.0%, as an upper limit) contributions. The average observed organic carbon (OC) was 6.42 ± 2.33 μgC m-3. The 14C derived apportionment indicated that modern and fossil components were nearly equivalent on average; however, the fossil contribution ranged from 32 to 66% over the five week campaign. With the fossil primary and secondary sources aggregated, only 25% of the fossil organic carbon could not be attributed. 
In contrast, nearly 80% of the modern carbon could not be attributed to the primary and secondary sources accessible to this analysis, which included tracers of biomass burning, vegetative detritus and secondary biogenic carbon. The results of the current study contribute a source-based evaluation of the carbonaceous aerosol at CalNex Bakersfield.
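
The 14C constraint is a two-member mass balance: fossil carbon contains no 14C, so the fraction modern of the total OC splits it into modern and fossil pools. A sketch with an assumed fraction modern chosen to reflect the study's roughly equal split (the measured per-sample values are not reproduced here):

```python
# Two-source 14C mass balance: OC_total = OC_modern + OC_fossil,
# with OC_modern = fm * OC_total (fossil carbon is 14C-free).
fraction_modern = 0.51           # assumed campaign-average fm
total_oc_ugc_m3 = 6.42           # average observed OC from the abstract

modern_oc = fraction_modern * total_oc_ugc_m3
fossil_oc = (1.0 - fraction_modern) * total_oc_ugc_m3
```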

  8. Constraints on primary and secondary particulate carbon sources using chemical tracer and 14C methods during CalNex-Bakersfield

    PubMed Central

    Sheesley, Rebecca J.; Nallathamby, Punith Dev; Surratt, Jason D.; Lee, Anita; Lewandowski, Michael; Offenberg, John H.; Jaoui, Mohammed; Kleindienst, Tadeusz E.

    2018-01-01

    The present study investigates primary and secondary sources of organic carbon for Bakersfield, CA, USA as part of the 2010 CalNex study. The method used here involves integrated sampling that is designed to allow for detailed and specific chemical analysis of particulate matter (PM) in the Bakersfield airshed. To achieve this objective, filter samples were taken during thirty-four 23-hr periods between 19 May and 26 June 2010 and analyzed for organic tracers by gas chromatography – mass spectrometry (GC-MS). Contributions to organic carbon (OC) were determined by two organic tracer-based techniques: primary OC by chemical mass balance and secondary OC by a mass fraction method. Radiocarbon (14C) measurements of the total organic carbon were also made to determine the split between the modern and fossil carbon and thereby constrain unknown sources of OC not accounted for by either tracer-based attribution technique. From the analysis, OC contributions from four primary sources and four secondary sources were determined, which comprised three sources of modern carbon and five sources of fossil carbon. The major primary sources of OC were from vegetative detritus (9.8%), diesel (2.3%), gasoline (<1.0%), and lubricating oil impacted motor vehicle exhaust (30%); measured secondary sources resulted from isoprene (1.5%), α-pinene (<1.0%), toluene (<1.0%), and naphthalene (<1.0%, as an upper limit) contributions. The average observed organic carbon (OC) was 6.42 ± 2.33 μgC m−3. The 14C derived apportionment indicated that modern and fossil components were nearly equivalent on average; however, the fossil contribution ranged from 32-66% over the five week campaign. With the fossil primary and secondary sources aggregated, only 25% of the fossil organic carbon could not be attributed. 
In contrast, nearly 80% of the modern carbon could not be attributed to the primary and secondary sources accessible to this analysis, which included tracers of biomass burning, vegetative detritus and secondary biogenic carbon. The results of the current study contribute a source-based evaluation of the carbonaceous aerosol at CalNex Bakersfield. PMID:29681757

  9. Electrochemical process for the preparation of nitrogen fertilizers

    DOEpatents

    Aulich, Ted R [Grand Forks, ND; Olson, Edwin S [Grand Forks, ND; Jiang, Junhua [Grand Forks, ND

    2012-04-10

    The present invention provides methods and apparatus for the preparation of nitrogen fertilizers including ammonium nitrate, urea, urea-ammonium nitrate, and/or ammonia, at low temperature and pressure, preferably at ambient temperature and pressure, utilizing a source of carbon, a source of nitrogen, and/or a source of hydrogen or hydrogen equivalent. With an electrolyte serving as the ionic charge carrier, (1) ammonium nitrate is produced via the reduction of a nitrogen source at the cathode and the oxidation of a nitrogen source at the anode; (2) urea or its isomers are produced via the simultaneous cathodic reduction of a carbon source and a nitrogen source; (3) ammonia is produced via the reduction of a nitrogen source at the cathode and the oxidation of a hydrogen source or a hydrogen equivalent such as carbon monoxide or a mixture of carbon monoxide and hydrogen at the anode; and (4) urea-ammonium nitrate is produced via the simultaneous cathodic reduction of a carbon source and a nitrogen source, and anodic oxidation of a nitrogen source. The electrolyte can be aqueous, non-aqueous, or solid.

  10. Frequencies and Flutter Speed Estimation for Damaged Aircraft Wing Using Scaled Equivalent Plate Analysis

    NASA Technical Reports Server (NTRS)

    Krishnamurthy, Thiagarajan

    2010-01-01

    Equivalent plate analysis is often used to replace the computationally expensive finite element analysis in initial design stages or in conceptual design of aircraft wing structures. The equivalent plate model can also be used to design a wind tunnel model to match the stiffness characteristics of the wing box of a full-scale aircraft wing model while satisfying strength-based requirements. An equivalent plate analysis technique is presented to predict the static and dynamic response of an aircraft wing with or without damage. First, a geometric scale factor and a dynamic pressure scale factor are defined to relate the stiffness, load and deformation of the equivalent plate to the aircraft wing. A procedure using an optimization technique is presented to create scaled equivalent plate models from the full scale aircraft wing using geometric and dynamic pressure scale factors. The scaled models are constructed by matching the stiffness of the scaled equivalent plate with the scaled aircraft wing stiffness. It is demonstrated that the scaled equivalent plate model can be used to predict the deformation of the aircraft wing accurately. Once the full equivalent plate geometry is obtained, any other scaled equivalent plate geometry can be obtained using the geometric scale factor. Next, an average frequency scale factor is defined as the average ratio of the frequencies of the aircraft wing to the frequencies of the full-scale equivalent plate. A procedure is outlined to estimate the frequency response and the flutter speed of an aircraft wing from the equivalent plate analysis using the frequency scale factor and geometric scale factor. The equivalent plate analysis is demonstrated using an aircraft wing without damage and another with damage. 
Both of the problems show that the scaled equivalent plate analysis can be successfully used to predict the frequencies and flutter speed of a typical aircraft wing.
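
    The scale-factor bookkeeping described above can be sketched in a few lines. This is a hypothetical illustration, not the paper's procedure: the modal frequencies and the choice of three modes are invented for the example.

```python
# Hypothetical numbers: three wing modes and the matching full-scale
# equivalent-plate modes (Hz). The average frequency scale factor is the
# mean of the per-mode ratios; it then maps equivalent-plate frequencies
# back to predicted wing frequencies, as the abstract describes.

wing_freqs = [4.1, 11.8, 14.9]    # assumed aircraft-wing modal frequencies
plate_freqs = [4.0, 11.5, 14.5]   # assumed equivalent-plate modal frequencies

# Average frequency scale factor: mean of the per-mode frequency ratios.
s_f = sum(w / p for w, p in zip(wing_freqs, plate_freqs)) / len(wing_freqs)

# Predict wing frequencies from a new equivalent-plate analysis.
predicted = [s_f * f for f in plate_freqs]
print(s_f, predicted)
```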

  11. Equivalent reduced model technique development for nonlinear system dynamic response

    NASA Astrophysics Data System (ADS)

    Thibault, Louis; Avitabile, Peter; Foley, Jason; Wolfson, Janet

    2013-04-01

    The dynamic response of structural systems commonly involves nonlinear effects. Oftentimes, structural systems are made up of several components whose individual behavior is essentially linear compared to the total assembled system. However, the assembly of linear components using highly nonlinear connection elements or contact regions causes the entire system to become nonlinear. Conventional transient nonlinear integration of the equations of motion can be extremely computationally intensive, especially when the finite element models describing the components are very large and detailed. In this work, the equivalent reduced model technique (ERMT) is developed to address complicated nonlinear contact problems. ERMT utilizes a highly accurate model reduction scheme, the System Equivalent Reduction Expansion Process (SEREP). Extremely reduced-order models that provide the dynamic characteristics of linear components, which are interconnected with highly nonlinear connection elements, are formulated with SEREP for dynamic response evaluation using direct integration techniques. The full-space solution is compared to the response obtained using drastically reduced models to make evident the usefulness of the technique for a variety of analytical cases.
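
    A toy sketch of the SEREP reduction at the heart of ERMT, under simplifying assumptions (a 4-DOF spring-mass chain with unit masses, two retained modes, an arbitrary choice of active DOFs): the reduced matrices preserve the retained modal frequencies exactly, which is what makes a drastically reduced model usable for response prediction.

```python
import numpy as np

# 4-DOF spring-mass chain (unit masses, unit springs) as a stand-in for a
# large component model; all values are assumptions for the sketch.
n = 4
M = np.eye(n)
K = np.diag([2.0] * n) - np.diag([1.0] * (n - 1), 1) - np.diag([1.0] * (n - 1), -1)

# Full eigensolution; with M = I this is an ordinary symmetric eigenproblem.
lam, Phi = np.linalg.eigh(K)

keep = [0, 1]      # retained (lowest) modes
active = [0, 3]    # active/measured DOFs, an assumed instrumentation choice

Phi_k = Phi[:, keep]        # retained mode shapes (n x m)
Phi_a = Phi_k[active, :]    # partition at the active DOFs (a x m)

# SEREP: map active-DOF motion back to the full space via the pseudo-inverse.
T = Phi_k @ np.linalg.pinv(Phi_a)

M_r = T.T @ M @ T           # reduced mass matrix (a x a)
K_r = T.T @ K @ T           # reduced stiffness matrix (a x a)

# Eigenvalues of the reduced system reproduce the retained ones exactly.
lam_r = np.sort(np.linalg.eigvals(np.linalg.solve(M_r, K_r)).real)
print(lam[keep], lam_r)
```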

  12. Calibration factors for the SNOOPY NP-100 neutron dosimeter

    NASA Astrophysics Data System (ADS)

    Moscu, D. F.; McNeill, F. E.; Chase, J.

    2007-10-01

    Within CANDU nuclear power facilities, only a small fraction of workers are exposed to neutron radiation. For these individuals, roughly 4.5% of the total radiation equivalent dose results from exposure to neutrons. When this figure is considered across all workers receiving external exposure of any kind, only 0.25% of the total radiation equivalent dose results from exposure to neutrons. At many facilities, the NP-100 neutron dosimeter, manufactured by Canberra Industries Incorporated, is employed in both direct and indirect dosimetry methods. Also known as "SNOOPY", these detectors are calibrated against a standard Am-Be neutron source, which yields a calibration factor relating the neutron count rate to the ambient dose equivalent rate. Using measurements presented in a technical note, readings from the dosimeter for six different neutron fields in six source-detector orientations were used to determine a calibration factor for each of these sources. The calibration factor depends on the neutron energy spectrum and on the radiation weighting factors that link neutron fluence to equivalent dose. Although the neutron energy spectra measured in the CANDU workplace are quite different from that of the Am-Be calibration source, the calibration factor remains constant, within acceptable limits, regardless of the neutron source used in the calibration, for the specified calibration orientation and the current radiation weighting factors. However, changing the value of the radiation weighting factors would change the calibration factor. In the event of such a change, it will be necessary to assess whether a change to the calibration process or to the resulting calibration factor is warranted.
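
    The role of the calibration factor can be shown with a minimal sketch; the numbers below are invented placeholders, not NP-100 data.

```python
# Hypothetical sketch: the calibration factor links the detector count rate
# observed in a known Am-Be field to that field's ambient dose equivalent
# rate H*(10); the factor is then applied to later workplace readings.

def calibration_factor(h10_rate_usv_h, count_rate_cps):
    """Ambient dose equivalent rate per unit count rate (uSv/h per cps)."""
    return h10_rate_usv_h / count_rate_cps

cf = calibration_factor(100.0, 40.0)   # assumed reference field and reading
workplace_cps = 18.0                   # assumed workplace count rate
print(cf, cf * workplace_cps)          # 2.5 uSv/h per cps, 45.0 uSv/h
```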

  13. Multihelix rotating shield brachytherapy for cervical cancer

    PubMed Central

    Dadkhah, Hossein; Kim, Yusung; Wu, Xiaodong; Flynn, Ryan T.

    2015-01-01

    Purpose: To present a novel brachytherapy technique, called multihelix rotating shield brachytherapy (H-RSBT), for the precise angular and linear positioning of a partial shield in a curved applicator. H-RSBT mechanically enables dose delivery using only linear translational motion of the radiation source/shield combination. The previously proposed approach of serial rotating shield brachytherapy (S-RSBT), in which the partial shield is rotated to several angular positions at each source dwell position [W. Yang et al., “Rotating-shield brachytherapy for cervical cancer,” Phys. Med. Biol. 58, 3931–3941 (2013)], is mechanically challenging to implement in a curved applicator, and H-RSBT is proposed as a feasible solution. Methods: A Henschke-type applicator, designed for an electronic brachytherapy source (Xoft Axxent™) and a 0.5 mm thick tungsten partial shield with 180° or 45° azimuthal emission angles and a 116° asymmetric zenith angle, is proposed. The interior wall of the applicator contains six evenly spaced helical keyways that rigidly define the emission direction of the partial radiation shield as a function of depth in the applicator. The shield contains three uniformly distributed protruding keys on its exterior wall and is attached to the source such that it rotates freely, so that longitudinal translational motion of the source is transferred to rotational motion of the shield. S-RSBT and H-RSBT treatment plans with 180° and 45° azimuthal emission angles were generated for five cervical cancer patients with a diverse range of high-risk target volume (HR-CTV) shapes and applicator positions. For each patient, the total number of emission angles was held nearly constant for S-RSBT and H-RSBT by using dwell positions separated by 5 and 1.7 mm, respectively, and emission directions separated by 22.5° and 60°, respectively. Treatment delivery time and tumor coverage (D90 of HR-CTV) were the two metrics used as the basis for evaluation and comparison.
    For all the generated treatment plans, the D90 of the HR-CTV in units of equivalent dose in 2 Gy fractions (EQD2) was escalated until the D2cc (minimum dose to the hottest 2 cm3) tolerance of the bladder (90 Gy3), rectum (75 Gy3), or sigmoid colon (75 Gy3) was reached. Results: Treatment time changed for H-RSBT versus S-RSBT by −7.62% to 1.17%, with an average change of −2.8%; thus H-RSBT treatment times tended to be shorter than those of S-RSBT. The HR-CTV D90 also changed by −2.7% to 2.38%, with an average of −0.65%. Conclusions: H-RSBT is a mechanically feasible delivery technique for use in the curved applicators needed for cervical cancer brachytherapy. S-RSBT and H-RSBT were clinically equivalent for all patients considered, with the H-RSBT technique tending to require less time for delivery. PMID:26520749
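
    The EQD2 and Gy3 quantities used above follow the standard linear-quadratic conversion. A minimal sketch, assuming n identical fractions of size d; the example fractionation is illustrative, not a plan from the study.

```python
# EQD2 = D * (d + alpha/beta) / (2 + alpha/beta), with D = n * d the total
# dose; "Gy3" means EQD2 evaluated with alpha/beta = 3 Gy, as used for the
# organ-at-risk D2cc tolerances quoted above.

def eqd2(n_fractions, dose_per_fraction, alpha_beta):
    total = n_fractions * dose_per_fraction
    return total * (dose_per_fraction + alpha_beta) / (2.0 + alpha_beta)

print(eqd2(5, 7.0, 3.0))   # 5 x 7 Gy at alpha/beta = 3 Gy -> 70.0 Gy3
```

    As a sanity check, a single 2 Gy fraction maps to an EQD2 of exactly 2 Gy, by construction.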

  14. Magnetoencephalography recording and analysis.

    PubMed

    Velmurugan, Jayabal; Sinha, Sanjib; Satishchandra, Parthasarathy

    2014-03-01

    Magnetoencephalography (MEG) non-invasively measures the magnetic field generated by the excitatory postsynaptic electrical activity of the apical dendrites of pyramidal cells. Such a tiny magnetic field is measured with biomagnetometer sensors coupled with Superconducting Quantum Interference Devices (SQUIDs) inside a magnetically shielded room (MSR). The subjects are usually screened for the presence of ferromagnetic materials, and then the head position indicator coils, electroencephalography (EEG) electrodes (if measured simultaneously), and fiducials are digitized using a 3D digitizer, which aids in movement correction and in transferring the MEG data from the head coordinates to the device and voxel coordinates, thereby enabling more accurate co-registration and localization. MEG data pre-processing involves filtering the data for environmental and subject interferences, and artefact identification and rejection. Magnetic resonance imaging (MRI) is processed for corrections and for identifying fiducials. After choosing and computing the appropriate head model (spherical or realistic; boundary/finite element model), the interictal/ictal epileptiform discharges are selected and modeled by an appropriate source modeling technique (clinically, the most commonly used is the single equivalent current dipole, or ECD, model). The equivalent current dipole (ECD) source localization of the modeled interictal epileptiform discharge (IED) is considered physiologically valid or acceptable based on waveform morphology, isofield pattern, and dipole parameters (localization, dipole moment, confidence volume, goodness of fit). Thus, MEG source localization can aid clinicians in sublobar localization, lateralization, and grid placement by delineating the irritative/seizure onset zone. It also accurately localizes eloquent cortex, such as the visual and language areas. MEG also aids in diagnosing and delineating multiple novel findings in other neuropsychiatric disorders, including Alzheimer's disease, parkinsonism, traumatic brain injury, autistic disorders, and so on.

  15. Independent component analysis of EEG dipole source localization in resting and action state of brain

    NASA Astrophysics Data System (ADS)

    Almurshedi, Ahmed; Ismail, Abd Khamim

    2015-04-01

    EEG source localization was studied in order to determine the locations of the brain sources that are responsible for the measured potentials at the scalp electrodes, using EEGLAB with the Independent Component Analysis (ICA) algorithm. Neuronal sources generate current dipoles in different states of the brain, which give rise to the measured potentials. The current dipole sources are localized by fitting an equivalent current dipole model using a non-linear optimization technique with a standardized boundary element head model. To fit dipole models to ICA components in an EEGLAB dataset, ICA decomposition is performed and the appropriate components to be fitted are selected. The topographical scalp distributions of the delta, theta, alpha, and beta power spectra and the cross coherence of the EEG signals are observed. In the closed-eyes condition, the alpha band was activated over the occipital (O1, O2) and parietal (P3, P4) areas during both the resting and action states of the brain. Therefore, the parieto-occipital area of the brain is active in both states. However, the cross coherence shows more coherence between the right and left hemispheres in the action state of the brain than in the resting state. This preliminary result indicates that these potentials arise from the same generators in the brain.
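
    The dipole-fitting step can be sketched numerically. This is a deliberately simplified toy (a free-space point-source forward model on a flat sensor plane, a grid search in place of a true non-linear optimizer, no head model), so the geometry and every number are assumptions.

```python
import numpy as np

# Toy equivalent-current-dipole fit: for a fixed candidate location the
# moment q enters the forward model linearly, so it is found by least
# squares; the location itself is chosen by scanning candidates for the
# best goodness of fit, mimicking the non-linear search described above.

rng = np.random.default_rng(0)
sensors = rng.uniform(-1.0, 1.0, size=(32, 3))
sensors[:, 2] = 1.0                     # sensor plane at z = 1 (assumed)

def lead_field(p):
    """32 x 3 matrix mapping a dipole moment at p to sensor potentials."""
    d = sensors - p
    r = np.linalg.norm(d, axis=1, keepdims=True)
    return d / (4.0 * np.pi * r**3)

p_true = np.array([0.1, -0.2, 0.0])     # assumed "true" source location
q_true = np.array([0.0, 1.0, 0.5])      # assumed dipole moment
data = lead_field(p_true) @ q_true      # noiseless synthetic measurements

best = (np.inf, None, None)
for x in np.linspace(-0.4, 0.4, 17):
    for y in np.linspace(-0.4, 0.4, 17):
        L = lead_field(np.array([x, y, 0.0]))
        q = np.linalg.lstsq(L, data, rcond=None)[0]   # linear moment fit
        err = np.linalg.norm(L @ q - data)            # goodness of fit
        if err < best[0]:
            best = (err, x, y)

print(best[1], best[2])   # recovered location, near (0.1, -0.2)
```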

  16. A major crustal feature in the southeastern United States inferred from the MAGSAT equivalent source anomaly field

    NASA Technical Reports Server (NTRS)

    Ruder, M. E.; Alexander, S. S.

    1985-01-01

    The MAGSAT equivalent-source anomaly field evaluated at 325 km altitude depicts a prominent anomaly centered over southeast Georgia, which is adjacent to the high-amplitude positive Kentucky anomaly. To overcome the satellite resolution constraint in studying this anomaly, conventional geophysical data were included in the analysis: Bouguer gravity, seismic reflection and refraction, aeromagnetic, and in-situ stress-strain measurements. This integrated geophysical approach infers more specifically the nature and extent of the crustal and/or lithospheric source of the Georgia MAGSAT anomaly. The physical properties and tectonic evolution of the area are all important in the interpretation.

  17. Fabricating fiber Bragg gratings with two phase masks based on reconstruction-equivalent-chirp technique.

    PubMed

    Gao, Liang; Chen, Xiangfei; Xiong, Jintian; Liu, Shengchun; Pu, Tao

    2012-01-30

    Based on the reconstruction-equivalent-chirp (REC) technique, a novel solution for fabricating low-cost long fiber Bragg gratings (FBGs) with desired properties is proposed and initially studied. A proof-of-concept experiment is successfully demonstrated with two conventional uniform phase masks and a submicron-precision translation stage. It is shown that the original phase shift (OPS) caused by the phase mismatch of the two phase masks can be compensated separately by the equivalent phase shift (EPS) at the ±1st channels of the sampled FBGs. Furthermore, as an example, a π phase-shifted FBG of about 90 mm is fabricated by using these two 50 mm-long uniform phase masks based on the presented method.

  18. Evaluation of water-mimicking solid phantom materials for use in HDR and LDR brachytherapy dosimetry

    NASA Astrophysics Data System (ADS)

    Schoenfeld, Andreas A.; Thieben, Maike; Harder, Dietrich; Poppe, Björn; Chofor, Ndimofor

    2017-12-01

    In modern HDR or LDR brachytherapy with photon emitters, fast checks of the dose profiles generated in water or a water-equivalent phantom have to be available in the interest of patient safety. However, the commercially available brachytherapy photon sources cover a wide range of photon emission spectra, and the range of the in-phantom photon spectrum is further widened by Compton scattering, so that the achievement of water-mimicking properties of such phantoms involves high requirements on their atomic composition. In order to classify the degree of water equivalence of the numerous commercially available solid water-mimicking phantom materials and the energy ranges of their applicability, the radial profiles of the absorbed dose to water, Dw, have been calculated using Monte Carlo simulations in these materials and in water phantoms of the same dimensions. This study includes the HDR therapy sources Nucletron Flexisource Co-60 HDR (60Co), Eckert und Ziegler BEBIG GmbH CSM-11 (137Cs), Implant Sciences Corporation HDR Yb-169 Source 4140 (169Yb) as well as the LDR therapy sources IsoRay Inc. Proxcelan CS-1 (131Cs), IsoAid Advantage I-125 IAI-125A (125I), and IsoAid Advantage Pd-103 IAPd-103A (103Pd). Thereby our previous comparison between phantom materials and water surrounding a Varian GammaMed Plus HDR therapy 192Ir source (Schoenfeld et al 2015) has been complemented. Simulations were performed in cylindrical phantoms consisting of either water or the materials RW1, RW3, Solid Water, HE Solid Water, Virtual Water, Plastic Water DT, Plastic Water LR, Original Plastic Water (2015), Plastic Water (1995), Blue Water, polyethylene, polystyrene and PMMA. While for 192Ir, 137Cs and 60Co most phantom materials can be regarded as water equivalent, for 169Yb the materials Plastic Water LR, Plastic Water DT and RW1 appear as water equivalent. For the low-energy sources 103Pd, 131Cs and 125I, only Plastic Water LR can be classified as water equivalent.
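
    The classification rule implied above (a material counts as water equivalent when its dose profile tracks the profile in water within a tolerance) can be sketched as follows; the profile values and the 2% criterion are assumed placeholders, not data from this study.

```python
# Compare a radial dose profile computed in a candidate phantom material
# against the profile in water and flag the material as water equivalent
# if every point agrees within a relative tolerance. All numbers assumed.

def water_equivalent(d_material, d_water, tol=0.02):
    """True if all points agree with water within the relative tolerance."""
    return all(abs(m - w) / w <= tol for m, w in zip(d_material, d_water))

d_water = [1.00, 0.62, 0.41, 0.28]     # relative dose vs radius (assumed)
d_phantom = [1.00, 0.63, 0.41, 0.285]  # candidate material (assumed)
print(water_equivalent(d_phantom, d_water))
```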

  19. Evaluation of water-mimicking solid phantom materials for use in HDR and LDR brachytherapy dosimetry.

    PubMed

    Schoenfeld, Andreas A; Thieben, Maike; Harder, Dietrich; Poppe, Björn; Chofor, Ndimofor

    2017-11-21

    In modern HDR or LDR brachytherapy with photon emitters, fast checks of the dose profiles generated in water or a water-equivalent phantom have to be available in the interest of patient safety. However, the commercially available brachytherapy photon sources cover a wide range of photon emission spectra, and the range of the in-phantom photon spectrum is further widened by Compton scattering, so that the achievement of water-mimicking properties of such phantoms involves high requirements on their atomic composition. In order to classify the degree of water equivalence of the numerous commercially available solid water-mimicking phantom materials and the energy ranges of their applicability, the radial profiles of the absorbed dose to water, Dw, have been calculated using Monte Carlo simulations in these materials and in water phantoms of the same dimensions. This study includes the HDR therapy sources Nucletron Flexisource Co-60 HDR (60Co), Eckert und Ziegler BEBIG GmbH CSM-11 (137Cs), Implant Sciences Corporation HDR Yb-169 Source 4140 (169Yb) as well as the LDR therapy sources IsoRay Inc. Proxcelan CS-1 (131Cs), IsoAid Advantage I-125 IAI-125A (125I), and IsoAid Advantage Pd-103 IAPd-103A (103Pd). Thereby our previous comparison between phantom materials and water surrounding a Varian GammaMed Plus HDR therapy 192Ir source (Schoenfeld et al 2015) has been complemented. Simulations were performed in cylindrical phantoms consisting of either water or the materials RW1, RW3, Solid Water, HE Solid Water, Virtual Water, Plastic Water DT, Plastic Water LR, Original Plastic Water (2015), Plastic Water (1995), Blue Water, polyethylene, polystyrene and PMMA. While for 192Ir, 137Cs and 60Co most phantom materials can be regarded as water equivalent, for 169Yb the materials Plastic Water LR, Plastic Water DT and RW1 appear as water equivalent. For the low-energy sources 103Pd, 131Cs and 125I, only Plastic Water LR can be classified as water equivalent.

  20. Generation and Radiation of Acoustic Waves from a 2-D Shear Layer

    NASA Technical Reports Server (NTRS)

    Agarwal, Anurag; Morris, Philip J.

    2000-01-01

    A parallel numerical simulation of the radiation of sound from an acoustic source inside a 2-D jet is presented in this paper. This basic benchmark problem is used as a test case for scattering problems that are presently being solved using the Impedance Mismatch Method (IMM). In this technique, a solid body in the domain is represented by setting the acoustic impedance of each medium encountered by a wave to a different value. This impedance discrepancy results in reflected and scattered waves with appropriate amplitudes. The great advantage of this method is that no modifications to a simple Cartesian grid need to be made for bodies with complicated geometries. Thus, high-order finite difference schemes may be applied simply to all parts of the domain. In the IMM, the total perturbation field is split into incident and scattered fields. The incident pressure is assumed to be known, and the equivalent sources for the scattered field are associated with the presence of the scattering body (through the impedance mismatch) and with the propagation of the incident field through a non-uniform flow. An earlier version of the technique could only handle uniform flow in the vicinity of the source and at the outflow boundary. Scattering problems in non-uniform mean flow are of great practical importance (for example, scattering from a high-lift device in a non-uniform mean flow, or the effects of a fuselage boundary layer). The solution to this benchmark problem, which has an acoustic wave propagating through a non-uniform mean flow, serves as a test case for the extensions of the IMM technique.

  1. 40 CFR 60.47Da - Commercial demonstration permit.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... may not exceed the following equivalent MW electrical generation capacity for any one technology... plants may not exceed 15,000 MW. Technology Pollutant Equivalent electrical capacity(MW electrical output... (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Standards of Performance for Electric Utility...

  2. 40 CFR 63.2 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Protection Agency. Equivalent emission limitation means any maximum achievable control technology emission... common control that is included in a section 112(c) source category or subcategory for which a section... pollutant at least equivalent to the reduction in emissions of such pollutant achieved under a relevant...

  3. 40 CFR 63.2 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Protection Agency. Equivalent emission limitation means any maximum achievable control technology emission... common control that is included in a section 112(c) source category or subcategory for which a section... pollutant at least equivalent to the reduction in emissions of such pollutant achieved under a relevant...

  4. 40 CFR 63.2 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Protection Agency. Equivalent emission limitation means any maximum achievable control technology emission... common control that is included in a section 112(c) source category or subcategory for which a section... pollutant at least equivalent to the reduction in emissions of such pollutant achieved under a relevant...

  5. 40 CFR 63.2 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... Protection Agency. Equivalent emission limitation means any maximum achievable control technology emission... common control that is included in a section 112(c) source category or subcategory for which a section... pollutant at least equivalent to the reduction in emissions of such pollutant achieved under a relevant...

  6. 40 CFR 63.2 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Protection Agency. Equivalent emission limitation means any maximum achievable control technology emission... common control that is included in a section 112(c) source category or subcategory for which a section... pollutant at least equivalent to the reduction in emissions of such pollutant achieved under a relevant...

  7. 40 CFR 60.47Da - Commercial demonstration permit.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... may not exceed the following equivalent MW electrical generation capacity for any one technology... plants may not exceed 15,000 MW. Technology Pollutant Equivalent electrical capacity(MW electrical output... (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Standards of Performance for Electric Utility...

  8. Evaluating ecological equivalence of created marshes: comparing structural indicators with stable isotope indicators of blue crab trophic support

    USGS Publications Warehouse

    Llewellyn, Chris; LaPeyre, Megan K.

    2010-01-01

    This study sought to examine the ecological equivalence of created marshes of different ages using traditional structural measures of equivalence, and tested a relatively novel approach using stable isotopes as a measure of functional equivalence. We compared soil properties, vegetation, nekton communities, and δ13C and δ15N isotope values of blue crab muscle and hepatopancreas tissue and of primary producers at created (5-24 years old) and paired reference marshes in SW Louisiana. Paired contrasts indicated that created and reference marshes supported equivalent plant and nekton communities, but differed in soil characteristics. Stable isotope indicators of blue crab food web support showed that the older marshes (8+ years) were characterized by trophic diversity and breadth comparable to their reference marshes. Interpretation of the results for the youngest site was confounded by the fact that the paired reference marsh, which represented the desired end goal of restoration, contained a greater diversity of basal resources. Stable isotope techniques may give coastal managers an additional tool to assess the functional equivalence of created marshes, as measured by trophic support, but may be limited to comparisons of marshes with similar vegetative communities and basal resources, or may require the development of robust standardization techniques.
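
    Trophic-support comparisons of this kind rest on standard stable-isotope mixing arithmetic. A minimal two-source sketch with invented δ13C values and an assumed per-step trophic enrichment, not values from the study:

```python
# Two-source delta13C mixing: after correcting the consumer value for
# trophic enrichment, the fraction of carbon derived from source A is the
# consumer's position between the two source end-members.

def source_fraction(d_consumer, d_source_a, d_source_b, trophic_shift=0.5):
    d = d_consumer - trophic_shift        # correct for per-step enrichment
    return (d - d_source_b) / (d_source_a - d_source_b)

# Marsh-plant-like end-member (-13.0 permil) vs phytoplankton-like (-22.0):
f_a = source_fraction(-16.0, -13.0, -22.0)
print(f_a)   # ~0.61 of the consumer's carbon from source A in this toy case
```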

  9. Incentive Analysis for Clean Water Act Reauthorization: Point Source/Nonpoint Source Trading for Nutrient Discharge Reductions (1992)

    EPA Pesticide Factsheets

    Paper focuses on trading schemes in which regulated point sources are allowed to avoid upgrading their pollution control technology to meet water quality-based effluent limits if they pay for equivalent (or greater) reductions in nonpoint source pollution.

  10. ASSESSMENT OF INHALATION DOSE FROM THE INDOOR 222Rn AND 220Rn USING RAD7 AND PINHOLE CUP DOSEMETERS.

    PubMed

    Mehra, R; Jakhu, R; Bangotra, P; Kaur, K; Mittal, H M

    2016-10-01

    Radon is the most important source of natural radiation and is responsible for approximately half of the dose received from all sources. Most of this dose comes from inhalation of the radon progeny, especially in closed atmospheres. The concentrations of radon (222Rn) and thoron (220Rn) in different villages of the Jalandhar and Kapurthala districts of Punjab have been calculated with pinhole cup dosemeters and RAD7. On average, the values of all calculated parameters are higher for active monitoring than for passive monitoring. The calculated equilibrium equivalent 222Rn concentration (EEC_Rn) and equilibrium equivalent 220Rn concentration (EEC_Th) range from 5.58 to 34.29 and from 0.35 to 2.7 Bq m^-3, respectively, as estimated by the active technique. Similarly, the observed mean values of the potential alpha energy concentration of 222Rn (PAEC_Rn) and 220Rn (PAEC_Th) are 4.55 and 4.34 mWL, respectively. The dose rate to the soft tissues and lung from indoor 222Rn varies from 0.06 to 0.38 and from 0.50 to 3.05 nGy h^-1, respectively. The total annual effective dose for the residents of the study area is less than 10 mSv.
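
    The link between the EEC and PAEC figures above is the working-level convention. A sketch assuming the standard conversion (1 WL corresponds to an EEC of 3700 Bq m^-3 for 222Rn and 275 Bq m^-3 for 220Rn); the example EEC value is an assumption chosen to match the reported mean:

```python
# PAEC in mWL from an equilibrium-equivalent concentration in Bq/m^3,
# using the conventional Bq/m^3-per-working-level factors for radon
# and thoron.

def paec_mwl(eec_bq_m3, nuclide="Rn-222"):
    per_wl = 3700.0 if nuclide == "Rn-222" else 275.0   # Bq/m^3 per WL
    return eec_bq_m3 / per_wl * 1000.0

# An assumed mean EEC of ~16.8 Bq/m^3 reproduces a PAEC close to the
# reported 4.55 mWL mean for 222Rn.
print(paec_mwl(16.8))
```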

  11. Transosseous-equivalent rotator cuff repair: a systematic review on the biomechanical importance of tying the medial row.

    PubMed

    Mall, Nathan A; Lee, Andrew S; Chahal, Jaskarndip; Van Thiel, Geoffrey S; Romeo, Anthony A; Verma, Nikhil N; Cole, Brian J

    2013-02-01

    Double-row and transosseous-equivalent repair techniques have shown greater strength and improved healing compared with single-row techniques. The purpose of this study was to determine whether tying of the medial-row sutures provides added stability during biomechanical testing of a transosseous-equivalent rotator cuff repair. We performed a systematic review of studies directly comparing biomechanical differences. Five studies met the inclusion and exclusion criteria. Of the 5 studies, 4 showed improved biomechanical properties with tying the medial-row anchors before bringing the sutures laterally to the lateral-row anchors, whereas the remaining study showed no difference in contact pressure, mean failure load, or gap formation with a standard suture bridge with knots tied at the medial row compared with knotless repairs. The results of this systematic review and quantitative synthesis indicate that the biomechanical factors ultimate load, stiffness, gap formation, and contact area are significantly improved when medial knots are tied as part of a transosseous-equivalent suture bridge construct compared with knotless constructs. Further studies comparing the clinical healing rates and functional outcomes between medial knotted and knotless repair techniques are needed. This review indicates that biomechanical factors are improved when the medial row of a transosseous-equivalent rotator cuff repair is tied compared with a knotless repair. However, this has not been definitively proven to translate to improved healing rates clinically.

  12. SU-E-T-102: Determination of Dose Distributions and Water-Equivalence of MAGIC-F Polymer Gel for 60Co and 192Ir Brachytherapy Sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quevedo, A; Nicolucci, P

    2014-06-01

    Purpose: To analyse the water-equivalence of MAGIC-f polymer gel for 60Co and 192Ir clinical brachytherapy sources, through dose distributions simulated with the PENELOPE Monte Carlo code. Methods: The real geometries of the 60Co (BEBIG, model Co0.A86) and 192Ir (Varian, model GammaMed Plus) clinical brachytherapy sources were modelled in the PENELOPE Monte Carlo simulation code. The most probable photon emission lines were used for both sources: 17 emission lines for 192Ir and 12 lines for 60Co. The dose distributions were obtained in a cubic homogeneous phantom (30 × 30 × 30 cm^3) of water or gel, with the source positioned in the middle of the phantom. In all cases the number of simulated showers was kept constant at 10^9 particles. A specific material for the gel was constructed in PENELOPE using the weight fractions of the MAGIC-f components: wH = 0.1062, wC = 0.0751, wN = 0.0139, wO = 0.8021, wS = 2.58 × 10^-6 and wCu = 5.08 × 10^-6. The voxel size in the dose distributions was 0.6 mm. Dose distribution maps along the longitudinal and radial directions through the centre of the source were used to analyse the water-equivalence of MAGIC-f. Results: For the 60Co source, the maximum differences in relative dose between gel and water were 0.65% and 1.90% in the radial and longitudinal directions, respectively. For 192Ir, the maximum differences in relative dose were 0.30% and 1.05% in the radial and longitudinal directions, respectively. The equivalence of the materials can also be verified through the effective atomic number and density of each material: Zeff(MAGIC-f) = 7.07 and rho(MAGIC-f) = 1.060 g/cm^3, versus Zeff(water) = 7.22. Conclusion: The results showed that MAGIC-f is water equivalent and consequently suitable for simulating soft tissue at cobalt and iridium energies. Hence, the gel can be used as a dosimeter in clinical applications. Further investigation of its use in a clinical protocol is needed.

  13. pacce: Perl algorithm to compute continuum and equivalent widths

    NASA Astrophysics Data System (ADS)

    Riffel, Rogério; Borges Vale, Tibério

    2011-08-01

    We present the Perl Algorithm to Compute Continuum and Equivalent Widths (pacce). We describe the methods used in the computations and the requirements for its usage. We compare the measurements made with pacce and "manual" ones made using the IRAF splot task. These tests show that for synthetic simple stellar population (SSP) models the equivalent width strengths are very similar (differences ≲0.2 Å) for both measurements. In real stellar spectra, the correlation between both values is still very good, but with differences of up to 0.5 Å. pacce is also able to determine the mean continuum and the continuum at line center, which are helpful in stellar population studies. In addition, it is able to compute the uncertainties in the equivalent widths using photon statistics. The code is made available to the community through the web at http://www.if.ufrgs.br/~riffel/software.html.
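
    The quantity pacce measures can be made concrete with a synthetic line: the equivalent width is the integral of the fractional line depth, 1 - F/Fc, over the line window. The Gaussian profile below is an assumption chosen so that the exact answer is known analytically.

```python
import numpy as np

# Synthetic absorption line on a flat continuum; for a Gaussian of depth d
# and width sigma the analytic equivalent width is d * sigma * sqrt(2*pi).

wave = np.linspace(-20.0, 20.0, 4001)       # wavelength offsets in Angstrom
dw = wave[1] - wave[0]
continuum = np.ones_like(wave)              # normalized flat continuum
depth, sigma = 0.5, 1.5
flux = continuum - depth * np.exp(-0.5 * (wave / sigma) ** 2)

ew = np.sum(1.0 - flux / continuum) * dw    # numerical EW in Angstrom
print(ew)                                   # ~1.880 = 0.5 * 1.5 * sqrt(2*pi)
```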

  14. Mutagens from the cooking of food. III. Survey by Ames/Salmonella test of mutagen formation in secondary sources of cooked dietary protein.

    PubMed

    Bjeldanes, L F; Morris, M M; Felton, J S; Healy, S; Stuermer, D; Berry, P; Timourian, H; Hatch, F T

    1982-08-01

    A survey of mutagen formation during the cooking of a variety of protein-rich foods that are minor sources of protein intake in the American diet is reported (see Bjeldanes, Morris, Felton et al. (1982) for a survey of major protein foods). Milk, cheese, tofu and organ meats showed negligible mutagen formation except following high-temperature cooking for long periods of time. Even under the most extreme conditions, tofu, cheese and milk exhibited fewer than 500 Ames/Salmonella typhimurium revertants/100 g equivalents (wet weight of uncooked food), and organ meats only double that amount. Beans showed low mutagen formation after boiling and boiling followed by frying (with and without oil). Only boiling of beans followed by baking for 1 hr gave appreciable mutagenicity (3650 revertants/100 g equivalents). Seafood samples gave a variety of results: red snapper, salmon, trout, halibut and rock cod all gave more than 1000 revertants/100 g wet weight equivalents when pan-fried or griddle-fried for about 6 min/side. Baked or poached rock cod and deep-fried shrimp showed no significant mutagen formation. Broiled lamb chops showed mutagen formation similar to that in red meats tested in the preceding paper: 16,000 revertants/100 g equivalents. These findings show that, as measured by bioassay in S. typhimurium, most of the foods that are minor sources of protein in the American diet are also minor sources of cooking-induced mutagens.

  15. 40 CFR 70.2 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... program to control air pollution from outer continental shelf sources, under section 328 of the Act; (12... other functionally-equivalent opening. General permit means a part 70 permit that meets the requirements of § 70.6(d). Major source means any stationary source (or any group of stationary sources that are...

  16. 40 CFR 70.2 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... program to control air pollution from outer continental shelf sources, under section 328 of the Act; (12... other functionally-equivalent opening. General permit means a part 70 permit that meets the requirements of § 70.6(d). Major source means any stationary source (or any group of stationary sources that are...

  17. 40 CFR 70.2 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... program to control air pollution from outer continental shelf sources, under section 328 of the Act; (12... other functionally-equivalent opening. General permit means a part 70 permit that meets the requirements of § 70.6(d). Major source means any stationary source (or any group of stationary sources that are...

  18. Lu-Hf and Sm-Nd Evolution in Lunar Mare Basalts.

    USGS Publications Warehouse

    Unruh, D.M.; Stille, P.; Patchett, P.J.; Tatsumoto, M.

    1984-01-01

    Lu-Hf and Sm-Nd data for mare basalts, combined with Rb-Sr and total REE data taken from the literature, suggest that the mare basalts were derived by small (≲10%) degrees of partial melting of cumulate sources, but that the magma ocean from which these sources formed was light-REE- and Hf-enriched. Calculated source compositions range from lherzolite to olivine websterite. Nonmodal melting of small amounts of ilmenite (≲3%) in the sources seems to be required by the Lu/Hf data. A comparison of the Hf and Nd isotopic characteristics of the mare basalts and terrestrial oceanic basalts reveals that the εHf/εNd ratios in low-Ti mare basalts are much higher than in terrestrial ocean basalts.

  19. Theoretical comparison, equivalent transformation, and conjunction operations of electromagnetic induction generator and triboelectric nanogenerator for harvesting mechanical energy.

    PubMed

    Zhang, Chi; Tang, Wei; Han, Changbao; Fan, Fengru; Wang, Zhong Lin

    2014-06-11

    The triboelectric nanogenerator (TENG) is a newly invented technology that converts mechanical energy into electricity using conventional organic materials with functionalized surfaces, and is lightweight, cost-effective and easily scalable. Here, we present the first systematic analysis and comparison of the electromagnetic induction generator (EMIG) and the TENG, covering their working mechanisms, governing equations and output characteristics, with the aim of establishing complementary applications of the two technologies for harvesting various forms of mechanical energy. The equivalent transformation and conjunction operations of the two power sources for the external circuit are also explored, which provide evidence that the TENG can be considered a current source with a large internal resistance, while the EMIG is equivalent to a voltage source with a small internal resistance. The theoretical comparison and experimental validations presented in this paper establish the basis for using the TENG as a new energy technology that could become as important as the EMIG for general power applications at large scale. It opens the field of organic nanogenerators to chemists and materials scientists, who can for the first time use conventional organic materials to convert mechanical energy into electricity at high efficiency. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  20. Optical injection phase-lock loops

    NASA Astrophysics Data System (ADS)

    Bordonalli, Aldario Chrestani

    Locking techniques have been widely applied for frequency synchronisation of semiconductor lasers used in coherent communication and microwave signal generation systems. Two main locking techniques, the optical phase-lock loop (OPLL) and optical injection locking (OIL), are analysed in this thesis. The principal limitation on OPLL performance results from the loop propagation delay, which makes it difficult to implement high-gain, wide-bandwidth loops, leading to poor phase noise suppression and requiring the linewidths of the semiconductor laser sources to be less than a few megahertz for practical values of loop delay. The OIL phase noise suppression is controlled by the injected power. The principal limitations of the OIL implementation are the finite phase error under locked conditions and the narrow stable locking range the system provides at the injected power levels required to reduce the phase noise output of semiconductor lasers significantly. This thesis demonstrates theoretically and experimentally that it is possible to overcome the limitations of OPLL and OIL systems by combining them to form an optical injection phase-lock loop (OIPLL). The modelling of an OIPLL system is presented and compared with the equivalent OPLL and OIL results. The optical and electrical design of a homodyne OIPLL is detailed. Experimental results are given which verify the theoretical prediction that the OIPLL keeps the phase noise suppression as high as that of the OIL system over a much wider stable locking range, even with wide-linewidth lasers and long loop delays. The experimental results for lasers with a summed linewidth of 36 MHz and a loop delay of 15 ns showed measured phase error variances as low as 0.006 rad² (500 MHz bandwidth) for locking bandwidths greater than 26 GHz, compared with the equivalent OPLL phase error variance of around 1 rad² (500 MHz bandwidth) and the equivalent OIL locking bandwidth of less than 1.2 GHz.

  1. Estimating Equivalency of Explosives Through A Thermochemical Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maienschein, J L

    2002-07-08

    The Cheetah thermochemical computer code provides an accurate method for estimating the TNT equivalency of any explosive, evaluated either with respect to peak pressure or the quasi-static pressure at long time in a confined volume. Cheetah calculates the detonation energy and heat of combustion for virtually any explosive (pure or formulation). Comparing the detonation energy for an explosive with that of TNT allows estimation of the TNT equivalency with respect to peak pressure, while comparison of the heat of combustion allows estimation of TNT equivalency with respect to quasi-static pressure. We discuss the methodology, present results for many explosives, and show comparisons with equivalency data from other sources.
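The equivalency estimate described above reduces to a ratio of energies: detonation energy relative to TNT for the peak-pressure basis, and heat of combustion relative to TNT for the quasi-static basis. A minimal sketch of that arithmetic; the energy values below are rough, assumed literature-style numbers in kJ/g, not Cheetah output:

```python
# Illustrative TNT-equivalency arithmetic. The energies below are rough,
# assumed values (kJ/g) for demonstration only; they are NOT Cheetah output.
DETONATION_ENERGY = {"TNT": 4.3, "RDX": 5.6, "PETN": 5.8}
HEAT_OF_COMBUSTION = {"TNT": 15.0, "RDX": 9.5, "PETN": 8.2}

def tnt_equivalency(name, energy_table):
    """Mass-based TNT equivalency: the energy of `name` relative to TNT."""
    return energy_table[name] / energy_table["TNT"]

# Peak-pressure equivalency uses detonation energy; quasi-static equivalency
# uses heat of combustion, mirroring the two comparisons described above.
peak_eq = tnt_equivalency("RDX", DETONATION_ENERGY)
quasi_static_eq = tnt_equivalency("RDX", HEAT_OF_COMBUSTION)
```

Note that the two bases can disagree in direction: an explosive can exceed TNT in detonation energy yet fall below it in heat of combustion, which is exactly why the abstract distinguishes the two pressure regimes.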

  2. Aspiring to Spectral Ignorance in Earth Observation

    NASA Astrophysics Data System (ADS)

    Oliver, S. A.

    2016-12-01

    Enabling robust, defensible and integrated decision making in the era of Big Earth Data requires the fusion of data from multiple and diverse sensor platforms and networks. While the application of standardised global grid systems provides a common spatial analytics framework that facilitates the computationally efficient and statistically valid integration and analysis of these various data sources across multiple scales, there remains the challenge of sensor equivalency, particularly when combining data from different earth observation satellite sensors (e.g. combining Landsat and Sentinel-2 observations). To realise the vision of a sensor-ignorant analytics platform for earth observation, we require automation of spectral matching across the available sensors. Ultimately, the aim is to remove the requirement for the user to possess any sensor knowledge in order to undertake analysis. This paper introduces the concept of spectral equivalence and proposes a methodology through which equivalent bands may be sourced from a set of potential target sensors through the application of equivalence metrics and thresholds. A number of parameters can be used to determine whether a pair of spectra are equivalent for the purposes of analysis. A baseline set of thresholds for these parameters, and a way of applying them systematically to relate spectral bands among numerous different sensors, is proposed. The base unit for comparison in this work is the relative spectral response. From this input, users can determine what constitutes equivalence based on their own conceptualisation of it.
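One way to operationalise such an equivalence test is to compare relative spectral responses directly, thresholding a band-centroid shift and an overlap metric. The metrics and threshold values in this sketch are assumptions for illustration, not the paper's actual parameter set:

```python
import numpy as np

def band_centroid(wl, rsr):
    """Response-weighted mean wavelength of a band."""
    return np.sum(wl * rsr) / np.sum(rsr)

def rsr_overlap(wl, rsr_a, rsr_b):
    """Overlap of two peak-normalised relative spectral responses:
    identical bands score 1, disjoint bands score 0."""
    a, b = rsr_a / np.max(rsr_a), rsr_b / np.max(rsr_b)
    return np.sum(np.minimum(a, b)) / np.sum(np.maximum(a, b))

def equivalent_bands(wl, rsr_a, rsr_b, max_shift_nm=10.0, min_overlap=0.75):
    """Declare two bands equivalent if their centroids agree to within
    max_shift_nm AND their responses overlap sufficiently (both thresholds
    are illustrative assumptions)."""
    shift = abs(band_centroid(wl, rsr_a) - band_centroid(wl, rsr_b))
    return bool(shift <= max_shift_nm and
                rsr_overlap(wl, rsr_a, rsr_b) >= min_overlap)
```

Under these example thresholds, two slightly offset green bands (as with comparable Landsat and Sentinel-2 bands) would pass, while a green band paired with a red band would fail on the centroid test alone.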

  3. Polymer gel water equivalence and relative energy response with emphasis on low photon energy dosimetry in brachytherapy

    NASA Astrophysics Data System (ADS)

    Pantelis, E.; Karlis, A. K.; Kozicki, M.; Papagiannis, P.; Sakelliou, L.; Rosiak, J. M.

    2004-08-01

    The water equivalence and stable relative energy response of polymer gel dosimeters are usually taken for granted in the relatively high x-ray energy range of external beam radiotherapy based on qualitative indices such as mass and electron density and effective atomic number. However, these favourable dosimetric characteristics are questionable in the energy range of interest to brachytherapy especially in the case of lower energy photon sources such as 103Pd and 125I that are currently utilized. In this work, six representative polymer gel formulations as well as the most commonly used experimental set-up of a LiF TLD detector-solid water phantom are discussed on the basis of mass attenuation and energy absorption coefficients calculated in the energy range of 10 keV-10 MeV with regard to their water equivalence as a phantom and detector material. The discussion is also supported by Monte Carlo simulation results. It is found that water equivalence of polymer gel dosimeters is sustained for photon energies down to about 60 keV and no corrections are needed for polymer gel dosimetry of 169Yb or 192Ir sources. For 125I and 103Pd sources, however, a correction that is source-distance dependent is required. Appropriate Monte Carlo results show that at the dosimetric reference distance of 1 cm from a source, these corrections are of the order of 3% for 125I and 2% for 103Pd. These have to be compared with corresponding corrections of up to 35% for 125I and 103Pd and up to 15% even for the 169Yb energies for the experimental set-up of the LiF TLD detector-solid water phantom.

  4. Polymer gel water equivalence and relative energy response with emphasis on low photon energy dosimetry in brachytherapy.

    PubMed

    Pantelis, E; Karlis, A K; Kozicki, M; Papagiannis, P; Sakelliou, L; Rosiak, J M

    2004-08-07

    The water equivalence and stable relative energy response of polymer gel dosimeters are usually taken for granted in the relatively high x-ray energy range of external beam radiotherapy based on qualitative indices such as mass and electron density and effective atomic number. However, these favourable dosimetric characteristics are questionable in the energy range of interest to brachytherapy especially in the case of lower energy photon sources such as 103Pd and 125I that are currently utilized. In this work, six representative polymer gel formulations as well as the most commonly used experimental set-up of a LiF TLD detector-solid water phantom are discussed on the basis of mass attenuation and energy absorption coefficients calculated in the energy range of 10 keV-10 MeV with regard to their water equivalence as a phantom and detector material. The discussion is also supported by Monte Carlo simulation results. It is found that water equivalence of polymer gel dosimeters is sustained for photon energies down to about 60 keV and no corrections are needed for polymer gel dosimetry of 169Yb or 192Ir sources. For 125I and 103Pd sources, however, a correction that is source-distance dependent is required. Appropriate Monte Carlo results show that at the dosimetric reference distance of 1 cm from a source, these corrections are of the order of 3% for 125I and 2% for 103Pd. These have to be compared with corresponding corrections of up to 35% for 125I and 103Pd and up to 15% even for the 169Yb energies for the experimental set-up of the LiF TLD detector-solid water phantom.

  5. Linguistic Adaptation of the Clinical Dementia Rating Scale for a Spanish-Speaking Population

    PubMed Central

    Oquendo-Jiménez, Ilia; Mena, Rafaela; Antoun, Mikhail D.; Wojna, Valerie

    2012-01-01

    Background Alzheimer's disease (AD) is the most common form of dementia worldwide. In Hispanic populations there are few validated tests for the accurate identification and diagnosis of AD. The Clinical Dementia Rating (CDR) scale is an internationally recognized questionnaire used to stage dementia. This study's objective was to develop a linguistic adaptation of the CDR for the Puerto Rican population. Methods The linguistic adaptation consisted of the evaluation of each CDR question (item) and the questionnaire's instructions, for similarities in meaning (semantic equivalence), relevance of content (content equivalence), and appropriateness of the questionnaire's format and measuring technique (technical equivalence). A focus group methodology was used to assess cultural relevance, clarity, and suitability of the measuring technique in the Argentinean version of the CDR for use in a Puerto Rican population. Results A total of 27 semantic equivalence changes were recommended in four categories: higher than 6th grade level of reading, meaning, common use, and word preference. Four content equivalence changes were identified, all focused on improving the applicability of the test questions to the general population's concept of street addresses and common dietary choices. There were no recommendations for changes in the assessment of technical equivalence. Conclusions We developed a linguistically adapted CDR instrument for the Puerto Rican population, preserving the semantic, content, and technical equivalences of the original version. Further studies are needed to validate the CDR instrument with the staging of Alzheimer's disease in the Puerto Rican population. PMID:20496524

  6. Pediatric patient and staff dose measurements in barium meal fluoroscopic procedures

    NASA Astrophysics Data System (ADS)

    Filipov, D.; Schelin, H. R.; Denyak, V.; Paschuk, S. A.; Porto, L. E.; Ledesma, J. A.; Nascimento, E. X.; Legnani, A.; Andrade, M. E. A.; Khoury, H. J.

    2015-11-01

    This study investigates patient and staff dose measurements in pediatric barium meal series fluoroscopic procedures. It aims to analyze radiographic techniques, measure the air kerma-area product (PKA), and estimate the staff's eye lens, thyroid and hand equivalent doses. The procedures of 41 patients were studied, and PKA values were calculated using LiF:Mg,Ti thermoluminescent dosimeters (TLDs) positioned at the center of the patient's upper chest. Furthermore, LiF:Mg,Cu,P TLDs were used to estimate the equivalent doses. The results showed a discrepancy between the radiographic techniques used and the European Commission recommendations. Half of the results in the analyzed literature presented lower PKA and dose reference level values than the present study. The staff's equivalent doses strongly depend on the distance from the beam. A 55-cm distance can be considered satisfactory; however, a distance decrease of ~20% leads to at least two times higher equivalent doses. For the eye lenses this dose is significantly greater than the annual limit set by the International Commission on Radiological Protection. In addition, the occupational doses were found to be much higher than in the literature. By changing the radiographic techniques to those recommended by the European Commission, lower PKA values and occupational doses are expected.

  7. Radio-frequency low-coherence interferometry.

    PubMed

    Fernández-Pousa, Carlos R; Mora, José; Maestre, Haroldo; Corral, Pablo

    2014-06-15

    A method for retrieving low-coherence interferograms, based on the use of a microwave photonics filter, is proposed and demonstrated. The method is equivalent to the double-interferometer technique, with the scanning interferometer replaced by an analog fiber-optics link and the visibility recorded as the amplitude of its radio-frequency (RF) response. As a low-coherence interferometry system, it shows a decrease of resolution induced by the fiber's third-order dispersion (β3). As a displacement sensor, it provides highly linear and slope-scalable readouts of the interferometer's optical path difference in terms of RF, even in the presence of third-order dispersion. In a proof-of-concept experiment, we demonstrate 20-μm displacement readouts using C-band EDFA sources and standard single-mode fiber.

  8. The application of Green's theorem to the solution of boundary-value problems in linearized supersonic wing theory

    NASA Technical Reports Server (NTRS)

    Heaslet, Max A; Lomax, Harvard

    1950-01-01

    Following the introduction of the linearized partial differential equation for nonsteady three-dimensional compressible flow, general methods of solution are given for the two and three-dimensional steady-state and two-dimensional unsteady-state equations. It is also pointed out that, in the absence of thickness effects, linear theory yields solutions consistent with the assumptions made when applied to lifting-surface problems for swept-back plan forms at sonic speeds. The solutions of the particular equations are determined in all cases by means of Green's theorem, and thus depend on the use of Green's equivalent layer of sources, sinks, and doublets. Improper integrals in the supersonic theory are treated by means of Hadamard's "finite part" technique.

  9. Multi-MHz laser-scanning single-cell fluorescence microscopy by spatiotemporally encoded virtual source array

    PubMed Central

    Wu, Jianglai; Tang, Anson H. L.; Mok, Aaron T. Y.; Yan, Wenwei; Chan, Godfrey C. F.; Wong, Kenneth K. Y.; Tsia, Kevin K.

    2017-01-01

    Apart from spatial resolution enhancement, scaling the temporal resolution, equivalently the imaging throughput, of fluorescence microscopy is of equal importance in advancing cell biology and clinical diagnostics. Yet this attribute has mostly been overlooked because of the inherent speed limitation of existing imaging strategies. To address the challenge, we employ an all-optical laser-scanning mechanism, enabled by an array of reconfigurable spatiotemporally encoded virtual sources, to demonstrate ultrafast fluorescence microscopy at a line-scan rate as high as 8 MHz. We show that this technique enables high-throughput single-cell microfluidic fluorescence imaging at 75,000 cells/second and high-speed cellular 2D dynamical imaging at 3,000 frames per second, outperforming state-of-the-art high-speed cameras and the gold-standard laser scanning strategies. Together with its wide compatibility with existing imaging modalities, this technology could empower forms of high-throughput and high-speed biological fluorescence microscopy that were once out of reach. PMID:28966855

  10. PSPICE controlled-source models of analogous circuit for Langevin type piezoelectric transducer

    NASA Astrophysics Data System (ADS)

    Chen, Yeongchin; Wu, Menqjiun; Liu, Weikuo

    2007-02-01

    The design and construction of wide-band, high-efficiency acoustical projectors has long been considered an art beyond the capabilities of many smaller groups. Langevin type piezoelectric transducers have been the leading candidates for sonar array systems applied in underwater communication. The transducers are fabricated by bolting a head mass and a tail mass onto both ends of a stack of piezoelectric ceramic, to satisfy multiple, conflicting design requirements for high-power transmitting capability. The aim of this research is to study the characteristics of Langevin type piezoelectric transducers with different metal loadings. First, the Mason equivalent circuit is used to model the segmented piezoelectric ceramic; then, the impedance network of the tail and head masses is deduced from Newton's laws. To obtain the optimal solution to a specific design formulation, PSPICE controlled-source programming techniques can be applied. A worked example of the application of PSPICE models to Langevin type transducer analysis is presented, and the simulation results are in good agreement with the experimental measurements.

  11. Development of a Geant4 application to characterise a prototype neutron detector based on three orthogonal 3He tubes inside an HDPE sphere.

    PubMed

    Gracanin, V; Guatelli, S; Prokopovich, D; Rosenfeld, A B; Berry, A

    2017-01-01

    The Bonner Sphere Spectrometer (BSS) system is a well-established technique for neutron dosimetry that involves detection of thermal neutrons within a range of hydrogenous moderators. BSS detectors are often used to perform neutron field surveys in order to determine the ambient dose equivalent H*(10) and estimate health risk to personnel. There is a potential limitation of existing neutron survey techniques, since some detectors do not consider the direction of the neutron field, which can result in overly conservative estimates of dose in neutron fields. This paper shows the development of a Geant4 simulation application to characterise a prototype neutron detector based on three orthogonal 3He tubes inside a single HDPE sphere built at the Australian Nuclear Science and Technology Organisation (ANSTO). The Geant4 simulation has been validated with respect to experimental measurements performed with an Am-Be source. Crown Copyright © 2016. Published by Elsevier Ltd. All rights reserved.

  12. Experimental demonstration of deep frequency modulation interferometry.

    PubMed

    Isleif, Katharina-Sophie; Gerberding, Oliver; Schwarze, Thomas S; Mehmet, Moritz; Heinzel, Gerhard; Cervantes, Felipe Guzmán

    2016-01-25

    Experiments for space and ground-based gravitational wave detectors often require a large dynamic range interferometric position readout of test masses with 1 pm/√Hz precision over long time scales. Heterodyne interferometer schemes that achieve such precisions are available, but they require complex optical set-ups, limiting their scalability for multiple channels. This article presents the first experimental results on deep frequency modulation interferometry, a new technique that combines sinusoidal laser frequency modulation in unequal arm length interferometers with a non-linear fit algorithm. We have tested the technique in a Michelson and a Mach-Zehnder interferometer topology, respectively, demonstrated continuous phase tracking of a moving mirror and achieved a performance equivalent to a displacement sensitivity of 250 pm/√Hz at 1 mHz between the phase measurements of two photodetectors monitoring the same optical signal. By performing time series fitting of the extracted interference signals, we measured that the linearity of the laser frequency modulation is on the order of 2% for the laser source used.
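The core of this readout is extracting the interferometric phase from a signal of the form v(t) = A·cos(m·cos(ω_m·t) + φ), where m is the (deep) modulation depth. A simplified sketch: if m and the modulation timing are taken as known (the actual technique fits these parameters as well), the model is linear in (A·cos φ, A·sin φ), so an ordinary least-squares fit recovers φ without any non-linear iteration:

```python
import numpy as np

# Assumed signal model for this sketch: v(t) = A*cos(m*cos(w_m*t) + phi),
# with modulation depth m and modulation frequency w_m treated as known.

def fit_phase(t, v, m, w_m):
    """Recover the interferometric phase phi by linear least squares:
    v = (A cos phi)*cos(theta) - (A sin phi)*sin(theta), theta = m*cos(w_m*t)."""
    theta = m * np.cos(w_m * t)
    basis = np.column_stack([np.cos(theta), -np.sin(theta)])
    (c, s), *_ = np.linalg.lstsq(basis, v, rcond=None)
    return np.arctan2(s, c)

# Synthetic, noise-free interferogram with assumed example parameters.
t = np.linspace(0.0, 1.0, 5000)
m, w_m, phi_true = 9.0, 2 * np.pi * 7.0, 1.234
v = 0.8 * np.cos(m * np.cos(w_m * t) + phi_true)
phi_est = fit_phase(t, v, m, w_m)
```

The published technique additionally fits amplitude, modulation depth and timing from the interferogram's harmonic content; this sketch isolates only the phase-extraction step.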

  13. Near Identifiability of Dynamical Systems

    NASA Technical Reports Server (NTRS)

    Hadaegh, F. Y.; Bekey, G. A.

    1987-01-01

    Concepts regarding approximate mathematical models treated rigorously. Paper presents new results in analysis of structural identifiability, equivalence, and near equivalence between mathematical models and physical processes they represent. Helps establish rigorous mathematical basis for concepts related to structural identifiability and equivalence revealing fundamental requirements, tacit assumptions, and sources of error. "Structural identifiability," as used by workers in this field, loosely translates as meaning ability to specify unique mathematical model and set of model parameters that accurately predict behavior of corresponding physical system.

  14. Consistent Principal Component Modes from Molecular Dynamics Simulations of Proteins.

    PubMed

    Cossio-Pérez, Rodrigo; Palma, Juliana; Pierdominici-Sottile, Gustavo

    2017-04-24

    Principal component analysis is a technique widely used for studying the movements of proteins using data collected from molecular dynamics simulations. In spite of its extensive use, the technique has a serious drawback: equivalent simulations do not afford the same PC-modes. In this article, we show that concatenating equivalent trajectories and calculating the PC-modes from the concatenated one significantly enhances the reproducibility of the results. Moreover, the consistency of the modes can be systematically improved by adding more individual trajectories to the concatenated one.
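The remedy described above can be sketched in a few lines: stack the equivalent trajectories into one data matrix and diagonalise the coordinate covariance of the concatenated data. A minimal illustration, assuming pre-aligned Cartesian coordinate arrays (the paper's full pipeline also handles trajectory alignment and mass weighting):

```python
import numpy as np

def pc_modes(trajectories):
    """PC modes from one or more MD trajectories.

    trajectories: list of (n_frames, n_coords) arrays of aligned Cartesian
    coordinates. Concatenating equivalent trajectories before diagonalising
    the covariance is the reproducibility remedy described above.
    """
    data = np.vstack(trajectories)          # concatenate along the frame axis
    data = data - data.mean(axis=0)         # remove the common average structure
    cov = data.T @ data / (len(data) - 1)   # coordinate covariance matrix
    evals, evecs = np.linalg.eigh(cov)
    order = np.argsort(evals)[::-1]         # largest-variance mode first
    return evals[order], evecs[:, order]    # modes are the columns of evecs
```

Because all trajectories are centred on a single common mean, the leading modes reflect the pooled fluctuations rather than whichever basin one individual run happened to sample, which is why the concatenated modes are more reproducible.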

  15. Are visual cue masking and removal techniques equivalent for studying perceptual skills in sport?

    PubMed

    Mecheri, Sami; Gillet, Eric; Thouvarecq, Regis; Leroy, David

    2011-01-01

    The spatial-occlusion paradigm makes use of two techniques (masking and removing visual cues) to provide information about the anticipatory cues used by viewers. The visual scene resulting from the removal technique appears to be incongruous, yet the assumption that the two techniques are equivalent has become widespread. The present study was designed to address this issue by combining eye-movement recording with the two types of occlusion (removal versus masking) in a tennis serve-return task. Response accuracy and decision onsets were analysed. The results indicated that subjects had longer reaction times under the removal condition, with an identical proportion of correct responses. The removal technique also caused the subjects to rely on atypical search patterns. Our findings suggest that, when the removal technique was used, viewers were unable to systematically draw on stored memories to help them accomplish the interception task. The persistent failure to question some of the assumptions behind the removal technique in applied visual research is highlighted, and suggestions for continued use of the masking technique are advanced.

  16. Computational technique for stepwise quantitative assessment of equation correctness

    NASA Astrophysics Data System (ADS)

    Othman, Nuru'l Izzah; Bakar, Zainab Abu

    2017-04-01

    Many of the computer-aided mathematics assessment systems available today can implement stepwise correctness checking of a working scheme for solving equations. The computational technique for assessing the correctness of each response in the scheme mainly involves checking mathematical equivalence and providing qualitative feedback. This paper presents a technique, known as the Stepwise Correctness Checking and Scoring (SCCS) technique, that checks the correctness of each equation in terms of structural equivalence and provides quantitative feedback. The technique, which is based on the Multiset framework, adapts techniques from textual information retrieval involving tokenization, document modelling and similarity evaluation. The performance of the SCCS technique was tested using worked solutions to linear algebraic equations in one variable. 350 working schemes comprising 1385 responses were collected using a marking engine prototype developed from the technique. The results show that both the automated analytical scores and the automated overall scores generated by the marking engine exhibit high percent agreement, high correlation and a high degree of agreement with manual scores, with small average absolute and mixed errors.
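The tokenization and multiset-similarity steps can be illustrated with a toy scorer: each equation is reduced to a bag of tokens, and two equations are compared by shared versus total token counts. This is an illustrative stand-in for the multiset comparison the SCCS technique performs, not the paper's exact formulas:

```python
import re
from collections import Counter

def tokenize(equation):
    """Coarse tokenizer: numbers, symbol names, operators and '='.
    A simplification of the tokenization step described above."""
    return Counter(re.findall(r"\d+|[a-zA-Z]+|[=+\-*/()]", equation))

def multiset_similarity(eq_a, eq_b):
    """Bag-of-tokens similarity in [0, 1]: shared token counts divided by
    total token counts (multiset intersection over multiset union)."""
    a, b = tokenize(eq_a), tokenize(eq_b)
    total = sum((a | b).values())
    return sum((a & b).values()) / total if total else 1.0
```

Under this measure, a reordered but structurally identical response such as "3 + 2*x = 7" versus "2*x + 3 = 7" scores 1.0, while a structurally different step scores lower, giving the quantitative per-step feedback the paper aims for.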

  17. Solving transient acoustic boundary value problems with equivalent sources using a lumped parameter approach.

    PubMed

    Fahnline, John B

    2016-12-01

    An equivalent source method is developed for solving transient acoustic boundary value problems. The method assumes the boundary surface is discretized in terms of triangular or quadrilateral elements and that the solution is represented using the acoustic fields of discrete sources placed at the element centers. Also, the boundary condition is assumed to be specified for the normal component of the surface velocity as a function of time, and the source amplitudes are determined to match the known elemental volume velocity vector at a series of discrete time steps. Equations are given for marching-on-in-time schemes to solve for the source amplitudes at each time step for simple, dipole, and tripole source formulations. Several example problems are solved to illustrate the results and to validate the formulations, including problems with closed boundary surfaces where long-time numerical instabilities typically occur. A simple relationship between the simple and dipole source amplitudes in the tripole source formulation is derived so that the source radiates primarily in the direction of the outward surface normal. The tripole source formulation is shown to eliminate interior acoustic resonances and long-time numerical instabilities.
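The amplitude-matching step at the heart of the method can be sketched as a linear solve: an influence matrix relates the discrete source amplitudes to the velocities they induce at the element centers, and the amplitudes are chosen by least squares to match the prescribed boundary data. The toy below uses a single frequency-domain snapshot with a synthetic geometry in place of the paper's marching-on-in-time scheme with retarded times; the geometry, wavenumber, and monopole-only sources are assumptions of the sketch:

```python
import numpy as np

# Toy amplitude-matching step for an equivalent source method: monopole
# sources retracted slightly inside a set of boundary element centers, with
# amplitudes chosen so the induced radial velocities match prescribed data.

rng = np.random.default_rng(0)
k = 2.0                                         # acoustic wavenumber (assumed)
centers = rng.uniform(-1.0, 1.0, size=(20, 3))  # element centers on the boundary
sources = 0.9 * centers                         # equivalent sources just inside

# Radial-velocity influence of source j at element i, from the monopole
# field exp(ikr)/(4*pi*r) differentiated in r (constant factors dropped).
r = np.linalg.norm(centers[:, None, :] - sources[None, :, :], axis=-1)
M = (1j * k * r - 1.0) * np.exp(1j * k * r) / (4.0 * np.pi * r**2)

u = rng.standard_normal(20)                     # prescribed normal velocities
q, *_ = np.linalg.lstsq(M, u.astype(complex), rcond=None)  # source amplitudes
residual = np.linalg.norm(M @ q - u)            # boundary-condition mismatch
```

In the transient formulation, this solve is repeated at every time step with the already-computed history of source amplitudes contributing through retarded times; the dipole and tripole variants change only the entries of the influence matrix.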

  18. Dynamic stability analysis for capillary channel flow: One-dimensional and three-dimensional computations and the equivalent steady state technique

    NASA Astrophysics Data System (ADS)

    Grah, Aleksander; Dreyer, Michael E.

    2010-01-01

    Spacecraft technology provides a series of applications for capillary channel flow. It can serve as a reliable means of positioning and transporting liquids under low-gravity conditions. Basically, capillary channels provide liquid paths with one or more free surfaces. A problem may be flow instabilities leading to a collapse of the liquid surfaces, resulting in undesired gas ingestion and a two-phase flow that can consequently cause several technical problems. The presented capillary channel consists of parallel plates with two free liquid surfaces. The flow rate is established by a pump at the channel outlet, creating a lower pressure within the channel. Owing to the pressure difference between the liquid phase and the ambient gas phase, the free surfaces bend inwards and remain stable as long as they are able to resist the steady and unsteady pressure effects. For the numerical prediction of the flow stability two very different models are used. The one-dimensional unsteady model is mainly based on the Bernoulli equation, the continuity equation, and the Gauss-Laplace equation. For three-dimensional evaluations an open-source computational fluid dynamics (CFD) tool is applied. For verification, the numerical results are compared with quasisteady and unsteady data from a sounding rocket experiment. In contrast to previous experiments, this one provides a significantly longer observation sequence. Furthermore, the critical point of the steady flow instability could be approached by a quasisteady technique. As in previous experiments, the comparison with the numerical model evaluation shows very good agreement for the movement of the liquid surfaces and for the predicted flow instability. The theoretical prediction of the flow instability is related to the speed index, based on characteristic velocities of the capillary channel flow. Stable flow regimes are defined by stability criteria for steady and unsteady flow. The one-dimensional computation of the speed index is based on the technique of the equivalent steady system, which is published for the first time in the present paper. This approach assumes that for every unsteady state an equivalent steady state with a special boundary condition can be formulated. The equivalent steady state technique enables a reformulation of the equation system and an efficient and reliable computation of the speed index. Furthermore, the existence of the numerical singularity at the critical point of the steady flow instability, postulated in a previous publication, is demonstrated in detail. The numerical singularity is related to the stability criterion for steady flow and represents the numerical consequence of the liquid surface collapse. The evaluation and generation of the pressure diagram is demonstrated in detail with a series of numerical dynamic flow studies. The stability diagram, based on one-dimensional computation, gives a detailed overview of the stable and unstable flow regimes. This prediction is in good agreement with the experimentally observed critical flow conditions and with results of three-dimensional CFD computations.

  19. Ontology Alignment Repair through Modularization and Confidence-Based Heuristics

    PubMed Central

    Santos, Emanuel; Faria, Daniel; Pesquita, Catia; Couto, Francisco M.

    2015-01-01

    Ontology Matching aims at identifying a set of semantic correspondences, called an alignment, between related ontologies. In recent years, there has been a growing interest in efficient and effective matching methods for large ontologies. However, alignments produced for large ontologies are often logically incoherent. It was only recently that the use of repair techniques to improve the coherence of ontology alignments began to be explored. This paper presents a novel modularization technique for ontology alignment repair which extracts fragments of the input ontologies that contain only the classes and relations necessary to resolve all detectable incoherences. The paper also presents an alignment repair algorithm that uses a global repair strategy to minimize both the degree of incoherence and the number of mappings removed from the alignment, while overcoming the scalability problem by employing the proposed modularization technique. Our evaluation shows that our modularization technique produces significantly smaller fragments of the ontologies and that our repair algorithm produces more complete alignments than other current alignment repair systems, while obtaining an equivalent degree of incoherence. Additionally, we also present a variant of our repair algorithm that makes use of the confidence values of the mappings to improve alignment repair. Our repair algorithm was implemented as part of AgreementMakerLight, a free and open-source ontology matching system. PMID:26710335
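The confidence-based repair idea lends itself to a compact illustration. The sketch below is a deliberately simplified greedy stand-in with hypothetical mapping ids, confidence values, and conflict sets; it is not the AgreementMakerLight algorithm, only the basic heuristic of dropping the least-confident mapping from each detected conflict:

```python
def repair(mappings, conflicts):
    """Greedy confidence-based repair (simplified sketch, not the AML algorithm).

    mappings: dict of mapping id -> confidence in [0, 1]
    conflicts: list of sets of mapping ids that cannot all be kept together
    Returns the set of ids to remove so that no conflict remains fully intact.
    """
    removed = set()
    for conflict in sorted(conflicts, key=len):
        if conflict & removed:
            continue  # already resolved by an earlier removal
        # drop the least-confident mapping participating in this conflict
        removed.add(min(conflict, key=lambda i: mappings[i]))
    return removed

maps = {"a": 0.9, "b": 0.4, "c": 0.7, "d": 0.6}
removed = repair(maps, [{"a", "b"}, {"c", "d"}, {"a", "c"}])
print(sorted(removed))  # ['b', 'c', 'd']
```

A global strategy, as in the paper, would instead search for a removal set that is jointly minimal across all conflicts rather than resolving them one at a time.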

  1. The oxidative stability of omega-3 oil-in-water nanoemulsion systems suitable for functional food enrichment: A systematic review of the literature.

    PubMed

    Bush, Linda; Stevenson, Leo; Lane, Katie E

    2017-10-23

    There is growing demand for functional food products enriched with long chain omega-3 polyunsaturated fatty acids (LCω3PUFA). Nanoemulsions, systems with extremely small droplet sizes, have been shown to increase LCω3PUFA bioavailability. However, nanoemulsion creation and processing methods may impact the oxidative stability of these systems. The present systematic review collates information from studies that evaluated the oxidative stability of LCω3PUFA nanoemulsions suitable for use in functional foods. The systematic search identified seventeen articles published during the last 10 years. Researchers used a range of surfactants and antioxidants to create systems which were evaluated from 7 to 100 days of storage. Nanoemulsions were created using synthetic and natural emulsifiers, with natural sources offering equivalent or increased oxidative stability compared to synthetic sources, which is useful as consumers are demanding natural, cleaner-label food products. Vegetarian sources of LCω3PUFA equivalent to those found in fish oils, such as algal oils, are promising as they provide direct sources without the need for conversion in the human metabolic pathway. Quillaja saponin is a promising natural emulsifier that can produce nanoemulsion systems with equivalent or increased oxidative stability in comparison to other emulsifiers. Further studies to evaluate the oxidative stability of quillaja saponin nanoemulsions combined with algal sources of LCω3PUFA are warranted.

  2. 40 CFR 430.57 - Pretreatment standards for new sources (PSNS).

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... when POTWs find it necessary to impose mass effluent standards, equivalent mass standards are provided... 40 Protection of Environment 31 2012-07-01 2012-07-01 false Pretreatment standards for new sources...) EFFLUENT GUIDELINES AND STANDARDS (CONTINUED) THE PULP, PAPER, AND PAPERBOARD POINT SOURCE CATEGORY...

  3. 40 CFR 430.57 - Pretreatment standards for new sources (PSNS).

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... when POTWs find it necessary to impose mass effluent standards, equivalent mass standards are provided... 40 Protection of Environment 30 2014-07-01 2014-07-01 false Pretreatment standards for new sources...) EFFLUENT GUIDELINES AND STANDARDS (CONTINUED) THE PULP, PAPER, AND PAPERBOARD POINT SOURCE CATEGORY...

  4. 40 CFR 430.57 - Pretreatment standards for new sources (PSNS).

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... when POTWs find it necessary to impose mass effluent standards, equivalent mass standards are provided... 40 Protection of Environment 31 2013-07-01 2013-07-01 false Pretreatment standards for new sources...) EFFLUENT GUIDELINES AND STANDARDS (CONTINUED) THE PULP, PAPER, AND PAPERBOARD POINT SOURCE CATEGORY...

  5. Quadratic Optimization in the Problems of Active Control of Sound

    NASA Technical Reports Server (NTRS)

    Loncaric, J.; Tsynkov, S. V.; Bushnell, Dennis M. (Technical Monitor)

    2002-01-01

    We analyze the problem of suppressing the unwanted component of a time-harmonic acoustic field (noise) on a predetermined region of interest. The suppression is rendered by active means, i.e., by introducing additional acoustic sources called controls that generate the appropriate anti-sound. Previously, we have obtained general solutions for active controls in both continuous and discrete formulations of the problem. We have also obtained optimal solutions that minimize the overall absolute acoustic source strength of the active control sources. These optimal solutions happen to be particular layers of monopoles on the perimeter of the protected region. Mathematically, minimization of acoustic source strength is equivalent to minimization in the sense of L(sub 1). By contrast, in the current paper we formulate and study optimization problems that involve quadratic functions of merit. Specifically, we minimize the L(sub 2) norm of the control sources, and we consider both unconstrained and constrained minimization. The unconstrained L(sub 2) minimization is certainly the easiest problem to address numerically. On the other hand, the constrained approach allows one to analyze sophisticated geometries. In a special case, we compare our finite-difference optimal solutions to the continuous optimal solutions obtained previously using a semi-analytic technique. We also show that the optima obtained in the sense of L(sub 2) differ drastically from those obtained in the sense of L(sub 1).
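For the unconstrained quadratic case, the minimum-L(sub 2)-norm control follows directly from the pseudoinverse of the transfer matrix. A minimal numerical sketch; the transfer matrix and noise field below are random placeholders, not an acoustic model:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 6, 12   # m observation points in the protected region, n candidate control sources
# Placeholder complex transfer matrix: A[i, j] = field at point i per unit strength of control j
A = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
b = rng.standard_normal(m) + 1j * rng.standard_normal(m)   # unwanted noise field at the points

# Of all control vectors x satisfying A x = -b (exact anti-sound at the observation
# points), the pseudoinverse picks the one with minimal L2 norm: the unconstrained
# quadratic optimum.
x = np.linalg.pinv(A) @ (-b)
```

Because the system is underdetermined (more controls than observation points), infinitely many control settings cancel the noise exactly; the pseudoinverse selects the least-energy one.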

  6. The numerical simulation of heat transfer during a hybrid laser-MIG welding using equivalent heat source approach

    NASA Astrophysics Data System (ADS)

    Bendaoud, Issam; Matteï, Simone; Cicala, Eugen; Tomashchuk, Iryna; Andrzejewski, Henri; Sallamand, Pierre; Mathieu, Alexandre; Bouchaud, Fréderic

    2014-03-01

    The present study is dedicated to the numerical simulation of an industrial case of hybrid laser-MIG welding of high-thickness duplex steel UR2507Cu with Y-shaped chamfer geometry. It consists in the simulation of heat transfer phenomena using the equivalent heat source approach, implemented in the finite element software COMSOL Multiphysics. A numerical exploratory design method is used to identify the heat source parameters that minimize the difference between the numerical results and the experiment, namely the shape of the welded zone and the temperature evolution at different locations. The obtained results were found to be in good correspondence with experiment, both for the melted zone shape and for the thermal history.
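A widely used equivalent heat source of this kind is the Goldak double-ellipsoid model; the sketch below evaluates one ellipsoid of it with assumed round-number parameters, not the paper's calibrated laser/MIG values:

```python
import numpy as np

# Goldak-style double-ellipsoid equivalent heat source, a common choice for weld
# simulation. Q is the absorbed power, f the fraction deposited in this ellipsoid,
# and a, b, c the ellipsoid semi-axes; all values here are assumed.
def goldak_front(x, y, z, Q=4000.0, f=0.6, a=2e-3, b=2e-3, c=3e-3):
    """Volumetric heat input (W/m^3) of the front ellipsoid at point (x, y, z)."""
    coeff = 6.0 * np.sqrt(3.0) * f * Q / (a * b * c * np.pi * np.sqrt(np.pi))
    return coeff * np.exp(-3 * x**2 / a**2 - 3 * y**2 / b**2 - 3 * z**2 / c**2)

# The heat input peaks at the source centre and decays smoothly with distance
q_centre = goldak_front(0.0, 0.0, 0.0)
q_off = goldak_front(1e-3, 0.0, 0.0)
```

In an exploratory-design calibration such as the paper's, parameters like a, b, c, and f are the unknowns adjusted until the simulated melted zone and thermal history match the experiment.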

  7. Comparison of Nonlinear Random Response Using Equivalent Linearization and Numerical Simulation

    NASA Technical Reports Server (NTRS)

    Rizzi, Stephen A.; Muravyov, Alexander A.

    2000-01-01

    A recently developed finite-element-based equivalent linearization approach for the analysis of random vibrations of geometrically nonlinear multiple degree-of-freedom structures is validated. The validation is based on comparisons with results from a finite element based numerical simulation analysis using a numerical integration technique in physical coordinates. In particular, results for the case of a clamped-clamped beam are considered for an extensive load range to establish the limits of validity of the equivalent linearization approach.
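The core of equivalent linearization can be shown on a single-degree-of-freedom Duffing oscillator under white noise; the fixed-point scheme below is the classic textbook statistical-linearization version with assumed parameter values, not the paper's finite-element formulation:

```python
import numpy as np

# Equivalent (statistical) linearization of m x'' + c x' + k x + eps x^3 = F(t),
# with F white noise of two-sided PSD S0. Replacing the nonlinear stiffness by
# k_eq = k + 3 eps sigma^2 (Gaussian closure), the linear response variance is
# sigma^2 = pi S0 / (c k_eq), which defines a fixed-point iteration.
m, c, k, eps, S0 = 1.0, 0.05, 1.0, 0.5, 1e-3   # assumed illustrative values

sigma2 = np.pi * S0 / (c * k)        # start from the linear (eps = 0) variance
for _ in range(100):
    k_eq = k + 3.0 * eps * sigma2
    sigma2_new = np.pi * S0 / (c * k_eq)
    if abs(sigma2_new - sigma2) < 1e-14:
        break
    sigma2 = sigma2_new
# For a hardening spring the equivalent stiffness rises above k, so the
# equivalent-linear variance falls below the purely linear value.
```

The validation described in the abstract compares results of exactly this kind of equivalent-linear prediction (in multi-degree-of-freedom finite-element form) against direct numerical integration of the nonlinear equations.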

  8. Develop real-time dosimetry concepts and instrumentation for long term missions

    NASA Technical Reports Server (NTRS)

    Braby, L. A.

    1982-01-01

    The development of a rugged portable instrument to evaluate dose and dose equivalent is described. A tissue-equivalent proportional counter simulating a 2 micrometer spherical tissue volume was operated satisfactorily for over a year. The basic elements of the electronic system were designed and tested. And finally, the most suitable mathematical technique for evaluating dose equivalent with a portable instrument was selected. Design and fabrication of a portable prototype, based on the previously tested circuits, is underway.

  9. Precision Tests of a Quantum Hall Effect Device DC Equivalent Circuit Using Double-Series and Triple-Series Connections

    PubMed Central

    Jeffery, A.; Elmquist, R. E.; Cage, M. E.

    1995-01-01

    Precision tests verify the dc equivalent circuit used by Ricketts and Kemeny to describe a quantum Hall effect device in terms of electrical circuit elements. The tests employ the use of cryogenic current comparators and the double-series and triple-series connection techniques of Delahaye. Verification of the dc equivalent circuit in double-series and triple-series connections is a necessary step in developing the ac quantum Hall effect as an intrinsic standard of resistance. PMID:29151768

  10. 77 FR 11039 - Proposed Confidentiality Determinations for the Petroleum and Natural Gas Systems Source Category...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-02-24

    ... CO2 carbon dioxide; CO2e carbon dioxide equivalent; CBI confidential business information; CFR Code... RFA Regulatory Flexibility Act; T-D transmission-distribution; UIC Underground Injection Control; UMRA... to or greater than 25,000 metric tons carbon dioxide equivalent (mtCO2e). The proposed...

  11. National Snow Analyses - NOHRSC - The ultimate source for snow information

    Science.gov Websites

    Modeled snow products include Snow Water Equivalent (SWE), Snow Depth, and Average Snowpack Temperature, each viewable as an animation over a season, two weeks, or one day.

  12. Simulation Study of Near-Surface Coupling of Nuclear Devices vs. Equivalent High-Explosive Charges

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fournier, Kevin B; Walton, Otis R; Benjamin, Russ

    2014-09-29

    A computational study was performed to examine the differences in near-surface ground-waves and air-blast waves generated by high-explosive energy sources and those generated by much higher energy-density low-yield nuclear sources. The study examined the effect of explosive-source emplacement (i.e., height-of-burst, HOB, or depth-of-burial, DOB) over a range from depths of -35 m to heights of 20 m, for explosions with an explosive yield of 1 kt. The chemical explosive was modeled by a JWL equation-of-state model for a ~14 m diameter sphere of ANFO (~1,200,000 kg, 1 kt equivalent yield), and the high-energy-density source was modeled as a one tonne (1000 kg) plasma of 'iron gas' (utilizing LLNL's tabular equation-of-state database, LEOS) in a 2 m diameter sphere, with a total internal-energy content equivalent to 1 kt. A consistent equivalent-yield coupling-factor approach was developed to compare the behavior of the two sources. The results indicate that the equivalent-yield coupling-factor for air-blasts from 1 kt ANFO explosions varies monotonically and continuously from a nearly perfect reflected wave off of the ground surface for a HOB ≈ 20 m, to a coupling factor of nearly zero at DOB ≈ -25 m. The nuclear air-blast coupling curve, on the other hand, remained nearly equal to a perfectly reflected wave all the way down to HOBs very near zero, and then quickly dropped to a value near zero for explosions with a DOB ≈ -10 m. The near-surface ground-wave traveling horizontally out from the explosive source region to distances of hundreds of meters exhibited equivalent-yield coupling-factors that varied nearly linearly with HOB/DOB for the simulated ANFO explosive source, going from a value near zero at HOB ≈ 5 m to nearly one at DOB ≈ -25 m. 
The nuclear-source generated near-surface ground-wave coupling-factor remained near zero for almost all HOBs greater than zero, and then appeared to vary nearly linearly with depth-of-burial until it reached a value of one at a DOB between 15 m and 20 m. These simulations confirm the expected result that the coupling to the ground, or the air, changes much more rapidly with emplacement location for a high-energy-density (i.e., nuclear-like) explosive source than it does for relatively low-energy-density chemical explosive sources. The Energy Partitioning, Energy Coupling (EPEC) platform at LLNL utilizes laser energy from one quad (i.e., 4 laser beams) of the 192-beam NIF laser bank to deliver ~10 kJ of energy to 1 mg of silver in a hohlraum, creating an effective small-explosive 'source' with an energy density comparable to those in low-yield nuclear devices. Such experiments have the potential to provide direct experimental confirmation of the simulation results obtained in this study, at a physical scale (and time-scale) that is a factor of 1000 smaller than the spatial or temporal scales typically encountered when dealing with nuclear explosions.

  13. Multihelix rotating shield brachytherapy for cervical cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dadkhah, Hossein; Kim, Yusung; Flynn, Ryan T., E-mail: ryan-flynn@uiowa.edu

    Purpose: To present a novel brachytherapy technique, called multihelix rotating shield brachytherapy (H-RSBT), for the precise angular and linear positioning of a partial shield in a curved applicator. H-RSBT mechanically enables the dose delivery using only linear translational motion of the radiation source/shield combination. The previously proposed approach of serial rotating shield brachytherapy (S-RSBT), in which the partial shield is rotated to several angular positions at each source dwell position [W. Yang et al., “Rotating-shield brachytherapy for cervical cancer,” Phys. Med. Biol. 58, 3931–3941 (2013)], is mechanically challenging to implement in a curved applicator, and H-RSBT is proposed as a feasible solution. Methods: A Henschke-type applicator, designed for an electronic brachytherapy source (Xoft Axxent™) and a 0.5 mm thick tungsten partial shield with 180° or 45° azimuthal emission angles and 116° asymmetric zenith angle, is proposed. The interior wall of the applicator contains six evenly spaced helical keyways that rigidly define the emission direction of the partial radiation shield as a function of depth in the applicator. The shield contains three uniformly distributed protruding keys on its exterior wall and is attached to the source such that it rotates freely, thus longitudinal translational motion of the source is transferred to rotational motion of the shield. S-RSBT and H-RSBT treatment plans with 180° and 45° azimuthal emission angles were generated for five cervical cancer patients with a diverse range of high-risk target volume (HR-CTV) shapes and applicator positions. For each patient, the total number of emission angles was held nearly constant for S-RSBT and H-RSBT by using dwell positions separated by 5 and 1.7 mm, respectively, and emission directions separated by 22.5° and 60°, respectively. 
Treatment delivery time and tumor coverage (D{sub 90} of HR-CTV) were the two metrics used as the basis for evaluation and comparison. For all the generated treatment plans, the D{sub 90} of the HR-CTV in units of equivalent dose in 2 Gy fractions (EQD2) was escalated until the D{sub 2cc} (minimum dose to hottest 2 cm{sup 3}) tolerance of either the bladder (90 Gy{sub 3}), rectum (75 Gy{sub 3}), or sigmoid colon (75 Gy{sub 3}) was reached. Results: Treatment time changed for H-RSBT versus S-RSBT by −7.62% to 1.17% with an average change of −2.8%, thus H-RSBT treatment times tended to be shorter than for S-RSBT. The HR-CTV D{sub 90} also changed by −2.7% to 2.38% with an average of −0.65%. Conclusions: H-RSBT is a mechanically feasible delivery technique for use in the curved applicators needed for cervical cancer brachytherapy. S-RSBT and H-RSBT were clinically equivalent for all patients considered, with the H-RSBT technique tending to require less time for delivery.

  14. All the noncontextuality inequalities for arbitrary prepare-and-measure experiments with respect to any fixed set of operational equivalences

    NASA Astrophysics Data System (ADS)

    Schmid, David; Spekkens, Robert W.; Wolfe, Elie

    2018-06-01

    Within the framework of generalized noncontextuality, we introduce a general technique for systematically deriving noncontextuality inequalities for any experiment involving finitely many preparations and finitely many measurements, each of which has a finite number of outcomes. Given any fixed sets of operational equivalences among the preparations and among the measurements as input, the algorithm returns a set of noncontextuality inequalities whose satisfaction is necessary and sufficient for a set of operational data to admit of a noncontextual model. Additionally, we show that the space of noncontextual data tables always defines a polytope. Finally, we provide a computationally efficient means for testing whether any set of numerical data admits of a noncontextual model, with respect to any fixed operational equivalences. Together, these techniques provide complete methods for characterizing arbitrary noncontextuality scenarios, both in theory and in practice. Because a quantum prepare-and-measure experiment admits of a noncontextual model if and only if it admits of a positive quasiprobability representation, our techniques also determine the necessary and sufficient conditions for the existence of such a representation.
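Since the noncontextual data tables form a polytope, testing whether a given data table admits a noncontextual model reduces to convex-hull membership over the polytope's vertices, which a constrained least-squares (or linear-programming) feasibility solve can decide. A generic sketch with toy two-dimensional "data tables"; real noncontextuality polytopes have far more dimensions and vertices:

```python
import numpy as np
from scipy.optimize import nnls

def in_convex_hull(vertices, point, tol=1e-9):
    """Test whether `point` is a convex combination of the columns of `vertices`.

    Solves min ||A w - b|| subject to w >= 0, where an appended row of ones
    enforces sum(w) = 1; a near-zero residual means the point is in the hull.
    """
    A = np.vstack([vertices, np.ones(vertices.shape[1])])
    b = np.append(point, 1.0)
    _, residual = nnls(A, b)
    return residual < tol

# Toy "polytope": the unit square, with its four vertices as columns
V = np.array([[0, 1, 0, 1],
              [0, 0, 1, 1]], dtype=float)
print(in_convex_hull(V, np.array([0.5, 0.5])))  # True: inside the square
print(in_convex_hull(V, np.array([1.5, 0.5])))  # False: outside the square
```

In the noncontextuality setting, the columns of `V` would be the vertices of the noncontextual polytope and `point` the observed data table; infeasibility certifies a violation of some noncontextuality inequality.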

  15. Whatever Gave You That Idea? False Memories Following Equivalence Training: A Behavioral Account of the Misinformation Effect

    PubMed Central

    Challies, Danna M; Hunt, Maree; Garry, Maryanne; Harper, David N

    2011-01-01

    The misinformation effect is a term used in the cognitive psychological literature to describe both experimental and real-world instances in which misleading information is incorporated into an account of an historical event. In many real-world situations, it is not possible to identify a distinct source of misinformation, and it appears that the witness may have inferred a false memory by integrating information from a variety of sources. In a stimulus equivalence task, a small number of trained relations between some members of a class of arbitrary stimuli result in a large number of untrained, or emergent relations, between all members of the class. Misleading information was introduced into a simple memory task between a learning phase and a recognition test by means of a match-to-sample stimulus equivalence task that included both stimuli from the original learning task and novel stimuli. At the recognition test, participants given equivalence training were more likely to misidentify patterns than those who were not given such training. The misinformation effect was distinct from the effects of prior stimulus exposure, or partial stimulus control. In summary, stimulus equivalence processes may underlie some real-world manifestations of the misinformation effect. PMID:22084495
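The combinatorics behind emergent relations can be sketched directly: treating trained relations as edges, the reflexive, symmetric, and transitive closure groups stimuli into equivalence classes, so a few trained pairs imply many untrained relations. A union-find sketch with hypothetical stimulus labels:

```python
def equivalence_classes(trained_pairs):
    """Group stimuli into classes via the reflexive/symmetric/transitive closure
    of the trained relations (union-find; stimulus labels are hypothetical)."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    for a, b in trained_pairs:
        parent[find(a)] = find(b)
    classes = {}
    for x in list(parent):
        classes.setdefault(find(x), set()).add(x)
    return list(classes.values())

# Train only A->B and B->C in each class; relations such as C->A emerge untrained
trained = [("A1", "B1"), ("B1", "C1"), ("A2", "B2"), ("B2", "C2")]
classes = equivalence_classes(trained)
print(sorted(sorted(c) for c in classes))  # [['A1', 'B1', 'C1'], ['A2', 'B2', 'C2']]
```

Introducing a novel stimulus into an existing class via one trained pair, as in the misinformation manipulation described above, immediately makes it "equivalent" to every other member of that class.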

  16. Equivalent Expressions Using CAS and Paper-and-Pencil Techniques

    ERIC Educational Resources Information Center

    Fonger, Nicole L.

    2014-01-01

    How can the key concept of equivalent expressions be addressed so that students strengthen their representational fluency with symbols, graphs, and numbers? How can research inform the synergistic use of both paper-and-pencil analysis and computer algebra systems (CAS) in a classroom learning environment? These and other related questions have…

  17. Articulating Syntactic and Numeric Perspectives on Equivalence: The Case of Rational Expressions

    ERIC Educational Resources Information Center

    Solares, Armando; Kieran, Carolyn

    2013-01-01

    Our study concerns the conceptual mathematical knowledge that emerges during the resolution of tasks on the equivalence of polynomial and rational algebraic expressions, by using CAS and paper-and-pencil techniques. The theoretical framework we adopt is the Anthropological Theory of Didactics ("Chevallard" 19:221-266, 1999), in…

  18. Do Adjusting-Amount and Adjusting-Delay Procedures Produce Equivalent Estimates of Subjective Value in Pigeons?

    ERIC Educational Resources Information Center

    Green, Leonard; Myerson, Joel; Shah, Anuj K.; Estle, Sara J.; Holt, Daniel D.

    2007-01-01

    The current experiment examined whether adjusting-amount and adjusting-delay procedures provide equivalent measures of discounting. Pigeons' discounting on the two procedures was compared using a within-subject yoking technique in which the indifference point (number of pellets or time until reinforcement) obtained with one procedure determined…

  19. Alternative Fuels Data Center: Delaware Transportation Data for Alternative

    Science.gov Websites

    Data for local stakeholders: transportation fuel consumption (gasoline, diesel, natural gas); renewable power plant capacity (nameplate, MW; source: BioFuels Atlas from the National Renewable Energy Laboratory); average fuel price of $2.66 per gasoline gallon equivalent (GGE) for the Central Atlantic region.

  20. Jewish Studies: A Guide to Reference Sources.

    ERIC Educational Resources Information Center

    McGill Univ., Montreal (Quebec). McLennan Library.

    An annotated bibliography to the reference sources for Jewish Studies in the McLennan Library of McGill University (Canada) is presented. Any titles in Hebrew characters are listed by their transliterated equivalents. There is also a list of relevant Library of Congress Subject Headings. General reference sources listed are: encyclopedias,…

  1. Breath Analysis Using Laser Spectroscopic Techniques: Breath Biomarkers, Spectral Fingerprints, and Detection Limits

    PubMed Central

    Wang, Chuji; Sahay, Peeyush

    2009-01-01

    Breath analysis, a promising new field of medicine and medical instrumentation, potentially offers noninvasive, real-time, and point-of-care (POC) disease diagnostics and metabolic status monitoring. Numerous breath biomarkers have been detected and quantified so far by using the GC-MS technique. Recent advances in laser spectroscopic techniques and laser sources have driven breath analysis to new heights, moving from laboratory research to commercial reality. Laser spectroscopic detection techniques not only offer high sensitivity and high selectivity, comparable to those of the MS-based techniques, but also have the advantageous features of near real-time response, low instrument costs, and POC function. Of the approximately 35 established breath biomarkers, such as acetone, ammonia, carbon dioxide, ethane, methane, and nitric oxide, 14 species in exhaled human breath have been analyzed by high-sensitivity laser spectroscopic techniques, namely, tunable diode laser absorption spectroscopy (TDLAS), cavity ringdown spectroscopy (CRDS), integrated cavity output spectroscopy (ICOS), cavity enhanced absorption spectroscopy (CEAS), cavity leak-out spectroscopy (CALOS), photoacoustic spectroscopy (PAS), quartz-enhanced photoacoustic spectroscopy (QEPAS), and optical frequency comb cavity-enhanced absorption spectroscopy (OFC-CEAS). Spectral fingerprints of the measured biomarkers span from the UV to the mid-IR spectral regions, and the detection limits achieved by the laser techniques range from parts per million to parts per billion levels. Sensors using the laser spectroscopic techniques for a few breath biomarkers, e.g., carbon dioxide, nitric oxide, etc., are commercially available. This review presents an update on the latest developments in laser-based breath analysis. PMID:22408503
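The detection limits quoted above follow from the Beer-Lambert law: the smallest measurable absorbance fixes the smallest measurable absorber density. A back-of-envelope sketch with assumed values (cross section, effective path length, and noise floor are illustrative, not taken from the review):

```python
# Beer-Lambert estimate of an absorption-spectroscopy detection limit:
# I/I0 = exp(-sigma * N * L), so for small absorbance N_min ~ A_min / (sigma * L).
sigma = 1e-19      # absorption cross section at line center, cm^2/molecule (assumed)
L = 1e5            # effective path length, cm (~1 km, e.g. inside a high-finesse cavity)
A_min = 1e-6       # minimum detectable fractional absorption (assumed noise floor)

N_min = A_min / (sigma * L)    # minimum detectable number density, molecules/cm^3
N_air = 2.5e19                 # total number density of air at room conditions, molecules/cm^3
print(f"mole-fraction detection limit ~ {N_min / N_air:.1e}")  # ~4e-12, parts-per-trillion scale
```

Long effective path lengths are exactly what the cavity-enhanced techniques listed above (CRDS, ICOS, CEAS, CALOS) provide, which is why they reach ppb-level limits and below.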

  2. Dual-source spiral CT with pitch up to 3.2 and 75 ms temporal resolution: image reconstruction and assessment of image quality.

    PubMed

    Flohr, Thomas G; Leng, Shuai; Yu, Lifeng; Allmendinger, Thomas; Bruder, Herbert; Petersilka, Martin; Eusemann, Christian D; Stierstorfer, Karl; Schmidt, Bernhard; McCollough, Cynthia H

    2009-12-01

    To present the theory for image reconstruction of a high-pitch, high-temporal-resolution spiral scan mode for dual-source CT (DSCT) and evaluate its image quality and dose. With the use of two x-ray sources and two data acquisition systems, spiral CT exams having a nominal temporal resolution per image of up to one-quarter of the gantry rotation time can be acquired using pitch values up to 3.2. The scan field of view (SFOV) for this mode, however, is limited to the SFOV of the second detector as a maximum, depending on the pitch. Spatial and low contrast resolution, image uniformity and noise, CT number accuracy and linearity, and radiation dose were assessed using the ACR CT accreditation phantom, a 30 cm diameter cylindrical water phantom, or a 32 cm diameter cylindrical PMMA CTDI phantom. Slice sensitivity profiles (SSPs) were measured for different nominal slice thicknesses, and an anthropomorphic phantom was used to assess image artifacts. Results were compared between single-source scans at pitch = 1.0 and dual-source scans at pitch = 3.2. In addition, image quality and temporal resolution of an ECG-triggered version of the DSCT high-pitch spiral scan mode were evaluated with a moving coronary artery phantom, and radiation dose was assessed in comparison with other existing cardiac scan techniques. No significant differences in quantitative measures of image quality were found between single-source scans at pitch = 1.0 and dual-source scans at pitch = 3.2 for spatial and low contrast resolution, CT number accuracy and linearity, SSPs, image uniformity, and noise. The pitch value (1.6 ≤ pitch ≤ 3.2) had only a minor impact on radiation dose and image noise when the effective tube current time product (mA s/pitch) was kept constant. However, while not severe, artifacts were found to be more prevalent for the dual-source pitch = 3.2 scan mode when structures varied markedly along the z axis, particularly for head scans. 
Images of the moving coronary artery phantom acquired with the ECG-triggered high-pitch scan mode were visually free from motion artifacts at heart rates of 60 and 70 bpm. However, image quality started to deteriorate for higher heart rates. At equivalent image quality, the ECG-triggered high-pitch scan mode demonstrated lower radiation dose than other cardiac scan techniques on the same DSCT equipment (25% and 60% dose reduction compared to ECG-triggered sequential step-and-shoot and ECG-gated spiral with x-ray pulsing). A high-pitch (up to pitch = 3.2), high-temporal-resolution (up to 75 ms) dual-source CT scan mode produced equivalent image quality relative to single-source scans using a more typical pitch value (pitch = 1.0). The resultant reduction in the overall acquisition time may offer clinical advantage for cardiovascular, trauma, and pediatric CT applications. In addition, ECG-triggered high-pitch scanning may be useful as an alternative to ECG-triggered sequential scanning for patients with low to moderate heart rates up to 70 bpm, with the potential to scan the heart within one heart beat at reduced radiation dose.

  3. Predicting tropical cyclone intensity using satellite measured equivalent blackbody temperatures of cloud tops. [regression analysis

    NASA Technical Reports Server (NTRS)

    Gentry, R. C.; Rodgers, E.; Steranka, J.; Shenk, W. E.

    1978-01-01

    A regression technique was developed to forecast 24 hour changes of the maximum winds for weak (maximum winds less than or equal to 65 kt) and strong (maximum winds greater than 65 kt) tropical cyclones by utilizing satellite-measured equivalent blackbody temperatures around the storm, alone and together with the changes in maximum winds during the preceding 24 hours and the current maximum winds. Independent testing of these regression equations shows that the mean errors made by the equations are lower than the errors in forecasts made by the persistence technique.
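The regression setup described above can be sketched with synthetic data; every predictor relationship and coefficient below is invented for illustration, whereas the real equations were fit to satellite and storm observations:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
# Hypothetical predictors: mean cloud-top blackbody temperature (K), current
# maximum wind (kt), and wind change over the preceding 24 h (kt)
T_bb = rng.uniform(200, 260, n)
v_now = rng.uniform(30, 65, n)
dv_past = rng.uniform(-20, 20, n)
# Synthetic target: 24 h wind change with noise (colder cloud tops -> intensification)
dv_next = -0.3 * (T_bb - 230) + 0.2 * dv_past - 0.1 * (v_now - 45) + rng.normal(0, 3, n)

X = np.column_stack([np.ones(n), T_bb, v_now, dv_past])   # design matrix with intercept
coef, *_ = np.linalg.lstsq(X, dv_next, rcond=None)        # least-squares regression fit

forecast = X @ coef   # predicted 24 h change in maximum wind
```

"Independent testing" in the abstract corresponds to evaluating such a fit on storms withheld from the training set, against the persistence baseline of simply repeating the previous 24 h change.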

  4. High precision test of the equivalence principle

    NASA Astrophysics Data System (ADS)

    Schlamminger, Stephan; Wagner, Todd; Choi, Ki-Young; Gundlach, Jens; Adelberger, Eric

    2007-05-01

    The equivalence principle is the underlying foundation of General Relativity. Many modern quantum theories of gravity predict violations of the equivalence principle. We are using a rotating torsion balance to search for a new equivalence-principle-violating, long-range interaction. A sensitive torsion balance is mounted on a turntable rotating with constant angular velocity. On the torsion pendulum, beryllium and titanium test bodies are installed in a composition dipole configuration. A violation of the equivalence principle would lead to a differential acceleration of the two materials towards a source mass. I will present measurements with a differential acceleration sensitivity of 3×10^-15 m/s^2. To cite this abstract, use the following reference: http://meetings.aps.org/link/BAPS.2007.NWS07.B3.5

  5. An equivalent viscoelastic model for rock mass with parallel joints

    NASA Astrophysics Data System (ADS)

    Li, Jianchun; Ma, Guowei; Zhao, Jian

    2010-03-01

    An equivalent viscoelastic medium model is proposed for rock mass with parallel joints. A concept of "virtual wave source (VWS)" is proposed to take into account the wave reflections between the joints. The equivalent model can be effectively applied to analyze longitudinal wave propagation through discontinuous media with parallel joints. Parameters in the equivalent viscoelastic model are derived analytically based on longitudinal wave propagation across a single rock joint. The proposed model is then verified by applying identical incident waves to the discontinuous and equivalent viscoelastic media at one end to compare the output waves at the other end. When the wavelength of the incident wave is sufficiently long compared to the joint spacing, the effect of the VWS on wave propagation in rock mass is prominent. The results from the equivalent viscoelastic medium model are very similar to those determined from the displacement discontinuity method. Frequency dependence and joint spacing effect on the equivalent viscoelastic model and the VWS method are discussed.
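The single-joint building block from which the equivalent medium parameters are derived is the displacement discontinuity result for a linearly elastic joint: for a normally incident longitudinal wave the transmission coefficient is T(ω) = 1/(1 + iωZ/(2k)), with Z = ρc the seismic impedance and k the joint specific stiffness. A sketch with assumed rock and joint properties (the paper's values may differ):

```python
import numpy as np

# Frequency-dependent transmission across a single linearly elastic joint
# (displacement discontinuity model; all property values below are assumed)
rho = 2650.0      # rock density, kg/m^3
c = 5000.0        # longitudinal wave speed, m/s
Z = rho * c       # seismic impedance
k = 5e9           # joint specific stiffness, Pa/m

omega = 2 * np.pi * np.linspace(1.0, 5000.0, 500)   # angular frequency, rad/s
T = 1.0 / (1.0 + 1j * omega * Z / (2.0 * k))        # complex transmission coefficient
# Low frequencies pass almost unchanged; high frequencies are strongly attenuated,
# which is why the equivalent viscoelastic model is frequency dependent.
```

For a set of parallel joints, the equivalent model in the abstract additionally accounts for the multiple reflections between joints via the virtual wave source, rather than simply multiplying single-joint coefficients.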

  6. Skyshine at neutron energies less than or equal to 400 MeV

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alsmiller, A.G. Jr.; Barish, J.; Childs, R.L.

    1980-10-01

    The dose equivalent at an air-ground interface as a function of distance from an assumed azimuthally symmetric point source of neutrons can be calculated as a double integral. The integration is over the source strength as a function of energy and polar angle, weighted by an importance function that depends on the source variables and on the distance from the source to the field point. The neutron importance function for a source 15 m above the ground emitting only into the upper hemisphere has been calculated using the two-dimensional discrete ordinates code, DOT, and the first collision source code, GRTUNCL, in the adjoint mode. This importance function is presented for neutron energies less than or equal to 400 MeV, for source cosine intervals of 1 to 0.8, 0.8 to 0.6, 0.6 to 0.4, 0.4 to 0.2, and 0.2 to 0, and for various distances from the source to the field point. As part of the adjoint calculations, a photon importance function is also obtained. This importance function, for photon energies less than or equal to 14 MeV and for various source cosine intervals and source-to-field-point distances, is also presented. These importance functions may be used to obtain skyshine dose equivalent estimates for any known source energy-angle distribution.
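    In discretized form, the double integral above becomes a sum over energy bins and source-cosine intervals of source strength times importance. A hedged sketch with placeholder numbers (the real importance values come from the DOT/GRTUNCL adjoint calculations, not from this code):

    ```python
    def skyshine_dose_equivalent(source_strength, importance):
        """Dose equivalent at one field point: sum over energy bins i and
        source-cosine intervals j of S[i][j] * I[i][j].
        source_strength: neutrons emitted per (energy, cosine) bin;
        importance: dose equivalent per source neutron for the same bin."""
        return sum(s * w
                   for s_row, i_row in zip(source_strength, importance)
                   for s, w in zip(s_row, i_row))

    # Two energy bins x five cosine intervals (1-0.8, 0.8-0.6, 0.6-0.4,
    # 0.4-0.2, 0.2-0); all numbers are illustrative placeholders.
    S = [[1e6, 8e5, 6e5, 4e5, 2e5],
         [5e5, 4e5, 3e5, 2e5, 1e5]]
    I = [[2e-15, 1.5e-15, 1e-15, 7e-16, 5e-16],
         [4e-15, 3e-15, 2e-15, 1e-15, 8e-16]]
    print(skyshine_dose_equivalent(S, I))  # dose equivalent at the field point
    ```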

  7. Distributed source model for the full-wave electromagnetic simulation of nonlinear terahertz generation.

    PubMed

    Fumeaux, Christophe; Lin, Hungyen; Serita, Kazunori; Withayachumnankul, Withawat; Kaufmann, Thomas; Tonouchi, Masayoshi; Abbott, Derek

    2012-07-30

    The process of terahertz generation through optical rectification in a nonlinear crystal is modeled using discretized equivalent current sources. The equivalent terahertz sources are distributed in the active volume and computed based on a separately modeled near-infrared pump beam. This approach can be used to define an appropriate excitation for full-wave electromagnetic numerical simulations of the generated terahertz radiation. This enables predictive modeling of the near-field interactions of the terahertz beam with micro-structured samples, e.g. in a near-field time-resolved microscopy system. The distributed source model is described in detail, and an implementation in a particular full-wave simulation tool is presented. The numerical results are then validated through a series of measurements on square apertures. The general principle can be applied to other nonlinear processes with possible implementation in any full-wave numerical electromagnetic solver.

  8. Lineal energy calibration of mini tissue-equivalent gas-proportional counters (TEPC)

    NASA Astrophysics Data System (ADS)

    Conte, V.; Moro, D.; Grosswendt, B.; Colautti, P.

    2013-07-01

    Mini TEPCs are cylindrical gas proportional counters with a sensitive-volume diameter of 1 mm or less. The lineal energy calibration of these tiny counters can be performed with an external gamma-ray source; to do so, however, a method must first be found to obtain a simple and precise spectral mark, and then the keV/μm value of that mark must be determined. A precise method (with less than 1% uncertainty) to identify this mark is described here, and the lineal energy value of the mark has been measured for different simulated site sizes by using a 137Cs gamma source and a cylindrical TEPC equipped with a precision internal 244Cm alpha-particle source and filled with a propane-based tissue-equivalent gas mixture. Mini TEPCs can thus be calibrated in terms of lineal energy, by exposing them to 137Cs sources, with an overall uncertainty of about 5%.
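    Once the spectral mark is identified and its lineal-energy value is known, the calibration reduces to a single scale factor anchored at that mark. A sketch, with a hypothetical channel number and mark value chosen purely for illustration:

    ```python
    def lineal_energy_calibration(channel_of_mark, y_of_mark_keV_per_um):
        """Return a function converting measurement channels to lineal energy,
        assuming a linear response through the origin anchored at the mark."""
        k = y_of_mark_keV_per_um / channel_of_mark
        return lambda channel: k * channel

    # Hypothetical: the identified mark sits at channel 420 and corresponds
    # to 11.0 keV/um for the simulated site size used.
    to_keV_um = lineal_energy_calibration(420, 11.0)
    print(to_keV_um(420))  # recovers the mark's lineal energy
    print(to_keV_um(840))  # twice the channel -> twice the lineal energy
    ```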

  9. Urea and urine are a viable and cost-effective nitrogen source for Yarrowia lipolytica biomass and lipid accumulation.

    PubMed

    Brabender, Matthew; Hussain, Murtaza Shabbir; Rodriguez, Gabriel; Blenner, Mark A

    2018-03-01

    Yarrowia lipolytica is an industrial yeast that has been used in the sustainable production of fatty acid-derived and lipid compounds due to its high growth capacity, genetic tractability, and oleaginous properties. This investigation examines the possibility of utilizing urea or urine as an alternative to ammonium sulfate as a nitrogen source to culture Y. lipolytica. The use of a stoichiometrically equivalent concentration of urea in lieu of ammonium sulfate significantly increased cell growth when glucose was used as the carbon source. Furthermore, Y. lipolytica growth was equally improved when grown with synthetic urine and real human urine. Equivalent or better lipid production was achieved when cells were grown on urea or urine. The successful use of urea and urine as nitrogen sources for Y. lipolytica growth highlights the potential of using cheaper media components as well as exploiting and recycling non-treated human waste streams for biotechnology processes.

  10. Equivalent isotropic scattering formulation for transient short-pulse radiative transfer in anisotropic scattering planar media.

    PubMed

    Guo, Z; Kumar, S

    2000-08-20

    An isotropic scaling formulation is evaluated for transient radiative transfer in a one-dimensional planar slab subject to collimated and/or diffuse irradiation. The Monte Carlo method is used to implement the equivalent scattering and exact simulations of the transient short-pulse radiation transport through forward and backward anisotropic scattering planar media. The scaled equivalent isotropic scattering results are compared with predictions of anisotropic scattering in various problems. It is found that the equivalent isotropic scaling law is not appropriate for backward-scattering media in transient radiative transfer. Even for an optically diffuse medium, the differences in temporal transmittance and reflectance profiles between predictions of backward anisotropic scattering and equivalent isotropic scattering are large. Additionally, for both forward and backward anisotropic scattering media, the transient equivalent isotropic results are strongly affected by the change of photon flight time, owing to the change of flight direction associated with the isotropic scaling technique.
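    The isotropic scaling (similarity) relation being evaluated replaces an anisotropic medium with scattering coefficient sigma_s and asymmetry factor g by an equivalent isotropic medium with reduced scattering coefficient sigma_s' = sigma_s(1 - g). A minimal sketch of the substitution:

    ```python
    def equivalent_isotropic(sigma_a, sigma_s, g):
        """Similarity relation: return the reduced (equivalent isotropic)
        scattering coefficient and the corresponding scaled extinction
        coefficient, per unit length. g > 0: forward scattering;
        g < 0: backward scattering."""
        sigma_s_prime = sigma_s * (1.0 - g)
        return sigma_s_prime, sigma_a + sigma_s_prime

    # Forward-scattering medium (g = 0.9): scattering is scaled down sharply.
    print(equivalent_isotropic(0.1, 10.0, 0.9))
    # Backward-scattering medium (g = -0.5): the scaled scattering *exceeds*
    # sigma_s, one reason the scaling performs poorly for backward scattering
    # in the transient regime.
    print(equivalent_isotropic(0.1, 10.0, -0.5))
    ```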

  11. Wavelet transform analysis of the small-scale X-ray structure of the cluster Abell 1367

    NASA Technical Reports Server (NTRS)

    Grebeney, S. A.; Forman, W.; Jones, C.; Murray, S.

    1995-01-01

    We have developed a new technique based on a wavelet transform analysis to quantify the small-scale (less than a few arcminutes) X-ray structure of clusters of galaxies. We apply this technique to the ROSAT position sensitive proportional counter (PSPC) and Einstein high-resolution imager (HRI) images of the central region of the cluster Abell 1367 to detect sources embedded within the diffuse intracluster medium. In addition to detecting sources and determining their fluxes and positions, we show that the wavelet analysis allows a characterization of the sources' extents. In particular, the wavelet scale at which a given source achieves a maximum signal-to-noise ratio in the wavelet images provides an estimate of the angular extent of the source. Accounting for the widely varying point response of the ROSAT PSPC as a function of off-axis angle requires a quantitative measurement of the source size and a comparison to a calibration derived from the analysis of a Deep Survey image. We therefore assumed that each source could be described as an isotropic two-dimensional Gaussian and used the wavelet amplitudes, at different scales, to determine the equivalent Gaussian full width at half maximum (FWHM), and its uncertainty, for each source. In our analysis of the ROSAT PSPC image, we detect 31 X-ray sources above the diffuse cluster emission (within a radius of 24 arcmin), 16 of which are apparently associated with cluster galaxies and two with serendipitous background quasars. We find that the angular extents of 11 sources exceed the nominal width of the PSPC point-spread function. Four of these extended sources were previously detected by Bechtold et al. (1983) as 1 sec scale features using the Einstein HRI. The same wavelet analysis technique was applied to the Einstein HRI image, in which we detect 28 sources, nine of them extended. Eight of the extended sources correspond to sources previously detected by Bechtold et al.
    Overall, using both the PSPC and the HRI observations, we detect 16 extended features, of which nine have galaxies coincident with the X-ray-measured positions (within the positional error circles). These extended sources have luminosities in the range (3 - 30) x 10(exp 40) ergs/s and gas masses of approximately (1 - 30) x 10(exp 9) solar masses, if the X-rays are of thermal origin. We confirm the presence of extended features in A1367 first reported by Bechtold et al. (1983). The nature of these systems remains uncertain: the luminosities are large if the emission is attributed to single galaxies, and several of the extended features have no associated galaxy counterparts. The extended features may instead be associated with galaxy groups, as suggested by Canizares, Fabbiano, & Trinchieri (1987), although the number required is large.

  12. Resonance treatment using pin-based pointwise energy slowing-down method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Choi, Sooyoung, E-mail: csy0321@unist.ac.kr; Lee, Changho, E-mail: clee@anl.gov; Lee, Deokjung, E-mail: deokjung@unist.ac.kr

    A new resonance self-shielding method using a pointwise energy solution has been developed to overcome the drawbacks of the equivalence theory. The equivalence theory uses a crude resonance scattering source approximation, and assumes a spatially constant scattering source distribution inside a fuel pellet. These two assumptions cause a significant error, in that they overestimate the multi-group effective cross sections, especially for {sup 238}U. The new resonance self-shielding method solves pointwise energy slowing-down equations with a sub-divided fuel rod. The method adopts a shadowing effect correction factor and fictitious moderator material to model a realistic pointwise energy solution. The slowing-down solution is used to generate the multi-group cross section. With various light water reactor problems, it was demonstrated that the new resonance self-shielding method significantly improved accuracy in the reactor parameter calculation with no compromise in computation time, compared to the equivalence theory.

  13. Arthroscopic Double-Row Transosseous Equivalent Rotator Cuff Repair with a Knotless Self-Reinforcing Technique.

    PubMed

    Mook, William R; Greenspoon, Joshua A; Millett, Peter J

    2016-01-01

    Rotator cuff tears are a significant cause of shoulder morbidity. Surgical techniques for repair have evolved to optimize the biologic and mechanical variables critical to tendon healing. Double-row repairs have demonstrated biomechanical advantages over single-row repairs. The senior author's preferred technique for rotator cuff repair was reviewed and described in a step-by-step fashion. The final construct is a knotless, double-row, transosseous-equivalent construct. The described technique offers the advantages of a double-row construct while also providing self-reinforcement, decreased risk of suture cut-through, decreased risk of medial-row overtensioning and tissue strangulation, improved vascularity, the efficiency of a knotless system, and no increased risk of subacromial impingement from the burden of suture knots. Arthroscopic knotless double-row rotator cuff repair is a safe and effective method of repairing rotator cuff tears.

  14. Arthroscopic Double-Row Transosseous Equivalent Rotator Cuff Repair with a Knotless Self-Reinforcing Technique

    PubMed Central

    Mook, William R.; Greenspoon, Joshua A.; Millett, Peter J.

    2016-01-01

    Background: Rotator cuff tears are a significant cause of shoulder morbidity. Surgical techniques for repair have evolved to optimize the biologic and mechanical variables critical to tendon healing. Double-row repairs have demonstrated biomechanical advantages over single-row repairs. Methods: The senior author's preferred technique for rotator cuff repair was reviewed and described in a step-by-step fashion. The final construct is a knotless, double-row, transosseous-equivalent construct. Results: The described technique offers the advantages of a double-row construct while also providing self-reinforcement, decreased risk of suture cut-through, decreased risk of medial-row overtensioning and tissue strangulation, improved vascularity, the efficiency of a knotless system, and no increased risk of subacromial impingement from the burden of suture knots. Conclusion: Arthroscopic knotless double-row rotator cuff repair is a safe and effective method of repairing rotator cuff tears. PMID:27733881

  15. On the equivalence of experimental B(E2) values determined by various techniques

    DOE PAGES

    Birch, M.; Pritychenko, B.; Singh, B.

    2016-06-30

    In this paper, we establish the equivalence of the various techniques for measuring B(E2) values using a statistical analysis. Data used in this work come from the recent compilation by B. Pritychenko et al. (2016). We consider only those nuclei for which the B(E2) values were measured by at least two different methods, with each method independently performed at least twice. Our results indicate that the most prevalent methods of measuring B(E2) values are equivalent, with some weak evidence that Doppler-shift attenuation method (DSAM) measurements may differ from Coulomb excitation (CE) and nuclear resonance fluorescence (NRF) measurements. However, such evidence appears to arise from discrepant DSAM measurements of the lifetimes for 60Ni and some Sn nuclei rather than from a systematic deviation in the method itself.

  16. Measurement of absorbed dose with a bone-equivalent extrapolation chamber.

    PubMed

    DeBlois, François; Abdel-Rahman, Wamied; Seuntjens, Jan P; Podgorsak, Ervin B

    2002-03-01

    A hybrid phantom-embedded extrapolation chamber (PEEC) made of Solid Water and bone-equivalent material was used for determining absorbed dose in a bone-equivalent phantom irradiated with clinical radiation beams (cobalt-60 gamma rays; 6 and 18 MV x rays; and 9 and 15 MeV electrons). The dose was determined with the Spencer-Attix cavity theory, using ionization gradient measurements and an indirect determination of the chamber air-mass through measurements of chamber capacitance. The collected charge was corrected for ionic recombination and diffusion in the chamber air volume following the standard two-voltage technique. Due to the hybrid chamber design, correction factors accounting for scatter deficit and electrode composition were determined and applied in the dose equation to obtain absorbed dose in bone for the equivalent homogeneous bone phantom. Correction factors for graphite electrodes were calculated with Monte Carlo techniques and the calculated results were verified through relative air cavity dose measurements for three different polarizing electrode materials: graphite, steel, and brass in conjunction with a graphite collecting electrode. Scatter deficit, due mainly to loss of lateral scatter in the hybrid chamber, reduces the dose to the air cavity in the hybrid PEEC in comparison with full bone PEEC by 0.7% to approximately 2% depending on beam quality and energy. In megavoltage photon and electron beams, graphite electrodes do not affect the dose measurement in the Solid Water PEEC but decrease the cavity dose by up to 5% in the bone-equivalent PEEC even for very thin graphite electrodes (<0.0025 cm). In conjunction with appropriate correction factors determined with Monte Carlo techniques, the uncalibrated hybrid PEEC can be used for measuring absorbed dose in bone material to within 2% for high-energy photon and electron beams.
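    The standard two-voltage technique mentioned above estimates the ion recombination correction from charges collected at two polarizing voltages; for a continuous beam such as cobalt-60, the commonly used form is P_ion = ((V_H/V_L)^2 - 1) / ((V_H/V_L)^2 - M_H/M_L). A sketch with illustrative readings (not data from this study):

    ```python
    def p_ion_continuous(v_high, v_low, m_high, m_low):
        """Two-voltage ion recombination correction for continuous beams:
        charges m_high and m_low collected at polarizing voltages v_high
        and v_low (with v_high > v_low)."""
        r = (v_high / v_low) ** 2
        return (r - 1.0) / (r - m_high / m_low)

    # Illustrative readings: halving the voltage loses ~0.3% of the charge,
    # giving a correction slightly above 1.
    print(p_ion_continuous(300.0, 150.0, 1.000, 0.997))
    ```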

  17. Novel three-dimensional autologous tissue-engineered vaginal tissues using the self-assembly technique.

    PubMed

    Orabi, Hazem; Saba, Ingrid; Rousseau, Alexandre; Bolduc, Stéphane

    2017-02-01

    Many diseases necessitate the substitution of vaginal tissues, and current replacement therapies are associated with many complications. In this study, we aimed to create bioengineered neovaginas with the self-assembly technique using autologous vaginal epithelial (VE) and vaginal stromal (VS) cells, without the use of exogenous materials, and to document the survival and incorporation of these grafts into the tissues of nude female mice. Epithelial and stromal cells were isolated from vaginal biopsies. Stromal cells were driven to form collagen sheets, three of which were superimposed to form vaginal stromas. VE cells were seeded on top of these stromas and allowed to mature at an air-liquid interface. The vaginal equivalents were implanted subcutaneously in female nude mice, which were sacrificed 1 and 2 weeks after surgery. The in vitro and animal-retrieved equivalents were assessed using histologic, functional, and mechanical evaluations. The vaginal equivalents could be handled easily. VE cells formed a well-differentiated epithelial layer with a continuous basement membrane. The equivalent matrix was composed of collagen I and III and elastin. The epithelium, basement membrane, and stroma were comparable to those of native vaginal tissues. The implanted equivalents formed mature vaginal epithelium and matrix that were integrated into the mouse tissues. Using the self-assembly technique, in vitro vaginal tissues were created with many functional and biological similarities to the native vagina, without any foreign material, and they formed functional vaginal tissues after in vivo implantation. This approach is appropriate for vaginal substitution and for disease modeling in infection studies, vaginal applications, and drug testing. Copyright © 2016 Elsevier Inc. All rights reserved.

  18. Make Your Own Paint Chart: A Realistic Context for Developing Proportional Reasoning with Ratios

    ERIC Educational Resources Information Center

    Beswick, Kim

    2011-01-01

    Proportional reasoning has been recognised as a crucial focus of mathematics in the middle years and also as a frequent source of difficulty for students (Lamon, 2007). Proportional reasoning concerns the equivalence of pairs of quantities that are related multiplicatively; that is, equivalent ratios including those expressed as fractions and…

  19. 40 CFR Table 5 to Subpart Mmmm of... - Model Rule-Toxic Equivalency Factors

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 7 2013-07-01 2013-07-01 false Model Rule-Toxic Equivalency Factors 5 Table 5 to Subpart MMMM of Part 60 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Emission Guidelines and Compliance Times for Existing Sewage Sludge...

  20. 40 CFR Table 5 to Subpart Mmmm of... - Model Rule-Toxic Equivalency Factors

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 7 2014-07-01 2014-07-01 false Model Rule-Toxic Equivalency Factors 5 Table 5 to Subpart MMMM of Part 60 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Emission Guidelines and Compliance Times for Existing Sewage Sludge...

  1. A Complete Multimode Equivalent-Circuit Theory for Electrical Design

    PubMed Central

    Williams, Dylan F.; Hayden, Leonard A.; Marks, Roger B.

    1997-01-01

    This work presents a complete equivalent-circuit theory for lossy multimode transmission lines. Its voltages and currents are based on general linear combinations of standard normalized modal voltages and currents. The theory includes new expressions for transmission line impedance matrices, symmetry and lossless conditions, source representations, and the thermal noise of passive multiports. PMID:27805153

  2. Quantification of isomerically summed hydrocarbon contributions to crude oil by carbon number, double bond equivalent, and aromaticity using gas chromatography with tunable vacuum ultraviolet ionization.

    PubMed

    Nowak, Jeremy A; Weber, Robert J; Goldstein, Allen H

    2018-03-12

    The ability to structurally characterize and isomerically quantify crude oil hydrocarbons relevant to refined fuels such as motor oil, diesel, and gasoline represents an extreme challenge for chromatographic and mass spectrometric techniques. This work incorporates two-dimensional gas chromatography coupled to a tunable vacuum ultraviolet soft photoionization source, the Chemical Dynamics Beamline 9.0.2 of the Advanced Light Source at the Lawrence Berkeley National Laboratory, with a time-of-flight mass spectrometer (GC × GC-VUV-TOF) to directly characterize and isomerically sum the contributions of aromatic and aliphatic species to hydrocarbon classes of four crude oils. When the VUV beam is tuned to 10.5 ± 0.2 eV, both aromatic and aliphatic crude oil hydrocarbons are ionized to reveal the complete chemical abundance of C9-C30 hydrocarbons. When the VUV beam is tuned to 9.0 ± 0.2 eV, only aromatic hydrocarbons are ionized, allowing separation of the aliphatic and aromatic fractions of the crude oil hydrocarbon chemical classes in an efficient manner while maintaining isomeric quantification. This technique provides an effective tool to determine the isomerically summed aromatic and aliphatic hydrocarbon compositions of crude oil, providing information that goes beyond typical GC × GC separations of the most dominant hydrocarbon isomers.
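    The two-photon-energy scheme implies a first-order decomposition: the 10.5 eV spectrum contains both classes, the 9.0 eV spectrum only aromatics, so the aliphatic contribution is the difference. A simplified sketch that ignores the per-class response calibration a real analysis would require:

    ```python
    def split_aromatic_aliphatic(signal_10_5_eV, signal_9_0_eV):
        """First-order class split per carbon-number bin: aromatic signal taken
        from the 9.0 eV run, aliphatic estimated as the 10.5 eV excess.
        Assumes equal response factors (a real analysis calibrates per class)."""
        return {cn: (signal_9_0_eV.get(cn, 0.0),
                     max(signal_10_5_eV[cn] - signal_9_0_eV.get(cn, 0.0), 0.0))
                for cn in signal_10_5_eV}

    # Illustrative abundances (arbitrary units) for three carbon numbers.
    total = {12: 100.0, 16: 80.0, 20: 50.0}
    arom = {12: 30.0, 16: 10.0}
    print(split_aromatic_aliphatic(total, arom))
    # {12: (30.0, 70.0), 16: (10.0, 70.0), 20: (0.0, 50.0)}
    ```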

  3. Alternative Fuels Data Center: Maine Transportation Data for Alternative

    Science.gov Websites

    State transportation-fuel consumption data for gasoline, diesel, and natural gas; 58 renewable power plants with 984 MW of nameplate capacity (source: BioFuels Atlas from the National Renewable Energy Laboratory); average fuel prices of $2.96/gallon ($2.66 per gasoline gallon equivalent, GGE) for the New England region.

  4. Alternative Fuels Data Center: West Virginia Transportation Data for

    Science.gov Websites

    Transportation fuel consumption from the State Energy Data System (beta data converted to gasoline gallon equivalents), 20,000 bbl/day; 13 renewable power plants with 751 MW of nameplate capacity; average prices per gasoline gallon equivalent (GGE) for the Lower Atlantic PADD.

  5. Alternative Fuels Data Center: Hawaii Transportation Data for Alternative

    Science.gov Websites

    Transportation fuel consumption (diesel, natural gas) from the State Energy Data System (beta data); renewable power plant capacity of 145 MW nameplate (source: BioFuels Atlas from the National Renewable Energy Laboratory); average fuel prices of $2.96/gallon ($2.66/GGE) for the West Coast region.

  6. Alternative Fuels Data Center: Oklahoma Transportation Data for Alternative

    Science.gov Websites

    Transportation fuel consumption from the State Energy Data System (beta data converted to gasoline gallon equivalents); renewable power plant capacity of 2,573 MW nameplate (source: BioFuels Atlas from the National Renewable Energy Laboratory); average prices per gasoline gallon equivalent (GGE) for the Midwest PADD.

  7. Alternative Fuels Data Center: Nevada Transportation Data for Alternative

    Science.gov Websites

    Transportation fuel consumption (gasoline, diesel, natural gas, electricity) from the State Energy Data System; renewable power plant capacity of 1,684 MW nameplate (source: BioFuels Atlas from the National Renewable Energy Laboratory); average prices per gasoline gallon equivalent (GGE) for the West Coast PADD.

  8. Alternative Fuels Data Center: Montana Transportation Data for Alternative

    Science.gov Websites

    Transportation fuel consumption (gasoline, diesel, natural gas) from the State Energy Data System (beta data); renewable power plant capacity of 2,955 MW nameplate (source: BioFuels Atlas from the National Renewable Energy Laboratory); average fuel prices per gasoline gallon equivalent ($2.66/GGE) for the Rocky Mountain PADD.

  9. 40 CFR 63.55 - Maximum achievable control technology (MACT) determinations for affected sources subject to case...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 10 2013-07-01 2013-07-01 false Maximum achievable control technology (MACT) determinations for affected sources subject to case-by-case determination of equivalent emission... Requirements for Control Technology Determinations for Major Sources in Accordance With Clean Air Act Sections...

  10. 40 CFR 63.55 - Maximum achievable control technology (MACT) determinations for affected sources subject to case...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 10 2012-07-01 2012-07-01 false Maximum achievable control technology (MACT) determinations for affected sources subject to case-by-case determination of equivalent emission... Requirements for Control Technology Determinations for Major Sources in Accordance With Clean Air Act Sections...

  11. 40 CFR 63.55 - Maximum achievable control technology (MACT) determinations for affected sources subject to case...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 9 2011-07-01 2011-07-01 false Maximum achievable control technology (MACT) determinations for affected sources subject to case-by-case determination of equivalent emission... Requirements for Control Technology Determinations for Major Sources in Accordance With Clean Air Act Sections...

  12. 40 CFR 63.55 - Maximum achievable control technology (MACT) determinations for affected sources subject to case...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 10 2014-07-01 2014-07-01 false Maximum achievable control technology (MACT) determinations for affected sources subject to case-by-case determination of equivalent emission... Requirements for Control Technology Determinations for Major Sources in Accordance With Clean Air Act Sections...

  13. Qualification tests for {sup 192}Ir sealed sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Iancso, Georgeta, E-mail: georgetaiancso@yahoo.com; Iliescu, Elena, E-mail: georgetaiancso@yahoo.com; Iancu, Rodica, E-mail: georgetaiancso@yahoo.com

    This paper describes the results of qualification tests for {sup 192}Ir sealed sources, carried out in the Testing and Nuclear Expertise Laboratory of the National Institute for Physics and Nuclear Engineering 'Horia Hulubei' (I.F.I.N.-HH), Romania. These sources were to be produced at I.F.I.N.-HH and were tested in order to obtain the authorization from The National Commission for Nuclear Activities Control (CNCAN). The sources are used for gammagraphy procedures or in gamma-defectoscopy equipment. The tests, measurement methods, and equipment used comply with CNCAN, IAEA, and international quality standards and regulations. The qualification tests are: 1. Radiological tests and measurements: dose equivalent rate at 1 m; tightness; dose equivalent rate at the surface of the transport and storage container; external unfixed contamination of the container surface. 2. Mechanical and climatic tests: thermal shock; external pressure; mechanical shock; vibrations; boring; thermal conditions for storage and transportation. After passing all tests, the Radiological Security Authorization for producing the {sup 192}Ir sealed sources was obtained. IFIN-HH can now meet demands for these sealed sources as the only manufacturer in Romania.

  14. Integration of different data gap filling techniques to facilitate assessment of polychlorinated biphenyls: A proof of principle case study (ASCCT meeting)

    EPA Science Inventory

    Data gap filling techniques are commonly used to predict hazard in the absence of empirical data. The most established techniques are read-across, trend analysis and quantitative structure-activity relationships (QSARs). Toxic equivalency factors (TEFs) are less frequently used d...

  15. Equivalence and Differences between Structural Equation Modeling and State-Space Modeling Techniques

    ERIC Educational Resources Information Center

    Chow, Sy-Miin; Ho, Moon-ho R.; Hamaker, Ellen L.; Dolan, Conor V.

    2010-01-01

    State-space modeling techniques have been compared to structural equation modeling (SEM) techniques in various contexts but their unique strengths have often been overshadowed by their similarities to SEM. In this article, we provide a comprehensive discussion of these 2 approaches' similarities and differences through analytic comparisons and…

  16. Dioxin equivalency: Challenge to dose extrapolation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, J.F. Jr.; Silkworth, J.B.

    1995-12-31

    Extensive research has shown that all biological effects of dioxin-like agents are mediated via a single biochemical target, the Ah receptor (AhR), and that the relative biologic potencies of such agents in any given system, coupled with their exposure levels, may be described in terms of toxic equivalents (TEQ). It has also shown that TEQ sources include not only chlorinated species such as the dioxins (PCDDs), PCDFs, and coplanar PCBs, but also non-chlorinated substances such as the PAHs of wood smoke, the AhR agonists of cooked meat, and the indolocarbazole (ICZ) derived from cruciferous vegetables. Humans have probably had elevated exposures to these non-chlorinated TEQ sources ever since the discoveries of fire, cooking, and the culinary use of Brassica spp. Recent assays of CYP1A2 induction show that these "natural" or "traditional" AhR agonists contribute 50-100 times as much to average human TEQ exposures as do the chlorinated xenobiotics. Currently, the safe doses of the xenobiotic TEQ sources are estimated from their NOAELs and large extrapolation factors derived from arbitrary mathematical models, whereas the NOAELs themselves are regarded as the safe doses for the TEQs of traditional dietary components. Available scientific data can neither support nor refute either approach to assessing the health risk of an individual chemical substance. However, if two substances are toxicologically equivalent, then their TEQ-adjusted health risks must also be equivalent, and the same dose extrapolation procedure should be used for both.
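    The TEQ bookkeeping described above is a TEF-weighted sum over congeners, TEQ = sum_i C_i × TEF_i. A minimal sketch (the TEF of 2,3,7,8-TCDD is 1.0 by definition; the other factors and all concentrations here are illustrative, not authoritative values):

    ```python
    def toxic_equivalents(concentrations, tefs):
        """TEQ = sum over congeners of concentration times its toxic
        equivalency factor (TEF), expressed in TCDD-equivalent units."""
        return sum(c * tefs[name] for name, c in concentrations.items())

    # Concentrations in pg/g; TCDD is the reference congener (TEF = 1.0),
    # the other TEFs are illustrative placeholders.
    tefs = {"2,3,7,8-TCDD": 1.0, "PeCDF": 0.3, "coplanar-PCB": 0.1}
    sample = {"2,3,7,8-TCDD": 2.0, "PeCDF": 10.0, "coplanar-PCB": 50.0}
    # Weighted sum: 2.0*1.0 + 10.0*0.3 + 50.0*0.1, in pg TEQ/g.
    print(toxic_equivalents(sample, tefs))
    ```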

  17. The study on increasing the equivalent SNR in the certain DOI by adjusting the SD separation in near-infrared brain imaging application

    NASA Astrophysics Data System (ADS)

    Wang, Jinhai; Liu, Dongyuan; Sun, Jinggong; Zhang, Yanjun; Sun, Qiuming; Ma, Jun; Zheng, Yu; Wang, Huiquan

    2016-10-01

    Near-infrared (NIR) brain imaging is one of the most promising techniques for brain research in recent years. As a significant supplement to clinical imaging techniques such as CT and MRI, the NIR technique achieves fast, non-invasive, and low-cost imaging of the brain, and is widely used for brain functional imaging and hematoma detection. Owing to the reduced optical attenuation in this spectral window, NIR imaging can achieve an imaging depth of up to several centimeters. The structure of the human brain is particularly complex: from the perspective of optical detection, the measurement light must pass through the skin, skull, cerebrospinal fluid (CSF), grey matter, and white matter, and then travel back along the reverse path to the detector. The more photons from the depth of interest (DOI) in the brain the detector captures, the better the detection accuracy and stability that can be obtained. In this study, the equivalent signal-to-noise ratio (ESNR), defined as the proportion of photons originating from the DOI to the total photons detected, was used to evaluate the best source-detector (SD) separation. A Monte-Carlo (MC) simulation of a multi-layer brain model was used to analyze the distribution of the ESNR along the radial direction for different DOIs and several basic brain optical and structural parameters. A map between the best detection SD separation, at which the ESNR is highest, and the brain parameters was established for choosing the best detection point in NIR brain imaging applications. The results showed that the ESNR is very sensitive to the SD separation, so choosing the best SD separation based on the ESNR is significant for NIR brain imaging; it provides an important reference and a new perspective for brain imaging in the near infrared.
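    The ESNR criterion amounts to choosing, among candidate SD separations, the one maximizing (photons detected from the DOI) / (total photons detected). A toy sketch with invented Monte-Carlo tallies, not results from the study:

    ```python
    def best_sd_separation(tallies):
        """tallies: {separation_mm: (photons_from_doi, total_photons)}.
        Returns the separation with the highest equivalent SNR (ESNR),
        together with the ESNR value for each candidate separation."""
        esnr = {sep: doi / total for sep, (doi, total) in tallies.items()}
        return max(esnr, key=esnr.get), esnr

    # Invented tallies: DOI sampling first improves, then worsens,
    # as the SD separation grows.
    tallies = {20: (120, 9000), 30: (250, 8000), 40: (180, 7000)}
    sep, esnr = best_sd_separation(tallies)
    print(sep)  # 30
    ```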

  18. The carbon footprint of Australian ambulance operations.

    PubMed

    Brown, Lawrence H; Canyon, Deon V; Buettner, Petra G; Crawford, J Mac; Judd, Jenni

    2012-12-01

    To determine the greenhouse gas emissions associated with the energy consumption of Australian ambulance operations, and to identify the predominant energy sources that contribute to those emissions. A two-phase study of operational and financial data from a convenience sample of Australian ambulance operations to inventory their energy consumption and greenhouse gas emissions for 1 year. State- and territory-based ambulance systems serving 58% of Australia's population and performing 59% of Australia's ambulance responses provided data for the study. Emissions for the participating systems totalled 67 390 metric tons of carbon dioxide equivalents. For ground ambulance operations, emissions averaged 22 kg of carbon dioxide equivalents per ambulance response, 30 kg of carbon dioxide equivalents per patient transport and 3 kg of carbon dioxide equivalents per capita. Vehicle fuels accounted for 58% of the emissions from ground ambulance operations, with the remainder primarily attributable to electricity consumption. Emissions from air ambulance transport were nearly 200 times those for ground ambulance transport. On a national level, emissions from Australian ambulance operations are estimated to be between 110 000 and 120 000 tons of carbon dioxide equivalents each year. Vehicle fuels are the primary source of emissions for ground ambulance operations. Emissions from air ambulance transport are substantially higher than those for ground ambulance transport. © 2012 The Authors. EMA © 2012 Australasian College for Emergency Medicine and Australasian Society for Emergency Medicine.

  19. In vitro assessment of thyroid hormone disrupting activities in drinking water sources along the Yangtze River.

    PubMed

    Hu, Xinxin; Shi, Wei; Zhang, Fengxian; Cao, Fu; Hu, Guanjiu; Hao, Yingqun; Wei, Si; Wang, Xinru; Yu, Hongxia

    2013-02-01

    The thyroid hormone disrupting activities of drinking water sources from the lower reaches of the Yangtze River were examined using a reporter gene assay based on African green monkey kidney fibroblast (CV-1) cells. None of the eleven tested samples showed thyroid receptor (TR) agonist activity. Nine water samples exhibited TR antagonist activities, with equivalents referring to di-n-butyl phthalate (DNBP) (TR antagonist activity equivalents, ATR-EQ50s) ranging from 6.92 × 10¹ to 2.85 × 10² μg DNBP/L. The ATR-EQ50s and TR antagonist equivalent ranges (ATR-EQ30-80 ranges) for TR antagonist activities indicated that the water sample from site WX-8 posed the greatest health risks. The ATR-EQ80s of the water samples, ranging from 1.56 × 10³ to 6.14 × 10³ μg DNBP/L, were higher than the NOEC of DNBP. The results from instrumental analysis showed that DNBP might be responsible for the TR antagonist activities in these water samples. Water sources along the Yangtze River had thyroid hormone disrupting potential. Copyright © 2012 Elsevier Ltd. All rights reserved.

  20. Folk Theorems on the Correspondence between State-Based and Event-Based Systems

    NASA Astrophysics Data System (ADS)

    Reniers, Michel A.; Willemse, Tim A. C.

    Kripke Structures and Labelled Transition Systems are the two most prominent semantic models used in concurrency theory. Both models are commonly believed to be equi-expressive. One can find many ad-hoc embeddings of one of these models into the other. We build upon the seminal work of De Nicola and Vaandrager that firmly established the correspondence between stuttering equivalence in Kripke Structures and divergence-sensitive branching bisimulation in Labelled Transition Systems. We show that their embeddings can also be used for a range of other equivalences of interest, such as strong bisimilarity, simulation equivalence, and trace equivalence. Furthermore, we extend the results by De Nicola and Vaandrager by showing that there are additional translations that allow one to use minimisation techniques in one semantic domain to obtain minimal representatives in the other semantic domain for these equivalences.
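    One of the folk embeddings can be illustrated in a few lines. The sketch below moves Kripke-structure state labels onto incoming transitions, one common way to obtain a Labelled Transition System; it is only a schematic rendering, not the precise De Nicola-Vaandrager construction (which additionally handles initial states, deadlock, and divergence).

```python
def ks_to_lts(trans, labeling):
    """Embed a Kripke structure into an LTS by moving state labels onto
    incoming transitions: each KS transition s -> t becomes an LTS
    transition s --L(t)--> t, where L(t) is the (frozen) set of atomic
    propositions holding in t."""
    return {(s, frozenset(labeling[t]), t) for (s, t) in trans}
```

    On this encoding, behavioural equivalences of the LTS (bisimilarity, trace equivalence, etc.) can be compared against their state-based counterparts on the original Kripke structure.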

  1. Accounting for optical errors in microtensiometry.

    PubMed

    Hinton, Zachary R; Alvarez, Nicolas J

    2018-09-15

    Drop shape analysis (DSA) techniques measure interfacial tension subject to error in image analysis and the optical system. While considerable efforts have been made to minimize image analysis errors, very little work has treated optical errors. There are two main sources of error when considering the optical system: the angle of misalignment and the choice of focal plane. Due to the convoluted nature of these sources, small angles of misalignment can lead to large errors in measured curvature. We demonstrate using microtensiometry the contributions of these sources to measured errors in radius and, more importantly, deconvolute the effects of misalignment and focal plane. Our findings are expected to have broad implications for all optical techniques measuring interfacial curvature. A geometric model is developed to analytically determine the contributions of misalignment angle and choice of focal plane to measurement error for spherical cap interfaces. This work utilizes a microtensiometer to validate the geometric model and to quantify the effect of both sources of error. For the case of a microtensiometer, an empirical calibration is demonstrated that corrects for optical errors and drastically simplifies implementation. The combination of geometric modeling and experimental results reveals a convoluted relationship between the true and measured interfacial radius as a function of the misalignment angle and choice of focal plane. The validated geometric model produces a full operating window that is strongly dependent on the capillary radius and spherical cap height. In all cases, the contribution of optical errors is minimized when the height of the spherical cap is equivalent to the capillary radius, i.e. a hemispherical interface. Understanding these errors allows correct measurement of interfacial curvature and interfacial tension regardless of experimental setup. For the case of microtensiometry, this greatly decreases the time for experimental setup and increases experimental accuracy. In a broad sense, this work outlines the importance of optical errors in all DSA techniques. More specifically, these results have important implications for all microscale and microfluidic measurements of interface curvature. Copyright © 2018 Elsevier Inc. All rights reserved.
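    The hemisphere result can be connected to elementary spherical-cap geometry. The sketch below uses only the standard cap identity R = (a² + h²)/(2h) and the Young-Laplace relation Δp = 2γ/R for a spherical interface; it illustrates why h = a is a natural operating point, and is not the paper's full optical-error model.

```python
def cap_radius(a, h):
    """Radius of curvature of a spherical cap with contact (capillary)
    radius a and cap height h: R = (a^2 + h^2) / (2h)."""
    return (a * a + h * h) / (2.0 * h)

def interfacial_tension(delta_p, a, h):
    """Young-Laplace for a spherical interface: delta_p = 2*gamma/R,
    so gamma = delta_p * R / 2."""
    return delta_p * cap_radius(a, h) / 2.0
```

    Setting dR/dh = 0 shows R attains its minimum value R = a exactly at h = a, the hemispherical interface at which the abstract reports optical errors are smallest.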

  2. Mission Connect Mild TBI Translational Research Consortium

    DTIC Science & Technology

    2009-08-01

    and only minimally effective in-vivo. Initially, we identified carbon nanomaterials as potent antioxidants using a chemical ORAC assay. We...radical absorbency capacity (ORAC) assay using a chemical source for the oxygen radical (,′-azodiisobutyramidine dihydrochloride) (Table 1). We...nanomaterials determined with a chemically based ORAC assay: Nanomaterial | Trolox Equivalents (TE) | Trolox Mass Equivalents (TME); p-SWCNT | 14046 | 5.02

  3. Cross-cultural equivalence of the patient- and parent-reported quality of life in short stature youth (QoLISSY) questionnaire.

    PubMed

    Bullinger, Monika; Quitmann, Julia; Silva, Neuza; Rohenkohl, Anja; Chaplin, John E; DeBusk, Kendra; Mimoun, Emmanuelle; Feigerlova, Eva; Herdman, Michael; Sanz, Dolores; Wollmann, Hartmut; Pleil, Andreas; Power, Michael

    2014-01-01

    Testing cross-cultural equivalence of patient-reported outcomes requires sufficiently large samples per country, which is difficult to achieve in rare endocrine paediatric conditions. We describe a novel approach to cross-cultural testing of the Quality of Life in Short Stature Youth (QoLISSY) questionnaire in five countries by sequentially taking one country out (TOCO) from the total sample and iteratively comparing the resulting psychometric performance. Development of the QoLISSY proceeded from focus group discussions through pilot testing to field testing in 268 short-statured patients and their parents. To explore cross-cultural equivalence, the iterative TOCO technique was used to examine and compare the validity, reliability, and convergence of patient and parent responses on the QoLISSY in the field test dataset, and to predict QoLISSY scores from clinical, socio-demographic and psychosocial variables. Validity and reliability indicators were satisfactory for each sample after iteratively omitting one country. Comparisons with the total sample revealed cross-cultural equivalence in internal consistency and construct validity for patients and parents, high inter-rater agreement and a substantial proportion of QoLISSY variance explained by predictors. The TOCO technique is a powerful method for overcoming the problems of country-specific testing of patient-reported outcome instruments. It provides empirical support for QoLISSY's cross-cultural equivalence and is recommended for future research.
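    The iterative take-one-country-out idea can be sketched as a leave-one-group-out loop over a reliability index. Cronbach's alpha is used here purely as an example statistic; the actual QoLISSY analysis examined several validity and reliability indicators, not just alpha.

```python
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, n_items) array of item scores.
    Standard alpha: k/(k-1) * (1 - sum(item variances)/total variance)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

def toco_alphas(items, country):
    """Take-one-country-out: recompute alpha with each country's
    respondents removed, to check stability of the index across
    countries."""
    items = np.asarray(items, dtype=float)
    country = np.asarray(country)
    return {c: cronbach_alpha(items[country != c]) for c in np.unique(country)}
```

    If the alphas (or other indicators) stay close to the full-sample value whichever country is omitted, that is evidence of cross-cultural equivalence in the sense used by the abstract.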

  4. Resonant Rectifier ICs for Piezoelectric Energy Harvesting Using Low-Voltage Drop Diode Equivalents

    PubMed Central

    Din, Amad Ud; Chandrathna, Seneke Chamith; Lee, Jong-Wook

    2017-01-01

    Herein, we present the design technique of a resonant rectifier for piezoelectric (PE) energy harvesting. We propose two diode equivalents to reduce the voltage drop in the rectifier operation, a minuscule-drop-diode equivalent (MDDE) and a low-drop-diode equivalent (LDDE). The diode equivalents are embedded in resonant rectifier integrated circuits (ICs), which use symmetric bias-flip to reduce the power used for charging and discharging the internal capacitance of a PE transducer. The self-startup function is supported by synchronously generating control pulses for the bias-flip from the PE transducer. Two resonant rectifier ICs, using both MDDE and LDDE, are fabricated in a 0.18 μm CMOS process and their performances are characterized under external and self-power conditions. Under the external-power condition, the rectifier using LDDE delivers an output power POUT of 564 μW and a rectifier output voltage VRECT of 3.36 V with a power transfer efficiency of 68.1%. Under self-power conditions, the rectifier using MDDE delivers a POUT of 288 μW and a VRECT of 2.4 V with a corresponding efficiency of 78.4%. Using the proposed bias-flip technique, the power extraction capability of the proposed rectifier is 5.9 and 3.0 times higher than that of a conventional full-bridge rectifier. PMID:28422085

  5. Resonant Rectifier ICs for Piezoelectric Energy Harvesting Using Low-Voltage Drop Diode Equivalents.

    PubMed

    Din, Amad Ud; Chandrathna, Seneke Chamith; Lee, Jong-Wook

    2017-04-19

    Herein, we present the design technique of a resonant rectifier for piezoelectric (PE) energy harvesting. We propose two diode equivalents to reduce the voltage drop in the rectifier operation, a minuscule-drop-diode equivalent (MDDE) and a low-drop-diode equivalent (LDDE). The diode equivalents are embedded in resonant rectifier integrated circuits (ICs), which use symmetric bias-flip to reduce the power used for charging and discharging the internal capacitance of a PE transducer. The self-startup function is supported by synchronously generating control pulses for the bias-flip from the PE transducer. Two resonant rectifier ICs, using both MDDE and LDDE, are fabricated in a 0.18 μm CMOS process and their performances are characterized under external and self-power conditions. Under the external-power condition, the rectifier using LDDE delivers an output power POUT of 564 μW and a rectifier output voltage VRECT of 3.36 V with a power transfer efficiency of 68.1%. Under self-power conditions, the rectifier using MDDE delivers a POUT of 288 μW and a VRECT of 2.4 V with a corresponding efficiency of 78.4%. Using the proposed bias-flip technique, the power extraction capability of the proposed rectifier is 5.9 and 3.0 times higher than that of a conventional full-bridge rectifier.

  6. [Preliminary investigation on emission of PCDD/Fs and DL-PCBs through flue gas from coke plants in China].

    PubMed

    Sun, Peng-Cheng; Li, Xiao-Lu; Cheng, Gang; Lu, Yong; Wu, Chang-Min; Wu, Chang-Min; Luo, Jin-Hong

    2014-07-01

    According to the Stockholm Convention, polychlorinated dibenzo-p-dioxins/dibenzofurans (PCDD/Fs) and dioxin-like polychlorinated biphenyls (DL-PCBs) are classified as unintentionally produced persistent organic pollutants (UP-POPs), collectively named dioxins. Coke production, a thermal process involving organic matter, metals, and chlorine, is considered a potential source of dioxins, yet intensive studies on dioxin emissions from the coking industry are still very scarce. In order to estimate the emission properties of dioxins from coke production, an isotope dilution HRGC/HRMS technique was used to determine the concentration of dioxins in flue gas during the heating of coal. Three results were obtained. First, total toxic equivalents at each stationary emission source were in the range of 3.9-30.0 pg WHO-TEQ·m⁻³ for dioxins, which is lower than for other thermal processes such as municipal solid waste incineration. Second, higher chlorinated PCDD/Fs were the dominant congeners. Third, dioxin emissions depended on the coking pattern: stamping coking and a higher coking chamber may lead to lower emissions.

  7. Size-of-source Effect in Infrared Thermometers with Direct Reading of Temperature

    NASA Astrophysics Data System (ADS)

    Manoi, A.; Saunders, P.

    2017-07-01

    The size-of-source effect (SSE) for six infrared (IR) thermometers with direct reading of temperature was measured in this work. The alternative direct method for SSE determination, where the aperture size is fixed and the measurement distance is varied, was used in this study. The experimental equivalence between the usual and the alternative direct methods is presented. The magnitudes of the SSE for different types of IR thermometers were investigated. The maxima of the SSE were found to be up to 5 %, 8 %, and 28 % for focusable, closed-focus, and open-focus thermometers, respectively. At 275°C, an SSE of 28 % corresponds to 52°C, indicating the severe effect on the accuracy of this type of IR thermometer. A method to realize the calibration conditions used by the manufacturer, in terms of aperture size and measurement distance, is discussed and validated by experimental results. This study would be of benefit to users in choosing the best IR thermometer to match their work and for calibration laboratories in selecting the technique most suitable for determining the SSE.

  8. Asking the right questions in the right way: the need for a shift in research on psychological treatments for addiction.

    PubMed

    Orford, Jim

    2008-06-01

    To identify possible reasons for the disappointingly negative results of methodologically rigorous controlled trials of psychological treatments in the addictions field. A selective overview of the literature on addictive behaviour change. Eight failings of existing research are described: failing to account for the outcome equivalence paradox; neglecting relationships in favour of techniques; failing to integrate treatment research and research on unaided change; imposing an inappropriate time-scale on the change process; failing to take a systems or social network view; ignoring therapists' tacit theories; not including the patient's view; and displaying an ignorance of modern developments in the philosophy of science. Treatment research has been asking the wrong questions in the wrong way. Three necessary shifts in ways of conducting research are proposed: (i) the field should stop studying named techniques and focus instead on change processes; (ii) change processes should be studied within the broader, longer-acting systems of which treatment is part; and (iii) science in the field should be brought up to date by acknowledging a variety of sources of useful knowledge.

  9. ComDim for explorative multi-block data analysis of Cantal-type cheeses: Effects of salts, gentle heating and ripening.

    PubMed

    Loudiyi, M; Rutledge, D N; Aït-Kaddour, A

    2018-10-30

    The Common Dimension (ComDim) chemometric method for multi-block data analysis was employed to evaluate the impact of different added salts and ripening times on the physicochemical, color, dynamic low-amplitude oscillatory rheology, texture profile, and molecular structure (fluorescence and MIR spectroscopies) properties of five Cantal-type cheeses. Firstly, Independent Components Analysis (ICA) was applied separately to the fluorescence and MIR spectra in order to extract the relevant signal sources and the associated proportions related to molecular structure characteristics. ComDim was then applied to the 31 data tables corresponding to the proportions of the ICA signals obtained for the spectral methods and to the global analysis of the cheeses by the other techniques. The ComDim results indicated that cheeses made with 50% NaCl or with 75:25% NaCl/KCl generally exhibited equivalent structural, textural, meltability and color properties. The proposed methodology demonstrates the applicability of ComDim for the characterization of samples when different techniques describe the same samples. Copyright © 2018 Elsevier Ltd. All rights reserved.

  10. Nanox: a miniature mechanical stress rig designed for near-field X-ray diffraction imaging techniques.

    PubMed

    Gueninchault, N; Proudhon, H; Ludwig, W

    2016-11-01

    Multi-modal characterization of polycrystalline materials by combined use of three-dimensional (3D) X-ray diffraction and imaging techniques may be considered as the 3D equivalent of surface studies in the electron microscope combining diffraction and other imaging modalities. Since acquisition times at synchrotron sources are nowadays compatible with four-dimensional (time lapse) studies, suitable mechanical testing devices are needed which enable switching between these different imaging modalities over the course of a mechanical test. Here a specifically designed tensile device, fulfilling severe space constraints and permitting to switch between X-ray (holo)tomography, diffraction contrast tomography and topotomography, is presented. As a proof of concept the 3D characterization of an Al-Li alloy multicrystal by means of diffraction contrast tomography is presented, followed by repeated topotomography characterization of one selected grain at increasing levels of deformation. Signatures of slip bands and sudden lattice rotations inside the grain have been shown by means of in situ topography carried out during the load ramps, and diffraction spot peak broadening has been monitored throughout the experiment.

  11. Nanox: a miniature mechanical stress rig designed for near-field X-ray diffraction imaging techniques

    PubMed Central

    Gueninchault, N.; Proudhon, H.; Ludwig, W.

    2016-01-01

    Multi-modal characterization of polycrystalline materials by combined use of three-dimensional (3D) X-ray diffraction and imaging techniques may be considered as the 3D equivalent of surface studies in the electron microscope combining diffraction and other imaging modalities. Since acquisition times at synchrotron sources are nowadays compatible with four-dimensional (time lapse) studies, suitable mechanical testing devices are needed which enable switching between these different imaging modalities over the course of a mechanical test. Here a specifically designed tensile device, fulfilling severe space constraints and permitting to switch between X-ray (holo)tomography, diffraction contrast tomography and topotomography, is presented. As a proof of concept the 3D characterization of an Al–Li alloy multicrystal by means of diffraction contrast tomography is presented, followed by repeated topotomography characterization of one selected grain at increasing levels of deformation. Signatures of slip bands and sudden lattice rotations inside the grain have been shown by means of in situ topography carried out during the load ramps, and diffraction spot peak broadening has been monitored throughout the experiment. PMID:27787253

  12. Alternative Fuels Data Center: District of Columbia Transportation Data for

    Science.gov Websites

    Transportation fuel and electricity consumption data for the District of Columbia, including average prices per gasoline gallon equivalent (GGE), e.g. $2.96/gallon ($2.66/GGE). Sources: State Energy Data System; BioFuels Atlas from the National Renewable Energy Laboratory.

  13. 78 FR 53020 - Branch Technical Position on the Import of Non-U.S. Origin Radioactive Sources

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-28

    ... produced radioisotopes or Radium-226 which can be disposed of in non-Part 61 or equivalent facilities'' as... Import of Non-U.S. Origin Radioactive Sources AGENCY: U.S. Nuclear Regulatory Commission. ACTION: Final... Non-U.S. Origin Sources to provide additional guidance on the application of this exclusion in the...

  14. An analysis of the radiation from apertures in curved surfaces by the geometrical theory of diffraction. [ray technique for electromagnetic fields

    NASA Technical Reports Server (NTRS)

    Pathak, P. H.; Kouyoumjian, R. G.

    1974-01-01

    In this paper the geometrical theory of diffraction is extended to treat the radiation from apertures or slots in convex perfectly conducting surfaces. It is assumed that the tangential electric field in the aperture is known so that an equivalent infinitesimal source can be defined at each point in the aperture. Surface rays emanate from this source which is a caustic of the ray system. A launching coefficient is introduced to describe the excitation of the surface ray modes. If the field radiated from the surface is desired, the ordinary diffraction coefficients are used to determine the field of the rays shed tangentially from the surface rays. The field of the surface ray modes is not the field on the surface; hence if the mutual coupling between slots is of interest, a second coefficient related to the launching coefficient must be employed. In the region adjacent to the shadow boundary, the component of the field directly radiated from the source is represented by Fock-type functions. In the illuminated region the incident radiation from the source (this does not include the diffracted field components) is treated by geometrical optics. This extension of the geometrical theory of diffraction is applied to calculate the radiation from slots on elliptic cylinders, spheres, and spheroids.

  15. Health risk assessment of polycyclic aromatic hydrocarbons in the source water and drinking water of China: Quantitative analysis based on published monitoring data.

    PubMed

    Wu, Bing; Zhang, Yan; Zhang, Xu-Xiang; Cheng, Shu-Pei

    2011-12-01

    A carcinogenic risk assessment of polycyclic aromatic hydrocarbons (PAHs) in the source water and drinking water of China was conducted using probabilistic techniques from a national perspective. Published monitoring data for PAHs were gathered and converted into BaP equivalent (BaPeq) concentrations. Based on the transformed data, a comprehensive risk assessment was performed considering different age groups and exposure pathways. Monte Carlo simulation and sensitivity analysis were applied to quantify the uncertainties of the risk estimation. The risk analysis indicated that the risk values for children and teens were lower than the acceptable value (1.00E-05), indicating no significant carcinogenic risk. The probability of risk values above 1.00E-05 was 5.8% and 6.7% for the adult and lifetime groups, respectively. Overall, the carcinogenic risks of PAHs in the source water and drinking water of China were mostly acceptable. However, specific regions, such as the Yellow River at the Lanzhou reach and the Qiantang River, deserve more attention. Notwithstanding the uncertainties inherent in the risk assessment, this study is the first attempt to provide information on the carcinogenic risk of PAHs in the source water and drinking water of China, and might be useful for potential strategies of carcinogenic risk management and reduction. Copyright © 2011 Elsevier B.V. All rights reserved.
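    The probabilistic machinery behind such an assessment can be sketched with the standard ingestion-exposure formula and a Monte Carlo loop. All parameter distributions below are hypothetical placeholders and do not reproduce the paper's inputs; the slope factor 7.3 (mg/kg·day)⁻¹ is the commonly cited oral value for benzo[a]pyrene.

```python
import random

def carcinogenic_risk(c_ug_per_l, ir_l_per_day, ef_days, ed_years,
                      sf_per_mg_kg_day, bw_kg, at_days):
    """Lifetime cancer risk from water ingestion:
    risk = (C * IR * EF * ED / (BW * AT)) * SF,
    with C converted from ug/L to mg/L so units cancel to a
    dimensionless probability."""
    cdi = (c_ug_per_l * 1e-3) * ir_l_per_day * ef_days * ed_years / (bw_kg * at_days)
    return cdi * sf_per_mg_kg_day

def exceedance_probability(n=10000, threshold=1e-5, seed=0):
    """Monte Carlo: fraction of sampled exposure scenarios whose risk
    exceeds the acceptance threshold (all distributions hypothetical)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        c = rng.lognormvariate(-3.0, 1.0)   # BaP-eq concentration, ug/L
        ir = rng.uniform(1.0, 3.0)          # ingestion rate, L/day
        bw = rng.uniform(50.0, 80.0)        # body weight, kg
        r = carcinogenic_risk(c, ir, 365, 30, 7.3, bw, 70 * 365)
        hits += r > threshold
    return hits / n
```

    The reported 5.8% and 6.7% figures in the abstract are exactly this kind of exceedance probability, computed from the study's fitted input distributions.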

  16. Settling equivalence of detrital minerals and grain-size dependence of sediment composition

    NASA Astrophysics Data System (ADS)

    Garzanti, Eduardo; Andò, Sergio; Vezzoli, Giovanni

    2008-08-01

    This study discusses the laws which govern sediment deposition, and consequently determine size-dependent compositional variability. A theoretical approach is substantiated by robust datasets on major Alpine, Himalayan, and African sedimentary systems. Integrated (bulk-petrography, heavy-mineral, X-ray powder diffraction) multiple-window analyses at 0.25ϕ to 0.50ϕ sieve interval of eighty-five fluvial, beach, and eolian-dune samples, ranging from very fine silt to coarse sand, document homologous intrasample compositional trends, revealed by systematic concentration of denser grains in finer-grained fractions (“size-density sorting”). These trends are explained by the settling-equivalence principle, stating that detrital minerals are deposited together if their settling velocity is the same. Settling of silt is chiefly resisted by fluid viscosity, and Stokes' law predicts that size differences between detrital minerals in ϕ units (“size shifts”) are half the difference between the logarithms of their submerged densities. Settling of pebbles is chiefly resisted by turbulence effects, and the Impact law predicts size shifts double those of Stokes' law. Settling of sand is resisted by both viscosity and turbulence, the settling-equivalence formula is complex, and size shifts increase - with increasing settling velocity and grain size - from those predicted by Stokes' law to those predicted by the Impact law. In wind-laid sands, size shifts match those predicted by the Impact law; size-density sorting is thus greater than in water-laid fine sands. New analytical, graphical, and statistical techniques for rigorous settling-equivalence analysis of terrigenous sediments are illustrated. Deviations associated with non-spherical shape, density anomalies, inheritance from source rocks, or mixing of detrital species with contrasting provenance and different size distribution are also tentatively assessed. 
Such integrated theoretical and experimental approach allows us to mathematically predict intrasample compositional variability of water-laid and wind-laid sediments, once the density of detrital components is known.
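    The size-shift rules quoted above translate directly into code. Assuming base-2 logarithms (the usual convention for the ϕ grain-size scale), the Stokes shift is half the difference of the logs of the submerged densities, and the Impact shift is twice the Stokes shift:

```python
import math

def stokes_size_shift(rho_a, rho_b, rho_fluid=1000.0):
    """Size shift in phi units between settling-equivalent grains of
    densities rho_a and rho_b (kg/m^3) in the Stokes (viscous) regime:
    half the difference of the base-2 logs of the submerged densities."""
    return 0.5 * math.log2((rho_b - rho_fluid) / (rho_a - rho_fluid))

def impact_size_shift(rho_a, rho_b, rho_fluid=1000.0):
    """Impact (turbulent) regime: double the Stokes-law shift."""
    return 2.0 * stokes_size_shift(rho_a, rho_b, rho_fluid)
```

    For quartz (about 2650 kg/m³) against zircon (about 4650 kg/m³) in water, this gives roughly 0.57 ϕ in the viscous regime and roughly 1.15 ϕ in the turbulent regime, i.e. the denser mineral is deposited at a measurably finer size.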

  17. Improving cerebellar segmentation with statistical fusion

    NASA Astrophysics Data System (ADS)

    Plassard, Andrew J.; Yang, Zhen; Prince, Jerry L.; Claassen, Daniel O.; Landman, Bennett A.

    2016-03-01

    The cerebellum is a somatotopically organized central component of the central nervous system, well known to be involved in motor coordination and with increasingly recognized roles in cognition and planning. Recent work in multi-atlas labeling has created methods that offer the potential for fully automated 3-D parcellation of the cerebellar lobules and vermis (which are organizationally equivalent to cortical gray matter areas). This work explores the trade-offs of using different statistical fusion techniques and post hoc optimizations in two datasets with distinct imaging protocols. We offer a novel fusion technique by extending the ideas of the Selective and Iterative Method for Performance Level Estimation (SIMPLE) to a patch-based performance model. We demonstrate the effectiveness of our algorithm, Non-Local SIMPLE, for segmentation of a mixed population of healthy subjects and patients with severely affected cerebellar anatomy. Under the first imaging protocol, we show that Non-Local SIMPLE outperforms previous gold-standard segmentation techniques. Under the second imaging protocol, we show that Non-Local SIMPLE outperforms previous gold-standard techniques but is outperformed by a non-locally weighted vote with the deeper population of atlases available. This work advances the state of the art in open-source cerebellar segmentation algorithms and offers the opportunity for routinely including cerebellar segmentation in magnetic resonance imaging studies that acquire whole-brain T1-weighted volumes with approximately 1 mm isotropic resolution.

  18. Yield and depth Estimation of Selected NTS Nuclear and SPE Chemical Explosions Using Source Equalization by modeling Local and Regional Seismograms (Invited)

    NASA Astrophysics Data System (ADS)

    Saikia, C. K.; Roman-nieves, J. I.; Woods, M. T.

    2013-12-01

    Source parameters of nuclear and chemical explosions are often estimated by matching either the corner frequency and spectral level of a single event or the spectral ratio when spectra from two events are available with known source parameters for one. In this study, we propose an alternative method in which waveforms from two or more events can be simultaneously equalized by setting the differential of the processed seismograms at one station from any two individual events to zero. The method involves convolving the equivalent Mueller-Murphy displacement source time function (MMDSTF) of one event with the seismogram of the second event and vice-versa, and then computing their difference seismogram. The MMDSTF is computed at the elastic radius, including both near- and far-field terms. For this method to yield accurate source parameters, an inherent assumption is that the Green's functions for any paired events from the source to a receiver are the same. In the frequency limit of the seismic data, this is a reasonable assumption, as concluded from a comparison of Green's functions computed for flat-earth models at source depths ranging from 100 m to 1 km. Frequency-domain analysis of the initial P wave is, however, sensitive to the depth-phase interaction, and if tracked meticulously can help estimate the event depth. We applied this method to the local waveforms recorded from the three SPE shots and precisely determined their yields. These high-frequency seismograms exhibit significant lateral path effects in spectrogram analysis and 3D numerical computations, but the source equalization technique is independent of such variation as long as the instrument characteristics are well preserved. We are currently estimating the uncertainty in the derived source parameters assuming the yields of the SPE shots are unknown. We also collected regional waveforms from 95 NTS explosions at regional stations ALQ, ANMO, CMB, COR, JAS, LON, PAS, PFO and RSSD. We are currently employing a station-based analysis using the equalization technique to estimate the depths and yields of many events relative to those of the announced explosions, and to develop their relationship with the Mw and Mo for the NTS explosions.
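    The equalization step itself reduces to a cross-convolution identity: if each seismogram is the convolution of its source time function with a common Green's function, then conv(stf1, seis2) equals conv(stf2, seis1), so their difference vanishes exactly when the trial source time functions are correct. A minimal numpy sketch with synthetic traces (not actual MMDSTFs):

```python
import numpy as np

def equalization_residual(stf1, seis2, stf2, seis1):
    """RMS of the cross-convolution difference
    conv(stf1, seis2) - conv(stf2, seis1). If seis_i = conv(stf_i, green)
    with a shared Green's function, the residual is zero by the
    commutativity and associativity of convolution."""
    d = np.convolve(stf1, seis2) - np.convolve(stf2, seis1)
    return float(np.sqrt(np.mean(d * d)))
```

    In practice one would minimize this residual over the Mueller-Murphy source parameters (yield, depth) of each event pair, which is the equalization described in the abstract.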

  19. Heavy ion contributions to organ dose equivalent for the 1977 galactic cosmic ray spectrum

    NASA Astrophysics Data System (ADS)

    Walker, Steven A.; Townsend, Lawrence W.; Norbury, John W.

    2013-05-01

    Estimates of organ dose equivalents for the skin, eye lens, blood forming organs, central nervous system, and heart of female astronauts from exposures to the 1977 solar minimum galactic cosmic radiation spectrum for various shielding geometries involving simple spheres and locations within the Space Transportation System (space shuttle) and the International Space Station (ISS) are made using the HZETRN 2010 space radiation transport code. The dose equivalent contributions are broken down by charge groups in order to better understand the sources of the exposures to these organs. For thin shields, contributions from ions heavier than alpha particles comprise at least half of the organ dose equivalent. For thick shields, such as the ISS locations, heavy ions contribute less than 30% and in some cases less than 10% of the organ dose equivalent. Secondary neutron production contributions in thick shields also tend to be as large, or larger, than the heavy ion contributions to the organ dose equivalents.
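    The charge-group bookkeeping in such an analysis amounts to a quality-factor-weighted sum, H = Σᵢ Qᵢ·Dᵢ, with the fractional contributions revealing which ions dominate behind a given shield. The numbers in the example are arbitrary placeholders, not values from the study:

```python
def dose_equivalent(contributions):
    """contributions: dict mapping charge group -> (absorbed_dose_mGy,
    mean_quality_factor). Returns (total dose equivalent in mSv,
    {group: fractional contribution})."""
    h = {g: d * q for g, (d, q) in contributions.items()}
    total = sum(h.values())
    return total, {g: v / total for g, v in h.items()}
```

    Behind thin shields the heavy-ion groups carry large quality factors and can dominate the total, while behind thick shields (as at the ISS locations in the abstract) fragmentation shifts the balance toward light ions and secondary neutrons.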

  20. Incorporating Measurement Non-Equivalence in a Cross-Study Latent Growth Curve Analysis

    PubMed Central

    Flora, David B.; Curran, Patrick J.; Hussong, Andrea M.; Edwards, Michael C.

    2009-01-01

    A large literature emphasizes the importance of testing for measurement equivalence in scales that may be used as observed variables in structural equation modeling applications. When the same construct is measured across more than one developmental period, as in a longitudinal study, it can be especially critical to establish measurement equivalence, or invariance, across the developmental periods. Similarly, when data from more than one study are combined into a single analysis, it is again important to assess measurement equivalence across the data sources. Yet, how to incorporate non-equivalence when it is discovered is not well described for applied researchers. Here, we present an item response theory approach that can be used to create scale scores from measures while explicitly accounting for non-equivalence. We demonstrate these methods in the context of a latent curve analysis in which data from two separate studies are combined to create a single longitudinal model spanning several developmental periods. PMID:19890440

  1. Analysis of Ground Motion from An Underground Chemical Explosion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pitarka, Arben; Mellors, Robert J.; Walter, William R.

    In this paper we investigate the excitation and propagation of far-field seismic waves from the 905 kg trinitrotoluene-equivalent underground chemical explosion SPE-3, recorded during the Source Physics Experiment (SPE) at the Nevada National Security Site. The recorded far-field ground motion at short and long distances is characterized by substantial shear-wave energy and large azimuthal variations in P- and S-wave amplitudes. The shear waves observed on the transverse component of sensors at epicentral distances <50 m suggest they were generated at or very near the source. The relative amplitude of the shear waves grows as the waves propagate away from the source. We analyze and model the shear-wave excitation during the explosion in the 0.01-10 Hz frequency range, at epicentral distances of up to 1 km, using two simulation techniques. One is based on the empirical isotropic Mueller-Murphy (MM) (Mueller and Murphy, 1971) nuclear explosion source model and 3D anelastic wave propagation modeling. The second uses a physics-based approach that couples hydrodynamic modeling of the chemical explosion source with anelastic wave propagation modeling. Comparisons with recorded data show the MM source model overestimates the SPE-3 far-field ground motion by an average factor of 4. The observations show that shear waves with substantial high-frequency energy were generated at the source; however, to match the observations, additional shear waves from scattering, including surface topography and heterogeneous shallow structure, were needed to amplify the far-field shear motion. Comparisons between empirically based isotropic and physics-based anisotropic source models suggest that both wave-scattering effects and near-field nonlinear effects are needed to explain the amplitude and irregular radiation pattern of shear motion observed during the SPE-3 explosion.

  2. Analysis of Ground Motion from An Underground Chemical Explosion

    DOE PAGES

    Pitarka, Arben; Mellors, Robert J.; Walter, William R.; ...

    2015-09-08

    In this paper we investigate the excitation and propagation of far-field seismic waves from the 905 kg trinitrotoluene-equivalent underground chemical explosion SPE-3, recorded during the Source Physics Experiment (SPE) at the Nevada National Security Site. The recorded far-field ground motion at short and long distances is characterized by substantial shear-wave energy and large azimuthal variations in P- and S-wave amplitudes. The shear waves observed on the transverse component of sensors at epicentral distances <50 m suggest they were generated at or very near the source. The relative amplitude of the shear waves grows as the waves propagate away from the source. We analyze and model the shear-wave excitation during the explosion in the 0.01-10 Hz frequency range, at epicentral distances of up to 1 km, using two simulation techniques. One is based on the empirical isotropic Mueller-Murphy (MM) (Mueller and Murphy, 1971) nuclear explosion source model and 3D anelastic wave propagation modeling. The second uses a physics-based approach that couples hydrodynamic modeling of the chemical explosion source with anelastic wave propagation modeling. Comparisons with recorded data show the MM source model overestimates the SPE-3 far-field ground motion by an average factor of 4. The observations show that shear waves with substantial high-frequency energy were generated at the source; however, to match the observations, additional shear waves from scattering, including surface topography and heterogeneous shallow structure, were needed to amplify the far-field shear motion. Comparisons between empirically based isotropic and physics-based anisotropic source models suggest that both wave-scattering effects and near-field nonlinear effects are needed to explain the amplitude and irregular radiation pattern of shear motion observed during the SPE-3 explosion.

  3. New seminal variety of Stevia rebaudiana: Obtaining fractions with high antioxidant potential of leaves.

    PubMed

    Milani, Paula G; Formigoni, Maysa; Dacome, Antonio S; Benossi, Livia; Costa, Cecília E M DA; Costa, Silvio C DA

    2017-01-01

    The aim of this study was to determine the composition and antioxidant potential of leaves of a new variety of Stevia rebaudiana (Stevia UEM-13). Leaves of Stevia UEM-13 contain rebaudioside A as the main glycoside, while most wild Stevia plants contain stevioside. Furthermore, this variety can be multiplied by seed, which reduces the cost of plant culture compared with clonal varieties that are multiplied by buds and require sophisticated and expensive seedling production systems. Ethanol and methanol were used for extraction of the bioactive compounds. The methanolic extract was fractionated sequentially with hexane, chloroform, ethyl acetate and isobutanol; the highest concentrations of phenolic compounds and flavonoids were obtained in the ethyl acetate fraction (524.20 mg gallic acid equivalent/g; 380.62 µg quercetin equivalent/g). The glycoside content varied greatly among the fractions (0.5%-65.3%). The highest antioxidant potential was found in the methanol extract and the ethyl acetate fraction, with 93.5% and 97.32%, respectively. In addition to being an excellent source of extracts rich in glycosides, this new variety can also be used as raw material for the production of extracts or fractions with significant antioxidant activity and potential for use as food additives.

  4. Characterization of neutron calibration fields at the TINT's 50 Ci americium-241/beryllium neutron irradiator

    NASA Astrophysics Data System (ADS)

    Liamsuwan, T.; Channuie, J.; Ratanatongchai, W.

    2015-05-01

    Reliable measurement of neutron radiation is important for monitoring and protection in workplaces where neutrons are present. Although Thailand has been familiar with applications of neutron sources and neutron beams for many decades, no calibration facility dedicated to neutron measuring devices is available in the country. Recently, the Thailand Institute of Nuclear Technology (TINT) set up a multi-purpose irradiation facility equipped with a 50 Ci americium-241/beryllium neutron irradiator. The facility is intended for research, nuclear analytical techniques and, among other applications, calibration of neutron measuring devices. In this work, the neutron calibration fields were characterized in terms of neutron energy spectra and dose equivalent rates using Monte Carlo simulations, an in-house neutron spectrometer and commercial survey meters. The characterized fields provide neutron dose equivalent rates ranging from 156 μSv/h to 3.5 mSv/h, with nearly 100% of the dose contributed by neutrons of energies above 0.01 MeV. The gamma contamination was below 4.2-7.5%, depending on the irradiation configuration. The described neutron fields can be used for calibration testing and routine quality assurance of neutron dose rate meters and passive dosemeters commonly used in radiation protection dosimetry.

  5. Transport synthetic acceleration with opposing reflecting boundary conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zika, M.R.; Adams, M.L.

    2000-02-01

    The transport synthetic acceleration (TSA) scheme is extended to problems with opposing reflecting boundary conditions. This synthetic method employs a simplified transport operator as its low-order approximation. A procedure is developed that allows the use of the conjugate gradient (CG) method to solve the resulting low-order system of equations. Several well-known transport iteration algorithms are cast in a linear algebraic form to show their equivalence to standard iterative techniques. Source iteration in the presence of opposing reflecting boundary conditions is shown to be equivalent to a (poorly) preconditioned stationary Richardson iteration, with the preconditioner defined by the method of iterating on the incident fluxes on the reflecting boundaries. The TSA method (and any synthetic method) amounts to a further preconditioning of the Richardson iteration. The presence of opposing reflecting boundary conditions requires special consideration when developing a procedure to realize the CG method for the proposed system of equations. The CG iteration may be applied only to symmetric positive definite matrices; this condition requires the algebraic elimination of the boundary angular corrections from the low-order equations. As a consequence of this elimination, evaluating the action of the resulting matrix on an arbitrary vector involves two transport sweeps and a transmission iteration. Results of applying the acceleration scheme to a simple test problem are presented.
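
    The linear-algebraic view described above can be illustrated on a generic symmetric positive definite system (a stand-in for the low-order operator, not an actual transport discretization): the stationary Richardson iteration x ← x + (b − Ax) converges slowly when the spectrum of A spans (0, 2), while CG on the same SPD matrix converges far faster.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 40
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
lam = np.linspace(0.1, 1.9, n)     # eigenvalues inside (0, 2)
A = Q @ np.diag(lam) @ Q.T         # SPD, so Richardson x <- x + (b - A x) converges
b = rng.standard_normal(n)
x_true = np.linalg.solve(A, b)

def richardson(A, b, iters):
    # Unpreconditioned stationary Richardson iteration.
    x = np.zeros_like(b)
    for _ in range(iters):
        x = x + (b - A @ x)
    return x

def cg(A, b, iters):
    # Textbook conjugate gradient for SPD A.
    x = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    rs = r @ r
    for _ in range(iters):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

err = lambda x: np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(err(richardson(A, b, 100)), err(cg(A, b, 40)))
```

    After 100 Richardson sweeps the relative error is still on the order of |1 − λ|^100 for the extremal eigenvalues, whereas CG reaches machine precision in at most n iterations, which is the motivation for recasting the accelerated transport equations in SPD form.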

  6. Estimation of ambient dose equivalent distribution in the 18F-FDG administration room using Monte Carlo simulation.

    PubMed

    Nagamine, Shuji; Fujibuchi, Toshioh; Umezu, Yoshiyuki; Himuro, Kazuhiko; Awamoto, Shinichi; Tsutsui, Yuji; Nakamura, Yasuhiko

    2017-03-01

    In this study, we estimated the ambient dose equivalent rate (hereafter "dose rate") in the fluoro-2-deoxy-D-glucose (FDG) administration room of our hospital using Monte Carlo simulations, and examined appropriate locations for medical personnel and a shielding method using a lead glass shield to reduce the dose rate during FDG injection. The FDG feed tube was modeled as a line source and the patient as a cube source. The dose rate distribution was calculated with a composite source combining the line and cube sources. The dose rate distribution was also calculated with a lead glass shield placed in the rear section of the lead-acrylic shield. The dose rate behind the automatic administration device decreased by 87% with respect to that behind the lead-acrylic shield. Upon positioning a 2.8-cm-thick lead glass shield, the dose rate behind the lead-acrylic shield decreased by 67%.

  7. CORRECTIONS ASSOCIATED WITH ON-PHANTOM CALIBRATIONS OF NEUTRON PERSONAL DOSEMETERS.

    PubMed

    Hawkes, N P; Thomas, D J; Taylor, G C

    2016-09-01

    The response of neutron personal dosemeters as a function of neutron energy and angle of incidence is typically measured by mounting the dosemeters on a slab phantom and exposing them to neutrons from an accelerator-based or radionuclide source. The phantom is placed close to the source (75 cm) so that the effect of scattered neutrons is negligible. It is usual to mount several dosemeters on the phantom together. Because the source is close, the source distance and the neutron incidence angle vary significantly over the phantom face, and each dosemeter may receive a different dose equivalent. This is particularly important when the phantom is angled away from normal incidence. With accelerator-produced neutrons, the neutron energy and fluence vary with emission angle relative to the charged particle beam that produces the neutrons, contributing further to differences in dose equivalent, particularly when the phantom is located at other than the straight-ahead position (0° to the beam). Corrections for these effects are quantified and discussed in this article. © Crown copyright 2015.

  8. Simulated impedance of diffusion in porous media

    DOE PAGES

    Cooper, Samuel J.; Bertei, Antonio; Finegan, Donal P.; ...

    2017-07-27

    This paper describes the use of a frequency domain, finite-difference scheme to simulate the impedance spectra of diffusion in porous microstructures. We investigate both open and closed systems for a range of ideal geometries, as well as some randomly generated synthetic volumes and tomographically derived microstructural data. In many cases, the spectra deviate significantly from the conventional Warburg-type elements typically used to represent diffusion in equivalent circuit analysis. Furthermore, a key finding is that certain microstructures show multiple peaks in the complex plane, which may be misinterpreted as separate electrochemical processes in real impedance data. This is relevant to battery electrode design as the techniques for nano-scale fabrication become more widespread. This simulation tool is provided as an open-source MATLAB application and is freely available online as part of the TauFactor platform.
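
    For reference, the conventional finite-length ("short", transmissive) Warburg element that such simulated spectra are compared against has the closed form Z(ω) = R·tanh(√(jωτ))/√(jωτ). A brief sketch with illustrative parameter values:

```python
import numpy as np

def warburg_short(omega, R=1.0, tau=1.0):
    # Finite-length (transmissive) Warburg element:
    #   Z(omega) = R * tanh(sqrt(j*omega*tau)) / sqrt(j*omega*tau)
    s = np.sqrt(1j * omega * tau)
    return R * np.tanh(s) / s

omega = np.logspace(-3, 3, 200)   # angular frequencies, rad/s
Z = warburg_short(omega)
# Low-frequency limit is resistive (|Z| -> R); the high-frequency
# phase tends to -45 degrees, the classic semi-infinite Warburg slope.
print(abs(Z[0]), np.degrees(np.angle(Z[-1])))
```

    Plotting −Im(Z) against Re(Z) over this frequency range gives the familiar 45° line folding into a single low-frequency arc; microstructures whose spectra show multiple arcs are the cases the paper flags as easy to misread as separate electrochemical processes.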

  9. Monitoring of atmospheric particles and ozone in Sequoia National Park: 1985-1987. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cahill, T.A.

    1989-06-01

    The Air Quality Group monitored particles and ozone in Sequoia National Park as part of an effort to understand the impact of acid deposition and other air pollutants on the park's forests and watersheds. For high-elevation ozone measurement, the project developed a new solar-powered ozone monitoring system. The particulate matter sampled was analyzed for elemental content using nuclear techniques. The measurements were correlated with meteorology, known elemental sources, and wet and dry deposition measurements. The results show that particulate matter at Sequoia National Park is similar to that present at other sites on the western slope of the Sierra Nevada range at equivalent elevations. Some anthropogenic species, including nickel and sulfate, are present in higher concentrations at Sequoia than at Yosemite National Park.

  10. Distinguishing Provenance Equivalence of Earth Science Data

    NASA Technical Reports Server (NTRS)

    Tilmes, Curt; Yesha, Ye; Halem, M.

    2010-01-01

    Reproducibility of scientific research relies on accurate and precise citation of data and the provenance of that data. Earth science data are often the result of applying complex data transformation and analysis workflows to vast quantities of data. Provenance information of data processing is used for a variety of purposes, including understanding the process and auditing as well as reproducibility. Certain provenance information is essential for producing scientifically equivalent data. Capturing and representing that provenance information and assigning identifiers suitable for precisely distinguishing data granules and datasets is needed for accurate comparisons. This paper discusses scientific equivalence and essential provenance for scientific reproducibility. We use the example of an operational earth science data processing system to illustrate the application of the technique of cascading digital signatures or hash chains to precisely identify sets of granules and as provenance equivalence identifiers to distinguish data made in an equivalent manner.
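
    A hash chain of this kind can be sketched in a few lines: each granule identifier commits to the identifiers of its inputs plus the processing step, so two granules produced by equivalent processing of equivalent inputs receive the same identifier. The function and field names below are illustrative, not the paper's actual schema:

```python
import hashlib

def provenance_id(parent_ids, process_version, payload_digest):
    # Cascading-hash equivalence identifier: the ID commits to the
    # input granules' IDs, the processing step, and the data digest.
    h = hashlib.sha256()
    for p in sorted(parent_ids):    # order-independent over inputs
        h.update(p.encode())
    h.update(process_version.encode())
    h.update(payload_digest.encode())
    return h.hexdigest()

raw = provenance_id([], "ingest-v1", "abc123")
a = provenance_id([raw], "calibrate-v2", "def456")
b = provenance_id([raw], "calibrate-v2", "def456")
c = provenance_id([raw], "calibrate-v3", "def456")  # changed processing step
print(a == b, a == c)   # equivalent runs match; a changed step does not
```

    Because the identifier of a downstream granule incorporates the identifiers of all ancestors, any change anywhere in the workflow propagates into every derived product's identifier, which is what makes the scheme usable for distinguishing scientific equivalence.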

  11. Language Measurement Equivalence of the Ethnic Identity Scale With Mexican American Early Adolescents

    PubMed Central

    White, Rebecca M. B.; Umaña-Taylor, Adriana J.; Knight, George P.; Zeiders, Katharine H.

    2011-01-01

    The current study considers methodological challenges in developmental research with linguistically diverse samples of young adolescents. By empirically examining the cross-language measurement equivalence of a measure assessing three components of ethnic identity development (i.e., exploration, resolution, and affirmation) among Mexican American adolescents, the study both assesses the cross-language measurement equivalence of a common measure of ethnic identity and provides an appropriate conceptual and analytical model for researchers needing to evaluate measurement scales translated into multiple languages. Participants are 678 Mexican-origin early adolescents and their mothers. Measures of exploration and resolution achieve the highest levels of equivalence across language versions. The measure of affirmation achieves high levels of equivalence. Results highlight potential ways to correct for any problems of nonequivalence across language versions of the affirmation measure. Suggestions are made for how researchers working with linguistically diverse samples can use the highlighted techniques to evaluate their own translated measures. PMID:22116736

  12. Measurement of greenhouse gas emissions from agricultural sites using open-path optical remote sensing method.

    PubMed

    Ro, Kyoung S; Johnson, Melvin H; Varma, Ravi M; Hashmonay, Ram A; Hunt, Patrick

    2009-08-01

    Improved characterization of distributed emission sources of greenhouse gases, such as methane from concentrated animal feeding operations, requires more accurate methods. One promising method, recently used by the USEPA, employs a vertical radial plume mapping (VRPM) algorithm with optical remote sensing techniques. We evaluated this method for estimating emission rates from simulated distributed methane sources. A scanning open-path tunable diode laser was used to collect path-integrated concentrations (PICs) along different optical paths on a vertical plane downwind of controlled methane releases. Each cycle consists of three ground-level PICs and two above-ground PICs. Three- to ten-cycle moving averages were used to reconstruct mass-equivalent concentration plume maps on the vertical plane. The VRPM algorithm estimated methane emission rates from the meteorological and PIC data collected concomitantly under different atmospheric stability conditions. The derived emission rates compared well with the actual release rates irrespective of atmospheric stability. The maximum error was 22% when 3-cycle moving-average PICs were used, decreasing to 11% when 10-cycle moving averages were used. Our validation results suggest that this VRPM method may be used for improved estimation of greenhouse gas emissions from a variety of agricultural sources.

  13. Virtual welding equipment for simulation of GMAW processes with integration of power source regulation

    NASA Astrophysics Data System (ADS)

    Reisgen, Uwe; Schleser, Markus; Mokrov, Oleg; Zabirov, Alexander

    2011-06-01

    A two-dimensional transient numerical analysis and computational module for simulating the electrical and thermal characteristics of electrode melting and metal transfer in gas metal arc welding (GMAW) processes is presented. The non-linear transient heat transfer equation is solved using a control-volume finite-difference technique. The computational module also includes the control and regulation algorithms of industrial welding power sources. The simulation outputs are the current and voltage waveforms, mean voltage drops at different parts of the circuit, total electric power, cathode, anode and arc powers, and arc length. We describe application of the model to the normal (constant-voltage) process and to pulsed processes with U/I and I/I modulation modes. Comparisons with experimental current and voltage waveforms show that the model predicts current, voltage and electric power with high accuracy. The model is used in the simulation package SimWeld to calculate the heat flux into the workpiece and the weld seam formation. From the calculated heat flux and weld pool sizes, an equivalent volumetric heat source according to the Goldak model can be generated. The method was implemented and investigated with the simulation software SimWeld developed by the ISF at RWTH Aachen University.

  14. Experimental Investigation of Unsteady Thrust Augmentation Using a Speaker-Driven Jet

    NASA Technical Reports Server (NTRS)

    Paxson, Daniel E.; Wernet, Mark P.; John, Wentworth T.

    2007-01-01

    An experimental investigation is described in which a simple speaker-driven jet was used as a pulsed thrust source (driver) for an ejector configuration. The objectives of the investigation were twofold. The first was to expand the experimental body of evidence showing that an unsteady thrust source, combined with a properly sized ejector, generally yields higher thrust augmentation values than a similarly sized, steady driver of equivalent thrust. The second objective was to identify characteristics of the unsteady driver that may be useful for sizing ejectors and for predicting the thrust augmentation levels that may be achieved. The speaker-driven jet provided a convenient source for the investigation because it is entirely unsteady (i.e., it has no mean velocity component) and because relevant parameters such as frequency, time-averaged thrust, and diameter are easily variable. The experimental setup is described, as are the two main measurement techniques employed: thrust measurement and digital particle imaging velocimetry of the driver. It is shown that thrust augmentation values as high as 1.8 were obtained, that the diameter of the best ejector scaled with the dimensions of the emitted vortex, and that the so-called formation time serves as a useful dimensionless parameter by which to characterize the jet and predict performance.

  15. Primary radiotherapy for carcinoma of the endometrium using external beam radiotherapy and single line source brachytherapy.

    PubMed

    Churn, M; Jones, B

    1999-01-01

    A small proportion of patients with adenocarcinoma of the endometrium are inoperable by virtue of severe concurrent medical conditions, gross obesity or advanced stage disease. They can be treated with primary radiotherapy with either curative or palliative intent. We report 37 such patients treated mainly with a combination of external beam radiotherapy and intracavitary brachytherapy using a single line source technique. The 5-year disease-specific survival for nonsurgically staged patients was 68.4% for FIGO Stages I and II and 33.3% for Stages III and IV. The incidence of late morbidity was acceptably low. Using the Franco-Italian Glossary, there was 27.0% grade 1 but no grade 2-4 bladder toxicity. For the rectum the rates were 18.9% grade 1, 5.4% grade 2, 2.7% grade 3, and no grade 4 toxicity. Methods of optimizing the dose distribution of the brachytherapy by means of variation of treatment length, radioactive source positions, and prescription point according to tumour bulk and individual anatomy are discussed. The biologically equivalent doses (BED) for combined external beam radiotherapy and brachytherapy were calculated to be in the range of 78-107 Gy(3) or 57-75 Gy(10) at point 'A' and appear adequate for the control of Stage I cancers.
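
    The biologically equivalent doses quoted above follow from the standard linear-quadratic model, BED = n·d·(1 + d/(α/β)), where n is the number of fractions, d the dose per fraction, and α/β = 3 Gy or 10 Gy yields the Gy(3) and Gy(10) values respectively. A small sketch with illustrative numbers (not the paper's actual fractionation schedule):

```python
def bed(n_fractions, dose_per_fraction, alpha_beta):
    # Biologically equivalent dose (linear-quadratic model):
    #   BED = n * d * (1 + d / (alpha/beta)), all doses in Gy.
    d = dose_per_fraction
    return n_fractions * d * (1 + d / alpha_beta)

# Illustrative schedule: 25 fractions of 1.8 Gy, evaluated for
# late-reacting tissue (alpha/beta = 3 Gy) and tumour (alpha/beta = 10 Gy).
print(round(bed(25, 1.8, 3.0), 1), round(bed(25, 1.8, 10.0), 1))  # 72.0 53.1
```

    Summing the external-beam and brachytherapy BED contributions in this way is what yields the combined ranges of 78-107 Gy(3) and 57-75 Gy(10) reported at point 'A'.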

  16. An evaluation of kurtosis beamforming in magnetoencephalography to localize the epileptogenic zone in drug resistant epilepsy patients.

    PubMed

    Hall, Michael B H; Nissen, Ida A; van Straaten, Elisabeth C W; Furlong, Paul L; Witton, Caroline; Foley, Elaine; Seri, Stefano; Hillebrand, Arjan

    2018-06-01

    Kurtosis beamforming is a useful technique for analysing magnetoencephalography (MEG) data containing epileptic spikes. However, the implementation varies and few studies measure concordance with subsequently resected areas. We evaluated kurtosis beamforming as a means of localizing spikes in drug-resistant epilepsy patients. We retrospectively applied kurtosis beamforming to MEG recordings of 22 epilepsy patients that had previously been analysed using equivalent current dipole (ECD) fitting. Virtual electrodes were placed in the kurtosis volumetric peaks and visually inspected to select a candidate source. The candidate sources were compared to the ECD localizations and resection areas. The kurtosis beamformer produced interpretable localizations in 18/22 patients, of which the candidate source coincided with the resection lobe in 9/13 seizure-free patients and in 3/5 patients with persistent seizures. The sublobar accuracy of the kurtosis beamformer with respect to the resection zone was higher than that of ECD (56% and 50%, respectively); however, ECD resulted in a higher lobar accuracy (75% vs. 67%). Kurtosis beamforming may provide additional value when spikes are not clearly discernible on the sensors and may support ECD localizations when dipoles are scattered. Kurtosis beamforming should be integrated with existing clinical protocols to assist in localizing the epileptogenic zone. Copyright © 2018 International Federation of Clinical Neurophysiology. Published by Elsevier B.V. All rights reserved.

  17. Equivalent source modeling of the core magnetic field using magsat data

    NASA Technical Reports Server (NTRS)

    Mayhew, M. A.; Estes, R. H.

    1983-01-01

    Experiments are carried out on fitting the main field using different numbers of equivalent sources arranged in equal-area distributions at fixed radii at and inside the core-mantle boundary. By fixing the radius for a given series of runs, the convergence problems that result from the extreme nonlinearity of the problem when dipole positions are allowed to vary are avoided. Results are presented from a comparison between this approach and the standard spherical harmonic approach for modeling the main field in terms of accuracy and computational efficiency. The modeling of the main field with an equivalent dipole representation is found to be comparable to the standard spherical harmonic approach in accuracy. The 32 deg dipole density (42 dipoles) corresponds approximately to an eleventh degree/order spherical harmonic expansion (143 parameters), whereas the 21 deg dipole density (92 dipoles) corresponds approximately to a seventeenth degree and order expansion (323 parameters). It is pointed out that fixing the dipole positions results in rapid convergence of the dipole solutions for single-epoch models.
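
    With the source positions fixed, the equivalent-source problem reduces to a linear least-squares fit for the source amplitudes, which is why convergence is rapid. A minimal sketch using synthetic monopole sources on a plane (illustrative geometry and kernel, not the Magsat spherical configuration):

```python
import numpy as np

rng = np.random.default_rng(2)
n_src, n_obs = 20, 120
src = rng.uniform(-1, 1, (n_src, 2))   # fixed equivalent-source positions
obs = rng.uniform(-1, 1, (n_obs, 2))   # observation points
h = 0.5                                # depth of sources below observation plane

def design_matrix(obs, src, h):
    # Each column is the field of a unit monopole at depth h; with
    # positions fixed, the forward problem d = G a is linear in a.
    dx = obs[:, None, 0] - src[None, :, 0]
    dy = obs[:, None, 1] - src[None, :, 1]
    return 1.0 / np.sqrt(dx**2 + dy**2 + h**2)

G = design_matrix(obs, src, h)
a_true = rng.standard_normal(n_src)
d = G @ a_true                          # synthetic anomaly data
a_est, *_ = np.linalg.lstsq(G, d, rcond=None)
print(np.linalg.norm(G @ a_est - d) / np.linalg.norm(d))  # tiny data misfit
```

    Linear transformations of the fitted amplitudes (continuation, derivatives, component rotation) then amount to multiplying `a_est` by a different design matrix, which is the property the equivalent-source approach exploits.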

  18. Adipose-derived stromal cells for the reconstruction of a human vesical equivalent.

    PubMed

    Rousseau, Alexandre; Fradette, Julie; Bernard, Geneviève; Gauvin, Robert; Laterreur, Véronique; Bolduc, Stéphane

    2015-11-01

    Despite a wide panel of tissue-engineering models available for vesical reconstruction, the lack of a differentiated urothelium remains their main common limitation. For the first time to our knowledge, an entirely human vesical equivalent, free of exogenous matrix, has been reconstructed using the self-assembly method. Moreover, we tested the contribution of adipose-derived stromal cells, an easily available source of mesenchymal cells featuring many potential advantages, by reconstructing three types of equivalent, named fibroblast vesical equivalent, adipose-derived stromal cell vesical equivalent and hybrid vesical equivalent--the latter containing both adipose-derived stromal cells and fibroblasts. The new substitutes have been compared and characterized for matrix composition and organization, functionality and mechanical behaviour. Although all three vesical equivalents displayed adequate collagen type I and III expression, only two of them, fibroblast vesical equivalent and hybrid vesical equivalent, sustained the development of a differentiated and functional urothelium. The presence of uroplakins Ib, II and III and the tight junction marker ZO-1 was detected and correlated with impermeability. The mechanical resistance of these tissues was sufficient for use by surgeons. We present here in vitro tissue-engineered vesical equivalents, built without the use of any exogenous matrix, able to sustain mechanical stress and to support the formation of a functional urothelium, i.e. able to display a barrier function similar to that of native tissue. Copyright © 2013 John Wiley & Sons, Ltd.

  19. A study of microwave downconverters operating in the Ku band

    NASA Technical Reports Server (NTRS)

    Fellers, R. G.; Simpson, T. L.; Tseng, B.

    1982-01-01

    A computer program for parametric amplifier design is developed with special emphasis on practical design considerations for microwave integrated circuit degenerate amplifiers. Precision measurement techniques are developed to obtain a more realistic varactor equivalent circuit. The existing theory of a parametric amplifier is modified to include the equivalent circuit, and microwave properties, such as loss characteristics and circuit discontinuities are investigated.

  20. Reliable Early Classification on Multivariate Time Series with Numerical and Categorical Attributes

    DTIC Science & Technology

    2015-05-22

    design a procedure of feature extraction in REACT named MEG (Mining Equivalence classes with shapelet Generators) based on the concept of...Equivalence Classes Mining [12, 15]. MEG can efficiently and effectively generate the discriminative features. In addition, several strategies are proposed...technique of parallel computing [4] to propose a process of pa- rallel MEG for substantially reducing the computational overhead of discovering shapelet

  1. Flexible feature interface for multimedia sources

    DOEpatents

    Coffland, Douglas R [Livermore, CA

    2009-06-09

    A flexible feature interface for multimedia sources system that includes a single interface for the addition of features and functions to multimedia sources and for accessing those features and functions from remote hosts. The interface utilizes the export statement: extern "C" DllExport void FunctionName(int argc, char **argv, char *result, SecureSession *ctrl) or the binary equivalent of the export statement.

  2. Natural hybridization within seed sources of shortleaf pine (Pinus echinata Mill.) and loblolly pine (Pinus taeda L.)

    Treesearch

    Shiqin Xu; C.G. Tauer; C. Dana Nelson

    2008-01-01

    Shortleaf and loblolly pine trees (n=93 and 102, respectively) from 22 seed sources of the Southwide Southern Pine Seed Source Study plantings or equivalent origin were evaluated for amplified fragment length polymorphism (AFLP) variation. These sampled trees represent shortleaf pine and loblolly pine, as they existed across their native geographic ranges before...

  3. 32 CFR 806.20 - Records of non-U.S. government source.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 32 National Defense 6 2012-07-01 2012-07-01 false Records of non-U.S. government source. 806.20... ADMINISTRATION AIR FORCE FREEDOM OF INFORMATION ACT PROGRAM § 806.20 Records of non-U.S. government source. (a... notify their MAJCOM (or equivalent) FOIA office, in writing, via fax or e-mail when the Department of...

  4. 32 CFR 806.20 - Records of non-U.S. government source.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 32 National Defense 6 2014-07-01 2014-07-01 false Records of non-U.S. government source. 806.20... ADMINISTRATION AIR FORCE FREEDOM OF INFORMATION ACT PROGRAM § 806.20 Records of non-U.S. government source. (a... notify their MAJCOM (or equivalent) FOIA office, in writing, via fax or e-mail when the Department of...

  5. MEASURING THE GEOMETRY OF THE UNIVERSE FROM WEAK GRAVITATIONAL LENSING BEHIND GALAXY GROUPS IN THE HST COSMOS SURVEY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taylor, James E.; Massey, Richard J.; Leauthaud, Alexie

    2012-04-20

    Gravitational lensing can provide pure geometric tests of the structure of spacetime, for instance by determining empirically the angular diameter distance-redshift relation. This geometric test has been demonstrated several times using massive clusters which produce a large lensing signal. In this case, matter at a single redshift dominates the lensing signal, so the analysis is straightforward. It is less clear how weaker signals from multiple sources at different redshifts can be stacked to demonstrate the geometric dependence. We introduce a simple measure of relative shear which for flat cosmologies separates the effect of lens and source positions into multiplicative terms, allowing signals from many different source-lens pairs to be combined. Applying this technique to a sample of groups and low-mass clusters in the COSMOS survey, we detect a clear variation of shear with distance behind the lens. This represents the first detection of the geometric effect using weak lensing by multiple, low-mass groups. The variation of distance with redshift is measured with sufficient precision to constrain the equation of state of the universe under the assumption of flatness, equivalent to a detection of a dark energy component Ω_X at greater than 99% confidence for an equation-of-state parameter -2.5 ≤ w ≤ -0.1. For the case w = -1, we find a value for the cosmological constant density parameter Ω_Λ = 0.85 (+0.044, -0.19; 68% CL) and detect cosmic acceleration (q_0 < 0) at the 98% CL. We consider the systematic uncertainties associated with this technique and discuss the prospects for applying it in forthcoming weak-lensing surveys.
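
    The geometric separation described above can be illustrated numerically: in a flat cosmology the lensing efficiency behind a lens reduces to 1 - χ_l/χ_s in comoving distance, so the source dependence factors out of the shear signal. A minimal sketch, assuming flat ΛCDM with illustrative parameters (H0 = 70 km/s/Mpc, Ωm = 0.3; not the paper's fitted values):

```python
import numpy as np

C_KM_S = 299_792.458  # speed of light [km/s]

def comoving_distance(z, h0=70.0, omega_m=0.3, n=2048):
    """Comoving distance [Mpc] in flat LCDM via trapezoidal integration of c/H(z)."""
    zs = np.linspace(0.0, z, n)
    ez = np.sqrt(omega_m * (1.0 + zs) ** 3 + (1.0 - omega_m))
    return (C_KM_S / h0) * np.trapz(1.0 / ez, zs)

def lensing_efficiency(z_lens, z_source, **cosmo):
    """D_ls/D_s = 1 - chi_l/chi_s for a flat universe; zero for foreground sources."""
    if z_source <= z_lens:
        return 0.0
    chi_l = comoving_distance(z_lens, **cosmo)
    chi_s = comoving_distance(z_source, **cosmo)
    return 1.0 - chi_l / chi_s
```

    The efficiency rises monotonically with source redshift behind the lens, which is the distance dependence the COSMOS measurement stacks across many source-lens pairs.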

  6. An evaluation and comparison of intraventricular, intraparenchymal, and fluid-coupled techniques for intracranial pressure monitoring in patients with severe traumatic brain injury.

    PubMed

    Vender, John; Waller, Jennifer; Dhandapani, Krishnan; McDonnell, Dennis

    2011-08-01

    Intracranial pressure measurement has become one of the mainstays of traumatic brain injury management. Various technologies exist to monitor intracranial pressure from a variety of locations. Transducers are usually placed to assess pressure in the brain parenchyma and the intraventricular fluid, the two most widely accepted compartmental monitoring sites. The reliability of each device, and the agreement between devices with and without cerebrospinal fluid diversion, is not clear. The ability of monitors at both sites to predict local, regional, and global pressure changes also needs further clarification. The technique of monitoring intraventricular pressure with a fluid-coupled transducer system is also reviewed. There has been little investigation into the relationship among pressure measurements obtained from these two sources using these three techniques. Eleven consecutive patients with severe, closed traumatic brain injury not requiring intracranial mass lesion evacuation were admitted into this prospective study. Each patient underwent placement of a parenchymal and an intraventricular pressure monitor; the ventricular catheter tubing was also connected to a sensor for fluid-coupled measurement. Pressure from all three sources was measured hourly, with and without ventricular drainage. Statistically significant correlation within each monitoring site was seen. No monitoring location was more predictive of global pressure changes or more responsive to pressure changes related to patient stimulation. However, the intraventricular pressure measurements were not reliable in the presence of cerebrospinal fluid drainage, whereas the parenchymal measurements remained unaffected. Intraparenchymal pressure monitoring provides equivalent, statistically similar pressure measurements compared to intraventricular monitors in all care and clinical settings. This is particularly valuable when uninterrupted cerebrospinal fluid drainage is desirable.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Borisova, Elena; Lilly, Simon J.; Cantalupo, Sebastiano

    A toy model is developed to understand how the spatial distribution of fluorescent emitters in the vicinity of bright quasars could be affected by the geometry of the quasar bi-conical radiation field and by its lifetime. The model is then applied to the distribution of high-equivalent-width Lyα emitters (with rest-frame equivalent widths above 100 Å, the threshold used by, e.g., Trainor and Steidel) identified in a deep narrow-band 36 × 36 arcmin² image centered on the luminous quasar Q0420–388. These emitters are found near the edge of the field and show some evidence of an azimuthal asymmetry on the sky of the type expected if the quasar is radiating in a bipolar cone. If these sources are being fluorescently illuminated by the quasar, the two most distant objects require a lifetime of at least 15 Myr for an opening angle of 60° or more, increasing to more than 40 Myr if the opening angle is reduced to a minimum of 30°. However, some other expected signatures of boosted fluorescence are not seen at the current survey limits, e.g., a fall-off in Lyα brightness, or equivalent width, with distance. Furthermore, for most of the Lyα emission of the two distant sources to be fluorescently boosted, the quasar would have to have been significantly brighter in the past. This suggests that these particular sources may not be fluorescent, invalidating the above lifetime constraints. This would cast doubt on the use of this relatively low equivalent-width threshold, and thus also on the lifetime analysis in Trainor and Steidel.

  8. A Statistical Review of Alternative Zinc and Copper Extraction from Mineral Fertilizers and Industrial By-Products.

    PubMed

    Cenciani de Souza, Camila Prado; Aparecida de Abreu, Cleide; Coscione, Aline Renée; Alberto de Andrade, Cristiano; Teixeira, Luiz Antonio Junqueira; Consolini, Flavia

    2018-01-01

    Rapid, accurate, and low-cost alternative analytical methods for micronutrient quantification in fertilizers are fundamental for quality control. The purpose of this study was to evaluate whether zinc (Zn) and copper (Cu) contents in mineral fertilizers and industrial by-products determined by the alternative methods USEPA 3051a, 10% HCl, and 10% H2SO4 are statistically equivalent to the standard method, hot-plate digestion in concentrated HCl. The Zn and Cu sources commercially marketed in Brazil comprised oxide, carbonate, and sulfate fertilizers and by-products including galvanizing ash, galvanizing sludge, brass ash, and brass or scrap slag. Zn and Cu contents of the sources ranged from 15 to 82% and from 10 to 45%, respectively, as determined with the concentrated HCl method (Table 1). A protocol based on the following criteria was used for the statistical assessment of the methods: the F-test modified by Graybill, the t-test for the mean error, and linear correlation coefficient analysis. The 10% HCl extraction was equivalent to the standard method for Zn, and both the USEPA 3051a and 10% HCl methods were equivalent to it for Cu. Therefore, these methods can be considered viable alternatives to the standard method for determining Cu and Zn in mineral fertilizers and industrial by-products, pending complete validation in future research.
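
    The statistical protocol described (a Graybill-modified F-test, a t-test for the mean error, and correlation analysis) can be sketched as follows. This is a hedged illustration, not the authors' code: the joint F-test shown tests H0: intercept = 0 and slope = 1 by comparing the OLS fit against the fixed line y = x, which is one standard form of the Graybill test.

```python
import numpy as np

def equivalence_tests(standard, alternative):
    """Compare an alternative assay against the standard via (1) a Graybill-type
    joint F statistic for H0: intercept = 0 and slope = 1 in the regression
    standard = b0 + b1 * alternative, (2) the t statistic for zero mean error,
    and (3) the Pearson correlation coefficient."""
    y = np.asarray(standard, float)
    x = np.asarray(alternative, float)
    n = len(x)

    # Full model: ordinary least squares fit of y on x.
    b1, b0 = np.polyfit(x, y, 1)
    sse_full = np.sum((y - (b0 + b1 * x)) ** 2)
    # Restricted model: y = x exactly (b0 = 0, b1 = 1).
    sse_restr = np.sum((y - x) ** 2)
    f_stat = ((sse_restr - sse_full) / 2.0) / (sse_full / (n - 2))

    # Mean-error t statistic from the paired differences.
    d = y - x
    t_stat = d.mean() / (d.std(ddof=1) / np.sqrt(n))

    r = np.corrcoef(x, y)[0, 1]
    return f_stat, t_stat, r
```

    Methods are judged equivalent when the F statistic is small (fit indistinguishable from y = x), the mean error is not significantly different from zero, and the correlation is high.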

  9. Turbulent flow separation control through passive techniques

    NASA Technical Reports Server (NTRS)

    Lin, J. C.; Howard, F. G.; Selby, G. V.

    1989-01-01

    Several passive techniques for controlling moderate two-dimensional turbulent flow separation over a backward-facing ramp are studied, including small transverse and swept grooves, passive porous surfaces, large longitudinal grooves, and vortex generators. It was found that, unlike transverse and longitudinal grooves of an equivalent size, the 45-deg swept-groove configurations tested tended to enhance separation.

  10. 40 CFR 466.24 - Pretreatment standards for existing sources.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... pretreatment standards the following equivalent mass standards are provided. (1) There shall be no discharge of... 40 Protection of Environment 30 2014-07-01 2014-07-01 false Pretreatment standards for existing...) EFFLUENT GUIDELINES AND STANDARDS (CONTINUED) PORCELAIN ENAMELING POINT SOURCE CATEGORY Cast Iron Basis...

  11. 40 CFR 466.24 - Pretreatment standards for existing sources.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... pretreatment standards the following equivalent mass standards are provided. (1) There shall be no discharge of... 40 Protection of Environment 29 2010-07-01 2010-07-01 false Pretreatment standards for existing...) EFFLUENT GUIDELINES AND STANDARDS PORCELAIN ENAMELING POINT SOURCE CATEGORY Cast Iron Basis Material...

  12. 40 CFR 466.24 - Pretreatment standards for existing sources.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... pretreatment standards the following equivalent mass standards are provided. (1) There shall be no discharge of... 40 Protection of Environment 31 2012-07-01 2012-07-01 false Pretreatment standards for existing...) EFFLUENT GUIDELINES AND STANDARDS (CONTINUED) PORCELAIN ENAMELING POINT SOURCE CATEGORY Cast Iron Basis...

  13. 40 CFR 466.24 - Pretreatment standards for existing sources.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... pretreatment standards the following equivalent mass standards are provided. (1) There shall be no discharge of... 40 Protection of Environment 30 2011-07-01 2011-07-01 false Pretreatment standards for existing...) EFFLUENT GUIDELINES AND STANDARDS PORCELAIN ENAMELING POINT SOURCE CATEGORY Cast Iron Basis Material...

  14. 40 CFR 466.24 - Pretreatment standards for existing sources.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... pretreatment standards the following equivalent mass standards are provided. (1) There shall be no discharge of... 40 Protection of Environment 31 2013-07-01 2013-07-01 false Pretreatment standards for existing...) EFFLUENT GUIDELINES AND STANDARDS (CONTINUED) PORCELAIN ENAMELING POINT SOURCE CATEGORY Cast Iron Basis...

  15. Mathematical Fluid Dynamics of Store and Stage Separation

    DTIC Science & Technology

    2005-05-01

    coordinates r = stretched inner radius S, (x) = effective source strength Re, = transition Reynolds number t = time r = reflection coefficient T = temperature...wave drag due to lift integral has the same form as that due to thickness, the source strength of the equivalent body depends on streamwise derivatives...revolution in which the source strength S, (x) is proportional to the x rate of change of cross sectional area, the source strength depends on the streamwise

  16. A mesostate-space model for EEG and MEG.

    PubMed

    Daunizeau, Jean; Friston, Karl J

    2007-10-15

    We present a multi-scale generative model for EEG that entails a minimal number of assumptions about evoked brain responses, namely: (1) bioelectric activity is generated by a set of distributed sources, (2) the dynamics of these sources can be modelled as random fluctuations about a small number of mesostates, (3) mesostates evolve in a temporally structured way and are functionally connected (i.e. influence each other), and (4) the number of mesostates engaged by a cognitive task is small (e.g. between one and a few). A variational Bayesian learning scheme is described that furnishes the posterior density on the model's parameters and its evidence. Since the number of meso-sources specifies the model, the model evidence can be used to compare models and find the optimum number of meso-sources. In addition to estimating the dynamics at each cortical dipole, the mesostate-space model and its inversion provide a description of brain activity at the level of the mesostates (i.e. in terms of the dynamics of meso-sources that are distributed over dipoles). The inclusion of a mesostate level allows one to compute posterior probability maps of each dipole being active (i.e. belonging to an active mesostate). Critically, this model accommodates constraints on the number of meso-sources, while retaining the flexibility of distributed source models in explaining data. In short, it bridges the gap between standard distributed and equivalent current dipole models. Furthermore, because it is explicitly spatiotemporal, the model can embed any stochastic dynamic causal model (e.g. a neural mass model) as a Markov process prior on the mesostate dynamics. The approach is evaluated and compared to standard inverse EEG techniques, using synthetic and real data. The results demonstrate the added value of the mesostate-space model and its variational inversion.

  17. Theoretical Studies of Microstrip Antennas : Volume I, General Design Techniques and Analyses of Single and Coupled Elements

    DOT National Transportation Integrated Search

    1979-09-01

    Volume 1 of Theoretical Studies of Microstrip Antennas deals with general techniques and analyses of single and coupled radiating elements. Specifically, we review and then employ an important equivalence theorem that allows a pair of vector potentia...

  18. Experimental investigation of microwave interaction with magnetoplasma in miniature multipolar configuration using impedance measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dey, Indranuj, E-mail: indranuj@aees.kyushu-u.ac.jp; Toyoda, Yuji; Yamamoto, Naoji

    2014-09-15

    A miniature microwave plasma source employing both radial and axial magnetic fields for plasma confinement has been developed for micro-propulsion applications. Plasma is initiated by launching microwaves via a short monopole antenna to circumvent geometrical cutoff limitations. The amplitude and phase of the forward and reflected microwave power are measured to obtain the complex reflection coefficient, from which the equivalent impedance of the plasma source is determined. The effect of the critical plasma density condition is reflected in the measurements and provides insight into the working of the miniature plasma source. A basic impedance calculation model is developed to help in understanding the experimental observations. From experiment and theory, it is seen that the equivalent impedance magnitude is controlled by the coaxial discharge boundary conditions, and the phase is influenced primarily by the plasma-immersed antenna impedance.
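
    Recovering an equivalent impedance from a measured complex reflection coefficient follows the standard transmission-line relation Z = Z0 (1 + Γ) / (1 - Γ). A minimal sketch (the 50 Ω characteristic impedance is an illustrative assumption, not a detail from the paper):

```python
def equivalent_impedance(gamma, z0=50.0):
    """Equivalent load impedance from the complex reflection coefficient Gamma
    at the measurement plane: Z = Z0 * (1 + Gamma) / (1 - Gamma), for a line
    of characteristic impedance Z0 (50 ohm assumed here)."""
    return z0 * (1 + gamma) / (1 - gamma)
```

    A matched load (Γ = 0) returns Z0, while a purely reactive mismatch (|Γ| = 1 phase rotation) moves the impedance around the Smith chart; both magnitude and phase of Γ are needed, which is why the setup records amplitude and phase of the forward and reflected power.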

  19. Petroleum systems of the Northwest Java Province, Java and offshore southeast Sumatra, Indonesia

    USGS Publications Warehouse

    Bishop, Michele G.

    2000-01-01

    Mature, synrift lacustrine shales of Eocene to Oligocene age and mature, late-rift coals and coaly shales of Oligocene to Miocene age are source rocks for oil and gas in two important petroleum systems of the onshore and offshore areas of the Northwest Java Basin. Biogenic gas and carbonate-sourced gas have also been identified. These hydrocarbons are trapped primarily in anticlines and fault blocks involving sandstone and carbonate reservoirs. These source rocks and reservoir rocks were deposited in a complex of Tertiary rift basins formed from single or multiple half-grabens on the south edge of the Sunda Shelf plate. The overall transgressive succession was punctuated by clastic input from the exposed Sunda Shelf and marine transgressions from the south. The Northwest Java province may contain more than 2 billion barrels of oil equivalent in addition to the 10 billion barrels of oil equivalent already identified.

  20. An equivalent n-source for WGPu derived from a spectrum-shifted PuBe source

    NASA Astrophysics Data System (ADS)

    Ghita, Gabriel; Sjoden, Glenn; Baciak, James; Walker, Scotty; Cornelison, Spring

    2008-04-01

    We have designed, built, and laboratory-tested a unique shield design that transforms the complex spectrum of PuBe source neutrons, generated at high energies, into almost exactly the neutron signature leaking from a significant spherical mass of weapons-grade plutonium (WGPu). This equivalent "X-material shield assembly" (patent pending) enables the harder PuBe source spectrum (average energy 4.61 MeV) from a small encapsulated standard 1-Ci PuBe source to be transformed, through interactions in the shield, so that the leakage neutrons are shifted in energy and yield to closely reproduce the neutron spectrum leaking from a large subcritical mass of WGPu metal (mean energy 2.11 MeV). The utility of this shielded PuBe surrogate for WGPu is clear, since it directly enables detector field testing without the expense and risk of handling large amounts of Special Nuclear Material (SNM) such as WGPu. Also, conventional sources using Cf-252, which is difficult to produce and decays with a 2.7-year half-life, could be replaced by this shielded PuBe technology to simplify operational use, since a sealed PuBe source relies on Pu-239 (T½ = 24,110 y) and remains viable for hundreds of years.
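
    The quoted mean energies (4.61 MeV for bare PuBe, 2.11 MeV for the shifted spectrum) are count-weighted averages over a measured or simulated spectrum; a sketch of that bookkeeping (the binned spectrum arrays are hypothetical inputs, not data from the paper):

```python
import numpy as np

def mean_energy(energies_mev, counts):
    """Count-weighted mean energy of a binned neutron spectrum [MeV], the
    figure of merit quoted for the bare PuBe versus shifted WGPu-like spectra."""
    e = np.asarray(energies_mev, float)
    w = np.asarray(counts, float)
    return float(np.sum(e * w) / np.sum(w))
```
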

  1. Thermally assisted OSL application for equivalent dose estimation; comparison of multiple equivalent dose values as well as saturation levels determined by luminescence and ESR techniques for a sedimentary sample collected from a fault gouge

    NASA Astrophysics Data System (ADS)

    Şahiner, Eren; Meriç, Niyazi; Polymeris, George S.

    2017-02-01

    Equivalent dose (De) estimation constitutes the most important part of both trapped-charge dating techniques and dosimetry applications. In the present work, multiple independent equivalent dose estimation approaches were adopted, using both luminescence and ESR techniques; two different minerals were studied, namely quartz as well as feldspathic polymineral samples. The work is divided into three independent parts, depending on the type of signal employed. Firstly, different De estimation approaches were carried out on both polymineral and contaminated quartz, using single aliquot regenerative dose protocols employing conventional OSL and IRSL signals acquired at different temperatures. Secondly, ESR equivalent dose estimations using the additive dose procedure, both at room temperature and at 90 K, were discussed. Lastly, for the first time in the literature, a single aliquot regenerative protocol employing a thermally assisted OSL signal originating from very deep traps was applied to natural minerals. Rejection criteria such as recycling and recovery ratios are also presented. The SAR protocol, whenever applied, provided compatible De estimates with good accuracy, independent of both the type of mineral and the stimulation temperature. Low-temperature ESR signals resulting from Al and Ti centers indicate very large De values, due to their inability to bleach, associated with large uncertainties. Additionally, dose saturation of the different approaches was investigated; for the signal arising from very deep traps in quartz, saturation is extended by almost one order of magnitude. It is interesting that most De values yielded by the different luminescence signals agree with each other, and that the ESR Ge center has very large D0 values. These results strongly support the argument that the stability and the initial ESR signal of the Ge center are highly sample-dependent, without any instability problems for the case of quartz derived from fault gouge.
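
    The core of a SAR equivalent dose estimate is interpolating the sensitivity-corrected natural signal onto the regenerative-dose growth curve and checking rejection criteria such as the recycling ratio. A simplified sketch using linear interpolation (real analyses fit a saturating exponential growth curve; all numbers here are hypothetical):

```python
import numpy as np

def estimate_de(regen_doses, regen_signals, natural_signal):
    """Interpolate the natural signal onto the regenerative-dose growth curve
    to recover the equivalent dose [Gy]. Linear interpolation sketch only;
    SAR analyses typically fit I(D) = I_max * (1 - exp(-D / D0))."""
    order = np.argsort(regen_signals)
    return float(np.interp(natural_signal,
                           np.asarray(regen_signals, float)[order],
                           np.asarray(regen_doses, float)[order]))

def recycling_ratio(signal_first, signal_repeat):
    """SAR rejection criterion: a repeated regenerative dose point should
    reproduce the first measurement (ratio within roughly 10% of unity)."""
    return signal_repeat / signal_first
```
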

  2. Measurements of neutron dose equivalent for a proton therapy center using uniform scanning proton beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zheng Yuanshui; Liu Yaxi; Zeidan, Omar

    Purpose: Neutron exposure is of concern in proton therapy, and varies with beam delivery technique, nozzle design, and treatment conditions. Uniform scanning is an emerging treatment technique in proton therapy, but neutron exposure for this technique has not been fully studied. The purpose of this study is to investigate the neutron dose equivalent per therapeutic dose, H/D, under various treatment conditions for uniform scanning beams employed at our proton therapy center. Methods: Using a wide energy neutron dose equivalent detector (SWENDI-II, ThermoScientific, MA), the authors measured H/D at 50 cm lateral to the isocenter as a function of proton range, modulation width, beam scanning area, collimated field size, and snout position. They also studied the influence of other factors on neutron dose equivalent, such as aperture material, the presence of a compensator, and measurement locations. They measured H/D for various treatment sites using patient-specific treatment parameters. Finally, they compared H/D values for various beam delivery techniques at various facilities under similar conditions. Results: H/D increased rapidly with proton range and modulation width, varying from about 0.2 mSv/Gy for a 5 cm range and 2 cm modulation width beam to 2.7 mSv/Gy for a 30 cm range and 30 cm modulation width beam when 18 × 18 cm² uniform scanning beams were used. H/D increased linearly with the beam scanning area, and decreased slowly with aperture size and snout retraction. The presence of a compensator reduced the H/D slightly compared with that without a compensator present. Aperture material and compensator material also have an influence on neutron dose equivalent, but the influence is relatively small. H/D varied from about 0.5 mSv/Gy for a brain tumor treatment to about 3.5 mSv/Gy for a pelvic case. Conclusions: This study presents H/D as a function of various treatment parameters for uniform scanning proton beams. For similar treatment conditions, the H/D value per uncollimated beam size for uniform scanning beams was slightly lower than that from a passive scattering beam and higher than that from a pencil beam scanning beam, within a factor of 2. Minimizing beam scanning area could effectively reduce neutron dose equivalent for uniform scanning beams, down to a level close to pencil beam scanning.
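
    For a rough sense of the reported trend, the two quoted endpoints can be linearly interpolated in the combined range-plus-modulation parameter; this is purely illustrative and no substitute for the measured curves in the paper:

```python
# Measured endpoints quoted in the abstract (18 x 18 cm^2 uniform scanning field):
#   (range 5 cm,  modulation 2 cm)  -> H/D = 0.2 mSv/Gy
#   (range 30 cm, modulation 30 cm) -> H/D = 2.7 mSv/Gy
def h_over_d_estimate(range_cm, modulation_cm):
    """Crude linear interpolation of H/D [mSv/Gy] between the two quoted
    measurement points, parameterized by (range + modulation)."""
    x0, y0 = 5.0 + 2.0, 0.2
    x1, y1 = 30.0 + 30.0, 2.7
    x = range_cm + modulation_cm
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
```
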

  3. Full waveform time domain solutions for source and induced magnetotelluric and controlled-source electromagnetic fields using quasi-equivalent time domain decomposition and GPU parallelization

    NASA Astrophysics Data System (ADS)

    Imamura, N.; Schultz, A.

    2015-12-01

    Recently, a full waveform time domain solution has been developed for the magnetotelluric (MT) and controlled-source electromagnetic (CSEM) methods. The ultimate goal of this approach is a computationally tractable direct waveform joint inversion for source fields and earth conductivity structure in three and four dimensions. This is desirable on several grounds, including the improved spatial resolving power expected from the use of a multitude of source illuminations of non-zero wavenumber, the ability to operate in areas with high levels of source signal spatial complexity and non-stationarity, etc. This goal would not be attainable with a conventional finite difference time-domain (FDTD) approach to the forward problem. This is particularly true for MT surveys, since an enormous number of degrees of freedom are required to represent the observed MT waveforms across the large frequency bandwidth: the time step must be fine enough to represent the highest frequency, while the total number of time steps must also cover the lowest frequency. This leads to a linear system that is computationally burdensome to solve. We have implemented code that addresses this situation through the use of a fictitious wave domain method and GPUs to speed up the computation. We also substantially reduce the size of the linear systems by applying concepts from successive cascade decimation, through quasi-equivalent time domain decomposition. By combining these refinements, we have made good progress toward implementing the core of a full waveform joint source field/earth conductivity inverse modeling method. We found that even a previous-generation CPU/GPU combination speeds computation by an order of magnitude over a parallel CPU-only approach. In part, this arises from the use of the quasi-equivalent time domain decomposition, which shrinks the size of the linear system dramatically.
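
    The bandwidth argument against plain FDTD can be made concrete: with a time step fine enough to resolve the highest frequency and a run long enough to span the lowest, the step count scales as f_max/f_min. A back-of-the-envelope sketch (the ten-steps-per-period resolution is an assumed rule of thumb, not a value from the paper):

```python
def fdtd_step_count(f_min_hz, f_max_hz, steps_per_period=10):
    """Rough count of FDTD time steps needed when the step size resolves the
    highest frequency (dt ~ 1 / (steps_per_period * f_max)) and the run spans
    at least one period of the lowest frequency (T ~ 1 / f_min)."""
    dt = 1.0 / (steps_per_period * f_max_hz)
    duration = 1.0 / f_min_hz
    return int(round(duration / dt))
```

    For an MT band from 1e-4 Hz to 1e2 Hz this gives on the order of 1e7 steps, which is why the fictitious wave domain and cascade-decimation refinements matter.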

  4. GEANT4 and PHITS simulations of the shielding of neutrons from the 252Cf source

    NASA Astrophysics Data System (ADS)

    Shin, Jae Won; Hong, Seung-Woo; Bak, Sang-In; Kim, Do Yoon; Kim, Chong Yeal

    2014-09-01

    Monte Carlo simulations are performed using GEANT4 and PHITS to study the neutron-shielding abilities of several materials: graphite, iron, polyethylene, NS-4-FR and KRAFTON-HB. As a neutron source, 252Cf is considered. For the GEANT4 simulations, high-precision (G4HP) models with G4NDL 4.2, based on ENDF/B-VII data, are used; for the PHITS simulations, the JENDL-4.0 library is used. The neutron-dose-equivalent rates with and without the five shielding materials are estimated and compared with experimental values. The differences between the shielding abilities calculated using GEANT4 with G4NDL 4.2 and PHITS with JENDL-4.0 are found not to be significant for any of the cases considered in this work. The neutron-dose-equivalent rates obtained using GEANT4 and PHITS are also compared with experimental data and other simulation results: they agree with the experimental rates to within 20%, except for polyethylene, for which the discrepancies between our calculations and the experiments are less than 40%, as observed in other simulation results.
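
    The agreement figures quoted (within 20%, or within 40% for polyethylene) correspond to a simple relative deviation of simulation from experiment, sketched below for clarity:

```python
def percent_discrepancy(simulated, measured):
    """Relative deviation of a simulated dose-equivalent rate from the
    measured value, in percent, as used to judge the <20% (or <40% for
    polyethylene) agreement criterion."""
    return 100.0 * abs(simulated - measured) / measured
```
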

  5. GLOBAL SOLUTIONS TO FOLDED CONCAVE PENALIZED NONCONVEX LEARNING

    PubMed Central

    Liu, Hongcheng; Yao, Tao; Li, Runze

    2015-01-01

    This paper is concerned with solving nonconvex learning problems with a folded concave penalty. Although their global solutions entail desirable statistical properties, optimization techniques that guarantee global optimality in a general setting have been lacking. In this paper, we show that a class of nonconvex learning problems is equivalent to general quadratic programs. This equivalence allows us to develop mixed integer linear programming reformulations, which admit finite algorithms that find a provably global optimal solution. We refer to this reformulation-based technique as mixed integer programming-based global optimization (MIPGO). To our knowledge, this is the first global optimization scheme with a theoretical guarantee for folded concave penalized nonconvex learning with the SCAD penalty (Fan and Li, 2001) and the MCP penalty (Zhang, 2010). Numerical results indicate that MIPGO significantly outperforms the state-of-the-art solution scheme, local linear approximation, and other alternative solution techniques in the literature in terms of solution quality. PMID:27141126
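
    For reference, the SCAD penalty of Fan and Li (2001) that MIPGO handles is the piecewise function below; a = 3.7 is the value they recommend. This evaluates the penalty only and does not implement the MILP reformulation itself:

```python
def scad_penalty(t, lam, a=3.7):
    """SCAD folded concave penalty of Fan and Li (2001): linear near zero
    (lasso-like), a quadratic transition, then constant so that large
    coefficients are not shrunk. Requires lam > 0 and a > 2."""
    x = abs(t)
    if x <= lam:
        return lam * x
    if x <= a * lam:
        return (2 * a * lam * x - x * x - lam * lam) / (2 * (a - 1))
    return lam * lam * (a + 1) / 2
```

    The three pieces join continuously at |t| = λ and |t| = aλ; the flat tail is what makes the penalty folded concave and the resulting learning problem nonconvex.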

  6. Antioxidant activity of Cynara scolymus L. and Cynara cardunculus L. extracts obtained by different extraction techniques.

    PubMed

    Kollia, Eleni; Markaki, Panagiota; Zoumpoulakis, Panagiotis; Proestos, Charalampos

    2017-05-01

    Extracts of different parts (heads, bracts and stems) of Cynara cardunculus L. (cardoon) and Cynara scolymus L. (globe artichoke), obtained by two different extraction techniques, ultrasound-assisted extraction (UAE) and classical extraction (CE), were examined and compared for their total phenolic content (TPC) and their antioxidant activity. Moreover, infusions of the plants' parts were also analysed and compared to the aforementioned samples. Results showed that the cardoon heads extract obtained by UAE displayed the highest TPC (1.57 mg gallic acid equivalents (GAE) g⁻¹ fresh weight (fw)), the highest DPPH• scavenging activity (IC50 = 0.91 mg ml⁻¹) and the highest ABTS•+ radical scavenging capacity (2.08 mg Trolox equivalents (TE) g⁻¹ fw) compared to the infusions and the other extracts studied. Ultrasound-assisted extraction thus proved the more appropriate and effective technique for extracting antiradical and phenolic compounds.
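
    Reporting TPC in gallic acid equivalents amounts to reading sample absorbances off a gallic acid calibration line and normalizing by sample mass. A sketch under assumed Folin-Ciocalteu conventions (the function name, argument order, and units are illustrative, not taken from the paper):

```python
import numpy as np

def gae_per_gram(absorbances, std_conc, std_abs, sample_mass_g, volume_ml):
    """Total phenolic content as gallic acid equivalents (mg GAE per g sample):
    fit a calibration line to gallic acid standards (conc in mg/ml vs.
    absorbance), invert it for the sample absorbances, and scale by extract
    volume and sample mass."""
    slope, intercept = np.polyfit(std_conc, std_abs, 1)
    conc_mg_per_ml = (np.asarray(absorbances, float) - intercept) / slope
    mg_gae = conc_mg_per_ml * volume_ml
    return mg_gae / sample_mass_g
```
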

  7. Objective characterization of airway dimensions using image processing.

    PubMed

    Pepper, Victoria K; Francom, Christian; Best, Cameron A; Onwuka, Ekene; King, Nakesha; Heuer, Eric; Mahler, Nathan; Grischkan, Jonathan; Breuer, Christopher K; Chiang, Tendy

    2016-12-01

    With the evolution of medical and surgical management for pediatric airway disorders, easily translated techniques for measuring airway dimensions can improve the quantification of outcomes of these interventions. We have developed a technique that improves the characterization of endoscopic airway dimensions using common bronchoscopic equipment and an open-source image-processing platform. We validated our technique of Endoscopic Airway Measurement (EAM) using optical instruments in simulation tracheas. We then evaluated EAM in a large animal model (Ovis aries, n = 5), comparing tracheal dimensions obtained with EAM to measurements obtained via 3-D fluoroscopic reconstruction. Each animal then underwent resection of the measured segment, and direct measurement of this segment was compared to the radiographic measurements and those obtained using EAM. The simulation tracheas had direct measurements of 13.6, 18.5, and 24.2 mm in diameter; the mean difference in diameter between direct measurement and EAM was 0.70 ± 0.57 mm. The excised ovine tracheas had an average diameter of 18.54 ± 0.68 mm. The percent differences in diameter obtained from EAM and from 3-D fluoroscopic reconstruction, relative to measurement of the excised tracheal segment, were 4.98 ± 2.43% and 10.74 ± 4.07%, respectively. Comparison of these three measurements (EAM, measurement of the resected trachea, 3-D fluoroscopic reconstruction) with repeated-measures ANOVA demonstrated no statistically significant difference. EAM provides equivalent measurements of the airway with the added versatility of measuring non-circular and multi-level dimensions. Using optical bronchoscopic instruments and open-source image-processing software, our data support preclinical and clinical translation of an accessible technique for objective quantification of airway diameter.
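
    The basic calibration idea behind image-based airway measurement is scaling a pixel measurement by an in-frame reference of known physical size (such as an optical instrument of known diameter). This sketch is an assumption about the general approach, not the authors' published algorithm:

```python
def diameter_mm(object_px, reference_px, reference_mm):
    """Convert a pixel-space measurement to millimetres using an in-frame
    reference of known size: scale = reference_mm / reference_px."""
    return object_px * (reference_mm / reference_px)
```
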

  8. 40 CFR 70.2 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... section 183(f) of the Act; (11) Any standard or other requirement of the program to control air pollution... emissions which could not reasonably pass through a stack, chimney, vent, or other functionally-equivalent... means any stationary source (or any group of stationary sources that are located on one or more...

  9. Exploring cover crops as carbon sources for anaerobic soil disinfestation in a vegetable production system

    USDA-ARS?s Scientific Manuscript database

    In a raised-bed plasticulture vegetable production system utilizing anaerobic soil disinfestation (ASD) in Florida field trials, pathogen, weed, and parasitic nematode control was equivalent to or better than the methyl bromide control. Molasses was used as the labile carbon source to stimulate micr...

  10. 40 CFR 430.107 - Pretreatment standards for new sources (PSNS).

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Secondary Fiber Non-Deink Subcategory § 430.107 Pretreatment standards for new sources (PSNS). Except as... biocides: Subpart J [PSNS for secondary fiber non-deink facilities where paperboard from wastepaper is....00030 y = wastewater discharged in kgal per ton of product. a The following equivalent mass limitations...

  11. 40 CFR 430.106 - Pretreatment standards for existing sources (PSES).

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... CATEGORY Secondary Fiber Non-Deink Subcategory § 430.106 Pretreatment standards for existing sources (PSES... [PSES for secondary fiber non-deink facilities where paperboard from wastepaper is produced] Pollutant... = wastewater discharged in kgal per ton of product. a The following equivalent mass limitations are provided as...

  12. Differences in staining intensities affect reported occurrences and concentrations of Giardia spp. in surface drinking water sources

    EPA Science Inventory

    Aim USEPA Method 1623, or its equivalent, is currently used to monitor for protozoan contamination of surface drinking water sources worldwide. At least three approved staining kits used for detecting Cryptosporidium and Giardia are commercially available. This study focuses on ...

  13. 40 CFR 430.106 - Pretreatment standards for existing sources (PSES).

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Fiber Non-Deink Subcategory § 430.106 Pretreatment standards for existing sources (PSES). Except as... secondary fiber non-deink facilities where paperboard from wastepaper is produced] Pollutant or pollutant... per ton of product. a The following equivalent mass limitations are provided as guidance in cases when...

  14. 40 CFR 430.107 - Pretreatment standards for new sources (PSNS).

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... Secondary Fiber Non-Deink Subcategory § 430.107 Pretreatment standards for new sources (PSNS). Except as... biocides: Subpart J [PSNS for secondary fiber non-deink facilities where paperboard from wastepaper is....00030 y = wastewater discharged in kgal per ton of product. a The following equivalent mass limitations...

  15. 40 CFR 430.106 - Pretreatment standards for existing sources (PSES).

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Fiber Non-Deink Subcategory § 430.106 Pretreatment standards for existing sources (PSES). Except as... secondary fiber non-deink facilities where paperboard from wastepaper is produced] Pollutant or pollutant... per ton of product. a The following equivalent mass limitations are provided as guidance in cases when...

  16. 40 CFR 430.106 - Pretreatment standards for existing sources (PSES).

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... CATEGORY Secondary Fiber Non-Deink Subcategory § 430.106 Pretreatment standards for existing sources (PSES... [PSES for secondary fiber non-deink facilities where paperboard from wastepaper is produced] Pollutant... = wastewater discharged in kgal per ton of product. a The following equivalent mass limitations are provided as...

  17. 40 CFR 430.107 - Pretreatment standards for new sources (PSNS).

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...) EFFLUENT GUIDELINES AND STANDARDS THE PULP, PAPER, AND PAPERBOARD POINT SOURCE CATEGORY Secondary Fiber Non...: Subpart J [PSNS for secondary fiber non-deink facilities where paperboard from wastepaper is produced... = wastewater discharged in kgal per ton of product. a The following equivalent mass limitations are provided as...

  18. 40 CFR 430.107 - Pretreatment standards for new sources (PSNS).

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...) EFFLUENT GUIDELINES AND STANDARDS THE PULP, PAPER, AND PAPERBOARD POINT SOURCE CATEGORY Secondary Fiber Non...: Subpart J [PSNS for secondary fiber non-deink facilities where paperboard from wastepaper is produced... = wastewater discharged in kgal per ton of product. a The following equivalent mass limitations are provided as...

  19. 40 CFR 430.57 - Pretreatment standards for new sources (PSNS).

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... compounds as biocides. In cases when POTWs find it necessary to impose mass effluent standards, equivalent mass standards are provided as guidance: Subpart E Pollutant or pollutant property Supplemental PSNS... 40 Protection of Environment 29 2010-07-01 2010-07-01 false Pretreatment standards for new sources...

  20. 40 CFR 430.57 - Pretreatment standards for new sources (PSNS).

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... compounds as biocides. In cases when POTWs find it necessary to impose mass effluent standards, equivalent mass standards are provided as guidance: Subpart E Pollutant or pollutant property Supplemental PSNS... 40 Protection of Environment 30 2011-07-01 2011-07-01 false Pretreatment standards for new sources...

  1. Small-area snow surveys on the northern plains of North Dakota

    USGS Publications Warehouse

    Emerson, Douglas G.; Carroll, T.R.; Steppuhn, Harold

    1985-01-01

    Snow-cover data are needed for many facets of hydrology. The variation in snow cover over small areas is the focus of this study. The feasibility of using aerial surveys to obtain information on the snow water equivalent of the snow cover, in order to minimize the need for labor-intensive ground snow surveys, was evaluated. A low-flying aircraft was used to measure attenuation of natural terrestrial gamma radiation by snow cover. Aerial and ground snow surveys of eight 1-mile snow courses and one 4-mile snow course were used in the evaluation, with ground snow surveys used as the base to evaluate the aerial data. Each of the 1-mile snow courses consisted of a single land use, and all had the same terrain type (plane). The 4-mile snow course consists of a variety of land uses and the same terrain type (plane). Using the aerial snow-survey technique, the snow water equivalent of the 1-mile snow courses was measured with three passes of the aircraft. Use of more than one pass did not improve the results. The mean absolute difference between the aerial- and ground-measured snow water equivalents for the 1-mile snow courses was 26 percent (0.77 inches). The aerial snow water equivalents determined for the 1-mile snow courses were used to estimate the variations in the snow water equivalents over the 4-mile snow course. The weighted mean absolute difference for the 4-mile snow course was 27 percent (0.8 inches). Variations in snow water equivalents could not be verified adequately by segmenting the aerial snow-survey data because of the uniformity found in the snow cover. On the 4-mile snow course, about two-thirds of the aerial snow-survey data agreed with the ground snow-survey data within the accuracy of the aerial technique (±0.5 inch of the mean snow water equivalent).

  2. A general dual-bolus approach for quantitative DCE-MRI.

    PubMed

    Kershaw, Lucy E; Cheng, Hai-Ling Margaret

    2011-02-01

    To present a dual-bolus technique for quantitative dynamic contrast-enhanced MRI (DCE-MRI) and show that it can give an arterial input function (AIF) measurement equivalent to that from a single-bolus protocol. Five rabbits were imaged using a dual-bolus technique applicable for high-resolution DCE-MRI, incorporating a time resolved imaging of contrast kinetics (TRICKS) sequence for rapid temporal sampling. AIFs were measured from both the low-dose prebolus and the high-dose main bolus in the abdominal aorta. In one animal, TRICKS and fast spoiled gradient echo (FSPGR) acquisitions were compared. The scaled prebolus AIF was shown to match the main bolus AIF, with 95% confidence intervals overlapping for fits of gamma-variate functions to the first pass and linear fits to the washout phase, with the exception of one case. The AIFs measured using TRICKS and FSPGR were shown to be equivalent in one animal. The proposed technique can capture even the rapid circulation kinetics in the rabbit aorta, and the scaled prebolus AIF is equivalent to the AIF from a high-dose injection. This allows separate measurements of the AIF and tissue uptake curves, meaning that each curve can then be acquired using a protocol tailored to its specific requirements. Copyright © 2011 Elsevier Inc. All rights reserved.
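    The gamma-variate fits to the first pass mentioned above can be sketched as follows. This is a minimal illustration on a synthetic curve: the model parameters, time axis, and noise level are assumptions, not the rabbit AIF data from the study.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def gamma_variate(t, A, t0, alpha, beta):
        """Gamma-variate bolus model; zero before the arrival time t0."""
        dt = np.clip(t - t0, 0.0, None)
        return A * dt**alpha * np.exp(-dt / beta)

    # Synthetic first-pass curve standing in for a measured prebolus AIF
    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 60.0, 241)                 # seconds (illustrative)
    clean = gamma_variate(t, 5.0, 8.0, 2.5, 4.0)
    noisy = clean + rng.normal(0.0, 0.5, t.size)

    # Bounded fit keeps alpha and beta positive so dt**alpha stays well defined
    popt, _ = curve_fit(gamma_variate, t, noisy,
                        p0=[1.0, 5.0, 2.0, 3.0],
                        bounds=([0.0, 0.0, 0.1, 0.1], [100.0, 20.0, 10.0, 20.0]))
    A_fit, t0_fit, alpha_fit, beta_fit = popt
    ```

    Scaling a prebolus curve fitted this way onto the main-bolus concentration scale is what allows the two AIF measurements to be compared.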

  3. Lower Cody Shale (Niobrara equivalent) in the Bighorn Basin, Wyoming and Montana: thickness, distribution, and source rock potential

    USGS Publications Warehouse

    Finn, Thomas M.

    2014-01-01

    The lower shaly member of the Cody Shale in the Bighorn Basin, Wyoming and Montana is Coniacian to Santonian in age and is equivalent to the upper part of the Carlile Shale and the basal part of the Niobrara Formation in the Powder River Basin to the east. The lower Cody ranges in thickness from 700 to 1,200 feet and underlies much of the central part of the basin. It is composed of gray to black shale, calcareous shale, bentonite, and minor amounts of siltstone and sandstone. Sixty-six samples of the lower Cody Shale, collected from well cuttings, were analyzed using Rock-Eval and total organic carbon analysis to determine the source rock potential. Total organic carbon content averages 2.28 weight percent for the Carlile equivalent interval and reaches a maximum of nearly 5 weight percent. The Niobrara equivalent interval averages about 1.5 weight percent and reaches a maximum of over 3 weight percent, indicating that both intervals are good to excellent source rocks. S2 values from pyrolysis analysis also indicate that both intervals have good to excellent source rock potential. Plots of hydrogen index versus oxygen index, hydrogen index versus Tmax, and S2/S3 ratios indicate that the organic matter contains both Type II and Type III kerogen capable of generating oil and gas. Maps showing the distribution of kerogen types and organic richness for the lower shaly member of the Cody Shale show that it is more organic-rich and more oil-prone in the eastern and southeastern parts of the basin. Thermal maturity based on vitrinite reflectance (Ro), which ranges from 0.60 to 0.80 percent around the margins of the basin and increases to greater than 2.0 percent in the deepest part of the basin, indicates that the lower Cody is mature to overmature with respect to hydrocarbon generation.

  4. Formal Requirements-Based Programming for Complex Systems

    NASA Technical Reports Server (NTRS)

    Rash, James L.; Hinchey, Michael G.; Rouff, Christopher A.; Gracanin, Denis

    2005-01-01

    Computer science as a field has not yet produced a general method to mechanically transform complex computer system requirements into a provably equivalent implementation. Such a method would be one major step towards dealing with complexity in computing, yet it remains the elusive holy grail of system development. Currently available tools and methods that start with a formal model of a system and mechanically produce a provably equivalent implementation are valuable but not sufficient. The gap that such tools and methods leave unfilled is that the formal models cannot be proven to be equivalent to the system requirements as originated by the customer. For the classes of complex systems whose behavior can be described as a finite (but significant) set of scenarios, we offer a method for mechanically transforming requirements (expressed in restricted natural language, or appropriate graphical notations) into a provably equivalent formal model that can be used as the basis for code generation and other transformations. While other techniques are available, this method is unique in offering full mathematical tractability while using notations and techniques that are well known and well trusted. We illustrate the application of the method to an example procedure from the Hubble Robotic Servicing Mission currently under study and preliminary formulation at NASA Goddard Space Flight Center.

  5. The Evaluation of the 0.07 and 3 mm Dose Equivalent with a Portable Beta Spectrometer

    NASA Astrophysics Data System (ADS)

    Hoshi, Katsuya; Yoshida, Tadayoshi; Tsujimura, Norio; Okada, Kazuhiko

    Beta spectra of various nuclide species were measured using a commercially available compact spectrometer. The shape of the spectra obtained via the spectrometer closely matched that of the theoretical spectra. The beta dose equivalent at any depth was obtained as the product of the measured pulse height spectra and the appropriate conversion coefficients of ICRP Publication 74. The dose rates evaluated from the spectra were comparable with the reference dose rates of standard beta calibration sources. In addition, we were able to determine the dose equivalents with a relative error of indication of 10% without the need for complicated correction.
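    The depth-dose computation described above reduces to a weighted sum over energy bins. A minimal sketch follows; the bin energies, counts, and conversion coefficients are entirely illustrative placeholders, not the ICRP Publication 74 values or the measured spectra.

    ```python
    import numpy as np

    # Hypothetical pulse-height spectrum (counts per energy bin) and per-bin
    # dose-equivalent conversion coefficients at one depth (illustrative values)
    energy = np.array([0.2, 0.4, 0.6, 0.8, 1.0])      # MeV, bin centers
    counts = np.array([120.0, 300.0, 450.0, 280.0, 90.0])
    coeff = np.array([0.1, 0.9, 1.4, 1.6, 1.7])       # nSv per count (assumed)

    # Dose equivalent at the chosen depth = sum over bins of counts x coefficient
    h_depth = float(np.dot(counts, coeff))
    ```

    Repeating the dot product with a second coefficient vector for the 3 mm depth gives the other quantity the title refers to.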

  6. The effect of a paraffin screen on the neutron dose at the maze door of a 15 MV linear accelerator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krmar, M.; Kuzmanović, A.; Nikolić, D.

    2013-08-15

    Purpose: The purpose of this study was to explore the effects of a paraffin screen located at various positions in the maze on the neutron dose equivalent at the maze door. Methods: The neutron dose equivalent was measured at the maze door of a room containing a 15 MV linear accelerator for x-ray therapy. Measurements were performed for several positions of the paraffin screen covering only 27.5% of the cross-sectional area of the maze. The neutron dose equivalent was also measured at all screen positions. Two simple models of the neutron source were considered, in which the first assumed that the source was the cross-sectional area at the inner entrance of the maze, radiating neutrons in an isotropic manner. In the second model the reduction in the neutron dose equivalent at the maze door due to the paraffin screen was considered to be a function of the mean values of the neutron fluence and energy at the screen. Results: The results of this study indicate that the equivalent dose at the maze door was reduced by a factor of 3 through the use of a paraffin screen that was placed inside the maze. It was also determined that the contributions to the dosage from areas that were not covered by the paraffin screen, as viewed from the dosimeter, were 2.5 times higher than the contributions from the covered areas. This study also concluded that the contributions of the maze walls, ceiling, and floor to the total neutron dose equivalent were an order of magnitude lower than those from the surface at the far end of the maze. Conclusions: This study demonstrated that a paraffin screen could be used to reduce the neutron dose equivalent at the maze door by a factor of 3. This paper also found that the reduction of the neutron dose equivalent was a linear function of the area covered by the maze screen and that the decrease in the dose at the maze door could be modeled as an exponential function of the product φ·E at the screen.

  7. Sequential time interleaved random equivalent sampling for repetitive signal.

    PubMed

    Zhao, Yijiu; Liu, Jingjing

    2016-12-01

    Compressed sensing (CS) based sampling techniques exhibit many advantages over other existing approaches for sparse signal spectrum sensing; they are also incorporated into non-uniform sampling signal reconstruction to improve the efficiency, such as random equivalent sampling (RES). However, in CS based RES, only one sample of each acquisition is considered in the signal reconstruction stage, and this results in more acquisition runs and longer sampling time. In this paper, a sampling sequence is taken in each RES acquisition run, and the corresponding block measurement matrix is constructed using a Whittaker-Shannon interpolation formula. All the block matrices are combined into an equivalent measurement matrix with respect to all sampling sequences. We implemented the proposed approach with a multi-core analog-to-digital converter (ADC), whose ADC cores are time interleaved. A prototype realization of this proposed CS based sequential random equivalent sampling method has been developed. It is able to capture an analog waveform at an equivalent sampling rate of 40 GHz while sampled at 1 GHz physically. Experiments indicate that, for a sparse signal, the proposed CS based sequential random equivalent sampling exhibits high efficiency.
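    The role of the Whittaker-Shannon formula in building a measurement matrix can be sketched as follows. The grid rate, sampling instants, and test signal below are illustrative assumptions, not the prototype's 1 GHz/40 GHz parameters: each row of the matrix expresses one random-instant sample as a sinc-weighted combination of uniform-grid samples.

    ```python
    import numpy as np

    fs = 1.0                              # uniform grid rate of the equivalent samples
    n = np.arange(-500, 501)              # grid indices (long enough to tame truncation)
    x_grid = np.cos(2 * np.pi * 0.05 * n / fs)   # band-limited test signal on the grid

    # Random sampling instants, standing in for one RES acquisition sequence
    rng = np.random.default_rng(1)
    taus = rng.uniform(-50.0, 50.0, size=8)

    # Block measurement matrix from the Whittaker-Shannon interpolation formula:
    # Phi[m, k] = sinc(fs * tau_m - n_k), so y = Phi @ x_grid interpolates x at tau_m
    Phi = np.sinc(fs * taus[:, None] - n[None, :])
    y = Phi @ x_grid

    direct = np.cos(2 * np.pi * 0.05 * taus)     # ground truth at the same instants
    err = float(np.max(np.abs(y - direct)))      # small truncation error only
    ```

    Stacking one such block per acquisition run yields the combined measurement matrix that a sparse-recovery solver would then invert.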

  8. Chemiluminescence-based multivariate sensing of local equivalence ratios in premixed atmospheric methane-air flames

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tripathi, Markandey M.; Krishnan, Sundar R.; Srinivasan, Kalyan K.

    Chemiluminescence emissions from OH*, CH*, C2, and CO2 formed within the reaction zone of premixed flames depend upon the fuel-air equivalence ratio in the burning mixture. In the present paper, a new partial least squares regression (PLS-R) based multivariate sensing methodology is investigated and compared with an OH*/CH* intensity ratio-based calibration model for sensing equivalence ratio in atmospheric methane-air premixed flames. Five replications of spectral data at nine different equivalence ratios ranging from 0.73 to 1.48 were used in the calibration of both models. During model development, the PLS-R model was initially validated with the calibration data set using the leave-one-out cross validation technique. Since the PLS-R model used the entire raw spectral intensities, it did not need the nonlinear background subtraction of CO2 emission that is required for typical OH*/CH* intensity ratio calibrations. An unbiased spectral data set (not used in the PLS-R model development), for 28 different equivalence ratio conditions ranging from 0.71 to 1.67, was used to predict equivalence ratios using the PLS-R and the intensity ratio calibration models. It was found that the equivalence ratios predicted with the PLS-R based multivariate calibration model matched the experimentally measured equivalence ratios within 7%, whereas the OH*/CH* intensity ratio calibration grossly underpredicted equivalence ratios in comparison to measured equivalence ratios, especially under rich conditions (equivalence ratio > 1.2). The practical implications of the chemiluminescence-based multivariate equivalence ratio sensing methodology are also discussed.
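    A whole-spectrum PLS-R calibration with leave-one-out cross validation can be sketched with scikit-learn. The spectra below are entirely synthetic: the band positions, shapes, equivalence-ratio dependence, and noise level are assumptions for illustration, not the measured flame data.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import LeaveOneOut, cross_val_predict

    rng = np.random.default_rng(0)
    phi = np.repeat(np.linspace(0.73, 1.48, 9), 5)   # 9 ratios x 5 replications
    wl = np.linspace(300.0, 600.0, 120)              # wavelength axis, nm

    def spectrum(p):
        """Toy spectrum: OH*-like and CH*-like bands over a broad background."""
        oh = np.exp(-0.5 * ((wl - 310.0) / 6.0) ** 2) / p
        ch = np.exp(-0.5 * ((wl - 430.0) / 6.0) ** 2) * p
        bg = 0.3 * np.exp(-0.5 * ((wl - 450.0) / 120.0) ** 2)
        return oh + ch + bg + rng.normal(0.0, 0.01, wl.size)

    X = np.array([spectrum(p) for p in phi])

    # Calibrate on raw intensities; no background subtraction is needed because
    # the regression operates on the whole spectrum at once
    pls = PLSRegression(n_components=3)
    pred = cross_val_predict(pls, X, phi, cv=LeaveOneOut()).ravel()
    rmse = float(np.sqrt(np.mean((pred - phi) ** 2)))
    ```

    The same fitted model would then be applied to an unbiased validation set, mirroring the 28-condition prediction test described above.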

  9. Single-energy pediatric chest computed tomography with spectral filtration at 100 kVp: effects on radiation parameters and image quality.

    PubMed

    Bodelle, Boris; Fischbach, Constanze; Booz, Christian; Yel, Ibrahim; Frellesen, Claudia; Kaup, Moritz; Beeres, Martin; Vogl, Thomas J; Scholtz, Jan-Erik

    2017-06-01

    Most of the applied radiation dose at CT is in the lower photon energy range, which is of limited diagnostic importance. To investigate image quality and effects on radiation parameters of 100-kVp spectral filtration single-energy chest CT using a tin-filter at third-generation dual-source CT in comparison to standard 100-kVp chest CT. Thirty-three children referred for a non-contrast chest CT performed on a third-generation dual-source CT scanner were examined at 100 kVp with a dedicated tin filter with a tube current-time product resulting in standard protocol dose. We compared resulting images with images from children examined using standard single-source chest CT at 100 kVp. We assessed objective and subjective image quality and compared radiation dose parameters. Radiation dose was comparable for children 5 years old and younger, and it was moderately decreased for older children when using spectral filtration (P=0.006). Effective tube current increased significantly (P=0.0001) with spectral filtration, up to a factor of 10. Signal-to-noise ratio and image noise were similar for both examination techniques (P≥0.06). Subjective image quality showed no significant differences (P≥0.2). Using 100-kVp spectral filtration chest CT in children by means of a tube-based tin-filter on a third-generation dual-source CT scanner increases effective tube current up to a factor of 10 to provide similar image quality at equivalent dose compared to standard single-source CT without spectral filtration.

  10. Comparison of in vitro estrogenic activity and estrogen concentrations in source and treated waters from 25 U.S. drinking water treatment plants.

    PubMed

    Conley, Justin M; Evans, Nicola; Mash, Heath; Rosenblum, Laura; Schenck, Kathleen; Glassmeyer, Susan; Furlong, Ed T; Kolpin, Dana W; Wilson, Vickie S

    2017-02-01

    In vitro bioassays have been successfully used to screen for estrogenic activity in wastewater and surface water; however, few have been applied to treated drinking water. Here, extracts of source and treated water samples were assayed for estrogenic activity using T47D-KBluc cells and analyzed by liquid chromatography-Fourier transform mass spectrometry (LC-FTMS) for natural and synthetic estrogens (including estrone, 17β-estradiol, estriol, and ethinyl estradiol). None of the estrogens were detected above the LC-FTMS quantification limits in treated samples, and only 5 source waters had quantifiable concentrations of estrone, whereas 3 treated samples and 16 source samples displayed in vitro estrogenicity. Estrone accounted for the majority of estrogenic activity in the respective samples; however, the remaining samples that displayed estrogenic activity had no quantitative detections of known estrogenic compounds by chemical analyses. Source water estrogenicity (maximum, 0.47 ng 17β-estradiol equivalents (E2Eq) per liter) was below levels that have been linked to adverse effects in fish and other aquatic organisms. Treated water estrogenicity (maximum, 0.078 ng E2Eq per liter) was considerably below levels that are expected to be biologically relevant to human consumers. Overall, the advantage of using in vitro techniques in addition to analytical chemical determinations was displayed by the sensitivity of the T47D-KBluc bioassay, coupled with the ability to measure cumulative effects of mixtures, specifically when unknown chemicals may be present. Published by Elsevier B.V.

  11. Techniques for forced response involving discrete nonlinearities. I - Theory. II - Applications

    NASA Astrophysics Data System (ADS)

    Avitabile, Peter; Callahan, John O.

    Several new techniques developed for the forced response analysis of systems containing discrete nonlinear connection elements are presented and compared to the traditional methods. In particular, the techniques examined are the Equivalent Reduced Model Technique (ERMT), Modal Modification Response Technique (MMRT), and Component Element Method (CEM). The general theory of the techniques is presented, and applications are discussed with particular reference to the beam nonlinear system model using ERMT, MMRT, and CEM; frame nonlinear response using the three techniques; and comparison of the results obtained by using the ERMT, MMRT, and CEM models.

  12. Parallel State Space Construction for a Model Checking Based on Maximality Semantics

    NASA Astrophysics Data System (ADS)

    El Abidine Bouneb, Zine; Saīdouni, Djamel Eddine

    2009-03-01

    The main limiting factor of the model checker integrated in the concurrency verification environment FOCOVE [1, 2], which uses the maximality-based labeled transition system (denoted MLTS) as a true concurrency model [3, 4], is currently the amount of available physical memory. Many techniques have been developed to reduce the size of a state space. An interesting technique among them is the alpha equivalence reduction. A distributed memory execution environment offers yet another choice. The main contribution of this paper is to show that the parallel state space construction algorithm proposed in [5], which is based on interleaving semantics using LTS as the semantic model, may be easily adapted to a distributed implementation of the alpha equivalence reduction for maximality-based labeled transition systems.

  13. Understanding the paradox of selenium contamination in mercury mining areas: high soil content and low accumulation in rice.

    PubMed

    Zhang, Hua; Feng, Xinbin; Jiang, Chengxin; Li, Qiuhua; Liu, Yi; Gu, Chunhao; Shang, Lihai; Li, Ping; Lin, Yan; Larssen, Thorjørn

    2014-05-01

    Rice is an important source of Se for billions of people throughout the world. The Wanshan area can be categorized as a seleniferous region due to its high soil Se content, but the Se content in the rice in Wanshan is much lower than that from typical seleniferous regions with an equivalent soil Se level. To investigate why the Se bioaccumulation in Wanshan is low, we measured the soil Se speciation using a sequential partial dissolution technique. The results demonstrated that the bioavailable species only accounted for a small proportion of the total Se in the soils from Wanshan, a much lower quantity than that found in the seleniferous regions. The potential mechanisms may be associated with the existence of Hg contamination, which is likely related to the formation of an inert Hg-Se insoluble precipitate in soils in Wanshan. Copyright © 2014 Elsevier Ltd. All rights reserved.

  14. Atomic emission spectrometer/spectrograph for the determination of barium in microamounts of diatom ash

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bankston, D.C.; Fisher, N.S.

    1977-06-01

    The development and routine application of a method for the determination of trace levels of barium in microsamples (5-10 mg) of diatom ash is described. Acid-dissolved lithium metaborate fusion melts of ash samples are analyzed using a spectrometer/spectrograph equipped with a dc argon plasma jet excitation source and an echelle diffraction grating. Sample, standard, and blank solutions are buffered by lithium, contributed by the flux, to a degree sufficient to reduce matrix effects to acceptable levels. Previous barium determinations by other analytical techniques, on seven interlaboratory reference materials, have been used to establish the accuracy of our results. The average relative standard deviation for the instrumental analyses was 0.07. Using recommended instrument settings, moreover, the lowest concentration of barium visible in synthetic standard solutions lies just below 2 μg/L, which is equivalent to 2 μg/g in the ash.

  15. Electromagnetic interference of cardiac rhythmic monitoring devices to radio frequency identification: analytical analysis and mitigation methodology.

    PubMed

    Ogirala, Ajay; Stachel, Joshua R; Mickle, Marlin H

    2011-11-01

    Increasing density of wireless communication and the development of radio frequency identification (RFID) technology in particular have increased the susceptibility of patients equipped with cardiac rhythmic monitoring devices (CRMD) to environmental electromagnetic interference (EMI). Several organizations reported observing CRMD EMI from different sources. This paper focuses on mathematically analyzing the energy as perceived by the implanted device, i.e., voltage. Radio frequency (RF) energy transmitted by RFID interrogators is considered as an example. A simplified front-end equivalent circuit of the CRMD sensing circuitry is proposed for the analysis, following extensive black-box testing of several commercial pacemakers and implantable defibrillators. After careful examination of the mechanics of CRMD signal processing in identifying the QRS complex of the heartbeat, a mitigation technique is proposed. The mitigation methodology introduced in this paper is logical in approach, simple to implement, and therefore applicable to all wireless communication protocols.

  16. Optical aperture synthesis with electronically connected telescopes

    PubMed Central

    Dravins, Dainis; Lagadec, Tiphaine; Nuñez, Paul D.

    2015-01-01

    Highest resolution imaging in astronomy is achieved by interferometry, connecting telescopes over increasingly longer distances and at successively shorter wavelengths. Here, we present the first diffraction-limited images in visual light, produced by an array of independent optical telescopes, connected electronically only, with no optical links between them. With an array of small telescopes, second-order optical coherence of the sources is measured through intensity interferometry over 180 baselines between pairs of telescopes, and two-dimensional images reconstructed. The technique aims at diffraction-limited optical aperture synthesis over kilometre-long baselines to reach resolutions showing details on stellar surfaces and perhaps even the silhouettes of transiting exoplanets. Intensity interferometry circumvents problems of atmospheric turbulence that constrain ordinary interferometry. Since the electronic signal can be copied, many baselines can be built up between dispersed telescopes, and over long distances. Using arrays of air Cherenkov telescopes, this should enable the optical equivalent of interferometric arrays currently operating at radio wavelengths. PMID:25880705

  17. In-vivo assessment of total body protein in rats by prompt-γ neutron activation analysis

    NASA Astrophysics Data System (ADS)

    Stamatelatos, Ion E.; Boozer, Carol N.; Ma, Ruimei; Yasumura, Seiichi

    1997-02-01

    A prompt-γ neutron activation analysis facility for in vivo determination of total body protein (TBP) in rats has been designed. TBP is determined in vivo by assessment of total body nitrogen. The facility is based on a 252Cf radionuclide neutron source within a heavy water moderator assembly and two NaI(Tl) scintillation detectors. The in vivo precision of the technique, as estimated by three repeated measurements of 15 rats, is 6 percent for a radiation dose equivalent of 60 mSv. The radiation dose per measurement is sufficiently low to enable serial measurements on the same animal. The MCNP-4A Monte Carlo transport code was utilized to calculate thermal neutron flux correction factors to account for differences in size and shape of the rats and calibration phantoms. Good agreement was observed when comparing body nitrogen assessment by prompt-γ neutron activation and chemical carcass analysis.

  18. The influence of visible light and inorganic pigments on fluorescence excitation emission spectra of egg-, casein- and collagen-based painting media

    NASA Astrophysics Data System (ADS)

    Nevin, A.; Anglos, D.; Cather, S.; Burnstock, A.

    2008-07-01

    Spectrofluorimetric analysis of proteinaceous binding media is particularly promising because proteins employed in paintings are often fluorescent and media from different sources have significantly different fluorescence spectral profiles. Protein-based binding media derived from eggs, milk and animal tissue have been used for painting and for conservation, but their analysis using non-destructive techniques is complicated by interferences with pigments, their degradation and their low concentration. Changes in the fluorescence excitation emission spectra of films of binding media following artificial ageing to an equivalent of 50 and 100 years of museum lighting include the reduction of bands ascribed to tyrosine, tryptophan and Maillard reaction products and an increase in fluorescent photodegradation. Fluorescence of naturally aged paint is dependent on the nature of the pigment present and, with egg-based media, in comparison with un-pigmented films, emissions ascribed to amino acids are more pronounced.

  19. Optimizing an Actuator Array for the Control of Multi-Frequency Noise in Aircraft Interiors

    NASA Technical Reports Server (NTRS)

    Palumbo, D. L.; Padula, S. L.

    1997-01-01

    Techniques developed for selecting an optimized actuator array for interior noise reduction at a single frequency are extended to the multi-frequency case. Transfer functions for 64 actuators were obtained at 5 frequencies from ground testing the rear section of a fully trimmed DC-9 fuselage. A single loudspeaker facing the left side of the aircraft was the primary source. A combinatorial search procedure (tabu search) was employed to find optimum actuator subsets of from 2 to 16 actuators. Noise reduction predictions derived from the transfer functions were used as a basis for evaluating actuator subsets during optimization. Results indicate that it is necessary to constrain actuator forces during optimization. Unconstrained optimizations selected actuators which require unrealistically large forces. Two methods of constraint are evaluated. It is shown that a fast, but approximate, method yields results equivalent to an accurate, but computationally expensive, method.
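    The combinatorial subset selection described above can be sketched as a basic tabu search over single-actuator swaps, with a crude force constraint applied during evaluation. Everything numeric here is a random placeholder: the transfer functions, primary field, force limit, and subset size are assumptions, not the measured DC-9 data or the authors' exact algorithm.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n_act, n_mics = 16, 24
    # Complex transfer functions (mics x actuators) and primary field: placeholders
    H = rng.normal(size=(n_mics, n_act)) + 1j * rng.normal(size=(n_mics, n_act))
    p = rng.normal(size=n_mics) + 1j * rng.normal(size=n_mics)
    f_max = 2.0  # per-actuator force limit (assumed)

    def residual(subset):
        """Residual field norm using least-squares control forces, force-limited."""
        Hs = H[:, sorted(subset)]
        f, *_ = np.linalg.lstsq(Hs, -p, rcond=None)
        f = f * min(1.0, f_max / max(np.abs(f).max(), 1e-12))  # constrain forces
        return np.linalg.norm(p + Hs @ f)

    def tabu_search(k=4, iters=30, tenure=5):
        current = set(rng.choice(n_act, size=k, replace=False).tolist())
        best, best_val = set(current), residual(current)
        tabu = {}  # (removed, added) -> iteration until which the move is tabu
        for it in range(iters):
            moves = []
            for out in current:
                for inn in set(range(n_act)) - current:
                    cand = (current - {out}) | {inn}
                    val = residual(cand)
                    # skip tabu moves unless they beat the best found (aspiration)
                    if tabu.get((out, inn), -1) >= it and val >= best_val:
                        continue
                    moves.append((val, out, inn, cand))
            val, out, inn, cand = min(moves, key=lambda m: m[0])
            current = cand
            tabu[(inn, out)] = it + tenure  # forbid reversing this swap for a while
            if val < best_val:
                best, best_val = set(cand), val
        return best, best_val

    subset, val = tabu_search()
    ```

    Evaluating candidates through a constrained force solve, rather than unconstrained least squares, reflects the paper's finding that unconstrained optimization selects actuators requiring unrealistically large forces.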

  20. Age-dependence of the average and equivalent refractive indices of the crystalline lens

    PubMed Central

    Charman, W. Neil; Atchison, David A.

    2013-01-01

    Lens average and equivalent refractive indices are required for purposes such as lens thickness estimation and optical modeling. We modeled the refractive index gradient as a power function of the normalized distance from lens center. Average index along the lens axis was estimated by integration. Equivalent index was estimated by raytracing through a model eye to establish ocular refraction, and then backward raytracing to determine the constant refractive index yielding the same refraction. Assuming center and edge indices remained constant with age, at 1.415 and 1.37 respectively, average axial refractive index increased (1.408 to 1.411) and equivalent index decreased (1.425 to 1.420) with age increase from 20 to 70 years. These values agree well with experimental estimates based on different techniques, although the latter show considerable scatter. The simple model of index gradient gives reasonable estimates of average and equivalent lens indices, although refinements in modeling and measurements are required. PMID:24466474
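    The average-index integration described above can be sketched numerically. The power-law exponent and the trapezoidal scheme below are illustrative assumptions, not the authors' exact gradient model; the analytic mean for this form is n_center - (n_center - n_edge)/(p + 1), which the quadrature should reproduce.

```python
def average_axial_index(n_center=1.415, n_edge=1.37, p=4.0, samples=10001):
    """Average refractive index along the lens axis for a power-law gradient.

    The gradient is modeled as n(x) = n_center - (n_center - n_edge)*|x|**p,
    with x the normalized distance from the lens center (x in [-1, 1]).
    """
    h = 2.0 / (samples - 1)
    total = 0.0
    for i in range(samples):
        x = -1.0 + i * h
        n = n_center - (n_center - n_edge) * abs(x) ** p
        w = 0.5 if i in (0, samples - 1) else 1.0  # trapezoidal end weights
        total += w * n
    return total * h / 2.0  # divide by the interval length, 2
```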

  1. Evaluation of Exposure From a Low Energy X-Ray Device Using Thermoluminescent Dosimeters

    NASA Technical Reports Server (NTRS)

    Edwards, David L.; Harris, William S., Jr.

    1997-01-01

    The exposure from an electron beam welding device was evaluated using thermoluminescent dosimeters (TLDs). The device generated low energy X-rays that the current dose equivalent conversion algorithm was not designed to evaluate, making it necessary to obtain additional information on TLD operation at the photon energies encountered with the device. This was accomplished by performing irradiations at the National Institute of Standards and Technology (NIST) using low energy X-ray techniques. The resulting data were used to determine TLD badge response to low energy X-rays and to establish the relationship between TLD element response and the dose equivalent at specific depths in tissue for these photon energies. The new energy/dose equivalent calibration data were used to calculate the shallow and eye dose equivalents of badges exposed to the device.
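    The calibration step amounts to applying an energy-dependent conversion factor to the element response. The sketch below shows the interpolation mechanics only; the table values are hypothetical placeholders, not NIST or report data.

```python
import bisect

# Hypothetical calibration table: photon energy (keV) -> factor converting
# TLD element response to dose equivalent (values are illustrative only).
CAL_POINTS = [(20.0, 1.8), (40.0, 1.3), (80.0, 1.05), (150.0, 1.0)]

def dose_equivalent(response, energy_kev):
    """Linearly interpolate the calibration factor at energy_kev, clamp at
    the table ends, and apply it to the measured element response."""
    energies = [e for e, _ in CAL_POINTS]
    factors = [f for _, f in CAL_POINTS]
    if energy_kev <= energies[0]:
        factor = factors[0]
    elif energy_kev >= energies[-1]:
        factor = factors[-1]
    else:
        i = bisect.bisect_left(energies, energy_kev)
        e0, e1 = energies[i - 1], energies[i]
        f0, f1 = factors[i - 1], factors[i]
        factor = f0 + (f1 - f0) * (energy_kev - e0) / (e1 - e0)
    return response * factor
```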

  2. Shale characterization in mass transport complex as a potential source rock: An example from onshore West Java Basin, Indonesia

    NASA Astrophysics Data System (ADS)

    Nugraha, A. M. S.; Widiarti, R.; Kusumah, E. P.

    2017-12-01

    This study describes a deep-water slump facies shale of the Early Miocene Jatiluhur/Cibulakan Formation to assess its potential as a source rock in an active tectonic region, onshore West Java. The formation is equivalent to the Gumai Formation, which is well known as a prolific source rock, alongside the Oligocene Talang Akar Formation, in the North West Java Basin, Indonesia. The equivalent shale formation is expected to show similar source rock potential toward onshore Central Java. The shale samples were taken onshore, 150 km away from the basin. To be categorized as a potential source rock, the shale must be rich in organic matter, contain good-quality kerogen, and be thermally mature. Investigations by petrography, X-ray diffraction (XRD), and backscattered electron imaging show heterogeneous mineralogy in the shales. The mineralogy consists of clay minerals, minor quartz, muscovite, calcite, chlorite, clinopyroxene, and other weathered minerals. This composition makes the shale more brittle. Scanning electron microscope (SEM) analysis indicates secondary porosities and microstructures. Total Organic Carbon (TOC) is 0.8-1.1 wt%, compared with 1.5-8 wt% for the basinal shale. The properties of this outcropping formation indicate a good potential source rock that may be found in the subsurface with better quality and maturity.

  3. 78 FR 31315 - Kraft Pulp Mills NSPS Review

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-23

    ... furnaces to levels equivalent to the new source PM limits in the NESHAP for chemical recovery combustion... will enable a broader understanding of condensable PM emissions from pulp and paper combustion sources... for 0.5 seconds (no ppmdv limit). 2. Use non-combustion control device with a limit of 5 ppmdv. 3. It...

  4. 40 CFR 63.602 - Standards for existing sources.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...) National Emission Standards for Hazardous Air Pollutants From Phosphoric Acid Manufacturing Plants § 63.602 Standards for existing sources. (a) Wet process phosphoric acid process line. On and after the date on which... of equivalent P2O5 feed (0.020 lb/ton). (b) Superphosphoric acid process line—(1) Vacuum evaporation...

  5. 40 CFR 63.602 - Standards for existing sources.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...) National Emission Standards for Hazardous Air Pollutants From Phosphoric Acid Manufacturing Plants § 63.602 Standards for existing sources. (a) Wet process phosphoric acid process line. On and after the date on which... of equivalent P2O5 feed (0.020 lb/ton). (b) Superphosphoric acid process line—(1) Vacuum evaporation...

  6. 40 CFR 63.602 - Standards for existing sources.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...) National Emission Standards for Hazardous Air Pollutants From Phosphoric Acid Manufacturing Plants § 63.602 Standards for existing sources. (a) Wet process phosphoric acid process line. On and after the date on which... of equivalent P2O5 feed (0.020 lb/ton). (b) Superphosphoric acid process line—(1) Vacuum evaporation...

  7. 21 CFR 573.140 - Ammoniated cottonseed meal.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... required by the act, the following: (1) The name of the additive. (2) The maximum percentage of equivalent crude protein from the nonprotein nitrogen. (3) Directions for use to provide not more than 20 percent... source of protein and/or as a source of nonprotein nitrogen in an amount not to exceed 20 percent of the...

  8. 21 CFR 573.140 - Ammoniated cottonseed meal.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... required by the act, the following: (1) The name of the additive. (2) The maximum percentage of equivalent crude protein from the nonprotein nitrogen. (3) Directions for use to provide not more than 20 percent... source of protein and/or as a source of nonprotein nitrogen in an amount not to exceed 20 percent of the...

  9. 21 CFR 573.140 - Ammoniated cottonseed meal.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... required by the act, the following: (1) The name of the additive. (2) The maximum percentage of equivalent crude protein from the nonprotein nitrogen. (3) Directions for use to provide not more than 20 percent... source of protein and/or as a source of nonprotein nitrogen in an amount not to exceed 20 percent of the...

  10. 21 CFR 573.140 - Ammoniated cottonseed meal.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... required by the act, the following: (1) The name of the additive. (2) The maximum percentage of equivalent crude protein from the nonprotein nitrogen. (3) Directions for use to provide not more than 20 percent... source of protein and/or as a source of nonprotein nitrogen in an amount not to exceed 20 percent of the...

  11. 21 CFR 573.140 - Ammoniated cottonseed meal.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... required by the act, the following: (1) The name of the additive. (2) The maximum percentage of equivalent crude protein from the nonprotein nitrogen. (3) Directions for use to provide not more than 20 percent... source of protein and/or as a source of nonprotein nitrogen in an amount not to exceed 20 percent of the...

  12. The Scaling of Broadband Shock-Associated Noise with Increasing Temperature

    NASA Technical Reports Server (NTRS)

    Miller, Steven A. E.

    2013-01-01

    A physical explanation for the saturation of broadband shock-associated noise (BBSAN) intensity with increasing jet stagnation temperature has eluded investigators. An explanation is proposed for this phenomenon with the use of an acoustic analogy. To isolate the relevant physics, the scaling of BBSAN peak intensity level at the sideline observer location is examined. The equivalent source within the framework of an acoustic analogy for BBSAN is based on local field quantities at shock wave shear layer interactions. The equivalent source, combined with accurate calculations of the propagation of sound through the jet shear layer using an adjoint vector Green's function solver of the linearized Euler equations, allows for predictions that retain the scaling with respect to stagnation pressure and capture the saturation of BBSAN with increasing stagnation temperature. The sources and vector Green's function have arguments involving the steady Reynolds-Averaged Navier-Stokes solution of the jet. It is proposed that saturation of BBSAN with increasing jet temperature occurs due to a balance between the amplification of the sound propagating through the shear layer and the source term scaling.

  13. Seismic Interferometry at a Large, Dense Array: Capturing the Wavefield at the Source Physics Experiment

    NASA Astrophysics Data System (ADS)

    Matzel, E.; Mellors, R. J.; Magana-Zook, S. A.

    2016-12-01

    Seismic interferometry is based on the observation that the Earth's background wavefield includes coherent energy, which can be recovered by observing over long time periods, allowing the incoherent energy to cancel out. The cross correlation of the energy recorded at a pair of stations yields an estimate of the Green's Function (GF) and is equivalent to the record of a simple source located at one of the stations as recorded by the other. This allows high-resolution imaging beneath dense seismic networks even in areas of low seismicity. The power of these inter-station techniques increases rapidly as the number of seismometers in a network increases. For large networks the number of correlations computed can run into the millions, and this becomes a "big-data" problem in which data management dominates the efficiency of the computations. In this study, we use several methods of seismic interferometry to obtain highly detailed images at the site of the Source Physics Experiment (SPE). The objective of SPE is to obtain a physics-based understanding of how seismic waves are created at and scattered near the source. In 2015, a temporary deployment of 1,000 closely spaced geophones was added to the main network of instruments at the site. We focus on three interferometric techniques: shot interferometry (SI) uses the SPE shots as rich sources of high frequency, high signal energy; coda interferometry (CI) isolates the energy from the scattered wavefield of distant earthquakes; ambient noise correlation (ANC) uses the energy of the ambient background field. In each case, the data recorded at one seismometer are correlated with the data recorded at another to obtain an estimate of the GF between the two. The large network of mixed geophone and broadband instruments at the SPE allows us to calculate over 500,000 GFs, which we use to characterize the site and measure the localized wavefield. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
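    The station-pair cross-correlation at the heart of these techniques can be sketched directly. The synthetic below is an assumption for illustration: a single white-noise source reaching two stations with a fixed travel-time difference, so the correlation peak recovers that lag.

```python
import numpy as np

def empirical_green_lag(rec_a, rec_b):
    """Cross-correlate two station records; the peak lag estimates the
    inter-station travel time embedded in the empirical Green's function."""
    n = len(rec_a)
    corr = np.correlate(rec_a, rec_b, mode="full")
    lags = np.arange(-n + 1, n)  # lag convention for mode='full'
    return int(lags[np.argmax(corr)])

# Synthetic test: one noise source recorded at two stations, with the wave
# reaching station B five samples after station A (circular shift for brevity).
rng = np.random.default_rng(0)
source = rng.standard_normal(4096)
station_a = source
station_b = np.roll(source, 5)
lag = empirical_green_lag(station_b, station_a)  # expected lag of +5 samples
```

    In practice long records are split into windows, whitened, correlated, and stacked so the coherent part survives; the single correlation above is the kernel of that pipeline.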

  14. The direct collapse of a massive black hole seed under the influence of an anisotropic Lyman-Werner source

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Regan, John A.; Johansson, Peter H.; Wise, John H., E-mail: john.regan@helsinki.fi

    2014-11-10

    The direct collapse model of supermassive black hole seed formation requires that the gas cools predominantly via atomic hydrogen. To this end, we simulate the effect of an anisotropic radiation source on the collapse of a halo at high redshift. The radiation source is placed at a distance of 3 kpc (physical) from the collapsing object and is set to emit monochromatically in the center of the Lyman-Werner (LW) band. The LW radiation emitted from the high redshift source is followed self-consistently using ray tracing techniques. Due to self-shielding, a small amount of H2 is able to form at the very center of the collapsing halo even under very strong LW radiation. Furthermore, we find that a radiation source emitting >10^54 (∼10^3 J_21) photons s^-1 is required to cause the collapse of a clump of M ∼ 10^5 M☉. The resulting accretion rate onto the collapsing object is ∼0.25 M☉ yr^-1. Our results display significant differences, compared to the isotropic radiation field case, in terms of the H2 fraction at an equivalent radius. These differences will significantly affect the dynamics of the collapse. With the inclusion of a strong anisotropic radiation source, the final mass of the collapsing object is found to be M ∼ 10^5 M☉. This is consistent with predictions for the formation of a supermassive star or quasi-star leading to a supermassive black hole.

  15. Equivalent circuit of radio frequency-plasma with the transformer model

    NASA Astrophysics Data System (ADS)

    Nishida, K.; Mochizuki, S.; Ohta, M.; Yasumoto, M.; Lettry, J.; Mattei, S.; Hatayama, A.

    2014-02-01

    The LINAC4 H- source is a radio frequency (RF) driven source. In the RF system, the load impedance, which includes the H- source, must be matched to that of the final amplifier. We model the RF plasma inside the H- source as circuit elements using a transformer model so that the characteristics of the load impedance become calculable. It has been shown that modeling based on the transformer model works well to predict the resistance and inductance of the plasma.
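    In the transformer picture, the plasma acts as a lossy single-turn secondary, and the impedance its resistance and inductance reflect back to the antenna is what the matching network must handle. The sketch below uses the standard reflected-impedance formula with made-up component values, not the LINAC4 parameters.

```python
import math

def primary_impedance(omega, l_coil, l_plasma, mutual, r_plasma):
    """Impedance seen at the antenna terminals when the plasma is treated as
    the secondary of a transformer: Z = j*w*L1 + (w*M)**2 / (R2 + j*w*L2)."""
    z_secondary = complex(r_plasma, omega * l_plasma)
    return complex(0.0, omega * l_coil) + (omega * mutual) ** 2 / z_secondary

# Example with illustrative values: 2 MHz drive, microhenry-scale inductances
omega = 2.0 * math.pi * 2e6
z = primary_impedance(omega, l_coil=1e-6, l_plasma=0.1e-6, mutual=0.2e-6,
                      r_plasma=1.0)
```

    The real part of `z` is the plasma resistance referred to the primary, which is the quantity the matching calculation needs; it vanishes as the plasma resistance grows large, as the formula predicts.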

  16. Magnetic Field Analysis of Lorentz Motors Using a Novel Segmented Magnetic Equivalent Circuit Method

    PubMed Central

    Qian, Junbing; Chen, Xuedong; Chen, Han; Zeng, Lizhan; Li, Xiaoqing

    2013-01-01

    A simple and accurate method based on the magnetic equivalent circuit (MEC) model is proposed in this paper to predict the magnetic flux density (MFD) distribution of the air-gap in a Lorentz motor (LM). In conventional MEC methods, the permanent magnet (PM) is treated as one common source and all branches of the MEC are coupled together into a MEC network. In our proposed method, every PM flux source is divided into three sub-sections (the outer, the middle and the inner). Thus, the MEC of the LM is divided correspondingly into three independent sub-loops. As the size of the middle sub-MEC is small enough, it can be treated as an ideal MEC and solved accurately. Combined with decoupled analyses of the outer and inner MECs, the MFD distribution in the air-gap can be approximated by a quadratic curve, and the complex calculation of reluctances in MECs can be avoided. The segmented magnetic equivalent circuit (SMEC) method is used to analyze an LM, and its effectiveness is demonstrated by comparison with FEA, conventional MEC and experimental results. PMID:23358368
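    The basic MEC bookkeeping that the segmented method builds on can be shown with a single-loop series circuit, which is a deliberately simpler case than the paper's coupled sub-loops: each path segment contributes a reluctance, and the loop flux is the MMF divided by their sum.

```python
def reluctance(length, area, mu_r, mu0=4e-7 * 3.141592653589793):
    """Reluctance of a uniform magnetic path segment: R = l / (mu0 * mu_r * A)."""
    return length / (mu0 * mu_r * area)

def airgap_flux_density(mmf, segments, gap_length, gap_area):
    """Air-gap flux density of a single-loop magnetic equivalent circuit.

    segments: list of (length, area, mu_r) tuples for the iron/PM path pieces.
    The loop flux is the MMF divided by the total series reluctance."""
    total = reluctance(gap_length, gap_area, 1.0)  # air gap, mu_r = 1
    for length, area, mu_r in segments:
        total += reluctance(length, area, mu_r)
    flux = mmf / total
    return flux / gap_area
```

    With no iron segments this reduces to B = mu0 * MMF / gap_length; adding iron reluctance lowers the gap field, which is the effect the paper's decoupled sub-loops account for more carefully.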

  17. Subspace-based analysis of the ERT inverse problem

    NASA Astrophysics Data System (ADS)

    Ben Hadj Miled, Mohamed Khames; Miller, Eric L.

    2004-05-01

    In a previous work, we proposed a source-type formulation to the electrical resistance tomography (ERT) problem. Specifically, we showed that inhomogeneities in the medium can be viewed as secondary sources embedded in the homogeneous background medium and located at positions associated with variation in electrical conductivity. Assuming a piecewise constant conductivity distribution, the support of the equivalent sources is equal to the boundary of the inhomogeneity. The estimation of the anomaly shape takes the form of an inverse source-type problem. In this paper, we explore the use of subspace methods to localize the secondary equivalent sources associated with discontinuities in the conductivity distribution. Our first alternative is the multiple signal classification (MUSIC) algorithm, which is commonly used in the localization of multiple sources. The idea is to project a finite collection of plausible pole (or dipole) sources onto an estimated signal subspace and select those with the largest correlations. In ERT, secondary sources are excited simultaneously but in different ways, i.e. with distinct amplitude patterns, depending on the locations and amplitudes of primary sources. If the number of receivers is "large enough", different source configurations can lead to a set of observation vectors that span the data subspace. However, since sources that are spatially close to each other have highly correlated signatures, separation of such signals becomes very difficult in the presence of noise. To overcome this problem we consider iterative MUSIC algorithms like R-MUSIC and RAP-MUSIC. These recursive algorithms pose a computational burden as they require multiple large combinatorial searches. Results obtained with these algorithms using simulated data of different conductivity patterns are presented.
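    The MUSIC projection step can be illustrated in its classic narrowband array-processing form, which is simpler than the ERT setting but shows the same mechanics: estimate the noise subspace from the data covariance, then scan candidate sources and keep those nearly orthogonal to it. The array geometry and noise level below are illustrative assumptions.

```python
import numpy as np

def music_pseudospectrum(X, n_sources, scan_deg, spacing=0.5):
    """Narrowband MUSIC for a uniform linear array.

    X: (sensors, snapshots) complex data; spacing in wavelengths.
    Returns the pseudospectrum over the scan angles (degrees)."""
    m = X.shape[0]
    R = X @ X.conj().T / X.shape[1]      # sample covariance
    _, vecs = np.linalg.eigh(R)          # eigenvalues in ascending order
    noise = vecs[:, : m - n_sources]     # noise-subspace eigenvectors
    idx = np.arange(m)
    spec = []
    for theta in np.deg2rad(scan_deg):
        steer = np.exp(2j * np.pi * spacing * idx * np.sin(theta))
        # small projection onto the noise subspace -> large pseudospectrum peak
        spec.append(1.0 / np.linalg.norm(noise.conj().T @ steer) ** 2)
    return np.array(spec)

# Synthetic check: one source at +20 degrees, 8 sensors, light noise
rng = np.random.default_rng(1)
m, snaps = 8, 200
theta0 = np.deg2rad(20.0)
a0 = np.exp(2j * np.pi * 0.5 * np.arange(m) * np.sin(theta0))
sig = rng.standard_normal(snaps) + 1j * rng.standard_normal(snaps)
X = np.outer(a0, sig) + 0.01 * (rng.standard_normal((m, snaps))
                                + 1j * rng.standard_normal((m, snaps)))
scan = np.arange(-90, 91)
estimate = int(scan[int(np.argmax(music_pseudospectrum(X, 1, scan)))])
```

    The correlated-signature problem the abstract mentions corresponds here to closely spaced scan angles whose steering vectors are nearly parallel, which is what motivates the recursive R-MUSIC and RAP-MUSIC variants.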

  18. Microfluidic perfusion culture system for multilayer artery tissue models.

    PubMed

    Yamagishi, Yuka; Masuda, Taisuke; Matsusaki, Michiya; Akashi, Mitsuru; Yokoyama, Utako; Arai, Fumihito

    2014-11-01

    We described an assembly technique and perfusion culture system for constructing artery tissue models. This technique differed from previous studies in that it did not require a solid biodegradable scaffold; therefore, using sheet-like tissues, it allowed the facile fabrication of tubular tissues that can be used as models. The fabricated artery tissue models had a multilayer structure. The assembly technique and perfusion culture system were applicable to many different sizes of fabricated arteries. The shape of the fabricated artery tissue models was maintained by the perfusion culture system; furthermore, the system reproduced the in vivo environment and allowed mechanical stimulation of the arteries. The multilayer structure of the artery tissue model was observed using fluorescent dyes. The equivalent Young's modulus was measured by applying internal pressure to the multilayer tubular tissues. The aim of this study was to determine whether fabricated artery tissue models maintained their mechanical properties as they developed. We demonstrated both the rapid fabrication of multilayer tubular tissues that can be used as model arteries and the measurement of their equivalent Young's modulus in a suitable perfusion culture environment.
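    One common way to reduce a pressurized-tube measurement to an equivalent modulus is the thin-walled approximation, sketched below; this is a standard textbook reduction offered for illustration, not necessarily the exact analysis used in the study.

```python
def equivalent_youngs_modulus(pressure, radius, thickness, radial_displacement):
    """Thin-walled tube approximation: hoop stress sigma = P*r/t and hoop
    strain eps = dr/r give an equivalent modulus E = sigma / eps."""
    hoop_stress = pressure * radius / thickness
    hoop_strain = radial_displacement / radius
    return hoop_stress / hoop_strain

# Example: 1 kPa internal pressure, 5 mm radius, 0.5 mm wall, 50 um dilation
modulus = equivalent_youngs_modulus(1000.0, 5e-3, 5e-4, 5e-5)
```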

  19. 40 CFR 421.91 - Specialized definitions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... STANDARDS NONFERROUS METALS MANUFACTURING POINT SOURCE CATEGORY Metallurgical Acid Plants Subcategory § 421... percent equivalent sulfuric acid, H2 SO4 capacity. [50 FR 38342, Sept. 20, 1985] ...

  20. Development of a multi-element microdosimetric detector based on a thick gas electron multiplier

    NASA Astrophysics Data System (ADS)

    Anjomani, Z.; Hanu, A. R.; Prestwich, W. V.; Byun, S. H.

    2017-03-01

    A prototype multi-element gaseous microdosimetric detector was developed using the Thick Gas Electron Multiplier (THGEM) technique. The detector aims at measuring neutron and gamma-ray dose rates in weak neutron-gamma radiation fields. The multi-element design was employed to increase the neutron detection efficiency. The prototype THGEM multi-element detector consists of three layers of tissue equivalent plastic hexagons, and each layer houses a hexagonal array of seven cylindrical gas cavity elements with equal heights and diameters of 17 mm. The final detector structure incorporates 21 gaseous volumes. Owing to the absence of wire electrodes, the THGEM multi-element detector offers flexible and convenient fabrication. The detector's responses to neutrons and gamma rays were investigated using the McMaster Tandetron 7Li(p,n) neutron source. The dosimetric performance of the detector is presented in contrast to the response of a commercial tissue equivalent proportional counter (TEPC). Compared to the standard TEPC response, the detector gave a consistent microdosimetric response with an average discrepancy of 8% in measured neutron absorbed dose. An improvement of a factor of 3.0 in neutron detection efficiency has been accomplished with only a small degradation in energy resolution. However, the low-energy cutoff is about 6 keV/μm, which is not sufficient to measure the gamma-ray dose. This problem will be addressed by increasing the electron multiplication gain using double THGEM layers.

  1. Numerical dissipation vs. subgrid-scale modelling for large eddy simulation

    NASA Astrophysics Data System (ADS)

    Dairay, Thibault; Lamballais, Eric; Laizet, Sylvain; Vassilicos, John Christos

    2017-05-01

    This study presents an alternative way to perform large eddy simulation based on a targeted numerical dissipation introduced by the discretization of the viscous term. It is shown that this regularisation technique is equivalent to the use of spectral vanishing viscosity. The flexibility of the method ensures high-order accuracy while controlling the level and spectral features of this purely numerical viscosity. A Pao-like spectral closure based on physical arguments is used to scale this numerical viscosity a priori. It is shown that this way of approaching large eddy simulation is more efficient and accurate than the use of the very popular Smagorinsky model in both its standard and dynamic versions. The main strength of being able to correctly calibrate numerical dissipation is the possibility of regularising the solution at the mesh scale. Thanks to this property, it is shown that the solution can be seen as numerically converged. Conversely, the two versions of the Smagorinsky model are found to be unable to ensure regularisation while showing a strong sensitivity to numerical errors. The originality of the present approach is that it can be viewed as implicit large eddy simulation, in the sense that the numerical error is the source of artificial dissipation, but also as explicit subgrid-scale modelling, because of the equivalence with spectral viscosity prescribed on a physical basis.
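    The essential idea of spectral vanishing viscosity, damping only the poorly resolved wavenumbers while leaving the large scales untouched, can be sketched as a one-step Fourier filter. The damping profile and its parameters below are illustrative assumptions, not the Pao-like calibration used in the paper.

```python
import numpy as np

def spectral_vanishing_viscosity(u, strength=0.1, cutoff=0.5, power=2):
    """Damp Fourier modes above cutoff*k_max, leaving resolved scales intact.

    A crude one-step filter standing in for wavenumber-selective numerical
    dissipation (the parameter values are illustrative only)."""
    U = np.fft.rfft(u)
    k = np.arange(U.size, dtype=float)
    k_cut = cutoff * k[-1]
    damp = np.ones_like(k)
    high = k > k_cut
    # damping grows smoothly from the cutoff toward the grid scale
    damp[high] = np.exp(-strength * ((k[high] - k_cut) / (k[-1] - k_cut)) ** power)
    return np.fft.irfft(U * damp, n=u.size)

x = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
low = np.sin(2 * x)    # well-resolved mode: passes through unchanged
high = np.sin(30 * x)  # near-grid mode: attenuated by the filter
```

    Concentrating the dissipation near the grid scale is what lets the method regularise the solution at the mesh scale without polluting the resolved spectrum, the property the abstract contrasts with the Smagorinsky model.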

  2. The HACMS program: using formal methods to eliminate exploitable bugs

    PubMed Central

    Launchbury, John; Richards, Raymond

    2017-01-01

    For decades, formal methods have offered the promise of verified software that does not have exploitable bugs. Until recently, however, it has not been possible to verify software of sufficient complexity to be useful. Recently, that situation has changed. SeL4 is an open-source operating system microkernel efficient enough to be used in a wide range of practical applications. Its designers proved it to be fully functionally correct, ensuring the absence of buffer overflows, null pointer exceptions, use-after-free errors, etc., and guaranteeing integrity and confidentiality. The CompCert Verifying C Compiler maps source C programs to provably equivalent assembly language, ensuring the absence of exploitable bugs in the compiler. A number of factors have enabled this revolution, including faster processors, increased automation, more extensive infrastructure, specialized logics and the decision to co-develop code and correctness proofs rather than verify existing artefacts. In this paper, we explore the promise and limitations of current formal-methods techniques. We discuss these issues in the context of DARPA’s HACMS program, which had as its goal the creation of high-assurance software for vehicles, including quadcopters, helicopters and automobiles. This article is part of the themed issue ‘Verified trustworthy software systems’. PMID:28871050

  3. Active noise attenuation in ventilation windows.

    PubMed

    Huang, Huahua; Qiu, Xiaojun; Kang, Jian

    2011-07-01

    The feasibility of applying active noise control techniques to attenuate low frequency noise transmission through a natural ventilation window into a room is investigated analytically and experimentally. The window system is constructed by staggering the opening sashes of a spaced double glazing window to allow ventilation and natural light. An analytical model based on the modal expansion method is developed to calculate the low frequency sound field inside the window and the room and to be used in the active noise control simulations. The effectiveness of the proposed analytical model is validated using the finite element method. The performance of the active control system for a window with different source and receiver configurations is compared, and it is found that the numerical and experimental results are in good agreement and that the best result is achieved when the secondary sources are placed in the center at the bottom of the staggered window. The extra attenuation at the observation points in the optimized window system is almost equivalent to the noise reduction at the error sensor, and the frequency range of effective control extends up to 390 Hz in the case of a single-channel active noise control system. © 2011 Acoustical Society of America
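    The single-channel control loop can be sketched with a plain LMS update driving the secondary source against the primary field at the error sensor. This is a simplified illustration assuming an identity secondary path, which sidesteps the filtered-x step a real acoustic system would need; the signal parameters are made up.

```python
import numpy as np

def lms_anc(reference, primary, n_taps=8, mu=0.01):
    """Single-channel LMS controller: the adaptive filter shapes the
    reference into anti-noise so the residual at the error sensor shrinks."""
    w = np.zeros(n_taps)
    buf = np.zeros(n_taps)
    errors = np.zeros(len(primary))
    for n in range(len(primary)):
        buf = np.roll(buf, 1)
        buf[0] = reference[n]
        anti = w @ buf                  # secondary-source output
        e = primary[n] + anti           # residual at the error microphone
        w -= mu * e * buf               # LMS gradient step on the residual
        errors[n] = e
    return errors

# Tonal primary noise with a coherent reference: the residual should decay
t = np.arange(4000)
ref = np.sin(0.1 * t)
prim = 0.8 * np.sin(0.1 * t + 0.3)
err = lms_anc(ref, prim)
```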

  4. A practical and systematic review of Weibull statistics for reporting strengths of dental materials

    PubMed Central

    Quinn, George D.; Quinn, Janet B.

    2011-01-01

    Objectives: To review the history, theory and current applications of Weibull analyses sufficient to make informed decisions regarding practical use of the analysis in dental material strength testing. Data: References are made to examples in the engineering and dental literature, but this paper also includes illustrative analyses of Weibull plots, fractographic interpretations, and Weibull distribution parameters obtained for a dense alumina, two feldspathic porcelains, and a zirconia. Sources: Informational sources include Weibull's original articles, later articles specific to applications and theoretical foundations of Weibull analysis, texts on statistics and fracture mechanics and the international standards literature. Study Selection: The chosen Weibull analyses are used to illustrate technique, the importance of flaw size distributions, physical meaning of Weibull parameters and concepts of “equivalent volumes” to compare measured strengths obtained from different test configurations. Conclusions: Weibull analysis has a strong theoretical basis and can be of particular value in dental applications, primarily because of test specimen size limitations and the use of different test configurations. Also endemic to dental materials, however, is increased difficulty in satisfying application requirements, such as confirming fracture origin type and diligence in obtaining quality strength data. PMID:19945745
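    The core Weibull fit behind such analyses is the rank-regression on the linearized distribution, ln(ln(1/(1-F))) versus ln(strength), whose slope is the Weibull modulus. The sketch below is a generic illustration with a simple median-rank-style estimator, not the specific procedure of any standard.

```python
import numpy as np

def weibull_fit(strengths):
    """Estimate Weibull modulus m and characteristic strength sigma0 by
    rank-regression on ln(ln(1/(1-F))) versus ln(strength)."""
    s = np.sort(np.asarray(strengths, dtype=float))
    n = s.size
    prob = (np.arange(1, n + 1) - 0.5) / n   # simple rank probability estimator
    x = np.log(s)
    y = np.log(-np.log(1.0 - prob))
    m, intercept = np.polyfit(x, y, 1)       # slope is the Weibull modulus
    sigma0 = np.exp(-intercept / m)          # F(sigma0) = 1 - 1/e
    return m, sigma0

# Synthetic strengths drawn from a known Weibull(m=10, sigma0=500) by inversion
rng = np.random.default_rng(3)
u = rng.random(2000)
samples = 500.0 * (-np.log(1.0 - u)) ** (1.0 / 10.0)
m_est, s0_est = weibull_fit(samples)
```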

  5. Acoustic characterization of a nonlinear vibroacoustic absorber at low frequencies and high sound levels

    NASA Astrophysics Data System (ADS)

    Chauvin, A.; Monteil, M.; Bellizzi, S.; Côte, R.; Herzog, Ph.; Pachebat, M.

    2018-03-01

    A nonlinear vibroacoustic absorber (Nonlinear Energy Sink: NES), involving a clamped thin latex membrane, is assessed in the acoustic domain. The NES is here considered as a one-port acoustic system, analyzed at low frequencies and for increasing excitation levels. This dynamic and frequency range requires a suitable experimental technique, which is presented first. It involves a specific impedance tube able to accommodate samples of sufficient size and to reach high sound levels with a guaranteed linear response, thanks to a specific acoustic source. The identification method presented here requires a single pressure measurement and is calibrated from a set of known acoustic loads. The NES reflection coefficient is then estimated at increasing source levels, showing its strong level dependency. This is presented as a means to understand energy dissipation. The results of the experimental tests are first compared to a nonlinear viscoelastic model of the membrane absorber. In a second step, a family of one degree of freedom models, treated as equivalent Helmholtz resonators, is identified from the measurements, allowing a parametric description of the NES behavior over a wide range of levels.
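    The equivalent-Helmholtz-resonator description used in the last step rests on the familiar lumped formula f0 = (c/2π)·sqrt(A/(V·L_eff)). The sketch below evaluates it with a common end-correction estimate and made-up geometry, purely to illustrate the one-degree-of-freedom equivalence, not the identified NES parameters.

```python
import math

def helmholtz_frequency(neck_area, cavity_volume, neck_length, c=343.0):
    """Resonance frequency of a lumped Helmholtz resonator,
    f0 = (c / (2*pi)) * sqrt(A / (V * L_eff)), with a simple end correction."""
    radius = math.sqrt(neck_area / math.pi)
    l_eff = neck_length + 1.7 * radius   # common flanged end-correction estimate
    return (c / (2.0 * math.pi)) * math.sqrt(neck_area / (cavity_volume * l_eff))

# Illustrative geometry: 5 cm^2 neck, 1 L cavity, 5 cm neck length
f0 = helmholtz_frequency(5e-4, 1e-3, 0.05)
```

    In the level-dependent identification, each excitation level yields its own effective stiffness and damping, i.e. its own (A, V, L_eff)-equivalent parameters, which is how a single nonlinear absorber maps onto a family of such resonators.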

  6. The HACMS program: using formal methods to eliminate exploitable bugs.

    PubMed

    Fisher, Kathleen; Launchbury, John; Richards, Raymond

    2017-10-13

    For decades, formal methods have offered the promise of verified software that does not have exploitable bugs. Until recently, however, it has not been possible to verify software of sufficient complexity to be useful. Recently, that situation has changed. SeL4 is an open-source operating system microkernel efficient enough to be used in a wide range of practical applications. Its designers proved it to be fully functionally correct, ensuring the absence of buffer overflows, null pointer exceptions, use-after-free errors, etc., and guaranteeing integrity and confidentiality. The CompCert Verifying C Compiler maps source C programs to provably equivalent assembly language, ensuring the absence of exploitable bugs in the compiler. A number of factors have enabled this revolution, including faster processors, increased automation, more extensive infrastructure, specialized logics and the decision to co-develop code and correctness proofs rather than verify existing artefacts. In this paper, we explore the promise and limitations of current formal-methods techniques. We discuss these issues in the context of DARPA's HACMS program, which had as its goal the creation of high-assurance software for vehicles, including quadcopters, helicopters and automobiles. This article is part of the themed issue 'Verified trustworthy software systems'. © 2017 The Authors.

  7. Proton and Electron Threshold Energy Measurements for Extravehicular Activity Space Suits. Chapter 2

    NASA Technical Reports Server (NTRS)

    Moyers, M. F.; Nelson, G. D.; Saganti, P. B.

    2003-01-01

    Construction of ISS will require more than 1000 hours of EVA. Outside ISS during EVA, astronauts and cosmonauts are likely to be exposed to a large fluence of electrons and protons. Development of radiation protection guidelines requires determining the minimum energy of electrons and protons that penetrate the suits at various locations. Measurements of the water-equivalent thickness of both U.S. and Russian EVA suits were obtained by performing CT scans. Specific regions of interest of the suits were further evaluated using a differential range shift technique. This technique involved measuring thickness ionization curves for 6-MeV electron and 155-MeV proton beams with ionization chambers at a constant source-to-detector distance. The thicknesses were obtained by stacking polystyrene slabs immediately upstream of the detector. The thicknesses at which the ionization fell to 50% of its maximum were determined. The detectors were then placed within the suit and the stack thickness adjusted until the 50% ionization was reestablished. The difference between the 50% thicknesses was then used with standard range-energy tables to determine the threshold energy for penetration. This report provides a detailed description of the experimental arrangement and results.
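    The final range-to-energy conversion can be illustrated with the Bragg-Kleeman power-law rule for protons in water, R ≈ α·E^p. The coefficients below are a common textbook fit offered for illustration; the report used standard range-energy tables, not this approximation.

```python
def proton_range_cm(energy_mev, alpha=0.0022, p=1.77):
    """Bragg-Kleeman approximation to the proton CSDA range in water,
    R = alpha * E**p (R in cm, E in MeV); coefficients are a textbook fit."""
    return alpha * energy_mev ** p

def threshold_energy_mev(water_equiv_cm, alpha=0.0022, p=1.77):
    """Invert the range rule: minimum proton energy needed to penetrate a
    given water-equivalent thickness."""
    return (water_equiv_cm / alpha) ** (1.0 / p)
```

    For example, a 155-MeV proton has a water range on the order of 16-17 cm under this rule, so a suit region's measured water-equivalent thickness maps directly to a threshold energy by the inverse function.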

  8. 'Equivalence' and the translation and adaptation of health-related quality of life questionnaires.

    PubMed

    Herdman, M; Fox-Rushby, J; Badia, X

    1997-04-01

    The increasing use of health-related quality of life (HRQOL) questionnaires in multinational studies has resulted in the translation of many existing measures. Guidelines for translation have been published, and there has been some discussion of how to achieve and assess equivalence between source and target questionnaires. Our reading in this area had led us, however, to the conclusion that different types of equivalence were not clearly defined, and that a theoretical framework for equivalence was lacking. To confirm this we reviewed definitions of equivalence in the HRQOL literature on the use of generic questionnaires in multicultural settings. The literature review revealed: definitions of 19 different types of equivalence; vague or conflicting definitions, particularly in the case of conceptual equivalence; and the use of many redundant terms. We discuss these findings in the light of a framework adapted from cross-cultural psychology for describing three different orientations to cross-cultural research: absolutism, universalism and relativism. We suggest that the HRQOL field has generally adopted an absolutist approach and that this may account for some of the confusion in this area. We conclude by suggesting that there is an urgent need for a standardized terminology within the HRQOL field, by offering a standard definition of conceptual equivalence, and by suggesting that the adoption of a universalist orientation would require substantial changes to guidelines and more empirical work on the conceptualization of HRQOL in different cultures.

  9. Effect of shoulder abduction angle on biomechanical properties of the repaired rotator cuff tendons with 3 types of double-row technique.

    PubMed

    Mihata, Teruhisa; Fukuhara, Tetsutaro; Jun, Bong Jae; Watanabe, Chisato; Kinoshita, Mitsuo

    2011-03-01

    After rotator cuff repair, the shoulder is immobilized in various abduction positions. However, there is no consensus on the proper abduction angle. The purpose of this study was to assess the effect of shoulder abduction angle on the biomechanical properties of the repaired rotator cuff tendons among 3 types of double-row techniques. Controlled laboratory study. Thirty-two fresh-frozen porcine shoulders were used. A simulated rotator cuff tear was repaired by 1 of 3 double-row techniques: conventional double-row repair, transosseous-equivalent repair, and a combination of conventional double-row and bridging sutures (compression double-row repair). Each specimen underwent cyclic testing followed by tensile testing to failure at a simulated shoulder abduction angle of 0° or 40° on a material testing machine. Gap formation and failure loads were measured. Gap formation in conventional double-row repair at 0° (1.2 ± 0.5 mm) was significantly greater than that at 40° (0.5 ± 0.3 mm, P = .01). The yield and ultimate failure loads for conventional double-row repair at 40° were significantly larger than those at 0° (P < .01), whereas those for transosseous-equivalent repair (P < .01) and compression double-row repair (P < .0001) at 0° were significantly larger than those at 40°. The failure load for compression double-row repair was the greatest among the 3 double-row techniques at both 0° and 40° of abduction. Bridging sutures have a greater effect on the biomechanical properties of the repaired rotator cuff tendon at a low abduction angle, and the conventional double-row technique has a greater effect at a high abduction angle. The proper abduction position after rotator cuff repair differs between conventional double-row repair and transosseous-equivalent repair. The authors recommend the use of the combined technique of conventional double-row and bridging sutures to obtain better biomechanical properties at both low and high abduction angles.

  10. Single-row, double-row, and transosseous equivalent techniques for isolated supraspinatus tendon tears with minimal atrophy: A retrospective comparative outcome and radiographic analysis at minimum 2-year followup

    PubMed Central

    McCormick, Frank; Gupta, Anil; Bruce, Ben; Harris, Josh; Abrams, Geoff; Wilson, Hillary; Hussey, Kristen; Cole, Brian J.

    2014-01-01

    Purpose: The purpose of this study was to measure and compare the subjective, objective, and radiographic healing outcomes of single-row (SR), double-row (DR), and transosseous equivalent (TOE) suture techniques for arthroscopic rotator cuff repair. Materials and Methods: A retrospective comparative analysis of arthroscopic rotator cuff repairs by one surgeon from 2004 to 2010 at minimum 2-year follow-up was performed. Cohorts were matched for age, sex, and tear size. Subjective outcome variables included ASES, Constant, SST, UCLA, and SF-12 scores. Objective outcome variables included strength and active range of motion (ROM). Radiographic healing was assessed by magnetic resonance imaging (MRI). Statistical analysis was performed using analysis of variance (ANOVA), Mann-Whitney and Kruskal-Wallis tests, and the Fisher exact probability test, with significance set at P < 0.05. Results: Sixty-three patients completed the study requirements (20 SR, 21 DR, 22 TOE). There was a clinically and statistically significant improvement in outcomes with all repair techniques (ASES mean improvement, P < 0.0001). The mean final ASES scores were: SR 83 (SD 21.4); DR 87 (SD 18.2); TOE 87 (SD 13.2) (P = 0.73). There was a statistically significant improvement in strength for each repair technique (P < 0.001). There was no significant difference between techniques across all secondary outcome assessments: ASES improvement, Constant, SST, UCLA, SF-12, ROM, strength, and MRI re-tear rates. There was a decrease in re-tear rates from single-row (22%) to double-row (18%) to transosseous equivalent (11%); however, this difference was not statistically significant (P = 0.6). Conclusions: Compared to preoperatively, arthroscopic rotator cuff repair, using SR, DR, or TOE techniques, yielded a clinically and statistically significant improvement in subjective and objective outcomes at a minimum 2-year follow-up. Level of Evidence: Therapeutic level 3. PMID:24926159
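As a hedged illustration of the contingency-table comparison behind the re-tear finding above, the sketch below computes a Pearson chi-square statistic in plain Python. The counts are hypothetical round numbers approximating the reported 22%/18%/11% rates, not the study's data, and the paper itself used the Fisher exact probability test rather than chi-square:

```python
def chi_square_stat(table):
    """Pearson chi-square statistic for an r x c contingency table
    given as a list of rows of observed counts."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    total = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / total
            stat += (observed - expected) ** 2 / expected
    return stat

# Hypothetical re-tear / healed counts for the SR, DR, and TOE cohorts
table = [[4, 4, 2],      # re-tears
         [14, 17, 16]]   # healed
stat = chi_square_stat(table)
# With 2 degrees of freedom the 5% critical value is 5.99; a statistic
# well below that is consistent with the reported non-significant difference.
```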

  11. Reconnaissance for radioactive deposits in eastern Alaska, 1952

    USGS Publications Warehouse

    Nelson, Arthur Edward; West, Walter S.; Matzko, John J.

    1954-01-01

    Reconnaissance for radioactive deposits was conducted in selected areas of eastern Alaska during 1952. Examination of copper, silver, and molybdenum occurrences and of a reported nickel prospect in the Slana-Nabesna and Chisana districts in the eastern Alaska Range revealed a maximum radioactivity of about 0.003 percent equivalent uranium. No appreciable radioactivity anomalies were indicated by aerial and foot traverses in the area. Reconnaissance for possible lode concentrations of uranium minerals in the vicinity of reported fluoride occurrences in the Hope Creek and Miller House-Circle Hot Springs areas of the Circle quadrangle and in the Fortymile district found a maximum of 0.055 percent equivalent uranium in a float fragment of ferruginous breccia in the Hope Creek area; analysis of samples obtained in the vicinity of the other fluoride occurrences showed a maximum of only 0.005 percent equivalent uranium. No uraniferous lodes were discovered in the Koyukuk-Chandalar region, nor was the source of the monazite, previously reported in the placer concentrates from the Chandalar mining district, located. The source of the uranothorianite in the placers at Gold Bench on the South Fork of the Koyukuk River was not found during a brief reconnaissance, but a placer concentrate was obtained that contains 0.18 percent equivalent uranium. This concentrate is about ten times more radioactive than concentrates previously available from the area.

  12. Is digital photography an accurate and precise method for measuring range of motion of the hip and knee?

    PubMed

    Russo, Russell R; Burn, Matthew B; Ismaily, Sabir K; Gerrie, Brayden J; Han, Shuyang; Alexander, Jerry; Lenherr, Christopher; Noble, Philip C; Harris, Joshua D; McCulloch, Patrick C

    2017-09-07

    Accurate measurements of knee and hip motion are required for management of musculoskeletal pathology. The purpose of this investigation was to compare three techniques for measuring motion at the hip and knee. The authors hypothesized that digital photography would be equivalent in accuracy and show higher precision compared to the other two techniques. Using infrared motion capture analysis as the reference standard, hip flexion/abduction/internal rotation/external rotation and knee flexion/extension were measured using visual estimation, goniometry, and photography on 10 fresh frozen cadavers. These measurements were performed by three physical therapists and three orthopaedic surgeons. Accuracy was defined by the difference from the reference standard, while precision was defined by the proportion of measurements within either 5° or 10°. Analysis of variance (ANOVA), t-tests, and chi-squared tests were used. Although two statistically significant differences were found in measurement accuracy between the three techniques, neither of these differences met clinical significance (differences of 1.4° for hip abduction and 1.7° for knee extension). Precision of measurements was significantly higher for digital photography than: (i) visual estimation for hip abduction and knee extension, and (ii) goniometry for knee extension only. There was no clinically significant difference in measurement accuracy between the three techniques for hip and knee motion. Digital photography only showed higher precision for two joint motions (hip abduction and knee extension). Overall, digital photography shows equivalent accuracy and near-equivalent precision to visual estimation and goniometry.

  13. Mechanistic equivalent circuit modelling of a commercial polymer electrolyte membrane fuel cell

    NASA Astrophysics Data System (ADS)

    Giner-Sanz, J. J.; Ortega, E. M.; Pérez-Herranz, V.

    2018-03-01

    Electrochemical impedance spectroscopy (EIS) has been widely used in the fuel cell field since it allows deconvolving the different physicochemical processes that affect fuel cell performance. Typically, EIS spectra are modelled using electric equivalent circuits. In this work, EIS spectra of an individual cell of a commercial PEM fuel cell stack were obtained experimentally. The goal was to obtain a mechanistic electric equivalent circuit in order to model the experimental EIS spectra. A mechanistic electric equivalent circuit is a semiempirical modelling technique based on obtaining an equivalent circuit that not only correctly fits the experimental spectra, but whose elements have a mechanistic physical meaning. In order to obtain the aforementioned electric equivalent circuit, 12 different models with defined physical meanings were proposed. These equivalent circuits were fitted to the obtained EIS spectra. A two-step selection process was performed. In the first step, a group of 4 circuits was preselected out of the initial list of 12, based on general fitting indicators such as the determination coefficient and the fitted-parameter uncertainty. In the second step, one of the 4 preselected circuits was selected on account of the consistency of the fitted parameter values with the physical meaning of each parameter.
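A mechanistic equivalent circuit ties each element to a physical process. As a minimal sketch (a textbook Randles-type circuit, not one of the paper's 12 candidate models), the code below computes the impedance of a series ohmic resistance Rs in front of a charge-transfer resistance Rct in parallel with a double-layer capacitance Cdl; all parameter values are illustrative:

```python
import math

def randles_impedance(freq_hz, rs, rct, cdl):
    """Z(w) = Rs + Rct / (1 + j*w*Rct*Cdl), w = 2*pi*f.
    Rs: ohmic resistance, Rct: charge-transfer resistance,
    Cdl: double-layer capacitance."""
    w = 2.0 * math.pi * freq_hz
    return rs + rct / (1.0 + 1j * w * rct * cdl)

# Mechanistic sanity checks on the spectrum limits:
# low frequency  -> Z approaches Rs + Rct (DC resistance),
# high frequency -> Z approaches Rs (capacitor shorts Rct),
# and -Im(Z) peaks at the characteristic frequency 1/(2*pi*Rct*Cdl).
z_lo = randles_impedance(1e-6, 0.01, 0.1, 0.5)
z_hi = randles_impedance(1e9, 0.01, 0.1, 0.5)
```

Fitting such a model to measured spectra (e.g., by complex nonlinear least squares) and then checking whether the fitted Rs, Rct, and Cdl take physically sensible values is the essence of the "mechanistic" selection step described in the abstract.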

  14. Translation Quality Assessment in Health Research: A Functionalist Alternative to Back-Translation.

    PubMed

    Colina, Sonia; Marrone, Nicole; Ingram, Maia; Sánchez, Daisey

    2017-09-01

    As international research studies become more commonplace, the importance of developing multilingual research instruments continues to increase and with it that of translated materials. It is therefore not unexpected that assessing the quality of translated materials (e.g., research instruments, questionnaires, etc.) has become essential to cross-cultural research, given that the reliability and validity of the research findings crucially depend on the translated instruments. In some fields (e.g., public health and medicine), the quality of translated instruments can also impact the effectiveness and success of interventions and public campaigns. Back-translation (BT) is a commonly used quality assessment tool in cross-cultural research. This quality assurance technique consists of (a) translation (target text [TT1]) of the source text (ST), (b) translation (TT2) of TT1 back into the source language, and (c) comparison of TT2 with ST to make sure there are no discrepancies. The accuracy of the BT with respect to the source is supposed to reflect equivalence/accuracy of the TT. This article shows how the use of BT as a translation quality assessment method can have a detrimental effect on a research study and proposes alternatives to BT. One alternative is illustrated on the basis of the translation and quality assessment methods used in a research study on hearing loss carried out in a border community in the southwest of the United States.

  15. Evaluation of the table Mountain Ronchi telescope for angular tracking

    NASA Technical Reports Server (NTRS)

    Lanyi, G.; Purcell, G.; Treuhaft, R.; Buffington, A.

    1992-01-01

    The performance of the University of California at San Diego (UCSD) Table Mountain telescope was evaluated to determine the potential of such an instrument for optical angular tracking. This telescope uses a Ronchi ruling to measure differential positions of stars at the meridian. The Ronchi technique is summarized and the operational features of the Table Mountain instrument are described. Results from an analytic model, simulations, and actual data are presented that characterize the telescope's current performance. For a star pair of visual magnitude 7, the differential uncertainty of a 5-min observation is about 50 nrad (10 marcsec), and tropospheric fluctuations are the dominant error source. At magnitude 11, the current differential uncertainty is approximately 800 nrad (approximately 170 marcsec). This magnitude is equivalent to that of a 2-W laser with a 0.4-m aperture transmitting to Earth from a spacecraft at Saturn. Photoelectron noise is the dominant error source for stars of visual magnitude 8.5 and fainter. If the photoelectron noise is reduced, ultimately tropospheric fluctuations will be the limiting source of error at an average level of 35 nrad (7 marcsec) for stars approximately 0.25 deg apart. Three near-term strategies are proposed for improving the performance of the telescope to the 10-nrad level: improving the efficiency of the optics, masking background starlight, and averaging tropospheric fluctuations over multiple observations.
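The error budget implied above (independent noise sources such as tropospheric fluctuations and photoelectron noise combining in quadrature, and random error shrinking with the square root of the number of averaged observations) can be sketched as follows. The 35 nrad decomposition is an illustrative assumption, not the abstract's breakdown:

```python
import math

def combined_error(*components):
    """Independent error sources add in quadrature."""
    return math.sqrt(sum(c * c for c in components))

def averaged_error(single_obs_error, n_obs):
    """Random error after averaging n independent observations."""
    return single_obs_error / math.sqrt(n_obs)

# Illustrative numbers only: two comparable ~35 nrad terms combine to
# roughly the 50 nrad single-observation uncertainty quoted for
# magnitude-7 star pairs.
total = combined_error(35.0, 35.0)     # about 49.5 nrad
after_four = averaged_error(total, 4)  # halved by averaging 4 observations
```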

  16. Tracking speech comprehension in space and time.

    PubMed

    Pulvermüller, Friedemann; Shtyrov, Yury; Ilmoniemi, Risto J; Marslen-Wilson, William D

    2006-07-01

    A fundamental challenge for the cognitive neuroscience of language is to capture the spatio-temporal patterns of brain activity that underlie critical functional components of the language comprehension process. We combine here psycholinguistic analysis, whole-head magnetoencephalography (MEG), the Mismatch Negativity (MMN) paradigm, and state-of-the-art source localization techniques (Equivalent Current Dipole and L1 Minimum-Norm Current Estimates) to locate the process of spoken word recognition at a specific moment in space and time. The magnetic MMN to words presented as rare "deviant stimuli" in an oddball paradigm among repetitive "standard" speech stimuli peaked 100-150 ms after the information in the acoustic input was sufficient for word recognition. The latency with which words were recognized corresponded to that of an MMN source in the left superior temporal cortex. There was a significant correlation (r = 0.7) of latency measures of word recognition in individual study participants with the latency of the activity peak of the superior temporal source. These results demonstrate a correspondence between the behaviorally determined recognition point for spoken words and the cortical activation in left posterior superior temporal areas. Both the MMN calculated in the classic manner, obtained by subtracting the standard from the deviant stimulus response recorded in the same experiment, and the identity MMN (iMMN), defined as the difference between the neuromagnetic responses to the same stimulus presented as standard and as deviant, showed the same significant correlation with word recognition processes.

  17. 75 FR 74457 - Mandatory Reporting of Greenhouse Gases: Petroleum and Natural Gas Systems

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-30

    ...EPA is promulgating a regulation to require monitoring and reporting of greenhouse gas emissions from petroleum and natural gas systems. This action adds this source category to the list of source categories already required to report greenhouse gas emissions. This action applies to sources with carbon dioxide equivalent emissions above certain threshold levels as described in this regulation. This action does not require control of greenhouse gases.
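Thresholds in rules of this kind are expressed in carbon dioxide equivalent (CO2e): each gas's annual mass is weighted by its global warming potential (GWP). A minimal sketch with a hypothetical facility; the GWP values and the 25,000 metric ton threshold shown are assumptions for the example, not quotations from this rule:

```python
# Illustrative 100-year global warming potentials (mass basis, CO2 = 1)
GWP = {"CO2": 1, "CH4": 25, "N2O": 298}

def co2e_tonnes(emissions_tonnes):
    """Total CO2-equivalent emissions: sum of mass times GWP over gases."""
    return sum(mass * GWP[gas] for gas, mass in emissions_tonnes.items())

# Hypothetical facility: methane's high GWP makes a modest CH4 leak rate
# a large share of the CO2e total.
facility = {"CO2": 12000, "CH4": 450}  # metric tons per year
total = co2e_tonnes(facility)          # 12000 + 450*25 = 23250 t CO2e
reportable = total >= 25000            # assumed reporting threshold
```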

  18. Space-Time Dependent Transport, Activation, and Dose Rates for Radioactivated Fluids.

    NASA Astrophysics Data System (ADS)

    Gavazza, Sergio

    Two methods are developed to calculate the space- and time-dependent mass transport of radionuclides, their production and decay, and the associated dose rates generated from radioactivated fluids flowing through pipes. The work couples space- and time-dependent phenomena that are treated as only space- or time-dependent in the open literature. The transport and activation methodology (TAM) is used to numerically calculate the space- and time-dependent transport and activation of radionuclides in fluids flowing through pipes exposed to radiation fields, and the volumetric radioactive sources created by radionuclide motions. The computer program Radionuclide Activation and Transport in Pipe (RNATPA1) performs the numerical calculations required in TAM. The gamma ray dose methodology (GAM) is used to numerically calculate space- and time-dependent gamma ray dose equivalent rates from the volumetric radioactive sources determined by TAM. The computer program Gamma Ray Dose Equivalent Rate (GRDOSER) performs the numerical calculations required in GAM. The scope of conditions considered by TAM and GAM herein includes (a) laminar flow in straight pipe, (b) recirculating flow schemes, (c) time-independent fluid velocity distributions, (d) space-dependent monoenergetic neutron flux distribution, (e) space- and time-dependent activation of a single parent nuclide and transport and decay of a single daughter radionuclide, and (f) assessment of space- and time-dependent gamma ray dose rates, outside the pipe, generated by the space- and time-dependent source term distributions inside of it. The methodologies, however, can be easily extended to include all the situations of interest for the phenomena addressed in this dissertation. Results obtained by the described calculational procedures are compared with analytical expressions.
The physics of the problems addressed by the new technique, and its increased accuracy versus non-space- and time-dependent methods, are presented. The value of the methods is also discussed. It has been demonstrated that TAM and GAM can be used to enhance the understanding of the space- and time-dependent mass transport of radionuclides, their production and decay, and the associated dose rates related to radioactivated fluids flowing through pipes.
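The production-decay bookkeeping that TAM performs in full space and time reduces, for a single fluid parcel, to activation build-up toward saturation while in the flux followed by pure decay in transit. A drastically simplified zero-dimensional sketch, with an illustrative N-16-like half-life (not a parameter from the dissertation):

```python
import math

def activity_after_loop(saturation_activity, decay_const, t_irr, t_transit):
    """Activity of a daughter radionuclide in a fluid parcel that spends
    t_irr seconds in the neutron flux (build-up toward saturation) and
    then t_transit seconds in transit outside the flux (pure decay)."""
    built_up = saturation_activity * (1.0 - math.exp(-decay_const * t_irr))
    return built_up * math.exp(-decay_const * t_transit)

# Illustrative parameters: a 7.13 s half-life (N-16-like water activation)
lam = math.log(2.0) / 7.13
a_sat = activity_after_loop(100.0, lam, 300.0, 0.0)   # essentially saturated
a_out = activity_after_loop(100.0, lam, 300.0, 7.13)  # halved after one half-life
```

TAM couples this bookkeeping to the velocity field so that each parcel's irradiation and transit times follow from the flow, which is what the zero-dimensional sketch omits.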

  19. Testing Moderating Detection Systems with {sup 252}Cf-Based Reference Neutron Fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hertel, Nolan E.; Sweezy, Jeremy; Sauber, Jeremiah S.

    Calibration measurements were carried out on a probe designed to measure ambient dose equivalent in accordance with ICRP Pub 60 recommendations. It consists of a cylindrical {sup 3}He proportional counter surrounded by a 25-cm-diameter spherical polyethylene moderator. Its neutron response is optimized for dose rate measurements of neutrons between thermal energies and 20 MeV. The instrument was used to measure the dose rate in four separate neutron fields: unmoderated {sup 252}Cf, D{sub 2}O-moderated {sup 252}Cf, polyethylene-moderated {sup 252}Cf, and a WEP neutron howitzer with {sup 252}Cf at its center. Dose equivalent measurements were performed at source-detector centerline distances from 50 to 200 cm. The ratios of air-scatter- and room-return-corrected ambient dose equivalent rates to ambient dose equivalent rates calculated with the code MCNP are tabulated.

  20. MO-FG-CAMPUS-TeP1-04: Pseudo-In-Vivo Dose Verification of a New Mono-Isocentric Technique for the Treatment of Multiple Brain Metastases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pappas, E P; Makris, D; Lahanas, V

    2016-06-15

    Purpose: To validate dose calculation and delivery accuracy of a recently introduced mono-isocentric technique for the treatment of multiple brain metastases in a realistic clinical case. Methods: Anonymized CT scans of a patient were used to model a hollow phantom that duplicates the anatomy of the skull. A 3D printer was used to construct the phantom of a radiologically bone-equivalent material. The hollow phantom was subsequently filled with a polymer gel 3D dosimeter which also acted as a water-equivalent material. The irradiation plan consisted of 5 targets and was identical to the one delivered to the specific patient except for the prescription dose, which was optimized to match the gel dose-response characteristics. Dose delivery was performed using a single setup isocenter dynamic conformal arcs technique. Gel dose read-out was carried out by a 1.5 T MRI scanner. All steps of the corresponding patient’s treatment protocol were strictly followed, providing an end-to-end quality assurance test. The pseudo-in-vivo measured 3D dose distribution and the calculated one were compared in terms of spatial agreement, dose profiles, 3D gamma indices (5%/2mm, 20% dose threshold), DVHs, and DVH metrics. Results: MR-identified polymerized areas and calculated high dose regions were found to agree within 1.5 mm for all targets, taking into account all sources of spatial uncertainty involved (i.e., set-up errors, MR-related geometric distortions, and registration inaccuracies). Good dosimetric agreement was observed in the vast majority of the examined profiles. The 3D gamma index passing rate reached 91%. DVH and corresponding metrics comparison resulted in satisfactory agreement between measured and calculated datasets within targets and selected organs-at-risk. Conclusion: A novel pseudo-in-vivo QA test was implemented to validate spatial and dosimetric accuracy in the treatment of multiple metastases. End-to-end testing demonstrated that our gel dosimetry phantom is suited for such QA procedures, allowing for 3D analysis of both target placement and dose.

  1. Skyshine study for next generation of fusion devices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gohar, Y.; Yang, S.

    1987-02-01

    A shielding analysis for the next generation of fusion devices (ETR/INTOR) was performed to study the dose equivalent outside the reactor building during operation, including the contribution from neutrons and photons scattered back by collisions with air nuclei (the skyshine component). Two different three-dimensional geometrical models for a tokamak fusion reactor based on INTOR design parameters were developed for this study. In the first geometrical model, the reactor geometry and the spatial distribution of the deuterium-tritium neutron source were simplified for a parametric survey. The second geometrical model employed an explicit representation of the toroidal geometry of the reactor chamber and the spatial distribution of the neutron source. The MCNP general Monte Carlo code for neutron and photon transport was used to perform all the calculations. The energy distribution of the neutron source was used explicitly in the calculations with ENDF/B-V data. The dose equivalent results were analyzed as a function of the concrete roof thickness of the reactor building and the location outside the reactor building.

  2. 75 FR 2452 - Approval and Promulgation of Air Quality Implementation Plans; Delaware; Reasonable Further...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-01-15

    ... Lightering Operations. Since there will be no new VOC controls for point sources, non-point source sector... equivalent to 1.52 x 1.74 = 2.64 tpd NO X reduction shortfall. Delaware has implemented numerous controls... achieved ``as expeditious as practicable.'' Control measures under RACT constitute a major group of RACM...

  3. Biodegradation of Dense Non-Aqueous Phase Liquids (DNAPL) Through Bioaugmentation of Source Areas - Dover National Test Site, Dover, Delaware

    DTIC Science & Technology

    2008-08-01

    the distribution of DNAPL. The OSU research team evaluated the use of radon as a partitioning groundwater tracer. The DNAPL release fulfilled one...close to the source area generated more PCE equivalent mass over time. The exponential decay from the fitted line (predicted PCE, orange line in each

  4. 40 CFR 63.603 - Standards for new sources.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...) National Emission Standards for Hazardous Air Pollutants From Phosphoric Acid Manufacturing Plants § 63.603 Standards for new sources. (a) Wet process phosphoric acid process line. On and after the date on which the... equivalent P2O5 feed (0.01350 lb/ton). (b) Superphosphoric acid process line. On and after the date on which...

  5. 40 CFR 63.603 - Standards for new sources.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...) National Emission Standards for Hazardous Air Pollutants From Phosphoric Acid Manufacturing Plants § 63.603 Standards for new sources. (a) Wet process phosphoric acid process line. On and after the date on which the... equivalent P2O5 feed (0.01350 lb/ton). (b) Superphosphoric acid process line. On and after the date on which...

  6. 40 CFR 63.603 - Standards for new sources.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...) National Emission Standards for Hazardous Air Pollutants From Phosphoric Acid Manufacturing Plants § 63.603 Standards for new sources. (a) Wet process phosphoric acid process line. On and after the date on which the... equivalent P2O5 feed (0.01350 lb/ton). (b) Superphosphoric acid process line. On and after the date on which...

  7. Influence of Coliform Source on Evaluation of Membrane Filters

    PubMed Central

    Brodsky, M. H.; Schiemann, D. A.

    1975-01-01

    Four brands of membrane filters were examined for total and fecal coliform recovery performance by two experimental approaches. Using diluted EC broth cultures of water samples, Johns-Manville filters were superior to Sartorius filters for fecal coliform but equivalent for total coliform recovery. Using river water samples, Johns-Manville filters were superior to Sartorius filters for total coliform but equivalent for fecal coliform recovery. No differences were observed between Johns-Manville and Millipore or Millipore and Sartorius filters for total or fecal coliform recoveries using either approach, nor was any difference observed between Millipore and Gelman filters for fecal coliform recovery from river water samples. These results indicate that the source of the coliform bacteria has an important influence on the conclusions of membrane filter evaluation studies. PMID:1106318

  8. A Retrospective Analysis of Hemostatic Techniques in Primary Total Knee Arthroplasty: Traditional Electrocautery, Bipolar Sealer, and Argon Beam Coagulation.

    PubMed

    Rosenthal, Brett D; Haughom, Bryan D; Levine, Brett R

    2016-01-01

    In this retrospective cohort study of 280 primary total knee arthroplasties, clinical outcomes relevant to hemostasis were compared by electrocautery type: traditional electrocautery (TE), bipolar sealer (BS), and argon beam coagulation (ABC). Age, sex, and preoperative diagnosis were not significantly different among the TE, BS, and ABC cohorts. The 3 hemostasis systems were statistically equivalent with respect to estimated blood loss. Wound drainage during the first 48 hours after surgery was equivalent between the BS and ABC cohorts but less for the TE cohort. Transfusion requirements were not significantly different among the cohorts. The 3 hemostasis systems were statistically equivalent with respect to mean change in hemoglobin level during the early postoperative period (levels were measured on postoperative day 1 and on discharge). As BS and ABC are clinically equivalent to TE, their increased cost may not be justified.

  9. A Formal Approach to Requirements-Based Programming

    NASA Technical Reports Server (NTRS)

    Hinchey, Michael G.; Rash, James L.; Rouff, Christopher A.

    2005-01-01

    No significant general-purpose method is currently available to mechanically transform system requirements into a provably equivalent model. The widespread use of such a method represents a necessary step toward high-dependability system engineering for numerous application domains. Current tools and methods that start with a formal model of a system and mechanically produce a provably equivalent implementation are valuable but not sufficient. The "gap" unfilled by such tools and methods is that the formal models cannot be proven to be equivalent to the requirements. We offer a method for mechanically transforming requirements into a provably equivalent formal model that can be used as the basis for code generation and other transformations. This method is unique in offering full mathematical tractability while using notations and techniques that are well known and well trusted. Finally, we describe further application areas we are investigating for use of the approach.

  10. Expression of ZO-1 and claudin-1 in a 3D epidermal equivalent using canine progenitor epidermal keratinocytes.

    PubMed

    Teramoto, Keiji; Asahina, Ryota; Nishida, Hidetaka; Kamishina, Hiroaki; Maeda, Sadatoshi

    2018-05-21

    Previous studies indicate that tight junctions are involved in the pathogenesis of canine atopic dermatitis (cAD). An in vitro skin model is needed to elucidate the specific role of tight junctions in cAD. A 3D epidermal equivalent model using canine progenitor epidermal keratinocytes (CPEK) has been established; the expression of tight junctions within this model is uncharacterized. To investigate the expression of tight junctions in the 3D epidermal equivalent. Two normal laboratory beagle dogs served as donors of full-thickness skin biopsy samples for comparison to the in vitro model. Immunohistochemical techniques were employed to investigate the expression of tight junctions including zonula occludens (ZO)-1 and claudin-1 in normal canine skin, and in the CPEK 3D epidermal equivalent. Results demonstrated the expression of ZO-1 and claudin-1 in the CPEK 3D epidermal equivalent, with staining patterns that were similar to those in normal canine skin. The CPEK 3D epidermal equivalent has the potential to be a suitable in vitro research tool for clarifying the specific role of tight junctions in cAD. © 2018 ESVD and ACVD.

  11. Estimation of hysteretic damping of structures by stochastic subspace identification

    NASA Astrophysics Data System (ADS)

    Bajrić, Anela; Høgsberg, Jan

    2018-05-01

    Output-only system identification techniques can estimate modal parameters of structures represented by linear time-invariant systems. However, the extension of these techniques to structures exhibiting non-linear behavior has not received much attention. This paper presents an output-only system identification method suitable for the random response of dynamic systems with hysteretic damping. The method applies the concept of Stochastic Subspace Identification (SSI) to estimate the model parameters of a dynamic system with hysteretic damping. The restoring force is represented by the Bouc-Wen model, for which an equivalent linear relaxation model is derived. Hysteretic properties can be encountered in engineering structures exposed to severe cyclic environmental loads, as well as in vibration mitigation devices, such as Magneto-Rheological (MR) dampers. The identification technique incorporates the equivalent linear damper model in the estimation procedure. Synthetic data, representing the random vibrations of systems with hysteresis, validate the system parameters estimated by the presented identification method at low and high levels of excitation amplitude.
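The Bouc-Wen model mentioned above generates hysteresis through an internal variable z driven by the displacement rate. The sketch below integrates the standard evolution equation with a forward-Euler step under an imposed sinusoidal displacement; all parameter values are illustrative, and the paper's equivalent linear relaxation model is not reproduced here:

```python
import math

def bouc_wen_force(times, x_of_t, k=1.0, alpha=0.5,
                   A=1.0, beta=0.5, gamma=0.5, n=1):
    """Restoring force F = alpha*k*x + (1-alpha)*k*z, where z obeys
    zdot = A*xdot - beta*|xdot|*|z|**(n-1)*z - gamma*xdot*|z|**n."""
    z, x_prev, forces = 0.0, x_of_t(times[0]), []
    for i, t in enumerate(times):
        x = x_of_t(t)
        if i > 0:
            dt = times[i] - times[i - 1]
            xdot = (x - x_prev) / dt
            zdot = (A * xdot
                    - beta * abs(xdot) * abs(z) ** (n - 1) * z
                    - gamma * xdot * abs(z) ** n)
            z += zdot * dt  # forward-Euler step
        forces.append(alpha * k * x + (1.0 - alpha) * k * z)
        x_prev = x
    return forces

# One cycle of sinusoidal displacement; the enclosed force-displacement
# loop area (dissipated energy) is positive, the signature of hysteresis.
ts = [0.01 * i for i in range(629)]
forces = bouc_wen_force(ts, math.sin)
```

It is this rate-dependent memory in z that a linear time-invariant model cannot represent directly, which is why the paper replaces it with an equivalent linear relaxation model before applying SSI.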

  12. Endoscopic versus transcranial procurement of allograft tympano-ossicular systems: a prospective double-blind randomized controlled audit.

    PubMed

    Caremans, Jeroen; Hamans, Evert; Muylle, Ludo; Van de Heyning, Paul; Van Rompaey, Vincent

    2016-06-01

    Allograft tympano-ossicular systems (ATOS) have proven their use over many decades in tympanoplasty and reconstruction after resection of cholesteatoma. The transcranial bone plug technique has been used for the past 50 years to procure en bloc ATOS (tympanic membrane with malleus, incus and stapes attached). Recently, our group reported the feasibility of the endoscopic procurement technique. The aim of this study was to assess whether clinical outcome is equivalent in ATOS acquired by using the endoscopic procurement technique compared to ATOS acquired by using the transcranial technique. A double-blind randomized controlled audit was performed in a tertiary referral center in patients that underwent allograft tympanoplasty because of chronic otitis media with and without cholesteatoma. Allograft epithelialisation was evaluated at the short-term postoperative visit by microscopic examination. Failures were reported if reperforation was observed. Fifty patients underwent allograft tympanoplasty: 34 received endoscopically procured ATOS and 16 received transcranially procured ATOS. One failed case was observed, in the endoscopic procurement group. We did not observe a statistically significant difference in failure rate between the two groups. This study demonstrates equivalence of the clinical outcome of allograft tympanoplasty using either endoscopically or transcranially procured ATOS, and therefore indicates that the endoscopic technique can be considered the new standard procurement technique, especially because it has several advantages over the former transcranial technique: it avoids the risk of prion transmission and is faster, while lacking any noticeable incision.

  13. SU-F-T-02: Estimation of Radiobiological Doses (BED and EQD2) of Single Fraction Electronic Brachytherapy That Equivalent to I-125 Eye Plaque: By Using Linear-Quadratic and Universal Survival Curve Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Y; Waldron, T; Pennington, E

    Purpose: To test the radiobiological impact of hypofractionated choroidal melanoma brachytherapy, we calculated single fraction equivalent doses (SFED) of the tumor that are equivalent to 85 Gy of I125-BT for 20 patients. Corresponding organs-at-risk (OARs) doses were estimated. Methods: Twenty patients treated with I125-BT were retrospectively examined. The tumor SFED values were calculated from tumor BED using a conventional linear-quadratic (L-Q) model and a universal survival curve (USC). The opposite retina (α/β = 2.58), macula (2.58), optic disc (1.75), and lens (1.2) were examined. The % doses of OARs over tumor doses were assumed to be the same as for a single fraction delivery. The OAR SFED values were converted into BED and equivalent dose in 2 Gy fractions (EQD2) using both the L-Q and USC models, then compared to I125-BT. Results: The USC-based BED and EQD2 doses of the macula, optic disc, and lens were on average 118 ± 46% (p < 0.0527), 126 ± 43% (p < 0.0354), and 112 ± 32% (p < 0.0265) higher than those of I125-BT, respectively. The BED and EQD2 doses of the opposite retina were 52 ± 9% lower than for I125-BT. The tumor SFED values were 25.2 ± 3.3 Gy and 29.1 ± 2.5 Gy when using the USC and L-Q models, respectively, which can be delivered within 1 hour. All BED and EQD2 values using the L-Q model were significantly larger than with the USC model (p < 0.0274) due to the large single fraction size (> 14 Gy). Conclusion: The estimated single fraction doses were feasible to deliver within 1 hour using a high dose rate source such as electronic brachytherapy (eBT). However, the estimated OAR doses using eBT were 112-118% higher than with the I125-BT technique. Continued exploration of alternative dose rates or fractionation schedules is warranted.
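
    The L-Q conversions referred to above (BED and EQD2) can be sketched as follows; this is a generic implementation of the standard formulas, not the authors' code, and the example values are taken from the abstract:

```python
# Standard linear-quadratic (L-Q) dose conversions (generic sketch, not the
# authors' code). alpha_beta is the tissue's alpha/beta ratio in Gy.

def bed(total_dose, dose_per_fraction, alpha_beta):
    """Biologically effective dose: BED = D * (1 + d / (alpha/beta))."""
    return total_dose * (1.0 + dose_per_fraction / alpha_beta)

def eqd2(total_dose, dose_per_fraction, alpha_beta):
    """Equivalent dose in 2 Gy fractions: EQD2 = BED / (1 + 2 / (alpha/beta))."""
    return bed(total_dose, dose_per_fraction, alpha_beta) / (1.0 + 2.0 / alpha_beta)

# Single-fraction delivery: dose per fraction d equals the total dose D.
sfed = 29.1                          # Gy, the tumor SFED from the L-Q model above
print(eqd2(sfed, sfed, alpha_beta=2.58))
```

    For a 2 Gy-per-fraction schedule the conversion is the identity (EQD2 equals the physical dose), which is a quick sanity check on any implementation.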

  14. Transtendon, Double-Row, Transosseous-Equivalent Arthroscopic Repair of Partial-Thickness, Articular-Surface Rotator Cuff Tears

    PubMed Central

    Dilisio, Matthew F.; Miller, Lindsay R.; Higgins, Laurence D.

    2014-01-01

    Arthroscopic transtendinous techniques for the arthroscopic repair of partial-thickness, articular-surface rotator cuff tears offer the advantage of minimizing the disruption of the patient's remaining rotator cuff tendon fibers. In addition, double-row fixation of full-thickness rotator cuff tears has shown biomechanical advantages. We present a novel method combining these 2 techniques for transtendon, double-row, transosseous-equivalent arthroscopic repair of partial-thickness, articular-surface rotator cuff tears. Direct visualization of the reduction of the retracted articular tendon layer to its insertion on the greater tuberosity is the key to the procedure. Linking the medial-row anchors and using a double-row construct provide a stable repair that allows early shoulder motion to minimize the risk of postoperative stiffness. PMID:25473606

  15. Transtendon, double-row, transosseous-equivalent arthroscopic repair of partial-thickness, articular-surface rotator cuff tears.

    PubMed

    Dilisio, Matthew F; Miller, Lindsay R; Higgins, Laurence D

    2014-10-01

    Arthroscopic transtendinous techniques for the arthroscopic repair of partial-thickness, articular-surface rotator cuff tears offer the advantage of minimizing the disruption of the patient's remaining rotator cuff tendon fibers. In addition, double-row fixation of full-thickness rotator cuff tears has shown biomechanical advantages. We present a novel method combining these 2 techniques for transtendon, double-row, transosseous-equivalent arthroscopic repair of partial-thickness, articular-surface rotator cuff tears. Direct visualization of the reduction of the retracted articular tendon layer to its insertion on the greater tuberosity is the key to the procedure. Linking the medial-row anchors and using a double-row construct provide a stable repair that allows early shoulder motion to minimize the risk of postoperative stiffness.

  16. A computational algorithm for spacecraft control and momentum management

    NASA Technical Reports Server (NTRS)

    Dzielski, John; Bergmann, Edward; Paradiso, Joseph

    1990-01-01

    Developments in nonlinear control theory have shown how coordinate changes in the state and input spaces of a dynamical system can be used to transform certain nonlinear differential equations into equivalent linear equations. These techniques are applied here to the control of a spacecraft equipped with momentum exchange devices. An optimal control problem is formulated that incorporates a nonlinear spacecraft model. An algorithm is developed for solving the optimization problem, using feedback linearization to transform it into an equivalent problem involving a linear dynamical constraint, and a functional approximation technique to solve for the linear dynamics in terms of the control. The original problem is thereby transformed into an unconstrained nonlinear quadratic program that yields an approximate solution. Two examples are presented to illustrate the results.
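
    The feedback-linearization idea can be illustrated on a much simpler system than the spacecraft model: a toy pendulum whose nonlinearity is cancelled by the input. The gains k1, k2 and all simulation parameters below are made up for illustration:

```python
import math

# Toy feedback linearization (not the paper's spacecraft model).
# Pendulum dynamics: theta'' = -(g/l) * sin(theta) + u.
# Choosing u = (g/l) * sin(theta) + v cancels the nonlinearity, leaving the
# linear system theta'' = v, stabilized by v = -k1*theta - k2*theta'.

g_over_l = 9.81
k1, k2 = 4.0, 4.0          # arbitrary stabilizing gains (closed-loop poles at -2, -2)

def simulate(theta0, omega0, dt=1e-3, steps=10000):
    theta, omega = theta0, omega0
    for _ in range(steps):
        v = -k1 * theta - k2 * omega          # linear control law
        u = g_over_l * math.sin(theta) + v    # feedback-linearizing input
        omega += (-g_over_l * math.sin(theta) + u) * dt
        theta += omega * dt
    return theta, omega

theta, omega = simulate(1.0, 0.0)   # start 1 rad from equilibrium
print(theta, omega)                  # both decay toward zero
```

    After the cancellation the closed loop is exactly linear, which is the property the paper's algorithm exploits to impose a linear dynamical constraint.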

  17. Develop real-time dosimetry concepts and instrumentation for long term missions

    NASA Technical Reports Server (NTRS)

    Braby, L. A.

    1981-01-01

    The development of a rugged portable dosimetry system, based on microdosimetry techniques, which will measure dose and evaluate dose equivalent in a mixed radiation field is described. Progress in the desired dosimetry system can be divided into three distinct areas: development of the radiation detector, and electron system are presented. The mathematical techniques required are investigated.

  18. An Equivalent Moment Magnitude Earthquake Catalogue for Western Turkey and its Quantitative Properties

    NASA Astrophysics Data System (ADS)

    Leptokaropoulos, Konstantinos; Vasilios, Karakostas; Eleftheria, Papadimitriou; Aggeliki, Adamaki; Onur, Tan; Zumer, Pabuçcu

    2013-04-01

    Earthquake catalogues constitute a basic product of seismology, resulting from complex procedures and suffering from natural and man-made errors. The accumulation of these problems over space and time leads to inhomogeneous catalogues, which in turn lead to significant uncertainties in many kinds of analyses, such as seismicity rate evaluation and seismic hazard assessment. A major source of catalogue inhomogeneity is the variety of magnitude scales (i.e. Mw, mb, MS, ML, Md) reported by different institutions and sources. Therefore, an effort is made in this study to compile a catalogue as homogeneous as possible with respect to magnitude scale for the region of Western Turkey (26°E-32°E longitude, 35°N-43°N latitude), one of the most rapidly deforming regions worldwide, with intense seismic activity, complex fault systems and frequent strong earthquakes. For this purpose we established new relationships to transform as many available magnitudes as possible into an equivalent moment magnitude scale, M*w. These relations were obtained by applying the General Orthogonal Regression method, and the statistical significance of the results was quantified. The final equivalent moment magnitude was evaluated by taking into consideration all the available magnitudes for which a relation was obtained, with a weight inversely proportional to their standard deviation. Once the catalogue was compiled, the magnitude of completeness, Mc, was investigated in both space and time. The b-values and their accuracy were also calculated by the maximum likelihood estimate. The spatial and temporal constraints were selected with respect to the seismicity recording level, since the state and evolution of the local and regional seismic networks are unknown. We modified and applied the Goodness of Fit test of Wiemer and Wyss (2000) in order to be more effective on datasets characterized by smaller sample sizes and higher Mc thresholds. 
The compiled catalogue and the Mc evaluation technique introduced in this study may constitute a useful tool for future seismicity research in Western Turkey. Acknowledgements: This work was partially supported by the research project titled "Seismotectonic properties of the eastern Aegean: Implications on the stress field evolution and seismic hazard assessment in a tectonically complex area", GSRT 10 T UR/ 1-3-9, Joint Research and Technology Programmes 2010-2011, financed by the Ministry of Education of Greece.
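
    The General Orthogonal Regression step used for the magnitude conversions can be sketched as follows. This is a minimal version assuming equal error variances in both magnitude scales; the ML-Mw pairs below are invented for illustration, not the study's data:

```python
import math

def orthogonal_regression(x, y):
    """Fit y = a + b*x minimizing perpendicular distances (orthogonal
    regression, i.e. Deming regression with error-variance ratio delta = 1)."""
    n = len(x)
    xb = sum(x) / n
    yb = sum(y) / n
    sxx = sum((xi - xb) ** 2 for xi in x) / (n - 1)
    syy = sum((yi - yb) ** 2 for yi in y) / (n - 1)
    sxy = sum((xi - xb) * (yi - yb) for xi, yi in zip(x, y)) / (n - 1)
    b = (syy - sxx + math.sqrt((syy - sxx) ** 2 + 4 * sxy ** 2)) / (2 * sxy)
    a = yb - b * xb
    return a, b

# Hypothetical ML -> Mw pairs (illustrative only, not the catalogue's data).
ml = [3.1, 3.8, 4.2, 4.9, 5.5, 6.0]
mw = [3.3, 3.9, 4.4, 5.0, 5.7, 6.1]
a, b = orthogonal_regression(ml, mw)
print(f"Mw* = {a:.2f} + {b:.2f} * ML")
```

    Unlike ordinary least squares, this treats both magnitude scales as error-prone, which is why it is preferred for inter-magnitude conversion.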

  19. On the Mathematical Modeling of Single and Multiple Scattering of Ultrasonic Guided Waves by Small Scatterers: A Structural Health Monitoring Measurement Model

    NASA Astrophysics Data System (ADS)

    Strom, Brandon William

    In an effort to assist in the paradigm shift from schedule-based maintenance to condition-based maintenance, we derive measurement models to be used within structural health monitoring algorithms. Our models are physics based, and use scattered Lamb waves to detect and quantify pitting corrosion. After covering the basics of Lamb waves and the reciprocity theorem, we develop a technique for the scattered wave solution. The first application is two-dimensional, and is employed in two different ways. The first approach integrates a traction distribution and replaces it by an equivalent force. The second approach is higher order and uses the actual traction distribution. We find that the equivalent force version of the solution technique holds well for small pits at low frequencies. The second application is three-dimensional. The equivalent force caused by the scattered wave of an arbitrary equivalent force is calculated. We obtain functions for the scattered wave displacements as a function of equivalent forces, equivalent forces as a function of incident wave, and scattered wave amplitudes as a function of incident amplitude. The third application uses self-consistency to derive governing equations for the scattered waves due to multiple corrosion pits. We decouple the implicit set of equations and solve explicitly by using a recursive series solution. Alternatively, we solve via an undetermined coefficient method which results in an interaction operator and solution via matrix inversion. The general solution is given for N pits including mode conversion. We show that the two approaches are equivalent, and give a solution for three pits. Various approximations are advanced to simplify the problem while retaining the leading order physics. As a final application, we use the multiple scattering model to investigate resonance of Lamb waves. We begin with a one-dimensional problem and progress to a three-dimensional problem. 
A directed graph enables interpretation of the interaction operator, and we show that a series solution converges due to loss of energy in the system. We see that there are four causes of resonance and plot the modulation depth as a function of spacing between the pits.
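
    The equivalence of the recursive-series and matrix-inversion routes can be illustrated on a generic self-consistent system x = b + Ax, a toy stand-in for the coupled pit-scattering equations (the matrix A and vector b are made-up numbers, with the spectral radius of A below 1 so the series converges):

```python
# Two routes to the self-consistent system x = b + A x (toy analogue of the
# multiple-scattering equations; A and b are invented numbers).

def mat_vec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def neumann_series(A, b, terms=60):
    """x = b + A b + A^2 b + ...  (the recursive series solution)."""
    x = list(b)
    term = list(b)
    for _ in range(terms):
        term = mat_vec(A, term)
        x = [xi + ti for xi, ti in zip(x, term)]
    return x

def direct_solve_2x2(A, b):
    """Solve (I - A) x = b directly (the matrix-inversion route), 2x2 case."""
    m00, m01 = 1 - A[0][0], -A[0][1]
    m10, m11 = -A[1][0], 1 - A[1][1]
    det = m00 * m11 - m01 * m10
    return [(m11 * b[0] - m01 * b[1]) / det,
            (m00 * b[1] - m10 * b[0]) / det]

A = [[0.2, 0.1],
     [0.3, 0.1]]
b = [1.0, 2.0]
print(neumann_series(A, b))
print(direct_solve_2x2(A, b))   # the two answers agree
```

    The series route mirrors the dissertation's recursive solution (each term adds one more round of re-scattering), while the direct solve mirrors the interaction-operator inversion; convergence of the series corresponds to the stated loss of energy in the system.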

  20. Evaluation of equivalent defect heat generation in carbon epoxy composite under powerful ultrasonic stimulation by using infrared thermography

    NASA Astrophysics Data System (ADS)

    Derusova, D. A.; Vavilov, V. P.; Pawar, S. S.

    2015-04-01

    Low velocity impact is a frequently observed event during the operation of an aircraft composite structure. This type of damage is aptly called "blind-side impact damage", as it is barely visible as a dent on the impacted surface but may produce extended delaminations closer to the rear surface. One-sided thermal nondestructive testing is considered a promising technique for detecting impact damage, but because of the diffusive nature of optical thermal signals there is a drop in the detectability of deeper subsurface defects. Ultrasonic infrared thermography is a potentially attractive nondestructive evaluation technique that detects defects by observing vibration-induced heat generation. Evaluating the energy released by such defects is a challenging task. In this study, the thin delaminations caused by impact damage in composites and subjected to ultrasonic excitation are considered as local heat sources. The actual impact damage in a carbon epoxy composite, detected by applying a magnetostrictive ultrasonic device, is then modeled as a pyramid-like defect with a set of delaminations acting as air-filled heat sources. The temperature rise expected on the surface of the specimen was matched by varying the energy contribution from each delamination through trial and error. Finally, by comparing the experimental temperature elevations in the defective area with the results of the temperature simulations, we estimated the energy generated by each defect and the power of the impact damage as a whole. The results show good correlation between simulations and measurements, thus validating the simulation approach.

  1. Acceleration of Regeneration of Large-Gap Peripheral Nerve Injuries Using Acellular Nerve Allografts plus amniotic Fluid Derived Stem Cells (AFS)

    DTIC Science & Technology

    2017-09-01

    that the AFS seeded ANA used for nerve repair resulted in an improved functional outcome for the rats compared to ANA alone and were equivalent to...junction morphology were equivalent between the AFS seeded ANA. Additional studies investigated the use of post-partum acellular materials to...techniques for repairing large-gap (6 cm) nerve injuries in non -human primates. This pre-clinical model represents a more translational model of

  2. Acceleration of Regeneration of Large-Gap Peripheral Nerve Injuries Using Acellular Nerve Allografts Plus Amniotic Fluid Derived Stem Cells (AFS)

    DTIC Science & Technology

    2017-09-01

    AFS seeded ANA used for nerve repair resulted in an improved functional outcome for the rats compared to ANA alone and were equivalent to those...junction morphology were equivalent between the AFS seeded ANA. Additional studies investigated the use of post-partum acellular materials to promote...techniques for repairing large-gap (6 cm) nerve injuries in non -human primates. This pre-clinical model represents a more translational model of peripheral

  3. 40 CFR 86.1839-01 - Carryover of certification data.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... distribution of catalyst temperatures of the selected durability configuration is effectively equivalent or lower than the distribution of catalyst temperatures of the vehicle configuration which is the source of...

  4. The 2.5-dimensional equivalent sources method for directly exposed and shielded urban canyons.

    PubMed

    Hornikx, Maarten; Forssén, Jens

    2007-11-01

    When a domain in outdoor acoustics is invariant in one direction, an inverse Fourier transform can be used to transform solutions of the two-dimensional Helmholtz equation to a solution of the three-dimensional Helmholtz equation for arbitrary source and observer positions, thereby reducing the computational costs. This previously published approach [D. Duhamel, J. Sound Vib. 197, 547-571 (1996)] is called a 2.5-dimensional method and has here been extended to the urban geometry of parallel canyons, thereby using the equivalent sources method to generate the two-dimensional solutions. No atmospheric effects are considered. To keep the error arising from the transform small, two-dimensional solutions with a very fine frequency resolution are necessary due to the multiple reflections in the canyons. Using the transform, the solution for an incoherent line source can be obtained much more efficiently than by using the three-dimensional solution. It is shown that the use of a coherent line source for shielded urban canyon observer positions leads mostly to an overprediction of levels and can yield erroneous results for noise abatement schemes. Moreover, the importance of multiple facade reflections in shielded urban areas is emphasized by vehicle pass-by calculations, where cases with absorptive and diffusive surfaces have been modeled.

  5. A neutral lithium beam source

    NASA Astrophysics Data System (ADS)

    Zhang, XiaoDong; Wang, ZhengMin; Hu, LiQun

    1994-04-01

    A low energy neutral lithium beam source with an energy of about 6 keV and a neutral beam equivalent current of 20 μA/cm2 has been developed at ASIPP in order to measure the density gradient and the fluctuations in the edge plasma of the HT-6M tokamak. In the source, lithium ions are extracted from a solid emitter (β-eucryptite), focused in a two-tube immersion lens, and neutralized in a charge-exchange cell with sodium. The source operates in pulsed mode, with a pulse length adjustable from 10 to 100 ms.

  6. Equivalent Quantum Equations in a System Inspired by Bouncing Droplets Experiments

    NASA Astrophysics Data System (ADS)

    Borghesi, Christian

    2017-07-01

    In this paper we study a classical, theoretical system consisting of an elastic medium carrying transverse waves and one point-like region of high elastic medium density, called a concretion. We compute the equation of motion for the concretion as well as the wave equation of this system. Thereafter we consider only the case where the concretion is no longer the wave source. The concretion then obeys a general and covariant guidance formula, which leads in the low-velocity approximation to an equivalent de Broglie-Bohm guidance formula. The concretion then moves as if an equivalent quantum potential existed. A strictly equivalent free Schrödinger equation is retrieved, as well as the quantum stationary states in a linear or spherical cavity. We compute the energy (and momentum) of the concretion, naturally defined from the energy (and momentum) density of the vibrating elastic medium. Provided one condition on the amplitude of oscillation is fulfilled, it strikingly appears that the energy and momentum of the concretion are not only written in the same form as in quantum mechanics, but also encapsulate equivalent relativistic formulas.

  7. Affordance Equivalences in Robotics: A Formalism

    PubMed Central

    Andries, Mihai; Chavez-Garcia, Ricardo Omar; Chatila, Raja; Giusti, Alessandro; Gambardella, Luca Maria

    2018-01-01

    Automatic knowledge grounding is still an open problem in cognitive robotics. Recent research in developmental robotics suggests that a robot's interaction with its environment is a valuable source for collecting knowledge about the effects of the robot's actions. A useful concept for this process is that of an affordance, defined as a relationship between an actor, an action performed by this actor, an object on which the action is performed, and the resulting effect. This paper proposes a formalism for defining and identifying affordance equivalence. By comparing the elements of two affordances, we can identify equivalences between affordances and thus acquire grounded knowledge for the robot. This is useful when changes occur in the set of actions or objects available to the robot, allowing it to find alternative paths to reach its goals. In the experimental validation phase we verify whether the recorded interaction data are coherent with the identified affordance equivalences. This is done by querying a Bayesian network that serves as a container for the collected interaction data, and verifying that two affordances considered equivalent yield the same effect with high probability. PMID:29937724

  8. Cross-Language Measurement Equivalence of Parenting Measures for use with Mexican American Populations

    PubMed Central

    Nair, Rajni L.; White, Rebecca M.B.; Knight, George P.; Roosa, Mark W.

    2009-01-01

    Increasing diversity among families in the United States often necessitates the translation of common measures into various languages. However, even when great care is taken during translations, empirical evaluations of measurement equivalence are necessary. The current study demonstrates the analytic techniques researchers should use to evaluate the measurement equivalence of translated measures. To this end we investigated the cross-language measurement equivalence of several common parenting measures in a sample of 749 Mexican American families. The item invariance results indicated similarity of factor structures across language groups for each of the parenting measures for both mothers and children. Construct validity tests indicated similar slope relations between each of the four parenting measures and the outcomes across the two language groups for both mothers and children. Equivalence in intercepts, however, was only achieved for some outcomes. These findings indicate that the use of these measures in both within group and between group analyses based on correlation/covariance structure is defensible, but researchers are cautioned against interpretations of mean level differences across these language groups. PMID:19803604

  9. Total phenolics and total flavonoids in selected Indian medicinal plants.

    PubMed

    Sulaiman, C T; Balachandran, Indira

    2012-05-01

    Plant phenolics and flavonoids have powerful biological activity, which underlines the need for their determination. The phenolic and flavonoid contents of 20 medicinal plants were determined in the present investigation. The phenolic content was determined using the Folin-Ciocalteu assay. The total flavonoids were measured spectrophotometrically using the aluminium chloride colorimetric assay. The results showed that the family Mimosaceae is the richest source of phenolics (Acacia nilotica: 80.63 mg gallic acid equivalents, Acacia catechu: 78.12 mg gallic acid equivalents, Albizia lebbeck: 66.23 mg gallic acid equivalents). The highest total flavonoid content was found in Senna tora, which belongs to the family Caesalpiniaceae. The present study also reports the ratio of flavonoids to phenolics in each sample, indicating their specificity.

  10. A simple calculation method for determination of equivalent square field.

    PubMed

    Shafiei, Seyed Ali; Hasanzadeh, Hadi; Shafiei, Seyed Ahmad

    2012-04-01

    Determination of the equivalent square field for rectangular and shielded fields is of great importance in radiotherapy centers and treatment planning software, and is usually accomplished using standard tables and empirical formulas. The goal of this paper is to present a formula for the equivalent field based on an analysis of scatter reduction due to the inverse square law. Tables based on experimental data are published by agencies such as the ICRU (International Commission on Radiation Units and Measurements), and there also exist mathematical formulas that yield the equivalent square field of a rectangular field, which are used extensively in computational techniques for dose determination. These approaches lead to complicated and time-consuming calculations, which motivated the current study. In this work, considering the portion of scattered radiation in the absorbed dose at the point of measurement, a numerical formula was obtained, from which a simple formula for the equivalent square field was developed. Using polar coordinates and the inverse square law leads to a simple formula for calculating the equivalent field. The presented method is an analytical approach by which one can estimate the equivalent square field of a rectangular field, and it may also be used for a shielded field or an off-axis point. Moreover, one can calculate the equivalent field of a rectangular field from the reduction of scattered radiation with the inverse square law to a good approximation. This method may be useful in computing the Percentage Depth Dose and Tissue-Phantom Ratio, which are extensively used in treatment planning.
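
    For context, the most common rule of thumb for the equivalent square of a rectangular field is the area-to-perimeter ("Sterling") approximation; the sketch below shows that rule, which is not necessarily the formula derived in the paper:

```python
def equivalent_square(a, b):
    """Side of the equivalent square of an a x b rectangular field,
    using the classic 4*Area/Perimeter rule: s = 4ab / (2a + 2b) = 2ab / (a + b)."""
    return 2.0 * a * b / (a + b)

print(equivalent_square(10, 20))   # a 10 x 20 cm field behaves like a ~13.3 cm square
```

    A square field is, as expected, its own equivalent square, which is a useful sanity check for any such formula.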

  11. 40 CFR 63.11438 - What are the standards for new and existing sources?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... (a)(1) and (2) of this section: (1) Use natural gas, or equivalent clean-burning fuel, as the kiln fuel; or (2) Use an electric-powered kiln. (b) You must maintain annual wet glaze usage records for... for new and existing sources? (a) For each kiln that fires glazed ceramic ware, you must maintain the...

  12. Remember-Know and Source Memory Instructions Can Qualitatively Change Old-New Recognition Accuracy: The Modality-Match Effect in Recognition Memory

    ERIC Educational Resources Information Center

    Mulligan, Neil W.; Besken, Miri; Peterson, Daniel

    2010-01-01

    Remember-Know (RK) and source memory tasks were designed to elucidate processes underlying memory retrieval. As part of more complex judgments, both tests produce a measure of old-new recognition, which is typically treated as equivalent to that derived from a standard recognition task. The present study demonstrates, however, that recognition…

  13. High level white noise generator

    DOEpatents

    Borkowski, Casimer J.; Blalock, Theron V.

    1979-01-01

    A wide band, stable, random noise source with a high and well-defined output power spectral density is provided which may be used for accurate calibration of Johnson Noise Power Thermometers (JNPTs) and other applications requiring a stable, wide band, well-defined noise power spectral density. The noise source is based on the fact that the open-circuit thermal noise voltage of a feedback resistor, connecting the output to the input of a special inverting amplifier, is available at the amplifier output from an equivalent low output impedance caused by the feedback mechanism. The noise power spectral density level at the noise source output is equivalent to the density of the open-circuit thermal noise of a 100 ohm resistor at a temperature of approximately 64,000 Kelvin. The noise source has an output power spectral density that is flat to within 0.1% (0.0043 dB) in the frequency range of 1 kHz to 100 kHz, which brackets the typical passbands of the signal-processing channels of JNPTs. Two embodiments are illustrated in this application: one of higher accuracy that is suitable for use as a standards instrument, and another that is particularly adapted to ambient temperature operation.
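
    The quoted figure (thermal noise of a 100 ohm resistor at roughly 64,000 K) can be checked against the Johnson-Nyquist relation S_v = 4kTR:

```python
import math

# Johnson-Nyquist thermal noise of a resistor: S_v = 4 k T R (V^2/Hz).
# Numbers follow the abstract: R = 100 ohm, T ~ 64,000 K.
k_B = 1.380649e-23   # Boltzmann constant, J/K

def noise_voltage_density(resistance_ohm, temperature_k):
    """Open-circuit thermal noise voltage density in V/sqrt(Hz)."""
    return math.sqrt(4.0 * k_B * temperature_k * resistance_ohm)

print(noise_voltage_density(100.0, 64000.0))   # roughly 1.9e-8 V/sqrt(Hz)
```

    About 19 nV/√Hz is far above the thermal noise of the same resistor at room temperature (~1.3 nV/√Hz), which is what makes the source usable as a calibration reference.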

  14. Life Cycle Assessment of Bio-diesel Production—A Comparative Analysis

    NASA Astrophysics Data System (ADS)

    Chatterjee, R.; Sharma, V.; Mukherjee, S.; Kumar, S.

    2014-04-01

    This work deals with a comparative analysis of the environmental impacts of bio-diesel produced from Jatropha curcas, Rapeseed and Palm oil by applying life cycle assessment and eco-efficiency concepts. The environmental impact indicators considered in the present paper include global warming potential (GWP, CO2 equivalent), acidification potential (AP, SO2 equivalent) and eutrophication potential (EP, NO3 equivalent). Different weighting techniques have been used to present and evaluate the environmental characteristics of bio-diesel, and eco-efficiency was demonstrated with the assistance of normalization values. The results indicate that the energy consumption of bio-diesel production is lowest for Jatropha, while AP and EP are higher for Jatropha than for Rapeseed and Palm oil.

  15. Electromagnetic backscattering from a random distribution of lossy dielectric scatterers

    NASA Technical Reports Server (NTRS)

    Lang, R. H.

    1980-01-01

    Electromagnetic backscattering from a sparse distribution of discrete lossy dielectric scatterers occupying a region V was studied. The scatterers are assumed to have random position and orientation. Scattered fields are calculated by first finding the mean field and then by using it to define an equivalent medium within the volume V. The scatterers are then viewed as being embedded in the equivalent medium; the distorted Born approximation is then used to find the scattered fields. This technique represents an improvement over the standard Born approximation since it takes into account the attenuation of the incident and scattered waves in the equivalent medium. The method is used to model a leaf canopy when the leaves are modeled by lossy dielectric discs.

  16. Growth and tolerance of infants fed formula with a new algal source of docosahexaenoic acid: Double-blind, randomized, controlled trial.

    PubMed

    Yeiser, Michael; Harris, Cheryl L; Kirchoff, Ashlee L; Patterson, Ashley C; Wampler, Jennifer L; Zissman, Edward N; Berseth, Carol Lynn

    2016-12-01

    Docosahexaenoic acid (DHA) in infant formula at concentrations based on worldwide human milk has resulted in circulating red blood cell (RBC) lipids related to visual and cognitive development. In this study, infants received study formula (17 mg DHA/100 kcal) with a commercially-available (Control: n=140; DHASCO®) or alternative (DHASCO®-B: n=127) DHA single cell oil from 14 to 120 days of age. No significant group differences were detected for growth rates by gender through 120 days of age. Blood fatty acids at 120 days of age were assessed by capillary column gas chromatography in a participant subset (Control: n=34; DHASCO-B: n=27). The 90% confidence interval (91-104%) for the group mean (geometric) total RBC DHA (µg/mL) ratio fell within the pre-specified equivalence limit (80-125%), establishing study formula equivalence with respect to DHA. This study demonstrated infant formula with DHASCO-B was safe, well-tolerated, and associated with normal growth. Furthermore, DHASCO and DHASCO-B represented equivalent sources of DHA as measured by circulating RBC DHA. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  17. Effects of Reflective Inquiry Instructional Technique on Students' Academic Achievement and Ability Level in Electronic Work Trade in Technical Colleges

    ERIC Educational Resources Information Center

    Ogbuanya, T. C.; Owodunni, A. S.

    2015-01-01

    This study was designed to determine the effect of reflective inquiry instructional technique on achievement of students in Technical Colleges. The study adopted a pre-test, post-test, non-equivalent control group, quasi-experimental research design which involved groups of students in their intact class assigned to experimental group and control…

  18. ITER-like antenna capacitors voltage probes: Circuit/electromagnetic calculations and calibrations.

    PubMed

    Helou, W; Dumortier, P; Durodié, F; Lombard, G; Nicholls, K

    2016-10-01

    The analyses illustrated in this manuscript have been performed in order to provide the required data for the amplitude-and-phase calibration of the D-dot voltage probes used in the ITER-like antenna at the Joint European Torus tokamak. Their equivalent electrical circuit has been extracted and analyzed, and compared to that of voltage probes installed in simple transmission lines. A radio-frequency calibration technique has been formulated and exact mathematical relations have been derived. This technique mixes in an elegant fashion data extracted from measurements and numerical calculations to retrieve the calibration factors. The latter have been compared to previous calibration data with excellent agreement, proving the robustness of the proposed radio-frequency calibration technique. In particular, it has been stressed that it is crucial to take into account environmental parasitic effects. A low-frequency calibration technique has additionally been formulated and analyzed in depth. The equivalence between the radio-frequency and low-frequency techniques has been rigorously demonstrated. The radio-frequency calibration technique is preferable in the case of the ITER-like antenna due to uncertainties on the characteristics of the cables connected at the inputs of the voltage probes. A method to extract the effect of a mismatched data acquisition system has been derived for both calibration techniques. Finally, it has been shown that for the ITER-like antenna the voltage probes can additionally be used to monitor the currents at the inputs of the antenna.

  19. Semantic relatedness for evaluation of course equivalencies

    NASA Astrophysics Data System (ADS)

    Yang, Beibei

    Semantic relatedness, or its inverse, semantic distance, measures the degree of closeness between two pieces of text determined by their meaning. Related work typically measures semantics based on a sparse knowledge base such as WordNet or Cyc that requires intensive manual efforts to build and maintain. Other work is based on a corpus such as the Brown corpus, or more recently, Wikipedia. This dissertation proposes two approaches to applying semantic relatedness to the problem of suggesting transfer course equivalencies. Two course descriptions are given as input to feed the proposed algorithms, which output a value that can be used to help determine if the courses are equivalent. The first proposed approach uses traditional knowledge sources such as WordNet and corpora for courses from multiple fields of study. The second approach uses Wikipedia, the openly-editable encyclopedia, and it focuses on courses from a technical field such as Computer Science. This work shows that it is promising to adapt semantic relatedness to the education field for matching equivalencies between transfer courses. A semantic relatedness measure using traditional knowledge sources such as WordNet performs relatively well on non-technical courses. However, due to the "knowledge acquisition bottleneck," such a resource is not ideal for technical courses, which use an extensive and growing set of technical terms. To address the problem, this work proposes a Wikipedia-based approach which is later shown to be more correlated to human judgment compared to previous work.
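As a purely illustrative baseline, the input/output shape shared by these algorithms (two course descriptions in, one relatedness score out) can be sketched with a bag-of-words cosine similarity. This is not the dissertation's WordNet- or Wikipedia-based method, and the sample descriptions are invented:

```python
import math
from collections import Counter

# Toy relatedness measure: cosine similarity between bag-of-words
# vectors of two course descriptions. Real semantic-relatedness
# measures (WordNet-, corpus-, or Wikipedia-based) go well beyond
# surface word overlap; this only illustrates the interface.
def cosine_similarity(text_a, text_b):
    a = Counter(text_a.lower().split())
    b = Counter(text_b.lower().split())
    dot = sum(a[w] * b[w] for w in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

score = cosine_similarity(
    "introduction to data structures and algorithms",
    "algorithms and data structures fundamentals")
```

A threshold on such a score could then flag candidate equivalencies for human review, which matches how the dissertation frames the task: the measure assists, rather than replaces, the transfer-credit evaluator.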

  20. On the optimality of code options for a universal noiseless coder

    NASA Technical Reports Server (NTRS)

    Yeh, Pen-Shu; Rice, Robert F.; Miller, Warner

    1991-01-01

    A universal noiseless coding structure was developed that provides efficient performance over an extremely broad range of source entropy. This is accomplished by adaptively selecting the best of several easily implemented variable-length coding algorithms. Custom VLSI coder and decoder modules capable of processing over 20 million samples per second are currently under development. The first of the code options used in this module development is shown to be equivalent to a class of Huffman codes under the Humblet condition; other options are shown to be equivalent to the Huffman codes of a modified Laplacian symbol set, at specified symbol entropy values. Simulation results are obtained on actual aerial imagery, and they confirm the optimality of the scheme. On sources having Gaussian or Poisson distributions, coder performance is also projected through analysis and simulation.
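The adaptive-selection principle can be sketched with a generic Golomb-Rice family (an assumption for illustration; the module's actual code options and the Humblet condition are specified in the paper): for each data block, compute the encoded length under every option k and keep the cheapest.

```python
# Golomb-Rice sketch of adaptive code-option selection: sample n is
# coded as a unary prefix of (n >> k) zeros, a stop bit, and k
# remainder bits. Parameters and data are illustrative.
def rice_encode_length(samples, k):
    return sum((n >> k) + 1 + k for n in samples)

def best_option(samples, k_options=range(8)):
    # Pick the option giving the shortest encoded block.
    return min(k_options, key=lambda k: rice_encode_length(samples, k))

block = [0, 3, 1, 7, 2, 0, 5, 1]
k = best_option(block)                 # selected code option
bits = rice_encode_length(block, k)    # encoded block length in bits
```

Because the winning option is re-chosen for every block, performance adapts as the source entropy drifts, which is the sense in which such a coder is universal.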

  1. SU-F-T-06: Development of a Formalism for Practical Dose Measurements in Brachytherapy in the German Standard DIN 6803

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hensley, F; Chofor, N; Schoenfeld, A

    2016-06-15

    Purpose: In the steep dose gradients in the vicinity of a radiation source, and due to the properties of the changing photon spectra, dose measurements in brachytherapy usually have large uncertainties. Working group DIN 6803-3 is presently discussing recommendations for practical brachytherapy dosimetry incorporating recent theoretical developments in the description of brachytherapy radiation fields as well as new detectors and phantom materials. The goal is to prepare methods and instruments to verify dose calculation algorithms and for clinical dose verification with reduced uncertainties. Methods: After analysis of the distance-dependent spectral changes of the radiation field surrounding brachytherapy sources, the energy-dependent response of typical brachytherapy detectors was examined with Monte Carlo simulations. A dosimetric formalism was developed allowing the correction of their energy dependence as a function of source distance for a Co-60 calibrated detector. Water-equivalent phantom materials were examined with Monte Carlo calculations for their influence on brachytherapy photon spectra and for their water equivalence in terms of generating equivalent distributions of photon spectra and absorbed dose to water. Results: The energy dependence of a detector in the vicinity of a brachytherapy source can be described by defining an energy correction factor kQ for brachytherapy, in the same manner as in existing dosimetry protocols, which incorporates volume averaging and radiation field distortion by the detector. Solid phantom materials were identified which allow precise positioning of a detector together with small, correctable deviations from absorbed dose to water. Recommendations for the selection of detectors and phantom materials are being developed for different measurements in brachytherapy. Conclusion: The introduction of kQ for brachytherapy sources may allow more systematic and comparable dose measurements. In principle, the corrections can be verified or even determined by measurement in a water phantom and comparison with dose distributions calculated using the TG-43 dosimetry formalism. The project is supported by DIN Deutsches Institut fuer Normung.
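The kQ-based formalism parallels existing external-beam dosimetry protocols: absorbed dose to water is the detector reading times a Co-60 calibration coefficient times a distance-dependent correction. A minimal sketch, with hypothetical symbols and numbers (not values from DIN 6803-3):

```python
# Hypothetical illustration of the kQ formalism for brachytherapy:
# D_w = M * N_Dw * kQ(r), where M is the reading of a Co-60
# calibrated detector, N_Dw its calibration coefficient, and kQ(r)
# the distance-dependent energy/quality correction. All values here
# are invented for illustration.
def dose_to_water(M, N_Dw, kQ):
    return M * N_Dw * kQ

d = dose_to_water(M=2.5, N_Dw=0.04, kQ=1.08)  # arbitrary units
```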

  2. Foraging in subterranean termites (Isoptera: Rhinotermitidae): how do Heterotermes tenuis and Coptotermes gestroi behave when they locate equivalent food resources?

    PubMed

    Lima, J T; Costa-Leonardo, A M

    2014-08-01

    Previous research suggests that when subterranean termites locate equivalent foods, they consume the initial food resource. However, little is known about the movement of foragers among these food sources. For this reason, this study analyzed the feeding behavior of Heterotermes tenuis and Coptotermes gestroi in the presence of equivalent foods. The experimental arenas were composed of a release chamber connected to food chambers. The consumption of each wood block and percentage of the foraging individuals recruited for the food chambers were observed in relation to the total survival rate. The results showed that in the multiple-choice tests, wood block consumption and the recruitment of individuals did not differ between replicates of each termite species. However, in different tests of tenacity, the chambers with the first food presented higher feeding rates by both H. tenuis and C. gestroi and resulted in a higher recruitment of workers and soldiers. In these conditions, it may be concluded that foragers of either species do not concentrate their efforts on the consumption of only one food resource when they are able to reach multiple cellulosic sources simultaneously. Additionally, the data concerning tenacity tests suggest that there is a chronologic priority of consumption in relation to the discovery of available food sources. Knowledge about the foraging biology of subterranean termites is important for future studies of their feeding behavior, and it is indispensable for improving control strategies.

  3. Evaluation of optimum room entry times for radiation therapists after high energy whole pelvic photon treatments.

    PubMed

    Ho, Lavine; White, Peter; Chan, Edward; Chan, Kim; Ng, Janet; Tam, Timothy

    2012-01-01

    Linear accelerators operating at or above 10 MV produce neutrons by photonuclear reactions and induce activation in machine components, which are a source of potential exposure for radiation therapists. This study estimated gamma dose contributions to radiation therapists during high energy, whole pelvic, photon beam treatments and determined the optimum room entry times, in terms of safety of radiation therapists. Two types of technique (anterior-posterior opposing and 3-field technique) were studied. An Elekta Precise treatment system, operating up to 18 MV, was investigated. Measurements with an area monitoring device (a Mini 900R radiation monitor) were performed, to calculate gamma dose rates around the radiotherapy facility. Measurements inside the treatment room were performed when the linear accelerator was in use. The doses received by radiation therapists were estimated, and optimum room entry times were determined. The highest gamma dose rates were approximately 7 μSv/h inside the treatment room, while the doses in the control room were close to background (~0 μSv/h) for all techniques. The highest personal dose received by radiation therapists was estimated at 5 mSv/yr. To optimize protection, radiation therapists should wait for up to 11 min after beam-off prior to room entry. The potential risks to radiation therapists with standard safety procedures were well below internationally recommended values, but risks could be further decreased by delaying room entry times. Depending on the technique used, optimum entry times ranged from 7 to 11 min. A balance between moderate treatment times versus reduction in measured equivalent doses should be considered.
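If the post-beam-off gamma dose rate decayed with a single effective half-life, the required delay would follow directly from exponential decay. The half-life and target rate below are invented for illustration; the study derived its 7 to 11 min entry times from measurements, not from this simplification:

```python
import math

# Exponential-decay arithmetic for an activation gamma field:
# R(t) = R0 * 2**(-t / T_half), solved for the time needed to reach
# a target dose rate. All parameter values are hypothetical.
def entry_delay(rate0, rate_target, half_life_min):
    lam = math.log(2) / half_life_min
    return math.log(rate0 / rate_target) / lam  # minutes

# e.g. 7 uSv/h measured rate, 2 uSv/h target, assumed 6 min half-life
t_wait = entry_delay(rate0=7.0, rate_target=2.0, half_life_min=6.0)
```

In practice the activation field is a mixture of isotopes with different half-lives, so a single-exponential model only approximates the early part of the decay.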

  4. Semilinear programming: applications and implementation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mohan, S.

    Semilinear programming is a method of solving optimization problems with linear constraints where the non-negativity restrictions on the variables are dropped and the objective function coefficients can take on different values depending on whether the variable is positive or negative. The simplex method for linear programming is modified in this thesis to solve general semilinear and piecewise linear programs efficiently without having to transform them into equivalent standard linear programs. Several models in widely different areas of optimization such as production smoothing, facility locations, goal programming and L₁ estimation are presented first to demonstrate the compact formulation that arises when such problems are formulated as semilinear programs. A code SLP is constructed using the semilinear programming techniques. Problems in aggregate planning and L₁ estimation are solved using SLP and equivalent linear programs using a linear programming simplex code. Comparisons of CPU times and number of iterations indicate SLP to be far superior. The semilinear programming techniques are extended to piecewise linear programming in the implementation of the code PLP. Piecewise linear models in aggregate planning are solved using PLP and equivalent standard linear programs using a simple upper bounded linear programming code SUBLP.

  5. Energy awareness for supercapacitors using Kalman filter state-of-charge tracking

    NASA Astrophysics Data System (ADS)

    Nadeau, Andrew; Hassanalieragh, Moeen; Sharma, Gaurav; Soyata, Tolga

    2015-11-01

    Among energy buffering alternatives, supercapacitors can provide unmatched efficiency and durability. Additionally, the direct relation between a supercapacitor's terminal voltage and stored energy can improve energy awareness. However, a simple capacitive approximation cannot adequately represent the stored energy in a supercapacitor. It is shown that the three branch equivalent circuit model provides more accurate energy awareness. This equivalent circuit uses three capacitances and associated resistances to represent the supercapacitor's internal SOC (state-of-charge). However, the SOC cannot be determined from one observation of the terminal voltage, and must be tracked over time using inexact measurements. We present: 1) a Kalman filtering solution for tracking the SOC; 2) an on-line system identification procedure to efficiently estimate the equivalent circuit's parameters; and 3) experimental validation of both parameter estimation and SOC tracking for 5 F, 10 F, 50 F, and 350 F supercapacitors. Validation is done within the operating range of a solar powered application and the associated power variability due to energy harvesting. The proposed techniques are benchmarked against the simple capacitive model and prior parameter estimation techniques, and provide a 67% reduction in root-mean-square error for predicting usable buffered energy.
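The tracking step can be sketched with a scalar Kalman filter on a single ideal capacitance. The paper's three-branch model adds two slower RC branches and on-line parameter identification; the capacitance, time step, and noise variances here are illustrative assumptions, not the study's identified values:

```python
# Minimal sketch of Kalman-filter state-of-charge tracking for a
# supercapacitor, simplified to one ideal capacitance.
C = 50.0       # capacitance in farads (hypothetical)
dt = 1.0       # sample period in seconds
q_var = 1e-6   # process-noise variance (assumed)
r_var = 1e-3   # measurement-noise variance (assumed)

def kalman_soc(voltages, currents, v0=0.0, p0=1.0):
    """Track capacitor voltage (an SOC proxy) from noisy measurements."""
    v, p = v0, p0
    estimates = []
    for z, i in zip(voltages, currents):
        # Predict: an ideal capacitor charges by dV = I * dt / C.
        v += i * dt / C
        p += q_var
        # Correct with the terminal-voltage measurement z.
        k = p / (p + r_var)
        v += k * (z - v)
        p *= (1 - k)
        estimates.append(v)
    return estimates

# Example: constant 1 A charge for 100 s (noise-free measurements).
currents = [1.0] * 100
voltages = [(n + 1) * 1.0 * dt / C for n in range(100)]
est = kalman_soc(voltages, currents)
```

With noisy measurements, the gain k automatically balances the circuit-model prediction against each observation, which is what lets the filter track the internal state that a single terminal-voltage reading cannot reveal.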

  6. Automobile Crash Sensor Signal Processor

    DOT National Transportation Integrated Search

    1973-11-01

    The crash sensor signal processor described interfaces between an automobile-installed doppler radar and an air bag activating solenoid or equivalent electromechanical device. The processor utilizes both digital and analog techniques to produce an ou...

  7. A study of the radiation environment on board the space shuttle flight STS-57

    NASA Technical Reports Server (NTRS)

    Badhwar, G. D.; Atwell, W.; Benton, E. V.; Frank, A. L.; Keegan, R. P.; Dudkin, V. E.; Karpov, O. N.; Potapov, V.; Akopova, A. B.; Magradze, N. V.

    1995-01-01

    A joint NASA-Russian study of the radiation environment inside a SPACEHAB 2 locker on space shuttle flight STS-57 was conducted. The shuttle flew in a nearly circular orbit of 28.5 deg inclination and 462 km altitude. The locker carried a charged particle spectrometer, a tissue equivalent proportional counter (TEPC), and two area passive detectors consisting of combined NASA plastic nuclear track detectors (PNTD's) and thermoluminescent detectors (TLD's), and Russian nuclear emulsions, PNTD's, and TLD's. All the detector systems were shielded by the same shuttle mass distribution. This makes possible a direct comparison of the various dose measurement techniques. In addition, measurements of the neutron energy spectrum were made using the proton recoil technique. The results show good agreement between the integral LET spectrum of the combined galactic and trapped particles using the tissue equivalent proportional counter and track detectors between about 15 keV/micron and 200 keV/micron. The LET spectrum determined from nuclear emulsions was systematically lower by about 50%, possibly due to emulsion fading. The results show that the TEPC measured an absorbed dose 20% higher than TLD's, due primarily to an increased TEPC response to neutrons and a low sensitivity of TLD's to high LET particles under normal processing techniques. There is a significant flux of high energy neutrons that is currently not taken into consideration in dose equivalent calculations. The results of the analysis of the spectrometer data will be reported separately.

  8. SU-F-T-398: Improving Radiotherapy Treatment Planning Using Dual Energy Computed Tomography Based Tissue Characterization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tomic, N; Bekerat, H; Seuntjens, J

    Purpose: Both kVp settings and geometric distribution of various materials lead to significant change of the HU values, showing the largest discrepancy for high-Z materials and for the lowest CT scanning kVp setting. On the other hand, the dose distributions around low-energy brachytherapy sources are highly dependent on the architecture and composition of tissue heterogeneities in and around the implant. Both measurements and Monte Carlo calculations show that improper tissue characterization may lead to calculated dose errors of 90% for low energy and around 10% for higher energy photons. We investigated the ability of dual-energy CT (DECT) to characterize tissue-equivalent materials more accurately. Methods: We used the RMI-467 heterogeneity phantom scanned in DECT mode with 3 different set-ups: first, we placed high electron density (ED) plugs within the outer ring of the phantom; then we arranged high ED plugs within the inner ring; and finally ED plugs were randomly distributed. All three setups were scanned with the same DECT technique using a single-source DECT scanner with fast kVp switching (Discovery CT750HD; GE Healthcare). Images were transferred to a GE Advantage workstation for DECT analysis. Spectral Hounsfield unit curves (SHUACs) were then generated from 50 to 140 keV, in 10-keV increments, for each plug. Results: The dynamic range of Hounsfield units shrinks with increased photon energy as the attenuation coefficients decrease. Our results show that the spread of HUs for the three different geometrical setups is the smallest at 80 keV. Furthermore, among all the energies and all materials presented, the largest difference appears for high-Z tissue-equivalent plugs. Conclusion: Our results suggest that dose calculations at both megavoltage and low photon energies could benefit in the vicinity of bony structures if the 80 keV reconstructed monochromatic CT image is used with the DECT protocol utilized in this work.

  9. Application of Recombinant Factor C Reagent for the Detection of Bacterial Endotoxins in Pharmaceutical Products.

    PubMed

    Bolden, Jay; Smith, Kelly

    2017-01-01

    Recombinant Factor C (rFC) is a non-animal-derived reagent used to detect bacterial endotoxins in pharmaceutical products. Despite the fact that the reagent first became commercially available nearly 15 years ago, broad use of rFC in the pharmaceutical industry has long lagged, presumably due to historical single-source supplier concerns and the lack of inclusion in worldwide pharmacopeias. Commercial rFC reagents are now available from multiple manufacturers, so single sourcing is no longer an issue. We report here the successful validation of several pharmaceutical products by an end-point fluorescence-based endotoxin method using the rFC reagent. The method is equivalent or superior to the compendial bacterial endotoxins test method. Based on the comparability data and extenuating circumstances, the incorporation of the end-point fluorescence technique and rFC reagent in global compendial bacterial endotoxins test chapters is desired and warranted. LAY ABSTRACT: Public health has been protected for over 30 years with the use of a purified blood product of the horseshoe crab, limulus amebocyte lysate. More recently, this blood product can be produced in biotech manufacturing processes, which reduces potential impacts to the horseshoe crab and related species dependent upon the crab, for example, migrating shorebirds. The pharmaceutical industry has been slow to adopt the use of this reagent, Recombinant Factor C (rFC), for various reasons. We evaluated the use of rFC across many pharmaceutical products, and in other feasibility demonstration experiments, and found rFC to be a suitable alternative to the animal-derived limulus amebocyte lysate. Incorporation of rFC and its analytical method into national testing standards would provide an equivalent or better test while continuing to maintain patient safety for those who depend on medicines and while securing pharmaceutical supply chains. In addition, widespread use of this method would benefit existing animal conservation efforts. © PDA, Inc. 2017.

  10. SU-E-T-594: Out-Of-Field Neutron and Gamma Dose Estimated Using TLD-600/700 Pairs in the Wobbling Proton Therapy System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Y; Lin, Y; Medical Physics Research Center, Institute for Radiological Research, Chang Gung University / Chang Gung Memorial Hospital, Linkou, Taoyuan, Taiwan

    Purpose: Secondary fast neutrons and gamma rays are mainly produced due to the interaction of the primary proton beam with the beam delivery nozzle. These secondary radiation doses to patients and radiation workers are unwanted. The purpose of this study is to estimate the neutron and gamma dose equivalent out of the treatment volume during treatment with the wobbling proton therapy system. Methods: Two types of thermoluminescent (TL) dosimeters, TLD-600 (⁶LiF:Mg,Ti) and TLD-700 (⁷LiF:Mg,Ti), were used in this study. They were calibrated in the standard neutron and gamma sources at the National Standards Laboratory. The annealing procedure is 400°C for 1 hour, 100°C for 2 hours and spontaneous cooling down to room temperature in a programmable oven. The two-peak method (a glow-curve analysis technique) was used to evaluate the TL response corresponding to the neutron and gamma dose. The TLD pairs were placed outside the treatment field in the neutron-gamma mixed field with a 190-MeV proton beam produced by the wobbling system through the polyethylene plate phantom. The results of the TLD measurement were compared to Monte Carlo simulation. Results: Initial experimental results give calculated dose equivalents of 0.63, 0.38, 0.21 and 0.13 mSv per Gy outside the field at distances of 50, 100, 150 and 200 cm. Conclusion: The TLD-600 and TLD-700 pairs are convenient for estimating neutron and gamma dosimetry during proton therapy. However, an accurate and suitable glow-curve analysis technique is necessary. Our results showed that during wobbling-system proton therapy, the neutron and gamma doses outside the treatment field are noticeable. This study was supported by grants from Chang Gung Memorial Hospital (CMRPD1C0682).
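The pair principle reduces to simple arithmetic: TLD-600 (⁶LiF) responds to both neutrons and gammas, while TLD-700 (⁷LiF) is nearly neutron-blind, so subtracting the readings isolates the neutron component. The calibration factors below are placeholders, not the study's calibrated values:

```python
# Sketch of the TLD-600/700 pair method for a mixed neutron-gamma
# field. k_gamma and k_neutron are hypothetical calibration factors;
# in practice they come from reference irradiations and glow-curve
# analysis (e.g. the two-peak method).
def separate_doses(tl600, tl700, k_gamma=1.0, k_neutron=2.5):
    # tl600, tl700: net TL readings (arbitrary units)
    # k_gamma:   TL signal per mSv of gamma dose (both chip types)
    # k_neutron: extra TL signal per mSv of neutron dose in TLD-600
    gamma_dose = tl700 / k_gamma
    neutron_dose = (tl600 - tl700 * k_gamma) / k_neutron
    return neutron_dose, gamma_dose

n_dose, g_dose = separate_doses(tl600=3.0, tl700=0.5)
```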

  11. Spatial homogenization methods for pin-by-pin neutron transport calculations

    NASA Astrophysics Data System (ADS)

    Kozlowski, Tomasz

    For practical reactor core applications, low-order transport approximations such as SP3 have been shown to provide sufficient accuracy for both static and transient calculations with considerably less computational expense than the discrete ordinate or the full spherical harmonics methods. These methods have been applied in several core simulators where homogenization was performed at the level of the pin cell. One of the principal problems has been to recover the error introduced by pin-cell homogenization. Two basic approaches to treat pin-cell homogenization error have been proposed: Superhomogenization (SPH) factors and Pin-Cell Discontinuity Factors (PDF). These methods are based on well-established Equivalence Theory and Generalized Equivalence Theory to generate appropriate group constants. These methods are able to treat all sources of error together, allowing even few-group diffusion with one mesh per cell to reproduce the reference solution. A detailed investigation and consistent comparison of both homogenization techniques showed the potential of the PDF approach to improve the accuracy of core calculations, but also revealed its limitation. In principle, the method is applicable only for the boundary conditions at which it was created, i.e. for boundary conditions considered during the homogenization process, normally zero current. Therefore, there exists a need to improve this method, making it more general and environment-independent. The goal of the proposed general homogenization technique is to create a function that is able to correctly predict the appropriate correction factor with only homogeneous information available, i.e. a function, derived from the heterogeneous solution, that can approximate PDFs using only the homogeneous solution. It has been shown that the PDF can be well approximated by a least-squares polynomial fit of the non-dimensional heterogeneous solution and later used for PDF prediction from the homogeneous solution. This shows promise for PDF prediction at off-reference conditions, such as during reactor transients, which produce conditions that cannot typically be anticipated a priori.
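The proposed prediction step can be sketched as an ordinary least-squares polynomial fit mapping a non-dimensional solution feature to the pin-cell discontinuity factor. The feature, polynomial degree, and data points below are synthetic placeholders, not values from the dissertation:

```python
import numpy as np

# Synthetic sketch: fit a least-squares polynomial mapping a
# non-dimensional flux feature x to reference pin-cell discontinuity
# factors (PDFs), then predict a PDF at an off-reference condition
# from the homogeneous solution's feature alone.
x = np.array([0.1, 0.3, 0.5, 0.7, 0.9])          # flux feature (invented)
pdf = np.array([1.02, 1.05, 1.09, 1.15, 1.22])   # reference PDFs (invented)

coeffs = np.polyfit(x, pdf, deg=2)   # least-squares quadratic fit
predict = np.poly1d(coeffs)

pdf_new = predict(0.6)               # prediction at an unseen condition
```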

  12. Equivalent-Continuum Modeling With Application to Carbon Nanotubes

    NASA Technical Reports Server (NTRS)

    Odegard, Gregory M.; Gates, Thomas S.; Nicholson, Lee M.; Wise, Kristopher E.

    2002-01-01

    A method has been proposed for developing structure-property relationships of nano-structured materials. This method serves as a link between computational chemistry and solid mechanics by substituting discrete molecular structures with equivalent-continuum models. It has been shown that this substitution may be accomplished by equating the vibrational potential energy of a nano-structured material with the strain energy of representative truss and continuum models. As important examples with direct application to the development and characterization of single-walled carbon nanotubes and the design of nanotube-based devices, the modeling technique has been applied to determine the effective-continuum geometry and bending rigidity of a graphene sheet. A representative volume element of the chemical structure of graphene has been substituted with equivalent-truss and equivalent-continuum models. As a result, an effective thickness of the continuum model has been determined. This effective thickness has been shown to be significantly larger than the interatomic spacing of graphite. The effective bending rigidity of the equivalent-continuum model of a graphene sheet was determined by equating the vibrational potential energy of the molecular model of a graphene sheet subjected to cylindrical bending with the strain energy of an equivalent continuum plate subjected to cylindrical bending.

  13. Radiation-induced second primary cancer risks from modern external beam radiotherapy for early prostate cancer: impact of stereotactic ablative radiotherapy (SABR), volumetric modulated arc therapy (VMAT) and flattening filter free (FFF) radiotherapy

    NASA Astrophysics Data System (ADS)

    Murray, Louise J.; Thompson, Christopher M.; Lilley, John; Cosgrove, Vivian; Franks, Kevin; Sebag-Montefiore, David; Henry, Ann M.

    2015-02-01

    Risks of radiation-induced second primary cancer following prostate radiotherapy using 3D-conformal radiotherapy (3D-CRT), intensity-modulated radiotherapy (IMRT), volumetric modulated arc therapy (VMAT), flattening filter free (FFF) and stereotactic ablative radiotherapy (SABR) were evaluated. Prostate plans were created using 10 MV 3D-CRT (78 Gy in 39 fractions) and 6 MV 5-field IMRT (78 Gy in 39 fractions), VMAT (78 Gy in 39 fractions, with standard flattened and energy-matched FFF beams) and SABR (42.7 Gy in 7 fractions with standard flattened and energy-matched FFF beams). Dose-volume histograms from pelvic planning CT scans of three prostate patients, each planned using all 6 techniques, were used to calculate organ equivalent doses (OED) and excess absolute risks (EAR) of second rectal and bladder cancers, and pelvic bone and soft tissue sarcomas, using mechanistic, bell-shaped and plateau models. For organs distant to the treatment field, chamber measurements recorded in an anthropomorphic phantom were used to calculate OEDs and EARs using a linear model. Ratios of OED give relative radiation-induced second cancer risks. SABR resulted in lower second cancer risks at all sites relative to 3D-CRT. FFF resulted in lower second cancer risks in out-of-field tissues relative to equivalent flattened techniques, with increasing impact in organs at greater distances from the field. For example, FFF reduced second cancer risk by up to 20% in the stomach and up to 56% in the brain, relative to the equivalent flattened technique. Relative to 10 MV 3D-CRT, 6 MV IMRT or VMAT with flattening filter increased second cancer risks in several out-of-field organs, by up to 26% and 55%, respectively. For all techniques, EARs were consistently low. Although large relative differences were observed between techniques, the corresponding absolute differences were very low, highlighting the importance of considering absolute risks alongside relative risks: when absolute risks are very low, large relative risks become less meaningful. A relative radiation-induced second cancer risk benefit from SABR and FFF techniques was theoretically predicted, although absolute risks were low for all techniques, and absolute differences between techniques were small.

  14. Screening of hormone-like activities in bottled waters available in Southern Spain using receptor-specific bioassays.

    PubMed

    Real, Macarena; Molina-Molina, José-Manuel; Jiménez-Díaz, Inmaculada; Arrebola, Juan Pedro; Sáenz, José-María; Fernández, Mariana F; Olea, Nicolás

    2015-01-01

    Bottled water consumption is a putative source of human exposure to endocrine-disrupting chemicals (EDCs). Research has been conducted on the presence of chemicals with estrogen-like activity in bottled waters and on their estrogenicity, but few data are available on the presence of hormonal activities associated with other nuclear receptors (NRs). The aim of this study was to determine the presence of endocrine activities dependent on the activation of human estrogen receptor alpha (hERa) and/or androgen receptor (hAR) in water in glass or plastic bottles sold to consumers in Southern Spain. Hormone-like activities were evaluated in 29 bottled waters using receptor-specific bioassays based on reporter gene expression in PALM cells [(anti-)androgenicity] and cell proliferation assessment in MCF-7 cells [(anti-)estrogenicity] after optimized solid phase extraction (SPE). All of the water samples analyzed showed hormonal activity. This was estrogenic in 79.3% and anti-estrogenic in 37.9% of samples and was androgenic in 27.5% and anti-androgenic in 41.3%, with mean concentrations per liter of 0.113pM 17β-estradiol (E2) equivalent units (E2Eq), 11.01pM anti-estrogen (ICI 182780) equivalent units (ICI 182780Eq), 0.33pM methyltrienolone (R1881) equivalent units (R1881Eq), and 0.18nM procymidone equivalent units (ProcEq). Bottled water consumption contributes to EDC exposure. Hormone-like activities observed in waters from both plastic and glass bottles suggest that plastic packaging is not the sole source of contamination and that the source of the water and bottling process may play a role, among other factors. Further research is warranted on the cumulative effects of long-term exposure to low doses of EDCs. Copyright © 2014 Elsevier Ltd. All rights reserved.

  15. Chopped random-basis quantum optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Caneva, Tommaso; Calarco, Tommaso; Montangero, Simone

    2011-08-15

    In this work, we describe in detail the chopped random basis (CRAB) optimal control technique recently introduced to optimize time-dependent density matrix renormalization group simulations [P. Doria, T. Calarco, and S. Montangero, Phys. Rev. Lett. 106, 190501 (2011)]. Here, we study the efficiency of this control technique in optimizing different quantum processes and we show that in the considered cases we obtain results equivalent to those obtained via different optimal control methods while using less resources. We propose the CRAB optimization as a general and versatile optimal control technique.
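The CRAB idea can be sketched in a few lines: expand the control pulse in a small truncated basis with randomized harmonic frequencies and optimize only the few expansion coefficients. The target pulse and figure of merit below are toy stand-ins (real CRAB optimizes a quantum process fidelity), so all numbers are illustrative:

```python
import numpy as np

# Toy chopped-random-basis (CRAB) optimization: represent a control
# pulse with 3 harmonics whose frequencies carry small random shifts,
# then optimize the 3 coefficients against a figure of merit.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)
target = np.sin(np.pi * t) ** 2              # stand-in "optimal" pulse

# Chopped random basis: harmonics 1..3 with +/-10% random shifts.
freqs = np.arange(1, 4) * (1.0 + 0.1 * rng.uniform(-1, 1, 3))
basis = np.sin(np.pi * np.outer(freqs, t))   # shape (3, 200)

def infidelity(c):
    # Toy figure of merit: mean-squared error to the target pulse.
    return np.mean((basis.T @ c - target) ** 2)

# This toy figure of merit is quadratic in c, so the optimum is a
# least-squares solve; in general CRAB uses a gradient-free optimizer
# (e.g. Nelder-Mead) over the coefficients.
c_opt, *_ = np.linalg.lstsq(basis.T, target, rcond=None)
```

The key resource saving is dimensionality: instead of optimizing the pulse value at every time step, only a handful of coefficients are searched, and the random frequency shifts help the truncated basis escape poor fixed-harmonic choices.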

  16. Lightweight and Statistical Techniques for Petascale Debugging: Correctness on Petascale Systems (CoPS) Preliminary Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    de Supinski, B R; Miller, B P; Liblit, B

    2011-09-13

    Petascale platforms with O(10{sup 5}) and O(10{sup 6}) processing cores are driving advancements in a wide range of scientific disciplines. These large systems create unprecedented application development challenges. Scalable correctness tools are critical to shorten the time-to-solution on these systems. Currently, many DOE application developers use primitive manual debugging based on printf or traditional debuggers such as TotalView or DDT. This paradigm breaks down beyond a few thousand cores, yet bugs often arise above that scale. Programmers must reproduce problems in smaller runs to analyze them with traditional tools, or else perform repeated runs at scale using only primitive techniques.more » Even when traditional tools run at scale, the approach wastes substantial effort and computation cycles. Continued scientific progress demands new paradigms for debugging large-scale applications. The Correctness on Petascale Systems (CoPS) project is developing a revolutionary debugging scheme that will reduce the debugging problem to a scale that human developers can comprehend. The scheme can provide precise diagnoses of the root causes of failure, including suggestions of the location and the type of errors down to the level of code regions or even a single execution point. Our fundamentally new strategy combines and expands three relatively new complementary debugging approaches. The Stack Trace Analysis Tool (STAT), a 2011 R&D 100 Award Winner, identifies behavior equivalence classes in MPI jobs and highlights behavior when elements of the class demonstrate divergent behavior, often the first indicator of an error. The Cooperative Bug Isolation (CBI) project has developed statistical techniques for isolating programming errors in widely deployed code that we will adapt to large-scale parallel applications. Finally, we are developing a new approach to parallelizing expensive correctness analyses, such as analysis of memory usage in the Memgrind tool. 
In the first two years of the project, we have successfully extended STAT to determine the relative progress of different MPI processes. We have shown that STAT, which is now included in the debugging tools distributed by Cray with their large-scale systems, substantially reduces the scale at which traditional debugging techniques must be applied. We have extended CBI to large-scale systems and developed new compiler-based analyses that reduce its instrumentation overhead. Our results demonstrate that CBI can identify the source of errors in large-scale applications. Finally, we have developed MPIecho, a new technique that will reduce the time required to perform key correctness analyses, such as the detection of writes to unallocated memory. Overall, our research results are the foundations for new debugging paradigms that will improve application scientist productivity by reducing the time to determine which package or module contains the root cause of a problem that arises at any scale on our high-end systems. While we have made substantial progress in the first two years of CoPS research, significant work remains. Although STAT provides scalable debugging assistance for incorrect application runs, we could also apply its techniques to assertions in order to observe deviations from expected behavior. Further, we must continue to refine STAT's techniques to represent behavioral equivalence classes efficiently, as we expect systems with millions of threads in the next year. We are exploring new CBI techniques that can assess the likelihood that execution deviations from past behavior are the source of erroneous execution. Finally, we must develop usable correctness analyses that apply the MPIecho parallelization strategy in order to locate coding errors. We expect to make substantial progress in these directions in the next year but anticipate that significant work will remain to provide usable, scalable debugging paradigms.
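
The behavioral equivalence-class idea behind STAT can be sketched in a few lines; the ranks and stack frames below are hypothetical, and the real tool gathers traces from live MPI processes rather than from a dictionary:

```python
from collections import defaultdict

def behavior_classes(stack_traces):
    """Group process ranks into equivalence classes by identical call stacks.

    stack_traces: dict mapping rank -> sequence of frame names.
    Returns a dict mapping each distinct call stack to its sorted ranks.
    A class containing very few ranks is often the first hint of a hung
    or diverged process.
    """
    classes = defaultdict(list)
    for rank, trace in stack_traces.items():
        classes[tuple(trace)].append(rank)
    return {trace: sorted(ranks) for trace, ranks in classes.items()}

# Hypothetical traces: rank 2 has diverged into a different code path.
traces = {
    0: ("main", "solve", "MPI_Allreduce"),
    1: ("main", "solve", "MPI_Allreduce"),
    2: ("main", "io_write", "MPI_Wait"),
    3: ("main", "solve", "MPI_Allreduce"),
}
classes = behavior_classes(traces)
```

Here the singleton class immediately narrows attention from four ranks to one, which is the kind of scale reduction the project describes.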

  17. 77 FR 3108 - Dividend Equivalents From Sources Within the United States

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-01-23

    .... Erwin or D. Peter Merkel at (202) 622-3870 (not a toll-free number). SUPPLEMENTARY INFORMATION... is D. Peter Merkel, the Office of Associate Chief Counsel (International). Other personnel from the...

  18. 77 FR 53141 - Dividend Equivalents From Sources Within the United States

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-31

    .... Peter Merkel at (202) 622-3870 (not a toll-free number). SUPPLEMENTARY INFORMATION: Background On... Information The principal author of these regulations is D. Peter Merkel, the Office of Associate Chief...

  19. Dose Equivalents for Antipsychotic Drugs: The DDD Method.

    PubMed

    Leucht, Stefan; Samara, Myrto; Heres, Stephan; Davis, John M

    2016-07-01

    Dose equivalents of antipsychotics are an important but difficult to define concept, because all methods have weaknesses and strengths. We calculated dose equivalents based on defined daily doses (DDDs) presented by the World Health Organisation's Collaborative Center for Drug Statistics Methodology. Doses equivalent to 1 mg olanzapine, 1 mg risperidone, 1 mg haloperidol, and 100 mg chlorpromazine were presented and compared with the results of 3 other methods to define dose equivalence (the "minimum effective dose method," the "classical mean dose method," and an international consensus statement). We presented dose equivalents for 57 first-generation and second-generation antipsychotic drugs, available as oral, parenteral, or depot formulations. Overall, the identified equivalent doses were comparable with those of the other methods, but there were also outliers. The major strengths of this method to define dose equivalence are that DDDs are available for most drugs, including old antipsychotics, that they are based on a variety of sources, and that DDDs are an internationally accepted measure. The major limitation is that the information used to estimate DDDs is likely to differ between the drugs. Moreover, this information is not publicly available, so that it cannot be reviewed. The WHO stresses that DDDs are mainly a standardized measure of drug consumption, and their use as a measure of dose equivalence can therefore be misleading. We, therefore, recommend that if alternative, more "scientific" dose equivalence methods are available for a drug, they should be preferred to DDDs. Moreover, our summary can be a useful resource for pharmacovigilance studies. © The Author 2016. Published by Oxford University Press on behalf of the Maryland Psychiatric Research Center. All rights reserved. For permissions, please email: journals.permissions@oup.com.
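
The arithmetic of the DDD method reduces to scaling doses by the ratio of defined daily doses. A minimal sketch, with DDD values that are illustrative assumptions for this example rather than a substitute for the WHO ATC/DDD index:

```python
# Illustrative DDDs in mg/day (assumed values for this sketch; consult
# the WHO ATC/DDD index for authoritative figures).
DDD = {
    "olanzapine": 10.0,
    "risperidone": 5.0,
    "haloperidol": 8.0,
    "chlorpromazine": 300.0,
}

def equivalent_dose(drug, dose_mg, reference="olanzapine"):
    """Convert a dose of `drug` to the reference drug, assuming doses are
    equivalent when they represent the same fraction of their DDDs."""
    return dose_mg * DDD[reference] / DDD[drug]

# One DDD of chlorpromazine maps to one DDD of the reference drug.
olz_equiv = equivalent_dose("chlorpromazine", 300.0)
```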

  20. Dietary antioxidant capacity of the patients with cardiovascular disease in a cross-sectional study.

    PubMed

    Zujko, Małgorzata E; Witkowska, Anna M; Waśkiewicz, Anna; Piotrowski, Walerian; Terlikowska, Katarzyna M

    2015-03-15

    The purpose of this study was to establish sources and patterns of antioxidant, polyphenol and flavonoid intakes in men and women with cardiovascular disease (CVD). The subjects with CVD and healthy controls (HC) were participants of the Polish National Multicenter Health Survey (WOBASZ). Food intakes were measured with the 1-day 24-hour recall method. A self-developed database was used to calculate dietary total antioxidant capacity (DTAC), dietary total polyphenol content (DTPC) and dietary total flavonoid content (DTFC). DTAC did not differ between the men with CVD and HC men (6442 vs. 6066 μmol trolox equivalents - TE), but in the women with CVD it was significantly higher than in the HC women (6182 vs. 5500 μmol TE). The main sources of antioxidants in the males with CVD were: tea, coffee, apples, and nuts and seeds, and tea, coffee and apples in HC. In the females they were: tea, coffee, apples and strawberries, both in the women with CVD and HC. DTPC in the men with CVD did not differ from HC (1198 vs. 1114 mg gallic acid equivalents, GAE). In the females, DTPC was significantly higher in the subjects with CVD as compared to HC (1075 vs. 981 mg GAE). Predominant sources of polyphenols were: tea, coffee, cabbage, potatoes, apples and white bread in the men with CVD, and tea, coffee, potatoes, white bread and apples in HC, while in the women (both with CVD and HC): tea, coffee, apples, potatoes and cabbage. No differences in DTFC have been found between the males with CVD and HC (212 vs. 202 mg quercetine equivalents, QE). In the women with CVD, DTFC was significantly higher than in HC (200 vs. 177 mg QE). Main sources of flavonoids in all participants (men and women, CVD and HC) were tea, apples, cabbage and coffee. Polish men and women faced with CVD beneficially modify their dietary practices by enhancing intakes of foods that are sources of antioxidants, polyphenols and flavonoids. 
However, different sources and patterns of antioxidant, polyphenol and flavonoid intakes were observed between male and female patients with CVD.

  1. Full-wave modeling of the time domain reflectometry signal in wetted sandy soils using a random microstructure discretization: Comparison with experiments

    NASA Astrophysics Data System (ADS)

    Rejiba, F.; Sagnard, F.; Schamper, C.

    2011-07-01

    Time domain reflectometry (TDR) is a proven, nondestructive method for the measurement of the permittivity and electrical conductivity of soils, using electromagnetic (EM) waves. Standard interpretation of TDR data leads to the estimation of the soil's equivalent electromagnetic properties since the wavelengths associated with the source signal are considerably greater than the microstructure of the soil. The aforementioned approximation tends to hide an important issue: the influence of the microstructure and phase configuration in the generation of a polarized electric field, which is complicated because of the presence of numerous length scales. In this paper, the influence of the microstructural distribution of each phase on the TDR signal has been studied. We propose a two-step EM modeling technique at the microscale: first, we define an equivalent grain including a thin shell of free water, and second, we solve Maxwell's equations over the discretized, statistically distributed triphasic porous medium. Modeling of the TDR probe with the soil sample was performed using a three-dimensional finite difference time domain scheme. The effectiveness of this hybrid homogenization approach is tested on unsaturated Nemours sand with narrow granulometric fractions. The comparisons made between numerical and experimental results are promising, despite significant assumptions concerning (1) the TDR probe head and the coaxial cable and (2) the assumed effective medium theory homogenization associated with the electromagnetic processes arising locally between the liquid and solid phases at the grain scale.

  2. Design and Fabrication of a Differential Electrostatic Accelerometer for Space-Station Testing of the Equivalence Principle.

    PubMed

    Han, Fengtian; Liu, Tianyi; Li, Linlin; Wu, Qiuping

    2016-08-10

    The differential electrostatic space accelerometer is an equivalence principle (EP) experiment instrument proposed to operate onboard China's space station in the 2020s. It is designed to compare the spin-spin interaction between two rotating extended bodies and the Earth to a precision of 10(-12), which is five orders of magnitude better than terrestrial experiment results to date. To achieve the targeted test accuracy, the sensitive space accelerometer will use the very soft space environment provided by a quasi-drag-free floating capsule and long-time observation of the free-fall mass motion for integration of the measurements over 20 orbits. In this work, we describe the design and capability of the differential accelerometer to test weak space acceleration. Modeling and simulation results of the electrostatic suspension and electrostatic motor are presented based on attainable space microgravity condition. Noise evaluation shows that the electrostatic actuation and residual non-gravitational acceleration are two major noise sources. The evaluated differential acceleration noise is 1.01 × 10(-9) m/s²/Hz(1/2) at the NEP signal frequency of 0.182 mHz, by neglecting small acceleration disturbances. The preliminary work on development of the first instrument prototype is introduced for on-ground technological assessments. This development has already confirmed several crucial fabrication processes and measurement techniques and it will open the way to the construction of the final differential space accelerometer.

  3. Design and Fabrication of a Differential Electrostatic Accelerometer for Space-Station Testing of the Equivalence Principle

    PubMed Central

    Han, Fengtian; Liu, Tianyi; Li, Linlin; Wu, Qiuping

    2016-01-01

    The differential electrostatic space accelerometer is an equivalence principle (EP) experiment instrument proposed to operate onboard China’s space station in the 2020s. It is designed to compare the spin-spin interaction between two rotating extended bodies and the Earth to a precision of 10−12, which is five orders of magnitude better than terrestrial experiment results to date. To achieve the targeted test accuracy, the sensitive space accelerometer will use the very soft space environment provided by a quasi-drag-free floating capsule and long-time observation of the free-fall mass motion for integration of the measurements over 20 orbits. In this work, we describe the design and capability of the differential accelerometer to test weak space acceleration. Modeling and simulation results of the electrostatic suspension and electrostatic motor are presented based on attainable space microgravity condition. Noise evaluation shows that the electrostatic actuation and residual non-gravitational acceleration are two major noise sources. The evaluated differential acceleration noise is 1.01 × 10−9 m/s2/Hz1/2 at the NEP signal frequency of 0.182 mHz, by neglecting small acceleration disturbances. The preliminary work on development of the first instrument prototype is introduced for on-ground technological assessments. This development has already confirmed several crucial fabrication processes and measurement techniques and it will open the way to the construction of the final differential space accelerometer. PMID:27517927

  4. Measurement equivalence: a glossary for comparative population health research.

    PubMed

    Morris, Katherine Ann

    2018-03-06

    Comparative population health studies are becoming more common and are advancing solutions to crucial public health problems, but decades-old measurement equivalence issues remain without a common vocabulary to identify and address the biases that contribute to non-equivalence. This glossary defines sources of measurement non-equivalence. While drawing examples from both within-country and between-country studies, this glossary also defines methods of harmonisation and elucidates the unique opportunities in addition to the unique challenges of particular harmonisation methods. Its primary objective is to enable population health researchers to more clearly articulate their measurement assumptions and the implications of their findings for policy. It is also intended to provide scholars and policymakers across multiple areas of inquiry with tools to evaluate comparative research and thus contribute to urgent debates on how to ameliorate growing health disparities within and between countries. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  5. Electrochemical process for the preparation of nitrogen fertilizers

    DOEpatents

    Jiang, Junhua; Aulich, Ted R; Ignatchenko, Alexey V

    2015-04-14

    Methods and apparatus for the preparation of nitrogen fertilizers including ammonium nitrate, urea, urea-ammonium nitrate, and/or ammonia are disclosed. Embodiments include (1) ammonium nitrate produced via the reduction of a nitrogen source at the cathode and the oxidation of a nitrogen source at the anode; (2) urea or its isomers produced via the simultaneous cathodic reduction of a carbon source and a nitrogen source: (3) ammonia produced via the reduction of nitrogen source at the cathode and the oxidation of a hydrogen source or a hydrogen equivalent such as carbon monoxide or a mixture of carbon monoxide and hydrogen at the anode; and (4) urea-ammonium nitrate produced via the simultaneous cathodic reduction of a carbon source and a nitrogen source, and anodic oxidation of a nitrogen source.

  6. Total Phenolics and Total Flavonoids in Selected Indian Medicinal Plants

    PubMed Central

    Sulaiman, C. T.; Balachandran, Indira

    2012-01-01

    Plant phenolics and flavonoids have a powerful biological activity, which outlines the necessity of their determination. The phenolics and flavonoids content of 20 medicinal plants were determined in the present investigation. The phenolic content was determined by using the Folin-Ciocalteu assay. The total flavonoids were measured spectrophotometrically by using the aluminium chloride colorimetric assay. The results showed that the family Mimosaceae is the richest source of phenolics (Acacia nilotica: 80.63 mg gallic acid equivalents, Acacia catechu: 78.12 mg gallic acid equivalents, Albizia lebbeck: 66.23 mg gallic acid equivalents). The highest total flavonoid content was revealed in Senna tora, which belongs to the family Caesalpiniaceae. The present study also shows the ratio of flavonoids to phenolics in each sample for their specificity. PMID:23439764

  7. Organ doses from radionuclides on the ground. Part I. Simple time dependences

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jacob, P.; Paretzke, H.G.; Rosenbaum, H.

    1988-06-01

    Organ dose equivalents of the mathematical anthropomorphic phantoms ADAM and EVA for photon exposures from plane sources on the ground have been calculated by Monte Carlo photon transport codes and tabulated in this article. The calculation takes into account the air-ground interface and a typical surface roughness, the energy and angular dependence of the photon fluence impinging on the phantom, and the time dependence of the contributions from daughter nuclides. Results are up to 35% higher than data reported in the literature for important radionuclides. This manuscript deals with radionuclides for which the time dependence of dose equivalent rates and dose equivalents may be approximated by a simple exponential. A companion manuscript treats radionuclides with non-trivial time dependences.

  8. Modelling nonlinearity in piezoceramic transducers: From equations to nonlinear equivalent circuits.

    PubMed

    Parenthoine, D; Tran-Huu-Hue, L-P; Haumesser, L; Vander Meulen, F; Lematre, M; Lethiecq, M

    2011-02-01

    Quadratic nonlinear equations of a piezoelectric element under the assumptions of 1D vibration and weak nonlinearity are derived by the perturbation theory. It is shown that the nonlinear response can be represented by controlled sources that are added to the classical hexapole used to model piezoelectric ultrasonic transducers. As a consequence, equivalent electrical circuits can be used to predict the nonlinear response of a transducer taking into account the acoustic loads on the rear and front faces. A generalisation of nonlinear equivalent electrical circuits to cases including passive layers and propagation media is then proposed. Experimental results, in terms of second harmonic generation, on a coupled resonator are compared to theoretical calculations from the proposed model. Copyright © 2010 Elsevier B.V. All rights reserved.

  9. Design of HIFU transducers for generating specified nonlinear ultrasound fields

    PubMed Central

    Rosnitskiy, Pavel B.; Yuldashev, Petr V.; Sapozhnikov, Oleg A.; Maxwell, Adam; Kreider, Wayne; Bailey, Michael R.; Khokhlova, Vera A.

    2016-01-01

    Various clinical applications of high intensity focused ultrasound (HIFU) have different requirements for the pressure levels and degree of nonlinear waveform distortion at the focus. The goal of this work was to determine transducer design parameters that produce either a specified shock amplitude in the focal waveform or specified peak pressures while still maintaining quasilinear conditions at the focus. Multi-parametric nonlinear modeling based on the KZK equation with an equivalent source boundary condition was employed. Peak pressures, shock amplitudes at the focus, and corresponding source outputs were determined for different transducer geometries and levels of nonlinear distortion. Results are presented in terms of the parameters of an equivalent single-element, spherically shaped transducer. The accuracy of the method and its applicability to cases of strongly focused transducers were validated by comparing the KZK modeling data with measurements and nonlinear full-diffraction simulations for a single-element source and arrays with 7 and 256 elements. The results provide look-up data for evaluating nonlinear distortions at the focus of existing therapeutic systems as well as for guiding the design of new transducers that generate specified nonlinear fields. PMID:27775904

  10. Equivalent magnetic vector potential model for low-frequency magnetic exposure assessment

    NASA Astrophysics Data System (ADS)

    Diao, Y. L.; Sun, W. N.; He, Y. Q.; Leung, S. W.; Siu, Y. M.

    2017-10-01

    In this paper, a novel source model based on a magnetic vector potential for the assessment of induced electric field strength in a human body exposed to the low-frequency (LF) magnetic field of an electrical appliance is presented. The construction of the vector potential model requires only a single-component magnetic field to be measured close to the appliance under test, hence relieving considerable practical measurement effort—the radial basis functions (RBFs) are adopted for the interpolation of discrete measurements; the magnetic vector potential model can then be directly constructed by summing a set of simple algebraic functions of RBF parameters. The vector potentials are then incorporated into numerical calculations as the equivalent source for evaluations of the induced electric field in the human body model. The accuracy and effectiveness of the proposed model are demonstrated by comparing the induced electric field in a human model to that of the full-wave simulation. This study presents a simple and effective approach for modelling the LF magnetic source. The result of this study could simplify the compliance test procedure for assessing an electrical appliance regarding LF magnetic exposure.
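
The interpolation step described above can be sketched with Gaussian RBFs; the kernel choice, shape parameter, and field samples below are assumptions for illustration (the abstract states only that RBFs are used):

```python
import numpy as np

def rbf_fit(points, values, eps=50.0):
    """Solve for Gaussian RBF weights that interpolate scattered samples.
    points: (n, d) measurement locations; values: (n,) field component."""
    d2 = np.sum((points[:, None, :] - points[None, :, :]) ** 2, axis=-1)
    return np.linalg.solve(np.exp(-eps * d2), values)

def rbf_eval(points, weights, query, eps=50.0):
    """Evaluate the fitted interpolant at query locations."""
    d2 = np.sum((query[:, None, :] - points[None, :, :]) ** 2, axis=-1)
    return np.exp(-eps * d2) @ weights

# Hypothetical single-component field samples along a line near an appliance.
pts = np.array([[0.0], [0.1], [0.2], [0.3]])
vals = np.array([1.0, 0.8, 0.5, 0.3])
w = rbf_fit(pts, vals)
recon = rbf_eval(pts, w, pts)  # reproduces the samples to floating-point accuracy
```

Once the discrete measurements are interpolated this way, each basis function contributes a simple algebraic term, which is what makes the summed vector-potential construction in the paper direct.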

  11. Analysis of neutron and gamma-ray streaming along the maze of NRCAM thallium production target room.

    PubMed

    Raisali, G; Hajiloo, N; Hamidi, S; Aslani, G

    2006-08-01

    The shielding performance of a thallium-203 production target room has been investigated in this work. Neutron and gamma-ray equivalent dose rates at various points of the maze are calculated by simulating the transport of streaming neutrons and photons using the Monte Carlo method. To determine the neutron and gamma-ray source intensities and their energy spectra, we have applied the SRIM 2003 and ALICE91 computer codes to the Tl target and its Cu substrate for a 145 microA beam of 28.5 MeV protons. The MCNP/4C code has been applied with the neutron source term in mode n p to consider both prompt neutrons and secondary gamma-rays. The code is then applied with the prompt gamma-rays as the source term. The neutron-flux energy spectrum and the equivalent dose rates for neutrons and gamma-rays at various positions in the maze have been calculated. The deviation between calculated and measured dose values along the maze is found to be less than 20%.

  12. The Prediction of Scattered Broadband Shock-Associated Noise

    NASA Technical Reports Server (NTRS)

    Miller, Steven A. E.

    2015-01-01

    A mathematical model is developed for the prediction of scattered broadband shock-associated noise. Model arguments are dependent on the vector Green's function of the linearized Euler equations, steady Reynolds-averaged Navier-Stokes solutions, and the two-point cross-correlation of the equivalent source. The equivalent source is dependent on steady Reynolds-averaged Navier-Stokes solutions of the jet flow, that capture the nozzle geometry and airframe surface. Contours of the time-averaged streamwise velocity component and turbulent kinetic energy are examined with varying airframe position relative to the nozzle exit. Propagation effects are incorporated by approximating the vector Green's function of the linearized Euler equations. This approximation involves the use of ray theory and an assumption that broadband shock-associated noise is relatively unaffected by the refraction of the jet shear layer. A non-dimensional parameter is proposed that quantifies the changes of the broadband shock-associated noise source with varying jet operating condition and airframe position. Scattered broadband shock-associated noise possesses a second set of broadband lobes that are due to the effect of scattering. Presented predictions demonstrate relatively good agreement compared to a wide variety of measurements.

  13. Equivalent magnetic vector potential model for low-frequency magnetic exposure assessment.

    PubMed

    Diao, Y L; Sun, W N; He, Y Q; Leung, S W; Siu, Y M

    2017-09-21

    In this paper, a novel source model based on a magnetic vector potential for the assessment of induced electric field strength in a human body exposed to the low-frequency (LF) magnetic field of an electrical appliance is presented. The construction of the vector potential model requires only a single-component magnetic field to be measured close to the appliance under test, hence relieving considerable practical measurement effort-the radial basis functions (RBFs) are adopted for the interpolation of discrete measurements; the magnetic vector potential model can then be directly constructed by summing a set of simple algebraic functions of RBF parameters. The vector potentials are then incorporated into numerical calculations as the equivalent source for evaluations of the induced electric field in the human body model. The accuracy and effectiveness of the proposed model are demonstrated by comparing the induced electric field in a human model to that of the full-wave simulation. This study presents a simple and effective approach for modelling the LF magnetic source. The result of this study could simplify the compliance test procedure for assessing an electrical appliance regarding LF magnetic exposure.

  14. Spatial sound field synthesis and upmixing based on the equivalent source method.

    PubMed

    Bai, Mingsian R; Hsu, Hoshen; Wen, Jheng-Ciang

    2014-01-01

    Given a scarce number of recorded signals, spatial sound field synthesis with an extended sweet spot is a challenging problem in acoustic array signal processing. To address the problem, a synthesis and upmixing approach inspired by the equivalent source method (ESM) is proposed. The synthesis procedure is based on the pressure signals recorded by a microphone array and requires no source model. The array geometry can also be arbitrary. Four upmixing strategies are adopted to enhance the resolution of the reproduced sound field when there are more channels of loudspeakers than microphones. Multi-channel inverse filtering with regularization is exploited to deal with the ill-posedness of the reconstruction process. The distance between the microphone and loudspeaker arrays is optimized to achieve the best synthesis quality. To validate the proposed system, numerical simulations and subjective listening experiments are performed. The results demonstrated that all upmixing methods improved the quality of the reproduced target sound field over the original reproduction. In particular, the underdetermined ESM interpolation method yielded the best spatial sound field synthesis in terms of reproduction error, timbral quality, and spatial quality.
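
At each frequency, regularized inverse filtering of this kind amounts to a Tikhonov-regularized least-squares solve for the equivalent source strengths. A minimal sketch with a hypothetical transfer matrix and source vector (real systems build G per frequency bin from measured or modeled transfer functions):

```python
import numpy as np

def esm_strengths(G, p, lam=1e-3):
    """Regularized least squares for equivalent source strengths q:
    minimize ||G q - p||^2 + lam * ||q||^2, where G maps source strengths
    to microphone pressures and p holds the recorded pressures."""
    n = G.shape[1]
    return np.linalg.solve(G.conj().T @ G + lam * np.eye(n), G.conj().T @ p)

# Toy overdetermined setup: 3 microphones, 2 equivalent sources.
rng = np.random.default_rng(0)
G = rng.standard_normal((3, 2))
q_true = np.array([1.0, -0.5])
p = G @ q_true                      # noiseless "recording"
q = esm_strengths(G, p, lam=1e-8)   # recovers q_true up to regularization bias
```

The regularization weight `lam` trades reconstruction accuracy against sensitivity to noise, which is exactly the ill-posedness trade-off the abstract refers to.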

  15. State-space reduction and equivalence class sampling for a molecular self-assembly model.

    PubMed

    Packwood, Daniel M; Han, Patrick; Hitosugi, Taro

    2016-07-01

    Direct simulation of a model with a large state space will generate enormous volumes of data, much of which is not relevant to the questions under study. In this paper, we consider a molecular self-assembly model as a typical example of a large state-space model, and present a method for selectively retrieving 'target information' from this model. This method partitions the state space into equivalence classes, as identified by an appropriate equivalence relation. The set of equivalence classes H, which serves as a reduced state space, contains none of the superfluous information of the original model. After construction and characterization of a Markov chain with state space H, the target information is efficiently retrieved via Markov chain Monte Carlo sampling. This approach represents a new breed of simulation techniques which are highly optimized for studying molecular self-assembly and, moreover, serves as a valuable guideline for analysis of other large state-space models.
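
The reduction step can be illustrated with a toy model; the ring geometry and rotation equivalence below are hypothetical stand-ins for the paper's model-specific equivalence relation:

```python
from itertools import product

def reduce_states(states, canonical):
    """Partition a state space into equivalence classes.
    `canonical` maps each state to a canonical representative; two states
    are equivalent exactly when they share a representative."""
    classes = {}
    for s in states:
        classes.setdefault(canonical(s), []).append(s)
    return classes

# Toy model: configurations are 4-site rings of molecule types A/B,
# taken to be equivalent up to rotation of the ring.
states = list(product("AB", repeat=4))

def canon(s):
    # Lexicographically smallest rotation serves as the representative.
    return min(s[i:] + s[:i] for i in range(len(s)))

classes = reduce_states(states, canon)  # 16 raw states collapse to 6 classes
```

The reduced space `classes` is what a Markov chain Monte Carlo sampler would then operate on, never visiting the redundant copies within each class.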

  16. High-resolution hot-film measurement of surface heat flux to an impinging jet

    NASA Astrophysics Data System (ADS)

    O'Donovan, T. S.; Persoons, T.; Murray, D. B.

    2011-10-01

    To investigate the complex coupling between surface heat transfer and local fluid velocity in convective heat transfer, advanced techniques are required to measure the surface heat flux at high spatial and temporal resolution. Several established flow velocity techniques such as laser Doppler anemometry, particle image velocimetry and hot wire anemometry can measure fluid velocities at high spatial resolution (µm) and have a high-frequency response (up to 100 kHz) characteristic. Equivalent advanced surface heat transfer measurement techniques, however, are not available; even the latest advances in high speed thermal imaging do not offer equivalent data capture rates. The current research presents a method of measuring point surface heat flux with a hot film that is flush mounted on a heated flat surface. The film works in conjunction with a constant temperature anemometer which has a bandwidth of 100 kHz. The bandwidth of this technique therefore is likely to be in excess of more established surface heat flux measurement techniques. Although the frequency response of the sensor is not reported here, it is expected to be significantly less than 100 kHz due to its physical size and capacitance. To demonstrate the efficacy of the technique, a cooling impinging air jet is directed at the heated surface, and the power required to maintain the hot-film temperature is related to the local heat flux to the fluid air flow. The technique is validated experimentally using a more established surface heat flux measurement technique. The thermal performance of the sensor is also investigated numerically. It has been shown that, with some limitations, the measurement technique accurately measures the surface heat transfer to an impinging air jet with improved spatial resolution for a wide range of experimental parameters.

  17. A simple calculation method for determination of equivalent square field

    PubMed Central

    Shafiei, Seyed Ali; Hasanzadeh, Hadi; Shafiei, Seyed Ahmad

    2012-01-01

    Determination of the equivalent square fields for rectangular and shielded fields is of great importance in radiotherapy centers and treatment planning software, and is accomplished using standard tables and empirical formulas. The goal of this paper is to present a formula, based on an analysis of scatter reduction due to the inverse square law, for obtaining the equivalent field. Tables based on experimental data are published by agencies such as the ICRU (International Commission on Radiation Units and Measurements), but there also exist mathematical formulas that yield the equivalent square field of an irregular rectangular field and are used extensively in computational techniques for dose determination. These approaches, however, lead to complicated and time-consuming formulas, which motivated the current study. In this work, considering the portion of scattered radiation in the absorbed dose at a point of measurement, a numerical formula was obtained, from which a simple formula for calculating the equivalent square field was developed. Using polar coordinates and the inverse square law leads to a simple formula for calculation of the equivalent field. The presented method is an analytical approach with which one can estimate the equivalent square field of a rectangular field, and it may be used for a shielded field or an off-axis point. Moreover, one can calculate the equivalent field of a rectangular field to a good approximation using the concept of scatter reduction with the inverse square law. This method may be useful in computing the Percentage Depth Dose and Tissue-Phantom Ratio, which are extensively used in treatment planning. PMID:22557801
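
For comparison, the standard area-to-perimeter (4A/P) rule of thumb gives the equivalent square side of an a x b field as 2ab/(a+b). The sketch below implements this common baseline, not the scatter-based formula derived in the paper:

```python
def equivalent_square_side(a, b):
    """Area/perimeter (4A/P) rule of thumb for the side of the square
    field equivalent to an a x b rectangular field: 4ab/(2a+2b) = 2ab/(a+b)."""
    return 2.0 * a * b / (a + b)

# A 10 cm x 20 cm field behaves roughly like a 13.3 cm square field.
side = equivalent_square_side(10.0, 20.0)
```

By construction the rule is exact for a square field (a = b returns a), which is the sanity check any refined equivalent-square formula must also satisfy.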

  18. Interferometric observations of an artificial satellite.

    PubMed

    Preston, R A; Ergas, R; Hinteregger, H F; Knight, C A; Robertson, D S; Shapiro, I I; Whitney, A R; Rogers, A E; Clark, T A

    1972-10-27

    Very-long-baseline interferometric observations of radio signals from the TACSAT synchronous satellite, even though extending over only 7 hours, have enabled an excellent orbit to be deduced. Precision in differenced delay and delay-rate measurements reached 0.15 nanosecond (approximately 5 centimeters in equivalent differenced distance) and 0.05 picosecond per second (approximately 0.002 centimeter per second in equivalent differenced velocity), respectively. The results from this initial three-station experiment demonstrate the feasibility of using the method for accurate satellite tracking and for geodesy. Comparisons are made with other techniques.

  19. Equivalence of time-multiplexed and frequency-multiplexed signals in digital communications.

    NASA Technical Reports Server (NTRS)

    Timor, U.

    1972-01-01

    In comparing different techniques for multiplexing N binary data signals into a single channel, time-division multiplexing (TDM) is known to have a theoretical efficiency of 100 percent (neglecting sync power) and thus seems to outperform frequency-division multiplexing (FDM) systems. By considering more general FDM systems, we show that TDM and FDM are in fact equivalent, both having an efficiency of 100 percent. The difference between the systems lies in the multiplexing and demultiplexing subsystems, not in the performance or in the generated waveforms.

  20. Theory and experiment in gravitational physics

    NASA Technical Reports Server (NTRS)

    Will, C. M.

    1981-01-01

    New technological advances have made it feasible to conduct measurements with precision levels which are suitable for experimental tests of the theory of general relativity. This book has been designed to fill the need for a complete treatment of techniques for analyzing gravitation theory and experiment. The Einstein equivalence principle and the foundations of gravitation theory are considered, taking into account the Dicke framework, basic criteria for the viability of a gravitation theory, experimental tests of the Einstein equivalence principle, Schiff's conjecture, and a model theory devised by Lightman and Lee (1973). Gravitation as a geometric phenomenon is considered along with the parametrized post-Newtonian formalism, the classical tests, tests of the strong equivalence principle, gravitational radiation as a tool for testing relativistic gravity, the binary pulsar, and cosmological tests.

  1. Generalized serial search code acquisition - The equivalent circular state diagram approach

    NASA Technical Reports Server (NTRS)

    Polydoros, A.; Simon, M. K.

    1984-01-01

    A transform-domain method for deriving the generating function of the acquisition process resulting from an arbitrary serial search strategy is presented. The method relies on equivalent circular state diagrams, uses Mason's formula from flow-graph theory, and employs a minimum number of required parameters. The transform-domain approach is briefly described and the concept of equivalent circular state diagrams is introduced and exploited to derive the generating function and resulting mean acquisition time for three particular cases of interest, the continuous/center Z search, the broken/center Z search, and the expanding window search. An optimization of the latter technique is performed whereby the number of partial windows which minimizes the mean acquisition time is determined. The numerical results satisfy certain intuitive predictions and provide useful design guidelines for such systems.
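
    The closed-form generating-function results are not reproduced in the abstract. As a hedged sketch of the quantity being optimized, the Monte Carlo simulation below estimates the mean acquisition time for the simplest case, a straight-line serial search with false alarms ignored; the parameters q, pd and dwell are illustrative, not values from the paper:

```python
import random

def mean_acq_time(q, pd, dwell=1.0, trials=20000, seed=0):
    """Monte Carlo mean acquisition time for a plain serial search over
    q code-phase cells (false alarms ignored): each dwell tests one cell;
    the single correct cell is detected with probability pd per visit,
    otherwise the sweep wraps around and continues."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        target = rng.randrange(q)      # unknown code phase, uniform prior
        t, cell = 0.0, 0
        while True:
            t += dwell
            if cell == target and rng.random() < pd:
                break
            cell = (cell + 1) % q      # wrap around and keep sweeping
        total += t
    return total / trials
```

    For pd = 1 the mean reduces to (q + 1)/2 dwells; the paper's equivalent-circular-state-diagram method yields such means analytically, including for the Z-search and expanding-window strategies.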

  2. Theory and experiment in gravitational physics

    NASA Astrophysics Data System (ADS)

    Will, C. M.

    New technological advances have made it feasible to conduct measurements with precision levels which are suitable for experimental tests of the theory of general relativity. This book has been designed to fill the need for a complete treatment of techniques for analyzing gravitation theory and experiment. The Einstein equivalence principle and the foundations of gravitation theory are considered, taking into account the Dicke framework, basic criteria for the viability of a gravitation theory, experimental tests of the Einstein equivalence principle, Schiff's conjecture, and a model theory devised by Lightman and Lee (1973). Gravitation as a geometric phenomenon is considered along with the parametrized post-Newtonian formalism, the classical tests, tests of the strong equivalence principle, gravitational radiation as a tool for testing relativistic gravity, the binary pulsar, and cosmological tests.

  3. Flexural bending of the Zagros foreland basin

    NASA Astrophysics Data System (ADS)

    Pirouz, Mortaza; Avouac, Jean-Philippe; Gualandi, Adriano; Hassanzadeh, Jamshid; Sternai, Pietro

    2017-09-01

    We constrain and model the geometry of the Zagros foreland to assess the equivalent elastic thickness of the northern edge of the Arabian plate and the loads that have originated from the Arabia-Eurasia collision. The Oligo-Miocene Asmari formation and its equivalents in Iraq and Syria are used to estimate the post-collisional subsidence, as they separate passive-margin sediments from the younger foreland deposits. The depth to these formations is obtained by synthesizing a large database of well logs, seismic profiles and structural sections from the Mesopotamian basin and the Persian Gulf. The foreland depth varies along strike of the Zagros wedge between 1 and 6 km; the foreland is deepest beneath the Dezful embayment, in southwest Iran, and becomes shallower towards both ends. We investigate how the geometry of the foreland relates to loading by the range topography using simple flexural models. Deflection of the Arabian plate is modelled using a point-load distribution and a convolution technique. The results show that the foreland depth is well predicted by a flexural model which assumes loading by the basin sedimentary fill and the thickened crust of the Zagros. The model also predicts a Moho depth consistent with free-air anomalies over the foreland and the Zagros wedge. The equivalent elastic thickness of the flexed Arabian lithosphere is estimated to be ca. 50 km. We conclude that other sources of lithospheric loading, whether related to density variations (e.g., a possible lithospheric root) or of dynamic origin (e.g., sublithospheric mantle flow or lithospheric buckling), have a negligible influence on the foreland geometry, Moho depth and topography of the Zagros. We calculate the shortening across the Zagros assuming conservation of crustal mass during deformation, trapping of all the sediments eroded from the range in the foreland, and an initial crustal thickness of 38 km. This calculation implies a minimum of 126 ± 18 km of crustal shortening due to ophiolite obduction and post-collisional shortening.
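
    The point-load-and-convolution approach can be sketched in one dimension with the standard line-load flexure Green's function. This is an illustrative assumption, not the authors' implementation, and all parameter values except the roughly 50 km equivalent elastic thickness quoted in the abstract are invented for the example:

```python
import numpy as np

g = 9.81
E, nu = 70e9, 0.25            # Young's modulus, Poisson ratio (assumed)
Te = 50e3                     # equivalent elastic thickness ~50 km (abstract)
drho = 600.0                  # mantle-minus-infill density contrast (assumed)
D = E * Te**3 / (12 * (1 - nu**2))     # flexural rigidity
alpha = (4 * D / (drho * g)) ** 0.25   # flexural parameter (m)

x = np.arange(-1500e3, 1500e3, 5e3)    # 1-D profile, 5 km spacing
# Topographic load: a 200 km wide, 3 km high range of density 2700 kg/m^3.
load = np.where(np.abs(x) < 100e3, 2700.0 * g * 3000.0, 0.0)

# Line-load Green's function: w(x) = V0/(2*drho*g*alpha) * e^{-|x|/a}(cos+sin)
green = np.exp(-np.abs(x) / alpha) * (np.cos(np.abs(x) / alpha)
                                      + np.sin(np.abs(x) / alpha))
green *= (x[1] - x[0]) / (2 * drho * g * alpha)

w = np.convolve(load, green, mode="same")   # deflection, positive down (m)
```

    With these assumed numbers the flexural parameter comes out near 150 km and the maximum deflection is a few kilometres, comparable in order of magnitude to the 1-6 km foreland depths reported in the abstract.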

  4. Discontinuities Characteristics of the Upper Jurassic Arab-D Reservoir Equivalent Tight Carbonates Outcrops, Central Saudi Arabia

    NASA Astrophysics Data System (ADS)

    Abdlmutalib, Ammar; Abdullatif, Osman

    2017-04-01

    Jurassic carbonates represent an important part of the Mesozoic petroleum system in the Arabian Peninsula in terms of source rocks, reservoirs, and seals. Jurassic outcrop equivalents are well exposed in central Saudi Arabia, which allows examining and measuring different scales of geological heterogeneity that are difficult to capture in the subsurface owing to limitations of data and techniques. Identifying carbonate discontinuity characteristics at outcrops might help to understand and predict their properties and behavior in the subsurface. The main objective of this study is to identify the lithofacies and discontinuity properties of the Upper Jurassic carbonates of the Arab-D member and the Jubaila Formation (Arab-D reservoir) based on their outcrop-equivalent strata in central Saudi Arabia. The sedimentologic analysis revealed several lithofacies types that vary in thickness, abundance, cyclicity, and vertical and lateral stacking patterns. The carbonate lithofacies include mudstone, wackestone, packstone, and grainstone, indicating deposition within tidal-flat, skeletal-bank, and shallow to deep lagoonal paleoenvironmental settings. Field investigation of the outcrops revealed two types of discontinuities within the Arab-D member and upper Jubaila: depositional discontinuities and tectonic fractures, which vary in orientation, intensity, spacing, aperture and displacement. Both regional and local controls appear to have affected fracture development within these carbonate rocks. On the regional scale, the fractures seem to be structurally controlled by the Central Arabian Graben System, which affected central Saudi Arabia; locally, at the outcrop scale, stratigraphic, depositional and diagenetic controls appear to have influenced fracture development and intensity. The fracture sets and orientations identified at outcrop are similar to those revealed in the Upper Jurassic carbonates in the subsurface, suggesting interrelationships. Therefore, integrating the discontinuity characteristics revealed from the Arab-D outcrop with subsurface data might help to understand and predict discontinuity properties and patterns of the Arab-D reservoir in the subsurface.

  5. Dose-current discharge correlation analysis in a Mather type Plasma Focus device for medical applications

    NASA Astrophysics Data System (ADS)

    Sumini, M.; Mostacci, D.; Tartari, A.; Mazza, A.; Cucchi, G.; Isolan, L.; Buontempo, F.; Zironi, I.; Castellani, G.

    2017-11-01

    In a Plasma Focus device the plasma collapses into the pinch, where it reaches thermonuclear conditions for a few tens of nanoseconds, becoming a multi-radiation source. The nature of the radiation generated depends on the gas filling the chamber and the device working parameters. The self-collimated electron beam generated in the backward direction with respect to the plasma motion is one of the main radiation sources of interest, including for medical applications. The electron beam may be directed against a high-Z target to produce an X-ray beam. This technique offers an ultra-high dose rate source of X-rays, able to deliver a massive dose during the pinch (up to 1 Gy per discharge for the PFMA-3 test device), as measured with EBT3 Gafchromic© film tissue-equivalent dosimeters. Given the stochastic behavior of the discharge process, a reliable on-line estimate of the delivered dose is a very challenging task, which has so far prevented systematic application of this potentially interesting therapy device. This work presents an approach to linking the dose registered by the EBT3 Gafchromic© films with the information contained in the signal recorded during the current discharge. Processing the signal with the Wigner-Ville distribution yields a spectrogram displaying intensity at various frequency scales, identifying the band of frequencies representative of the pinch events and defining patterns correlated with the dose.

  6. Data Release Report for Source Physics Experiments 2 and 3 (SPE-2 and SPE-3) Nevada National Security Site

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Townsend, Margaret; Obi, Curtis

    2015-01-26

    The second Source Physics Experiment shot (SPE-2) was conducted in Nevada on October 25, 2011, at 1900:00.011623 Greenwich Mean Time (GMT). The explosive source was 997 kilograms (kg) trinitrotoluene (TNT) equivalent of sensitized heavy ammonium fuel oil (SHANFO) detonated at a depth of 45.7 meters (m). The third Source Physics Experiment shot (SPE-3) was conducted in Nevada on July 24, 2012, at 1800:00.44835 GMT. The explosive source was 905 kg TNT equivalent of SHANFO detonated at a depth of 45.8 m. Both shots were recorded by an extensive set of instrumentation that included sensors at both near-field (less than 100 m) and far-field (100 m or greater) distances. The near-field instruments consisted of three-component accelerometers deployed in boreholes at 15, 46, and 55 m depths around the shot and a set of single-component vertical accelerometers on the surface. The far-field network was composed of a variety of seismic and acoustic sensors, including short-period geophones, broadband seismometers, three-component accelerometers, and rotational seismometers at distances of 100 m to 25 kilometers. This report coincides with the release of these data for analysts and organizations that are not participants in this program. This report describes the second and third Source Physics Experiment shots and the various types of near-field and far-field data that are available.

  7. System equivalent model mixing

    NASA Astrophysics Data System (ADS)

    Klaassen, Steven W. B.; van der Seijs, Maarten V.; de Klerk, Dennis

    2018-05-01

    This paper introduces SEMM, a method based on Frequency Based Substructuring (FBS) techniques that enables the construction of hybrid dynamic models. With System Equivalent Model Mixing (SEMM), frequency-based models, of either numerical or experimental nature, can be mixed to form a hybrid model. This model follows the dynamic behaviour of a predefined weighted master model. A large variety of applications can be thought of, such as the DoF-space expansion of relatively small experimental models using numerical models, or the blending of different models in the frequency spectrum. SEMM is outlined, both mathematically and conceptually, based on a notation commonly used in FBS. A critical physical interpretation of the theory is provided next, along with a comparison to similar techniques, namely DoF-expansion techniques. SEMM's concept is further illustrated by means of a numerical example. It will become apparent that the basic method of SEMM has some shortcomings which warrant a few extensions to the method. One of the main applications is tested in a practical case, performed on a validated benchmark structure, which emphasizes the practicality of the method.

  8. Study of G-S/D underlap for enhanced analog performance and RF/circuit analysis of UTB InAs-OI-Si MOSFET using NQS small signal model

    NASA Astrophysics Data System (ADS)

    Maity, Subir Kumar; Pandit, Soumya

    2017-01-01

    InGaAs (and its variants) appears to be a promising channel material for high-performance, low-power scaled CMOS applications due to its excellent carrier-transport properties. However, MOS transistors made of this material suffer from poor electrostatic integrity. In this work, we consider an underlap ultra-thin-body (UTB) InAs-on-insulator n-channel MOS transistor and study the effect of varying the gate-source/drain (G-S/D) underlap length on the analog performance of the device with the help of technology computer-aided design (TCAD) simulation, calibrated against a Schrödinger-Poisson solver and experimental results. The underlap technique improves the gate electrostatic integrity, which in turn improves the analog performance. We develop a non-quasi-static (NQS) small-signal equivalent-circuit model of the device, which is used to study the RF performance. With increasing underlap length, the unity-gain cut-off frequency degrades, while the maximum oscillation frequency improves beyond a certain value of the underlap length. We further study the gain-frequency response of a common-source amplifier using the NQS model through SPICE simulation and observe that both the voltage gain and the gain bandwidth improve.

  9. Low frequency acoustic waves from explosive sources in the atmosphere

    NASA Astrophysics Data System (ADS)

    Millet, Christophe; Robinet, Jean-Christophe; Roblin, Camille; Gloerfelt, Xavier

    2006-11-01

    In this study, a perturbative formulation of the nonlinear Euler equations is used to compute the pressure variation of low-frequency acoustic waves from explosive sources in real atmospheres. Based on a Dispersion-Relation-Preserving (DRP) finite-difference scheme, the discretization provides good properties for both sound generation and long-range sound propagation over a variety of spatial atmospheric scales. It also ensures that there is no wave-mode coupling in the numerical simulation. The background flow is obtained by matching the comprehensive empirical global model of horizontal winds HWM-93 (and MSISE-90 for the temperature profile) with meteorological reanalyses of the lower atmosphere. Benchmark calculations representing cases with downward and upward refraction (including shadow zones), ducted propagation, and generation of acoustic waves from low-speed shear layers are considered for validation. For all cases, results show very good agreement with analytical solutions, when available, and with other standard approaches such as ray tracing and the normal-mode technique. Comparison of the calculations with experimental data from the high-explosive "Misty Picture" test, which provided the scaled equivalent airblast of an 8 kt nuclear device (on May 14, 1987), is also considered. It is found that instability waves develop less than one hour after the wavefront generated by the detonation passes.

  10. Adjustment of geochemical background by robust multivariate statistics

    USGS Publications Warehouse

    Zhou, D.

    1985-01-01

    Conventional analyses of exploration geochemical data assume that the background is a constant or slowly changing value, equivalent to a plane or a smoothly curved surface. However, it is better to regard the geochemical background as a rugged surface, varying with changes in geology and environment. This rugged surface can be estimated from observed geological, geochemical and environmental properties by using multivariate statistics. A method of background adjustment was developed and applied to groundwater and stream-sediment reconnaissance data collected from the Hot Springs Quadrangle, South Dakota, as part of the National Uranium Resource Evaluation (NURE) program. Source-rock lithology appears to be a dominant factor controlling the chemical composition of groundwater or stream sediments. The most efficacious adjustment procedure is to regress uranium concentration on selected geochemical and environmental variables for each lithologic unit, and then to delineate anomalies by a common threshold set as a multiple of the standard deviation of the combined residuals. Robust versions of regression and RQ-mode principal components analysis were used rather than ordinary techniques to guard against distortion caused by outliers. Anomalies delineated by this background-adjustment procedure correspond with uranium prospects much better than do anomalies delineated by conventional procedures. The procedure should be applicable to geochemical exploration at different scales for other metals. © 1985.
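
    The per-lithology regression-and-threshold procedure can be sketched on synthetic data (not NURE values). For brevity, ordinary least squares stands in for the paper's robust regression; the covariates, unit count, and 2.5-sigma threshold are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def background_residuals(u, covars):
    """OLS residuals of u regressed on the covariate columns plus an
    intercept (a stand-in for the paper's robust regression)."""
    X = np.column_stack([np.ones(len(u)), covars])
    beta, *_ = np.linalg.lstsq(X, u, rcond=None)
    return u - X @ beta

litho = rng.integers(0, 3, size=300)            # three lithologic units
covars = rng.normal(size=(300, 2))              # e.g. Fe content, organics
u = 1.0 + 0.5 * covars[:, 0] + 0.2 * litho + rng.normal(0.0, 0.1, 300)
u[:5] += 2.0                                    # implant five anomalous samples

resid = np.empty_like(u)
for unit in np.unique(litho):                   # fit the background per unit
    m = litho == unit
    resid[m] = background_residuals(u[m], covars[m])

threshold = 2.5 * resid.std()                   # common multiple of sigma
anomalous = np.flatnonzero(resid > threshold)   # delineated anomalies
```

    Because each unit gets its own fitted background, the unit-to-unit baseline shift is absorbed before the common threshold is applied, which is the essence of the adjustment.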

  11. An Energy Efficient Technique Using Electric Active Shielding for Capacitive Coupling Intra-Body Communication

    PubMed Central

    Ma, Chao; Huang, Zhonghua; Wang, Zhiqi; Zhou, Linxuan; Li, Yinlin

    2017-01-01

    Capacitive coupling intra-body communication (CC-IBC) has become one of the candidates for healthcare sensor networks due to its favorable energy efficiency, transmission rate and security. Under the CC-IBC scheme, some of the electric field emitted from the signal (SIG) electrode of the transmitter couples directly to the ground (GND) electrode, acting equivalently as an internal impedance of the signal source and inducing considerable energy loss. However, none of the previous works have fully studied this problem. In this paper, the underlying theory of the energy loss is investigated and quantitatively evaluated using conventional parameters. Accordingly, a method of electric active shielding is proposed to reduce the displacement current across the SIG-GND electrodes, leading to less power loss. The variation of this loss with frequency and with position on the human body is also considered. The theory was validated by finite-element simulation and experimental measurement. The prototype results show that the receiving power is improved by approximately 5.5 dBm while the total power consumption is at most 9 mW lower using the proposed technique, providing an energy-efficient option in the physical layer for wearable and implantable healthcare sensor networks. PMID:28885546

  12. Development of a computer technique for the prediction of transport aircraft flight profile sonic boom signatures. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Coen, Peter G.

    1991-01-01

    A new computer technique for the analysis of transport aircraft sonic boom signature characteristics was developed. This new technique, based on linear theory methods, combines the previously separate equivalent area and F function development with a signature propagation method using a single geometry description. The new technique was implemented in a stand-alone computer program and was incorporated into an aircraft performance analysis program. Through these implementations, both configuration designers and performance analysts are given new capabilities to rapidly analyze an aircraft's sonic boom characteristics throughout the flight envelope.

  13. Water-level surface in the Chicot equivalent aquifer system in southeastern Louisiana, 2009

    USGS Publications Warehouse

    Tomaszewski, Dan J.

    2011-01-01

    The Chicot equivalent aquifer system is an important source of freshwater in southeastern Louisiana. In 2005, about 47 million gallons per day (Mgal/d) were withdrawn from the Chicot equivalent aquifer system in East Baton Rouge, East Feliciana, Livingston, Tangipahoa, St. Helena, St. Tammany, Washington, and West Feliciana Parishes. Concentrated withdrawals exceeded 5 Mgal/d in Bogalusa, the city of Baton Rouge, and in northwestern East Baton Rouge Parish. In the study area, about 30,000 wells screened in the Chicot equivalent aquifer system were registered with the Louisiana Department of Transportation and Development (LaDOTD). These wells were constructed for public-supply, industry, irrigation, and domestic uses. Most of the wells were registered as domestic-use wells and are small-diameter, low-yielding wells. Total withdrawal from the Chicot equivalent aquifer system for domestic use was estimated to be 12 Mgal/d in 2005. This report documents the 2009 water-level surface of the Chicot equivalent aquifer system in southeastern Louisiana. The report also shows differences in water-level measurements for the years 1991 and 2009 at selected sites. Understanding changes and trends in water levels is important for continued use, planning, and management of groundwater resources. The U.S. Geological Survey, in cooperation with the Louisiana Department of Transportation and Development, conducted this study of the water-level surface of the Chicot equivalent aquifer system as part of an ongoing effort to monitor groundwater levels in aquifers in Louisiana.

  14. 40 CFR 63.56 - Requirements for case-by-case determination of equivalent emission limitations after promulgation...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 63.56 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED... Technology Determinations for Major Sources in Accordance With Clean Air Act Sections, Sections 112(g) and...

  15. Realistic Subsurface Anomaly Discrimination Using Electromagnetic Induction and an SVM Classifier

    NASA Astrophysics Data System (ADS)

    Pablo Fernández, Juan; Shubitidze, Fridon; Shamatava, Irma; Barrowes, Benjamin E.; O'Neill, Kevin

    2010-12-01

    The environmental research program of the United States military has set up blind tests for detection and discrimination of unexploded ordnance. One such test consists of measurements taken with the EM-63 sensor at Camp Sibert, AL. We review the performance on the test of a procedure that combines a field-potential (HAP) method to locate targets, the normalized surface magnetic source (NSMS) model to characterize them, and a support vector machine (SVM) to classify them. The HAP method infers location from the scattered magnetic field and its associated scalar potential, the latter reconstructed using equivalent sources. NSMS replaces the target with an enclosing spheroid of equivalent radial magnetization whose integral it uses as a discriminator. SVM generalizes from empirical evidence and can be adapted for multiclass discrimination using a voting system. Our method identifies all potentially dangerous targets correctly and has a false-alarm rate of about 5%.
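
    The discrimination step can be illustrated with a from-scratch linear SVM trained by the Pegasos sub-gradient method. This is a hedged stand-in for the paper's classifier, not its actual implementation: the single synthetic feature mimics the integrated NSMS amplitude, the data are invented, and a real deployment would use richer features and multiclass voting as described above:

```python
import numpy as np

def pegasos(X, y, lam=0.01, epochs=200, seed=0):
    """Train a linear SVM (hinge loss + L2) with the Pegasos
    stochastic sub-gradient method; y must be +/-1."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(len(y)):
            t += 1
            eta = 1.0 / (lam * t)                # decaying step size
            w *= (1.0 - eta * lam)               # regularization shrink
            if y[i] * (X[i] @ w) < 1:            # margin violation
                w += eta * y[i] * X[i]
    return w

rng = np.random.default_rng(1)
uxo = rng.normal(3.0, 0.4, (40, 1))       # synthetic "NSMS amplitude" feature
clutter = rng.normal(1.0, 0.4, (40, 1))
X = np.vstack([uxo, clutter])
Xb = np.column_stack([X, np.ones(len(X))])  # bias folded in as a feature
y = np.r_[np.ones(40), -np.ones(40)]        # +1 = ordnance, -1 = clutter

w = pegasos(Xb, y)
accuracy = (np.sign(Xb @ w) == y).mean()
```

    On well-separated synthetic amplitudes such as these, the learned boundary lands between the two clusters, mirroring the clean separation the NSMS discriminator provided at Camp Sibert.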

  16. Nutrient non-equivalence: Does restricting high-potassium plant foods help to prevent hyperkalemia in hemodialysis patients?

    PubMed Central

    St-Jules, DE; Goldfarb, DS; Sevick, MA

    2016-01-01

    Hemodialysis patients are often advised to limit their intake of high-potassium foods to help manage hyperkalemia. However, the benefits of this practice are entirely theoretical and not supported by rigorous randomized controlled trials. The hypothesis that potassium restriction is useful is based on the assumption that different sources of dietary potassium are therapeutically equivalent. In fact, animal and plant sources of potassium may differ in their potential to contribute to hyperkalemia. In this commentary, we summarize the historical research basis for limiting high-potassium foods. Ultimately, we conclude that this approach is not evidence-based and may actually present harm to patients. However, given the uncertainty arising from the paucity of conclusive data, we agree that until the appropriate intervention studies are conducted, practitioners should continue to advise restriction of high-potassium foods. PMID:26975777

  17. Iron-rich clay minerals on Mars - Potential sources or sinks for hydrogen and indicators of hydrogen loss over time

    NASA Technical Reports Server (NTRS)

    Burt, D. M.

    1989-01-01

    Although direct evidence is lacking, indirect evidence suggests that iron-rich clay minerals or poorly-ordered chemical equivalents are widespread on the Martian surface. Such clays can act as sources or sinks for hydrogen ('hydrogen sponges'). Ferrous clays can lose hydrogen and ferric clays gain it by the coupled substitution Fe(3+)O(Fe(2+)OH)-1, equivalent to minus atomic H. This 'oxy-clay' substitution involves only proton and electron migration through the crystal structure, and therefore occurs nondestructively and reversibly, at relatively low temperatures. The reversible, low-temperature nature of this reaction contrasts with the irreversible nature of destructive dehydroxylation (H2O loss) suffered by clays heated to high temperatures. In theory, metastable ferric oxy-clays formed by dehydrogenation of ferrous clays over geologic time could, if exposed to water vapor, extract the hydrogen from it, releasing oxygen.

  18. Nutrient Non-equivalence: Does Restricting High-Potassium Plant Foods Help to Prevent Hyperkalemia in Hemodialysis Patients?

    PubMed

    St-Jules, David E; Goldfarb, David S; Sevick, Mary Ann

    2016-09-01

    Hemodialysis patients are often advised to limit their intake of high-potassium foods to help manage hyperkalemia. However, the benefits of this practice are entirely theoretical and not supported by rigorous randomized controlled trials. The hypothesis that potassium restriction is useful is based on the assumption that different sources of dietary potassium are therapeutically equivalent. In fact, animal and plant sources of potassium may differ in their potential to contribute to hyperkalemia. In this commentary, we summarize the historical research basis for limiting high-potassium foods. Ultimately, we conclude that this approach is not evidence-based and may actually present harm to patients. However, given the uncertainty arising from the paucity of conclusive data, we agree that until the appropriate intervention studies are conducted, practitioners should continue to advise restriction of high-potassium foods. Copyright © 2016 National Kidney Foundation, Inc. Published by Elsevier Inc. All rights reserved.

  19. Simulation of 100-300 GHz solid-state harmonic sources

    NASA Technical Reports Server (NTRS)

    Zybura, Michael F.; Jones, J. Robert; Jones, Stephen H.; Tait, Gregory B.

    1995-01-01

    Accurate and efficient simulations of the large-signal time-dependent characteristics of second-harmonic Transferred Electron Oscillators (TEO's) and Heterostructure Barrier Varactor (HBV) frequency triplers have been obtained. This is accomplished by using a novel and efficient harmonic-balance circuit analysis technique which facilitates the integration of physics-based hydrodynamic device simulators. The integrated hydrodynamic device/harmonic-balance circuit simulators allow TEO and HBV circuits to be co-designed from both a device and a circuit point of view. Comparisons have been made with published experimental data for both TEO's and HBV's. For TEO's, excellent correlation has been obtained at 140 GHz and 188 GHz in second-harmonic operation. Excellent correlation has also been obtained for HBV frequency triplers operating near 200 GHz. For HBV's, both a lumped quasi-static equivalent circuit model and the hydrodynamic device simulator have been linked to the harmonic-balance circuit simulator. This comparison illustrates the importance of representing active devices with physics-based numerical device models rather than analytical device models.

  20. Magnetoelectric gradiometer with enhanced vibration rejection efficiency under H-field modulation

    NASA Astrophysics Data System (ADS)

    Xu, Junran; Zhuang, Xin; Leung, Chung Ming; Staruch, Margo; Finkel, Peter; Li, Jiefang; Viehland, D.

    2018-03-01

    A magnetoelectric (ME) gradiometer consisting of two Metglas/Pb(Zr,Ti)O3 fiber-based sensors has been developed. The equivalent magnetic noise of both sensors was first determined to be about 60 pT/√Hz while using an H-field modulation technique. The common mode rejection ratio of a gradiometer based on these two sensors was determined to be 74. The gradiometer response curve was then measured, which provided the dependence of the gradiometer output as a function of the source-gradiometer-normalized distance. Investigations in the presence of vibration noise revealed that a ME gradiometer consisting of two ME magnetometers working under H-field modulation was capable of significant vibration rejection. The results were compared to similar studies of ME gradiometers operated in a passive working mode. Our findings demonstrate that this active gradiometer has a good vibration rejection capability in the presence of both magnetic signals and vibration noise/interferences by using two magnetoelectric sensors operated under H-field modulation.
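
    The common-mode rejection idea behind the gradiometer can be shown with a toy numerical example (synthetic signals, not the paper's measurements): interference that appears identically on both sensors cancels in the difference, while the field of a nearby source, which is stronger at the closer sensor, survives:

```python
import numpy as np

t = np.arange(2000) / 2000.0                  # 1 s of data, 2 kHz sampling
signal_1 = 1.0 * np.sin(2 * np.pi * 10 * t)   # local source, near sensor
signal_2 = 0.2 * np.sin(2 * np.pi * 10 * t)   # same source, farther sensor
common = 5.0 * np.sin(2 * np.pi * 50 * t)     # vibration pickup, same on both

s1 = signal_1 + common
s2 = signal_2 + common
gradient = s1 - s2                            # common mode cancels

def amp_at(x, f_hz):
    """Amplitude of the f_hz component (1 s record -> bin index == Hz)."""
    return 2.0 * np.abs(np.fft.rfft(x)[int(f_hz)]) / len(x)
```

    Here the 50 Hz "vibration" vanishes from the difference while a 0.8-amplitude residual of the 10 Hz source field remains; the paper's contribution is achieving comparable rejection with real sensors operated under H-field modulation.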
