Sample records for equivalent source model

  1. Sound source identification and sound radiation modeling in a moving medium using the time-domain equivalent source method.

    PubMed

    Zhang, Xiao-Zheng; Bi, Chuan-Xing; Zhang, Yong-Bin; Xu, Liang

    2015-05-01

    Planar near-field acoustic holography has been successfully extended to reconstruct the sound field in a moving medium; however, the reconstructed field still contains the convection effect, which might lead to incorrect identification of sound sources. To accurately identify sound sources in a moving medium, a time-domain equivalent source method is developed. In this method, the real source is replaced by a series of time-domain equivalent sources whose strengths are solved iteratively by utilizing the measured pressure and the known convective time-domain Green's function, and time averaging is used to reduce the instability in the iterative solving process. Since the solved equivalent source strengths are independent of the convection effect, they can be used not only to identify sound sources but also to model sound radiation in both moving and static media. Numerical simulations are performed to investigate the influence of noise on the solved equivalent source strengths and the effect of time averaging on reducing the instability, and to demonstrate the advantages of the proposed method in source identification and sound radiation modeling.
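The iterative solve described in this record can be sketched as a linear system p = Gq relating measured pressures to equivalent-source strengths, solved by a damped gradient iteration. This is a minimal sketch, not the paper's method: the random Green's-function matrix, step size, and averaging scheme are illustrative assumptions.

```python
import numpy as np

# Toy sketch: recover equivalent-source strengths q from measured
# pressures p = G q. G stands in for sampled (convective) Green's
# functions; its entries, the noise level, and the averaging factor
# are illustrative assumptions, not the paper's discretization.
rng = np.random.default_rng(0)
n_mics, n_src = 24, 8
G = rng.normal(size=(n_mics, n_src))             # Green's-function matrix
q_true = rng.normal(size=n_src)                  # true source strengths
p = G @ q_true + 0.01 * rng.normal(size=n_mics)  # noisy measurements

q = np.zeros(n_src)
step = 0.01
for _ in range(2000):
    q_new = q - step * (G.T @ (G @ q - p))       # least-squares gradient step
    q = 0.5 * (q + q_new)                        # averaging damps instability
```

The averaging step halves the effective update, mimicking the stabilizing role of time averaging in the record's iterative solve.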

  2. Feasibility of Equivalent Dipole Models for Electroencephalogram-Based Brain Computer Interfaces.

    PubMed

    Schimpf, Paul H

    2017-09-15

    This article examines the localization errors of equivalent dipolar sources inverted from the surface electroencephalogram in order to determine the feasibility of using their location as classification parameters for non-invasive brain computer interfaces. Inverse localization errors are examined for two head models: a model represented by four concentric spheres and a realistic model based on medical imagery. It is shown that the spherical model results in localization ambiguity such that a number of dipolar sources, with different azimuths and varying orientations, provide a near match to the electroencephalogram of the best equivalent source. No such ambiguity exists for the elevation of inverted sources, indicating that for spherical head models, only the elevation of inverted sources (and not the azimuth) can be expected to provide meaningful classification parameters for brain-computer interfaces. In a realistic head model, all three parameters of the inverted source location are found to be reliable, providing a more robust set of parameters. In both cases, the residual error hypersurfaces demonstrate local minima, indicating that a search for the best-matching sources should be global. Source localization error vs. signal-to-noise ratio is also demonstrated for both head models.

  3. An equivalent source model of the satellite-altitude magnetic anomaly field over Australia

    NASA Technical Reports Server (NTRS)

    Mayhew, M. A.; Johnson, B. D.; Langel, R. A.

    1980-01-01

    The low-amplitude, long-wavelength magnetic anomaly field measured between 400 and 700 km elevation over Australia by the POGO satellites is modeled by means of the equivalent source technique. Magnetic dipole moments are computed for a latitude-longitude array of dipole sources on the earth's surface such that the dipoles collectively give rise to a field which makes a least squares best fit to that observed. The distribution of magnetic moments is converted to a model of apparent magnetization contrast in a layer of constant (40 km) thickness, which contains information equivalent to the lateral variation in the vertical integral of magnetization down to the Curie isotherm and can be transformed to a model of variable thickness magnetization. It is noted that the closest equivalent source spacing giving a stable solution is about 2.5 deg, corresponding to about half the mean data elevation, and that the magnetization distribution correlates well with some of the principal tectonic elements of Australia.
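The least-squares equivalent-source fit in this record can be illustrated with a scalar toy problem. The 1/r^3 kernel, the geometry, and the source spacing of about half the observation altitude (echoing the record's stability note) are assumptions, not the actual dipole-tensor computation.

```python
import numpy as np

# Scalar toy version of the equivalent-source fit: moments of a fixed
# array of point sources on the "surface" are least-squares fitted so
# their summed field matches observations at altitude. The 1/r^3
# falloff and geometry are illustrative stand-ins for the dipole field.
src = np.linspace(0.0, 10.0, 6)        # source positions (spacing 2)
obs = np.linspace(0.0, 10.0, 40)       # observation track
alt = 4.0                              # altitude ~ twice the source spacing
r = np.hypot(obs[:, None] - src[None, :], alt)
A = 1.0 / r**3                         # field per unit moment
m_true = np.array([1.0, -2.0, 0.5, 3.0, -1.0, 2.0])
b = A @ m_true                         # synthetic observed anomaly
m_fit, *_ = np.linalg.lstsq(A, b, rcond=None)
```

Tightening the source spacing relative to altitude makes the columns of A increasingly similar, which is the instability the record describes.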

  4. Applicability of the single equivalent point dipole model to represent a spatially distributed bio-electrical source

    NASA Technical Reports Server (NTRS)

    Armoundas, A. A.; Feldman, A. B.; Sherman, D. A.; Cohen, R. J.

    2001-01-01

    Although the single equivalent point dipole model has been used to represent well-localised bio-electrical sources, in realistic situations the source is distributed. Consequently, position estimates of point dipoles determined by inverse algorithms suffer from systematic error due to the non-exact applicability of the inverse model. In realistic situations, this systematic error cannot be avoided, a limitation that is independent of the complexity of the torso model used. This study quantitatively investigates the intrinsic limitations in the assignment of a location to the equivalent dipole due to a distributed electrical source. To simulate arrhythmic activity in the heart, a model of a wave of depolarisation spreading from a focal source over the surface of a spherical shell is used. The activity is represented by a sequence of concentric belt sources (obtained by slicing the shell with a sequence of parallel plane pairs), with constant dipole moment per unit length (circumferentially) directed parallel to the propagation direction. The distributed source is represented by N dipoles at equal arc lengths along the belt. The sum of the dipole potentials is calculated at predefined electrode locations. The inverse problem involves finding a single equivalent point dipole that best reproduces the electrode potentials due to the distributed source, and is implemented by minimising the chi-squared per degree of freedom. It is found that the trajectory traced by the equivalent dipole is sensitive to the location of the spherical shell relative to the fixed electrodes, and that this trajectory does not coincide with the sequence of geometrical centres of the consecutive belt sources. For distributed sources within a bounded spherical medium, displaced from the sphere's centre by 40% of the sphere's radius, the error in the equivalent dipole location varies from 3 to 20% for sources with size between 5 and 50% of the sphere's radius. Finally, a method is devised to obtain the size of the distributed source during the cardiac cycle.
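The inverse step, finding a single equivalent point source whose predicted potentials minimise a chi-squared misfit, can be sketched with a toy 1/r potential on a ring of electrodes. The field model, electrode geometry, and starting guess are illustrative assumptions; scipy is assumed available.

```python
import numpy as np
from scipy.optimize import minimize

# Toy inverse problem: locate the single equivalent point source whose
# predicted electrode potentials best match the "measurements". The 1/r
# potential and the 2-D electrode ring are stand-ins for the torso model.
theta = np.linspace(0.0, 2.0 * np.pi, 16, endpoint=False)
electrodes = np.c_[np.cos(theta), np.sin(theta)]     # ring of 16 electrodes
x_true = np.array([0.3, 0.1])                        # true source location
v_meas = 1.0 / np.linalg.norm(electrodes - x_true, axis=1)

def chi2(x):
    v = 1.0 / np.linalg.norm(electrodes - x, axis=1)
    return np.sum((v - v_meas) ** 2)                 # chi-squared misfit

# The record notes local minima in the residual hypersurface, so a real
# search should be global; one local start suffices for this toy problem.
res = minimize(chi2, x0=np.zeros(2), method="Nelder-Mead")
```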

  5. Equivalent source modeling of the main field using MAGSAT data

    NASA Technical Reports Server (NTRS)

    1980-01-01

    The software was considerably enhanced to accommodate a more comprehensive examination of data available for field modeling using the equivalent sources method by (1) implementing a dynamic core allocation capability into the software system for the automatic dimensioning of the normal matrix; (2) implementing a time dependent model for the dipoles; (3) incorporating the capability to input specialized data formats in a fashion similar to models in spherical harmonics; and (4) implementing the optional ability to simultaneously estimate observatory anomaly biases where annual-means data are utilized. The time dependence capability was demonstrated by estimating a component model of 21 deg resolution using the 14 day MAGSAT data set of Goddard's MGST (12/80). The equivalent source model reproduced both the constant and the secular variation found in MGST (12/80).

  6. Equivalent radiation source of 3D package for electromagnetic characteristics analysis

    NASA Astrophysics Data System (ADS)

    Li, Jun; Wei, Xingchang; Shu, Yufei

    2017-10-01

    An equivalent radiation source method is proposed in this paper to characterize the electromagnetic emission and interference of complex three-dimensional integrated circuits (ICs). The method utilizes amplitude-only near-field scanning data to reconstruct an equivalent magnetic dipole array, and a differential evolution optimization algorithm is employed to extract the locations, orientations and moments of those dipoles. By importing the equivalent dipole model into a 3D full-wave simulator together with the victim circuit model, electromagnetic interference issues in mixed RF/digital systems can be well predicted. A commercial IC is used to validate the accuracy and efficiency of the proposed method. The coupled power at the victim antenna port calculated from the equivalent radiation source is compared with measured data; the good consistency obtained confirms the validity and efficiency of the method. Project supported by the National Natural Science Foundation of China (No. 61274110).
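The extraction step can be sketched with differential evolution searching for the position and moment of a single dipole stand-in that reproduces amplitude-only scan data. The scan plane, the |m|/r^3 field stand-in, and the bounds are illustrative assumptions (the paper fits a full dipole array); scipy is assumed available.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Toy extraction: fit (x, y, z, moment) of one magnetic-dipole stand-in
# to amplitude-only near-field data on a scan plane. The |m|/r^3 law is
# an illustrative stand-in for the true dipole field pattern.
scan = np.array([[x, y, 1.0]
                 for x in np.linspace(-1.0, 1.0, 8)
                 for y in np.linspace(-1.0, 1.0, 8)])   # scan plane at z = 1
p_true = np.array([0.2, -0.1, 0.0, 1.5])                # x, y, z, moment

def field_amp(p):
    r = np.linalg.norm(scan - p[:3], axis=1)
    return np.abs(p[3]) / r**3                          # amplitude-only data

target = field_amp(p_true)

def cost(p):
    return np.sum((field_amp(p) - target) ** 2)

res = differential_evolution(
    cost, bounds=[(-1, 1), (-1, 1), (-0.5, 0.5), (0.0, 3.0)], seed=0)
```

Differential evolution needs no gradients, which suits amplitude-only (phaseless) data where the misfit is non-smooth in the unknowns.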

  7. Separation of non-stationary multi-source sound field based on the interpolated time-domain equivalent source method

    NASA Astrophysics Data System (ADS)

    Bi, Chuan-Xing; Geng, Lin; Zhang, Xiao-Zheng

    2016-05-01

    In the sound field with multiple non-stationary sources, the measured pressure is the sum of the pressures generated by all sources, and thus cannot be used directly for studying the vibration and sound radiation characteristics of every source alone. This paper proposes a separation model based on the interpolated time-domain equivalent source method (ITDESM) to separate the pressure field belonging to every source from the non-stationary multi-source sound field. In the proposed method, ITDESM is first extended to establish the relationship between the mixed time-dependent pressure and all the equivalent sources distributed on every source with known location and geometry information, and all the equivalent source strengths at each time step are solved by an iterative solving process; then, the corresponding equivalent source strengths of one interested source are used to calculate the pressure field generated by that source alone. Numerical simulation of two baffled circular pistons demonstrates that the proposed method can be effective in separating the non-stationary pressure generated by every source alone in both time and space domains. An experiment with two speakers in a semi-anechoic chamber further evidences the effectiveness of the proposed method.

  8. An equivalent body surface charge model representing three-dimensional bioelectrical activity

    NASA Technical Reports Server (NTRS)

    He, B.; Chernyak, Y. B.; Cohen, R. J.

    1995-01-01

    A new surface-source model has been developed to account for the bioelectrical potential on the body surface. A single-layer surface-charge model on the body surface has been developed to equivalently represent bioelectrical sources inside the body. The boundary conditions on the body surface are discussed in relation to the surface-charge in a half-space conductive medium. The equivalent body surface-charge is shown to be proportional to the normal component of the electric field on the body surface just outside the body. The spatial resolution of the equivalent surface-charge distribution appears intermediate between those of the body surface potential distribution and the body surface Laplacian distribution. An analytic relationship between the equivalent surface-charge and the surface Laplacian of the potential was found for a half-space conductive medium. The effects of finite spatial sampling and noise on the reconstruction of the equivalent surface-charge were evaluated by computer simulations. It was found through computer simulations that the reconstruction of the equivalent body surface-charge from the body surface Laplacian distribution is very stable against noise and finite spatial sampling. The present results suggest that the equivalent body surface-charge model may provide an additional insight to our understanding of bioelectric phenomena.

  9. A boundary condition to the Khokhlov-Zabolotskaya equation for modeling strongly focused nonlinear ultrasound fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rosnitskiy, P., E-mail: pavrosni@yandex.ru; Yuldashev, P., E-mail: petr@acs366.phys.msu.ru; Khokhlova, V., E-mail: vera@acs366.phys.msu.ru

    2015-10-28

    An equivalent source model was proposed as a boundary condition to the nonlinear parabolic Khokhlov-Zabolotskaya (KZ) equation to simulate high intensity focused ultrasound (HIFU) fields generated by medical ultrasound transducers with the shape of a spherical shell. The boundary condition was set in the initial plane; the aperture, the focal distance, and the initial pressure of the source were chosen based on the best match of the axial pressure amplitude and phase distributions in the Rayleigh integral analytic solution for a spherical transducer and the linear parabolic approximation solution for the equivalent source. Analytic expressions for the equivalent source parameters were derived. It was shown that the proposed approach allowed us to transfer the boundary condition from the spherical surface to the plane and to achieve a very good match between the linear field solutions of the parabolic and full diffraction models even for highly focused sources with F-number less than unity. The proposed method can be further used to expand the capabilities of the KZ nonlinear parabolic equation for efficient modeling of HIFU fields generated by strongly focused sources.

  10. Identifying equivalent sound sources from aeroacoustic simulations using a numerical phased array

    NASA Astrophysics Data System (ADS)

    Pignier, Nicolas J.; O'Reilly, Ciarán J.; Boij, Susann

    2017-04-01

    An application of phased array methods to numerical data is presented, aimed at identifying equivalent flow sound sources from aeroacoustic simulations. Based on phased array data extracted from compressible flow simulations, sound source strengths are computed on a set of points in the source region using phased array techniques assuming monopole propagation. Two phased array techniques are used to compute the source strengths: an approach using a Moore-Penrose pseudo-inverse and a beamforming approach using dual linear programming (dual-LP) deconvolution. The first approach gives a model of correlated sources for the acoustic field generated from the flow expressed in a matrix of cross- and auto-power spectral values, whereas the second approach results in a model of uncorrelated sources expressed in a vector of auto-power spectral values. The accuracy of the equivalent source model is estimated by computing the acoustic spectrum at a far-field observer. The approach is tested first on an analytical case with known point sources. It is then applied to the example of the flow around a submerged air inlet. The far-field spectra obtained from the source models for two different flow conditions are in good agreement with the spectra obtained with a Ffowcs Williams-Hawkings integral, showing the accuracy of the source model from the observer's standpoint. Various configurations for the phased array and for the sources are used. The dual-LP beamforming approach shows better robustness to changes in the number of probes and sources than the pseudo-inverse approach. The good results obtained with this simulation case demonstrate the potential of the phased array approach as a modelling tool for aeroacoustic simulations.
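The pseudo-inverse variant in this record reduces, under the monopole assumption, to p = Gs with a steering matrix of free-field Green's functions. A minimal sketch follows; the array geometry, frequency, and source strengths are illustrative assumptions.

```python
import numpy as np

# Pseudo-inverse source identification: microphone pressures p = G s,
# with G built from free-field monopole Green's functions; the
# Moore-Penrose pseudo-inverse recovers complex source strengths.
k = 2.0 * np.pi * 1000.0 / 343.0                  # wavenumber at 1 kHz in air
mics = np.c_[np.linspace(-1.0, 1.0, 32),
             np.full(32, 2.0), np.zeros(32)]      # line array 2 m away
srcs = np.c_[np.linspace(-0.5, 0.5, 3),
             np.zeros(3), np.zeros(3)]            # three candidate sources
r = np.linalg.norm(mics[:, None, :] - srcs[None, :, :], axis=2)
G = np.exp(1j * k * r) / (4.0 * np.pi * r)        # monopole steering matrix
s_true = np.array([1.0 + 0.0j, 0.5j, -0.8 + 0.0j])
p = G @ s_true                                    # simulated array pressures
s_est = np.linalg.pinv(G) @ p                     # correlated-source solution
```

As the record notes, this solution retains phase (correlation) between sources, unlike the dual-LP beamforming output of auto-powers only.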

  11. An experimental comparison of various methods of nearfield acoustic holography

    DOE PAGES

    Chelliah, Kanthasamy; Raman, Ganesh; Muehleisen, Ralph T.

    2017-05-19

    An experimental comparison of four different methods of nearfield acoustic holography (NAH) is presented in this study for planar acoustic sources. The four NAH methods considered are based on: (1) the spatial Fourier transform, (2) the equivalent sources model, (3) boundary element methods and (4) statistically optimized NAH. Two-dimensional measurements were obtained at different distances in front of a tonal sound source, and the NAH methods were used to reconstruct the sound field at the source surface. The reconstructed particle velocity and acoustic pressure fields showed that the equivalent sources model based algorithm along with Tikhonov regularization provided the best localization of the sources. Reconstruction errors were found to be smaller for the equivalent sources model based algorithm and the statistically optimized NAH algorithm. The effect of hologram distance on the performance of the various algorithms is discussed in detail, and the computational time required by each algorithm is also compared. Four different regularization parameter choice methods were compared: the L-curve method provided more accurate reconstructions than generalized cross validation and the Morozov discrepancy principle, while the performance of fixed-parameter regularization was comparable to that of the L-curve method.
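The best-performing combination in this study, equivalent sources with Tikhonov regularization, amounts to solving q = (GᵀG + λI)⁻¹Gᵀp. A minimal numerical sketch; the transfer matrix, noise level, and λ are illustrative (the study selects λ by the L-curve method).

```python
import numpy as np

# Tikhonov-regularized equivalent-source reconstruction:
#   q = (G^T G + lam*I)^(-1) G^T p
# G is a stand-in transfer matrix from equivalent sources to hologram
# microphones; lam trades data fit against solution norm.
rng = np.random.default_rng(1)
G = rng.normal(size=(40, 20)) / np.sqrt(40)   # stand-in transfer matrix
q_true = rng.normal(size=20)                  # equivalent-source strengths
p = G @ q_true + 0.01 * rng.normal(size=40)   # noisy hologram pressures
lam = 1e-3                                    # regularization parameter
q = np.linalg.solve(G.T @ G + lam * np.eye(20), G.T @ p)
```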

  13. Near Identifiability of Dynamical Systems

    NASA Technical Reports Server (NTRS)

    Hadaegh, F. Y.; Bekey, G. A.

    1987-01-01

    Concepts regarding approximate mathematical models treated rigorously. Paper presents new results in analysis of structural identifiability, equivalence, and near equivalence between mathematical models and physical processes they represent. Helps establish rigorous mathematical basis for concepts related to structural identifiability and equivalence revealing fundamental requirements, tacit assumptions, and sources of error. "Structural identifiability," as used by workers in this field, loosely translates as meaning ability to specify unique mathematical model and set of model parameters that accurately predict behavior of corresponding physical system.

  14. An equivalent viscoelastic model for rock mass with parallel joints

    NASA Astrophysics Data System (ADS)

    Li, Jianchun; Ma, Guowei; Zhao, Jian

    2010-03-01

    An equivalent viscoelastic medium model is proposed for rock mass with parallel joints. A concept of "virtual wave source (VWS)" is proposed to take into account the wave reflections between the joints. The equivalent model can be effectively applied to analyze longitudinal wave propagation through discontinuous media with parallel joints. Parameters in the equivalent viscoelastic model are derived analytically based on longitudinal wave propagation across a single rock joint. The proposed model is then verified by applying identical incident waves to the discontinuous and equivalent viscoelastic media at one end to compare the output waves at the other end. When the wavelength of the incident wave is sufficiently long compared to the joint spacing, the effect of the VWS on wave propagation in rock mass is prominent. The results from the equivalent viscoelastic medium model are very similar to those determined from the displacement discontinuity method. Frequency dependence and joint spacing effect on the equivalent viscoelastic model and the VWS method are discussed.

  15. National Snow Analyses - NOHRSC - The ultimate source for snow information

    Science.gov Websites

    Thumbnail maps of modeled snow water equivalent (SWE), snow depth, and average snowpack temperature, each animated over a season, two weeks, or one day.

  16. Distributed source model for the full-wave electromagnetic simulation of nonlinear terahertz generation.

    PubMed

    Fumeaux, Christophe; Lin, Hungyen; Serita, Kazunori; Withayachumnankul, Withawat; Kaufmann, Thomas; Tonouchi, Masayoshi; Abbott, Derek

    2012-07-30

    The process of terahertz generation through optical rectification in a nonlinear crystal is modeled using discretized equivalent current sources. The equivalent terahertz sources are distributed in the active volume and computed based on a separately modeled near-infrared pump beam. This approach can be used to define an appropriate excitation for full-wave electromagnetic numerical simulations of the generated terahertz radiation. This enables predictive modeling of the near-field interactions of the terahertz beam with micro-structured samples, e.g. in a near-field time-resolved microscopy system. The distributed source model is described in detail, and an implementation in a particular full-wave simulation tool is presented. The numerical results are then validated through a series of measurements on square apertures. The general principle can be applied to other nonlinear processes with possible implementation in any full-wave numerical electromagnetic solver.

  17. Particle swarm optimization and its application in MEG source localization using single time sliced data

    NASA Astrophysics Data System (ADS)

    Lin, Juan; Liu, Chenglian; Guo, Yongning

    2014-10-01

    The estimation of neural active sources from magnetoencephalography (MEG) data is a critical issue for both clinical neurology and brain function research. A widely accepted source-modeling technique for MEG involves calculating a set of equivalent current dipoles (ECDs). Source depth in the brain is one of the difficulties in MEG source localization. Particle swarm optimization (PSO) is widely used to solve various optimization problems. In this paper we discuss its ability and robustness in finding the global optimum at different depths in the brain when using the single equivalent current dipole (sECD) model and single time-sliced data. The results show that PSO is an effective global optimization method for MEG source localization when a single dipole at different depths is given.
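A minimal PSO applied to a toy source-localization misfit illustrates the search; the 1/r field stand-in, sensor layout, and PSO constants are assumptions, not the paper's forward model.

```python
import numpy as np

# Minimal particle swarm optimization: particles search 3-D space for
# the source location minimizing a least-squares misfit to "sensor"
# data. The 1/r field is an illustrative stand-in for the MEG forward
# model; w, c1, c2 are conventional PSO constants.
rng = np.random.default_rng(2)
sensors = rng.normal(size=(30, 3)) + np.array([0.0, 0.0, 5.0])
x_true = np.array([0.2, -0.3, 1.0])            # "deep" source location

def misfit(x):
    d_true = np.linalg.norm(sensors - x_true, axis=1)
    d = np.linalg.norm(sensors - x, axis=1)
    return np.sum((1.0 / d - 1.0 / d_true) ** 2)

n, w, c1, c2 = 40, 0.7, 1.5, 1.5
pos = rng.uniform(-2.0, 2.0, size=(n, 3))      # particle positions
vel = np.zeros((n, 3))
pbest = pos.copy()                             # personal bests
pbest_f = np.array([misfit(x) for x in pos])
gbest = pbest[np.argmin(pbest_f)].copy()       # global best
for _ in range(200):
    r1, r2 = rng.random((n, 3)), rng.random((n, 3))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    f = np.array([misfit(x) for x in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[np.argmin(pbest_f)].copy()
```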

  18. Spherical-earth Gravity and Magnetic Anomaly Modeling by Gauss-legendre Quadrature Integration

    NASA Technical Reports Server (NTRS)

    Vonfrese, R. R. B.; Hinze, W. J.; Braile, L. W.; Luca, A. J. (Principal Investigator)

    1981-01-01

    The anomalous potentials of gravity and magnetic fields and their spatial derivatives on a spherical Earth were calculated for an arbitrary body represented by an equivalent point source distribution of gravity poles or magnetic dipoles. The distribution of equivalent point sources was determined directly from the coordinate limits of the source volume. Variable integration limits for an arbitrarily shaped body are derived from interpolation of points which approximate the body's surface envelope. The versatility of the method is enhanced by the ability to treat physical property variations within the source volume and to consider variable magnetic fields over the source and observation surface. A number of examples verify and illustrate the capabilities of the technique, including preliminary modeling of potential field signatures for Mississippi embayment crustal structure at satellite elevations.

  19. 40 CFR Table 5 to Subpart Mmmm of... - Model Rule-Toxic Equivalency Factors

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 7 2013-07-01 2013-07-01 false Model Rule-Toxic Equivalency Factors 5 Table 5 to Subpart MMMM of Part 60 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Emission Guidelines and Compliance Times for Existing Sewage Sludge...

  20. 40 CFR Table 5 to Subpart Mmmm of... - Model Rule-Toxic Equivalency Factors

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 7 2014-07-01 2014-07-01 false Model Rule-Toxic Equivalency Factors 5 Table 5 to Subpart MMMM of Part 60 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Emission Guidelines and Compliance Times for Existing Sewage Sludge...

  1. Equivalent circuit of radio frequency-plasma with the transformer model

    NASA Astrophysics Data System (ADS)

    Nishida, K.; Mochizuki, S.; Ohta, M.; Yasumoto, M.; Lettry, J.; Mattei, S.; Hatayama, A.

    2014-02-01

    The LINAC4 H- source is a radio frequency (RF) driven source. In the RF system, the load impedance, which includes the H- source, must be matched to that of the final amplifier. We model the RF plasma inside the H- source as circuit elements using a transformer model so that the characteristics of the load impedance can be calculated. It has been shown that modeling based on the transformer model works well for predicting the resistance and inductance of the plasma.

  2. Equivalent source modeling of the core magnetic field using magsat data

    NASA Technical Reports Server (NTRS)

    Mayhew, M. A.; Estes, R. H.

    1983-01-01

    Experiments are carried out on fitting the main field using different numbers of equivalent sources arranged in equal area at fixed radii at and inside the core-mantle boundary. By fixing the radius for a given series of runs, the convergence problems that result from the extreme nonlinearity of the problem when dipole positions are allowed to vary are avoided. Results are presented from a comparison between this approach and the standard spherical harmonic approach for modeling the main field in terms of accuracy and computational efficiency. The modeling of the main field with an equivalent dipole representation is found to be comparable to the standard spherical harmonic approach in accuracy. The 32 deg dipole density (42 dipoles) corresponds approximately to an eleventh degree/order spherical harmonic expansion (143 parameters), whereas the 21 deg dipole density (92 dipoles) corresponds approximately to a seventeenth degree and order expansion (323 parameters). It is pointed out that fixing the dipole positions results in rapid convergence of the dipole solutions for single-epoch models.

  3. Equivalent magnetic vector potential model for low-frequency magnetic exposure assessment

    NASA Astrophysics Data System (ADS)

    Diao, Y. L.; Sun, W. N.; He, Y. Q.; Leung, S. W.; Siu, Y. M.

    2017-10-01

    In this paper, a novel source model based on a magnetic vector potential for the assessment of induced electric field strength in a human body exposed to the low-frequency (LF) magnetic field of an electrical appliance is presented. The construction of the vector potential model requires only a single-component magnetic field to be measured close to the appliance under test, hence relieving considerable practical measurement effort—the radial basis functions (RBFs) are adopted for the interpolation of discrete measurements; the magnetic vector potential model can then be directly constructed by summing a set of simple algebraic functions of RBF parameters. The vector potentials are then incorporated into numerical calculations as the equivalent source for evaluations of the induced electric field in the human body model. The accuracy and effectiveness of the proposed model are demonstrated by comparing the induced electric field in a human model to that of the full-wave simulation. This study presents a simple and effective approach for modelling the LF magnetic source. The result of this study could simplify the compliance test procedure for assessing an electrical appliance regarding LF magnetic exposure.
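The RBF interpolation step that this record builds on can be sketched with Gaussian kernels fitted to sparse single-component "measurements". The grid, kernel width, and sample field are illustrative assumptions.

```python
import numpy as np

# Gaussian radial basis function (RBF) interpolation of discrete
# single-component field measurements: solve A w = vals for the kernel
# weights, then evaluate the interpolant anywhere. The measurement grid
# and the sin*cos sample field are illustrative stand-ins.
g = np.linspace(-1.0, 1.0, 7)
pts = np.array([[x, y] for x in g for y in g])   # 49 measurement points
vals = np.sin(pts[:, 0]) * np.cos(pts[:, 1])     # measured field component

eps = 3.0                                        # RBF shape parameter
def phi(r):
    return np.exp(-(eps * r) ** 2)               # Gaussian kernel

A = phi(np.linalg.norm(pts[:, None] - pts[None, :], axis=2))
w = np.linalg.solve(A, vals)                     # interpolation weights

def interp(x):
    return phi(np.linalg.norm(pts - x, axis=1)) @ w
```

In the record's method, simple algebraic functions of these RBF parameters are then summed to construct the magnetic vector potential directly.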

  5. Spherical-earth gravity and magnetic anomaly modeling by Gauss-Legendre quadrature integration

    NASA Technical Reports Server (NTRS)

    Von Frese, R. R. B.; Hinze, W. J.; Braile, L. W.; Luca, A. J.

    1981-01-01

    Gauss-Legendre quadrature integration is used to calculate the anomalous potential of gravity and magnetic fields and their spatial derivatives on a spherical earth. The procedure involves representation of the anomalous source as a distribution of equivalent point gravity poles or point magnetic dipoles. The distribution of equivalent point sources is determined directly from the volume limits of the anomalous body. The variable limits of integration for an arbitrarily shaped body are obtained from interpolations performed on a set of body points which approximate the body's surface envelope. The versatility of the method is shown by its ability to treat physical property variations within the source volume as well as variable magnetic fields over the source and observation surface. Examples are provided which illustrate the capabilities of the technique, including a preliminary modeling of potential field signatures for the Mississippi embayment crustal structure at 450 km.
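The quadrature machinery itself is standard: nodes and weights on [-1, 1], mapped to the body's limits. A minimal sketch with a 1/r integrand over a line source; the geometry and kernel are illustrative stand-ins for the gravity/magnetic kernels.

```python
import numpy as np

# Gauss-Legendre quadrature for an equivalent-source integral: the
# potential of a uniform line source along z, observed 2 units off-axis.
# Nodes/weights on [-1, 1] are mapped to the body's integration limits.
nodes, weights = np.polynomial.legendre.leggauss(16)
a, b = -1.0, 1.0                            # integration limits of the body
z = 0.5 * (b - a) * nodes + 0.5 * (b + a)   # mapped quadrature nodes
r = np.hypot(0.0 - z, 2.0)                  # distance to observation point
potential = 0.5 * (b - a) * np.sum(weights / r)
```

For this smooth integrand the 16-point rule matches the closed form, 2·asinh(1/2), to machine precision; arbitrarily shaped bodies only change the mapped limits per dimension.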

  6. Seismic equivalents of volcanic jet scaling laws and multipoles in acoustics

    NASA Astrophysics Data System (ADS)

    Haney, Matthew M.; Matoza, Robin S.; Fee, David; Aldridge, David F.

    2018-04-01

    We establish analogies between equivalent source theory in seismology (moment-tensor and single-force sources) and acoustics (monopoles, dipoles and quadrupoles) in the context of volcanic eruption signals. Although infrasound (acoustic waves < 20 Hz) from volcanic eruptions may be more complex than a simple monopole, dipole or quadrupole assumption, these elementary acoustic sources are a logical place to begin exploring relations with seismic sources. By considering the radiated power of a harmonic force source at the surface of an elastic half-space, we show that a volcanic jet or plume modelled as a seismic force has similar scaling with respect to eruption parameters (e.g. exit velocity and vent area) as an acoustic dipole. We support this by demonstrating, from first principles, a fundamental relationship that ties together explosion, torque and force sources in seismology and highlights the underlying dipole nature of seismic forces. This forges a connection between the multipole expansion of equivalent sources in acoustics and the use of forces and moments as equivalent sources in seismology. We further show that volcanic infrasound monopole and quadrupole sources exhibit scalings similar to seismicity radiated by volume injection and moment sources, respectively. We describe a scaling theory for seismic tremor during volcanic eruptions that agrees with observations showing a linear relation between radiated power of tremor and eruption rate. Volcanic tremor over the first 17 hr of the 2016 eruption at Pavlof Volcano, Alaska, obeyed the linear relation. Subsequent tremor during the main phase of the eruption did not obey the linear relation and demonstrates that volcanic eruption tremor can exhibit other scalings even during the same eruption.

  7. Obtaining source current density related to irregularly structured electromagnetic target field inside human body using hybrid inverse/FDTD method.

    PubMed

    Han, Jijun; Yang, Deqiang; Sun, Houjun; Xin, Sherman Xuegang

    2017-01-01

    The inverse method is inherently suitable for calculating the distribution of source current density related to an irregularly structured electromagnetic target field. However, the present form of the inverse method cannot account for complex field-tissue interactions. A novel hybrid inverse/finite-difference time domain (FDTD) method is proposed that can calculate the complex field-tissue interactions for the inverse design of a source current density related to an irregularly structured electromagnetic target field. A Huygens' equivalent surface is established as a bridge to combine the inverse and FDTD methods. The distribution of the radiofrequency (RF) magnetic field on the Huygens' equivalent surface is obtained using the FDTD method by considering the complex field-tissue interactions within the human body model. The magnetic field obtained on the Huygens' equivalent surface is then regarded as the new target, and the current density on the designated source surface is derived using the inverse method. The homogeneity of the target magnetic field and the specific energy absorption rate are calculated to verify the proposed method.
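
    The inverse step in such hybrid schemes typically reduces to a discrete ill-posed linear system relating unknown source currents to the target field. A minimal sketch using Tikhonov-regularized least squares (the transfer matrix, dimensions, and regularization weight are all hypothetical stand-ins, not the authors' formulation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical discrete transfer matrix G mapping source-surface current
# amplitudes to the target magnetic field sampled on a Huygens' surface.
n_field, n_source = 80, 40
G = rng.standard_normal((n_field, n_source))
j_true = rng.standard_normal(n_source)
b = G @ j_true  # target field samples (noise-free for this sketch)

# Tikhonov-regularized least squares: min ||G j - b||^2 + lam * ||j||^2
lam = 1e-6
j_est = np.linalg.solve(G.T @ G + lam * np.eye(n_source), G.T @ b)

residual = np.linalg.norm(G @ j_est - b) / np.linalg.norm(b)
print(residual)  # near zero: the regularized solve recovers the currents
```

With measured (noisy) field data, the regularization weight would be chosen to trade residual against solution norm rather than set near machine precision.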

  8. Incorporating Measurement Non-Equivalence in a Cross-Study Latent Growth Curve Analysis

    PubMed Central

    Flora, David B.; Curran, Patrick J.; Hussong, Andrea M.; Edwards, Michael C.

    2009-01-01

    A large literature emphasizes the importance of testing for measurement equivalence in scales that may be used as observed variables in structural equation modeling applications. When the same construct is measured across more than one developmental period, as in a longitudinal study, it can be especially critical to establish measurement equivalence, or invariance, across the developmental periods. Similarly, when data from more than one study are combined into a single analysis, it is again important to assess measurement equivalence across the data sources. Yet, how to incorporate non-equivalence when it is discovered is not well described for applied researchers. Here, we present an item response theory approach that can be used to create scale scores from measures while explicitly accounting for non-equivalence. We demonstrate these methods in the context of a latent curve analysis in which data from two separate studies are combined to create a single longitudinal model spanning several developmental periods. PMID:19890440

  9. Skyshine study for next generation of fusion devices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gohar, Y.; Yang, S.

    1987-02-01

    A shielding analysis for the next generation of fusion devices (ETR/INTOR) was performed to study the dose equivalent outside the reactor building during operation, including the contribution from neutrons and photons scattered back by collisions with air nuclei (the skyshine component). Two different three-dimensional geometrical models for a tokamak fusion reactor based on INTOR design parameters were developed for this study. In the first geometrical model, the reactor geometry and the spatial distribution of the deuterium-tritium neutron source were simplified for a parametric survey. The second geometrical model employed an explicit representation of the toroidal geometry of the reactor chamber and the spatial distribution of the neutron source. The MCNP general Monte Carlo code for neutron and photon transport was used to perform all the calculations. The energy distribution of the neutron source was used explicitly in the calculations with ENDF/B-V data. The dose equivalent results were analyzed as a function of the concrete roof thickness of the reactor building and the location outside the reactor building.

  10. An Empirical Temperature Variance Source Model in Heated Jets

    NASA Technical Reports Server (NTRS)

    Khavaran, Abbas; Bridges, James

    2012-01-01

    An acoustic analogy approach is implemented that models the sources of jet noise in heated jets. The equivalent sources of turbulent mixing noise are recognized as the differences between the fluctuating and Favre-averaged Reynolds stresses and enthalpy fluxes. While in a conventional acoustic analogy only Reynolds stress components are scrutinized for their noise generation properties, it is now accepted that a comprehensive source model should include the additional entropy source term. Following Goldstein's generalized acoustic analogy, the set of Euler equations is divided into two sets of equations that govern a non-radiating base flow plus its residual components. When the base flow is considered as a locally parallel mean flow, the residual equations may be rearranged to form an inhomogeneous third-order wave equation. A general solution is subsequently written using a Green's function method, while all non-linear terms are treated as the equivalent sources of aerodynamic sound and are modeled accordingly. In a previous study, a specialized Reynolds-averaged Navier-Stokes (RANS) solver was implemented to compute the variance of thermal fluctuations that determines the enthalpy flux source strength. The main objective here is to present an empirical model capable of providing a reasonable estimate of the stagnation temperature variance in a jet. Such a model is parameterized as a function of the mean stagnation temperature gradient in the jet, and is evaluated using commonly available RANS solvers. The ensuing thermal source distribution is compared with measurements as well as computational results from a dedicated RANS solver that employs an enthalpy variance and dissipation rate model. Turbulent mixing noise predictions are presented for a wide range of jet temperature ratios from 1.0 to 3.20.

  11. Observational breakthroughs lead the way to improved hydrological predictions

    NASA Astrophysics Data System (ADS)

    Lettenmaier, Dennis P.

    2017-04-01

    New data sources are revolutionizing the hydrological sciences. The capabilities of hydrological models have advanced greatly over the last several decades, but until recently model capabilities have outstripped the spatial resolution and accuracy of model forcings (atmospheric variables at the land surface) and the hydrologic state variables (e.g., soil moisture; snow water equivalent) that the models predict. This has begun to change, as shown in two examples here: soil moisture and drought evolution over Africa as predicted by a hydrology model forced with satellite-derived precipitation, and observations of snow water equivalent at very high resolution over a river basin in California's Sierra Nevada.

  12. Modelling nonlinearity in piezoceramic transducers: From equations to nonlinear equivalent circuits.

    PubMed

    Parenthoine, D; Tran-Huu-Hue, L-P; Haumesser, L; Vander Meulen, F; Lematre, M; Lethiecq, M

    2011-02-01

    Quadratic nonlinear equations of a piezoelectric element under the assumptions of 1D vibration and weak nonlinearity are derived by the perturbation theory. It is shown that the nonlinear response can be represented by controlled sources that are added to the classical hexapole used to model piezoelectric ultrasonic transducers. As a consequence, equivalent electrical circuits can be used to predict the nonlinear response of a transducer taking into account the acoustic loads on the rear and front faces. A generalisation of nonlinear equivalent electrical circuits to cases including passive layers and propagation media is then proposed. Experimental results, in terms of second harmonic generation, on a coupled resonator are compared to theoretical calculations from the proposed model. Copyright © 2010 Elsevier B.V. All rights reserved.

  13. SU-E-T-102: Determination of Dose Distributions and Water-Equivalence of MAGIC-F Polymer Gel for 60Co and 192Ir Brachytherapy Sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quevedo, A; Nicolucci, P

    2014-06-01

    Purpose: To analyse the water-equivalence of MAGIC-f polymer gel for 60Co and 192Ir clinical brachytherapy sources, through dose distributions simulated with the PENELOPE Monte Carlo code. Methods: The real geometries of 60Co (BEBIG, model Co0.A86) and 192Ir (Varian, model GammaMed Plus) clinical brachytherapy sources were modelled in the PENELOPE Monte Carlo simulation code. The most probable photon emission lines were used for both sources: 17 emission lines for 192Ir and 12 lines for 60Co. The dose distributions were obtained in a cubic homogeneous phantom (30 × 30 × 30 cm3) of water or gel, with the source positioned at the centre of the phantom. In all cases the number of simulated showers was kept constant at 10^9 particles. A specific material for the gel was constructed in PENELOPE using the weight fractions of the MAGIC-f components: wH = 0.1062, wC = 0.0751, wN = 0.0139, wO = 0.8021, wS = 2.58 × 10^-6 and wCu = 5.08 × 10^-6. The voxel size in the dose distributions was 0.6 mm. Dose distribution maps along the longitudinal and radial directions through the centre of the source were used to analyse the water-equivalence of MAGIC-f. Results: For the 60Co source, the maximum differences in relative dose between gel and water were 0.65% and 1.90% in the radial and longitudinal directions, respectively. For 192Ir, the maximum differences in relative dose were 0.30% and 1.05%, respectively. The equivalence of the materials can also be verified through the effective atomic number and density of each material: Zeff,MAGIC-f = 7.07 and ρMAGIC-f = 1.060 g/cm3, versus Zeff,water = 7.22. Conclusion: The results showed that MAGIC-f is water-equivalent and consequently suitable for simulating soft tissue at cobalt and iridium energies. Hence, the gel can be used as a dosimeter in clinical applications. Further investigation of its use in a clinical protocol is needed.
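
    Effective atomic numbers like those quoted above can be computed from weight fractions with the classic Mayneord power-law definition. Note that several Z_eff definitions with different exponents are in use, so values computed this way differ slightly from the figures reported in the abstract. A sketch under that assumption:

```python
def z_eff(composition, m=2.94):
    """Mayneord effective atomic number: (sum_i a_i * Z_i**m)**(1/m),
    where a_i is the fractional electron contribution of element i.
    composition: list of (Z, A, weight_fraction) tuples."""
    contrib = [(w * Z / A, Z) for Z, A, w in composition]
    total = sum(c for c, _ in contrib)
    return sum((c / total) * Z ** m for c, Z in contrib) ** (1.0 / m)

# Water (H 11.19%, O 88.81% by weight): about 7.4 with this definition.
water = [(1, 1.008, 0.1119), (8, 16.00, 0.8881)]
print(round(z_eff(water), 2))
```

Closeness of Z_eff and density between gel and water is what underlies the water-equivalence claim.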

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chinthavali, Madhu Sudhan; Campbell, Steven L

    This paper presents an analytical model for a wireless power transfer system used in electric vehicle applications. The equivalent circuit model for each major component of the system is described, including the input voltage source, resonant network, transformer, and nonlinear diode rectifier load. Based on the circuit model, the primary-side compensation capacitance, equivalent input impedance, and active/reactive power are calculated, which provides a guideline for parameter selection. Moreover, the dc output to dc input voltage gain curve is derived as well. A hardware prototype with a series-parallel resonant stage was built to verify the developed model. The experimental results from the hardware are compared with the model-predicted results to show the validity of the model.

  15. pacce: Perl algorithm to compute continuum and equivalent widths

    NASA Astrophysics Data System (ADS)

    Riffel, Rogério; Borges Vale, Tibério

    2011-08-01

    We present the Perl Algorithm to Compute Continuum and Equivalent Widths (pacce). We describe the methods used in the computations and the requirements for its usage. We compare measurements made with pacce to "manual" ones made using the IRAF splot task. These tests show that for synthetic simple stellar population (SSP) models the equivalent width strengths are very similar (differences ≲0.2 Å) for both measurements. In real stellar spectra, the correlation between the two values is still very good, but with differences of up to 0.5 Å. pacce is also able to determine the mean continuum and the continuum at line center, which are helpful in stellar population studies. In addition, it is able to compute the uncertainties in the equivalent widths using photon statistics. The code is made available to the community at http://www.if.ufrgs.br/~riffel/software.html.
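
    The core quantity pacce measures is the equivalent width, W = ∫(1 − F/F_c) dλ over the line. A minimal numerical sketch on a synthetic Gaussian absorption line (this illustrates the definition only, not pacce's actual algorithm):

```python
import numpy as np

# Equivalent width: W = integral of (1 - F / F_c) dlambda across the line.
# Synthetic Gaussian absorption line on a flat continuum F_c = 1.
wavelength = np.linspace(6540.0, 6585.0, 2000)  # Angstrom
depth, center, sigma = 0.8, 6562.8, 1.0
flux = 1.0 - depth * np.exp(-0.5 * ((wavelength - center) / sigma) ** 2)

continuum = 1.0  # assumed known; pacce estimates it from line-free windows
dlam = wavelength[1] - wavelength[0]
ew = np.sum(1.0 - flux / continuum) * dlam

# Analytic value for a Gaussian line: depth * sigma * sqrt(2*pi) ~ 2.005 A
print(ew)
```

In real spectra the continuum is not flat, which is why continuum placement dominates the 0.5 Å differences reported above.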

  16. Sound field reproduction as an equivalent acoustical scattering problem.

    PubMed

    Fazi, Filippo Maria; Nelson, Philip A

    2013-11-01

    Given a continuous distribution of acoustic sources, the determination of the source strength that ensures the synthesis of a desired sound field is shown to be identical to the solution of an equivalent acoustic scattering problem. The paper begins with the presentation of the general theory that underpins sound field reproduction with secondary sources continuously arranged on the boundary of the reproduction region. The process of reproduction by a continuous source distribution is modeled by means of an integral operator (the single layer potential). It is then shown how the solution of the sound reproduction problem corresponds to that of an equivalent scattering problem. Analytical solutions are computed for two specific instances of this problem, involving, respectively, the use of a secondary source distribution in spherical and planar geometries. The results are shown to be the same as those obtained with analyses based on High Order Ambisonics and Wave Field Synthesis, respectively, thus bringing to light a fundamental analogy between these two methods of sound reproduction. Finally, it is shown how the physical optics (Kirchhoff) approximation enables the derivation of a high-frequency simplification for the problem under consideration, this in turn being related to the secondary source selection criterion reported in the literature on Wave Field Synthesis.
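
    The discretized single layer potential described above leads to a linear system whose solution gives the secondary source strengths. A minimal numerical sketch using free-field point sources on a circle and a least-squares solve (geometry, frequency, and sampling are illustrative choices, not those of the paper):

```python
import numpy as np

freq, c = 500.0, 343.0
k = 2 * np.pi * freq / c  # wavenumber

def green(r):
    """Free-field point-source Green function, exp(ikr) / (4 pi r)."""
    return np.exp(1j * k * r) / (4 * np.pi * r)

# Secondary (equivalent) sources on a circle of radius 1 m:
n_src = 64
ang = np.linspace(0.0, 2 * np.pi, n_src, endpoint=False)
src = np.stack([np.cos(ang), np.sin(ang)], axis=1)

# Control points inside the reproduction region:
rng = np.random.default_rng(1)
pts = rng.uniform(-0.5, 0.5, size=(200, 2))

# Desired field: a plane wave travelling in +x.
target = np.exp(1j * k * pts[:, 0])

# Discretized single layer potential: matrix of Green functions from each
# source to each control point; strengths from a least-squares solve.
dist = np.linalg.norm(pts[:, None, :] - src[None, :, :], axis=2)
G = green(dist)
q, *_ = np.linalg.lstsq(G, target, rcond=None)

err = np.linalg.norm(G @ q - target) / np.linalg.norm(target)
print(err)  # small relative reproduction error at the control points
```

The paper's contribution is to show that the continuous version of this solve is formally an acoustic scattering problem, which the discrete sketch only hints at.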

  17. Simulation Study of Near-Surface Coupling of Nuclear Devices vs. Equivalent High-Explosive Charges

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fournier, Kevin B; Walton, Otis R; Benjamin, Russ

    2014-09-29

    A computational study was performed to examine the differences in near-surface ground waves and air-blast waves generated by high-explosive energy sources and those generated by much higher energy-density, low-yield nuclear sources. The study examined the effect of explosive-source emplacement (i.e., height-of-burst, HOB, or depth-of-burial, DOB) over a range from depths of -35 m to heights of 20 m, for explosions with an explosive yield of 1 kt. The chemical explosive was modeled by a JWL equation-of-state model for a ~14 m diameter sphere of ANFO (~1,200,000 kg, 1 kt equivalent yield), and the high-energy-density source was modeled as a one tonne (1000 kg) plasma of 'iron gas' (utilizing LLNL's tabular equation-of-state database, LEOS) in a 2 m diameter sphere, with a total internal-energy content equivalent to 1 kt. A consistent equivalent-yield coupling-factor approach was developed to compare the behavior of the two sources. The results indicate that the equivalent-yield coupling factor for air blasts from 1 kt ANFO explosions varies monotonically and continuously from a nearly perfect reflected wave off the ground surface for HOB ≈ 20 m, to a coupling factor of nearly zero at DOB ≈ -25 m. The nuclear air-blast coupling curve, on the other hand, remained nearly equal to a perfectly reflected wave all the way down to HOBs very near zero, and then quickly dropped to a value near zero for explosions with DOB ≈ -10 m. The near-surface ground wave traveling horizontally out from the explosive source region to distances of hundreds of meters exhibited equivalent-yield coupling factors that varied nearly linearly with HOB/DOB for the simulated ANFO explosive source, going from a value near zero at HOB ≈ 5 m to nearly one at DOB ≈ -25 m.
The nuclear-source-generated near-surface ground-wave coupling factor remained near zero for almost all HOBs greater than zero, and then appeared to vary nearly linearly with depth-of-burial until it reached a value of one at a DOB between 15 m and 20 m. These simulations confirm the expected result that the coupling to the ground, or the air, changes much more rapidly with emplacement location for a high-energy-density (i.e., nuclear-like) explosive source than it does for relatively low-energy-density chemical explosive sources. The Energy Partitioning, Energy Coupling (EPEC) platform at LLNL utilizes laser energy from one quad (i.e., 4 laser beams) of the 192-beam NIF laser bank to deliver ~10 kJ of energy to 1 mg of silver in a hohlraum, creating an effective small-explosive 'source' with an energy density comparable to those in low-yield nuclear devices. Such experiments have the potential to provide direct experimental confirmation of the simulation results obtained in this study, at a physical scale (and time scale) which is a factor of 1000 smaller than the spatial or temporal scales typically encountered when dealing with nuclear explosions.

  18. Measurement of the ambient gamma dose equivalent and kerma from the small 252Cf source at 1 meter and the small 60Co source at 2 meters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carl, W. F.

    NASA Langley Research Center requested measurement of the ambient gamma dose equivalent rate and kerma at 100 cm from the 252Cf source and at 200 cm from the 60Co source for the Radiation Budget Instrument Experiment (Rad-X). An Exradin A6 ion chamber with Shonka air-equivalent plastic walls, in combination with a Supermax electrometer, was used to measure the exposure rate and free-in-air kerma rate of the two sources at the requested distances. The measured gamma exposure, kerma, and dose equivalent rates are tabulated.

  19. Reconstruction of instantaneous surface normal velocity of a vibrating structure using interpolated time-domain equivalent source method

    NASA Astrophysics Data System (ADS)

    Geng, Lin; Bi, Chuan-Xing; Xie, Feng; Zhang, Xiao-Zheng

    2018-07-01

    The interpolated time-domain equivalent source method is extended to reconstruct the instantaneous surface normal velocity of a vibrating structure using the time-evolving particle velocity as the input, providing a non-contact way to understand the overall instantaneous vibration behavior of the structure. In this method, the time-evolving particle velocity in the near field is first modeled by a set of equivalent sources positioned inside the vibrating structure; the integrals of the equivalent source strengths are then solved by an iterative process and used to calculate the instantaneous surface normal velocity. An experiment on a semi-cylindrical steel plate impacted by a steel ball is investigated to examine the ability of the extended method: the time-evolving normal particle velocity and pressure on the hologram surface measured by a Microflown pressure-velocity probe are used as the inputs of the extended method and of the method based on pressure measurements, respectively, and the instantaneous surface normal velocity of the plate measured by a laser Doppler vibrometer is used as the reference for comparison. The experimental results demonstrate that the extended method is a powerful tool for visualizing the instantaneous surface normal velocity of a vibrating structure in both the time and space domains and obtains more accurate results than the method based on pressure measurements.

  20. Biological effects and equivalent doses in radiotherapy: A software solution

    PubMed Central

    Voyant, Cyril; Julian, Daniel; Roustit, Rudy; Biffi, Katia; Lantieri, Céline

    2013-01-01

    Background: The limits of the TDF (time, dose, and fractionation) and linear quadratic models have been known for a long time. Medical physicists and physicians are required to provide fast and reliable interpretations regarding delivered doses or any future prescriptions relating to treatment changes. Aim: We therefore propose a calculation interface under the GNU license to be used for equivalent doses, biological doses, and normal tissue complication probability (Lyman model). Materials and methods: The methodology used draws from several sources: the linear-quadratic-linear model of Astrahan, the repopulation effects of Dale, and the prediction of multi-fractionated treatments of Thames. Results and conclusions: The results are obtained from an algorithm that minimizes an ad-hoc cost function, and are then compared to equivalent doses computed using standard calculators in seven French radiotherapy centers. PMID:24936319
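
    The equivalent-dose conversions underlying such calculators start from the basic linear-quadratic relations; the sketch below shows only this standard core (BED and EQD2), not the linear-quadratic-linear, repopulation, or NTCP extensions implemented in the cited software:

```python
# Core linear-quadratic (LQ) conversions:
#   BED  = n * d * (1 + d / (alpha/beta))
#   EQD2 = BED / (1 + 2 / (alpha/beta))
# n: number of fractions, d: dose per fraction (Gy), ab: alpha/beta (Gy).

def bed(n, d, ab):
    return n * d * (1.0 + d / ab)

def eqd2(n, d, ab):
    return bed(n, d, ab) / (1.0 + 2.0 / ab)

# 20 fractions of 3 Gy with alpha/beta = 10 Gy (a typical tumor value):
print(bed(20, 3.0, 10.0))   # 78.0 Gy
print(eqd2(20, 3.0, 10.0))  # 65.0 Gy
```

The breakdown of these formulas at large doses per fraction is precisely why the software adds the linear-quadratic-linear correction of Astrahan.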

  1. Equivalent magnetization over the World's Ocean

    NASA Astrophysics Data System (ADS)

    Dyment, J.; Choi, Y.; Hamoudi, M.; Erwan, T.; Lesur, V.

    2014-12-01

    As a by-product of our recent work to build a candidate model over the oceans for the World Digital Magnetic Anomaly Map (WDMAM) version 2, we derived global distributions of the equivalent magnetization in oceanic domains. In a first step, we use classic point-source forward modeling on a spherical Earth to build a forward model of the marine magnetic anomalies at sea surface. We estimate magnetization vectors using the age map of the ocean floor, the relative plate motions, the apparent polar wander path for Africa, and a geomagnetic reversal time scale. As the magnetized source geometry, we assume a 1 km-thick layer bearing a 10 A/m magnetization and following the topography of the oceanic basement as defined by the bathymetry and sedimentary thickness. Adding a present-day geomagnetic field model allows the computation of our initial magnetic anomaly model. In a second step, we adjust this model to the existing marine magnetic anomaly data, in order to make it consistent with these data. To do so, we extract synthetic magnetic anomalies along the ship tracks for which real data are available and compare the measured and computed anomalies quantitatively on 100, 200 or 400 km-long sliding windows (depending on the spreading rate). Among the possible comparison criteria, we discard the maximal range (too dependent on local values) and the correlation and coherency (the geographical adjustment between model and data not being accurate enough) in favor of the standard deviation around the mean value. The ratio between the standard deviations of data and model on each sliding window represents an estimate of the magnetization ratio causing the anomalies, which we interpolate to adjust the initial magnetic anomaly model to the data and thereby compute a final model to be included in our WDMAM candidate over the oceanic regions lacking data.
The above ratio, after division by the magnetization of 10 A/m used in the model, represents an estimate of the equivalent magnetization under the considered magnetized source geometry. The resulting distributions of equivalent magnetization are discussed in terms of mid-ocean ridges, presence of hotspots and oceanic plateaus, and the age of the oceanic lithosphere. Global marine magnetic data sets and models represent a useful tool to assess first order magnetic properties of the oceanic lithosphere.
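
    The window-by-window standard-deviation ratio used to scale the assumed magnetization can be sketched as follows (non-overlapping windows and synthetic profiles are used here for simplicity):

```python
import numpy as np

def magnetization_ratio(observed, modeled, window):
    """Ratio of windowed standard deviations: an estimate of the factor by
    which the assumed source magnetization must be scaled to match the
    data (non-overlapping windows here for simplicity)."""
    ratios = []
    for start in range(0, len(observed) - window + 1, window):
        sl = slice(start, start + window)
        s_mod = np.std(modeled[sl])
        ratios.append(np.std(observed[sl]) / s_mod if s_mod > 0 else np.nan)
    return np.array(ratios)

# Synthetic anomaly profile; the "data" is simply half the model's
# amplitude, so every window should return a ratio of 0.5.
x = np.linspace(0.0, 20 * np.pi, 1000)
model = np.sin(x)
data = 0.5 * model
print(magnetization_ratio(data, model, 100))
```

Scaling the assumed 10 A/m magnetization by each window's ratio is what converts the amplitude mismatch into an equivalent magnetization estimate.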

  2. The effect of a paraffin screen on the neutron dose at the maze door of a 15 MV linear accelerator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krmar, M.; Kuzmanović, A.; Nikolić, D.

    2013-08-15

    Purpose: The purpose of this study was to explore the effects of a paraffin screen located at various positions in the maze on the neutron dose equivalent at the maze door. Methods: The neutron dose equivalent was measured at the maze door of a room containing a 15 MV linear accelerator for x-ray therapy. Measurements were performed for several positions of a paraffin screen covering only 27.5% of the cross-sectional area of the maze, and the neutron dose equivalent was also measured at all screen positions. Two simple models of the neutron source were considered: the first assumed that the source was the cross-sectional area at the inner entrance of the maze, radiating neutrons isotropically; in the second, the reduction in the neutron dose equivalent at the maze door due to the paraffin screen was considered to be a function of the mean values of the neutron fluence and energy at the screen. Results: The results of this study indicate that the equivalent dose at the maze door was reduced by a factor of 3 through the use of a paraffin screen placed inside the maze. It was also determined that the contributions to the dose from areas not covered by the paraffin screen, as viewed from the dosimeter, were 2.5 times higher than the contributions from the covered areas. This study also concluded that the contributions of the maze walls, ceiling, and floor to the total neutron dose equivalent were an order of magnitude lower than those from the surface at the far end of the maze. Conclusions: This study demonstrated that a paraffin screen could be used to reduce the neutron dose equivalent at the maze door by a factor of 3. It also found that the reduction of the neutron dose equivalent was a linear function of the area covered by the maze screen and that the decrease in the dose at the maze door could be modeled as an exponential function of the product φ·E at the screen.

  3. 40 CFR 63.55 - Maximum achievable control technology (MACT) determinations for affected sources subject to case...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... (MACT) determinations for affected sources subject to case-by-case determination of equivalent emission... sources subject to case-by-case determination of equivalent emission limitations. (a) Requirements for... hazardous air pollutant emissions limitations equivalent to the limitations that would apply if an emission...

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Borisova, Elena; Lilly, Simon J.; Cantalupo, Sebastiano

    A toy model is developed to understand how the spatial distribution of fluorescent emitters in the vicinity of bright quasars could be affected by the geometry of the quasar bi-conical radiation field and by its lifetime. The model is then applied to the distribution of high-equivalent-width Lyα emitters (with rest-frame equivalent widths above 100 Å, the threshold used in, e.g., Trainor and Steidel) identified in a deep narrow-band 36 × 36 arcmin² image centered on the luminous quasar Q0420–388. These emitters are found near the edge of the field and show some evidence of an azimuthal asymmetry on the sky of the type expected if the quasar is radiating in a bipolar cone. If these sources are being fluorescently illuminated by the quasar, the two most distant objects require a lifetime of at least 15 Myr for an opening angle of 60° or more, increasing to more than 40 Myr if the opening angle is reduced to a minimum of 30°. However, some other expected signatures of boosted fluorescence are not seen at the current survey limits, e.g., a fall-off in Lyα brightness, or equivalent width, with distance. Furthermore, for most of the Lyα emission of the two distant sources to be fluorescently boosted would require the quasar to have been significantly brighter in the past. This suggests that these particular sources may not be fluorescent, invalidating the above lifetime constraints. This would cast doubt on the use of this relatively low equivalent-width threshold and thus also on the lifetime analysis in Trainor and Steidel.

  5. Resonance treatment using pin-based pointwise energy slowing-down method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Choi, Sooyoung, E-mail: csy0321@unist.ac.kr; Lee, Changho, E-mail: clee@anl.gov; Lee, Deokjung, E-mail: deokjung@unist.ac.kr

    A new resonance self-shielding method using a pointwise energy solution has been developed to overcome the drawbacks of the equivalence theory. The equivalence theory uses a crude resonance scattering source approximation and assumes a spatially constant scattering source distribution inside a fuel pellet. These two assumptions cause a significant error, in that they overestimate the multi-group effective cross sections, especially for 238U. The new resonance self-shielding method solves pointwise energy slowing-down equations with a sub-divided fuel rod. The method adopts a shadowing effect correction factor and a fictitious moderator material to model a realistic pointwise energy solution. The slowing-down solution is used to generate the multi-group cross sections. With various light water reactor problems, it was demonstrated that the new resonance self-shielding method significantly improved accuracy in reactor parameter calculations with no compromise in computation time, compared to the equivalence theory.
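
    The last step of such a scheme, collapsing a pointwise solution to multi-group cross sections, is flux weighting: σ_g = ∫σ(E)φ(E)dE / ∫φ(E)dE over each group. A minimal sketch with synthetic (non-ENDF) data:

```python
import numpy as np

def collapse(energy, sigma, flux, group_edges):
    """Flux-weighted multi-group collapse:
    sigma_g = int(sigma * flux dE) / int(flux dE) over each group."""
    dE = np.gradient(energy)
    out = []
    for lo, hi in zip(group_edges[:-1], group_edges[1:]):
        m = (energy >= lo) & (energy < hi)
        out.append(np.sum(sigma[m] * flux[m] * dE[m]) / np.sum(flux[m] * dE[m]))
    return np.array(out)

# Synthetic Lorentzian "resonance" on a constant background, with a 1/E
# flux depressed inside the resonance to mimic self-shielding:
E = np.linspace(1.0, 100.0, 10000)                      # eV
sigma = 1.0 + 100.0 / (1.0 + ((E - 36.7) / 0.5) ** 2)   # barns (illustrative)
flux = (1.0 / E) / (1.0 + sigma / 50.0)
print(collapse(E, sigma, flux, [1.0, 20.0, 60.0, 100.0]))
```

Because the flux dips where the cross section peaks, the collapsed value is smaller than an unweighted average, which is the self-shielding effect the paper's pointwise solution captures.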

  6. Radiation exposure from consumer products and miscellaneous sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1977-01-01

    This review of the literature indicates that there is a variety of consumer products and miscellaneous sources of radiation that result in exposure of the U.S. population. A summary of the number of people exposed to each such source, an estimate of the resulting dose equivalents to the exposed population, and an estimate of the average annual population dose equivalent are tabulated. A review of the data in this table shows that the total average annual contribution to the whole-body dose equivalent of the U.S. population from consumer products is less than 5 mrem; about 70 percent of this arises from the presence of naturally-occurring radionuclides in building materials. Some of the consumer product sources contribute exposure mainly to localized tissues or organs. Such localized estimates include: 0.5 to 1 mrem to the average annual population lung dose equivalent (generalized); 2 rem to the average annual population bronchial epithelial dose equivalent (localized); and 10 to 15 rem to the average annual population basal mucosal dose equivalent (basal mucosa of the gum). Based on these estimates, these sources may be classified as those for which many people are involved and the dose equivalent is relatively large; those for which many people are involved but the dose equivalent is relatively small; and those for which the dose equivalent is relatively large but the number of people involved is small.

  7. On Theoretical Broadband Shock-Associated Noise Near-Field Cross-Spectra

    NASA Technical Reports Server (NTRS)

    Miller, Steven A. E.

    2015-01-01

    The cross-spectral acoustic analogy is used to predict auto-spectra and cross-spectra of broadband shock-associated noise in the near-field and far-field from a range of heated and unheated supersonic off-design jets. A single equivalent source model, containing flow-field statistics of the shock wave shear layer interactions, is proposed for the near-field, mid-field, and far-field terms. Flow-field statistics are modeled based upon experimental observation and computational fluid dynamics solutions. An axisymmetric assumption is used to reduce the model to a closed-form equation involving a double summation over the equivalent source at each shock wave shear layer interaction. Predictions are compared with a wide variety of measurements at numerous jet Mach numbers and temperature ratios from multiple facilities. Auto-spectral predictions of broadband shock-associated noise in the near-field and far-field capture trends observed in measurement and other prediction theories. Predictions of the spatial coherence of broadband shock-associated noise accurately capture the peak coherent intensity, frequency, and spectral width.

  8. Source characterization and modeling development for monoenergetic-proton radiography experiments on OMEGA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Manuel, M. J.-E.; Zylstra, A. B.; Rinderknecht, H. G.

    2012-06-15

    A monoenergetic proton source has been characterized and a modeling tool developed for proton radiography experiments at the OMEGA [T. R. Boehly et al., Opt. Comm. 133, 495 (1997)] laser facility. Multiple diagnostics were fielded to measure global isotropy levels in proton fluence, and images of the proton source itself provided information on local uniformity relevant to proton radiography experiments. Global fluence uniformity was assessed by multiple yield diagnostics and deviations were calculated to be ~16% and ~26% of the mean for DD and D³He fusion protons, respectively. From individual fluence images, it was found that angular frequencies ≳ 50 rad⁻¹ contributed less than a few percent to local nonuniformity levels. A model was constructed using the Geant4 [S. Agostinelli et al., Nuc. Inst. Meth. A 506, 250 (2003)] framework to simulate proton radiography experiments. The simulation implements realistic source parameters and various target geometries. The model was benchmarked with the radiographs of cold-matter targets to within experimental accuracy. To validate the use of this code, the cold-matter approximation for the scattering of fusion protons in plasma is discussed using a typical laser-foil experiment as an example case. It is shown that an analytic cold-matter approximation is accurate to within ≲ 10% of the analytic plasma model in the example scenario.

  9. Multispectral data compression through transform coding and block quantization

    NASA Technical Reports Server (NTRS)

    Ready, P. J.; Wintz, P. A.

    1972-01-01

    Transform coding and block quantization techniques are applied to multispectral aircraft scanner data, and digitized satellite imagery. The multispectral source is defined and an appropriate mathematical model proposed. The Karhunen-Loeve, Fourier, and Hadamard encoders are considered and are compared to the rate distortion function for the equivalent Gaussian source and to the performance of the single sample PCM encoder.
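The transform-coding pipeline summarized above (decorrelate the bands with the Karhunen-Loeve transform, then quantize each coefficient with a bit budget tied to its variance) can be sketched as follows. The synthetic data, total bit budget, and log-variance allocation rule are illustrative assumptions, not the encoders evaluated in the record:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "multispectral" source: correlated 4-band pixel vectors.
n_bands, n_pix = 4, 5000
mixing = rng.normal(size=(n_bands, n_bands))
x = mixing @ rng.normal(size=(n_bands, n_pix))

# Karhunen-Loeve transform: eigenvectors of the band covariance matrix
# decorrelate the components so each can be quantized independently.
cov = np.cov(x)
eigval, eigvec = np.linalg.eigh(cov)          # ascending eigenvalues
basis = eigvec[:, np.argsort(eigval)[::-1]]   # descending-variance order
y = basis.T @ x                                # decorrelated coefficients

# Block quantization: allocate bits per component in proportion to
# log-variance (a standard high-rate allocation rule), then quantize.
var = y.var(axis=1)
geo_mean = np.exp(np.mean(np.log(var)))
bits = np.maximum(0, np.round(
    8 + 0.5 * np.log2(var / geo_mean))).astype(int)

def uniform_quantize(c, b):
    """Midpoint uniform quantizer with 2**b levels over the data range."""
    if b == 0:
        return np.zeros_like(c)
    lo, hi = c.min(), c.max()
    step = (hi - lo) / 2 ** b
    idx = np.clip(np.floor((c - lo) / step), 0, 2 ** b - 1)
    return lo + (idx + 0.5) * step

y_hat = np.vstack([uniform_quantize(y[i], bits[i]) for i in range(n_bands)])
x_hat = basis @ y_hat                          # reconstruct in band space

mse = np.mean((x - x_hat) ** 2)
print(f"bit allocation: {bits}, reconstruction MSE: {mse:.6f}")
```

With an average of 8 bits per transformed coefficient, the reconstruction error is a small fraction of the signal power, which is the point of decorrelating before quantizing.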

  10. Comparison of sound reproduction using higher order loudspeakers and equivalent line arrays in free-field conditions.

    PubMed

    Poletti, Mark A; Betlehem, Terence; Abhayapala, Thushara D

    2014-07-01

    Higher order sound sources of Nth order can radiate sound with 2N + 1 orthogonal radiation patterns, which can be represented as phase modes or, equivalently, amplitude modes. This paper shows that each phase mode response produces a spiral wave front with a different spiral rate, and therefore a different direction of arrival of sound. Hence, for a given receiver position a higher order source is equivalent to a linear array of 2N + 1 monopole sources. This interpretation suggests performance similar to a circular array of higher order sources can be produced by an array of sources, each of which consists of a line array having monopoles at the apparent source locations of the corresponding phase modes. Simulations of higher order arrays and arrays of equivalent line sources are presented. It is shown that the interior fields produced by the two arrays are essentially the same, but that the exterior fields differ because the higher order sources produces different equivalent source locations for field positions outside the array. This work provides an explanation of the fact that an array of L Nth order sources can reproduce sound fields whose accuracy approaches the performance of (2N + 1)L monopoles.
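The equivalence argued above rests on summing free-field monopole Green's functions placed at the apparent source locations. A minimal sketch of that superposition for a 2N+1 element line array, assuming an arbitrary 5 cm element spacing and a 1 kHz tone (neither value is taken from the record):

```python
import numpy as np

k = 2 * np.pi * 1000 / 343.0           # wavenumber at 1 kHz, c = 343 m/s

def monopole(p_src, p_rec, amp=1.0):
    """Free-field monopole Green's function: amp * exp(ikr) / (4 pi r)."""
    r = np.linalg.norm(np.asarray(p_rec) - np.asarray(p_src))
    return amp * np.exp(1j * k * r) / (4 * np.pi * r)

# A line array of 2N+1 monopoles standing in for an Nth-order source.
N = 2
spacing = 0.05                          # assumed 5 cm spacing, illustration only
sources = [(0.0, m * spacing, 0.0) for m in range(-N, N + 1)]

# Interior-field receiver: the total pressure is the coherent sum of the
# individual monopole contributions.
receiver = (1.5, 0.2, 0.0)
p_total = sum(monopole(s, receiver) for s in sources)
print(abs(p_total))
```

Each phase-mode weighting would simply replace `amp=1.0` with a complex coefficient per element; the superposition structure is unchanged.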

  11. Equivalent magnetization over the World's Ocean and the World Digital Magnetic Anomaly Map

    NASA Astrophysics Data System (ADS)

    Dyment, Jerome; Choi, Yujin; Hamoudi, Mohamed; Thébault, Erwan; Quesnel, Yoann; Roest, Walter; Lesur, Vincent

    2014-05-01

    As a by-product of our recent work to build a candidate model over the oceans for the second version of the World Digital Magnetic Anomaly Map (WDMAM), we derived global distributions of the equivalent magnetization in oceanic domains. In a first step, we use classic point source forward modeling on a spherical Earth to build a forward model of the marine magnetic anomalies at sea-surface. We estimate magnetization vectors using the age map of the ocean floor, the relative plate motions, the apparent polar wander path for Africa, and a geomagnetic reversal time scale. We assume two possible magnetized source geometries, both involving a 1 km-thick layer bearing a 10 A/m magnetization, either on a regular spherical shell with a constant, 5 km-deep bathymetry (simple geometry) or following the topography of the oceanic basement as defined by the bathymetry and sedimentary thickness (realistic geometry). Adding a present-day geomagnetic field model allows the computation of our initial magnetic anomaly model. In a second step, we adjust this model to the existing marine magnetic anomaly data, in order to make it consistent with these data. To do so, we extract synthetic magnetic anomalies along the ship tracks for which real data are available and we compare quantitatively the measured and computed anomalies on 100, 200 or 400 km-long sliding windows (depending on the spreading rate). Among the possible comparison criteria, we discard the maximal range (too dependent on local values) and the correlation and coherency (the geographical adjustment between model and data being not accurate enough) in favor of the standard deviation around the mean value. The ratio between the standard deviations of data and model on each sliding window represents an estimate of the magnetization ratio causing the anomalies, which we interpolate to adjust the initial magnetic anomaly model to the data and therefore compute a final model to be included in our WDMAM candidate over the oceanic regions lacking data. This ratio, referred to the magnetization of 10 A/m used in the model, represents an estimate of the equivalent magnetization under the considered magnetized source geometry. The resulting distributions of equivalent magnetization are further discussed in terms of mid-ocean ridges, presence of hotspots and oceanic plateaus, and the age of the oceanic lithosphere. Global marine magnetic data sets and models represent a useful tool to assess first order magnetic properties of the oceanic lithosphere.
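The sliding-window comparison described above reduces to a per-window ratio of standard deviations. A minimal sketch on synthetic tracks, assuming (for illustration) that the ratio is scaled by the model's 10 A/m source magnetization so the result carries units of A/m:

```python
import numpy as np

def magnetization_ratio(data, model, window):
    """Per-window ratio of standard deviations of measured vs. modeled
    anomalies along a track (non-overlapping windows for simplicity)."""
    n_win = len(data) // window
    out = np.empty(n_win)
    for i in range(n_win):
        seg = slice(i * window, (i + 1) * window)
        out[i] = np.std(data[seg]) / np.std(model[seg])
    return out

rng = np.random.default_rng(1)
# Synthetic track: the "measured" anomalies are half the modeled
# amplitude plus small noise, so the true ratio is about 0.5.
model_anom = rng.normal(0.0, 100.0, size=1000)            # nT
data_anom = 0.5 * model_anom + rng.normal(0.0, 5.0, 1000)

M_MODEL = 10.0                      # A/m assumed in the forward model
ratios = magnetization_ratio(data_anom, model_anom, window=200)
equiv_mag = ratios * M_MODEL        # equivalent magnetization estimate, A/m
print(ratios, equiv_mag)
```

On this synthetic track every window recovers a ratio near 0.5, i.e. an equivalent magnetization near 5 A/m.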

  12. Frequency Response of an Aircraft Wing with Discrete Source Damage Using Equivalent Plate Analysis

    NASA Technical Reports Server (NTRS)

    Krishnamurthy, T.; Eldred, Lloyd B.

    2007-01-01

    An equivalent plate procedure is developed to provide a computationally efficient means of matching the stiffness and frequencies of flight vehicle wing structures for prescribed loading conditions. Several new approaches are proposed and studied to match the stiffness and first five natural frequencies of the two reference models with and without damage. One approach divides the candidate reference plate into multiple zones in which stiffness and mass can be varied using a variety of materials including aluminum, graphite-epoxy, and foam-core graphite-epoxy sandwiches. Another approach places point masses along the edge of the stiffness-matched plate to tune the natural frequencies. Both approaches are successful at matching the stiffness and natural frequencies of the reference plates and provide useful insight into determination of crucial features in equivalent plate models of aircraft wing structures.

  13. Design of HIFU transducers for generating specified nonlinear ultrasound fields

    PubMed Central

    Rosnitskiy, Pavel B.; Yuldashev, Petr V.; Sapozhnikov, Oleg A.; Maxwell, Adam; Kreider, Wayne; Bailey, Michael R.; Khokhlova, Vera A.

    2016-01-01

    Various clinical applications of high intensity focused ultrasound (HIFU) have different requirements for the pressure levels and degree of nonlinear waveform distortion at the focus. The goal of this work was to determine transducer design parameters that produce either a specified shock amplitude in the focal waveform or specified peak pressures while still maintaining quasilinear conditions at the focus. Multi-parametric nonlinear modeling based on the KZK equation with an equivalent source boundary condition was employed. Peak pressures, shock amplitudes at the focus, and corresponding source outputs were determined for different transducer geometries and levels of nonlinear distortion. Results are presented in terms of the parameters of an equivalent single-element, spherically shaped transducer. The accuracy of the method and its applicability to cases of strongly focused transducers were validated by comparing the KZK modeling data with measurements and nonlinear full-diffraction simulations for a single-element source and arrays with 7 and 256 elements. The results provide look-up data for evaluating nonlinear distortions at the focus of existing therapeutic systems as well as for guiding the design of new transducers that generate specified nonlinear fields. PMID:27775904

  14. The Prediction of Scattered Broadband Shock-Associated Noise

    NASA Technical Reports Server (NTRS)

    Miller, Steven A. E.

    2015-01-01

    A mathematical model is developed for the prediction of scattered broadband shock-associated noise. Model arguments are dependent on the vector Green's function of the linearized Euler equations, steady Reynolds-averaged Navier-Stokes solutions, and the two-point cross-correlation of the equivalent source. The equivalent source is dependent on steady Reynolds-averaged Navier-Stokes solutions of the jet flow, that capture the nozzle geometry and airframe surface. Contours of the time-averaged streamwise velocity component and turbulent kinetic energy are examined with varying airframe position relative to the nozzle exit. Propagation effects are incorporated by approximating the vector Green's function of the linearized Euler equations. This approximation involves the use of ray theory and an assumption that broadband shock-associated noise is relatively unaffected by the refraction of the jet shear layer. A non-dimensional parameter is proposed that quantifies the changes of the broadband shock-associated noise source with varying jet operating condition and airframe position. Scattered broadband shock-associated noise possesses a second set of broadband lobes that are due to the effect of scattering. Presented predictions demonstrate relatively good agreement compared to a wide variety of measurements.

  15. A model for jet-noise analysis using pressure-gradient correlations on an imaginary cone

    NASA Technical Reports Server (NTRS)

    Norum, T. D.

    1974-01-01

    The technique for determining the near and far acoustic field of a jet through measurements of pressure-gradient correlations on an imaginary conical surface surrounding the jet is discussed. The necessary analytical developments are presented, and their feasibility is checked by using a point source as the sound generator. The distribution of the apparent sources on the cone, equivalent to the point source, is determined in terms of the pressure-gradient correlations.

  16. A comprehensive equivalent circuit model of all-vanadium redox flow battery for power system analysis

    NASA Astrophysics Data System (ADS)

    Zhang, Yu; Zhao, Jiyun; Wang, Peng; Skyllas-Kazacos, Maria; Xiong, Binyu; Badrinarayanan, Rajagopalan

    2015-09-01

    Electrical equivalent circuit models demonstrate excellent adaptability and simplicity in predicting the electrical dynamic response of the all-vanadium redox flow battery (VRB) system. However, only a few publications focus on this topic. This paper presents a comprehensive equivalent circuit model of the VRB for system-level analysis. The least squares method is used to identify both steady-state and dynamic characteristics of the VRB. Inherent features of the flow battery such as shunt current, ion diffusion and pumping energy consumption are also considered. The proposed model consists of an open-circuit voltage source, two parasitic shunt bypass circuits, a first-order resistor-capacitor network and a hydraulic circuit model. Validated against experimental data, the proposed model demonstrates excellent accuracy: the mean errors of terminal voltage and pump consumption are 0.09 V and 0.49 W, respectively. Based on the proposed model, self-discharge and system efficiency are studied, and an optimal flow rate that maximizes the system efficiency is identified. Finally, the dynamic responses of the proposed VRB model under step current profiles are presented, providing variables such as SOC and stack terminal voltage.
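The core of such a model can be exercised with a stripped-down sketch: an open-circuit voltage source, an ohmic resistance, and a single first-order RC polarization branch under a step current (the shunt bypass and hydraulic circuits are omitted, and all parameter values are illustrative, not the fitted VRB values):

```python
import numpy as np

# Illustrative cell parameters: open-circuit voltage, ohmic resistance,
# and one R-C polarization branch. Not fitted VRB data.
V_OC, R0, R1, C1 = 1.40, 0.010, 0.005, 500.0   # V, ohm, ohm, F

def terminal_voltage(i_load, dt):
    """Discharge terminal voltage for a stepwise current profile (A),
    integrating the RC branch voltage with forward Euler."""
    v_rc = 0.0
    v_out = []
    for i in i_load:
        v_rc += (i - v_rc / R1) / C1 * dt      # capacitor current / C1
        v_out.append(V_OC - i * R0 - v_rc)
    return np.asarray(v_out)

t = np.arange(0.0, 30.0, 0.1)
current = np.where(t < 15.0, 20.0, 0.0)        # 20 A step, then rest
v = terminal_voltage(current, dt=0.1)
print(v[0], v[149], v[-1])
```

The voltage drops instantly by `i*R0`, relaxes toward `V_OC - i*(R0+R1)` with time constant `R1*C1`, and recovers toward `V_OC` after the current is removed, which is the qualitative step response such models are fitted to reproduce.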

  17. Experimental investigation of microwave interaction with magnetoplasma in miniature multipolar configuration using impedance measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dey, Indranuj, E-mail: indranuj@aees.kyushu-u.ac.jp; Toyoda, Yuji; Yamamoto, Naoji

    2014-09-15

    A miniature microwave plasma source employing both radial and axial magnetic fields for plasma confinement has been developed for micro-propulsion applications. Plasma is initiated by launching microwaves via a short monopole antenna to circumvent geometrical cutoff limitations. The amplitude and phase of the forward and reflected microwave power is measured to obtain the complex reflection coefficient from which the equivalent impedance of the plasma source is determined. Effect of critical plasma density condition is reflected in the measurements and provides insight into the working of the miniature plasma source. A basic impedance calculation model is developed to help in understanding the experimental observations. From experiment and theory, it is seen that the equivalent impedance magnitude is controlled by the coaxial discharge boundary conditions, and the phase is influenced primarily by the plasma immersed antenna impedance.

  18. 78 FR 73128 - Dividend Equivalents From Sources Within the United States

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-05

    ... Dividend Equivalents From Sources Within the United States AGENCY: Internal Revenue Service (IRS), Treasury... dividends, and the amount of the dividend equivalents. This information is required to establish whether a... valid control number assigned by the Office of Management and Budget. Books or records relating to a...

  19. Experimental characterization of the AFIT neutron facility. Master's thesis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lessard, O.J.

    1993-09-01

    AFIT's Neutron Facility was characterized for room-return neutrons using a (252)Cf source and a Bonner sphere spectrometer with three experimental models: the shadow shield, the Eisenhauer, Schwartz, and Johnson (ESJ), and the polynomial models. The free-field fluences at one meter from the ESJ and polynomial models were compared to the equivalent value from the accepted experimental shadow shield model to determine the suitability of the models in the AFIT facility. The polynomial model behaved erratically, as expected, while the ESJ model compared to within 4.8% of the shadow shield model results for the four Bonner sphere calibration. The ratio of total fluence to free-field fluence at one meter for the ESJ model was then compared to the equivalent ratio obtained by a Monte Carlo Neutron-Photon transport code (MCNP), an accepted computational model. The ESJ model compared to within 6.2% of the MCNP results. AFIT's fluence ratios were compared to equivalent ratios reported by three other neutron facilities, which verified that AFIT's results fit previously published trends based on room volumes. The ESJ model appeared adequate for health physics applications and was chosen for calibration of the AFIT facility. Keywords: Neutron Detector, Bonner Sphere, Neutron Dosimetry, Room Characterization.

  20. Exploring equivalence domain in nonlinear inverse problems using Covariance Matrix Adaption Evolution Strategy (CMAES) and random sampling

    NASA Astrophysics Data System (ADS)

    Grayver, Alexander V.; Kuvshinov, Alexey V.

    2016-05-01

    This paper presents a methodology to sample the equivalence domain (ED) in nonlinear partial differential equation (PDE)-constrained inverse problems. For this purpose, we first applied the state-of-the-art stochastic optimization algorithm known as Covariance Matrix Adaptation Evolution Strategy (CMAES) to identify low-misfit regions of the model space. These regions were then randomly sampled to create an ensemble of equivalent models and quantify uncertainty. CMAES is aimed at exploring the model space globally and is robust on very ill-conditioned problems. We show that the number of iterations required to converge grows at a moderate rate with respect to the number of unknowns and that the algorithm is embarrassingly parallel. We formulated the problem using the generalized Gaussian distribution, which enabled us to seamlessly use arbitrary norms for the residual and regularization terms. We show that various regularization norms facilitate studying different classes of equivalent solutions. We further show how the performance of the standard Metropolis-Hastings Markov chain Monte Carlo algorithm can be substantially improved by using the information CMAES provides. This methodology was tested using individual and joint inversions of magnetotelluric, controlled-source electromagnetic (EM) and global EM induction data.
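The two-stage idea (global stochastic optimization to locate a low-misfit region, then random sampling to collect an ensemble of equivalent models) can be illustrated on a toy misfit. The sketch below uses a heavily simplified (mu, lambda) evolution strategy in place of full CMAES, and an arbitrary misfit threshold to define the equivalence domain:

```python
import numpy as np

rng = np.random.default_rng(2)

def misfit(m):
    """Toy nonlinear misfit with a curved valley, standing in for a
    PDE-constrained objective; minimum at (1, 1)."""
    return (m[0] - 1.0) ** 2 + 10.0 * (m[0] ** 2 - m[1]) ** 2

# Stage 1: stripped-down (mu, lambda) evolution strategy. Full CMAES
# would also adapt the covariance matrix from evolution paths.
dim, lam, mu, sigma = 2, 16, 4, 0.5
mean = np.zeros(dim)
for _ in range(60):
    pop = mean + sigma * rng.normal(size=(lam, dim))
    pop = pop[np.argsort([misfit(p) for p in pop])]  # rank by fitness
    mean = pop[:mu].mean(axis=0)                     # recombine the best mu
    sigma *= 0.95            # crude decay instead of path-based step control

# Stage 2: random sampling around the low-misfit point; keep models whose
# misfit stays below a threshold, forming an ensemble of equivalent models.
threshold = misfit(mean) + 0.05
candidates = mean + 0.1 * rng.normal(size=(2000, dim))
ensemble = np.array([m for m in candidates if misfit(m) <= threshold])
print(mean, len(ensemble))
```

The spread of `ensemble` along the valley is exactly the equivalence the record discusses: many distinct models fit the data comparably well.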

  1. Magsat equivalent source anomalies over the southeastern United States - Implications for crustal magnetization

    NASA Technical Reports Server (NTRS)

    Ruder, M. E.; Alexander, S. S.

    1986-01-01

    The Magsat crustal anomaly field depicts a previously-unidentified long-wavelength negative anomaly centered over southeastern Georgia. Examination of Magsat ascending and descending passes clearly identifies the anomalous region, despite the high-frequency noise present in the data. Using ancillary seismic, electrical conductivity, Bouguer gravity, and aeromagnetic data, a preliminary model of crustal magnetization for the southern Appalachian region is presented. A lower crust characterized by a pervasive negative magnetization contrast extends from the New York-Alabama lineament southeast to the Fall Line. In southern Georgia and eastern Alabama (coincident with the Brunswick Terrane), the model calls for lower crustal magnetization contrast of -2.4 A/m; northern Georgia and the Carolinas are modeled with contrasts of -1.5 A/m. Large-scale blocks in the upper crust which correspond to the Blue Ridge, Charlotte belt, and Carolina Slate belt, are modeled with magnetization contrasts of -1.2 A/m, 1.2 A/m, and 1.2 A/m respectively. The model accurately reproduces the amplitude of the observed low in the equivalent source Magsat anomaly field calculated at 325 km altitude and is spatially consistent with the 400 km lowpass-filtered aeromagnetic map of the region.

  2. Viscous remanent magnetization model for the Broken Ridge satellite magnetic anomaly

    NASA Technical Reports Server (NTRS)

    Johnson, B. D.

    1985-01-01

    An equivalent source model solution of the satellite magnetic field over Australia obtained by Mayhew et al. (1980) showed that the satellite anomalies could be related to geological features in Australia. When the processing and selection of the Magsat data over the Australian region had progressed to the point where interpretation procedures could be initiated, it was decided to start by attempting to model the Broken Ridge satellite anomaly, which represents one of the very few relatively isolated anomalies in the Magsat maps, with an unambiguous source region. Attention is given to details concerning the Broken Ridge satellite magnetic anomaly, the modeling method used, the Broken Ridge models, modeling results, and characteristics of magnetization.

  3. Study on acoustical properties of sintered bronze porous material for transient exhaust noise of pneumatic system

    NASA Astrophysics Data System (ADS)

    Li, Jingxiang; Zhao, Shengdun; Ishihara, Kunihiko

    2013-05-01

    A novel approach is presented to study the acoustical properties of sintered bronze material, especially as used to suppress the transient noise generated by the pneumatic exhaust of pneumatic friction clutch and brake (PFC/B) systems. The transient exhaust noise is impulsive and harmful due to its high sound pressure level (SPL) and high-frequency content. In this paper, the exhaust noise is related to the transient impulsive exhaust, which is described by a one-dimensional aerodynamic model combined with a pressure drop expression based on the Ergun equation. A relation between the flow parameters and the sound source is established. Additionally, a piston acoustic source approximation of the sintered bronze silencer with cylindrical geometry is presented to predict the SPL spectrum at a far-field observation point. A semi-phenomenological model is introduced to analyze sound propagation and reduction in the sintered bronze material, which is treated as an equivalent fluid with a rigid frame. Experimental results under different initial cylinder pressures are shown to corroborate the validity of the proposed aerodynamic model. In addition, the sound pressures calculated from the equivalent sound source are compared with the measured noise signals in both the time domain and the frequency domain. Influences of the porosity of the sintered bronze material are also discussed.
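The Ergun equation referenced above gives the pressure drop through a porous layer as a viscous (Blake-Kozeny) term plus an inertial (Burke-Plummer) term. A sketch with illustrative porosity, particle size, and layer thickness (none of these values come from the record):

```python
def ergun_pressure_drop(u, eps, d_p, L, mu=1.8e-5, rho=1.2):
    """Ergun equation pressure drop (Pa) across a porous layer.
    u: superficial velocity (m/s), eps: porosity, d_p: effective
    particle diameter (m), L: layer thickness (m). Defaults are
    approximate air properties at room temperature."""
    viscous = 150.0 * mu * (1 - eps) ** 2 * u / (eps ** 3 * d_p ** 2)
    inertial = 1.75 * rho * (1 - eps) * u ** 2 / (eps ** 3 * d_p)
    return (viscous + inertial) * L

# Illustrative numbers for a sintered bronze silencer element (assumed).
dp = ergun_pressure_drop(u=10.0, eps=0.35, d_p=100e-6, L=3e-3)
print(f"{dp:.0f} Pa")
```

At low velocity the viscous term dominates and the drop is linear in `u`; at the high velocities of an impulsive exhaust the quadratic inertial term takes over.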

  4. Dioxin equivalency: Challenge to dose extrapolation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, J.F. Jr.; Silkworth, J.B.

    1995-12-31

    Extensive research has shown that all biological effects of dioxin-like agents are mediated via a single biochemical target, the Ah receptor (AhR), and that the relative biologic potencies of such agents in any given system, coupled with their exposure levels, may be described in terms of toxic equivalents (TEQ). It has also shown that the TEQ sources include not only chlorinated species such as the dioxins (PCDDs), PCDFs, and coplanar PCBs, but also non-chlorinated substances such as the PAHs of wood smoke, the AhR agonists of cooked meat, and the indolocarbazole (ICZ) derived from cruciferous vegetables. Humans have probably had elevated exposures to these non-chlorinated TEQ sources ever since the discoveries of fire, cooking, and the culinary use of Brassica spp. Recent assays of CYP1A2 induction show that these "natural" or "traditional" AhR agonists are contributing 50-100 times as much to average human TEQ exposures as do the chlorinated xenobiotics. Currently, the safe doses of the xenobiotic TEQ sources are estimated from their NOAELs and large extrapolation factors, derived from arbitrary mathematical models, whereas the NOAELs themselves are regarded as the safe doses for the TEQs of traditional dietary components. Available scientific data can neither support nor refute either approach to assessing the health risk of an individual chemical substance. However, if two substances are toxicologically equivalent, then their TEQ-adjusted health risks must also be equivalent, and the same dose extrapolation procedure should be used for both.
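The TEQ bookkeeping underlying this discussion is a weighted sum: each congener's exposure level times its toxic equivalency factor (TEF) relative to 2,3,7,8-TCDD. A sketch with placeholder TEF values and concentrations (not a regulatory table):

```python
# Toxic equivalents (TEQ): each congener's concentration is weighted by
# its TEF relative to 2,3,7,8-TCDD (TEF = 1.0 by definition).
# TEF and concentration values here are illustrative placeholders.
tef = {"2,3,7,8-TCDD": 1.0, "PeCDF": 0.3, "PCB-126": 0.1}
conc_pg_per_g = {"2,3,7,8-TCDD": 0.5, "PeCDF": 2.0, "PCB-126": 4.0}

teq = sum(conc_pg_per_g[c] * tef[c] for c in tef)
print(f"TEQ = {teq:.2f} pg/g")
```

The record's argument is precisely that this sum, not the identity of the individual congener, should drive the dose extrapolation.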

  5. The broad-band X-ray spectral variability of Mrk 841

    NASA Technical Reports Server (NTRS)

    George, I. M.; Nandra, K.; Fabian, A. C.; Turner, T. J.; Done, C.; Day, C. S. R.

    1993-01-01

    A detailed spectral analysis of five X-ray observations of Mrk 841 with the EXOSAT, Ginga, and ROSAT satellites is reported. Variability is apparent in both the soft (0.1-1.0 keV) and medium (1-20 keV) energy bands. Above 1 keV, the spectra are adequately modeled by a power law with a strong emission line of equivalent width 450 eV. The large equivalent width of the emission line indicates a strongly enhanced reflection component of the source compared with other Seyferts observed with Ginga. The implications of the results of the analysis for physical models of the emission regions in this and other X-ray bright Seyferts are briefly examined.

  6. Improving simulations of snow water equivalent and total water storage changes over the Upper Yangtze River basin using multi-source remote sensing data

    NASA Astrophysics Data System (ADS)

    Han, P.; Long, D.

    2017-12-01

    Snow water equivalent (SWE) and total water storage (TWS) changes are important hydrological state variables over cryospheric regions, such as China's Upper Yangtze River (UYR) basin. Accurate simulation of these two state variables plays a critical role in understanding hydrological processes over this region and, in turn, benefits water resource management, hydropower development, and ecological integrity over the lower reaches of the Yangtze River, one of the largest rivers globally. In this study, an improved CREST model coupled with a snow and glacier melting module was used to simulate SWE and TWS changes over the UYR, and to quantify contributions of snow and glacier meltwater to the total runoff. Forcing, calibration, and validation data are mainly from multi-source remote sensing observations, including satellite-based precipitation estimates, passive microwave remote sensing-based SWE, and GRACE-derived TWS changes, along with streamflow measurements at the Zhimenda gauging station. Results show that multi-source remote sensing information can be extremely valuable for model forcing, calibration, and validation over this poorly gauged region. The simulated SWE and TWS changes are highly consistent with the observed counterparts, showing NSE coefficients higher than 0.8. The results also show that the contributions of snow and glacier meltwater to the total runoff are 8% and 6%, respectively, during the period 2003-2014, making meltwater an important source of runoff. Moreover, the TWS is found to increase at a rate of approximately 5 mm/a (0.72 Gt/a) over 2003-2014. The snow melting module, which may overestimate SWE for high precipitation events, was improved in this study. Key words: CREST model; Remote Sensing; Melting model; Source Region of the Yangtze River
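The NSE coefficients quoted above follow the standard Nash-Sutcliffe definition and can be computed directly; the observation and simulation series below are invented purely for illustration:

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of observations
    about their mean. 1 is a perfect fit; 0 means the simulation is no
    better than predicting the observed mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Invented observed vs. simulated series (e.g., SWE in mm).
obs = np.array([10.0, 14.0, 20.0, 26.0, 18.0, 12.0])
sim = np.array([11.0, 13.0, 19.0, 27.0, 17.0, 13.0])
print(round(nse(obs, sim), 3))
```

An NSE above 0.8, as reported in the record, indicates the simulation explains most of the observed variance.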

  7. A charging model for three-axis stabilized spacecraft

    NASA Technical Reports Server (NTRS)

    Massaro, M. J.; Green, T.; Ling, D.

    1977-01-01

    A charging model was developed for geosynchronous, three-axis stabilized spacecraft under the influence of a geomagnetic substorm. The differential charging potentials between the thermally coated or blanketed outer surfaces and the metallic structure of a spacecraft were determined when the spacecraft was immersed in a dense plasma cloud of energetic particles. The spacecraft-to-environment interaction was determined by representing the charged particle environment by equivalent current source forcing functions and by representing the spacecraft by its electrically equivalent circuit with respect to the plasma charging phenomenon. The charging model included a sun/earth/spacecraft orbit model that simulated the sun illumination conditions of the spacecraft outer surfaces throughout the orbital flight on a diurnal as well as a seasonal basis. Transient and steady-state numerical results for a three-axis stabilized spacecraft are presented.

  8. The Scaling of Broadband Shock-Associated Noise with Increasing Temperature

    NASA Technical Reports Server (NTRS)

    Miller, Steven A.

    2012-01-01

    A physical explanation for the saturation of broadband shock-associated noise (BBSAN) intensity with increasing jet stagnation temperature has eluded investigators. An explanation is proposed for this phenomenon with the use of an acoustic analogy. For this purpose the acoustic analogy of Morris and Miller is examined. To isolate the relevant physics, the scaling of BBSAN at the peak intensity level at the sideline (psi = 90 degrees) observer location is examined. Scaling terms are isolated from the acoustic analogy and the result is compared using a convergent nozzle with the experiments of Bridges and Brown and using a convergent-divergent nozzle with the experiments of Kuo, McLaughlin, and Morris at four nozzle pressure ratios in increments of total temperature ratios from one to four. The equivalent source within the framework of the acoustic analogy for BBSAN is based on local field quantities at shock wave shear layer interactions. The equivalent source combined with accurate calculations of the propagation of sound through the jet shear layer, using an adjoint vector Green's function solver of the linearized Euler equations, allows for predictions that retain the scaling with respect to stagnation pressure and allows for the accurate saturation of BBSAN with increasing stagnation temperature. This is a minor change to the source model relative to the previously developed models. The full development of the scaling term is shown. The sources and vector Green's function solver are informed by steady Reynolds-Averaged Navier-Stokes solutions. These solutions are examined as a function of stagnation temperature at the first shock wave shear layer interaction. It is discovered that saturation of BBSAN with increasing jet stagnation temperature occurs due to a balance between the amplification of the sound propagation through the shear layer and the source term scaling.

  9. Design of HIFU Transducers for Generating Specified Nonlinear Ultrasound Fields.

    PubMed

    Rosnitskiy, Pavel B; Yuldashev, Petr V; Sapozhnikov, Oleg A; Maxwell, Adam D; Kreider, Wayne; Bailey, Michael R; Khokhlova, Vera A

    2017-02-01

    Various clinical applications of high-intensity focused ultrasound have different requirements for the pressure levels and degree of nonlinear waveform distortion at the focus. The goal of this paper is to determine transducer design parameters that produce either a specified shock amplitude in the focal waveform or specified peak pressures while still maintaining quasi-linear conditions at the focus. Multiparametric nonlinear modeling based on the Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation with an equivalent source boundary condition was employed. Peak pressures, shock amplitudes at the focus, and corresponding source outputs were determined for different transducer geometries and levels of nonlinear distortion. The results are presented in terms of the parameters of an equivalent single-element spherically shaped transducer. The accuracy of the method and its applicability to cases of strongly focused transducers were validated by comparing the KZK modeling data with measurements and nonlinear full diffraction simulations for a single-element source and arrays with 7 and 256 elements. The results provide look-up data for evaluating nonlinear distortions at the focus of existing therapeutic systems as well as for guiding the design of new transducers that generate specified nonlinear fields.

  10. Adipose-derived stromal cells for the reconstruction of a human vesical equivalent.

    PubMed

    Rousseau, Alexandre; Fradette, Julie; Bernard, Geneviève; Gauvin, Robert; Laterreur, Véronique; Bolduc, Stéphane

    2015-11-01

    Despite a wide panel of tissue-engineering models available for vesical reconstruction, the lack of a differentiated urothelium remains their main common limitation. For the first time to our knowledge, an entirely human vesical equivalent, free of exogenous matrix, has been reconstructed using the self-assembly method. Moreover, we tested the contribution of adipose-derived stromal cells, an easily available source of mesenchymal cells featuring many potential advantages, by reconstructing three types of equivalent, named fibroblast vesical equivalent, adipose-derived stromal cell vesical equivalent and hybrid vesical equivalent--the latter containing both adipose-derived stromal cells and fibroblasts. The new substitutes have been compared and characterized for matrix composition and organization, functionality and mechanical behaviour. Although all three vesical equivalents displayed adequate collagen type I and III expression, only two of them, fibroblast vesical equivalent and hybrid vesical equivalent, sustained the development of a differentiated and functional urothelium. The presence of uroplakins Ib, II and III and the tight junction marker ZO-1 was detected and correlated with impermeability. The mechanical resistance of these tissues was sufficient for use by surgeons. We present here in vitro tissue-engineered vesical equivalents, built without the use of any exogenous matrix, able to sustain mechanical stress and to support the formation of a functional urothelium, i.e. able to display a barrier function similar to that of native tissue. Copyright © 2013 John Wiley & Sons, Ltd.

  11. Iterative combination of national phenotype, genotype, pedigree, and foreign information

    USDA-ARS?s Scientific Manuscript database

    Single step methods can combine all sources of information into accurate rankings for animals with and without genotypes. Equations that require inverting the genomic relationship matrix G work well with limited numbers of animals, but equivalent models without inversion are needed as numbers increa...

  12. Experimental verification of a thermal equivalent circuit dynamic model on an extended range electric vehicle battery pack

    NASA Astrophysics Data System (ADS)

    Ramotar, Lokendra; Rohrauer, Greg L.; Filion, Ryan; MacDonald, Kathryn

    2017-03-01

    The development of a dynamic thermal battery model for hybrid and electric vehicles is realized. A thermal equivalent circuit model is created which aims to capture and understand the heat propagation from the cells through the entire pack and to the environment using a production vehicle battery pack for model validation. The inclusion of production hardware and the liquid battery thermal management system components into the model considers physical and geometric properties to calculate thermal resistances of components (conduction, convection and radiation) along with their associated heat capacity. Various heat sources/sinks comprise the remaining model elements. Analog equivalent circuit simulations using PSpice are compared to experimental results to validate internal temperature nodes and heat rates measured through various elements, which are then employed to refine the model further. Agreement with experimental results indicates the proposed method allows for a comprehensive real-time battery pack analysis at little computational expense when compared to other types of computer based simulations. Elevated road and ambient conditions in Mesa, Arizona are simulated on a parked vehicle with varying quiescent cooling rates to examine the effect on the diurnal battery temperature for longer term static exposure. A typical daily driving schedule is also simulated and examined.
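
The pack-level heat balance described above can be illustrated, in a much reduced form, by a single-node thermal equivalent circuit: one thermal resistance to ambient and one lumped heat capacity. This toy model is not the validated multi-node PSpice network of the paper; all values are illustrative.

```python
def simulate_pack_temp(q_watts, r_th, c_th, t_amb, t0, dt, steps):
    """Euler integration of a one-node thermal equivalent circuit:
    C*dT/dt = Q - (T - T_amb)/R, with thermal resistance R in K/W
    and heat capacity C in J/K."""
    T = t0
    history = [T]
    for _ in range(steps):
        T += dt * (q_watts - (T - t_amb) / r_th) / c_th
        history.append(T)
    return history

# 50 W of cell heating, R = 0.5 K/W to ambient, C = 20 kJ/K, ambient 25 C
temps = simulate_pack_temp(50.0, 0.5, 20e3, 25.0, 25.0, 1.0, 200000)
```

The steady-state rise is Q*R (here 25 K) and the time constant is R*C (here about 2.8 h), which is why diurnal ambient cycles matter for parked-vehicle exposure.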

  13. GEANT4 and PHITS simulations of the shielding of neutrons from the 252Cf source

    NASA Astrophysics Data System (ADS)

    Shin, Jae Won; Hong, Seung-Woo; Bak, Sang-In; Kim, Do Yoon; Kim, Chong Yeal

    2014-09-01

Monte Carlo simulations are performed using GEANT4 and PHITS to study the neutron-shielding abilities of several materials, such as graphite, iron, polyethylene, NS-4-FR and KRAFTON-HB. As a neutron source, 252Cf is considered. For the GEANT4 simulations, high-precision (G4HP) models with G4NDL 4.2, based on ENDF/B-VII data, are used. For the PHITS simulations, the JENDL-4.0 library is used. The neutron-dose-equivalent rates with and without five different shielding materials are estimated and compared with the experimental values. The differences between the shielding abilities calculated using GEANT4 with G4NDL 4.2 and PHITS with JENDL-4.0 are found not to be significant for all the cases considered in this work. The neutron-dose-equivalent rates obtained using GEANT4 and PHITS are compared with experimental data and other simulation results. Our neutron-dose-equivalent rates agree with the experimental rates to within 20%, except for polyethylene, for which the discrepancies between our calculations and the experiments are less than 40%, as observed in other simulation results.
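
As a crude point of comparison for such Monte Carlo studies, the following sketch applies simple exponential attenuation with an effective neutron removal length; transport codes such as GEANT4 and PHITS account for the scattering and spectral changes that this one-parameter model ignores. All numbers here are hypothetical.

```python
import math

def shielded_dose_rate(d0, thickness_cm, removal_length_cm):
    """Dose-equivalent rate behind a slab, assuming simple exponential
    attenuation with an effective removal length (a crude model; it
    neglects buildup, scattering, and spectral softening)."""
    return d0 * math.exp(-thickness_cm / removal_length_cm)

# hypothetical: 100 uSv/h unshielded, 10 cm slab, 6 cm removal length
rate = shielded_dose_rate(100.0, 10.0, 6.0)
```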

  14. Magnetic Field Analysis of Lorentz Motors Using a Novel Segmented Magnetic Equivalent Circuit Method

    PubMed Central

    Qian, Junbing; Chen, Xuedong; Chen, Han; Zeng, Lizhan; Li, Xiaoqing

    2013-01-01

    A simple and accurate method based on the magnetic equivalent circuit (MEC) model is proposed in this paper to predict magnetic flux density (MFD) distribution of the air-gap in a Lorentz motor (LM). In conventional MEC methods, the permanent magnet (PM) is treated as one common source and all branches of MEC are coupled together to become a MEC network. In our proposed method, every PM flux source is divided into three sub-sections (the outer, the middle and the inner). Thus, the MEC of LM is divided correspondingly into three independent sub-loops. As the size of the middle sub-MEC is small enough, it can be treated as an ideal MEC and solved accurately. Combining with decoupled analysis of outer and inner MECs, MFD distribution in the air-gap can be approximated by a quadratic curve, and the complex calculation of reluctances in MECs can be avoided. The segmented magnetic equivalent circuit (SMEC) method is used to analyze a LM, and its effectiveness is demonstrated by comparison with FEA, conventional MEC and experimental results. PMID:23358368
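
The reluctance-network idea behind MEC methods can be illustrated with a minimal series loop: one iron path and one air gap driven by a magnetomotive force. This is a generic textbook sketch, not the segmented SMEC of the paper; dimensions and MMF are invented for illustration.

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability (H/m)

def reluctance(length_m, area_m2, mu_r):
    """Reluctance of one magnetic-circuit segment: R = l / (mu0 * mu_r * A)."""
    return length_m / (MU0 * mu_r * area_m2)

# toy series loop: iron path plus air gap, uniform cross-section
R_iron = reluctance(0.10, 1e-4, 5000.0)   # 10 cm iron, mu_r = 5000
R_gap = reluctance(0.001, 1e-4, 1.0)      # 1 mm air gap
mmf = 800.0                               # ampere-turn equivalent of the PM
flux = mmf / (R_iron + R_gap)             # magnetic "Ohm's law"
b_gap = flux / 1e-4                       # flux density in the gap (T)
```

Even in this toy loop the air gap dominates the total reluctance, which is why accurate air-gap MFD prediction (the subject of the paper) is the critical part of an MEC model.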

  15. Macroscopic modeling for heat and water vapor transfer in dry snow by homogenization.

    PubMed

    Calonne, Neige; Geindreau, Christian; Flin, Frédéric

    2014-11-26

    Dry snow metamorphism, involved in several topics related to cryospheric sciences, is mainly linked to heat and water vapor transfers through snow including sublimation and deposition at the ice-pore interface. In this paper, the macroscopic equivalent modeling of heat and water vapor transfers through a snow layer was derived from the physics at the pore scale using the homogenization of multiple scale expansions. The microscopic phenomena under consideration are heat conduction, vapor diffusion, sublimation, and deposition. The obtained macroscopic equivalent model is described by two coupled transient diffusion equations including a source term arising from phase change at the pore scale. By dimensional analysis, it was shown that the influence of such source terms on the overall transfers can generally not be neglected, except typically under small temperature gradients. The precision and the robustness of the proposed macroscopic modeling were illustrated through 2D numerical simulations. Finally, the effective vapor diffusion tensor arising in the macroscopic modeling was computed on 3D images of snow. The self-consistent formula offers a good estimate of the effective diffusion coefficient with respect to the snow density, within an average relative error of 10%. Our results confirm recent work that the effective vapor diffusion is not enhanced in snow.
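
A toy version of the coupled macroscopic equations — two 1-D diffusion equations exchanging a linearized phase-change source term — can be sketched as follows. The coefficients and the linear saturation law are illustrative stand-ins, not the snow physics derived in the paper.

```python
import numpy as np

def step(T, rho, dT_coef, dR_coef, k_exch, a, b, dt, dx):
    """One explicit step of two coupled 1-D diffusion equations with a
    phase-change-like source S = k*(a + b*T - rho), a toy linearized
    saturation law. Sublimation (S > 0) adds vapor and consumes heat."""
    lapT = (np.roll(T, 1) - 2 * T + np.roll(T, -1)) / dx ** 2
    lapR = (np.roll(rho, 1) - 2 * rho + np.roll(rho, -1)) / dx ** 2
    S = k_exch * (a + b * T - rho)
    T_new = T + dt * (dT_coef * lapT - S)
    R_new = rho + dt * (dR_coef * lapR + S)
    # fixed-value boundaries impose a macroscopic temperature gradient
    T_new[0], T_new[-1] = T[0], T[-1]
    R_new[0], R_new[-1] = rho[0], rho[-1]
    return T_new, R_new

n, dx, dt = 50, 1.0 / 49, 5e-5          # dt*D/dx^2 ~ 0.12, stable
T = np.linspace(-5.0, 0.0, n)           # temperature gradient across layer
rho = np.zeros(n)                       # vapor density (arbitrary units)
for _ in range(2000):
    T, rho = step(T, rho, 1.0, 1.0, 1.0, 1.0, 0.05, dt, dx)
```

As in the paper's dimensional analysis, removing the source term (k_exch = 0) decouples the two fields, which is only a good approximation under small temperature gradients.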

  16. Quantitative Biofractal Feedback Part II 'Devices, Scalability & Robust Control'

    DTIC Science & Technology

    2008-05-01

    in the modelling of proton exchange membrane fuel cells (PEMFC) may work as a powerful tool in the development and widespread testing of alternative...energy sources in the next decade [9], where biofractal controllers will be used to control these complex systems. The dynamic model of PEMFC is...dynamic response of the PEMFC. In the Iftukhar model, the fuel cell is represented by an equivalent circuit, whose components are identified with

  17. Equivalent charge source model based iterative maximum neighbor weight for sparse EEG source localization.

    PubMed

    Xu, Peng; Tian, Yin; Lei, Xu; Hu, Xiao; Yao, Dezhong

    2008-12-01

    How to localize the neural electric activities within the brain effectively and precisely from scalp electroencephalogram (EEG) recordings is a critical issue for current studies in clinical neurology and cognitive neuroscience. In this paper, based on the charge source model and the iterative re-weighted strategy, a new maximum-neighbor-weight-based iterative sparse source imaging method is proposed, termed CMOSS (Charge source model based Maximum neighbOr weight Sparse Solution). Different from the weight used in the focal underdetermined system solver (FOCUSS), where the weight for each point in the discrete solution space is independently updated in iterations, the newly designed weight for each point in each iteration is determined by the source solution of the last iteration at both the point and its neighbors. Using such a new weight, the next iteration has a bigger chance to rectify the local source location bias present in the previous iteration's solution. Simulation studies with comparison to FOCUSS and LORETA for various source configurations were conducted on a realistic 3-shell head model, and the results confirmed the validity of CMOSS for sparse EEG source localization. Finally, CMOSS was applied to localize sources elicited in a visual stimuli experiment, and the result was consistent with the source areas involved in visual processing reported in previous studies.
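
The iterative re-weighting that CMOSS builds on can be sketched with a classical FOCUSS loop. Note that the neighbor-based weight update that distinguishes CMOSS is omitted here; the standard per-point FOCUSS weight is used instead, as a minimal sketch.

```python
import numpy as np

def focuss(A, b, iters=30, lam=1e-6):
    """FOCUSS-style iterative re-weighted minimum-norm solver for
    underdetermined A @ x = b. Each weight is the magnitude of the
    previous solution at that point (CMOSS would instead combine the
    point with its neighbors)."""
    m, n = A.shape
    x = np.ones(n)
    for _ in range(iters):
        W = np.diag(np.abs(x))
        AW = A @ W
        # Tikhonov-regularized minimum-norm solution in the weighted space
        q = AW.T @ np.linalg.solve(AW @ AW.T + lam * np.eye(m), b)
        x = W @ q
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))          # 20 sensors, 50 candidate sources
x_true = np.zeros(50)
x_true[[5, 17]] = [1.0, -2.0]              # two active sources
x_hat = focuss(A, A @ x_true)
```

Successive re-weighting shrinks small entries toward zero, which is what drives the solution toward a sparse source configuration.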

  18. A sparse equivalent source method for near-field acoustic holography.

    PubMed

    Fernandez-Grande, Efren; Xenaki, Angeliki; Gerstoft, Peter

    2017-01-01

    This study examines a near-field acoustic holography method consisting of a sparse formulation of the equivalent source method, based on the compressive sensing (CS) framework. The method, denoted Compressive-Equivalent Source Method (C-ESM), encourages spatially sparse solutions (based on the superposition of few waves) that are accurate when the acoustic sources are spatially localized. The importance of obtaining a non-redundant representation, i.e., a sensing matrix with low column coherence, and the inherent ill-conditioning of near-field reconstruction problems is addressed. Numerical and experimental results on a classical guitar and on a highly reactive dipole-like source are presented. C-ESM is valid beyond the conventional sampling limits, making wide-band reconstruction possible. Spatially extended sources can also be addressed with C-ESM, although in this case the obtained solution does not recover the spatial extent of the source.

  19. Economic total maximum daily load for watershed-based pollutant trading.

    PubMed

    Zaidi, A Z; deMonsabert, S M

    2015-04-01

    Water quality trading (WQT) is supported by the US Environmental Protection Agency (USEPA) under the framework of its total maximum daily load (TMDL) program. An innovative approach is presented in this paper that proposes post-TMDL trade by calculating pollutant rights for each pollutant source within a watershed. Several water quality trading programs are currently operating in the USA with an objective to achieve overall pollutant reduction impacts that are equivalent or better than TMDL scenarios. These programs use trading ratios for establishing water quality equivalence among pollutant reductions. The inbuilt uncertainty in modeling the effects of pollutants in a watershed from both the point and nonpoint sources on receiving waterbodies makes WQT very difficult. A higher trading ratio carries with it increased mitigation costs, but cannot ensure the attainment of the required water quality with certainty. The selection of an applicable trading ratio, therefore, is not a simple process. The proposed approach uses an Economic TMDL optimization model that determines an economic pollutant reduction scenario that can be compared with actual TMDL allocations to calculate selling/purchasing rights for each contributing source. The methodology is presented using the established TMDLs for the bacteria (fecal coliform) impaired Muddy Creek subwatershed WAR1 in Rockingham County, Virginia, USA. Case study results show that an environmentally and economically superior trading scenario can be realized by using Economic TMDL model or any similar model that considers the cost of TMDL allocations.
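
The idea of meeting a TMDL-equivalent aggregate reduction at least cost can be illustrated with a deliberately simple allocation: with constant unit costs and per-source caps, filling the cheapest sources first is optimal. This toy is not the paper's Economic TMDL optimization model; source names and numbers are invented.

```python
def least_cost_allocation(sources, required_reduction):
    """Greedy least-cost allocation. Each source is a tuple
    (name, max_reduction, unit_cost). With constant unit costs,
    sorting by cost and filling cheapest-first minimizes total cost
    of meeting an aggregate load-reduction target."""
    plan = {}
    remaining = required_reduction
    for name, cap, cost in sorted(sources, key=lambda s: s[2]):
        take = min(cap, remaining)
        if take > 0:
            plan[name] = take
            remaining -= take
    if remaining > 1e-9:
        raise ValueError("reduction target not attainable")
    return plan

# hypothetical sources: (name, reducible load units, $ per unit reduced)
sources = [("WWTP", 40, 12.0), ("farm_A", 30, 3.0), ("farm_B", 25, 5.0)]
plan = least_cost_allocation(sources, 60)
```

Comparing such a least-cost plan with the actual TMDL allocation is what determines, in spirit, which sources would sell and which would purchase reduction rights.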

  20. Calculated organ doses for Mayak production association central hall using ICRP and MCNP.

    PubMed

    Choe, Dong-Ok; Shelkey, Brenda N; Wilde, Justin L; Walk, Heidi A; Slaughter, David M

    2003-03-01

    As part of an ongoing dose reconstruction project, equivalent organ dose rates from photons and neutrons were estimated using the energy spectra measured in the central hall above the graphite reactor core located in the Russian Mayak Production Association facility. Reconstruction of the work environment was necessary due to the lack of personal dosimeter data for neutrons in the time period prior to 1987. A typical worker scenario for the central hall was developed for the Monte Carlo Neutron Photon-4B (MCNP) code. The resultant equivalent dose rates for neutrons and photons were compared with the equivalent dose rates derived from calculations using the conversion coefficients in the International Commission on Radiological Protection Publications 51 and 74 in order to validate the model scenario for this Russian facility. The MCNP results were in good agreement with the results of the ICRP publications indicating the modeling scenario was consistent with actual work conditions given the spectra provided. The MCNP code will allow for additional orientations to accurately reflect source locations.

  1. Characterization of an atmospheric pressure air plasma source for polymer surface modification

    NASA Astrophysics Data System (ADS)

    Yang, Shujun; Tang, Jiansheng

    2013-10-01

    An atmospheric pressure air plasma source was generated through dielectric barrier discharge (DBD). It was used to modify polyethylene terephthalate (PET) surfaces with very high throughput. An equivalent circuit model was used to calculate the peak average electron density. The emission spectrum from the plasma was taken and the main peaks in the spectrum were identified. The ozone density in the downstream plasma region was estimated by absorption spectroscopy. Work supported by NSF and ARC-ODU.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Benites, J.; Graduate student in the CBAP program, Universidad Autonoma de Nayarit, Carretera Tepic-Compostela km 9, C.P. 63780, Xalisco, Nayarit, Mexico; Vega-Carrillo, H. R.

    Neutron spectra and the ambient dose equivalent were calculated inside the bunker of a 15 MV Varian linac, model CLINAC iX. Calculations were carried out using Monte Carlo methods. Neutron spectra in the vicinity of the isocentre show the presence of evaporation and knock-on neutrons produced by the source term, while epithermal and thermal neutrons remain constant regardless of the distance from the isocentre, due to room return. The neutron spectrum becomes softer as the detector moves along the maze. The ambient dose equivalent decreases along the maze, but does not follow the 1/r^2 rule due to changes in the neutron spectra.

  3. Alternative Fuels Data Center: Iowa Transportation Data for Alternative

    Science.gov Websites

    [Data-table residue: fuel consumption from the State Energy Data System converted to gasoline gallon equivalents (GGE); 41 renewable power plants with 3,807 MW of nameplate capacity; average fuel prices of roughly $2.60/GGE to $2.96/gallon.]

  4. Alternative Fuels Data Center: South Carolina Transportation Data for

    Science.gov Websites

    [Data-table residue: fuel consumption from the State Energy Data System converted to gasoline gallon equivalents (GGE); 31 renewable power plants with 3,396 MW of nameplate capacity; average prices per GGE for the Lower Atlantic PADD, e.g. $2.66/GGE.]

  5. Real-time monitoring of a microbial electrolysis cell using an electrical equivalent circuit model.

    PubMed

    Hussain, S A; Perrier, M; Tartakovsky, B

    2018-04-01

    Efforts in developing microbial electrolysis cells (MECs) resulted in several novel approaches for wastewater treatment and bioelectrosynthesis. Practical implementation of these approaches necessitates the development of an adequate system for real-time (on-line) monitoring and diagnostics of MEC performance. This study describes a simple MEC equivalent electrical circuit (EEC) model and a parameter estimation procedure, which enable such real-time monitoring. The proposed approach involves MEC voltage and current measurements during its operation with periodic power supply connection/disconnection (on/off operation) followed by parameter estimation using either numerical or analytical solution of the model. The proposed monitoring approach is demonstrated using a membraneless MEC with flow-through porous electrodes. Laboratory tests showed that changes in the influent carbon source concentration and composition significantly affect MEC total internal resistance and capacitance estimated by the model. Fast response of these EEC model parameters to changes in operating conditions enables the development of a model-based approach for real-time monitoring and fault detection.
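
The on/off parameter estimation described above can be sketched for the simplest case: after disconnection, an RC equivalent circuit relaxes as V(t) = V_inf + (V0 - V_inf)*exp(-t/RC), so the time constant follows from a log-linear fit. This stand-in ignores the electrochemical detail of the paper's EEC model; the data below are synthetic.

```python
import math

def estimate_tau(times, voltages, v_inf):
    """Estimate the relaxation time constant tau = R*C from a voltage
    decay V(t) = v_inf + (V0 - v_inf)*exp(-t/tau) by a log-linear
    least-squares fit of log(V - v_inf) against t."""
    ys = [math.log(v - v_inf) for v in voltages]
    n = len(times)
    sx, sy = sum(times), sum(ys)
    sxx = sum(t * t for t in times)
    sxy = sum(t * y for t, y in zip(times, ys))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return -1.0 / slope

# synthetic off-transient: R_total = 20 ohm, C = 0.5 F -> tau = 10 s
R, C, v_inf, v0 = 20.0, 0.5, 0.25, 0.85
ts = [0.5 * i for i in range(40)]
vs = [v_inf + (v0 - v_inf) * math.exp(-t / (R * C)) for t in ts]
tau = estimate_tau(ts, vs, v_inf)
capacitance = tau / R
```

With the total resistance known (e.g. from the instantaneous voltage step at disconnection), the fitted time constant immediately yields the capacitance, which is the kind of parameter the paper tracks for fault detection.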

  6. The Effect of Changes in the ASCA Calibration on the Fe-K Lines in Active Galaxies

    NASA Technical Reports Server (NTRS)

    Yaqoob, T.; Padmanabhan, U.; Dotani, T.; Nandra, K.; White, Nicholas E. (Technical Monitor)

    2001-01-01

    The ASCA calibration has evolved considerably since launch and indeed, is still evolving. There have been concerns in the literature that changes in the ASCA calibration have resulted in the Fe-K lines in active galaxies (AGN) now being systematically narrower than was originally thought. If this were true, a large body of ASCA results would be impacted. In particular, it has been claimed that the broad red wing (when present) of the Fe-K line has been considerably weakened by changes in the ASCA calibration. We demonstrate explicitly that changes in the ASCA calibration over a period of about eight years have a negligible effect on the width, strength, or shape of the Fe-K lines. The reduction in both width and equivalent width is only approximately 8% or less. We confirm this with simulations and individual sources, as well as sample average profiles. The average profile for type 1 AGN is still very broad, with the red wing extending down to approximately 4 keV. The reason for the claimed, apparently large, discrepancies is that in some sources the Fe-K line is complex, and a single-Gaussian model, being an inadequate description of the line profile, picks up different portions of the profile with different calibrations. However, one cannot make inferences about calibration or astrophysics of the sources using models which do not describe the data. Better modeling of the Fe-K line in such cases gives completely consistent results with both old and current calibrations. Thus, inadequate modeling of the Fe-K line in these sources can seriously underestimate the line width and equivalent width, and therefore lead to incorrect deductions about the astrophysical implications.

  7. Timing of oil and gas generation of petroleum systems in the Southwestern Wyoming Province

    USGS Publications Warehouse

    Roberts, L.N.R.; Lewan, M.D.; Finn, T.M.

    2004-01-01

    Burial history, thermal maturity, and timing of petroleum generation were modeled for eight key source-rock horizons at seven locations throughout the Southwestern Wyoming Province. The horizons are the bases of the Lower Permian Phosphoria Formation, the Upper Cretaceous Mowry Shale, Niobrara Formation, Baxter Shale (and equivalents), upper part of the Mesaverde Group, Lewis Shale, Lance Formation, and the Tertiary (Paleocene) Fort Union Formation. Burial history locations include three in the deepest parts of the province (Adobe Town in the Washakie Basin, Eagles Nest in the Great Divide Basin, and Wagon Wheel in the northern Green River Basin); two at intermediate basin depths (Federal 31-1 and Currant Creek in the central and southern parts of the Green River Basin, respectively); and two relatively shallow locations (Bear 1 on the southeastern margin of the Sand Wash Basin and Bruff 2 on the Moxa arch). An overall ranking of the burial history locations in order of decreasing thermal maturity is Adobe Town > Eagles Nest > Wagon Wheel > Currant Creek > Federal 31-1 > Bear 1 > Bruff 2. The results of the models indicate that peak petroleum generation from Cretaceous oil- and gas-prone source rocks in the deepest parts of the province occurred from Late Cretaceous through middle Eocene. At the modeled locations, peak oil generation from source rocks of the Phosphoria Formation, which contain type-IIS kerogen, occurred in the Late Cretaceous (80 to 73 million years ago (Ma)). Gas generation from the cracking of Phosphoria oil reached a peak in the late Paleocene (57 Ma) only in the deepest parts of the province. The Mowry Shale, Niobrara Formation, and Baxter Shale (and equivalents) contain type-IIS or a mix of type-II and type-III kerogens. Oil generation from these units, in the deepest parts of the province, reached peak rates during the latest Cretaceous to early Paleocene (66 to 61 Ma).
Only at these deepest locations did these units reach peak gas generation from the cracking of oil, which occurred in the early to late Eocene (52 to 41 Ma). For the Mesaverde Group, which also contains a mix of type-II and type-III kerogen, peak oil generation occurred only in the deepest parts of the province during middle Eocene (50 to 41 Ma). Only at Adobe Town did cracking of oil occur and gas generation reach peak in the earliest Oligocene (33 Ma). Gas-prone source rocks (type-III kerogen) of the Mowry and Baxter (and equivalents) Shales reached peak gas generation in the latest Cretaceous (66 Ma) in the deepest parts of the province. At the shallower Bear 1 location, the Mancos Shale (Baxter equivalent) source rocks reached peak gas generation at about this same time. Gas generation from the gas-prone Mesaverde source rocks started at all of the modeled locations, but reached peak generation at only the deepest locations in the early Eocene (54 to 49 Ma). The Lewis Shale, Lance Formation, and Fort Union Formation all contain gas-prone source rocks with type-III kerogen. Peak generation of gas from the Lewis Shale occurred only at Eagles Nest and Adobe Town in the early Eocene (52 Ma). Source rocks of the Lance reached peak gas generation only at the deepest locations during the middle Eocene (48 to 45 Ma) and the Fort Union reached peak gas generation only at Adobe Town also in the middle Eocene (44 Ma).

  8. A Pedagogical Model for the Doppler Effect with Application to Sources with Constant Accelerations

    ERIC Educational Resources Information Center

    Kaura, Lakshya P. S.; Pathak, Praveen

    2017-01-01

    Kinematic models are often very useful. The back and forth throw of a ball between two ice skaters may help us appreciate the meson exchange theory of Yukawa. If the skaters throw the balls at each other, they move backward, which is equivalent to a repulsive force between them. On the other hand, if they snatch the ball from each other, the…

  9. Generating polarization-entangled photon pairs using cross-spliced birefringent fibers.

    PubMed

    Meyer-Scott, Evan; Roy, Vincent; Bourgoin, Jean-Philippe; Higgins, Brendon L; Shalm, Lynden K; Jennewein, Thomas

    2013-03-11

    We demonstrate a novel polarization-entangled photon-pair source based on standard birefringent polarization-maintaining optical fiber. The source consists of two stretches of fiber spliced together with perpendicular polarization axes, and has the potential to be fully fiber-based, with all bulk optics replaced with in-fiber equivalents. By modelling the temporal walk-off in the fibers, we implement compensation necessary for the photon creation processes in the two stretches of fiber to be indistinguishable. Our source subsequently produces a high quality entangled state having (92.2 ± 0.2) % fidelity with a maximally entangled Bell state.
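
The temporal walk-off the authors compensate for follows directly from the fiber birefringence: dt = delta_n * L / c. The sketch below uses a typical (assumed) PM-fiber birefringence and shows how an equal cross-spliced stretch, with its polarization axes rotated 90 degrees, cancels the walk-off.

```python
C_VACUUM = 299792458.0  # speed of light in vacuum (m/s)

def walkoff_ps(delta_n, length_m):
    """Temporal walk-off between the fast and slow axes of a
    polarization-maintaining fiber: dt = delta_n * L / c, in picoseconds."""
    return delta_n * length_m / C_VACUUM * 1e12

# typical PM-fiber birefringence delta_n ~ 5e-4 (assumed, not from the paper)
dt_first = walkoff_ps(5e-4, 2.0)             # walk-off in the first stretch
dt_total = dt_first - walkoff_ps(5e-4, 2.0)  # equal cross-spliced stretch cancels it
```

Cancelling this walk-off is what makes photon-pair creation in the two stretches temporally indistinguishable, which the entangled-state fidelity depends on.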

  10. Realistic Subsurface Anomaly Discrimination Using Electromagnetic Induction and an SVM Classifier

    NASA Astrophysics Data System (ADS)

    Pablo Fernández, Juan; Shubitidze, Fridon; Shamatava, Irma; Barrowes, Benjamin E.; O'Neill, Kevin

    2010-12-01

    The environmental research program of the United States military has set up blind tests for detection and discrimination of unexploded ordnance. One such test consists of measurements taken with the EM-63 sensor at Camp Sibert, AL. We review the performance on this test of a procedure that combines a field-potential (HAP) method to locate targets, the normalized surface magnetic source (NSMS) model to characterize them, and a support vector machine (SVM) to classify them. The HAP method infers location from the scattered magnetic field and its associated scalar potential, the latter reconstructed using equivalent sources. NSMS replaces the target with an enclosing spheroid of equivalent radial magnetization whose integral it uses as a discriminator. SVM generalizes from empirical evidence and can be adapted for multiclass discrimination using a voting system. Our method identifies all potentially dangerous targets correctly and has a false-alarm rate of about 5%.

  11. New constraints on the rupture process of the 1999 August 17 Izmit earthquake deduced from estimates of stress glut rate moments

    NASA Astrophysics Data System (ADS)

    Clévédé, E.; Bouin, M.-P.; Bukchin, B.; Mostinskiy, A.; Patau, G.

    2004-12-01

    This paper illustrates the use of integral estimates given by the stress glut rate moments of total degree 2 for constraining the rupture scenario of a large earthquake in the particular case of the 1999 Izmit mainshock. We determine the integral estimates of the geometry, source duration and rupture propagation given by the stress glut rate moments of total degree 2 by inverting long-period surface wave (LPSW) amplitude spectra. Kinematic and static models of the Izmit earthquake published in the literature are quite different from one another. In order to extract the characteristic features of this event, we calculate the same integral estimates directly from those models and compare them with those deduced from our inversion. While the equivalent rupture zone and the eastward directivity are consistent among all models, the LPSW solution displays a strong unilateral character of the rupture associated with a short rupture duration that is not compatible with the solutions deduced from the published models. With the aim of understanding this discrepancy, we use simple equivalent kinematic models to reproduce the integral estimates of the considered rupture processes (including ours) by adjusting a few free parameters controlling the western and eastern parts of the rupture. We show that the joint analysis of the LPSW solution and source tomographies allows us to elucidate the scatter among the source processes published for this earthquake and to discriminate between the models. Our results strongly suggest that (1) there was significant moment release on the eastern segment of the activated fault system during the Izmit earthquake; (2) the apparent rupture velocity decreases on this segment.

  12. Magnetoencephalography recording and analysis.

    PubMed

    Velmurugan, Jayabal; Sinha, Sanjib; Satishchandra, Parthasarathy

    2014-03-01

    Magnetoencephalography (MEG) non-invasively measures the magnetic field generated due to the excitatory postsynaptic electrical activity of the apical dendritic pyramidal cells. Such a tiny magnetic field is measured with the help of the biomagnetometer sensors coupled with the Super Conducting Quantum Interference Device (SQUID) inside the magnetically shielded room (MSR). The subjects are usually screened for the presence of ferromagnetic materials, and then the head position indicator coils, electroencephalography (EEG) electrodes (if measured simultaneously), and fiducials are digitized using a 3D digitizer, which aids in movement correction and also in transferring the MEG data from the head coordinates to the device and voxel coordinates, thereby enabling more accurate co-registration and localization. MEG data pre-processing involves filtering the data for environmental and subject interferences, artefact identification, and rejection. Magnetic resonance Imaging (MRI) is processed for correction and identifying fiducials. After choosing and computing for the appropriate head models (spherical or realistic; boundary/finite element model), the interictal/ictal epileptiform discharges are selected and modeled by an appropriate source modeling technique (clinically and commonly used - single equivalent current dipole - ECD model). The equivalent current dipole (ECD) source localization of the modeled interictal epileptiform discharge (IED) is considered physiologically valid or acceptable based on waveform morphology, isofield pattern, and dipole parameters (localization, dipole moment, confidence volume, goodness of fit). Thus, MEG source localization can aid clinicians in sublobar localization, lateralization, and grid placement, by evoking the irritative/seizure onset zone. It also accurately localizes the eloquent cortex-like visual, language areas. 
MEG also aids in diagnosing and delineating multiple novel findings in other neuropsychiatric disorders, including Alzheimer's disease, Parkinsonism, traumatic brain injury, autistic disorders, and so on.

  13. Modeling of the Electric Characteristics of Solar Cells

    NASA Astrophysics Data System (ADS)

    Logan, Benjamin; Tzolov, Marian

    The purpose of a solar cell is to convert solar energy, through photovoltaic action, into a sustainable electrical current that produces usable electricity. The electrical characteristics of solar cells can be modeled to better understand how they function. As an electrical device, a solar cell can be conveniently represented as an equivalent electrical circuit with an ideal diode, an ideal current source for the photovoltaic action, a shunt resistor for recombination, a resistor in series to account for contact resistance, and a resistor modeling external power consumption. The values of these elements have been modified to model dark and illuminated states. Fitting the model to the experimental current-voltage characteristics allows one to determine the values of the equivalent circuit elements. Comparing values of the open-circuit voltage, short-circuit current, and shunt resistance can reveal factors such as the amount of recombination, helping to diagnose problems in solar cells. The many measurable quantities of a solar cell's characteristics give guidance for design when they are related to microscopic processes.
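
The equivalent circuit described here is the standard single-diode model, and its I-V characteristic can be computed by solving the implicit node equation. The sketch below uses invented parameter values and simple bisection; it is illustrative, not a fitted model of a real cell.

```python
import math

def cell_current(v, il=3.0, i0=1e-9, n=1.5, rs=0.02, rsh=200.0, t=298.15):
    """Terminal current of a single-diode equivalent circuit
    (photocurrent source IL, ideal diode, series resistance Rs,
    shunt resistance Rsh). Solves the implicit node equation
    I = IL - I0*(exp((V + I*Rs)/(n*Vt)) - 1) - (V + I*Rs)/Rsh
    by bisection; f(I) is monotonically decreasing in I.
    All parameter values are illustrative assumptions."""
    vt = 1.380649e-23 * t / 1.602176634e-19  # thermal voltage kT/q
    def f(i):
        return (il - i0 * (math.exp((v + i * rs) / (n * vt)) - 1.0)
                - (v + i * rs) / rsh - i)
    lo, hi = -il, il + 1.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

i_sc = cell_current(0.0)   # short-circuit current, close to IL
```

Sweeping v from zero upward traces the I-V curve; the shunt resistance mainly tilts the flat region near short circuit, while the series resistance rounds the knee near open circuit, which is how fitted element values diagnose recombination and contact problems.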

  14. Thermal constitutive matrix applied to asynchronous electrical machine using the cell method

    NASA Astrophysics Data System (ADS)

    Domínguez, Pablo Ignacio González; Monzón-Verona, José Miguel; Rodríguez, Leopoldo Simón; Sánchez, Adrián de Pablo

    2018-03-01

    This work demonstrates the equivalence of two constitutive equations: one used in Fourier's law of the heat conduction equation, the other in the electric conduction equation; both are based on the numerical cell method using the finite formulation (FF-CM). A 3-D pure heat conduction model is proposed, with temperatures in steady state and no internal heat sources. The results obtained are compared with an equivalent model developed using the finite element method (FEM). The particular 2-D case was also studied. The errors produced are insignificant, at less than 0.2%. The number of nodes equals the number of unknowns and equations to solve. There is no significant gain in precision with increasing mesh density.

  15. Biphasic and monophasic repair: comparative implications for biologically equivalent dose calculations in pulsed dose rate brachytherapy of cervical carcinoma

    PubMed Central

    Millar, W T; Davidson, S E

    2013-01-01

    Objective: To consider the implications of the use of biphasic rather than monophasic repair in calculations of biologically-equivalent doses for pulsed-dose-rate brachytherapy of cervix carcinoma. Methods: Calculations are presented of pulsed-dose-rate (PDR) doses equivalent to former low-dose-rate (LDR) doses, using biphasic vs monophasic repair kinetics, both for cervical carcinoma and for the organ at risk (OAR), namely the rectum. The linear-quadratic modelling calculations included effects due to varying the dose per PDR cycle, the dose reduction factor for the OAR compared with Point A, the repair kinetics and the source strength. Results: When using the recommended 1 Gy per hourly PDR cycle, different LDR-equivalent PDR rectal doses were calculated depending on the choice of monophasic or biphasic repair kinetics pertaining to the rodent central nervous and skin systems. These differences virtually disappeared when the dose per hourly cycle was increased to 1.7 Gy. This made the LDR-equivalent PDR doses more robust and independent of the choice of repair kinetics and α/β ratios as a consequence of the described concept of extended equivalence. Conclusion: The use of biphasic and monophasic repair kinetics for optimised modelling of the effects on the OAR in PDR brachytherapy suggests that an optimised PDR protocol with the dose per hourly cycle nearest to 1.7 Gy could be used. Hence, the durations of the new PDR treatments would be similar to those of the former LDR treatments and not longer as currently prescribed. Advances in knowledge: Modelling calculations indicate that equivalent PDR protocols can be developed which are less dependent on the different α/β ratios and monophasic/biphasic kinetics usually attributed to normal and tumour tissues for treatment of cervical carcinoma. PMID:23934965
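
The linear-quadratic comparisons above hinge on how much repair occurs during protracted delivery, captured by the Lea-Catcheside dose-protraction factor G. A minimal numerical sketch with monoexponential (monophasic) repair and invented parameters, not the paper's biphasic calculations, contrasts continuous LDR with an hourly pulsed schedule:

```python
import math

def protraction_factor(rate_fn, t_end_h, t_half_h, n=20000):
    """Generalized Lea-Catcheside dose-protraction factor G for a dose-rate
    history rate_fn(t) [Gy/h] over [0, t_end_h], assuming monoexponential
    (monophasic) repair with half-time t_half_h. Returns (G, total dose)."""
    mu = math.log(2.0) / t_half_h   # repair rate constant [1/h]
    dt = t_end_h / n
    acc = 0.0     # running value of integral rate(t')*exp(-mu*(t-t')) dt'
    cross = 0.0   # accumulates integral of rate(t) * acc(t) dt
    dose = 0.0
    for k in range(n):
        r = rate_fn((k + 0.5) * dt)
        cross += r * acc * dt
        acc = acc * math.exp(-mu * dt) + r * dt
        dose += r * dt
    return 2.0 * cross / dose ** 2, dose

ALPHA_BETA = 10.0   # Gy, illustrative tumour value
T_HALF = 1.5        # h, illustrative monophasic repair half-time

# Continuous low-dose-rate delivery: 1 Gy/h for 10 h
g_ldr, d_ldr = protraction_factor(lambda t: 1.0, 10.0, T_HALF)
bed_ldr = d_ldr * (1.0 + g_ldr * d_ldr / ALPHA_BETA)

# Pulsed delivery, same average rate: 1 Gy pulses (10 min at 6 Gy/h) hourly
pulsed = lambda t: 6.0 if (t % 1.0) < 1.0 / 6.0 else 0.0
g_pdr, d_pdr = protraction_factor(pulsed, 10.0, T_HALF)
bed_pdr = d_pdr * (1.0 + g_pdr * d_pdr / ALPHA_BETA)
```

For the same total dose, the pulsed schedule yields a slightly larger G and hence a larger biologically effective dose; replacing the single exponential kernel with a weighted sum of two exponentials would give the biphasic case the paper considers.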

  16. Full waveform time domain solutions for source and induced magnetotelluric and controlled-source electromagnetic fields using quasi-equivalent time domain decomposition and GPU parallelization

    NASA Astrophysics Data System (ADS)

    Imamura, N.; Schultz, A.

    2015-12-01

    Recently, a full waveform time domain solution has been developed for the magnetotelluric (MT) and controlled-source electromagnetic (CSEM) methods. The ultimate goal of this approach is a computationally tractable direct waveform joint inversion for source fields and earth conductivity structure in three and four dimensions. This is desirable on several grounds, including the improved spatial resolving power expected from the use of a multitude of source illuminations of non-zero wavenumber, and the ability to operate in areas with high levels of source signal spatial complexity and non-stationarity. This goal would not be attainable with the finite difference time-domain (FDTD) approach to the forward problem. This is particularly true for MT surveys, since an enormous number of degrees of freedom is required to represent the observed MT waveforms across the large frequency bandwidth: the time step must be fine enough to represent the highest frequency, while the total number of time steps must span the lowest frequency. This leads to a linear system that is computationally burdensome to solve. Our implementation addresses this situation through the use of a fictitious wave domain method and GPUs to speed up the computation. We also substantially reduce the size of the linear systems by applying concepts from successive cascade decimation, through quasi-equivalent time domain decomposition. By combining these refinements, we have made good progress toward implementing the core of a full waveform joint source field/earth conductivity inverse modeling method. We found that even a previous-generation CPU/GPU combination speeds computations by an order of magnitude over a parallel CPU-only approach. In part, this arises from the quasi-equivalent time domain decomposition, which shrinks the size of the linear system dramatically.

  17. Computation of Incompressible Potential Flow over an Airfoil Using a High Order Aerodynamic Panel Method Based on Circular Arc Panels.

    DTIC Science & Technology

    1982-08-01

    Vortex Sheet Figure 4 - Properties of Singularity Sheets they may be used to model different types of flow. Transfer of boundary... Vortex Sheet Equivalence Singularity Behavior Using Green’s theorem it is clear that the problem of potential flow over a body can be modeled using ...that source, doublet, or vortex singularities can be used to model potential flow problems, and that the doublet and vortex singularities are

  18. Implementation issues of the nearfield equivalent source imaging microphone array

    NASA Astrophysics Data System (ADS)

    Bai, Mingsian R.; Lin, Jia-Hong; Tseng, Chih-Wen

    2011-01-01

    This paper revisits a nearfield microphone array technique termed nearfield equivalent source imaging (NESI), proposed previously. In particular, various issues concerning the implementation of the NESI algorithm are examined. NESI can be implemented in both the time domain and the frequency domain. Acoustical variables including sound pressure, particle velocity, active intensity and sound power are calculated using multichannel inverse filters. Issues concerning sensor deployment are also investigated for the nearfield array. The uniform array outperformed a random array previously optimized for far-field imaging, contradicting the conventional wisdom of far-field arrays. For applications in which only a patch array with scarce sensors is available, a virtual microphone approach is employed to ameliorate edge effects using extrapolation and to improve imaging resolution using interpolation. To enhance the processing efficiency of the time-domain NESI, an eigensystem realization algorithm (ERA) is developed. Several filtering methods are compared in terms of computational complexity. Significant savings in computation can be achieved using the ERA and the frequency-domain NESI, as compared to the traditional method. The NESI technique was also experimentally validated using practical sources, including a 125 cc scooter and a wooden box model with a loudspeaker fitted inside, and proved effective in identifying the broadband and non-stationary noise these sources produced.

  19. The 2.5-dimensional equivalent sources method for directly exposed and shielded urban canyons.

    PubMed

    Hornikx, Maarten; Forssén, Jens

    2007-11-01

    When a domain in outdoor acoustics is invariant in one direction, an inverse Fourier transform can be used to transform solutions of the two-dimensional Helmholtz equation to a solution of the three-dimensional Helmholtz equation for arbitrary source and observer positions, thereby reducing the computational costs. This previously published approach [D. Duhamel, J. Sound Vib. 197, 547-571 (1996)] is called a 2.5-dimensional method and has here been extended to the urban geometry of parallel canyons, thereby using the equivalent sources method to generate the two-dimensional solutions. No atmospheric effects are considered. To keep the error arising from the transform small, two-dimensional solutions with a very fine frequency resolution are necessary due to the multiple reflections in the canyons. Using the transform, the solution for an incoherent line source can be obtained much more efficiently than by using the three-dimensional solution. It is shown that the use of a coherent line source for shielded urban canyon observer positions leads mostly to an overprediction of levels and can yield erroneous results for noise abatement schemes. Moreover, the importance of multiple facade reflections in shielded urban areas is emphasized by vehicle pass-by calculations, where cases with absorptive and diffusive surfaces have been modeled.

  20. A Novel Series Connected Batteries State of High Voltage Safety Monitor System for Electric Vehicle Application

    PubMed Central

    Jiaxi, Qiang; Lin, Yang; Jianhui, He; Qisheng, Zhou

    2013-01-01

    Batteries, as the main or auxiliary power source of an EV (electric vehicle), are usually connected in series at high voltage to improve drivability and energy efficiency. As more and more batteries are connected in series at high voltage, any fault in the high voltage system (HVS) has serious and dangerous consequences. Therefore, it is necessary to monitor the electric parameters of the HVS to ensure high voltage safety and protect personal safety. In this study, a high voltage safety monitor system is developed to solve this critical issue. Four key electric parameters, including precharge, contact resistance, insulation resistance, and remaining capacity, are monitored and analyzed based on the equivalent models presented in this study. A high voltage safety controller integrating the equivalent models and control strategy is developed. With the help of a hardware-in-the-loop system, the equivalent models integrated in the high voltage safety controller are validated, and the online electric parameter monitoring strategy is analyzed and discussed. The test results indicate that the high voltage safety monitor system designed in this paper is suitable for EV application. PMID:24194677

  2. Spatial sound field synthesis and upmixing based on the equivalent source method.

    PubMed

    Bai, Mingsian R; Hsu, Hoshen; Wen, Jheng-Ciang

    2014-01-01

    Given a scarce number of recorded signals, spatial sound field synthesis with an extended sweet spot is a challenging problem in acoustic array signal processing. To address the problem, a synthesis and upmixing approach inspired by the equivalent source method (ESM) is proposed. The synthesis procedure is based on the pressure signals recorded by a microphone array and requires no source model. The array geometry can also be arbitrary. Four upmixing strategies are adopted to enhance the resolution of the reproduced sound field when there are more loudspeaker channels than microphones. Multi-channel inverse filtering with regularization is exploited to deal with the ill-posedness of the reconstruction process. The distance between the microphone and loudspeaker arrays is optimized to achieve the best synthesis quality. To validate the proposed system, numerical simulations and subjective listening experiments were performed. The results demonstrated that all upmixing methods improved the quality of the reproduced target sound field over the original reproduction. In particular, the underdetermined ESM interpolation method yielded the best spatial sound field synthesis in terms of reproduction error, timbral quality, and spatial quality.

  3. Little Green Lies: Dissecting the Hype of Renewables

    DTIC Science & Technology

    2011-05-11

    Sources: 2009 BP Statistical Energy Analysis, US Energy Information Administration. Per capita energy use (kg oil equivalent): World 1,819; USA 7,766. Energy trends (source: 2006 BP Statistical Energy Analysis): Oil 37%, Coal 25%, Gas 23%, Nuclear 6%, Biomass 4%, Hydro 3%, Wind ...

  4. Spherical earth gravity and magnetic anomaly analysis by equivalent point source inversion

    NASA Technical Reports Server (NTRS)

    Von Frese, R. R. B.; Hinze, W. J.; Braile, L. W.

    1981-01-01

    To facilitate geologic interpretation of satellite elevation potential field data, analysis techniques are developed and verified in the spherical domain that are commensurate with conventional flat earth methods of potential field interpretation. A powerful approach to the spherical earth problem relates potential field anomalies to a distribution of equivalent point sources by least squares matrix inversion. Linear transformations of the equivalent source field lead to corresponding geoidal anomalies, pseudo-anomalies, vector anomaly components, spatial derivatives, continuations, and differential magnetic pole reductions. A number of examples using 1 deg-averaged surface free-air gravity anomalies or POGO satellite magnetometer data for the United States, Mexico, and Central America illustrate the capabilities of the method.
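
The core idea, fitting a layer of point sources to observed anomalies by least squares and then applying linear transformations such as continuation, can be sketched in a flat-earth analogue. The synthetic sources, layer depth, and grids below are illustrative assumptions:

```python
import numpy as np

# Two buried "true" point sources: (x, y, depth below surface, strength);
# all values illustrative, with physical constants absorbed into strengths.
TRUE_SRC = [(2.0, 3.0, 4.0, 5.0), (7.0, 6.0, 3.0, -2.0)]

def kernel(obs_xy, src_xy, dz):
    """Vertical-component kernel of unit 1/r^2 point sources a vertical
    distance dz below the observation plane."""
    dx = obs_xy[:, 0][:, None] - src_xy[:, 0][None, :]
    dy = obs_xy[:, 1][:, None] - src_xy[:, 1][None, :]
    return dz / (dx ** 2 + dy ** 2 + dz ** 2) ** 1.5

# Observations on a 12 x 12 grid at the surface (z = 0)
g = np.linspace(0.0, 9.0, 12)
obs_xy = np.column_stack([a.ravel() for a in np.meshgrid(g, g)])
data = np.zeros(len(obs_xy))
for sx, sy, depth, m in TRUE_SRC:
    data += m * kernel(obs_xy, np.array([[sx, sy]]), depth)[:, 0]

# Equivalent sources: an 8 x 8 layer of point sources at fixed depth 2
e = np.linspace(0.0, 9.0, 8)
eq_xy = np.column_stack([a.ravel() for a in np.meshgrid(e, e)])
A = kernel(obs_xy, eq_xy, 2.0)

# Least squares matrix inversion for the equivalent-source strengths
strengths, *_ = np.linalg.lstsq(A, data, rcond=None)

# Linear transformation of the fitted layer: upward continuation of the
# anomaly to height 1 above the surface (vertical separation now 3)
upward = kernel(obs_xy, eq_xy, 3.0) @ strengths
```

Once the strengths are fixed, any other linear functional of the source field (derivatives, components, pole reductions in the magnetic case) is obtained the same way, by swapping in the corresponding kernel.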

  5. 77 FR 13968 - Dividend Equivalents From Sources Within the United States; Correction

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-08

    ...--INCOME TAXES 0 Paragraph 1. The authority citation for part 1 continues to read in part as follows... temporary regulations (TD 9572), relating to dividend equivalents from sources within the United States.... List of Subjects in 26 CFR Part 1 Income taxes, Reporting and recordkeeping requirements. Correction of...

6. Source apportionment of the carcinogenic potential of polycyclic aromatic hydrocarbons (PAH) associated with airborne PM10 by a PMF model.

    PubMed

    Callén, M S; Iturmendi, A; López, J M; Mastral, A M

    2014-02-01

    In order to perform a study of the carcinogenic potential of polycyclic aromatic hydrocarbons (PAH), the benzo(a)pyrene equivalent (BaP-eq) concentration was calculated and modelled by a receptor model based on positive matrix factorization (PMF). Nineteen PAH associated with the airborne PM10 of Zaragoza, Spain, were quantified during the sampling period 2001-2009 and used as potential variables by the PMF model. Afterwards, multiple linear regression analysis was used to quantify the potential sources of BaP-eq. Five sources were obtained as the optimal solution, with vehicular emission identified as the main carcinogenic source (35%), followed by heavy-duty vehicles (28%), light-oil combustion (18%), natural gas (10%) and coal combustion (9%). Two of the most prevailing wind directions contributing to this carcinogenic character were the NE and N directions, associated with a highway, industrial parks and a paper factory. The lifetime lung cancer risk exceeded the unit risk of 8.7 × 10⁻⁵ per ng/m³ BaP in both the winter and autumn seasons, and the most contributing source was the vehicular emission factor, making it an important target for control strategies.
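
The BaP-equivalent bookkeeping behind such an assessment is a weighted sum of species concentrations. A minimal sketch with Nisbet-LaGoy-style toxic equivalency factors and invented concentrations (neither taken from the paper) is:

```python
# Toxic equivalency factors (TEFs) in the style of Nisbet & LaGoy;
# both the TEFs and the concentrations below are illustrative only.
TEF = {"BaP": 1.0, "DBahA": 1.0, "BaA": 0.1, "BbF": 0.1,
       "BkF": 0.1, "IcdP": 0.1, "Chry": 0.01, "Phe": 0.001}

conc_ng_m3 = {"BaP": 0.25, "DBahA": 0.04, "BaA": 0.30, "BbF": 0.45,
              "BkF": 0.20, "IcdP": 0.35, "Chry": 0.50, "Phe": 2.0}

# BaP-equivalent concentration: each species weighted by its TEF
bap_eq = sum(conc_ng_m3[p] * TEF[p] for p in conc_ng_m3)   # ng/m^3 BaP-eq

# Lifetime lung cancer risk from the unit risk quoted in the abstract
UNIT_RISK = 8.7e-5               # per ng/m^3 BaP
lifetime_risk = bap_eq * UNIT_RISK
```

In the study itself, the BaP-eq series is then apportioned among PMF factors by multiple linear regression; the sum above is just the first step of that chain.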

  7. Constellation Stick Figures Convey Information about Gravity and Neutrinos

    NASA Astrophysics Data System (ADS)

    Mc Leod, David Matthew; Mc Leod, Roger David

    2008-10-01

    12/21/98, at America's Stonehenge, DMM detected, and drew, the full stick-figure equivalent of Canis Major, CM, as depicted by our Wolf Clan leaders, and many others. Profound, foundational physics is implied, since this occurred in the Watch House there, hours before the ``model rose.'' Similar configurations like Orion, Osiris of ancient Egypt, show that such figures are projected through solid parts of the Earth, as two-dimensional equivalents of the three-dimensional star constellations. Such ``sticks'' indicate that ``line equivalents'' connect the stars, and the physical mechanism projects outlines detectable by traditional cultures. We had discussed this ``flashlight'' effect, and recognized some of its implications. RDM states that the flashlight is a strong, distant neutrino source; the lines represent neutrinos longitudinally aligned in gravitational excitation, opaque, to earthbound, transient, transversely excited neutrinos. ``Sticks'' represent ``graviton'' detection. Neutrinos' longitudinal alignment accounts for the weakness of gravitational force.

  8. A mesostate-space model for EEG and MEG.

    PubMed

    Daunizeau, Jean; Friston, Karl J

    2007-10-15

    We present a multi-scale generative model for EEG that entails a minimum number of assumptions about evoked brain responses, namely: (1) bioelectric activity is generated by a set of distributed sources, (2) the dynamics of these sources can be modelled as random fluctuations about a small number of mesostates, (3) mesostates evolve in a temporally structured way and are functionally connected (i.e. influence each other), and (4) the number of mesostates engaged by a cognitive task is small (e.g. between one and a few). A variational Bayesian learning scheme is described that furnishes the posterior density on the model's parameters and its evidence. Since the number of meso-sources specifies the model, the model evidence can be used to compare models and find the optimum number of meso-sources. In addition to estimating the dynamics at each cortical dipole, the mesostate-space model and its inversion provide a description of brain activity at the level of the mesostates (i.e. in terms of the dynamics of meso-sources that are distributed over dipoles). The inclusion of a mesostate level allows one to compute posterior probability maps of each dipole being active (i.e. belonging to an active mesostate). Critically, this model accommodates constraints on the number of meso-sources, while retaining the flexibility of distributed source models in explaining data. In short, it bridges the gap between standard distributed and equivalent current dipole models. Furthermore, because it is explicitly spatiotemporal, the model can embed any stochastic dynamical causal model (e.g. a neural mass model) as a Markov process prior on the mesostate dynamics. The approach is evaluated and compared to standard inverse EEG techniques, using synthetic data and real data. The results demonstrate the added value of the mesostate-space model and its variational inversion.

  9. Shear stress along the conduit wall as a plausible source of tilt at Soufrière Hills volcano, Montserrat

    NASA Astrophysics Data System (ADS)

    Green, D. N.; Neuberg, J.; Cayol, V.

    2006-05-01

    Surface deformations recorded in close proximity to the active lava dome at Soufrière Hills volcano, Montserrat, can be used to infer stresses within the uppermost 1000 m of the conduit system. Most deformation source models consider only isotropic pressurisation of the conduit. We show that tilt recorded during rapid magma extrusion in 1997 could have also been generated by shear stresses sustained along the conduit wall; these stresses are a consequence of pressure gradients that develop along the conduit. Numerical modelling, incorporating realistic topography, can reproduce both the morphology and half the amplitude of the measured deformation field using a realistic shear stress amplitude, equivalent to a pressure gradient of 3.5 × 10⁴ Pa m⁻¹ along a 1000 m long conduit with a 15 m radius. This shear stress model has advantages over the isotropic pressure models because it does not require either physically unattainable overpressures or source radii larger than 200 m to explain the same deformation.

  10. Experimental measurement and modeling of snow accumulation and snowmelt in a mountain microcatchment

    NASA Astrophysics Data System (ADS)

    Danko, Michal; Krajčí, Pavel; Hlavčo, Jozef; Kostka, Zdeněk; Holko, Ladislav

    2016-04-01

    Fieldwork is a very useful source of data in all geosciences, and this naturally applies to snow hydrology as well. Snow accumulation and snowmelt are spatially very heterogeneous, especially in non-forested mountain environments, and direct field measurements provide the most accurate information about them. Quantification and understanding of the processes that cause these spatial differences are crucial in the prediction and modelling of runoff volumes in the spring snowmelt period. This study presents possibilities for detailed measurement and modeling of snow cover characteristics in a mountain experimental microcatchment located in the Western Tatra mountains in the northern part of Slovakia. The catchment area is 0.059 km² and the mean altitude is 1500 m a.s.l. The measurement network consists of 27 snow poles, 3 small snow lysimeters, a discharge measurement device and a standard automatic weather station. Snow depth and snow water equivalent (SWE) were measured twice a month near the snow poles. These measurements were used to estimate spatial differences in the accumulation of SWE. Snowmelt outflow was measured by small snow lysimeters. Measurements were performed in winter 2014/2015. Snow water equivalent variability was very high for such a small area: differences between particular measuring points reached 600 mm at the time of maximum SWE. The results indicated good performance of the snow lysimeters in identifying the timing of snowmelt: the increase in snowmelt measured by a snow lysimeter had the same timing as the increase in discharge at the catchment's outlet and as the increase in air temperature above the freezing point. The measured data were afterwards used in the distributed rainfall-runoff model MIKE-SHE. Several methods were used for the spatial distribution of precipitation and snow water equivalent. The model was able to simulate snow water equivalent and snowmelt timing at a daily step reasonably well. Simulated discharges were slightly overestimated in later spring.

  11. Spatio-temporal variability of snow water equivalent in the extra-tropical Andes Cordillera from distributed energy balance modeling and remotely sensed snow cover

    NASA Astrophysics Data System (ADS)

    Cornwell, E.; Molotch, N. P.; McPhee, J.

    2016-01-01

    Seasonal snow cover is the primary water source for human use and ecosystems along the extratropical Andes Cordillera. Despite its importance, relatively little research has been devoted to understanding the properties, distribution and variability of this natural resource. This research provides high-resolution (500 m), daily distributed estimates of end-of-winter and spring snow water equivalent over a 152 000 km² domain that includes the mountainous reaches of central Chile and Argentina. Remotely sensed fractional snow-covered area and other relevant forcings are combined with extrapolated data from meteorological stations and a simplified physically based energy balance model in order to obtain melt-season melt fluxes that are then aggregated to estimate the end-of-winter (or peak) snow water equivalent (SWE). Peak SWE estimates show an overall coefficient of determination R² of 0.68 and RMSE of 274 mm compared to observations at 12 automatic snow water equivalent sensors distributed across the model domain, with R² values between 0.32 and 0.88. Regional estimates of peak SWE accumulation show differential patterns strongly modulated by elevation, latitude and position relative to the continental divide. The spatial distribution of peak SWE shows that the 4000-5000 m a.s.l. elevation band is significant for snow accumulation, despite having a smaller surface area than the 3000-4000 m a.s.l. band. On average, maximum snow accumulation is observed in early September in the western Andes, and in early October on the eastern side of the continental divide. The results presented here have the potential of informing applications such as seasonal forecast model assessment and improvement, regional climate model validation, as well as evaluation of observational networks and water resource infrastructure development.
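
The reconstruction idea, aggregating modeled melt fluxes weighted by remotely sensed fractional snow-covered area (fSCA) back to the accumulation peak, reduces for a single pixel to a weighted sum. The daily values below are invented for illustration:

```python
# Synthetic daily melt-season series for one pixel (illustrative values):
# potential melt from an energy balance model, and remotely sensed fSCA.
daily_melt_mm = [0, 2, 5, 8, 12, 15, 18, 20, 16, 10, 4, 0]
daily_fsca    = [1.0, 1.0, 1.0, 0.9, 0.9, 0.8, 0.6, 0.4, 0.2, 0.1, 0.0, 0.0]

# Reconstructed peak (end-of-winter) SWE: the melt that actually occurred,
# i.e. potential melt scaled by the snow-covered fraction, summed over the
# ablation season back from snow disappearance.
peak_swe_mm = sum(m * f for m, f in zip(daily_melt_mm, daily_fsca))
```

Applied pixel by pixel over the domain, this yields the distributed peak-SWE maps the abstract describes; the hard part in practice is producing credible daily melt and fSCA fields, not the summation itself.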

  12. Constraints on the rupture process of the 17 August 1999 Izmit earthquake

    NASA Astrophysics Data System (ADS)

    Bouin, M.-P.; Clévédé, E.; Bukchin, B.; Mostinski, A.; Patau, G.

    2003-04-01

    Kinematic and static models of the 17 August 1999 Izmit earthquake published in the literature differ considerably from one another. In order to extract the characteristic features of this event, we determine integral estimates of its geometry, source duration and rupture propagation. Those estimates are given by the stress glut moments of total degree 2, obtained by inverting long period surface wave (LPSW) amplitude spectra (Bukchin, 1995). We draw comparisons with the integral estimates deduced from kinematic models obtained by inversion of strong motion data sets and/or teleseismic body waves (Bouchon et al., 2002; Delouis et al., 2000; Yagi and Kikuchi, 2000; Sekiguchi and Iwata, 2002). While the equivalent rupture zone and the eastward directivity are consistent among all models, the LPSW solution displays a strongly unilateral character of the rupture associated with a short rupture duration that is not compatible with the solutions deduced from the published models. Using a simple equivalent kinematic model, we reproduce the integral estimates of the rupture process by adjusting a few free parameters controlling the western and eastern parts of the rupture. We show that the LPSW solution strongly suggests that (1) there was significant moment release on the eastern segment of the activated fault system during the Izmit earthquake, and (2) the rupture velocity decreases on this segment. We discuss how these results help explain the scatter among the source processes published for this earthquake.

  13. Energy and direction distribution of neutrons in workplace fields: implication of the results from the EVIDOS project for the set-up of simulated workplace fields.

    PubMed

    Luszik-Bhadra, M; Lacoste, V; Reginatto, M; Zimbal, A

    2007-01-01

    Workplace neutron spectra from nuclear facilities obtained within the European project EVIDOS are compared with those of the simulated workplace fields CANEL and SIGMA and with fields set up with radionuclide sources at the PTB. Contributions of neutrons to ambient dose equivalent and personal dose equivalent are given in three energy intervals (thermal, intermediate and fast neutrons), together with the corresponding direction distributions, characterised by three different types (isotropic, weakly directed and directed). The comparison shows that none of the simulated workplace fields investigated here can model all the characteristics of the fields observed at power reactors.

  14. 10 CFR 35.49 - Suppliers for sealed sources or devices for medical use.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... accordance with a license issued under 10 CFR part 30 and 10 CFR 32.74 of this chapter or equivalent requirements of an Agreement State; (b) Sealed sources or devices non-commercially transferred from a Part 35... in accordance with a license issued under 10 CFR part 30 or the equivalent requirements of an...

  15. 10 CFR 35.49 - Suppliers for sealed sources or devices for medical use.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... accordance with a license issued under 10 CFR Part 30 and 10 CFR 32.74 of this chapter or equivalent requirements of an Agreement State; (b) Sealed sources or devices non-commercially transferred from a Part 35... in accordance with a license issued under 10 CFR Part 30 or the equivalent requirements of an...

  16. 10 CFR 35.49 - Suppliers for sealed sources or devices for medical use.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... accordance with a license issued under 10 CFR Part 30 and 10 CFR 32.74 of this chapter or equivalent requirements of an Agreement State; (b) Sealed sources or devices non-commercially transferred from a Part 35... in accordance with a license issued under 10 CFR Part 30 or the equivalent requirements of an...

  17. 10 CFR 35.49 - Suppliers for sealed sources or devices for medical use.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... accordance with a license issued under 10 CFR Part 30 and 10 CFR 32.74 of this chapter or equivalent requirements of an Agreement State; (b) Sealed sources or devices non-commercially transferred from a Part 35... in accordance with a license issued under 10 CFR Part 30 or the equivalent requirements of an...

  18. 10 CFR 35.49 - Suppliers for sealed sources or devices for medical use.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... accordance with a license issued under 10 CFR Part 30 and 10 CFR 32.74 of this chapter or equivalent requirements of an Agreement State; (b) Sealed sources or devices non-commercially transferred from a Part 35... in accordance with a license issued under 10 CFR Part 30 or the equivalent requirements of an...

  19. Transformation of body force localized near the surface of a half-space into equivalent surface stresses.

    PubMed

    Rouge, Clémence; Lhémery, Alain; Ségur, Damien

    2013-10-01

    An electromagnetic acoustic transducer (EMAT) or a laser used to generate elastic waves in a component is often described as a source of body force confined in a layer close to the surface. On the other hand, models for elastic wave radiation more efficiently handle sources described as distributions of surface stresses. Equivalent surface stresses can be obtained by integrating the body force with respect to depth; they are assumed to generate the same field as the one that would be generated by the body force. Such an integration scheme can be applied to the Lorentz force in a conventional EMAT configuration. When applied to the magnetostrictive force generated by an EMAT in a ferromagnetic material, the same scheme fails, predicting a null stress. Transforming body force into equivalent surface stresses therefore requires taking into account higher order terms of the force moments, the zeroth order being the simple integration of the force over depth. In this paper, such a transformation is derived up to the second order, assuming that body forces are localized at depths shorter than the ultrasonic wavelength. Two formulations are obtained, each having advantages depending on the application sought. They apply regardless of the nature of the force considered.
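
The failure of plain depth integration for a magnetostriction-like force, and its rescue by higher force moments, can be seen numerically. The exponential depth profiles below are illustrative stand-ins, not the paper's actual force distributions:

```python
import math

def force_moments(f, depth, n=200000):
    """Zeroth and first moments of a body-force depth profile f(z) over
    [0, depth], by midpoint-rule integration. The zeroth moment is the
    classic equivalent surface stress; the first moment is the next term
    in the expansion."""
    dz = depth / n
    m0 = m1 = 0.0
    for k in range(n):
        z = (k + 0.5) * dz
        m0 += f(z) * dz
        m1 += z * f(z) * dz
    return m0, m1

DELTA = 5e-5   # electromagnetic skin depth [m], illustrative

# Lorentz-like force: simple exponential decay over the skin depth
m0_lor, m1_lor = force_moments(lambda z: math.exp(-z / DELTA), 10 * DELTA)

# Magnetostriction-like force whose depth integral vanishes: the zeroth
# moment is (numerically) null, but the first moment survives
f_ms = lambda z: (1.0 - z / DELTA) * math.exp(-z / DELTA)
m0_ms, m1_ms = force_moments(f_ms, 10 * DELTA)
```

Here m0_lor recovers the usual surface stress for the Lorentz-like profile, while the vanishing m0_ms mirrors the null-stress problem the abstract describes: the information about the magnetostriction-like source sits in the first (and higher) moments.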

  20. Incorporation of dosimetry in the derivation of reference concentrations for ambient or workplace air: a conceptual approach.

    PubMed

    Oller, Adriana R; Oberdörster, Günter

    2016-09-01

Dosimetric models are essential tools to refine inhalation risk assessments based on local respiratory effects. Dosimetric adjustments to account for differences in aerosol particle size and respiratory tract deposition and/or clearance among rodents, workers, and the general public can be applied to experimentally- and epidemiologically-determined points of departure (PODs) to calculate size-selected (e.g., PM10, inhalable aerosol fraction, respirable aerosol fraction) equivalent concentrations (e.g., HEC or Human Equivalent Concentration; REC or Rodent Equivalent Concentration). A modified POD (e.g., HEC) can then feed into existing frameworks for the derivation of occupational or ambient air concentration limits or reference concentrations. HECs that are expressed in terms of aerosol particle sizes experienced by humans but are derived from animal studies allow proper comparison of exposure levels and associated health effects in animals and humans. This can inform differences in responsiveness between animals and humans, based on the same deposited or retained doses and can also allow the use of both data sources in an integrated weight of evidence approach for hazard and risk assessment purposes. Whenever possible, default values should be replaced by substance-specific and target population-specific parameters. Assumptions and sources of uncertainty need to be clearly reported.

  1. Incorporation of dosimetry in the derivation of reference concentrations for ambient or workplace air: a conceptual approach

    PubMed Central

    Oberdörster, Günter

    2016-01-01

    Dosimetric models are essential tools to refine inhalation risk assessments based on local respiratory effects. Dosimetric adjustments to account for differences in aerosol particle size and respiratory tract deposition and/or clearance among rodents, workers, and the general public can be applied to experimentally- and epidemiologically-determined points of departure (PODs) to calculate size-selected (e.g., PM10, inhalable aerosol fraction, respirable aerosol fraction) equivalent concentrations (e.g., HEC or Human Equivalent Concentration; REC or Rodent Equivalent Concentration). A modified POD (e.g., HEC) can then feed into existing frameworks for the derivation of occupational or ambient air concentration limits or reference concentrations. HECs that are expressed in terms of aerosol particle sizes experienced by humans but are derived from animal studies allow proper comparison of exposure levels and associated health effects in animals and humans. This can inform differences in responsiveness between animals and humans, based on the same deposited or retained doses and can also allow the use of both data sources in an integrated weight of evidence approach for hazard and risk assessment purposes. Whenever possible, default values should be replaced by substance-specific and target population-specific parameters. Assumptions and sources of uncertainty need to be clearly reported. PMID:27721518

  2. Source effects on the simulation of the strong ground motion of the 2011 Lorca earthquake

    NASA Astrophysics Data System (ADS)

    Saraò, Angela; Moratto, Luca; Vuan, Alessandro; Mucciarelli, Marco; Jimenez, Maria Jose; Garcia Fernandez, Mariano

    2016-04-01

    On May 11, 2011, a moderate seismic event (Mw=5.2) struck the city of Lorca (south-east Spain), causing nine casualties, a large number of injuries, and damage to civil buildings. The largest PGA value ever recorded in Spain (360 cm/s2) was observed at the accelerometric station located in Lorca (LOR), and it was attributed to source directivity rather than to local site effects. In recent years, different source models, retrieved from inversions of geodetic or seismological data, or a combination of the two, have been published. To investigate the variability that equivalent source models of an average earthquake can introduce in the computation of strong motion, we calculated seismograms (up to 1 Hz) using an approach based on wavenumber integration and, as input, four different source models taken from the literature. The source models differ mainly in the slip distribution on the fault. Our results show that, as an effect of the different sources, the ground motion variability, in terms of pseudo-spectral velocity (1 s), can reach one order of magnitude for near-source receivers or for sites influenced by the forward-directivity effect. Finally, we computed the strong motion at frequencies higher than 1 Hz using Empirical Green's Functions and the source model parameters that best reproduce the recorded shaking up to 1 Hz: the computed seismograms satisfactorily fit the signals recorded at the LOR station as well as at the other stations close to the source.

  3. Equivalent source modeling of the main field using MAGSAT data

    NASA Technical Reports Server (NTRS)

    1981-01-01

    Modeling and software development of the main field using MAGSAT data is discussed. The cause of the apparent bulge in the power spectrum of Dipole model no. 4 was investigated by simulation with the POGO crustal anomaly field model. Results for cases with and without noise, and the spectra of selected results, are given. It is indicated that the beginning of the bump in the spectrum of Dipole no. 4 is due to crustal influence, while the departure of the spectrum from that of MGST (12/80-2) around expansion order 17 is due to the resolution limits of the dipole density.

  4. A fast algorithm for forward-modeling of gravitational fields in spherical coordinates with 3D Gauss-Legendre quadrature

    NASA Astrophysics Data System (ADS)

    Zhao, G.; Liu, J.; Chen, B.; Guo, R.; Chen, L.

    2017-12-01

    Forward modeling of gravitational fields at large scale requires considering the curvature of the Earth and evaluating Newton's volume integral in spherical coordinates. To acquire fast and accurate gravitational effects for subsurface structures, the subsurface mass distribution is usually discretized into small spherical prisms (called tesseroids). The gravity fields of tesseroids are generally calculated numerically. One of the commonly used numerical methods is 3D Gauss-Legendre quadrature (GLQ). However, traditional GLQ integration suffers from low computational efficiency and relatively poor accuracy when the observation surface is close to the source region. We developed a fast, high-accuracy 3D GLQ integration scheme based on the equivalence of kernel matrices, adaptive discretization, and parallelization using OpenMP. The kernel-matrix equivalence strategy increases efficiency and reduces memory consumption by calculating and storing identical matrix elements in each kernel matrix only once. The adaptive discretization strategy is used to improve accuracy. Numerical investigations show that the execution time of the proposed method is reduced by two orders of magnitude compared with a traditional implementation lacking these optimizations. High-accuracy results are also guaranteed even when the computation points are close to the source region. In addition, the algorithm reduces the memory requirement by a factor of N compared with the traditional method, where N is the number of discretizations of the source region in the longitudinal direction. This makes large-scale gravity forward modeling and inversion with a fine discretization possible.
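As a minimal sketch of the baseline technique this abstract builds on, the gravitational potential of a single tesseroid can be evaluated with 3D Gauss-Legendre quadrature of Newton's integral (the paper's kernel-matrix and adaptive-discretization optimizations are not reproduced here; the function name and node count are illustrative):

```python
import numpy as np

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def tesseroid_potential(obs, bounds, density, n=4):
    """Gravitational potential of a tesseroid (spherical prism) at an
    external point, via 3D Gauss-Legendre quadrature of Newton's integral.

    obs    : (r, lat, lon) of the observation point (m, rad, rad)
    bounds : (r1, r2, lat1, lat2, lon1, lon2) of the tesseroid (m, rad)
    density: constant density (kg/m^3); n: GLQ nodes per dimension
    """
    r, lat, lon = obs
    r1, r2, la1, la2, lo1, lo2 = bounds
    x, w = np.polynomial.legendre.leggauss(n)  # nodes and weights on [-1, 1]
    # map the standard nodes into each integration interval
    rs = 0.5 * (r2 - r1) * x + 0.5 * (r2 + r1)
    las = 0.5 * (la2 - la1) * x + 0.5 * (la2 + la1)
    los = 0.5 * (lo2 - lo1) * x + 0.5 * (lo2 + lo1)
    total = 0.0
    for i, rp in enumerate(rs):
        for j, lap in enumerate(las):
            for k, lop in enumerate(los):
                # angular distance between source point and observation point
                cospsi = (np.sin(lat) * np.sin(lap)
                          + np.cos(lat) * np.cos(lap) * np.cos(lon - lop))
                ell = np.sqrt(r * r + rp * rp - 2.0 * r * rp * cospsi)
                # Newton kernel with the spherical volume element r'^2 cos(lat')
                total += w[i] * w[j] * w[k] * rp * rp * np.cos(lap) / ell
    jacobian = 0.125 * (r2 - r1) * (la2 - la1) * (lo2 - lo1)
    return G * density * jacobian * total
```

The kernel is smooth when the observation point is far from the tesseroid, so a low node count suffices there; the accuracy loss the abstract describes appears as the observation point approaches the source region, which is what the adaptive subdivision addresses.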

  5. Inter-Individual Variability in High-Throughput Risk ...

    EPA Pesticide Factsheets

    We incorporate realistic human variability into an open-source high-throughput (HT) toxicokinetics (TK) modeling framework for use in a next-generation risk prioritization approach. Risk prioritization involves rapid triage of thousands of environmental chemicals, most of which have little or no existing TK data. Chemicals are prioritized based on model estimates of hazard and exposure, to decide which chemicals should be first in line for further study. Hazard may be estimated with in vitro HT screening assays, e.g., U.S. EPA's ToxCast program. Bioactive ToxCast concentrations can be extrapolated to doses that produce equivalent concentrations in body tissues using a reverse TK approach in which generic TK models are parameterized with 1) chemical-specific parameters derived from in vitro measurements and predicted from chemical structure, and 2) physiological parameters for a virtual population. Here we draw physiological parameters from realistic estimates of distributions of demographic and anthropometric quantities in the modern U.S. population, based on the most recent CDC NHANES data. A Monte Carlo approach, accounting for the correlation structure in physiological parameters, is used to estimate ToxCast equivalent doses for the most sensitive portion of the population. To quantify risk, ToxCast equivalent doses are compared to estimates of exposure rates based on Bayesian inferences drawn from NHANES urinary analyte biomonitoring data. The inclusion
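The reverse-TK step described above can be sketched with purely illustrative numbers (this is not the actual EPA framework or its API; the AC50 value, the log-normal Css distribution, and the 95th-percentile choice are assumptions for demonstration only):

```python
import numpy as np

rng = np.random.default_rng(0)

def toxcast_equivalent_dose(ac50_uM, css_per_dose, quantile=0.95):
    """Reverse-TK sketch: convert an in vitro bioactive concentration (AC50)
    into an oral equivalent dose across a simulated population.

    css_per_dose: Monte Carlo samples of steady-state plasma concentration
    per unit oral dose (uM per mg/kg/day), one per virtual individual.
    Returns the dose protective of the most sensitive `quantile` of the
    population (high-Css individuals reach the AC50 at lower doses).
    """
    doses = ac50_uM / css_per_dose            # mg/kg/day, per individual
    return np.quantile(doses, 1.0 - quantile)  # lower tail = sensitive group

# hypothetical population variability in Css (log-normal, illustrative only)
css = rng.lognormal(mean=np.log(1.5), sigma=0.4, size=10_000)
dose_sensitive = toxcast_equivalent_dose(10.0, css, quantile=0.95)  # protective
dose_median = toxcast_equivalent_dose(10.0, css, quantile=0.50)     # typical
```

In the approach the abstract describes, the Css samples would come from generic TK models parameterized with NHANES-derived, correlated physiological parameters rather than a bare log-normal draw.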

  6. 40 CFR 63.41 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... existing equipment will be equivalent to that level of control currently achieved by other well-controlled similar sources (i.e., equivalent to the level of control that would be provided by a current BACT, LAER... control equipment will be equivalent to the percent control efficiency provided by the control equipment...

  7. 40 CFR 63.41 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... existing equipment will be equivalent to that level of control currently achieved by other well-controlled similar sources (i.e., equivalent to the level of control that would be provided by a current BACT, LAER... control equipment will be equivalent to the percent control efficiency provided by the control equipment...

  8. 40 CFR 63.41 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... existing equipment will be equivalent to that level of control currently achieved by other well-controlled similar sources (i.e., equivalent to the level of control that would be provided by a current BACT, LAER... control equipment will be equivalent to the percent control efficiency provided by the control equipment...

  9. 40 CFR 63.41 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... existing equipment will be equivalent to that level of control currently achieved by other well-controlled similar sources (i.e., equivalent to the level of control that would be provided by a current BACT, LAER... control equipment will be equivalent to the percent control efficiency provided by the control equipment...

  10. 40 CFR 63.41 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... existing equipment will be equivalent to that level of control currently achieved by other well-controlled similar sources (i.e., equivalent to the level of control that would be provided by a current BACT, LAER... control equipment will be equivalent to the percent control efficiency provided by the control equipment...

  11. Towards a Comprehensive Model of Jet Noise Using an Acoustic Analogy and Steady RANS Solutions

    NASA Technical Reports Server (NTRS)

    Miller, Steven A. E.

    2013-01-01

    An acoustic analogy is developed to predict the noise from jet flows. It contains two source models that independently predict the noise from turbulence and from shock wave-shear layer interactions. The acoustic analogy is based on the Euler equations and separates the sources from propagation. Propagation effects are taken into account by calculating the vector Green's function of the linearized Euler equations. The sources are modeled following the work of Tam and Auriault, Morris and Boluriaan, and Morris and Miller. A statistical model of the two-point cross-correlation of the velocity fluctuations is used to describe the turbulence. The acoustic analogy attempts to take into account the correct scaling of the sources for a wide range of nozzle pressure and temperature ratios. It does not make assumptions regarding fine- or large-scale turbulent noise sources, self- or shear-noise, or convective amplification. The acoustic analogy is partially informed by three-dimensional steady Reynolds-Averaged Navier-Stokes (RANS) solutions that include the nozzle geometry. The predictions are compared with experiments on jets operating at subsonic through supersonic conditions, both unheated and heated. Predictions generally capture the scaling of both mixing noise and broadband shock-associated noise (BBSAN) for the conditions examined, but some discrepancies remain that are due to the accuracy of the steady RANS turbulence model closure, the equivalent sources, and the use of a simplified vector Green's function solver for the linearized Euler equations.

  12. Catalytic Ignition and Upstream Reaction Propagation in Monolith Reactors

    NASA Technical Reports Server (NTRS)

    Struk, Peter M.; Dietrich, Daniel L.; Miller, Fletcher J.; T'ien, James S.

    2007-01-01

    Using numerical simulations, this work demonstrates a concept called back-end ignition for lighting off and pre-heating a catalytic monolith in a power generation system. In this concept, a downstream heat source (e.g. a flame) or resistive heating in the downstream portion of the monolith initiates a localized catalytic reaction which subsequently propagates upstream and heats the entire monolith. The simulations used a transient numerical model of a single catalytic channel which characterizes the behavior of the entire monolith. The model treats both the gas and solid phases and includes detailed homogeneous and heterogeneous reactions. An important parameter in the model for back-end ignition is upstream heat conduction along the solid. The simulations used both dry and wet CO chemistry as a model fuel for the proof-of-concept calculations; the presence of water vapor can trigger homogeneous reactions, provided that gas-phase temperatures are adequately high and there is sufficient fuel remaining after surface reactions. With a sufficiently high inlet equivalence ratio, back-end ignition occurs using the thermophysical properties of both a ceramic and a metal monolith (coated with platinum in both cases), with the heat-up times significantly faster for the metal monolith. For lower equivalence ratios, back-end ignition occurs without upstream propagation. Once light-off and propagation occur, the inlet equivalence ratio can be reduced significantly while still maintaining an ignited monolith, as demonstrated by calculations using complete monolith heating.

  13. Towards a new approach to model guidance laws

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Borne, P.; Duflos, E.; Vanheeghe, P.

    1994-12-31

    Proportional navigation laws have been widely used and studied. Nevertheless, very few publications rigorously explain the origin of all these laws. For researchers who are starting to work on guidance laws, a feeling of confusion can result. For others, this lack of explanation can be, for example, the source of the difficulty in making true proportional navigation equivalent to pure proportional navigation. The authors propose here a way to model guidance laws in order to fill this gap. The first consequence is a better exploration of the kinematic behaviors arising during the guidance process. The second consequence is the definition of a new 3D guidance law which can be seen as a generalization of true proportional navigation. Moreover, this generalization allows the latter law to become equivalent to pure proportional navigation in terms of the initial conditions which allow the object to reach its target.

  14. Precise SAR measurements in the near-field of RF antenna systems

    NASA Astrophysics Data System (ADS)

    Hakim, Bandar M.

    Wireless devices must meet specific safety radiation limits, and in order to assess the health effects of such devices, standard procedures are used in which standard phantoms, tissue-equivalent liquids, and miniature electric-field probes are employed. The accuracy of such measurements depends on the precision in measuring the dielectric properties of the tissue-equivalent liquids and the associated calibrations of the electric-field probes. This thesis describes work on the theoretical modeling and experimental measurement of the complex permittivity of tissue-equivalent liquids, and the associated calibration of miniature electric-field probes. The measurement method is based on measurements of the field attenuation factor and power reflection coefficient of a tissue-equivalent sample. A novel method, to the best of the author's knowledge, for determining the dielectric properties and probe calibration factors is described and validated. The measurement system is validated using saline at different concentrations, and measurements of complex permittivity and calibration factors have been made on tissue-equivalent liquids at 900 MHz and 1800 MHz. Uncertainty analyses have been conducted to study the measurement system's sensitivity. Using the same waveguide to measure tissue-equivalent permittivity and to calibrate e-field probes eliminates a source of uncertainty associated with using two different measurement systems. The measurement system is used to test GSM cell phones at 900 MHz and 1800 MHz for Specific Absorption Rate (SAR) compliance using a Specific Anthropomorphic Mannequin (SAM) phantom.

  15. 10 CFR 835.702 - Individual monitoring records.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... emergency exposures. (b) Recording of the non-uniform equivalent dose to the skin is not required if the... internal dose (committed effective dose or committed equivalent dose) is not required for any monitoring...: (i) The effective dose from external sources of radiation (equivalent dose to the whole body may be...

  16. 10 CFR 835.702 - Individual monitoring records.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... emergency exposures. (b) Recording of the non-uniform equivalent dose to the skin is not required if the... internal dose (committed effective dose or committed equivalent dose) is not required for any monitoring...: (i) The effective dose from external sources of radiation (equivalent dose to the whole body may be...

  17. 10 CFR 835.702 - Individual monitoring records.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... emergency exposures. (b) Recording of the non-uniform equivalent dose to the skin is not required if the... internal dose (committed effective dose or committed equivalent dose) is not required for any monitoring...: (i) The effective dose from external sources of radiation (equivalent dose to the whole body may be...

  18. 10 CFR 835.702 - Individual monitoring records.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... emergency exposures. (b) Recording of the non-uniform equivalent dose to the skin is not required if the... internal dose (committed effective dose or committed equivalent dose) is not required for any monitoring...: (i) The effective dose from external sources of radiation (equivalent dose to the whole body may be...

  19. 10 CFR 835.702 - Individual monitoring records.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... emergency exposures. (b) Recording of the non-uniform equivalent dose to the skin is not required if the... internal dose (committed effective dose or committed equivalent dose) is not required for any monitoring...: (i) The effective dose from external sources of radiation (equivalent dose to the whole body may be...

  20. Reverse radiance: a fast accurate method for determining luminance

    NASA Astrophysics Data System (ADS)

    Moore, Kenneth E.; Rykowski, Ronald F.; Gangadhara, Sanjay

    2012-10-01

    Reverse ray tracing from a region of interest backward to the source has long been proposed as an efficient method of determining luminous flux. The idea is to trace rays only from where the final flux needs to be known back to the source, rather than tracing in the forward direction from the source outward to see where the light goes. Once the reverse ray reaches the source, the radiance the equivalent forward ray would have represented is determined and the resulting flux computed. Although reverse ray tracing is conceptually simple, the method critically depends upon an accurate source model in both the near and far field. An overly simplified source model, such as an ideal Lambertian surface, substantially detracts from the accuracy, and thus the benefit, of the method. This paper introduces an improved method of reverse ray tracing that we call Reverse Radiance, which avoids assumptions about the source properties. The new method uses measured data from a Source Imaging Goniometer (SIG) that simultaneously measures near- and far-field luminous data. Incorporating these data into a fast reverse ray tracing integration method yields fast, accurate results for a wide variety of illumination problems.

  1. Estimation of neutron dose equivalent at the mezzanine of the Advanced Light Source and the laboratory boundary using the ORNL program MORSE.

    PubMed

    Sun, R K

    1990-12-01

    To investigate the radiation effect of neutrons near the Advanced Light Source (ALS) at Lawrence Berkeley Laboratory (LBL) with respect to the neutron dose equivalents in nearby occupied areas and at the site boundary, the neutron transport code MORSE, from Oak Ridge National Laboratory (ORNL), was used. These dose equivalents result from both skyshine neutrons transported by air scattering and direct neutrons penetrating the shielding. The ALS neutron sources are a 50-MeV linear accelerator and its transfer line, a 1.5-GeV booster, a beam extraction line, and a 1.9-GeV storage ring. The most conservative total occupational dose-equivalent rate in the center of the ALS mezzanine, 39 m from the ALS center, was found to be 1.14 x 10^-3 Sv per 2000-h "occupational" year, and the total environmental dose-equivalent rate at the ALS boundary, 125 m from the ALS center, was found to be 3.02 x 10^-4 Sv per 8760-h calendar year. More realistic dose-equivalent rates, using the nominal (expected) storage-ring current, were calculated to be 1.0 x 10^-4 Sv per occupational year and 2.65 x 10^-5 Sv per calendar year, respectively, which are much lower than the DOE reporting levels.

  2. Global equivalent magnetization of the oceanic lithosphere

    NASA Astrophysics Data System (ADS)

    Dyment, J.; Choi, Y.; Hamoudi, M.; Lesur, V.; Thebault, E.

    2015-11-01

    As a by-product of the construction of a new World Digital Magnetic Anomaly Map over oceanic areas, we use an original approach based on global forward modeling of seafloor spreading magnetic anomalies and their comparison with the available marine magnetic data to derive the first map of the equivalent magnetization over the World's oceans. This map reveals consistent patterns related to the age of the oceanic lithosphere, the spreading rate at which it was formed, and the presence of mantle thermal anomalies that affect seafloor spreading and the resulting lithosphere. As for age, the equivalent magnetization decreases significantly during the first 10-15 Myr after formation, probably due to the alteration of crustal magnetic minerals under pervasive hydrothermal alteration, and then increases regularly between 20 and 70 Ma, reflecting variations in field strength or source effects such as the acquisition of a secondary magnetization. As for spreading rate, the equivalent magnetization is twice as strong in areas formed at fast rates as in those formed at slow rates, with a threshold at ∼40 km/Myr, in agreement with an independent global analysis of the amplitude of Anomaly 25. This result, combined with those from the study of the anomalous skewness of marine magnetic anomalies, allows building a unified model for the magnetic structure of normal oceanic lithosphere as a function of spreading rate. Finally, specific areas affected by thermal mantle anomalies at the time of their formation exhibit peculiar equivalent magnetization signatures, such as the cold Australian-Antarctic Discordance, marked by a lower magnetization, and several hotspots, marked by a high magnetization.

  3. Water equivalency evaluation of PRESAGE® dosimeters for dosimetry of Cs-137 and Ir-192 brachytherapy sources

    NASA Astrophysics Data System (ADS)

    Gorjiara, Tina; Hill, Robin; Kuncic, Zdenka; Baldock, Clive

    2010-11-01

    A major challenge in brachytherapy dosimetry is the measurement of steep dose gradients. This can be achieved with a high-spatial-resolution three-dimensional (3D) dosimeter. PRESAGE® is a polyurethane-based dosimeter which is suitable for 3D dosimetry. Since an ideal dosimeter is radiologically water equivalent, we have investigated the relative dose response of three different PRESAGE® formulations, two with a lower chloride and bromide content than the original one, for Cs-137 and Ir-192 brachytherapy sources. Doses were calculated using the EGSnrc Monte Carlo package. Our results indicate that PRESAGE® dosimeters are suitable for relative dose measurements of Cs-137 and Ir-192 brachytherapy sources, and that the lower-halogen-content PRESAGE® dosimeters are more water equivalent than the original formulation.

  4. Seismic noise frequency dependent P and S wave sources

    NASA Astrophysics Data System (ADS)

    Stutzmann, E.; Schimmel, M.; Gualtieri, L.; Farra, V.; Ardhuin, F.

    2013-12-01

    Seismic noise in the period band 3-10 s is generated in the oceans by the interaction of ocean waves. The noise signal is dominated by Rayleigh waves, but body waves can be extracted using a beamforming approach. We select the TAPAS array, deployed in southern Spain between June 2008 and September 2009, and use the vertical and horizontal components to extract noise P and S waves, respectively. Data are filtered in narrow frequency bands, and we select beam azimuths and slownesses that correspond to the largest continuous sources per day. Our procedure automatically discards earthquakes, which are localized in short time durations. Using this approach, we detect many more noise P waves than S waves. Source locations are determined by back-projecting the detected slowness/azimuth pairs. P and S waves are generated in nearby areas, and both source locations are frequency dependent. Long-period sources are dominantly in the South Atlantic and Indian Oceans, whereas shorter-period sources are rather in the North Atlantic Ocean. We further show that the detected S waves are dominantly SV waves. We model the observed body waves using an ocean wave model that takes into account all possible wave interactions, including coastal reflection. We use the wave model to separate direct and multiply reflected phases for P and S waves, respectively. We show that in the South Atlantic the complex source pattern can be explained by the existence of both coastal and pelagic sources, whereas in the North Atlantic most body wave sources are pelagic. For each detected source, we determine the equivalent source magnitude, which is compared to the model.
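The beamforming step this abstract relies on can be sketched as classic delay-and-sum over an array (a minimal sketch, not the authors' processing chain; the array geometry, slowness grid, and circular-shift alignment are illustrative simplifications):

```python
import numpy as np

def plane_wave_beam_power(traces, coords, dt, slowness, azimuth):
    """Delay-and-sum beam power for a trial plane wave crossing an array.

    traces  : (n_sta, n_samp) array of waveforms, one row per station
    coords  : (n_sta, 2) station east/north offsets (km)
    dt      : sample interval (s)
    slowness: horizontal slowness to test (s/km)
    azimuth : propagation azimuth to test (rad, clockwise from north)
    """
    # horizontal slowness vector of the trial plane wave
    s = slowness * np.array([np.sin(azimuth), np.cos(azimuth)])
    beam = np.zeros(traces.shape[1])
    for trace, xy in zip(traces, coords):
        delay = float(xy @ s)  # plane-wave delay at this station (s)
        # advance each trace by its predicted delay so arrivals align
        beam += np.roll(trace, -int(round(delay / dt)))
    beam /= len(traces)
    return float(np.mean(beam ** 2))  # maximal when the trial matches the wave
```

Scanning this power over a grid of slowness/azimuth pairs and picking the daily maxima yields the detections that are then back-projected to locate the noise sources.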

  5. NOTE: Development of modified voxel phantoms for the numerical dosimetric reconstruction of radiological accidents involving external sources: implementation in SESAME tool

    NASA Astrophysics Data System (ADS)

    Courageot, Estelle; Sayah, Rima; Huet, Christelle

    2010-05-01

    Estimating the dose distribution in a victim's body is a relevant indicator in assessing biological damage from exposure in the event of a radiological accident caused by an external source. When the dose distribution is evaluated with a numerical anthropomorphic model, the posture and morphology of the victim have to be reproduced as realistically as possible. Several years ago, IRSN developed a specific software application, called the simulation of external source accident with medical images (SESAME), for the dosimetric reconstruction of radiological accidents by numerical simulation. This tool combines voxel geometry and the MCNP(X) Monte Carlo computer code for radiation-material interaction. This note presents a new functionality in this software that enables the modelling of a victim's posture and morphology based on non-uniform rational B-spline (NURBS) surfaces. The procedure for constructing the modified voxel phantoms is described, along with a numerical validation of this new functionality using a voxel phantom of the RANDO tissue-equivalent physical model.

  6. Development of modified voxel phantoms for the numerical dosimetric reconstruction of radiological accidents involving external sources: implementation in SESAME tool.

    PubMed

    Courageot, Estelle; Sayah, Rima; Huet, Christelle

    2010-05-07

    Estimating the dose distribution in a victim's body is a relevant indicator in assessing biological damage from exposure in the event of a radiological accident caused by an external source. When the dose distribution is evaluated with a numerical anthropomorphic model, the posture and morphology of the victim have to be reproduced as realistically as possible. Several years ago, IRSN developed a specific software application, called the simulation of external source accident with medical images (SESAME), for the dosimetric reconstruction of radiological accidents by numerical simulation. This tool combines voxel geometry and the MCNP(X) Monte Carlo computer code for radiation-material interaction. This note presents a new functionality in this software that enables the modelling of a victim's posture and morphology based on non-uniform rational B-spline (NURBS) surfaces. The procedure for constructing the modified voxel phantoms is described, along with a numerical validation of this new functionality using a voxel phantom of the RANDO tissue-equivalent physical model.

  7. Circuit Models and Experimental Noise Measurements of Micropipette Amplifiers for Extracellular Neural Recordings from Live Animals

    PubMed Central

    Chen, Chang Hao; Pun, Sio Hang; Mak, Peng Un; Vai, Mang I; Klug, Achim; Lei, Tim C.

    2014-01-01

    Glass micropipettes are widely used to record neural activity from single neurons or clusters of neurons extracellularly in live animals. However, to date, there has been no comprehensive study of noise in extracellular recordings with glass micropipettes. The purpose of this work was to assess various noise sources that affect extracellular recordings and to create model systems in which novel micropipette neural amplifier designs can be tested. An equivalent circuit of the glass micropipette and the noise model of this circuit, which accurately describe the various noise sources involved in extracellular recordings, have been developed. Measurement schemes using dead brain tissue as well as extracellular recordings from neurons in the inferior colliculus, an auditory brain nucleus of an anesthetized gerbil, were used to characterize noise performance and amplification efficacy of the proposed micropipette neural amplifier. According to our model, the major noise sources which influence the signal to noise ratio are the intrinsic noise of the neural amplifier and the thermal noise from distributed pipette resistance. These two types of noise were calculated and measured and were shown to be the dominating sources of background noise for in vivo experiments. PMID:25133158
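Of the two dominant noise sources the abstract identifies, the thermal contribution of the distributed pipette resistance follows the standard Johnson-Nyquist formula; a minimal sketch (the 10 MOhm resistance and 10 kHz bandwidth are illustrative values, not figures from the paper):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def thermal_noise_vrms(resistance_ohm, bandwidth_hz, temp_k=310.0):
    """RMS Johnson-Nyquist noise voltage of a resistance (e.g. the distributed
    pipette resistance) over a recording bandwidth; temp_k defaults to body
    temperature for in vivo recordings."""
    return math.sqrt(4.0 * K_B * temp_k * resistance_ohm * bandwidth_hz)

# e.g. a 10 MOhm micropipette over a 10 kHz extracellular band
vn = thermal_noise_vrms(10e6, 10e3)  # on the order of tens of microvolts RMS
```

Because the noise voltage scales with the square root of resistance, lowering the pipette resistance (or the recording bandwidth) directly improves the signal-to-noise ratio of the recording.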

  8. MEG (Magnetoencephalography) multipolar modeling of distributed sources using RAP-MUSIC (Recursively Applied and Projected Multiple Signal Characterization)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mosher, J. C.; Baillet, S.; Jerbi, K.

    2001-01-01

    We describe the use of truncated multipolar expansions for producing dynamic images of cortical neural activation from measurements of the magnetoencephalogram. We use a signal-subspace method to find the locations of a set of multipolar sources, each of which represents a region of activity in the cerebral cortex. Our method builds up an estimate of the sources in a recursive manner, i.e. we first search for point current dipoles, then magnetic dipoles, and finally first-order multipoles. The dynamic behavior of these sources is then computed using a linear fit to the spatiotemporal data. The final step in the procedure is to map each of the multipolar sources into an equivalent distributed source on the cortical surface. The method is illustrated through an application to epileptic interictal MEG data.
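At the core of signal-subspace scans such as RAP-MUSIC is a subspace correlation: the cosine of the principal angle between a candidate source's field pattern and the signal subspace of the data. A minimal sketch with a fabricated random forward model (the sensor count, source count, and noise level are all invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Fabricated forward model: 32 sensors, 3 candidate source topographies (columns)
G = rng.standard_normal((32, 3))

# Synthetic data: source 0 active with a sinusoidal time course, plus noise
t = np.linspace(0.0, 1.0, 200)
B = np.outer(G[:, 0], np.sin(2 * np.pi * 7 * t))
B += 0.05 * rng.standard_normal(B.shape)

# Estimate the signal subspace from the leading left singular vector(s)
U, _, _ = np.linalg.svd(B, full_matrices=False)
Us = U[:, :1]

def subspace_correlation(g, Us):
    """Cosine of the principal angle between span(g) and the signal subspace."""
    return np.linalg.norm(Us.T @ (g / np.linalg.norm(g)))

corrs = [subspace_correlation(G[:, k], Us) for k in range(3)]
best = int(np.argmax(corrs))  # the true source's topography should score near 1
```

RAP-MUSIC then projects the best-matching source out of the data and repeats the scan recursively; that recursion is omitted here.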

  9. Crustal structure of the Churchill-Superior boundary zone between 80 and 98 deg W longitude from Magsat anomaly maps and stacked passes

    NASA Technical Reports Server (NTRS)

    Hall, D. H.; Millar, T. W.; Noble, I. A.

    1985-01-01

    A modeling technique using spherical shell elements and equivalent dipole sources has been applied to Magsat signatures at the Churchill-Superior boundary in Manitoba, Ontario, and Ungava. A large satellite magnetic anomaly (12 nT amplitude) on POGO and Magsat maps near the Churchill-Superior boundary was found to be related to the Richmond Gulf aulacogen. The averaged crustal magnetization in the source region is 5.2 A/m. Stacking of the magnetic traces from Magsat passes reveals a magnetic signature (10 nT amplitude) at the Churchill-Superior boundary in an area studied between 80 deg W and 98 deg W. Modeling suggests a steplike thickening of the crust on the Churchill side of the boundary in a layer with a magnetization of 5 A/m. Signatures on aeromagnetic maps are also found in the source areas for both of these satellite anomalies.

  10. Analysis of Ground Motion from An Underground Chemical Explosion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pitarka, Arben; Mellors, Robert J.; Walter, William R.

    Here we investigate the excitation and propagation of far-field seismic waves from the 905 kg trinitrotoluene-equivalent underground chemical explosion SPE-3 recorded during the Source Physics Experiment (SPE) at the Nevada National Security Site. The recorded far-field ground motion at short and long distances is characterized by substantial shear-wave energy, and large azimuthal variations in P- and S-wave amplitudes. The shear waves observed on the transverse component of sensors at epicentral distances <50 m suggest they were generated at or very near the source. The relative amplitude of the shear waves grows as the waves propagate away from the source. We analyze and model the shear-wave excitation during the explosion in the 0.01–10 Hz frequency range, at epicentral distances of up to 1 km. We used two simulation techniques. One is based on the empirical isotropic Mueller–Murphy (MM) (Mueller and Murphy, 1971) nuclear explosion source model, and 3D anelastic wave propagation modeling. The second uses a physics-based approach that couples hydrodynamic modeling of the chemical explosion source with anelastic wave propagation modeling. Comparisons with recorded data show the MM source model overestimates the SPE-3 far-field ground motion by an average factor of 4. The observations show that shear waves with substantial high-frequency energy were generated at the source. However, matching the observations required additional shear waves from scattering, including surface topography and heterogeneous shallow structure, which contributed to the amplification of far-field shear motion. Comparisons between empirically based isotropic and physics-based anisotropic source models suggest that both wave-scattering effects and near-field nonlinear effects are needed to explain the amplitude and irregular radiation pattern of shear motion observed during the SPE-3 explosion.

  11. Analysis of Ground Motion from An Underground Chemical Explosion

    DOE PAGES

    Pitarka, Arben; Mellors, Robert J.; Walter, William R.; ...

    2015-09-08

    Here we investigate the excitation and propagation of far-field seismic waves from the 905 kg trinitrotoluene-equivalent underground chemical explosion SPE-3 recorded during the Source Physics Experiment (SPE) at the Nevada National Security Site. The recorded far-field ground motion at short and long distances is characterized by substantial shear-wave energy, and large azimuthal variations in P- and S-wave amplitudes. The shear waves observed on the transverse component of sensors at epicentral distances <50 m suggest they were generated at or very near the source. The relative amplitude of the shear waves grows as the waves propagate away from the source. We analyze and model the shear-wave excitation during the explosion in the 0.01–10 Hz frequency range, at epicentral distances of up to 1 km. We used two simulation techniques. One is based on the empirical isotropic Mueller–Murphy (MM) (Mueller and Murphy, 1971) nuclear explosion source model, and 3D anelastic wave propagation modeling. The second uses a physics-based approach that couples hydrodynamic modeling of the chemical explosion source with anelastic wave propagation modeling. Comparisons with recorded data show the MM source model overestimates the SPE-3 far-field ground motion by an average factor of 4. The observations show that shear waves with substantial high-frequency energy were generated at the source. However, matching the observations required additional shear waves from scattering, including surface topography and heterogeneous shallow structure, which contributed to the amplification of far-field shear motion. Comparisons between empirically based isotropic and physics-based anisotropic source models suggest that both wave-scattering effects and near-field nonlinear effects are needed to explain the amplitude and irregular radiation pattern of shear motion observed during the SPE-3 explosion.

  12. Protein Data Bank depositions from synchrotron sources.

    PubMed

    Jiang, Jiansheng; Sweet, Robert M

    2004-07-01

    A survey and analysis of Protein Data Bank (PDB) depositions from international synchrotron radiation facilities, based on the latest released PDB entries, are reported. The results (http://asdp.bnl.gov/asda/Libraries/) show that worldwide, every year since 1999, more than 50% of the deposited X-ray structures have used synchrotron facilities, reaching 75% by 2003. In this web-based database, all PDB entries among individual synchrotron beamlines are archived, synchronized with the weekly PDB release. Statistics regarding the quality of experimental data and the refined model for all structures are presented, and these are analysed to reflect the impact of synchrotron sources. The results confirm the common impression that synchrotron sources extend the size of structures that can be solved with equivalent or better quality than home sources.

  13. Review of Recent Development of Dynamic Wind Farm Equivalent Models Based on Big Data Mining

    NASA Astrophysics Data System (ADS)

    Wang, Chenggen; Zhou, Qian; Han, Mingzhe; Lv, Zhan’ao; Hou, Xiao; Zhao, Haoran; Bu, Jing

    2018-04-01

    Recently, big data mining methods have been applied to dynamic wind farm equivalent modeling. In this paper, their recent development in domestic and overseas research is reviewed. First, studies of wind speed prediction, equivalence, and distribution within the wind farm are summarized. Second, two typical approaches used in big data mining are introduced. For single-wind-turbine equivalent modeling, the focus is on how to choose and identify equivalent parameters. For multiple-wind-turbine equivalent modeling, three aspects are addressed: aggregation of different wind turbine clusters, the parameters within the same cluster, and equivalence of the collector system. Third, an outlook on the future development of dynamic wind farm equivalent models is given.

  14. 42 CFR 81.4 - Definition of terms used in this part.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ...]. (e) Equivalent dose means the absorbed dose in a tissue or organ multiplied by a radiation weighting... dose means the portion of the equivalent dose that is received from radiation sources outside of the... pattern and level of radiation exposure. (h) Internal dose means the portion of the equivalent dose that...

  15. The Symbiotic System SS73 17 seen with Suzaku

    NASA Technical Reports Server (NTRS)

    Smith, Randall K.; Mushotzky, Richard; Kallman, Tim; Tueller, Jack; Mukai, Koji; Markwardt, Craig

    2007-01-01

    We observed with Suzaku the symbiotic star SS73 17, motivated by the discovery by the INTEGRAL satellite and the Swift BAT survey that it emits hard X-rays. Our observations showed a highly-absorbed X-ray spectrum with NH > 10^23 cm^-2, equivalent to Av > 26, although the source has B magnitude 11.3 and is also bright in UV. The source also shows strong, narrow iron lines including fluorescent Fe K as well as Fe XXV and Fe XXVI. The X-ray spectrum can be fit with a thermal model including an absorption component that partially covers the source. Most of the equivalent width of the iron fluorescent line in this model can be explained as a combination of reprocessing in a dense absorber plus reflection off a white dwarf surface, but it is likely that the continuum is partially seen in reflection as well. Unlike other symbiotic systems that show hard X-ray emission (CH Cyg, RT Cru, T CrB, GX 1+4), SS73 17 is not known to have shown nova-like optical variability, X-ray flashes, or pulsations, and has always shown faint soft X-ray emission. As a result, although it is likely a white dwarf, the nature of the compact object in SS73 17 is still uncertain. SS73 17 is probably an extreme example of the recently discovered and relatively small class of hard X-ray emitting symbiotic systems.

  16. A simple object-oriented and open-source model for scientific and policy analyses of the global climate system – Hector v1.0

    DOE PAGES

    Hartin, Corinne A.; Patel, Pralit L.; Schwarber, Adria; ...

    2015-04-01

    Simple climate models play an integral role in the policy and scientific communities. They are used for climate mitigation scenarios within integrated assessment models, complex climate model emulation, and uncertainty analyses. Here we describe Hector v1.0, an open source, object-oriented, simple global climate carbon-cycle model. This model runs essentially instantaneously while still representing the most critical global-scale earth system processes. Hector has a three-part main carbon cycle: a one-pool atmosphere, land, and ocean. The model's terrestrial carbon cycle includes primary production and respiration fluxes, accommodating arbitrary geographic divisions into, e.g., ecological biomes or political units. Hector actively solves the inorganic carbon system in the surface ocean, directly calculating air–sea fluxes of carbon and ocean pH. Hector reproduces the global historical trends of atmospheric [CO2], radiative forcing, and surface temperatures. The model simulates all four Representative Concentration Pathways (RCPs) with equivalent rates of change of key variables over time compared to current observations, MAGICC (a well-known simple climate model), and models from the 5th Coupled Model Intercomparison Project. Hector's flexibility, open-source nature, and modular design will facilitate a broad range of research in various areas.
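Simple climate models of this kind typically drive temperature with a logarithmic CO2-forcing law. The snippet below is not Hector's actual code; it sketches the widely used Myhre et al. logarithmic fit, with an assumed 3 K equilibrium sensitivity per CO2 doubling:

```python
import math

C0 = 278.0  # assumed preindustrial CO2 concentration, ppm

def co2_forcing(c_ppm):
    """Radiative forcing from CO2 in W/m^2 (logarithmic Myhre et al. fit)."""
    return 5.35 * math.log(c_ppm / C0)

def equilibrium_warming(forcing_wm2, sensitivity_k_per_doubling=3.0):
    """Equilibrium temperature response, scaling the sensitivity by forcing."""
    f_2x = 5.35 * math.log(2.0)  # forcing of one CO2 doubling, ~3.7 W/m^2
    return sensitivity_k_per_doubling * forcing_wm2 / f_2x

f = co2_forcing(2.0 * C0)  # one doubling of CO2
print(round(f, 2), round(equilibrium_warming(f), 1))  # 3.71 W/m^2, 3.0 K
```

A full simple climate model adds a carbon cycle and thermal inertia on top of this relationship; the fit above is only the forcing step.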

  17. The effect of reflector geometry on the acoustic field and bubble dynamics produced by an electrohydraulic shock wave lithotripter.

    PubMed

    Zhou, Yufeng; Zhong, Pei

    2006-06-01

    A theoretical model for the propagation of shock wave from an axisymmetric reflector was developed by modifying the initial conditions for the conventional solution of a nonlinear parabolic wave equation (i.e., the Khokhlov-Zabolotskaya-Kuznetsov equation). The ellipsoidal reflector of an HM-3 lithotripter is modeled equivalently as a self-focusing spherically distributed pressure source. The pressure wave form generated by the spark discharge of the HM-3 electrode was measured by a fiber optic probe hydrophone and used as source conditions in the numerical calculation. The simulated pressure wave forms, accounting for the effects of diffraction, nonlinearity, and thermoviscous absorption in wave propagation and focusing, were compared with the measured results and a reasonably good agreement was found. Furthermore, the primary characteristics in the pressure wave forms produced by different reflector geometries, such as that produced by a reflector insert, can also be predicted by this model. It is interesting to note that when the interpulse delay time calculated by the linear geometric model is less than about 1.5 μs, two pulses from the reflector insert and the uncovered bottom of the original HM-3 reflector will merge together. Coupling the simulated pressure wave form with the Gilmore model was carried out to evaluate the effect of reflector geometry on resultant bubble dynamics in a lithotripter field. Altogether, the equivalent reflector model was found to provide a useful tool for the prediction of pressure wave form generated in a lithotripter field. This model may be used to guide the design optimization of reflector geometries for improving the performance and safety of clinical lithotripters.

  18. The effect of reflector geometry on the acoustic field and bubble dynamics produced by an electrohydraulic shock wave lithotripter

    PubMed Central

    Zhou, Yufeng; Zhong, Pei

    2007-01-01

    A theoretical model for the propagation of shock wave from an axisymmetric reflector was developed by modifying the initial conditions for the conventional solution of a nonlinear parabolic wave equation (i.e., the Khokhlov–Zabolotskaya–Kuznetsov equation). The ellipsoidal reflector of an HM-3 lithotripter is modeled equivalently as a self-focusing spherically distributed pressure source. The pressure wave form generated by the spark discharge of the HM-3 electrode was measured by a fiber optic probe hydrophone and used as source conditions in the numerical calculation. The simulated pressure wave forms, accounting for the effects of diffraction, nonlinearity, and thermoviscous absorption in wave propagation and focusing, were compared with the measured results and a reasonably good agreement was found. Furthermore, the primary characteristics in the pressure wave forms produced by different reflector geometries, such as that produced by a reflector insert, can also be predicted by this model. It is interesting to note that when the interpulse delay time calculated by the linear geometric model is less than about 1.5 μs, two pulses from the reflector insert and the uncovered bottom of the original HM-3 reflector will merge together. Coupling the simulated pressure wave form with the Gilmore model was carried out to evaluate the effect of reflector geometry on resultant bubble dynamics in a lithotripter field. Altogether, the equivalent reflector model was found to provide a useful tool for the prediction of pressure wave form generated in a lithotripter field. This model may be used to guide the design optimization of reflector geometries for improving the performance and safety of clinical lithotripters. PMID:16838506

  19. Spiral computed tomography phase-space source model in the BEAMnrc/EGSnrc Monte Carlo system: implementation and validation.

    PubMed

    Kim, Sangroh; Yoshizumi, Terry T; Yin, Fang-Fang; Chetty, Indrin J

    2013-04-21

    Currently, the BEAMnrc/EGSnrc Monte Carlo (MC) system does not provide a spiral CT source model for the simulation of spiral CT scanning. We developed and validated a spiral CT phase-space source model in the BEAMnrc/EGSnrc system. The spiral phase-space source model was implemented in the DOSXYZnrc user code of the BEAMnrc/EGSnrc system by analyzing the geometry of a spiral CT scan (scan range, initial angle, rotational direction, pitch, slice thickness, etc.). Table movement was simulated by changing the coordinates of the isocenter as a function of beam angles. Some parameters such as pitch, slice thickness and translation per rotation were also incorporated into the model to make the new phase-space source model, designed specifically for spiral CT scan simulations. The source model was hard-coded by modifying the 'ISource = 8: Phase-Space Source Incident from Multiple Directions' in the srcxyznrc.mortran and dosxyznrc.mortran files in the DOSXYZnrc user code. In order to verify the implementation, spiral CT scans were simulated in a CT dose index phantom using the validated x-ray tube model of a commercial CT simulator for both the original multi-direction source (ISOURCE = 8) and the new phase-space source model in the DOSXYZnrc system. Then the acquired 2D and 3D dose distributions were analyzed with respect to the input parameters for various pitch values. In addition, surface-dose profiles were also measured for a patient CT scan protocol using radiochromic film and were compared with the MC simulations. The new phase-space source model was found to simulate the spiral CT scanning in a single simulation run accurately. It also produced the equivalent dose distribution of the ISOURCE = 8 model for the same CT scan parameters. The MC-simulated surface profiles were well matched to the film measurement overall within 10%. The new spiral CT phase-space source model was implemented in the BEAMnrc/EGSnrc system. 
This work will be beneficial in estimating the spiral CT scan dose in the BEAMnrc/EGSnrc system.
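The table movement described above amounts to advancing the isocenter in proportion to the accumulated gantry angle. A geometric sketch of that relation (this is not the actual DOSXYZnrc Mortran code; the parameter names and values are illustrative):

```python
import math

def isocenter_z(angle_rad, z_start=0.0, pitch=1.0, collimation_cm=1.0, direction=1):
    """Isocenter z-position after the beam has swept through angle_rad.

    Translation per full rotation = pitch * beam collimation width, so the
    table position is a linear function of the accumulated gantry angle.
    """
    rotations = angle_rad / (2.0 * math.pi)
    return z_start + direction * rotations * pitch * collimation_cm

# Two full rotations at pitch 1.0 with 1 cm collimation: 2 cm of table travel
print(isocenter_z(4.0 * math.pi))  # 2.0
```

Changing `direction` to -1 would model the opposite rotational/translation sense mentioned in the scan geometry parameters.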

  20. Spiral computed tomography phase-space source model in the BEAMnrc/EGSnrc Monte Carlo system: implementation and validation

    NASA Astrophysics Data System (ADS)

    Kim, Sangroh; Yoshizumi, Terry T.; Yin, Fang-Fang; Chetty, Indrin J.

    2013-04-01

    Currently, the BEAMnrc/EGSnrc Monte Carlo (MC) system does not provide a spiral CT source model for the simulation of spiral CT scanning. We developed and validated a spiral CT phase-space source model in the BEAMnrc/EGSnrc system. The spiral phase-space source model was implemented in the DOSXYZnrc user code of the BEAMnrc/EGSnrc system by analyzing the geometry of a spiral CT scan (scan range, initial angle, rotational direction, pitch, slice thickness, etc.). Table movement was simulated by changing the coordinates of the isocenter as a function of beam angles. Some parameters such as pitch, slice thickness and translation per rotation were also incorporated into the model to make the new phase-space source model, designed specifically for spiral CT scan simulations. The source model was hard-coded by modifying the ‘ISource = 8: Phase-Space Source Incident from Multiple Directions’ in the srcxyznrc.mortran and dosxyznrc.mortran files in the DOSXYZnrc user code. In order to verify the implementation, spiral CT scans were simulated in a CT dose index phantom using the validated x-ray tube model of a commercial CT simulator for both the original multi-direction source (ISOURCE = 8) and the new phase-space source model in the DOSXYZnrc system. Then the acquired 2D and 3D dose distributions were analyzed with respect to the input parameters for various pitch values. In addition, surface-dose profiles were also measured for a patient CT scan protocol using radiochromic film and were compared with the MC simulations. The new phase-space source model was found to simulate the spiral CT scanning in a single simulation run accurately. It also produced the equivalent dose distribution of the ISOURCE = 8 model for the same CT scan parameters. The MC-simulated surface profiles were well matched to the film measurement overall within 10%. The new spiral CT phase-space source model was implemented in the BEAMnrc/EGSnrc system. 
This work will be beneficial in estimating the spiral CT scan dose in the BEAMnrc/EGSnrc system.

  1. Source-receptor matrix calculation with a Lagrangian particle dispersion model in backward mode

    NASA Astrophysics Data System (ADS)

    Seibert, P.; Frank, A.

    2004-01-01

    The possibility of calculating linear source-receptor relationships for the transport of atmospheric trace substances with a Lagrangian particle dispersion model (LPDM) running in backward mode is demonstrated with a number of tests and examples. This mode requires only minor modifications of the forward LPDM. The derivation includes the action of sources and of any first-order processes (transformation with prescribed rates, dry and wet deposition, radioactive decay, etc.). The backward mode is computationally advantageous if the number of receptors is less than the number of sources considered. The combination of an LPDM with the backward (adjoint) methodology is especially attractive for the application to point measurements, which can be handled without artificial numerical diffusion. Practical hints are provided for source-receptor calculations with different settings, both in forward and backward mode. The equivalence of forward and backward calculations is shown in simple tests for release and sampling of particles, pure wet deposition, pure convective redistribution and realistic transport over a short distance. Furthermore, an application example explaining measurements of Cs-137 in Stockholm as transport from areas contaminated heavily in the Chernobyl disaster is included.
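The forward/backward equivalence has a simple linear-algebra picture: if a matrix M maps source strengths to receptor concentrations, a forward run fills one column of M per source, while a backward run fills one row per receptor. A toy numpy demonstration (the matrix and source vector are random placeholders, not dispersion-model output):

```python
import numpy as np

rng = np.random.default_rng(1)

M = rng.random((4, 6))  # toy source-receptor matrix: 4 receptors, 6 sources
q = rng.random(6)       # toy source strengths

# Forward mode: one model run per source builds M column by column;
# receptor concentrations then follow as M @ q.
c_forward = M @ q

# Backward (adjoint) mode: one run per receptor yields a row of M directly;
# each receptor concentration is that row dotted with the source vector.
c_backward = np.array([M[r, :] @ q for r in range(M.shape[0])])

assert np.allclose(c_forward, c_backward)
```

With 4 receptors and 6 sources, the backward mode needs fewer model runs, which is the cost argument made in the abstract.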

  2. Precision measurement and modeling of superconducting magnetic bearings for the satellite test of the equivalence principle

    NASA Astrophysics Data System (ADS)

    Sapilewski, Glen Alan

    The Satellite Test of the Equivalence Principle (STEP) is a modern version of Galileo's experiment of dropping two objects from the leaning tower of Pisa. The Equivalence Principle states that all objects fall with the same acceleration, independent of their composition. The primary scientific objective of STEP is to measure a possible violation of the Equivalence Principle one million times better than the best ground based tests. This extraordinary sensitivity is made possible by using cryogenic differential accelerometers in the space environment. Critical to the STEP experiment is a sound fundamental understanding of the behavior of the superconducting magnetic linear bearings used in the accelerometers. We have developed a theoretical bearing model and a precision measuring system with which to validate the model. The accelerometers contain two concentric hollow cylindrical test masses, of different materials, each levitated and constrained to axial motion by a superconducting magnetic bearing. Ensuring that the bearings satisfy the stringent mission specifications requires developing new testing apparatus and methods. The bearing is tested using an actively-controlled table which tips it relative to gravity. This balances the magnetic forces from the bearing against a component of gravity. The magnetic force profile of the bearing can be mapped by measuring the tilt necessary to position the test mass at various locations. An operational bearing has been built and is being used to verify the theoretical levitation models. The experimental results obtained from the bearing test apparatus were inconsistent with the previous models used for STEP bearings. This led to the development of a new bearing model that includes the influence of surface current variations in the bearing wires and the effect of the superconducting transformer. 
The new model, which has been experimentally verified, significantly improves the prediction of levitation current, accurately estimates the relationship between tilting and translational modes, and predicts the dependence of radial mode frequencies on the bearing current. In addition, we developed a new model for the forces produced by trapped magnetic fluxons, a potential source of imperfections in the bearing. This model estimates the forces between magnetic fluxons trapped in separate superconducting objects.

  3. 42 CFR 82.5 - Definition of terms used in this part.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Illness Compensation Program Act of 2000, 42 U.S.C. 7384-7385 [1994, supp. 2001]. (i) Equivalent dose is... equivalent dose that is received from radiation sources outside of the body. (k) Internal dose means that portion of the equivalent dose that is received from radioactive materials taken into the body. (l) NIOSH...

  4. Equivalent model and power flow model for electric railway traction network

    NASA Astrophysics Data System (ADS)

    Wang, Feng

    2018-05-01

    An equivalent model of the Cable Traction Network (CTN) that accounts for the distributed capacitance of the cable system is proposed. The model takes two forms: a 110 kV-side model and a 27.5 kV-side model. The 110 kV-side equivalent model can be used to calculate the power supply capacity of the CTN; the 27.5 kV-side equivalent model can be used to solve for the catenary voltage. Based on the simplified equivalent model of the CTN, a power flow model of the CTN is derived that incorporates the reactive power compensation coefficient and the interaction of voltage and current.

  5. On the equivalence of generalized least-squares approaches to the evaluation of measurement comparisons

    NASA Astrophysics Data System (ADS)

    Koo, A.; Clare, J. F.

    2012-06-01

    Analysis of CIPM international comparisons is increasingly being carried out using a model-based approach that leads naturally to a generalized least-squares (GLS) solution. While this method offers the advantages of being easier to audit and having general applicability to any form of comparison protocol, there is a lack of consensus over aspects of its implementation. Two significant results are presented that show the equivalence of three differing approaches discussed by or applied in comparisons run by Consultative Committees of the CIPM. Both results depend on a mathematical condition equivalent to the requirement that any two artefacts in the comparison are linked through a sequence of measurements of overlapping pairs of artefacts. The first result is that a GLS estimator excluding all sources of error common to all measurements of a participant is equal to the GLS estimator incorporating all sources of error, including those associated with any bias in the standards or procedures of the measuring laboratory. The second result identifies the component of uncertainty in the estimate of bias that arises from possible systematic effects in the participants' measurement standards and procedures. The expression so obtained is a generalization of an expression previously published for a one-artefact comparison with no inter-participant correlations, to one for a comparison comprising any number of repeat measurements of multiple artefacts and allowing for inter-laboratory correlations.
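The GLS solution underlying such comparison analyses is the textbook estimator b = (X'Σ⁻¹X)⁻¹X'Σ⁻¹y. A minimal sketch with invented numbers (three laboratories measuring one artefact, with an assumed covariance in which two labs share a correlated systematic effect):

```python
import numpy as np

def gls(X, y, cov):
    """Generalized least squares: minimize (y - X b)' cov^-1 (y - X b)."""
    W = np.linalg.inv(cov)
    A = X.T @ W @ X
    b = np.linalg.solve(A, X.T @ W @ y)
    return b, np.linalg.inv(A)  # estimate and its covariance

# Invented data: three labs measure the same artefact; labs 1 and 2 share
# a common systematic effect, giving correlated off-diagonal terms.
X = np.ones((3, 1))
y = np.array([10.1, 9.9, 10.3])
cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.04, 0.00],
                [0.00, 0.00, 0.09]])

b, cov_b = gls(X, y, cov)  # weighted consensus value and its variance
```

In a real comparison, X encodes which artefact each measurement links to; the result discussed in the abstract concerns which error terms must appear in `cov` for the estimates to coincide.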

  6. Multi-Fidelity Uncertainty Propagation for Cardiovascular Modeling

    NASA Astrophysics Data System (ADS)

    Fleeter, Casey; Geraci, Gianluca; Schiavazzi, Daniele; Kahn, Andrew; Marsden, Alison

    2017-11-01

    Hemodynamic models are successfully employed in the diagnosis and treatment of cardiovascular disease with increasing frequency. However, their widespread adoption is hindered by our inability to account for uncertainty stemming from multiple sources, including boundary conditions, vessel material properties, and model geometry. In this study, we propose a stochastic framework which leverages three cardiovascular model fidelities: 3D, 1D and 0D models. 3D models are generated from patient-specific medical imaging (CT and MRI) of aortic and coronary anatomies using the SimVascular open-source platform, with fluid structure interaction simulations and Windkessel boundary conditions. 1D models consist of a simplified geometry automatically extracted from the 3D model, while 0D models are obtained from equivalent circuit representations of blood flow in deformable vessels. Multi-level and multi-fidelity estimators from Sandia's open-source DAKOTA toolkit are leveraged to reduce the variance in our estimated output quantities of interest while maintaining a reasonable computational cost. The performance of these estimators in terms of computational cost reductions is investigated for a variety of output quantities of interest, including global and local hemodynamic indicators. Sandia National Labs is a multimission laboratory managed and operated by NTESS, LLC, for the U.S. DOE under contract DE-NA0003525. Funding for this project provided by NIH-NIBIB R01 EB018302.
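A two-fidelity control-variate estimator of the kind such frameworks use can be sketched in a few lines. Everything below is a stand-in: the "models" are cheap analytic functions, not hemodynamic solvers, and the sample sizes are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(2)

f_hi = lambda x: np.sin(x) + 0.1 * x**2  # stand-in "high-fidelity" model
f_lo = lambda x: np.sin(x)               # cheap, correlated "low-fidelity" model

n_hi, n_lo = 50, 5000
x_hi = rng.uniform(0.0, 1.0, n_hi)  # few expensive evaluations
x_lo = rng.uniform(0.0, 1.0, n_lo)  # many cheap evaluations

y_hi = f_hi(x_hi)
y_lo_paired = f_lo(x_hi)  # low-fidelity model evaluated at the same inputs

# Control-variate weight estimated from the paired samples
alpha = np.cov(y_hi, y_lo_paired)[0, 1] / np.var(y_lo_paired, ddof=1)

# Correct the small high-fidelity sample mean with the cheap low-fidelity mean
estimate = y_hi.mean() + alpha * (f_lo(x_lo).mean() - y_lo_paired.mean())
```

The variance reduction grows with the correlation between fidelities, which is the premise behind combining 3D, 1D and 0D cardiovascular models.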

  7. Calculated organ doses using Monte Carlo simulations in a reference male phantom undergoing HDR brachytherapy applied to localized prostate carcinoma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Candela-Juan, Cristian; Perez-Calatayud, Jose; Ballester, Facundo

    Purpose: The aim of this study was to obtain equivalent doses in radiosensitive organs (aside from the bladder and rectum) when applying high-dose-rate (HDR) brachytherapy to a localized prostate carcinoma using {sup 60}Co or {sup 192}Ir sources. These data are compared with results in a water phantom and with expected values in an infinite water medium. A comparison with reported values from proton therapy and intensity-modulated radiation therapy (IMRT) is also provided. Methods: Monte Carlo simulations in Geant4 were performed using a voxelized phantom described in International Commission on Radiological Protection (ICRP) Publication 110, which reproduces masses and shapes frommore » an adult reference man defined in ICRP Publication 89. Point sources of {sup 60}Co or {sup 192}Ir with photon energy spectra corresponding to those exiting their capsules were placed in the center of the prostate, and equivalent doses per clinical absorbed dose in this target organ were obtained in several radiosensitive organs. Values were corrected to account for clinical circumstances with the source located at various positions with differing dwell times throughout the prostate. This was repeated for a homogeneous water phantom. Results: For the nearest organs considered (bladder, rectum, testes, small intestine, and colon), equivalent doses given by {sup 60}Co source were smaller (8%-19%) than from {sup 192}Ir. However, as the distance increases, the more penetrating gamma rays produced by {sup 60}Co deliver higher organ equivalent doses. The overall result is that effective dose per clinical absorbed dose from a {sup 60}Co source (11.1 mSv/Gy) is lower than from a {sup 192}Ir source (13.2 mSv/Gy). On the other hand, equivalent doses were the same in the tissue and the homogeneous water phantom for those soft tissues closer to the prostate than about 30 cm. 
As the distance increased, the differences of photoelectric effect in water and soft tissue, and appearance of other materials such as air, bone, or lungs, produced variations between both phantoms which were at most 35% in the considered organ equivalent doses. Finally, effective doses per clinical absorbed dose from IMRT and proton therapy were comparable to those from both brachytherapy sources, with brachytherapy being advantageous over external beam radiation therapy for the furthest organs. Conclusions: A database of organ equivalent doses when applying HDR brachytherapy to the prostate with either {sup 60}Co or {sup 192}Ir is provided. According to physical considerations, {sup 192}Ir is dosimetrically advantageous over {sup 60}Co sources at large distances, but not in the closest organs. Damage to distant healthy organs per clinical absorbed dose is lower with brachytherapy than with IMRT or protons, although the overall effective dose per Gy given to the prostate seems very similar. Given that there are several possible fractionation schemes, which result in different total amounts of therapeutic absorbed dose, advantage of a radiation treatment (according to equivalent dose to healthy organs) is treatment and facility dependent.« less

  8. Equivalent Dynamic Models.

    PubMed

    Molenaar, Peter C M

    2017-01-01

    Equivalences of two classes of dynamic models for weakly stationary multivariate time series are discussed: dynamic factor models and autoregressive models. It is shown that exploratory dynamic factor models can be rotated, yielding an infinite set of equivalent solutions for any observed series. It also is shown that dynamic factor models with lagged factor loadings are not equivalent to the currently popular state-space models, and that restricting attention to the latter type of models may yield invalid results. The known equivalent vector autoregressive model types, standard and structural, are given a new interpretation in which they are conceived of as the extremes of an innovative type of hybrid vector autoregressive models. It is shown that consideration of hybrid models solves many problems, in particular with Granger causality testing.

  9. Stoichiometry of Reducing Equivalents and Splitting of Water in the Citric Acid Cycle.

    ERIC Educational Resources Information Center

    Madeira, Vitor M. C.

    1988-01-01

    Presents a solution to the problem of finding the source of extra reducing equivalents, and accomplishing the stoichiometry of glucose oxidation reactions. Discusses the citric acid cycle and glycolysis. (CW)

  10. Microbial risk assessment of drinking water based on hydrodynamic modelling of pathogen concentrations in source water.

    PubMed

    Sokolova, Ekaterina; Petterson, Susan R; Dienus, Olaf; Nyström, Fredrik; Lindgren, Per-Eric; Pettersson, Thomas J R

    2015-09-01

    Norovirus contamination of drinking water sources is an important cause of waterborne disease outbreaks. Knowledge of pathogen concentrations in source water is needed to assess the ability of a drinking water treatment plant (DWTP) to provide safe drinking water. However, pathogen enumeration in source water samples is often not sufficient to describe the source water quality. In this study, the norovirus concentrations were characterised at the contamination source, i.e. in sewage discharges. Then, the transport of norovirus within the water source (the river Göta älv in Sweden) under different loading conditions was simulated using a hydrodynamic model. Based on the estimated concentrations in source water, the required reduction of norovirus at the DWTP was calculated using quantitative microbial risk assessment (QMRA). The required reduction was compared with the estimated treatment performance at the DWTP. The average estimated concentration in source water varied between 4.8×10^2 and 7.5×10^3 genome equivalents L^-1, and the average required reduction by treatment was between 7.6 and 8.8 log10. The treatment performance at the DWTP was estimated to be adequate to deal with all tested loading conditions, but was heavily dependent on chlorine disinfection, with the risk of poor reduction by conventional treatment and slow sand filtration. To our knowledge, this is the first article to employ discharge-based QMRA, combined with hydrodynamic modelling, in the context of drinking water. Copyright © 2015 Elsevier B.V. All rights reserved.
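
    The "required reduction" step of such a QMRA can be sketched with an exponential dose-response model, P_inf = 1 - exp(-r * dose). Every numeric value here (the dose-response parameter r, the acceptable daily risk, the consumption volume) is an illustrative assumption, not a value from the study.

    ```python
    import math

    def required_log10_reduction(conc_per_l, consumption_l=1.0,
                                 r=0.00255, daily_risk_target=1e-4 / 365):
        """Log10 treatment reduction needed so the daily infection risk
        stays below the target, for a given source-water concentration."""
        # Largest tolerable daily dose under the exponential dose-response model
        max_dose = -math.log(1.0 - daily_risk_target) / r
        # Dose ingested with no treatment at all
        raw_dose = conc_per_l * consumption_l
        return max(0.0, math.log10(raw_dose / max_dose))

    # The two mean source-water concentrations reported above (g.e. per litre)
    for c in (4.8e2, 7.5e3):
        print(round(required_log10_reduction(c), 1))
    ```

    With these toy parameters the required reductions land in the same neighbourhood as the 7.6-8.8 log10 reported above, which is the point of the exercise: the answer scales with log10 of the source concentration.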

  11. On the nature of the unidentified high latitude UHURU sources

    NASA Technical Reports Server (NTRS)

    Holt, S. S.; Boldt, E. A.; Serlemitsos, P. J.; Murray, S. S.; Giacconi, R.; Kellogg, E. M.; Matilsky, T. A.

    1973-01-01

    It is found that the unidentified high latitude UHURU sources can have either of two very different explanations. They must either reside at great distances with luminosity equivalent to or greater than 10^46 ergs/sec, or be contained in the galaxy with luminosity equivalent to or less than 10^34 ergs/sec. The two possibilities are indistinguishable with the available data.

  12. Determination of equivalent sound speed profiles for ray tracing in near-ground sound propagation.

    PubMed

    Prospathopoulos, John M; Voutsinas, Spyros G

    2007-09-01

    The determination of appropriate sound speed profiles for modeling near-ground propagation is investigated using a ray tracing model capable of performing axisymmetric calculations of the sound field around an isolated source. Eigenrays are traced using an iterative procedure that integrates the trajectory equations for each ray launched from the source in a specific direction. Sound energy losses are calculated by introducing into the equations appropriate coefficients representing the effect of ground and atmospheric absorption and the interaction with atmospheric turbulence. The model is validated against analytical and numerical predictions of other methodologies for simple cases, as well as against measurements for nonrefractive atmospheric environments. A systematic investigation of near-ground propagation in downward and upward refractive atmospheres is made using experimental data. Guidelines for the suitable simulation of the wind velocity profile are derived by correlating predictions with measurements.

  13. Two methods for transmission line simulation model creation based on time domain measurements

    NASA Astrophysics Data System (ADS)

    Rinas, D.; Frei, S.

    2011-07-01

    The emission from transmission lines plays an important role in the electromagnetic compatibility of automotive electronic systems. In the frequency range below 200 MHz, radiation from cables is often the dominant emission factor. At higher frequencies, radiation from PCBs and their housing becomes more relevant; the main sources of this emission are the conducting traces. The established field measurement methods according to CISPR 25 for the evaluation of emissions suffer from the need to use large anechoic chambers. Furthermore, measurement data cannot be used for simulation model creation in order to compute the overall fields radiated from a car. In this paper a method to determine the far-fields and a simulation model of radiating transmission lines, especially cable bundles and conducting traces on planar structures, is proposed. The method measures the electromagnetic near-field above the test object. Measurements are done in the time domain in order to obtain phase information and to reduce measurement time. On the basis of the near-field data, equivalent source identification can be performed. By considering correlations between sources along each conductive structure in the model creation process, the model accuracy increases and computational costs can be reduced.

  14. Effect of Arctic Amplification on Design Snow Loads in Alaska

    DTIC Science & Technology

    2016-09-01

    snow water equivalent; UFC Unified Facilities Criteria; UTC Coordinated Universal Time. Keywords: Alaska, Arctic amplification, climate change... extreme value analysis, snow loads, snow water equivalent, SWE. Acknowledgements: This work was conducted with support from the Strategic... equivalent (SWE) of the snowpack. We acquired SWE data from a number of sources that provide automatic or manual observations, reanalysis data, or

  15. On Structural Equation Model Equivalence.

    ERIC Educational Resources Information Center

    Raykov, Tenko; Penev, Spiridon

    1999-01-01

    Presents a necessary and sufficient condition for the equivalence of structural-equation models that is applicable to models with parameter restrictions and models that may or may not fulfill assumptions of the rules. Illustrates the application of the approach for studying model equivalence. (SLD)

  16. Chemiluminescence-based multivariate sensing of local equivalence ratios in premixed atmospheric methane-air flames

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tripathi, Markandey M.; Krishnan, Sundar R.; Srinivasan, Kalyan K.

    Chemiluminescence emissions from OH*, CH*, C2, and CO2 formed within the reaction zone of premixed flames depend upon the fuel-air equivalence ratio in the burning mixture. In the present paper, a new partial least squares regression (PLS-R) based multivariate sensing methodology is investigated and compared with an OH*/CH* intensity ratio-based calibration model for sensing equivalence ratio in atmospheric methane-air premixed flames. Five replications of spectral data at nine different equivalence ratios ranging from 0.73 to 1.48 were used in the calibration of both models. During model development, the PLS-R model was initially validated with the calibration data set using the leave-one-out cross validation technique. Since the PLS-R model used the entire raw spectral intensities, it did not need the nonlinear background subtraction of CO2 emission that is required for typical OH*/CH* intensity ratio calibrations. An unbiased spectral data set (not used in the PLS-R model development), for 28 different equivalence ratio conditions ranging from 0.71 to 1.67, was used to predict equivalence ratios using the PLS-R and the intensity ratio calibration models. It was found that the equivalence ratios predicted with the PLS-R based multivariate calibration model matched the experimentally measured equivalence ratios within 7%, whereas the OH*/CH* intensity ratio calibration grossly underpredicted equivalence ratios in comparison to measured equivalence ratios, especially under rich conditions (equivalence ratios > 1.2). The practical implications of the chemiluminescence-based multivariate equivalence ratio sensing methodology are also discussed.
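
    The multivariate calibration idea can be sketched with a small NIPALS implementation of PLS1 applied to synthetic "spectra". The band shapes, noise level, and their dependence on the equivalence ratio are invented for illustration; only the workflow (calibrate on 9 ratios x 5 replicates, predict 28 unseen conditions) mirrors the study.

    ```python
    import numpy as np

    def pls1(X, y, n_comp):
        """PLS1 regression via NIPALS; returns a prediction function."""
        xm, ym = X.mean(axis=0), y.mean()
        Xc, yc = X - xm, y - ym
        W, P, q = [], [], []
        for _ in range(n_comp):
            w = Xc.T @ yc
            w /= np.linalg.norm(w)          # weight vector
            t = Xc @ w                      # score vector
            tt = t @ t
            p_load, q_load = Xc.T @ t / tt, yc @ t / tt
            Xc -= np.outer(t, p_load)       # deflate X and y
            yc -= q_load * t
            W.append(w); P.append(p_load); q.append(q_load)
        W, P, q = np.array(W).T, np.array(P).T, np.array(q)
        B = W @ np.linalg.solve(P.T @ W, q) # regression vector
        return lambda Xn: (Xn - xm) @ B + ym

    # Synthetic spectra: band amplitudes depend on the equivalence ratio p
    rng = np.random.default_rng(0)
    wl = np.linspace(300, 600, 200)
    def spectrum(p):
        oh = (2.0 - p) * np.exp(-0.5 * ((wl - 310) / 5) ** 2)    # OH*-like band
        ch = p * np.exp(-0.5 * ((wl - 430) / 5) ** 2)            # CH*-like band
        bg = 0.3 * p**2 * np.exp(-0.5 * ((wl - 500) / 80) ** 2)  # broad background
        return oh + ch + bg + rng.normal(0, 0.01, wl.size)

    phi_cal = np.repeat(np.linspace(0.73, 1.48, 9), 5)           # 9 ratios x 5 replicates
    predict = pls1(np.array([spectrum(p) for p in phi_cal]), phi_cal, 3)

    phi_val = np.linspace(0.71, 1.67, 28)                        # unseen conditions
    err = np.abs(predict(np.array([spectrum(p) for p in phi_val])) - phi_val) / phi_val
    print(err.max())
    ```

    Because the regression uses the full spectrum, no explicit background subtraction is needed; the background simply becomes part of the latent structure.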

  17. MAGSAT anomaly field inversion and interpretation for the US

    NASA Technical Reports Server (NTRS)

    Mayhew, M. A. (Principal Investigator)

    1982-01-01

    Long wavelength anomalies in the total magnetic field measured by MAGSAT over the United States and adjacent areas are inverted to an equivalent layer crustal magnetization distribution. The model is based on an equal area dipole grid at the Earth's surface. Model resolution, defined as the closest dipole spacing giving a solution having physical significance, is about 220 km for MAGSAT data in the elevation range 300-500 km. The magnetization contours correlate well with large scale tectonic provinces. A higher resolution (200 km) model based on relatively noise free synthetic "pseudodata" is also presented. Magnetic anomaly component data measured by MAGSAT are compared with synthetic anomaly component fields arising from an equivalent source dipole array at the Earth's surface generated from total field anomaly data alone. An excellent inverse correlation between apparent magnetization and heat flow in the western U.S. is demonstrated. A regional heat flow map is presented and compared with published maps; it predicts high heat flow in Nebraska and the Dakotas, suggesting the presence of a "blind" geothermal area of regional extent.

  18. Geothermal studies in the San Juan Basin and the Four Corners Area of the Colorado Plateau II. Steady-state models of the thermal source of the San Juan volcanic field

    NASA Astrophysics Data System (ADS)

    Reiter, Marshall; Clarkson, Gerry

    1983-01-01

    The increase of heat flow approaching the San Juan volcanic field depicts a smooth profile having a relatively large half width, perhaps 50-100 km. One may suggest thermal sources creating the observed anomaly at equivalent depths under, or in proximity to, the San Juan volcanic field. Although the cause of the increased heat flow approaching the San Juan field may be in part associated with more regional Southern Rocky Mountain tectonics, geologic, heat-flow, and seismic data support the idea of a separate thermal source associated with the San Juan volcanic field. It can be shown that cooling and solidification of very deep magma bodies (to 75 km) provide less heat than required by the observed anomaly. Replenishment of the thermal source causing the heat-flow anomaly is postulated. This replenishment is approximated in a limiting case by developing finite-difference steady-state models. The best models are consistent with a plume which rises from depths of at least 100 km to depths as shallow as 35 km, and whose edge is about 10 km south of Durango.

  19. Disappearing Q operator

    NASA Astrophysics Data System (ADS)

    Jones, H. F.; Rivers, R. J.

    2007-01-01

    In the Schrödinger formulation of non-Hermitian quantum theories a positive-definite metric operator η≡e-Q must be introduced in order to ensure their probabilistic interpretation. This operator also gives an equivalent Hermitian theory, by means of a similarity transformation. If, however, quantum mechanics is formulated in terms of functional integrals, we show that the Q operator makes only a subliminal appearance and is not needed for the calculation of expectation values. Instead, the relation to the Hermitian theory is encoded via the external source j(t). These points are illustrated and amplified for two non-Hermitian quantum theories: the Swanson model, a non-Hermitian transform of the simple harmonic oscillator, and the wrong-sign quartic oscillator, which has been shown to be equivalent to a conventional asymmetric quartic oscillator.

  1. ADVANCED WAVEFORM SIMULATION FOR SEISMIC MONITORING EVENTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Helmberger, Donald V.; Tromp, Jeroen; Rodgers, Arthur J.

    Earthquake source parameters underpin several aspects of nuclear explosion monitoring: calibration of moment magnitudes (including coda magnitudes) and magnitude and distance amplitude corrections (MDAC); source depths; discrimination by isotropic moment tensor components; and waveform modeling for structure (including waveform tomography). This project seeks to improve methods for, and broaden the applicability of, estimating source parameters from broadband waveforms using the Cut-and-Paste (CAP) methodology. The CAP method uses a library of Green's functions for a one-dimensional (1D, depth-varying) seismic velocity model. The method separates the main arrivals of the regional waveform into five windows: Pnl (vertical and radial components), Rayleigh (vertical and radial components), and Love (transverse component). Source parameters are estimated by a grid search over strike, dip, rake, and depth, while the seismic moment (or equivalently the moment magnitude, MW) is adjusted to fit the amplitudes. Key to the CAP method is allowing the synthetic seismograms to shift in time relative to the data in order to account for path-propagation errors (delays) in the 1D seismic velocity model used to compute the Green's functions. The CAP method has been shown to improve estimates of source parameters, especially when delay and amplitude biases are calibrated using high signal-to-noise data from moderate earthquakes (CAP+).
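
    The key CAP ingredient of letting a synthetic window shift in time relative to the data can be sketched with toy waveforms; the wavelet, the perturbation, and the L2 misfit below are illustrative, not the CAP implementation itself.

    ```python
    import numpy as np

    def best_shift_misfit(data, synth, max_shift):
        """Search integer lags in [-max_shift, max_shift] and return the
        (lag, L2 misfit) pair that best aligns the synthetic with the data."""
        best = (0, np.inf)
        for lag in range(-max_shift, max_shift + 1):
            m = float(np.sum((data - np.roll(synth, lag)) ** 2))
            if m < best[1]:
                best = (lag, m)
        return best

    t = np.linspace(0.0, 10.0, 500)
    synth = np.exp(-(t - 4.0) ** 2) * np.sin(8.0 * t)    # toy synthetic wavelet
    data = np.roll(synth, 12) + 0.01 * np.sin(3.0 * t)   # "observed": delayed + perturbed
    lag, misfit = best_shift_misfit(data, synth, 30)
    print(lag)   # recovers the 12-sample path delay
    ```

    In CAP each window (Pnl, Rayleigh, Love) gets its own shift, so a single imperfect 1D velocity model can still fit arrivals whose path delays differ.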

  2. Independent component analysis of EEG dipole source localization in resting and action state of brain

    NASA Astrophysics Data System (ADS)

    Almurshedi, Ahmed; Ismail, Abd Khamim

    2015-04-01

    EEG source localization was studied in order to determine the locations of the brain sources responsible for the potentials measured at the scalp electrodes, using EEGLAB with the Independent Component Analysis (ICA) algorithm. Neuronal sources generate current dipoles in different states of the brain that give rise to the measured potentials. The current dipole source locations are estimated by fitting an equivalent current dipole model using a non-linear optimization technique with a standardized boundary element head model. To fit dipole models to ICA components in an EEGLAB dataset, ICA decomposition is performed and the appropriate components to be fitted are selected. The topographical scalp distributions of delta, theta, alpha, and beta power spectra and the cross-coherence of EEG signals are observed. Under the closed-eyes condition, the alpha band was activated from the occipital (O1, O2) and parietal (P3, P4) areas in both the resting and action states of the brain; the parieto-occipital area is therefore active in both states. However, cross-coherence shows more coherence between the right and left hemispheres in the action state than in the resting state. These preliminary results indicate that the potentials arise from the same generators in the brain.

  3. The carbon footprint of dairy production systems through partial life cycle assessment.

    PubMed

    Rotz, C A; Montes, F; Chianese, D S

    2010-03-01

    Greenhouse gas (GHG) emissions and their potential effect on the environment have become an important national and international issue. Dairy production, along with all other types of animal agriculture, is a recognized source of GHG emissions, but little information exists on the net emissions from dairy farms. Component models for predicting all important sources and sinks of CH4, N2O, and CO2 from primary and secondary sources in dairy production were integrated in a software tool called the Dairy Greenhouse Gas model, or DairyGHG. This tool calculates the carbon footprint of a dairy production system as the net exchange of all GHG in CO2 equivalent units per unit of energy-corrected milk produced. Primary emission sources include enteric fermentation, manure, cropland used in feed production, and the combustion of fuel in machinery used to produce feed and handle manure. Secondary emissions are those occurring during the production of resources used on the farm, which can include fuel, electricity, machinery, fertilizer, pesticides, plastic, and purchased replacement animals. A long-term C balance is assumed for the production system, which does not account for potential depletion or sequestration of soil carbon. An evaluation of dairy farms of various sizes and production strategies gave carbon footprints of 0.37 to 0.69 kg of CO2 equivalent units/kg of energy-corrected milk, depending upon milk production level and the feeding and manure handling strategies used. In a comparison with previous studies, DairyGHG predicted C footprints similar to those reported when similar assumptions were made for feeding strategy, milk production, allocation method between milk and animal coproducts, and sources of CO2 and secondary emissions. DairyGHG provides a relatively simple tool for evaluating management effects on net GHG emissions and the overall carbon footprint of dairy production systems.
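
    The carbon-footprint arithmetic reduces to weighting each net gas emission by a 100-year global warming potential (GWP) and normalizing by energy-corrected milk (ECM). A minimal sketch: the GWP factors are the IPCC AR4 100-year values commonly used in such tools, while the farm emission totals and milk yield are invented.

    ```python
    # 100-year global warming potentials (IPCC AR4): CO2 = 1, CH4 = 25, N2O = 298
    GWP = {"CO2": 1.0, "CH4": 25.0, "N2O": 298.0}

    def carbon_footprint(emissions_kg, ecm_kg):
        """Return kg CO2-equivalent per kg of energy-corrected milk."""
        return sum(GWP[gas] * kg for gas, kg in emissions_kg.items()) / ecm_kg

    # Hypothetical annual farm totals (kg of each gas) and milk output (kg ECM)
    farm = {"CH4": 11_000.0, "N2O": 350.0, "CO2": 95_000.0}
    print(round(carbon_footprint(farm, 900_000.0), 2))
    ```

    With these made-up totals the result falls inside the 0.37-0.69 kg CO2-eq/kg ECM range reported above; the dominant term is usually enteric CH4 because of its GWP of 25.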

  4. Analytical and numerical construction of equivalent cables.

    PubMed

    Lindsay, K A; Rosenberg, J R; Tucker, G

    2003-08-01

    The mathematical complexity experienced when applying cable theory to arbitrarily branched dendrites has led to the development of a simple representation of any branched dendrite called the equivalent cable. The equivalent cable is an unbranched model of a dendrite together with a one-to-one mapping of potentials and currents on the branched model to those on the unbranched model, and vice versa. The piecewise uniform cable, with a symmetrised tri-diagonal system matrix, is shown to represent the canonical form for an equivalent cable. Through a novel application of the Laplace transform it is demonstrated that an arbitrary branched model of a dendrite can be transformed to the canonical form of an equivalent cable. The characteristic properties of the equivalent cable are extracted from the matrix for the transformed branched model. The one-to-one mapping follows automatically from the construction of the equivalent cable. The equivalent cable is used to provide a new procedure for characterising the location of synaptic contacts on spinal interneurons.
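
    The canonical form mentioned above rests on a linear-algebra fact: a tridiagonal matrix whose paired off-diagonal entries have positive products can be symmetrised by a diagonal similarity transform, which preserves its spectrum. A sketch of that step (the example matrix is arbitrary, not a cable matrix from the paper):

    ```python
    import numpy as np

    def symmetrize_tridiagonal(A):
        """Build the diagonal scaling d so that S = D^-1 A D is symmetric;
        valid when each product A[i, i+1] * A[i+1, i] is positive."""
        n = A.shape[0]
        d = np.ones(n)
        for i in range(1, n):
            d[i] = d[i - 1] * np.sqrt(A[i, i - 1] / A[i - 1, i])
        return A * d[None, :] / d[:, None]   # D^-1 A D without forming D

    A = np.array([[ 2.0, -1.0,  0.0],
                  [-4.0,  3.0, -2.0],
                  [ 0.0, -1.0,  5.0]])
    S = symmetrize_tridiagonal(A)
    print(np.allclose(S, S.T))   # True: similarity also keeps the eigenvalues
    ```

    Because the transform is a similarity, the symmetrised matrix has the same eigenvalues as the original, which is why the canonical cable retains the electrical behaviour of the branched model.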

  5. Estimates of internal-dose equivalent from inhalation and ingestion of selected radionuclides

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dunning, D.E.

    1982-01-01

    This report presents internal radiation dose conversion factors for radionuclides of interest in environmental assessments of nuclear fuel cycles. This volume provides an updated summary of estimates of committed dose equivalent for radionuclides considered in three previous Oak Ridge National Laboratory (ORNL) reports. Intakes by inhalation and ingestion are considered. The International Commission on Radiological Protection (ICRP) Task Group Lung Model has been used to simulate the deposition and retention of particulate matter in the respiratory tract. Results corresponding to activity median aerodynamic diameters (AMAD) of 0.3, 1.0, and 5.0 μm are given. The gastrointestinal (GI) tract has been represented by a four-segment catenary model with exponential transfer of radioactivity from one segment to the next. Retention of radionuclides in systemic organs is characterized by linear combinations of decaying exponential functions, recommended in ICRP Publication 30. The first-year annual dose rate, maximum annual dose rate, and fifty-year dose commitment per microcurie intake of each radionuclide are given for selected target organs and the effective dose equivalent. These estimates include contributions from specified source organs plus the systemic activity residing in the rest of the body; cross irradiation due to penetrating radiations has been incorporated into these estimates. 15 references.

  6. Scattering in infrared radiative transfer: A comparison between the spectrally averaging model JURASSIC and the line-by-line model KOPRA

    NASA Astrophysics Data System (ADS)

    Griessbach, Sabine; Hoffmann, Lars; Höpfner, Michael; Riese, Martin; Spang, Reinhold

    2013-09-01

    The viability of a spectrally averaging model to perform radiative transfer calculations in the infrared including scattering by atmospheric particles is examined for the application of infrared limb remote sensing measurements. Here we focus on the Michelson Interferometer for Passive Atmospheric Sounding (MIPAS) aboard the European Space Agency's Envisat. Various spectra for clear air and cloudy conditions were simulated with a spectrally averaging radiative transfer model and a line-by-line radiative transfer model for three atmospheric window regions (825-830, 946-951, 1224-1228 cm-1) and compared to each other. The results are rated in terms of the MIPAS noise equivalent spectral radiance (NESR). The clear air simulations generally agree within one NESR. The cloud simulations neglecting the scattering source term agree within two NESR. The differences between the cloud simulations including the scattering source term are generally below three and always below four NESR. We conclude that the spectrally averaging approach is well suited for fast and accurate infrared radiative transfer simulations including scattering by clouds. We found that the main source for the differences between the cloud simulations of both models is the cloud edge sampling. Furthermore we reasoned that this model comparison for clouds is also valid for atmospheric aerosol in general.

  7. Simulating the Permafrost Distribution on the Seward Peninsula, Alaska

    NASA Astrophysics Data System (ADS)

    Busey, R.; Hinzman, L. D.; Yoshikawa, K.; Liston, G. E.

    2005-12-01

    Permafrost extent has been estimated using an equivalent latitude / elevation model in regions with good climate, terrain, and soil property data. This research extends a previously developed model to a relatively data-sparse region: we are applying the general equivalent latitude model developed for the Caribou-Poker Creeks Research Watershed over the much larger area of the Seward Peninsula, Alaska. This region of sub-Arctic Alaska is a proxy for a warmer Arctic due to its broad expanses of tussock tundra, invading shrubs, and fragile permafrost with average temperatures just below freezing. The equivalent latitude model combines elevation, slope, and aspect with snow cover, where the snow cover distribution was defined using MicroMet and SnowModel. Source data for the distributed snow model came from meteorological stations across the Seward Peninsula from the National Weather Service, SNOTEL, RAWS, and our own stations. Simulations of permafrost extent will enable us to compare the current distribution to that existing during past climates and estimate the future state of permafrost on the Seward Peninsula. The broadest impacts to the terrestrial arctic regions will result through consequent effects of changing permafrost structure and extent. As the climate differentially warms in summer and winter, the permafrost will become warmer, the active layer (the layer of soil above the permafrost that annually experiences freeze and thaw) will become thicker, the lower boundary of permafrost will become shallower, and permafrost extent will decrease in area. These simple structural changes will affect every aspect of the surface water and energy balances. As permafrost extent decreases, there is more infiltration to groundwater. This has significant impacts on large and small scales.

  8. Mode-based equivalent multi-degree-of-freedom system for one-dimensional viscoelastic response analysis of layered soil deposit

    NASA Astrophysics Data System (ADS)

    Li, Chong; Yuan, Juyun; Yu, Haitao; Yuan, Yong

    2018-01-01

    Discrete models such as the lumped parameter model and the finite element model are widely used in solving the soil amplification of earthquakes. However, neither model accurately estimates the natural frequencies of a soil deposit or simulates frequency-independent damping. This research develops a new discrete model for one-dimensional viscoelastic response analysis of a layered soil deposit based on the mode equivalence method. The new discrete model is a one-dimensional equivalent multi-degree-of-freedom (MDOF) system characterized by a series of concentrated masses, springs, and dashpots with a special configuration. The dynamic response of the equivalent MDOF system is analytically derived and the physical parameters are formulated in terms of modal properties. The equivalent MDOF system is verified through a comparison of amplification functions with the available theoretical solutions. The appropriate number of degrees of freedom (DOFs) in the equivalent MDOF system is estimated. A comparative study of the equivalent MDOF system with the existing discrete models is performed. It is shown that the proposed equivalent MDOF system can exactly represent the natural frequencies and the hysteretic damping of soil deposits and provide more accurate results with fewer DOFs.
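
    The idea of matching modal properties can be illustrated by checking a plain lumped mass-spring chain against the closed-form frequencies of a uniform shear column, f_n = (2n-1)·vs/(4H). This is a generic discretization (not the paper's equivalent MDOF system), and the layer properties are invented; with enough DOFs the chain's lowest frequencies approach the continuum values.

    ```python
    import numpy as np

    H, vs, rho, N = 30.0, 200.0, 1800.0, 200    # depth (m), shear speed (m/s), density, DOFs
    dz = H / N
    k, m = rho * vs**2 / dz, rho * dz           # inter-node stiffness, lumped nodal mass

    # Tridiagonal stiffness: node 0 is the free surface, the base is fixed
    main = np.full(N, 2.0 * k)
    main[0] = k                                 # only one spring attaches at the surface
    K = np.diag(main) - k * (np.eye(N, k=1) + np.eye(N, k=-1))
    f = np.sqrt(np.linalg.eigvalsh(K) / m) / (2.0 * np.pi)   # natural frequencies (Hz)

    # Continuum soil-column frequencies for comparison: f_n = (2n-1) vs / (4H)
    f_exact = np.array([1.0, 3.0, 5.0]) * vs / (4.0 * H)
    print(np.round(f[:3], 2), f_exact)
    ```

    The residual discrepancy (of order dz/H) is exactly the kind of discretization error the mode equivalence method is designed to eliminate by construction.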

  9. Identification of active sources inside cavities using the equivalent source method-based free-field recovery technique

    NASA Astrophysics Data System (ADS)

    Bi, Chuan-Xing; Hu, Ding-Yu; Zhang, Yong-Bin; Jing, Wen-Qian

    2015-06-01

    In previous studies, an equivalent source method (ESM)-based technique for recovering the free sound field in a noisy environment has been successfully applied to exterior problems. In order to evaluate its performance in a more general noisy environment, that technique is used here to identify active sources inside cavities, where the sound field is composed of the field radiated by active sources and that reflected by walls. A patch approach with two semi-closed surfaces covering the target active sources is presented to perform the measurements, and the field that would be radiated by these target active sources into free space is extracted from the mixed field using the proposed technique; this extracted field is further used as the input to near-field acoustic holography for source identification. Simulation and experimental results validate the effectiveness of the proposed technique for source identification in cavities, and show the feasibility of performing the measurements with a double-layer planar array.
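
    The common core of such ESM techniques is solving for equivalent source strengths from measured pressures through a matrix of free-space Green's functions. The sketch below shows that step for a single monopole in free space; the geometry, frequency, and equivalent-source layout are illustrative assumptions, and the cavity-specific patch measurement of the paper is not modelled.

    ```python
    import numpy as np

    freq, c = 500.0, 343.0
    k = 2.0 * np.pi * freq / c                        # acoustic wavenumber

    def G(r):                                         # free-space Green's function
        return np.exp(-1j * k * r) / (4.0 * np.pi * r)

    src = np.array([[0.0, 0.0, 0.0]])                 # "true" active source
    # Equivalent monopoles retreated slightly behind the source region
    eq = np.column_stack([np.linspace(-0.2, 0.2, 15),
                          np.zeros(15), np.full(15, -0.05)])
    mic = np.column_stack([np.linspace(-0.5, 0.5, 40),
                           np.zeros(40), np.full(40, 0.3)])

    p = G(np.linalg.norm(mic - src, axis=1))          # "measured" pressures (noise-free)
    A = G(np.linalg.norm(mic[:, None] - eq[None], axis=2))
    q = np.linalg.lstsq(A, p, rcond=None)[0]          # equivalent source strengths

    # The identified strengths reproduce the field at a point off the array
    r_test = np.array([0.1, 0.0, 0.4])
    p_true = G(np.linalg.norm(r_test - src[0]))
    p_esm = G(np.linalg.norm(r_test - eq, axis=1)) @ q
    print(abs(p_esm - p_true) / abs(p_true))          # small relative error
    ```

    Once the strengths q are known they are independent of the measurement positions, which is what lets ESM-based techniques re-radiate the recovered free field as input to near-field acoustic holography.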

  10. Norovirus Dynamics in Wastewater Discharges and in the Recipient Drinking Water Source: Long-Term Monitoring and Hydrodynamic Modeling.

    PubMed

    Dienus, Olaf; Sokolova, Ekaterina; Nyström, Fredrik; Matussek, Andreas; Löfgren, Sture; Blom, Lena; Pettersson, Thomas J R; Lindgren, Per-Eric

    2016-10-04

    Norovirus (NoV) that enters drinking water sources with wastewater discharges is a common cause of waterborne outbreaks. The impact of wastewater treatment plants (WWTPs) on the river Göta älv (Sweden) was studied using monitoring and hydrodynamic modeling. The concentrations of NoV genogroups (GG) I and II in samples collected at WWTPs and drinking water intakes (source water) during one year were quantified using duplex real-time reverse-transcription polymerase chain reaction. The mean (standard deviation) NoV GGI and GGII genome concentrations were 6.2 (1.4) and 6.8 (1.8) log10 genome equivalents (g.e.) L^-1 in incoming wastewater and 5.3 (1.4) and 5.9 (1.4) log10 g.e. L^-1 in treated wastewater, respectively. The reduction at the WWTPs varied between 0.4 and 1.1 log10 units. In source water, the concentration ranged from below the detection limit to 3.8 log10 g.e. L^-1. NoV GGII was detected in both wastewater and source water more frequently during the cold than the warm period of the year. The spread of NoV in the river was simulated using a three-dimensional hydrodynamic model. The modeling results indicated that the NoV GGI and GGII genome concentrations in source water may occasionally be up to 2.8 and 1.9 log10 units higher, respectively, than the concentrations measured during the monitoring project.

  11. Attenuation Model Using the Large-N Array from the Source Physics Experiment

    NASA Astrophysics Data System (ADS)

    Atterholt, J.; Chen, T.; Snelson, C. M.; Mellors, R. J.

    2017-12-01

    The Source Physics Experiment (SPE) consists of a series of chemical explosions at the Nevada National Security Site. SPE seeks to better characterize the influence of subsurface heterogeneities on seismic wave propagation and energy dissipation from explosions. As a part of this experiment, SPE-5, a 5000 kg TNT equivalent chemical explosion, was detonated in 2016. During the SPE-5 experiment, a Large-N array of 996 geophones (half 3-component and half z-component) was deployed. This array covered an area that includes loosely consolidated alluvium (weak rock) and weathered granite (hard rock), and recorded the SPE-5 explosion as well as 53 weight drops. We use these Large-N recordings to develop an attenuation model of the area to better characterize how geologic structures influence source energy partitioning. We found a clear variation in seismic attenuation for different rock types: high attenuation (low Q) for alluvium and low attenuation (high Q) for granite. The attenuation structure correlates well with local geology, and will be incorporated into the large simulation effort of the SPE program to validate predictive models. (LA-UR-17-26382)

  12. Organ dose conversion coefficients for voxel models of the reference male and female from idealized photon exposures

    NASA Astrophysics Data System (ADS)

    Schlattl, H.; Zankl, M.; Petoussi-Henss, N.

    2007-04-01

    A new series of organ equivalent dose conversion coefficients for whole-body external photon exposure is presented for a standardized pair of human voxel models, called Rex and Regina. Irradiations from broad parallel beams in antero-posterior, postero-anterior, left- and right-side lateral directions as well as from a 360° rotational source have been performed numerically with the Monte Carlo transport code EGSnrc. Dose conversion coefficients for an isotropically distributed source were also computed. The voxel models Rex and Regina, originating from real patient CT data, comply in body and organ dimensions with the currently valid reference values given by the International Commission on Radiological Protection (ICRP) for the average Caucasian man and woman, respectively. While the equivalent dose conversion coefficients of many organs are in quite good agreement with the reference values of ICRP Publication 74, for some organs and certain geometries the discrepancies amount to 30% or more. Differences between the sexes are of the same order, with mostly higher dose conversion coefficients in the smaller female model. However, much smaller deviations from the ICRP values are observed for the resulting effective dose conversion coefficients. With the still valid definition of the effective dose (ICRP Publication 60), the greatest change appears in lateral exposures, with a decrease in the new models of at most 9%. However, when the modified definition of the effective dose suggested by an ICRP draft is applied, the largest deviation from the current reference values is obtained in postero-anterior geometry, with a reduction of the effective dose conversion coefficient by at most 12%.

  13. Comparative burial and thermal history of lower Upper Cretaceous strata, Powder River basin, Wyoming

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nuccio, V.F.

    1989-03-01

    Burial histories were reconstructed for three localities in the Powder River basin (PRB), Wyoming. Thermal maturity of lower Upper Cretaceous source rocks was determined by vitrinite reflectance (R_m) and time-temperature index (TTI) modeling, producing independent estimates for timing of the oil window (0.55-1.35% R_m). In the northwestern PRB, lower Upper Cretaceous rocks were buried to about 12,500 ft and achieved a thermal maturity of 0.50% to 0.56% at maximum burial, 10 Ma, based on measured R_m. TTI modeling suggests a slightly higher thermal maturity, with an R_m equivalent of approximately 0.75%, placing the source rocks at the beginning of the oil window 30 Ma. In the southwestern PRB, lower Upper Cretaceous rocks have been buried to about 15,000 ft and achieved thermal maturities between 0.66% and 0.75% about 10 Ma based on measured R_m; therefore, petroleum generation may have begun slightly earlier. TTI modeling estimates an R_m equivalent of 1.10%, placing the beginning of the oil window at 45 Ma. In the northeastern PRB, lower Upper Cretaceous rocks have been buried only to approximately 5500 ft. Measured R_m and TTI modeling indicate a thermal maturity for lower Upper Cretaceous rocks between 0.45% and 0.50% R_m, too low for petroleum generation. The higher R_m values determined by the TTI models may be due to overestimation of maximum burial depth and/or paleogeothermal gradients. The two independent maturity indicators do, however, constrain fairly narrowly the onset of petroleum generation.

  14. Collaboration, not competition: cost analysis of neonatal nurse practitioner plus neonatologist versus neonatologist-only care models.

    PubMed

    Bosque, Elena

    2015-04-01

    Although advanced practice in neonatal nursing is accepted and supported by the American Academy of Pediatrics and National Association of Neonatal Nurse Practitioners, less than one-half of all states allow independent prescriptive authority by advanced practice nurse practitioners. The purpose of this study was to compare costs of a collaborative practice model that includes neonatal nurse practitioner (NNP) plus neonatologist (Neo) versus a neonatologist only (Neo-Only) practice in Washington state. Published Internet median salary figures from 3 sources were averaged to produce mean ± SD provider salaries, and costs for each care model were calculated in this descriptive, comparative study. Median NNP versus Neo salaries were $99,773 ± $5206 versus $228,871 ± $9654, respectively (P < .0001). The NNP + Neo (5 NNP/3 Neo full-time equivalents [FTEs]) cost $1,185,475 versus Neo-Only (8 Neo FTEs) cost $1,830,960. The NNP + Neo practice model with 8 FTEs suggests a cost savings, with assumed equivalent reimbursement, of $645,485/year. These results may provide the impetus for more states to adopt broader scope of practice licensure for NNPs. These data may provide rationale for analysis of actual costs and outcomes of collaborative practice.
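
    The published totals can be sanity-checked directly from the reported median salaries; a minimal sketch (the recomputed totals differ from the published ones by a few dollars, presumably because the per-FTE salary figures are rounded):

```python
# Median annual salaries reported in the abstract (US$), and FTE counts
# for the two staffing models, each totalling 8 FTEs.
nnp_salary, neo_salary = 99_773, 228_871

cost_mixed    = 5 * nnp_salary + 3 * neo_salary   # NNP + Neo model
cost_neo_only = 8 * neo_salary                    # Neo-only model
savings = cost_neo_only - cost_mixed

print(cost_mixed, cost_neo_only, savings)
```

    This reproduces the roughly $645,485/year savings claimed for the collaborative model, under the abstract's assumption of equivalent reimbursement.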

  15. The Equivalent Electrokinetic Circuit Model of Ion Concentration Polarization Layer: Electrical Double Layer, Extended Space Charge and Electro-convection

    NASA Astrophysics Data System (ADS)

    Cho, Inhee; Huh, Keon; Kwak, Rhokyun; Lee, Hyomin; Kim, Sung Jae

    2016-11-01

    The first direct chronopotentiometric measurement was provided to distinguish the potential difference across the extended space charge (ESC) layer, which forms along with the electrical double layer (EDL) near a perm-selective membrane. From this experimental result, a linear relationship was obtained between the resistance of the ESC and the applied current density. Furthermore, we observed step-wise distributions of relaxation time in the limiting current regime, confirming the existence of an ESC capacitance distinct from that of the EDL. In addition, we proposed an equivalent electrokinetic circuit model of the ion concentration polarization (ICP) layer under rigorous consideration of the EDL, ESC and electro-convection (EC). In order to elucidate the voltage configuration in the chronopotentiometric measurement, the EC component was treated as a "dependent voltage source" connected in series with the ESC layer. This model successfully described the charging behavior of the ESC layer with or without EC, with each case determining its own relaxation time. Finally, we quantitatively verified these values using the Poisson-Nernst-Planck equations. This unified circuit model therefore provides key insight into ICP systems and potential energy-efficient applications.
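
    The charging behaviour described, an ohmic jump followed by step-wise relaxation on two distinct timescales (EDL and ESC), can be sketched as the galvanostatic step response of two series R-C elements; all component values below are illustrative assumptions, not the paper's:

```python
import numpy as np

# Assumed, illustrative component values: an ohmic bulk resistance in series
# with parallel R-C pairs for the EDL and the extended space charge (ESC).
I = 1e-6                     # applied current, A (galvanostatic step)
R_bulk = 1e5                 # ohm
R_edl, C_edl = 5e4, 1e-7     # EDL element, tau = R*C = 5 ms
R_esc, C_esc = 5e5, 1e-7     # ESC element, tau = 50 ms

def voltage(t):
    """Chronopotentiometric response: ohmic jump plus two RC charging terms."""
    v_edl = I * R_edl * (1.0 - np.exp(-t / (R_edl * C_edl)))
    v_esc = I * R_esc * (1.0 - np.exp(-t / (R_esc * C_esc)))
    return I * R_bulk + v_edl + v_esc

# Instantaneous ohmic step at t = 0, then relaxation on the two timescales.
print(voltage(0.0), voltage(1.0))
```

    In the paper's full model, the electro-convection contribution enters as a dependent voltage source in series with the ESC element, and the reported linear scaling of the ESC resistance with current density would make R_esc itself current-dependent.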

  16. Equivalent model of a dually-fed machine for electric drive control systems

    NASA Astrophysics Data System (ADS)

    Ostrovlyanchik, I. Yu; Popolzin, I. Yu

    2018-05-01

    The article shows that the mathematical model of a dually-fed machine is complicated by the presence of a controlled voltage source in the rotor circuit. The mathematical model is obtained by the method of the generalized two-phase electric machine, in a rotating orthogonal coordinate system aligned with the representing vector of the stator current. In the chosen coordinate system, the differential equations of electric equilibrium for the windings of the generalized machine (the Kirchhoff equations) are written in operator form, together with the expression for the torque that determines the electromechanical energy conversion in the machine. The equations are transformed so that they connect the winding currents, which determine the machine torque, with the voltages on these windings, and a structural diagram of the machine is constructed from them. Based on these equations and the accepted assumptions, expressions are obtained for balancing the EMF of the windings, and on this basis an equivalent mathematical model of the dually-fed machine is proposed that is convenient for use in electric drive control systems.
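
    As a sketch of the equations the abstract refers to (standard generalized-machine theory, not the paper's exact notation), the electric-equilibrium equations of a two-phase generalized machine in a frame rotating at speed ω_k, together with the flux linkages and torque, can be written:

```latex
\begin{aligned}
\bar u_s &= R_s \bar i_s + \frac{d\bar\psi_s}{dt} + j\omega_k \bar\psi_s, &
\bar\psi_s &= L_s \bar i_s + L_m \bar i_r,\\
\bar u_r &= R_r \bar i_r + \frac{d\bar\psi_r}{dt} + j(\omega_k - \omega)\bar\psi_r, &
\bar\psi_r &= L_r \bar i_r + L_m \bar i_s,\\
M &= \tfrac{3}{2}\, p\, L_m \operatorname{Im}\!\left(\bar i_s^{\,*} \bar i_r\right),
\end{aligned}
```

    where ω is the electrical rotor speed and the rotor voltage u_r is the controlled voltage source the abstract mentions; choosing ω_k to track the stator-current vector gives the coordinate system used in the article. Torque sign and scaling conventions vary between texts, so this is a sketch only.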

  17. Measurements of the cesium flow from a surface-plasma H- ion source

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, H.V.; Allison, P.W.

    1979-01-01

    A surface ionization gauge (SIG) was constructed and used to measure the Cs0 flow rate through the emission slit of a surface-plasma source (SPS) of H- ions with Penning geometry. The equivalent cesium density in the SPS discharge is deduced from these flow measurements. For dc operation the optimum H- current occurs at an equivalent cesium density of approx. 7 x 10^12 cm^-3 (corresponding to an average cesium consumption rate of 0.5 mg/h). For pulsed operation the optimum H- current occurs at an equivalent cesium density of approx. 2 x 10^13 cm^-3 (1-mg/h average cesium consumption rate). Cesium trapping by the SPS discharge was observed for both dc and pulsed operation. A cesium energy of approx. 0.1 eV is deduced from the observed time of flight to the SIG. In addition to providing information on the physics of the source, the SIG is a useful diagnostic tool for source startup and operation.

  18. Pseudo-dynamic source characterization accounting for rough-fault effects

    NASA Astrophysics Data System (ADS)

    Galis, Martin; Thingbaijam, Kiran K. S.; Mai, P. Martin

    2016-04-01

    Broadband ground-motion simulations, ideally for frequencies up to ~10 Hz or higher, are important for earthquake engineering, for example in seismic hazard analysis for critical facilities. An issue with such simulations is the realistic generation of the radiated wavefield in the desired frequency range. Numerical simulations of dynamic ruptures propagating on rough faults suggest that fault roughness is necessary for realistic high-frequency radiation. However, simulations of dynamic ruptures are too expensive for routine applications. Therefore, simplified synthetic kinematic models are often used. They are usually based on rigorous statistical analysis of rupture models inferred from inversions of seismic and/or geodetic data. However, due to the limited resolution of the inversions, these models are valid only in the low-frequency range. In addition to the slip, parameters such as rupture-onset time, rise time and source time functions are needed for a complete spatiotemporal characterization of the earthquake rupture, but these parameters are poorly resolved in the source inversions. To obtain a physically consistent quantification of these parameters, we simulate and analyze spontaneous dynamic ruptures on rough faults. First, by analyzing the impact of fault roughness on the rupture and seismic radiation, we develop equivalent planar-fault kinematic analogues of the dynamic ruptures. Next, we investigate the spatial interdependencies between the source parameters to allow consistent modeling that emulates the observed behavior of dynamic ruptures and captures the rough-fault effects. Based on these analyses, we formulate a framework for a pseudo-dynamic source model that is physically consistent with dynamic ruptures on rough faults.

  19. Electrochemical process for the preparation of nitrogen fertilizers

    DOEpatents

    Aulich, Ted R [Grand Forks, ND; Olson, Edwin S [Grand Forks, ND; Jiang, Junhua [Grand Forks, ND

    2012-04-10

    The present invention provides methods and apparatus for the preparation of nitrogen fertilizers including ammonium nitrate, urea, urea-ammonium nitrate, and/or ammonia, at low temperature and pressure, preferably at ambient temperature and pressure, utilizing a source of carbon, a source of nitrogen, and/or a source of hydrogen or hydrogen equivalent. Implementing an electrolyte serving as ionic charge carrier, (1) ammonium nitrate is produced via the reduction of a nitrogen source at the cathode and the oxidation of a nitrogen source at the anode; (2) urea or its isomers are produced via the simultaneous cathodic reduction of a carbon source and a nitrogen source; (3) ammonia is produced via the reduction of nitrogen source at the cathode and the oxidation of a hydrogen source or a hydrogen equivalent such as carbon monoxide or a mixture of carbon monoxide and hydrogen at the anode; and (4) urea-ammonium nitrate is produced via the simultaneous cathodic reduction of a carbon source and a nitrogen source, and anodic oxidation of a nitrogen source. The electrolyte can be aqueous, non-aqueous, or solid.

  20. Mobile source CO2 mitigation through smart growth development and vehicle fleet hybridization.

    PubMed

    Stone, Brian; Mednick, Adam C; Holloway, Tracey; Spak, Scott N

    2009-03-15

    This paper presents the results of a study on the effectiveness of smart growth development patterns and vehicle fleet hybridization in reducing mobile source emissions of carbon dioxide (CO2) across 11 major metropolitan regions of the Midwestern U.S. over a 50-year period. Through the integration of a vehicle travel activity modeling framework developed by researchers at the Oak Ridge National Laboratory with small area population projections, we model mobile source emissions of CO2 associated with alternative land development and technology change scenarios between 2000 and 2050. Our findings suggest that under an aggressive smart growth scenario, growth in emissions expected to occur under a business as usual scenario is reduced by 34%, while the full dissemination of hybrid-electric vehicles throughout the light vehicle fleet is found to offset the expected growth in emissions by 97%. Our results further suggest that high levels of urban densification could achieve reductions in 2050 CO2 emissions equivalent to those attainable through the full dissemination of hybrid-electric vehicle technologies.

  1. Calibration factors for the SNOOPY NP-100 neutron dosimeter

    NASA Astrophysics Data System (ADS)

    Moscu, D. F.; McNeill, F. E.; Chase, J.

    2007-10-01

    Within CANDU nuclear power facilities, only a small fraction of workers are exposed to neutron radiation. For these individuals, roughly 4.5% of the total radiation equivalent dose is the result of exposure to neutrons. When this figure is considered across all workers receiving external exposure of any kind, only 0.25% of the total radiation equivalent dose is the result of exposure to neutrons. At many facilities, the NP-100 neutron dosimeter, manufactured by Canberra Industries Incorporated, is employed in both direct and indirect dosimetry methods. Also known as "SNOOPY", these detectors are calibrated against a standard Am-Be neutron source, which yields a calibration factor relating the neutron count rate to the ambient dose equivalent rate. Using measurements presented in a technical note, readings from the dosimeter for six different neutron fields in six source-detector orientations were analysed to determine a calibration factor for each of these sources. The calibration factor depends on the neutron energy spectrum and on the radiation weighting factor that links neutron fluence to equivalent dose. Although the neutron energy spectra measured in the CANDU workplace are quite different from that of the Am-Be calibration source, the calibration factor remains constant, within acceptable limits, regardless of the neutron source used in the calibration, for the specified calibration orientation and current radiation weighting factors. However, changing the radiation weighting factors would change the calibration factor, and in that event it will be necessary to assess whether a change to the calibration process or the resulting calibration factor is warranted.
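
    The calibration relation itself is a simple ratio. A minimal sketch with made-up numbers (the abstract gives no actual NP-100 count or dose rates):

```python
# Illustrative numbers only; actual NP-100 rates are not given in the abstract.
reference_dose_rate = 100.0   # known H*(10) rate of the Am-Be field, uSv/h
reference_count_rate = 25.0   # detector count rate in that field, counts/s

# Calibration factor: converts counts/s into uSv/h of ambient dose equivalent.
calibration_factor = reference_dose_rate / reference_count_rate

# Field use: infer the dose equivalent rate from an observed count rate.
observed_count_rate = 10.0                             # counts/s
dose_rate = calibration_factor * observed_count_rate   # uSv/h
```

    As the abstract notes, the factor is tied to the radiation weighting factors, since they enter the reference dose equivalent rate: revising them rescales the numerator and hence the calibration factor.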

  2. Can we use the equivalent sphere model to approximate organ doses in space radiation environments?

    NASA Astrophysics Data System (ADS)

    Lin, Zi-Wei

    For space radiation protection one often calculates the dose or dose equivalent in blood forming organs (BFO). It has been customary to use a 5 cm equivalent sphere to approximate the BFO dose. However, previous studies have concluded that a 5 cm sphere gives a very different dose from the exact BFO dose. One study concludes that a 9 cm sphere is a reasonable approximation for the BFO dose in solar particle event (SPE) environments. In this study we investigate the reasons behind these observations and extend earlier studies by examining whether the BFO, eyes or skin can be approximated by the equivalent sphere model in different space radiation environments such as solar particle events and galactic cosmic ray (GCR) environments. We take the thickness distribution functions of the organs from the CAM (Computerized Anatomical Man) model, then use a deterministic radiation transport code to calculate organ doses in different space radiation environments. The organ doses have been evaluated with water or aluminum shielding from 0 to 20 g/cm2. We then compare these exact doses with results from the equivalent sphere model and determine in which cases and at what radius parameters the equivalent sphere model is a reasonable approximation. Furthermore, we propose a modified equivalent sphere model with two radius parameters to represent the skin or eyes. For solar particle events, we find that the radius parameters for the organ dose equivalent increase significantly with the shielding thickness, and the model works marginally for the BFO but is unacceptable for the eyes or skin. For galactic cosmic ray environments, the equivalent sphere model with one organ-specific radius parameter works well for the BFO dose equivalent, marginally well for the BFO dose and the dose equivalent of the eyes or skin, but is unacceptable for the dose of the eyes or skin. 
The BFO radius parameters are found to be significantly larger than 5 cm in all cases, consistent with the conclusion of an earlier study. The radius parameters for the dose equivalent in GCR environments are approximately between 10 and 11 cm for the BFO, 3.7 to 4.8 cm for the eyes, and 3.5 to 5.6 cm for the skin, while the radius parameters for the BFO dose are between 10 and 13 cm. In the proposed modified equivalent sphere model, the range of each of the two radius parameters for the skin (or eyes) is much tighter than in the equivalent sphere model with one radius parameter. Our results thus show that the equivalent sphere model works better in galactic cosmic ray environments than in solar particle events. The model works well or marginally well for the BFO but usually does not work for the eyes or skin. A modified model with two radius parameters works much better in approximating the dose and dose equivalent in the eyes or skin.
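
    The comparison the study performs, an organ dose computed as a thickness-distribution-weighted average versus the dose at a single equivalent-sphere depth, can be sketched with a toy exponential dose-depth curve. The depths, weights and attenuation constant below are assumptions, standing in for the CAM thickness distributions and transport-code results:

```python
import numpy as np

MU = 0.1  # attenuation constant, 1/(g/cm^2) (assumed)

def dose(depth):
    """Toy dose-vs-depth curve standing in for the transport-code result."""
    return np.exp(-MU * depth)

# Toy organ thickness distribution: shielding depths (g/cm^2) and weights.
depths  = np.array([2.0, 5.0, 9.0, 14.0, 20.0])
weights = np.array([0.1, 0.3, 0.3, 0.2, 0.1])

# "Exact" organ dose: average of the dose over the thickness distribution.
exact = float(np.dot(weights, dose(depths)))

# Equivalent-sphere radius: the single depth whose dose reproduces `exact`.
r_eq = -np.log(exact) / MU

print(f"exact dose {exact:.3f}, equivalent radius {r_eq:.1f} g/cm^2")
```

    Because dose falls off nonlinearly with depth, the matching radius (about 8.0 g/cm^2 here) differs from the mean thickness (9.2 g/cm^2) and shifts whenever the dose-depth curve changes, consistent with the finding that the radius parameter depends on the radiation environment and shielding.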

  3. An informatics model for guiding assembly of telemicrobiology workstations for malaria collaborative diagnostics using commodity products and open-source software.

    PubMed

    Suhanic, West; Crandall, Ian; Pennefather, Peter

    2009-07-17

    Deficits in clinical microbiology infrastructure exacerbate global infectious disease burdens. This paper examines how commodity computation, communication, and measurement products combined with open-source analysis and communication applications can be incorporated into laboratory medicine microbiology protocols. Those commodity components are all now sourceable globally. An informatics model is presented for guiding the use of low-cost commodity components and free software in the assembly of clinically useful and usable telemicrobiology workstations. The model incorporates two general principles: 1) collaborative diagnostics, where free and open communication and networking applications are used to link distributed collaborators for reciprocal assistance in organizing and interpreting digital diagnostic data; and 2) commodity engineering, which leverages globally available consumer electronics and open-source informatics applications, to build generic open systems that measure needed information in ways substantially equivalent to more complex proprietary systems. Routine microscopic examination of Giemsa and fluorescently stained blood smears for diagnosing malaria is used as an example to validate the model. The model is used as a constraint-based guide for the design, assembly, and testing of a functioning, open, and commoditized telemicroscopy system that supports distributed acquisition, exploration, analysis, interpretation, and reporting of digital microscopy images of stained malarial blood smears while also supporting remote diagnostic tracking, quality assessment and diagnostic process development. The open telemicroscopy workstation design and use-process described here can address clinical microbiology infrastructure deficits in an economically sound and sustainable manner. It can boost capacity to deal with comprehensive measurement of disease and care outcomes in individuals and groups in a distributed and collaborative fashion. 
The workstation enables local control over the creation and use of diagnostic data, while allowing for remote collaborative support of diagnostic data interpretation and tracking. It can enable global pooling of malaria disease information and the development of open, participatory, and adaptable laboratory medicine practices. The informatics model highlights how the larger issue of access to generic commoditized measurement, information processing, and communication technology in both high- and low-income countries can enable diagnostic services that are much less expensive, but substantially equivalent to those currently in use in high-income countries.

  4. Can carbon offsetting pay for upland ecological restoration?

    NASA Astrophysics Data System (ADS)

    Worrall, F.

    2012-04-01

    Upland peat soils represent a large terrestrial carbon store and as such have the potential to be either an ongoing net sink of carbon or a significant net source of carbon. In the UK many upland peats are managed for a range of purposes but these purposes have rarely included carbon stewardship. However, there is now an opportunity to consider whether management practices could be altered to enhance storage of carbon in upland peats. Further, there are now voluntary and regulated carbon trading schemes operational throughout Europe that mean stored carbon, if verified, could have an economic and tradeable value. This means that new income streams could become available for upland management. The 'Sustainable Uplands' RELU project has developed a model for calculating carbon fluxes from peat soils that covers all carbon uptake and release pathways (e.g. fluvial and gaseous pathways). The model has been developed so that the impact of common management options within UK upland peats can be considered. The model was run for a decade from 1997-2006 and applied to an area of 550 km2 of upland peat soils in the Peak District. The study estimates that the region is presently a net sink of -62 Ktonnes CO2 equivalent at an average export of -136 tonnes CO2 equivalent/km2/yr. If management interventions were targeted across the area the total sink could increase to -160 Ktonnes CO2/yr at an average export of -219 tonnes CO2 equivalent/km2/yr. However, not all interventions resulted in a benefit; some resulted in increased losses of CO2 equivalents. Given present costs of peatland restoration and value of carbon offsets, the study suggests that 51% of those areas, where a carbon benefit was estimated by modelling for targeted action of management interventions, would show a profit from carbon offsetting within 30 years. However, this percentage is very dependent upon the price of carbon used.

  5. Acoustic Analogy and Alternative Theories for Jet Noise Prediction

    NASA Technical Reports Server (NTRS)

    Morris, Philip J.; Farassat, F.

    2002-01-01

    Several methods for the prediction of jet noise are described. All but one of the noise prediction schemes are based on Lighthill's or Lilley's acoustic analogy, whereas the other is the jet noise generation model recently proposed by Tam and Auriault. In all of the approaches, some assumptions must be made concerning the statistical properties of the turbulent sources. In each case the characteristic scales of the turbulence are obtained from a solution of the Reynolds-averaged Navier-Stokes equation using a k-epsilon turbulence model. It is shown that, for the same level of empiricism, Tam and Auriault's model yields better agreement with experimental noise measurements than the acoustic analogy. It is then shown that this result is not because of some fundamental flaw in the acoustic analogy approach, but instead is associated with the assumptions made in the approximation of the turbulent source statistics. If consistent assumptions are made, both the acoustic analogy and Tam and Auriault's model yield identical noise predictions. In conclusion, a proposal is presented for an acoustic analogy that provides a clearer identification of the equivalent source mechanisms, together with a discussion of noise prediction issues that remain to be resolved.

  6. The Acoustic Analogy and Alternative Theories for Jet Noise Prediction

    NASA Technical Reports Server (NTRS)

    Morris, Philip J.; Farassat, F.

    2002-01-01

    This paper describes several methods for the prediction of jet noise. All but one of the noise prediction schemes are based on Lighthill's or Lilley's acoustic analogy while the other is the jet noise generation model recently proposed by Tam and Auriault. In all the approaches some assumptions must be made concerning the statistical properties of the turbulent sources. In each case the characteristic scales of the turbulence are obtained from a solution of the Reynolds-averaged Navier Stokes equation using a k-epsilon turbulence model. It is shown that, for the same level of empiricism, Tam and Auriault's model yields better agreement with experimental noise measurements than the acoustic analogy. It is then shown that this result is not because of some fundamental flaw in the acoustic analogy approach: but, is associated with the assumptions made in the approximation of the turbulent source statistics. If consistent assumptions are made, both the acoustic analogy and Tam and Auriault's model yield identical noise predictions. The paper concludes with a proposal for an acoustic analogy that provides a clearer identification of the equivalent source mechanisms and a discussion of noise prediction issues that remain to be resolved.

  8. Revisiting the social cost of carbon.

    PubMed

    Nordhaus, William D

    2017-02-14

    The social cost of carbon (SCC) is a central concept for understanding and implementing climate change policies. This term represents the economic cost caused by an additional ton of carbon dioxide emissions or its equivalent. The present study presents updated estimates based on a revised DICE model (Dynamic Integrated model of Climate and the Economy). The study estimates that the SCC is $31 per ton of CO2 in 2010 US$ for the current period (2015). For the central case, the real SCC grows at 3% per year over the period to 2050. The paper also compares the estimates with those from other sources.
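
    The stated 3% real growth rate implies roughly a 2.8-fold increase by 2050; a minimal sketch of that arithmetic (the paper's own 2050 figure is not quoted in this summary):

```python
# Values from the abstract: $31 per ton CO2 (2010 US$) in 2015,
# growing at 3% per year in real terms.
scc_2015 = 31.0
growth = 0.03

scc_2050 = scc_2015 * (1 + growth) ** (2050 - 2015)
print(f"implied 2050 SCC: ${scc_2050:.0f} per ton CO2 (2010 US$)")
```

    Compounding for 35 years at 3% gives a factor of about 2.81, i.e. roughly $87 per ton in 2010 dollars, purely as arithmetic on the abstract's numbers.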

  9. Revisiting the social cost of carbon

    NASA Astrophysics Data System (ADS)

    Nordhaus, William D.

    2017-02-01

    The social cost of carbon (SCC) is a central concept for understanding and implementing climate change policies. This term represents the economic cost caused by an additional ton of carbon dioxide emissions or its equivalent. The present study presents updated estimates based on a revised DICE model (Dynamic Integrated model of Climate and the Economy). The study estimates that the SCC is $31 per ton of CO2 in 2010 US$ for the current period (2015). For the central case, the real SCC grows at 3% per year over the period to 2050. The paper also compares the estimates with those from other sources.

  10. Nanodosimetry of electrons: analysis by experiment and modelling.

    PubMed

    Bantsar, A; Pszona, S

    2015-09-01

    Nanodosimetry experiments for high-energy electrons from a (131)I radioactive source interacting with gaseous nitrogen with sizes on a scale equivalent to the mass per area of a segment of DNA and nucleosome are described. The discrete ionisation cluster-size distributions were measured in experiments carried out with the Jet Counter. The experimental results were compared with those obtained by Monte Carlo modelling. The descriptors of radiation damages have been derived from the data obtained from ionisation cluster-size distributions.

  11. LANDSAT 4 band 6 data evaluation

    NASA Technical Reports Server (NTRS)

    1983-01-01

    Satellite data collected over Lake Ontario were processed to yield observed surface temperature values. This involved computing apparent radiance values, from averaged digital count values, for each point where surface temperatures were known. These radiance values were then corrected for atmospheric effects using the LOWTRAN 5A atmospheric propagation model, which was modified by incorporating a spectral response function for the LANDSAT band 6 sensors. A downwelled radiance term derived from LOWTRAN was included to account for reflected sky radiance, and a blackbody equivalent source radiance was computed. Measured temperatures were plotted against the predicted temperatures. The RMS error between the data sets is 0.51 K.
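
    The quoted 0.51 K figure is a root-mean-square error; a minimal sketch of that metric with hypothetical temperature pairs (the study's actual values are not given in this summary):

```python
import math

# Hypothetical measured vs predicted surface temperatures (K); the study
# compared lake-survey and LANDSAT-derived values.
measured  = [285.1, 286.0, 287.2, 288.5]
predicted = [285.6, 285.5, 287.9, 288.1]

# RMS error: square root of the mean squared difference between the sets.
rms = math.sqrt(sum((m - p) ** 2 for m, p in zip(measured, predicted))
                / len(measured))
print(f"RMS error: {rms:.2f} K")
```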

  12. A major crustal feature in the southeastern United States inferred from the MAGSAT equivalent source anomaly field

    NASA Technical Reports Server (NTRS)

    Ruder, M. E.; Alexander, S. S.

    1985-01-01

    The MAGSAT equivalent-source anomaly field evaluated at 325 km altitude depicts a prominent anomaly centered over southeast Georgia, adjacent to the high-amplitude positive Kentucky anomaly. To overcome the satellite resolution constraint in studying this anomaly, conventional geophysical data were included in the analysis: Bouguer gravity, seismic reflection and refraction, aeromagnetic, and in-situ stress-strain measurements. This integrated geophysical approach infers more specifically the nature and extent of the crustal and/or lithospheric source of the Georgia MAGSAT anomaly. Physical properties and the tectonic evolution of the area are all important in the interpretation.

  13. Pressure pulsations in piping system excited by a centrifugal turbomachinery taking the damping characteristics into consideration

    NASA Astrophysics Data System (ADS)

    Hayashi, I.; Kaneko, S.

    2014-02-01

    Pressure pulsations excited by centrifugal turbomachinery such as a compressor, fan, or pump at the blade passing frequency may cause severe noise and vibration in piping systems, so a practical method for evaluating these pulsations is strongly needed. In particular, the maximum pressure amplitude under resonant conditions should be appropriately evaluated. In this study, a one-dimensional excitation source model for a compressor or pump is introduced based on the equation of motion, so as to incorporate the non-linear damping proportional to velocity squared into the total piping system including the compressor or pump. The damping characteristics of the compressor or pump are investigated using a semi-empirical model. It is shown that the resistance coefficient of the compressor or pump depends on the Reynolds number defined using the equivalent velocity of the pulsating flow. The frequency response of the pressure amplitude and the pressure distribution in the piping system can be evaluated by introducing the equivalent resistance of the compressor or pump and that of the piping system. In particular, the relation between the maximum pressure amplitude in the piping system and the location of the excitation source under resonant conditions can be evaluated. Finally, the reduction of the pressure pulsations by use of an orifice plate is discussed in terms of the pulsation energy loss.

  14. PHOTON SPECTRA IN NPL STANDARD RADIONUCLIDE NEUTRON FIELDS.

    PubMed

    Roberts, N J

    2017-09-23

    An HPGe detector has been used to measure the photon spectra from the majority of radionuclide neutron sources in use at NPL (252Cf, 241Am-Be, 241Am-Li, 241Am-B). The HPGe was characterised and then modelled to produce a response matrix, and the measured pulse height spectra were unfolded to produce photon fluence spectra. Changes in the photon spectrum with time from a 252Cf source are evident: spectra from a 2-year-old and a 42-year-old 252Cf source are presented, showing the change from a continuum to peaks from long-lived isotopes of Cf. Other radionuclide neutron source spectra are also presented and discussed. The new spectra were used to improve the photon to neutron dose equivalent ratios from some earlier work at NPL with GM tubes and EPDs.

  15. Evaluation of water-mimicking solid phantom materials for use in HDR and LDR brachytherapy dosimetry

    NASA Astrophysics Data System (ADS)

    Schoenfeld, Andreas A.; Thieben, Maike; Harder, Dietrich; Poppe, Björn; Chofor, Ndimofor

    2017-12-01

    In modern HDR or LDR brachytherapy with photon emitters, fast checks of the dose profiles generated in water or a water-equivalent phantom have to be available in the interest of patient safety. However, the commercially available brachytherapy photon sources cover a wide range of photon emission spectra, and the range of the in-phantom photon spectrum is further widened by Compton scattering, so that the achievement of water-mimicking properties of such phantoms places high demands on their atomic composition. In order to classify the degree of water equivalence of the numerous commercially available solid water-mimicking phantom materials and the energy ranges of their applicability, the radial profiles of the absorbed dose to water, D_w, have been calculated using Monte Carlo simulations in these materials and in water phantoms of the same dimensions. This study includes the HDR therapy sources Nucletron Flexisource Co-60 HDR (60Co), Eckert und Ziegler BEBIG GmbH CSM-11 (137Cs), and Implant Sciences Corporation HDR Yb-169 Source 4140 (169Yb), as well as the LDR therapy sources IsoRay Inc. Proxcelan CS-1 (131Cs), IsoAid Advantage I-125 IAI-125A (125I), and IsoAid Advantage Pd-103 IAPd-103A (103Pd). This complements our previous comparison between phantom materials and water surrounding a Varian GammaMed Plus HDR therapy 192Ir source (Schoenfeld et al 2015). Simulations were performed in cylindrical phantoms consisting of either water or the materials RW1, RW3, Solid Water, HE Solid Water, Virtual Water, Plastic Water DT, Plastic Water LR, Original Plastic Water (2015), Plastic Water (1995), Blue Water, polyethylene, polystyrene and PMMA. While for 192Ir, 137Cs and 60Co most phantom materials can be regarded as water equivalent, for 169Yb the materials Plastic Water LR, Plastic Water DT and RW1 appear as water equivalent. For the low-energy sources 103Pd, 131Cs and 125I, only Plastic Water LR can be classified as water equivalent.

  16. Evaluation of water-mimicking solid phantom materials for use in HDR and LDR brachytherapy dosimetry.

    PubMed

    Schoenfeld, Andreas A; Thieben, Maike; Harder, Dietrich; Poppe, Björn; Chofor, Ndimofor

    2017-11-21

    In modern HDR or LDR brachytherapy with photon emitters, fast checks of the dose profiles generated in water or a water-equivalent phantom have to be available in the interest of patient safety. However, the commercially available brachytherapy photon sources cover a wide range of photon emission spectra, and the range of the in-phantom photon spectrum is further widened by Compton scattering, so that the achievement of water-mimicking properties of such phantoms places high demands on their atomic composition. In order to classify the degree of water equivalence of the numerous commercially available solid water-mimicking phantom materials and the energy ranges of their applicability, the radial profiles of the absorbed dose to water, D_w, have been calculated using Monte Carlo simulations in these materials and in water phantoms of the same dimensions. This study includes the HDR therapy sources Nucletron Flexisource Co-60 HDR (60Co), Eckert und Ziegler BEBIG GmbH CSM-11 (137Cs), and Implant Sciences Corporation HDR Yb-169 Source 4140 (169Yb), as well as the LDR therapy sources IsoRay Inc. Proxcelan CS-1 (131Cs), IsoAid Advantage I-125 IAI-125A (125I), and IsoAid Advantage Pd-103 IAPd-103A (103Pd). This complements our previous comparison between phantom materials and water surrounding a Varian GammaMed Plus HDR therapy 192Ir source (Schoenfeld et al 2015). Simulations were performed in cylindrical phantoms consisting of either water or the materials RW1, RW3, Solid Water, HE Solid Water, Virtual Water, Plastic Water DT, Plastic Water LR, Original Plastic Water (2015), Plastic Water (1995), Blue Water, polyethylene, polystyrene and PMMA. While for 192Ir, 137Cs and 60Co most phantom materials can be regarded as water equivalent, for 169Yb the materials Plastic Water LR, Plastic Water DT and RW1 appear as water equivalent. For the low-energy sources 103Pd, 131Cs and 125I, only Plastic Water LR can be classified as water equivalent.

  17. Dispersion modeling of polycyclic aromatic hydrocarbons from combustion of biomass and fossil fuels and production of coke in Tianjin, China.

    PubMed

    Tao, Shu; Li, Xinrong; Yang, Yu; Coveney, Raymond M; Lu, Xiaoxia; Chen, Haitao; Shen, Weiran

    2006-08-01

    A USEPA procedure, ISCLT3 (Industrial Source Complex Long-Term), was applied to model the spatial distribution of polycyclic aromatic hydrocarbons (PAHs) emitted into the atmosphere of Tianjin, China, from various sources including coal, petroleum, natural gas, and biomass. Benzo[a]pyrene equivalent concentrations (BaPeq) were calculated for risk assessment. Model results were provisionally validated for concentrations and profiles against the observed data at two monitoring stations. The dominant emission sources in the area were domestic coal combustion, coke production, and biomass burning. Mainly because of differences in emission heights, the contributions of the various sources to the average concentrations at the receptors differ from the proportions emitted: the share of domestic coal increased from approximately 43% at the sources to 56% at the receptors, while the contribution of the coking industry decreased from approximately 23% at the sources to 7% at the receptors. The spatial distributions of gaseous and particulate PAHs were similar, with higher concentrations occurring within urban districts because of domestic coal combustion. With relatively smaller contributions, the other minor sources had limited influence on the overall spatial distribution. The calculated average BaPeq value in air was 2.54 +/- 2.87 ng/m3 on an annual basis. Although only 2.3% of the area of Tianjin exceeded the national standard of 10 ng/m3, 41% of the entire population lives within this area.
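The BaPeq figure quoted above is a toxicity-weighted sum of individual PAH concentrations. A minimal sketch follows; the toxic equivalency factors (TEFs) below follow a commonly used scheme but are placeholders for illustration, not the values used in the paper:

```python
# Illustrative toxic equivalency factors (TEFs); schemes differ, so treat
# these values as assumptions rather than the study's own.
TEF = {
    "benzo[a]pyrene": 1.0,
    "benz[a]anthracene": 0.1,
    "chrysene": 0.01,
    "phenanthrene": 0.001,
}

def bap_equivalent(conc_ng_m3):
    """BaPeq = sum over PAHs of (concentration x TEF)."""
    return sum(c * TEF[name] for name, c in conc_ng_m3.items())

sample = {"benzo[a]pyrene": 1.5, "benz[a]anthracene": 8.0,
          "chrysene": 12.0, "phenanthrene": 40.0}
bap_equivalent(sample)  # ≈ 2.46 ng/m3
```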

  18. Setup for investigating gold nanoparticle penetration through reconstructed skin and comparison to published human skin data

    NASA Astrophysics Data System (ADS)

    Labouta, Hagar I.; Thude, Sibylle; Schneider, Marc

    2013-06-01

    Owing to the limited availability of human skin (HS) and the ethical restrictions on using animals in experiments, in vitro skin equivalents are a possible alternative for conducting particle penetration experiments. The conditions for conducting penetration experiments with model particles, 15-nm gold nanoparticles (AuNP), through nonsealed skin equivalents are described for the first time. These conditions include the experimental setup, sterility conditions, effective applied dose determination, skin sectioning, and skin integrity checks. Penetration at different exposure times (2 and 24 h) and after tissue fixation (fixed versus unfixed skin) is examined to establish a benchmark against HS, in an attempt to obtain results comparable to the HS experiments presented earlier. Multiphoton microscopy is used to detect gold luminescence in skin sections: λex = 800 nm is used for excitation of AuNP and skin samples, allowing a relative index of particle penetration to be determined. Despite the observed overprediction of penetration into skin equivalents, they could serve as a fast first screen for testing the behavior of nanoparticles and extrapolating their penetration into HS. Further investigations covering a wide range of particles with different physicochemical properties are required to validate the skin equivalent-human skin particle penetration relationship.

  19. SU-F-T-02: Estimation of Radiobiological Doses (BED and EQD2) of Single Fraction Electronic Brachytherapy That Equivalent to I-125 Eye Plaque: By Using Linear-Quadratic and Universal Survival Curve Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Y; Waldron, T; Pennington, E

    Purpose: To test the radiobiological impact of hypofractionated choroidal melanoma brachytherapy, we calculated single fraction equivalent doses (SFED) of the tumor equivalent to 85 Gy of I125-BT for 20 patients. Corresponding organs-at-risk (OAR) doses were estimated. Methods: Twenty patients treated with I125-BT were retrospectively examined. The tumor SFED values were calculated from the tumor BED using a conventional linear-quadratic (L-Q) model and a universal survival curve (USC). The opposite retina (α/β = 2.58), macula (2.58), optic disc (1.75), and lens (1.2) were examined. The % doses of OARs relative to tumor doses were assumed to be the same as for a single fraction delivery. The OAR SFED values were converted into BED and equivalent dose in 2 Gy fractions (EQD2) using both the L-Q and USC models, then compared to I125-BT. Results: The USC-based BED and EQD2 doses of the macula, optic disc, and lens were on average 118 ± 46% (p < 0.0527), 126 ± 43% (p < 0.0354), and 112 ± 32% (p < 0.0265) higher than those of I125-BT, respectively. The BED and EQD2 doses of the opposite retina were 52 ± 9% lower than for I125-BT. The tumor SFED values were 25.2 ± 3.3 Gy and 29.1 ± 2.5 Gy when using the USC and L-Q models, respectively, and can be delivered within 1 hour. All BED and EQD2 values using the L-Q model were significantly larger than with the USC model (p < 0.0274) due to the large single fraction size (> 14 Gy). Conclusion: The estimated single fraction doses were feasible to deliver within 1 hour using a high dose rate source such as electronic brachytherapy (eBT). However, the estimated OAR doses using eBT were 112-118% higher than with the I125-BT technique. Continued exploration of alternative dose rates or fractionation schedules should follow.
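The L-Q conversions used above follow the standard formulas BED = n·d·(1 + d/(α/β)) and EQD2 = BED/(1 + 2/(α/β)). A minimal sketch of that step (the USC branch, which replaces the quadratic term at large fraction sizes, is not shown):

```python
def bed(n_fractions, dose_per_fraction, alpha_beta):
    """Biologically effective dose, BED = n*d*(1 + d/(alpha/beta))."""
    return n_fractions * dose_per_fraction * (1.0 + dose_per_fraction / alpha_beta)

def eqd2(n_fractions, dose_per_fraction, alpha_beta):
    """Equivalent dose in 2 Gy fractions, EQD2 = BED / (1 + 2/(alpha/beta))."""
    return bed(n_fractions, dose_per_fraction, alpha_beta) / (1.0 + 2.0 / alpha_beta)

# A single 29.1 Gy fraction against alpha/beta = 2.58 (retina-like tissue):
bed(1, 29.1, 2.58)   # ≈ 357 Gy
eqd2(1, 29.1, 2.58)  # ≈ 201 Gy
```

The large BED for a single big fraction is exactly why the abstract notes the L-Q model giving systematically larger values than the USC above 14 Gy per fraction.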

  20. 40 CFR 60.47Da - Commercial demonstration permit.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... may not exceed the following equivalent MW electrical generation capacity for any one technology... plants may not exceed 15,000 MW. Technology Pollutant Equivalent electrical capacity(MW electrical output... (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Standards of Performance for Electric Utility...

  1. 40 CFR 63.2 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Protection Agency. Equivalent emission limitation means any maximum achievable control technology emission... common control that is included in a section 112(c) source category or subcategory for which a section... pollutant at least equivalent to the reduction in emissions of such pollutant achieved under a relevant...

  2. 40 CFR 63.2 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Protection Agency. Equivalent emission limitation means any maximum achievable control technology emission... common control that is included in a section 112(c) source category or subcategory for which a section... pollutant at least equivalent to the reduction in emissions of such pollutant achieved under a relevant...

  3. 40 CFR 63.2 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Protection Agency. Equivalent emission limitation means any maximum achievable control technology emission... common control that is included in a section 112(c) source category or subcategory for which a section... pollutant at least equivalent to the reduction in emissions of such pollutant achieved under a relevant...

  4. 40 CFR 63.2 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... Protection Agency. Equivalent emission limitation means any maximum achievable control technology emission... common control that is included in a section 112(c) source category or subcategory for which a section... pollutant at least equivalent to the reduction in emissions of such pollutant achieved under a relevant...

  5. 40 CFR 63.2 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Protection Agency. Equivalent emission limitation means any maximum achievable control technology emission... common control that is included in a section 112(c) source category or subcategory for which a section... pollutant at least equivalent to the reduction in emissions of such pollutant achieved under a relevant...

  6. 40 CFR 60.47Da - Commercial demonstration permit.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... may not exceed the following equivalent MW electrical generation capacity for any one technology... plants may not exceed 15,000 MW. Technology Pollutant Equivalent electrical capacity(MW electrical output... (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Standards of Performance for Electric Utility...

  7. MICROSCOPE Mission: First Constraints on the Violation of the Weak Equivalence Principle by a Light Scalar Dilaton

    NASA Astrophysics Data System (ADS)

    Bergé, Joel; Brax, Philippe; Métris, Gilles; Pernot-Borràs, Martin; Touboul, Pierre; Uzan, Jean-Philippe

    2018-04-01

    The existence of a light or massive scalar field with a coupling to matter weaker than gravitational strength is a possible source of violation of the weak equivalence principle. We use the first results on the Eötvös parameter by the MICROSCOPE experiment to set new constraints on such scalar fields. For a massive scalar field of mass smaller than 10^-12 eV (i.e., range larger than a few 10^5 m), we improve existing constraints by one order of magnitude to |α| < 10^-11 if the scalar field couples to the baryon number and to |α| < 10^-12 if the scalar field couples to the difference between the baryon and the lepton numbers. We also consider a model describing the coupling of a generic dilaton to the standard matter fields with five parameters, for a light field: we find that, for masses smaller than 10^-12 eV, the constraints on the dilaton coupling parameters are improved by one order of magnitude compared to previous equivalence principle tests.

  8. MICROSCOPE Mission: First Constraints on the Violation of the Weak Equivalence Principle by a Light Scalar Dilaton.

    PubMed

    Bergé, Joel; Brax, Philippe; Métris, Gilles; Pernot-Borràs, Martin; Touboul, Pierre; Uzan, Jean-Philippe

    2018-04-06

    The existence of a light or massive scalar field with a coupling to matter weaker than gravitational strength is a possible source of violation of the weak equivalence principle. We use the first results on the Eötvös parameter by the MICROSCOPE experiment to set new constraints on such scalar fields. For a massive scalar field of mass smaller than 10^{-12}  eV (i.e., range larger than a few 10^{5}  m), we improve existing constraints by one order of magnitude to |α|<10^{-11} if the scalar field couples to the baryon number and to |α|<10^{-12} if the scalar field couples to the difference between the baryon and the lepton numbers. We also consider a model describing the coupling of a generic dilaton to the standard matter fields with five parameters, for a light field: We find that, for masses smaller than 10^{-12}  eV, the constraints on the dilaton coupling parameters are improved by one order of magnitude compared to previous equivalence principle tests.

  9. Numerical modeling of NO formation in laminar Bunsen flames -- A flamelet approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chou, C.P.; Chen, J.Y.; Yam, C.G.

    1998-08-01

    Based on the flamelet concept, a numerical model has been developed for fast predictions of NO{sub x} and CO emissions from laminar flames. The model is applied to studying NO formation in the secondary nonpremixed flame zone of fuel-rich methane Bunsen flames. By solving the steady-state flamelet equations with the detailed GRI 2.1 methane-air mechanism, a flamelet library is generated containing thermochemical information for a range of scalar dissipation rates at ambient pressure. NO formation is modeled by solving its conservation equation, with the chemical source term evaluated from the flamelet library using the extended Zeldovich mechanism and NO reburning reactions. The optically-thin radiation heat transfer model is used to explore the potential effect of heat loss on thermal NO formation. The numerical scheme solves the two-dimensional Navier-Stokes equations as well as three additional equations: the mixture fraction, the NO mass fraction, and the enthalpy deficit due to radiative heat loss. With an established flamelet library, typical computing times are about 5 hours per calculation on a DEC-3000 300LX workstation. The predicted mixing field, radial temperature profiles, and NO distributions compare favorably with recent experimental data obtained by Nguyen et al. The dependence of NO{sub x} emission on equivalence ratio is studied numerically, and the predictions agree reasonably well with the measurements by Muss. The computed results show a decreasing trend of NO{sub x} emission with equivalence ratio but an increasing trend in the CO emission index. By examining this trade-off between NO{sub x} and CO, an optimal equivalence ratio of 1.4 is found to yield the lowest combined emission.

  10. Incentive Analysis for Clean Water Act Reauthorization: Point Source/Nonpoint Source Trading for Nutrient Discharge Reductions (1992)

    EPA Pesticide Factsheets

    Paper focuses on trading schemes in which regulated point sources are allowed to avoid upgrading their pollution control technology to meet water quality-based effluent limits if they pay for equivalent (or greater) reductions in nonpoint source pollution.

  11. Using open source computational tools for predicting human metabolic stability and additional absorption, distribution, metabolism, excretion, and toxicity properties.

    PubMed

    Gupta, Rishi R; Gifford, Eric M; Liston, Ted; Waller, Chris L; Hohman, Moses; Bunin, Barry A; Ekins, Sean

    2010-11-01

    Ligand-based computational models could be more readily shared between researchers and organizations if they were generated with open source molecular descriptors [e.g., chemistry development kit (CDK)] and modeling algorithms, because this would negate the requirement for proprietary commercial software. We initially evaluated open source descriptors and model building algorithms using a training set of approximately 50,000 molecules and a test set of approximately 25,000 molecules with human liver microsomal metabolic stability data. A C5.0 decision tree model demonstrated that CDK descriptors together with a set of Smiles Arbitrary Target Specification (SMARTS) keys had good statistics [κ = 0.43, sensitivity = 0.57, specificity = 0.91, and positive predicted value (PPV) = 0.64], equivalent to those of models built with commercial Molecular Operating Environment 2D (MOE2D) and the same set of SMARTS keys (κ = 0.43, sensitivity = 0.58, specificity = 0.91, and PPV = 0.63). Extending the dataset to ∼193,000 molecules and generating a continuous model using Cubist with a combination of CDK and SMARTS keys or MOE2D and SMARTS keys confirmed this observation. When the continuous predictions and actual values were binned to get a categorical score we observed a similar κ statistic (0.42). The same combination of descriptor set and modeling method was applied to passive permeability and P-glycoprotein efflux data with similar model testing statistics. In summary, open source tools demonstrated predictive results comparable to those of commercial software with attendant cost savings. We discuss the advantages and disadvantages of open source descriptors and the opportunity for their use as a tool for organizations to share data precompetitively, avoiding repetition and assisting drug discovery.
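The categorical statistics quoted above (κ, sensitivity, specificity, PPV) all derive from a 2×2 confusion matrix. A minimal sketch of how such figures are computed (hypothetical counts, not the study's data):

```python
def classification_stats(tp, fp, tn, fn):
    """Cohen's kappa, sensitivity, specificity and PPV from a 2x2 confusion matrix."""
    n = tp + fp + tn + fn
    p_obs = (tp + tn) / n  # observed agreement
    # chance agreement: product of row/column marginals for each class, summed
    p_chance = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2
    return {
        "kappa": (p_obs - p_chance) / (1.0 - p_chance),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
    }

stats = classification_stats(tp=40, fp=5, tn=45, fn=10)
# sensitivity 0.80, specificity 0.90, PPV ≈ 0.89, kappa 0.70
```

Binning a continuous model's predictions into categories, as the abstract describes for the Cubist model, yields exactly such a matrix, which is why a comparable κ could be reported for both model types.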

  12. Extension of Characterized Source Model for Broadband Strong Ground Motion Simulations (0.1-50s) of M9 Earthquake

    NASA Astrophysics Data System (ADS)

    Asano, K.; Iwata, T.

    2014-12-01

    After the 2011 Tohoku earthquake in Japan (Mw9.0), many papers on the source model of this mega subduction earthquake have been published. From our study on the modeling of strong motion waveforms in the period range 0.1-10 s, four isolated strong motion generation areas (SMGAs) were identified in the area deeper than 25 km (Asano and Iwata, 2012). The locations of these SMGAs were found to correspond to the asperities of M7-class events in the 1930s. However, many studies on kinematic rupture modeling using seismic, geodetic and tsunami data revealed the existence of a large slip area extending from the trench to the hypocenter (e.g., Fujii et al., 2011; Koketsu et al., 2011; Shao et al., 2011; Suzuki et al., 2011). That is, the excitation of seismic waves differs spatially between the long and short period ranges, as already discussed by Lay et al. (2012) and related studies. The Tohoku earthquake thus raised a new issue concerning the relationship between strong motion generation and the fault rupture process, and resolving it is important for advancing source modeling for future strong motion prediction. Our previous source model consists of four SMGAs and explains the observed ground motions in the period range 0.1-10 s well. We extended this source model to explain the observed ground motions over a wider period range, with a simple assumption referring to our previous study and the concept of the characterized source model (Irikura and Miyake, 2001, 2011). We obtained a characterized source model that has four SMGAs in the deep part, one large slip area in the shallow part, and a background area with low slip. The seismic moment of this source model is equivalent to Mw9.0. The strong ground motions are simulated by the empirical Green's function method (Irikura, 1986). Though the longest period limit is restricted by the SN ratio of the records of the EGF event (Mw~6.0), this new source model succeeded in reproducing the observed waveforms and Fourier amplitude spectra in the period range 0.1-50 s. The location of the large slip area seems to overlap the source regions of the historical events of 1793 and 1897 off the Sanriku area. We think a source model for strong motion prediction of an Mw9 event could be constructed by combining hierarchical multiple asperities or source patches related to historical events in this region.

  13. Theory of the Bloch oscillating transistor

    NASA Astrophysics Data System (ADS)

    Hassel, J.; Seppä, H.

    2005-01-01

    The Bloch oscillating transistor (BOT) is a device in which single electron current through a normal tunnel junction enhances Cooper pair current in a mesoscopic Josephson junction, leading to signal amplification. In this article we develop a theory in which the BOT dynamics is described as a two-level system. The theory is used to predict current-voltage characteristics and small-signal response. The transition from stable operation into the hysteretic regime is studied. By identifying the two-level switching noise as the main source of fluctuations, the expressions for equivalent noise sources and the noise temperature are derived. The validity of the model is tested by comparing the results with simulations and experiments.

  14. Frequencies and Flutter Speed Estimation for Damaged Aircraft Wing Using Scaled Equivalent Plate Analysis

    NASA Technical Reports Server (NTRS)

    Krishnamurthy, Thiagarajan

    2010-01-01

    Equivalent plate analysis is often used to replace the computationally expensive finite element analysis in the initial or conceptual design stages of aircraft wing structures. The equivalent plate model can also be used to design a wind tunnel model that matches the stiffness characteristics of the wing box of a full-scale aircraft wing while satisfying strength-based requirements. An equivalent plate analysis technique is presented to predict the static and dynamic response of an aircraft wing with or without damage. First, a geometric scale factor and a dynamic pressure scale factor are defined to relate the stiffness, load and deformation of the equivalent plate to the aircraft wing. A procedure using an optimization technique is presented to create scaled equivalent plate models from the full-scale aircraft wing using the geometric and dynamic pressure scale factors. The scaled models are constructed by matching the stiffness of the scaled equivalent plate with the scaled aircraft wing stiffness. It is demonstrated that the scaled equivalent plate model can be used to predict the deformation of the aircraft wing accurately. Once the full equivalent plate geometry is obtained, any other scaled equivalent plate geometry can be obtained using the geometric scale factor. Next, an average frequency scale factor is defined as the average ratio of the frequencies of the aircraft wing to the frequencies of the full-scale equivalent plate. The average frequency scale factor, combined with the geometric scale factor, is used to predict the frequency response of the aircraft wing from the scaled equivalent plate analysis. A procedure is outlined to estimate the frequency response and the flutter speed of an aircraft wing from the equivalent plate analysis using the frequency and geometric scale factors. The equivalent plate analysis is demonstrated on an aircraft wing without damage and another with damage. Both problems show that the scaled equivalent plate analysis can successfully predict the frequencies and flutter speed of a typical aircraft wing.
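The average frequency scale factor described above is simply the mean ratio of matched mode frequencies between the wing and the full-scale equivalent plate. A minimal sketch of that step (hypothetical helper names and made-up mode frequencies, not the paper's code or data):

```python
def average_frequency_scale(wing_freqs, plate_freqs):
    """Average ratio of aircraft-wing mode frequencies to equivalent-plate ones."""
    return sum(w / p for w, p in zip(wing_freqs, plate_freqs)) / len(wing_freqs)

def predict_wing_freqs(plate_freqs, freq_scale):
    """Scale equivalent-plate frequencies up to predicted wing frequencies."""
    return [freq_scale * p for p in plate_freqs]

# Illustrative first three bending/torsion modes (Hz):
scale = average_frequency_scale([10.0, 20.0, 30.0], [5.0, 10.0, 15.0])  # 2.0
predict_wing_freqs([5.0, 10.0, 15.0], scale)  # [10.0, 20.0, 30.0]
```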

  15. [The Performance Analysis for Lighting Sources in Highway Tunnel Based on Visual Function].

    PubMed

    Yang, Yong; Han, Wen-yuan; Yan, Ming; Jiang, Hai-feng; Zhu, Li-wei

    2015-10-01

    Under mesopic vision conditions, the spectral luminous efficiency function is a family of curves whose peak wavelength and intensity are affected by the light spectrum, background luminance and other factors, so the effect of a light source on visibility cannot be characterized by a single optical parameter. In this experiment, the reaction time of visual cognition is used as the evaluation index, and visual cognition is tested by the visual function method under different speeds and luminous environments. The light sources include high pressure sodium, an electrodeless fluorescent lamp and white LEDs at three color temperatures (ranging from 1958 to 5537 K). The background luminance values, between 1 and 5 cd·m-2, are typical of the basic sections of highway tunnel illumination and of general outdoor illumination, and all lie within the mesopic range. The test results show that, for the same speed and luminance, the reaction time of visual cognition for a high color temperature source is shorter than for a low color temperature source, and the reaction time for a visual target at high speed is shorter than at low speed; at the final moment, however, the visual angle of the target in the observer's visual field is larger at low speed than at high speed. Based on the MOVE model, the equivalent mesopic luminance was calculated for the different emission spectra and background luminances formed by the test sources. Compared with the photopic vision result, the coefficient of variation (CV) of the reaction-time curve corresponding to the equivalent mesopic luminance is smaller. Under mesopic conditions, the discrepancy between the equivalent luminance of the different light sources and photopic vision is one of the main causes of the differences in visual recognition. The emission peak of the GaN chip is close to the peak wavelength of the photopic luminous efficiency function, so the visual performance of the white LED at high color temperature is better than at low color temperature and better than the electrodeless fluorescent lamp, while the high pressure sodium lamp performs poorly because its spectral peak lies near the Na+ characteristic lines.

  16. Numerical Homogenization of Jointed Rock Masses Using Wave Propagation Simulation

    NASA Astrophysics Data System (ADS)

    Gasmi, Hatem; Hamdi, Essaïeb; Bouden Romdhane, Nejla

    2014-07-01

    Homogenization in fractured rock analyses is essentially based on the calculation of equivalent elastic parameters. In this paper, a new numerical homogenization method, programmed as a MATLAB code called HLA-Dissim, is presented. The developed approach simulates a discontinuity network of real rock masses based on the International Society for Rock Mechanics (ISRM) scanline field mapping methodology. It then evaluates a series of classic joint parameters to characterize density (RQD, specific length of discontinuities). A pulse wave, characterized by its amplitude, central frequency, and duration, is propagated from a source point to a receiver point of the simulated jointed rock mass, using a complex recursive method to evaluate the transmission and reflection coefficients for each simulated discontinuity. The seismic parameters, such as delay, velocity, and attenuation, are then calculated. Finally, the equivalent medium model parameters of the rock mass are computed numerically while taking into account the natural discontinuity distribution. This methodology was applied to 17 bench fronts from six aggregate quarries located in Tunisia, Spain, Austria, and Sweden. It allowed characterization of the rock mass discontinuity network, the resulting seismic performance, and the equivalent medium stiffness. The relationship between the equivalent Young's modulus and the rock discontinuity parameters was also analyzed. For these bench fronts, the proposed numerical approach was also compared to several empirical formulas based on RQD and fracture density values published in previous research studies, showing its usefulness and efficiency in rapidly estimating the Young's modulus of the equivalent medium for wave propagation analysis.
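The per-joint transmission coefficient at the heart of such simulations can be illustrated with the classical displacement-discontinuity model for a single dry joint at normal incidence, |T(ω)| = 1/√(1 + (ωZ/2κ)²) with seismic impedance Z = ρc and joint specific stiffness κ. The sketch below shows only this single-joint building block, not the paper's recursive multi-joint scheme; the material values are illustrative:

```python
import math

def joint_transmission(freq_hz, density, velocity, joint_stiffness):
    """|T| across one dry joint (displacement-discontinuity model, normal incidence)."""
    omega = 2.0 * math.pi * freq_hz
    z = density * velocity  # seismic impedance rho * c
    return 1.0 / math.sqrt(1.0 + (omega * z / (2.0 * joint_stiffness)) ** 2)

# A very stiff joint transmits almost everything; a compliant one attenuates:
joint_transmission(100.0, 2600.0, 4000.0, 1e20)  # ~1.0
joint_transmission(100.0, 2600.0, 4000.0, 1e9)   # ~0.29
```

Chaining such coefficients (with multiple reflections) over the simulated discontinuity network yields the total delay and attenuation from which the equivalent medium parameters are back-calculated.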

  17. Future change in seasonal march of snow water equivalent due to global climate change

    NASA Astrophysics Data System (ADS)

    Hara, M.; Kawase, H.; Ma, X.; Wakazuki, Y.; Fujita, M.; Kimura, F.

    2012-04-01

The western side of Honshu Island, Japan, is one of the heaviest snowfall areas in the world, although it lies at a lower latitude than other heavy snowfall areas. Snowmelt is a major water source for agricultural, industrial, and household use in Japan. Changes in the seasonal march of snow water equivalent, e.g., in the timing of the snowmelt season and in the snow amount, will strongly influence socio-economic activities (e.g., Ma et al., 2011). We performed four numerical experiments, comprising present and future climate simulations for both much-snow and less-snow cases, using a regional climate model. The Pseudo-Global-Warming (PGW) method (Kimura and Kitoh, 2008) was applied for the future climate simulations. The NCEP/NCAR reanalysis was used for the initial and boundary conditions in the present climate simulation and in the PGW method. MIROC 3.2 medres output for the 2070s under the IPCC SRES A2 scenario and for the 1990s under the 20c3m scenario was used for the PGW method. In the much-snow cases, the maximum total snow water equivalent over Japan, mostly observed in early February, was 49 Gt in the present simulation and decreased to 26 Gt in the future simulation; the decreasing rate of snow water equivalent due to climate change was 49%. The main cause of the decrease in total snow water equivalent is the rise in air temperature due to global climate change; the difference between present and future precipitation amounts is small.

  18. Mutagens from the cooking of food. III. Survey by Ames/Salmonella test of mutagen formation in secondary sources of cooked dietary protein.

    PubMed

    Bjeldanes, L F; Morris, M M; Felton, J S; Healy, S; Stuermer, D; Berry, P; Timourian, H; Hatch, F T

    1982-08-01

A survey of mutagen formation during the cooking of a variety of protein-rich foods that are minor sources of protein intake in the American diet is reported (see Bjeldanes, Morris, Felton et al. (1982) for a survey of major protein foods). Milk, cheese, tofu and organ meats showed negligible mutagen formation except following high-temperature cooking for long periods of time. Even under the most extreme conditions, tofu, cheese and milk exhibited fewer than 500 Ames/Salmonella typhimurium revertants/100 g equivalents (wet weight of uncooked food), and organ meats only double that amount. Beans showed low mutagen formation after boiling and boiling followed by frying (with and without oil). Only boiling of beans followed by baking for 1 hr gave appreciable mutagenicity (3650 revertants/100 g equivalents). Seafood samples gave a variety of results: red snapper, salmon, trout, halibut and rock cod all gave more than 1000 revertants/100 g wet weight equivalents when pan-fried or griddle-fried for about 6 min/side. Baked or poached rock cod and deep-fried shrimp showed no significant mutagen formation. Broiled lamb chops showed mutagen formation similar to that in red meats tested in the preceding paper: 16,000 revertants/100 g equivalents. These findings show that, as measured by bioassay in S. typhimurium, most of the foods that are minor sources of protein in the American diet are also minor sources of cooking-induced mutagens.

  19. 40 CFR 70.2 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... program to control air pollution from outer continental shelf sources, under section 328 of the Act; (12... other functionally-equivalent opening. General permit means a part 70 permit that meets the requirements of § 70.6(d). Major source means any stationary source (or any group of stationary sources that are...

  20. 40 CFR 70.2 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... program to control air pollution from outer continental shelf sources, under section 328 of the Act; (12... other functionally-equivalent opening. General permit means a part 70 permit that meets the requirements of § 70.6(d). Major source means any stationary source (or any group of stationary sources that are...

  1. 40 CFR 70.2 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... program to control air pollution from outer continental shelf sources, under section 328 of the Act; (12... other functionally-equivalent opening. General permit means a part 70 permit that meets the requirements of § 70.6(d). Major source means any stationary source (or any group of stationary sources that are...

  2. Material and physical model for evaluation of deep brain activity contribution to EEG recordings

    NASA Astrophysics Data System (ADS)

    Ye, Yan; Li, Xiaoping; Wu, Tiecheng; Li, Zhe; Xie, Wenwen

    2015-12-01

    Deep brain activity is conventionally recorded with surgical implantation of electrodes. During the neurosurgery, brain tissue damage and the consequent side effects to patients are inevitably incurred. In order to eliminate undesired risks, we propose that deep brain activity should be measured using the noninvasive scalp electroencephalography (EEG) technique. However, the deeper the neuronal activity is located, the noisier the corresponding scalp EEG signals are. Thus, the present study aims to evaluate whether deep brain activity could be observed from EEG recordings. In the experiment, a three-layer cylindrical head model was constructed to mimic a human head. A single dipole source (sine wave, 10 Hz, altering amplitudes) was embedded inside the model to simulate neuronal activity. When the dipole source was activated, surface potential was measured via electrodes attached on the top surface of the model and raw data were recorded for signal analysis. Results show that the dipole source activity positioned at 66 mm depth in the model, equivalent to the depth of deep brain structures, is clearly observed from surface potential recordings. Therefore, it is highly possible that deep brain activity could be observed from EEG recordings and deep brain activity could be measured using the noninvasive scalp EEG technique.

  3. Characterization of heat transfer in nutrient materials, part 2

    NASA Technical Reports Server (NTRS)

    Cox, J. E.; Bannerot, R. B.; Chen, C. K.; Witte, L. C.

    1973-01-01

    A thermal model is analyzed that takes into account phase changes in the nutrient material. The behavior of fluids in low gravity environments is discussed along with low gravity heat transfer. Thermal contact resistance in the Skylab food heater is analyzed. The original model is modified to include: equivalent conductance due to radiation, radial equivalent conductance, wall equivalent conductance, and equivalent heat capacity. A constant wall-temperature model is presented.

  4. Lu-Hf AND Sm-Nd EVOLUTION IN LUNAR MARE BASALTS.

    USGS Publications Warehouse

    Unruh, D.M.; Stille, P.; Patchett, P.J.; Tatsumoto, M.

    1984-01-01

Lu-Hf and Sm-Nd data for mare basalts, combined with Rb-Sr and total REE data taken from the literature, suggest that the mare basalts were derived by small (≲10%) degrees of partial melting of cumulate sources, but that the magma ocean from which these sources formed was light-REE- and Hf-enriched. Calculated source compositions range from lherzolite to olivine websterite. Nonmodal melting of small amounts of ilmenite (≲3%) in the sources seems to be required by the Lu/Hf data. A comparison of the Hf and Nd isotopic characteristics between the mare basalts and terrestrial oceanic basalts reveals that the εHf/εNd ratios in low-Ti mare basalts are much higher than in terrestrial ocean basalts.

  5. Photon noise from chaotic and coherent millimeter-wave sources measured with horn-coupled, aluminum lumped-element kinetic inductance detectors

    NASA Astrophysics Data System (ADS)

    Flanigan, D.; McCarrick, H.; Jones, G.; Johnson, B. R.; Abitbol, M. H.; Ade, P.; Araujo, D.; Bradford, K.; Cantor, R.; Che, G.; Day, P.; Doyle, S.; Kjellstrand, C. B.; Leduc, H.; Limon, M.; Luu, V.; Mauskopf, P.; Miller, A.; Mroczkowski, T.; Tucker, C.; Zmuidzinas, J.

    2016-02-01

We report photon-noise limited performance of horn-coupled, aluminum lumped-element kinetic inductance detectors at millimeter wavelengths. The detectors are illuminated by a millimeter-wave source that uses an active multiplier chain to produce radiation between 140 and 160 GHz. We feed the multiplier with either amplified broadband noise or a continuous-wave tone from a microwave signal generator. We demonstrate that the detector response over a 40 dB range of source power is well-described by a simple model that considers the number of quasiparticles. The detector noise-equivalent power (NEP) is dominated by photon noise when the absorbed power is greater than approximately 1 pW, which corresponds to NEP ≈ 2 × 10^-17 W Hz^-1/2, referenced to absorbed power. At higher source power levels, we observe the relationships between noise and power expected from the photon statistics of the source signal: NEP ∝ P for broadband (chaotic) illumination and NEP ∝ P^1/2 for continuous-wave (coherent) illumination.
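
    The two scaling regimes follow directly from photon statistics: coherent light carries only shot noise, while chaotic light adds a wave (photon-bunching) term. A minimal numeric sketch, with the optical bandwidth and power grid assumed rather than taken from the experiment:

```python
import numpy as np

h = 6.626e-34                  # Planck constant, J*s
nu = 150e9                     # photon frequency, Hz (mid-band of 140-160 GHz)
dnu = 20e9                     # optical bandwidth of broadband source, Hz (assumed)
P = np.logspace(-13, -10, 50)  # absorbed power, W

# Coherent (CW) illumination: shot-noise term only -> NEP ~ P^(1/2)
nep_coherent = np.sqrt(2.0 * h * nu * P)

# Chaotic (broadband) illumination: shot + wave (bunching) terms -> NEP ~ P at high P
nep_chaotic = np.sqrt(2.0 * h * nu * P + 2.0 * P**2 / dnu)
```

    At P = 1 pW the shot-noise term alone gives NEP ≈ 1.4 × 10^-17 W Hz^-1/2, the same order as the measured value; at higher powers the bunching term makes the chaotic NEP grow linearly with P.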

  6. Theoretical comparison, equivalent transformation, and conjunction operations of electromagnetic induction generator and triboelectric nanogenerator for harvesting mechanical energy.

    PubMed

    Zhang, Chi; Tang, Wei; Han, Changbao; Fan, Fengru; Wang, Zhong Lin

    2014-06-11

The triboelectric nanogenerator (TENG) is a newly invented technology that converts mechanical energy into electricity using conventional organic materials with functionalized surfaces; it is lightweight, cost-effective, and easily scalable. Here, we present the first systematic analysis and comparison of the electromagnetic induction generator (EMIG) and the TENG in terms of their working mechanisms, governing equations, and output characteristics, aiming at establishing complementary applications of the two technologies for harvesting various forms of mechanical energy. The equivalent transformation and conjunction operations of the two power sources for the external circuit are also explored, which provide evidence that the TENG can be considered a current source with a large internal resistance, while the EMIG is equivalent to a voltage source with a small internal resistance. The theoretical comparison and experimental validations presented in this paper establish the basis for using the TENG as a new energy technology that could become as important as the EMIG for general power applications at large scale. It opens the field of organic nanogenerators to chemists and materials scientists, who can for the first time use conventional organic materials to convert mechanical energy into electricity at high efficiency. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
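
    The source-equivalence claim can be illustrated with a standard Thevenin-Norton transformation. The numbers below are hypothetical, chosen only to contrast a low-impedance EMIG-like source with a high-impedance TENG-like source:

```python
def thevenin_to_norton(v_th, r_src):
    """Convert a Thevenin source (V_th in series with R) to its Norton
    equivalent (I_n in parallel with the same R)."""
    return v_th / r_src, r_src

def load_voltage(v_th, r_src, r_load):
    """Voltage delivered to a resistive load by a Thevenin source."""
    return v_th * r_load / (r_src + r_load)

# EMIG-like: voltage source with small internal resistance (values assumed)
v_emig, r_emig = 5.0, 10.0     # 5 V, 10 ohm
# TENG-like: current source with large internal resistance (values assumed)
i_teng, r_teng = 1e-6, 1e8     # 1 uA, 100 Mohm
v_teng = i_teng * r_teng       # Thevenin equivalent: 100 V open-circuit

# Into a 1 kohm load, the EMIG delivers nearly its full voltage,
# while the TENG behaves as a near-ideal current source:
v_load_emig = load_voltage(v_emig, r_emig, 1e3)   # ~4.95 V
i_load_teng = v_teng / (r_teng + 1e3)             # ~1 uA, almost independent of load
```

    The asymmetry in internal resistance is why the paper treats the two generators as duals: the EMIG's output voltage is nearly load-independent, and the TENG's output current is.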

  7. Sound field separation with sound pressure and particle velocity measurements.

    PubMed

    Fernandez-Grande, Efren; Jacobsen, Finn; Leclère, Quentin

    2012-12-01

In conventional near-field acoustic holography (NAH) it is not possible to distinguish between sound coming from the two sides of the array; thus, it is a requirement that all the sources are confined to one side and radiate into a free field. When this requirement cannot be fulfilled, sound field separation techniques make it possible to distinguish between outgoing and incoming waves from the two sides, so that NAH can be applied. In this paper, a separation method based on the measurement of the particle velocity in two layers and another method based on the measurement of the pressure and the velocity in a single layer are proposed. The two methods use an equivalent source formulation with separate transfer matrices for the outgoing and incoming waves, so that the sound from the two sides of the array can be modeled independently. A weighting scheme is proposed to account for the distance between the equivalent sources and the measurement surfaces and for the difference in magnitude between pressure and velocity. Experimental and numerical studies have been conducted to examine the methods. The double-layer velocity method seems to be more robust to noise and flanking sound than the combined pressure-velocity method, although it requires an additional measurement surface. On the whole, the separation methods can be useful when the disturbance from the incoming field is significant; otherwise, direct reconstruction is more accurate and straightforward.
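
    The core of the equivalent-source formulation, separate transfer matrices for outgoing and incoming waves solved jointly, can be sketched in one dimension with pressure-only data. The geometry, frequency, and monopole model below are illustrative assumptions; the actual methods use pressure-velocity or double-layer velocity measurements plus a weighting scheme:

```python
import numpy as np

k = 2.0 * np.pi * 1000.0 / 343.0        # wavenumber at 1 kHz in air

def green(x_src, x_mic):
    """Free-field monopole Green's function between axial positions."""
    r = np.abs(x_mic[:, None] - x_src[None, :])
    return np.exp(-1j * k * r) / (4.0 * np.pi * r)

mics = np.linspace(2.0, 2.5, 16)        # measurement positions, m
es_out = np.linspace(0.0, 0.4, 8)       # equivalent sources on the "source" side
es_in = np.linspace(4.0, 4.4, 8)        # equivalent sources on the "disturbance" side

# Synthetic measured pressure: one true source on each side of the array
p = (green(np.array([0.2]), mics) @ np.array([1.0])
     + green(np.array([4.2]), mics) @ np.array([0.5]))

# Stack separate transfer matrices for outgoing and incoming waves, solve jointly
A = np.hstack([green(es_out, mics), green(es_in, mics)])
q, *_ = np.linalg.lstsq(A, p, rcond=None)
p_out = green(es_out, mics) @ q[:8]     # separated outgoing field at the array
```

    Because each wave family has its own transfer matrix, the least-squares solution attributes the measured field to the correct side, and the outgoing part can then be fed to a standard NAH reconstruction.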

  8. Estimating Equivalency of Explosives Through A Thermochemical Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maienschein, J L

    2002-07-08

The Cheetah thermochemical computer code provides an accurate method for estimating the TNT equivalency of any explosive, evaluated either with respect to peak pressure or the quasi-static pressure at long time in a confined volume. Cheetah calculates the detonation energy and heat of combustion for virtually any explosive (pure or formulation). Comparing the detonation energy for an explosive with that of TNT allows estimation of the TNT equivalency with respect to peak pressure, while comparison of the heat of combustion allows estimation of TNT equivalency with respect to quasi-static pressure. We discuss the methodology, present results for many explosives, and show comparisons with equivalency data from other sources.
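
    Both equivalency estimates reduce to simple energy ratios against TNT. A sketch with hypothetical explosive properties; the TNT reference values below are approximate literature numbers assumed for illustration, not Cheetah output:

```python
# Reference values for TNT (approximate literature values, assumed here)
E_DET_TNT = 4.3    # detonation energy, MJ/kg
E_COMB_TNT = 15.0  # heat of combustion, MJ/kg

def tnt_equiv_peak(e_det):
    """TNT equivalency w.r.t. peak pressure (detonation energy ratio)."""
    return e_det / E_DET_TNT

def tnt_equiv_qsp(e_comb):
    """TNT equivalency w.r.t. quasi-static pressure (heat of combustion ratio)."""
    return e_comb / E_COMB_TNT

# Hypothetical explosive: 6.2 MJ/kg detonation energy, 21 MJ/kg heat of combustion
eq_peak = tnt_equiv_peak(6.2)   # ~1.44 times TNT for peak pressure
eq_qsp = tnt_equiv_qsp(21.0)    # 1.40 times TNT for quasi-static pressure
```

    The two ratios generally differ for a given explosive, which is why the paper reports equivalency separately for the two pressure regimes.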

  9. Aspiring to Spectral Ignorance in Earth Observation

    NASA Astrophysics Data System (ADS)

    Oliver, S. A.

    2016-12-01

Enabling robust, defensible and integrated decision making in the era of Big Earth Data requires the fusion of data from multiple and diverse sensor platforms and networks. While the application of standardised global grid systems provides a common spatial analytics framework that facilitates the computationally efficient and statistically valid integration and analysis of these various data sources across multiple scales, there remains the challenge of sensor equivalency, particularly when combining data from different earth observation satellite sensors (e.g. combining Landsat and Sentinel-2 observations). To realise the vision of a sensor-ignorant analytics platform for earth observation, we require automation of spectral matching across the available sensors. Ultimately, the aim is to remove the requirement for the user to possess any sensor knowledge in order to undertake analysis. This paper introduces the concept of spectral equivalence and proposes a methodology through which equivalent bands may be sourced from a set of potential target sensors through the application of equivalence metrics and thresholds. A number of parameters can be used to determine whether a pair of spectra are equivalent for the purposes of analysis. A baseline set of thresholds for these parameters, and a way to apply them systematically to relate spectral bands amongst numerous different sensors, is proposed. The base unit for comparison in this work is the relative spectral response. From this input, what constitutes equivalence can be specified by the user, based on their own conceptualisation of equivalence.
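
    One simple equivalence metric over relative spectral responses is the normalized band overlap. A sketch with synthetic Gaussian bands; the metric, the band shapes, and the 0.8 threshold are illustrative assumptions, not the paper's actual parameters:

```python
import numpy as np

def band_overlap(wl, rsr_a, rsr_b):
    """Overlap of two relative spectral responses after area-normalization.
    Returns 1.0 for identical band shapes, 0.0 for disjoint bands."""
    dx = wl[1] - wl[0]                  # uniform wavelength grid assumed
    a = rsr_a / (rsr_a.sum() * dx)
    b = rsr_b / (rsr_b.sum() * dx)
    return float(np.minimum(a, b).sum() * dx)

wl = np.linspace(620.0, 690.0, 400)     # nm, hypothetical red bands
rsr_1 = np.exp(-0.5 * ((wl - 655.0) / 10.0) ** 2)   # sensor A band (assumed)
rsr_2 = np.exp(-0.5 * ((wl - 660.0) / 12.0) ** 2)   # sensor B band (assumed)

score = band_overlap(wl, rsr_1, rsr_2)
equivalent = score > 0.8                # equivalence threshold (assumed)
```

    In a sensor-ignorant pipeline this test would run pairwise over the candidate sensors' relative spectral responses, so the user never needs to name bands explicitly.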

  10. Can the Equivalent Sphere Model Approximate Organ Doses in Space Radiation Environments?

    NASA Technical Reports Server (NTRS)

    Zi-Wei, Lin

    2007-01-01

In space radiation calculations it is often useful to calculate the dose or dose equivalent in the blood-forming organs (BFO), the skin, or the eye. It has been customary to use a 5 cm equivalent sphere to approximate the BFO dose. However, previous studies have shown that a 5 cm sphere gives conservative dose values for the BFO. In this study we use a deterministic radiation transport code with the Computerized Anatomical Man model to investigate whether the equivalent sphere model can approximate organ doses in space radiation environments. We find that for galactic cosmic ray environments the equivalent sphere model with an organ-specific constant radius parameter works well for the BFO dose equivalent and marginally well for the BFO dose and the dose equivalents of the eye and the skin. For solar particle events, the radius parameters for the organ dose equivalent increase with shielding thickness, and the model works marginally for the BFO but is unacceptable for the eye and the skin. The ranges of the radius parameters are also shown, and the BFO radius parameters are found to be significantly larger than 5 cm in all cases.

  11. Two parametric voice source models and their asymptotic analysis

    NASA Astrophysics Data System (ADS)

    Leonov, A. S.; Sorokin, V. N.

    2014-05-01

The paper studies the asymptotic behavior of the glottal area function near the moments of glottis opening and closing for two mathematical voice source models. It is shown that in the first model, the asymptotics of the area function obeys a power law with an exponent of no less than 1. Detailed analysis makes it possible to refine these limits depending on the relative sizes of the intervals of a closed and open glottis. This work also studies another parametric model of the glottal area, which is based on a simplified physical-geometrical representation of vocal-fold vibration processes. It is a special variant of the well-known two-mass model and contains five parameters: the period of the main tone, equivalent masses on the lower and upper edges of the vocal folds, the coefficient of elastic resistance of the lower vocal fold, and the delay time between openings of the upper and lower folds. It is established that the asymptotics of the obtained glottal area function obeys a power law with an exponent of 1 both at opening and at closing.

  12. Using the Gravity Model to Estimate the Spatial Spread of Vector-Borne Diseases

    PubMed Central

    Barrios, José Miguel; Verstraeten, Willem W.; Maes, Piet; Aerts, Jean-Marie; Farifteh, Jamshid; Coppin, Pol

    2012-01-01

Gravity models are commonly used spatial interaction models. They have been widely applied in a large set of domains dealing with interactions amongst spatial entities. The spread of vector-borne diseases is also related to the intensity of interaction between spatial entities, namely, the physical habitat of pathogens’ vectors and/or hosts, and urban areas, thus humans. This study implements the concept behind gravity models in the spatial spread of two vector-borne diseases, nephropathia epidemica and Lyme borreliosis, based on current knowledge of the transmission mechanism of these diseases. Two sources of information on vegetated systems were tested: the CORINE land cover map and MODIS NDVI. The size of vegetated areas near urban centers and a local indicator of occupation-related exposure were found to be significant predictors of disease risk. Both the land cover map and the space-borne dataset were suitable, yet not equivalent, input sources for locating and measuring vegetated areas of importance for disease spread. The overall results point to the compatibility of the gravity model concept and the spatial spread of vector-borne diseases. PMID:23202882

  13. Using the gravity model to estimate the spatial spread of vector-borne diseases.

    PubMed

    Barrios, José Miguel; Verstraeten, Willem W; Maes, Piet; Aerts, Jean-Marie; Farifteh, Jamshid; Coppin, Pol

    2012-11-30

Gravity models are commonly used spatial interaction models. They have been widely applied in a large set of domains dealing with interactions amongst spatial entities. The spread of vector-borne diseases is also related to the intensity of interaction between spatial entities, namely, the physical habitat of pathogens’ vectors and/or hosts, and urban areas, thus humans. This study implements the concept behind gravity models in the spatial spread of two vector-borne diseases, nephropathia epidemica and Lyme borreliosis, based on current knowledge of the transmission mechanism of these diseases. Two sources of information on vegetated systems were tested: the CORINE land cover map and MODIS NDVI. The size of vegetated areas near urban centers and a local indicator of occupation-related exposure were found to be significant predictors of disease risk. Both the land cover map and the space-borne dataset were suitable, yet not equivalent, input sources for locating and measuring vegetated areas of importance for disease spread. The overall results point to the compatibility of the gravity model concept and the spatial spread of vector-borne diseases.
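
    The gravity-model concept used in these two records can be sketched in a few lines: interaction strength is proportional to the product of the "masses" of two spatial entities and decays with distance. All masses, distances, and the distance exponent below are hypothetical:

```python
def gravity_interaction(m_habitat, m_urban, distance, beta=2.0):
    """Gravity-model interaction between a vegetated patch and an urban center:
    product of their 'masses' divided by distance**beta (classic beta = 2)."""
    return m_habitat * m_urban / distance ** beta

# Risk proxy for one municipality: sum of interactions over nearby forest patches
patches = [(12.0, 3.0), (5.0, 1.5), (20.0, 8.0)]   # (patch area km^2, distance km)
town_population = 40.0                             # thousands of inhabitants (assumed)
risk = sum(gravity_interaction(area, town_population, d) for area, d in patches)
```

    Note how the small patch at 1.5 km contributes more than the large patch at 8 km: the distance decay dominates, which is the behaviour the gravity analogy is meant to capture.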

  14. Use of a "Super-child" Approach to Assess the Vitamin A Equivalence of Moringa oleifera Leaves, Develop a Compartmental Model for Vitamin A Kinetics, and Estimate Vitamin A Total Body Stores in Young Mexican Children.

    PubMed

    Lopez-Teros, Veronica; Ford, Jennifer Lynn; Green, Michael H; Tang, Guangwen; Grusak, Michael A; Quihui-Cota, Luis; Muzhingi, Tawanda; Paz-Cassini, Mariela; Astiazaran-Garcia, Humberto

    2017-12-01

Background: Worldwide, an estimated 250 million children <5 y old are vitamin A (VA) deficient. In Mexico, despite ongoing efforts to reduce VA deficiency, it remains an important public health problem; thus, food-based interventions that increase the availability and consumption of provitamin A-rich foods should be considered. Objective: The objectives were to assess the VA equivalence of ²H-labeled Moringa oleifera (MO) leaves and to estimate both total body stores (TBS) of VA and plasma retinol kinetics in young Mexican children. Methods: β-Carotene was intrinsically labeled by growing MO plants in a ²H₂O nutrient solution. Fifteen well-nourished children (17-35 mo old) consumed puréed MO leaves (1 mg β-carotene) and a reference dose of [¹³C₁₀]retinyl acetate (1 mg) in oil. Blood (2 samples/child) was collected 10 times (2 or 3 children each time) over 35 d. The bioefficacy of MO leaves was calculated from areas under the composite "super-child" plasma isotope response curves, and MO VA equivalence was estimated through the use of these values; a compartmental model was developed to predict VA TBS and retinol kinetics through the use of composite plasma [¹³C₁₀]retinol data. TBS were also estimated with isotope dilution. Results: The relative bioefficacy of β-carotene retinol activity equivalents from MO was 28%; VA equivalence was 3.3:1 by weight (0.56 μmol retinol:1 μmol β-carotene). Kinetics of plasma retinol indicate more rapid plasma appearance and turnover and more extensive recycling in these children than are observed in adults. Model-predicted mean TBS (823 μmol) was similar to values predicted using a retinol isotope dilution equation applied to data from 3 to 6 d after dosing (mean ± SD: 832 ± 176 μmol; n = 7). Conclusions: The super-child approach can be used to estimate population carotenoid bioefficacy and VA equivalence, VA status, and parameters of retinol metabolism from a composite data set.
Our results provide initial estimates of retinol kinetics in well-nourished young children with adequate VA stores and demonstrate that MO leaves may be an important source of VA. © 2017 American Society for Nutrition.

  15. Evaluation of the potential for operating carbon neutral WWTPs in China.

    PubMed

    Hao, Xiaodi; Liu, Ranbin; Huang, Xin

    2015-12-15

Carbon neutrality is starting to become a hot topic for wastewater treatment plants (WWTPs) all over the world, and carbon neutral operations have emerged in some WWTPs. Although China is still struggling to control its water pollution, carbon neutrality will definitely become a top priority for WWTPs in the near future. In this review, the potential for operating carbon neutral WWTPs in China is technically evaluated. Based on the A²/O process of a typical municipal WWTP, an evaluation model is first configured, which couples the COD/nutrient removals (mass balance) with the energy consumption/recovery (energy balance). This model is then applied to evaluate the potential of the organic (COD) energy with regard to carbon neutrality. The model's calculations reveal that anaerobic digestion of excess sludge can only provide some 50% of the total energy consumption. Water-source heat pumps (WSHP) can effectively convert the thermal energy contained in wastewater to heat WWTPs and neighbourhood buildings, supplying a net electrical equivalency of 0.26 kWh when 1 m³ of the effluent is cooled down by 1 °C. Photovoltaic (PV) technology can generate a limited amount of electricity, barely 10% of the total energy consumption. Moreover, the complexity of installing solar panels on top of tanks makes PV technology almost not worth the effort. Overall, therefore, organic and thermal energy sources can effectively supply enough electrical equivalency for China to approach its target with regard to carbon neutral operations. Copyright © 2015 Elsevier Ltd. All rights reserved.
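
    The quoted 0.26 kWh per m³ per °C figure is consistent with a simple heat-pump energy balance: the heat extracted from the effluent, expressed as the electricity a heat pump would need to deliver it. The COP below is an assumed value chosen to reproduce the figure, not a number from the paper:

```python
RHO_C_WATER = 4.19e6     # volumetric heat capacity of water, J/(m^3*K)
J_PER_KWH = 3.6e6        # joules per kilowatt-hour

def net_electric_equiv_kwh(volume_m3, delta_t_k, cop=4.5):
    """Electrical equivalent of heat recovered by a water-source heat pump:
    thermal energy extracted from the effluent divided by the heating COP
    (COP = 4.5 is an assumed illustrative value)."""
    q_thermal_kwh = RHO_C_WATER * volume_m3 * delta_t_k / J_PER_KWH  # ~1.16 kWh
    return q_thermal_kwh / cop

equiv = net_electric_equiv_kwh(1.0, 1.0)   # ~0.26 kWh per m^3 per degC
```

    Cooling 1 m³ of effluent by 1 °C releases about 1.16 kWh of heat; with a COP near 4.5 that heat displaces roughly 0.26 kWh of electricity, matching the value cited above.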

  16. Polymer gel water equivalence and relative energy response with emphasis on low photon energy dosimetry in brachytherapy

    NASA Astrophysics Data System (ADS)

    Pantelis, E.; Karlis, A. K.; Kozicki, M.; Papagiannis, P.; Sakelliou, L.; Rosiak, J. M.

    2004-08-01

    The water equivalence and stable relative energy response of polymer gel dosimeters are usually taken for granted in the relatively high x-ray energy range of external beam radiotherapy based on qualitative indices such as mass and electron density and effective atomic number. However, these favourable dosimetric characteristics are questionable in the energy range of interest to brachytherapy especially in the case of lower energy photon sources such as 103Pd and 125I that are currently utilized. In this work, six representative polymer gel formulations as well as the most commonly used experimental set-up of a LiF TLD detector-solid water phantom are discussed on the basis of mass attenuation and energy absorption coefficients calculated in the energy range of 10 keV-10 MeV with regard to their water equivalence as a phantom and detector material. The discussion is also supported by Monte Carlo simulation results. It is found that water equivalence of polymer gel dosimeters is sustained for photon energies down to about 60 keV and no corrections are needed for polymer gel dosimetry of 169Yb or 192Ir sources. For 125I and 103Pd sources, however, a correction that is source-distance dependent is required. Appropriate Monte Carlo results show that at the dosimetric reference distance of 1 cm from a source, these corrections are of the order of 3% for 125I and 2% for 103Pd. These have to be compared with corresponding corrections of up to 35% for 125I and 103Pd and up to 15% even for the 169Yb energies for the experimental set-up of the LiF TLD detector-solid water phantom.

  17. Polymer gel water equivalence and relative energy response with emphasis on low photon energy dosimetry in brachytherapy.

    PubMed

    Pantelis, E; Karlis, A K; Kozicki, M; Papagiannis, P; Sakelliou, L; Rosiak, J M

    2004-08-07

    The water equivalence and stable relative energy response of polymer gel dosimeters are usually taken for granted in the relatively high x-ray energy range of external beam radiotherapy based on qualitative indices such as mass and electron density and effective atomic number. However, these favourable dosimetric characteristics are questionable in the energy range of interest to brachytherapy especially in the case of lower energy photon sources such as 103Pd and 125I that are currently utilized. In this work, six representative polymer gel formulations as well as the most commonly used experimental set-up of a LiF TLD detector-solid water phantom are discussed on the basis of mass attenuation and energy absorption coefficients calculated in the energy range of 10 keV-10 MeV with regard to their water equivalence as a phantom and detector material. The discussion is also supported by Monte Carlo simulation results. It is found that water equivalence of polymer gel dosimeters is sustained for photon energies down to about 60 keV and no corrections are needed for polymer gel dosimetry of 169Yb or 192Ir sources. For 125I and 103Pd sources, however, a correction that is source-distance dependent is required. Appropriate Monte Carlo results show that at the dosimetric reference distance of 1 cm from a source, these corrections are of the order of 3% for 125I and 2% for 103Pd. These have to be compared with corresponding corrections of up to 35% for 125I and 103Pd and up to 15% even for the 169Yb energies for the experimental set-up of the LiF TLD detector-solid water phantom.

  18. PSD Equivalency of Proposed Model Rule for California

    EPA Pesticide Factsheets

    This document may be of assistance in applying the New Source Review (NSR) air permitting regulations including the Prevention of Significant Deterioration (PSD) requirements. This document is part of the NSR Policy and Guidance Database. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.

  19. A novel concept for CT with fixed anodes (FACT): Medical imaging based on the feasibility of thermal load capacity.

    PubMed

    Kellermeier, Markus; Bert, Christoph; Müller, Reinhold G

    2015-07-01

    Focussing primarily on thermal load capacity, we describe the performance of a novel fixed anode CT (FACT) compared with a 100 kW reference CT. Being a fixed system, FACT has no focal spot blurring of the X-ray source during projection. Monte Carlo and finite element methods were used to determine the fluence proportional to thermal capacity. Studies of repeated short-time exposures showed that FACT could operate in pulsed mode for an unlimited period. A virtual model for FACT was constructed to analyse various temporal sequences for the X-ray source ring, representing a circular array of 1160 fixed anodes in the gantry. Assuming similar detector properties at a very small integration time, image quality was investigated using an image reconstruction library. Our model showed that approximately 60 gantry rounds per second, i.e. 60 sequential targetings of the 1160 anodes per second, were required to achieve a performance level equivalent to that of the reference CT (relative performance, RP = 1) at equivalent image quality. The optimal projection duration in each direction was about 10 μs. With a beam pause of 1 μs between projections, 78.4 gantry rounds per second with consecutive source activity were thermally possible at a given thermal focal spot. The settings allowed for a 1.3-fold (RP = 1.3) shorter scan time than conventional CT while maintaining radiation exposure and image quality. Based on the high number of rounds, FACT supports a high image frame rate at low doses, which would be beneficial in a wide range of diagnostic and technical applications. Copyright © 2015 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  20. Modelling and Characterization of Effective Thermal Conductivity of Single Hollow Glass Microsphere and Its Powder.

    PubMed

    Liu, Bing; Wang, Hui; Qin, Qing-Hua

    2018-01-14

A tiny hollow glass microsphere (HGM) can be applied in designing new lightweight and thermally insulating composites as a high-strength core, owing to its hollow structure. However, little work has been done to study its own overall thermal conductivity independent of any matrix, which generally cannot be measured or evaluated directly. In this study, the overall thermal conductivity of the HGM is investigated experimentally and numerically. The experimental investigation of the thermal conductivity of HGM powder is performed with the transient plane source (TPS) technique to provide a reference for the numerical results, which are obtained by a newly developed three-dimensional two-step hierarchical computational method. In the present method, three heterogeneous HGM stacking elements representing different distributions of HGMs in the powder are assumed. Each stacking element and its equivalent homogeneous solid counterpart are, respectively, embedded into a fictitious matrix material as fillers to form two equivalent composite systems at different levels, and the overall thermal conductivity of each stacking element can then be numerically determined through the equivalence of the two systems. The comparison of experimental and computational results indicates that the present computational modeling can effectively predict the overall thermal conductivity of a single HGM and its powder in a flexible way. Note also that the influence of thermal interfacial resistance cannot be removed from the experimental results in the TPS measurement.

  1. PSPICE controlled-source models of analogous circuit for Langevin type piezoelectric transducer

    NASA Astrophysics Data System (ADS)

    Chen, Yeongchin; Wu, Menqjiun; Liu, Weikuo

    2007-02-01

    The design and construction of wide-band and high-efficiency acoustical projectors has long been considered an art beyond the capabilities of many smaller groups. Langevin type piezoelectric transducers have long been the leading candidates for sonar array systems used in underwater communication. The transducers are fabricated by bolting a head mass and a tail mass onto both ends of a stack of piezoelectric ceramic, so as to satisfy multiple, conflicting design requirements for high-power transmitting capability. The aim of this research is to study the characteristics of Langevin type piezoelectric transducers that depend on different metal loadings. First, the Mason equivalent circuit is used to model the segmented piezoelectric ceramic; then, the impedance network of the tail and head masses is deduced from Newton's laws of motion. To obtain the optimal solution to a specific design formulation, PSPICE controlled-source programming techniques can be applied. A valid example of the application of PSPICE models for Langevin type transducer analysis is presented, and the simulation results are in good agreement with the experimental measurements.

  2. Thermal Damage Analysis in Biological Tissues Under Optical Irradiation: Application to the Skin

    NASA Astrophysics Data System (ADS)

    Fanjul-Vélez, Félix; Ortega-Quijano, Noé; Solana-Quirós, José Ramón; Arce-Diego, José Luis

    2009-07-01

    The use of optical sources in medical praxis is increasing nowadays. In this study, different approaches using thermo-optical principles that allow us to predict thermal damage in irradiated tissues are analyzed. Optical propagation is studied by means of the radiation transport theory (RTT) equation, solved via a Monte Carlo analysis. Data obtained are included in a bio-heat equation, solved via a numerical finite difference approach. Optothermal properties are considered for the model to be accurate and reliable. Thermal distribution is calculated as a function of optical source parameters, mainly optical irradiance, wavelength and exposition time. Two thermal damage models, the cumulative equivalent minutes (CEM) 43 °C approach and the Arrhenius analysis, are used. The former is appropriate when dealing with dosimetry considerations at constant temperature. The latter is adequate to predict thermal damage with arbitrary temperature time dependence. Both models are applied and compared for the particular application of skin thermotherapy irradiation.
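Both damage metrics named above have standard closed forms. A minimal sketch for a sampled temperature history, assuming the commonly quoted Henriques skin parameters for the Arrhenius integral and the usual R values for CEM43:

```python
import math

def cem43(temps_c, dt):
    """Cumulative equivalent minutes at 43 degC (dt in seconds)."""
    total = 0.0
    for T in temps_c:
        R = 0.5 if T >= 43.0 else 0.25   # common choice of R above/below 43 degC
        total += (dt / 60.0) * R ** (43.0 - T)
    return total

def arrhenius_omega(temps_c, dt, A=3.1e98, Ea=6.28e5):
    """Arrhenius damage integral; A (1/s) and Ea (J/mol) are Henriques' skin values."""
    R_gas = 8.314  # J/(mol K)
    return sum(A * math.exp(-Ea / (R_gas * (T + 273.15))) * dt for T in temps_c)

# 10 minutes at a constant 45 degC, sampled once per second:
history = [45.0] * 600
print(cem43(history, 1.0))      # 40 equivalent minutes (R=0.5, 2 degC above 43)
print(arrhenius_omega(history, 1.0))
```

As the abstract notes, CEM43 is most natural for constant-temperature dosimetry, while the Arrhenius integral handles arbitrary temperature histories.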

  3. A time reversal algorithm in acoustic media with Dirac measure approximations

    NASA Astrophysics Data System (ADS)

    Bretin, Élie; Lucas, Carine; Privat, Yannick

    2018-04-01

    This article is devoted to the study of a photoacoustic tomography model, in which one considers the solution of the acoustic wave equation with a source term written as a separated-variables function in time and space, whose temporal component is in some sense close to the derivative of the Dirac distribution at t = 0. This models a continuous wave laser illumination performed during a short interval of time. We introduce an algorithm for reconstructing the space component of the source term from the measure of the solution recorded by sensors during a time T all along the boundary of a connected bounded domain. It is based on the introduction of an auxiliary equivalent Cauchy problem, which allows us to derive an explicit reconstruction formula, followed by the use of a deconvolution procedure. Numerical simulations illustrate our approach. Finally, this algorithm is also extended to elastic wave systems.

  4. Safety and control of accelerator-driven subcritical systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rief, H.; Takahashi, H.

    1995-10-01

    To study the control and safety of accelerator driven nuclear systems, a one-point kinetic model was developed and programmed. It deals with fast transients as a function of reactivity insertion, Doppler feedback, and the intensity of an external neutron source. The model allows for a simultaneous calculation of an equivalent critical reactor. It was validated by a comparison with a benchmark specified by the Nuclear Energy Agency Committee of Reactor Physics. Additional features are the possibility of inserting a linear or quadratic time-dependent reactivity ramp, which may account for gravity-induced accidents like earthquakes; the possibility of shutting down the external neutron source by an exponential decay law of the form exp(-t/τ); and a graphical display of the power and reactivity changes. The calculations revealed that such boosters behave quite benignly even if they are only slightly subcritical.
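The model described is essentially point kinetics driven by an external source. A minimal one-delayed-group sketch (illustrative parameter values, not the validated benchmark code) shows the benign source-driven steady state n = -S·Λ/ρ for a subcritical core:

```python
# One-point kinetics with one delayed-neutron group and a constant external
# source S, integrated with forward Euler (illustrative parameters).
beta, Lam, lam = 0.0065, 1.0e-3, 0.08   # delayed fraction, generation time (s), decay const (1/s)
rho, S = -0.005, 1.0e5                  # reactivity (subcritical) and source strength

n, C = 0.0, 0.0                         # neutron level and precursor concentration
dt = 1.0e-3
for _ in range(300_000):                # 300 s of simulated time
    dn = ((rho - beta) / Lam * n + lam * C + S) * dt
    dC = (beta / Lam * n - lam * C) * dt
    n, C = n + dn, C + dC

# The level settles at n = -S*Lam/rho (here 2.0e4), not at a divergent power:
print(n)
```

The bounded steady state, inversely proportional to the subcriticality, is the quantitative sense in which such source-driven boosters "behave quite benignly."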

  5. Research on the equivalent circuit model of a circular flexural-vibration-mode piezoelectric transformer with moderate thickness.

    PubMed

    Huang, Yihua; Huang, Wenjin; Wang, Qinglei; Su, Xujian

    2013-07-01

    The equivalent circuit model of a piezoelectric transformer is useful in designing and optimizing the related driving circuits. Based on previous work, an equivalent circuit model for a circular flexural-vibration-mode piezoelectric transformer with moderate thickness is proposed and validated by finite element analysis. The input impedance, voltage gain, and efficiency of the transformer are determined through computation. The basic behaviors of the transformer are shown by numerical results.

  6. A multi-scalar PDF approach for LES of turbulent spray combustion

    NASA Astrophysics Data System (ADS)

    Raman, Venkat; Heye, Colin

    2011-11-01

    A comprehensive joint-scalar probability density function (PDF) approach is proposed for large eddy simulation (LES) of turbulent spray combustion and tests are conducted to analyze the validity and modeling requirements. The PDF method has the advantage that the chemical source term appears closed but requires models for the small scale mixing process. A stable and consistent numerical algorithm for the LES/PDF approach is presented. To understand the modeling issues in the PDF method, direct numerical simulation of a spray flame at three different fuel droplet Stokes numbers and an equivalent gaseous flame are carried out. Assumptions in closing the subfilter conditional diffusion term in the filtered PDF transport equation are evaluated for various model forms. In addition, the validity of evaporation rate models in high Stokes number flows is analyzed.

  7. An empirical model for inverted-velocity-profile jet noise prediction

    NASA Technical Reports Server (NTRS)

    Stone, J. R.

    1977-01-01

    An empirical model for predicting the noise from inverted-velocity-profile coaxial or coannular jets is presented and compared with small-scale static and simulated flight data. The model considers the combined contributions of as many as four uncorrelated constituent sources: the premerged-jet/ambient mixing region, the merged-jet/ambient mixing region, outer-stream shock/turbulence interaction, and inner-stream shock/turbulence interaction. The noise from the merged region occurs at relatively low frequency and is modeled as the contribution of a circular jet at merged conditions and total exhaust area, with the high frequencies attenuated. The noise from the premerged region occurs at high frequency and is modeled as the contribution of an equivalent plug nozzle at outer stream conditions, with the low frequencies attenuated.
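Because the four constituent sources are taken as uncorrelated, their contributions add on a mean-square (energy) basis rather than in amplitude. A one-line sketch of that combination step, with made-up band levels:

```python
import math

def total_spl(levels_db):
    """Sum uncorrelated source levels: mean-square pressures add, so convert
    each SPL to energy, sum, and convert back to decibels."""
    return 10.0 * math.log10(sum(10.0 ** (L / 10.0) for L in levels_db))

# e.g. premerged, merged, and the two shock-noise contributions in one band
print(total_spl([95.0, 92.0, 88.0, 88.0]))
```

A familiar special case: two equal uncorrelated sources raise the level by 10·log10(2) ≈ 3 dB, not 6 dB.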

  8. Simulation of scattered fields: Some guidelines for the equivalent source method

    NASA Astrophysics Data System (ADS)

    Gounot, Yves J. R.; Musafir, Ricardo E.

    2011-07-01

    Three different approaches to the equivalent source method for simulating scattered fields are compared: two of them deal with monopole sets, the other with multipole expansions. In the first monopole approach, the sources have fixed positions given by specific rules, while in the second one (ESGA), the optimal positions are determined via a genetic algorithm. The 'pros and cons' of each of these approaches are discussed with the aim of providing practical guidelines for the user. It is shown that while both monopole techniques furnish quite good pressure field reconstructions with simple source arrangements, ESGA requires a significantly smaller number of monopoles and, for an equal number of sources, yields better precision. As for the multipole technique, the main advantage is that in principle any precision can be reached, provided the source order is sufficiently high. On the other hand, the results point out that the lack of rules for determining the proper multipole order necessary for a desired precision may constitute a handicap for the user.

  9. Equivalent model construction for a non-linear dynamic system based on an element-wise stiffness evaluation procedure and reduced analysis of the equivalent system

    NASA Astrophysics Data System (ADS)

    Kim, Euiyoung; Cho, Maenghyo

    2017-11-01

    In most non-linear analyses, the construction of a system matrix uses a large amount of computation time, comparable to the computation time required by the solving process. If the process for computing non-linear internal force matrices is substituted with an effective equivalent model that enables the bypass of numerical integrations and assembly processes used in matrix construction, efficiency can be greatly enhanced. A stiffness evaluation procedure (STEP) establishes non-linear internal force models using polynomial formulations of displacements. To efficiently identify an equivalent model, the method has evolved such that it is based on a reduced-order system. The reduction process, however, makes the equivalent model difficult to parameterize, which significantly affects the efficiency of the optimization process. In this paper, therefore, a new STEP, E-STEP, is proposed. Based on the element-wise nature of the finite element model, the stiffness evaluation is carried out element-by-element in the full domain. Since the unit of computation for the stiffness evaluation is restricted by element size, and since the computation is independent, the equivalent model can be constructed efficiently in parallel, even in the full domain. Due to the element-wise nature of the construction procedure, the equivalent E-STEP model is easily characterized by design parameters. Various reduced-order modeling techniques can be applied to the equivalent system in a manner similar to how they are applied in the original system. The reduced-order model based on E-STEP is successfully demonstrated for the dynamic analyses of non-linear structural finite element systems under varying design parameters.
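The core idea of STEP, fitting polynomial internal-force models to sampled displacement data, can be sketched for a single degree of freedom (illustrative coefficients, not E-STEP's element-wise machinery):

```python
import numpy as np

# Identify f(u) = k1*u + k2*u**2 + k3*u**3 from displacement/force samples
# via linear least squares in the unknown stiffness coefficients.
rng = np.random.default_rng(1)
k_true = np.array([1.0e4, -3.0e5, 2.0e7])          # assumed "exact" stiffnesses

u = rng.uniform(-0.01, 0.01, size=50)              # prescribed displacements
f = k_true[0] * u + k_true[1] * u**2 + k_true[2] * u**3  # evaluated internal forces

A = np.column_stack([u, u**2, u**3])               # the model is linear in k
k_fit, *_ = np.linalg.lstsq(A, f, rcond=None)
print(k_fit)    # recovers k_true (the data are noise-free by construction)
```

E-STEP performs this kind of identification element-by-element over the full domain, which is what makes the fitted model parallelizable and easy to parameterize by design variables.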

  10. Solution of the comoving-frame equation of transfer in spherically symmetric flows. V - Multilevel atoms. [in early star atmospheres

    NASA Technical Reports Server (NTRS)

    Mihalas, D.; Kunasz, P. B.

    1978-01-01

    The coupled radiative transfer and statistical equilibrium equations for multilevel ionic structures in the atmospheres of early-type stars are solved. Both lines and continua are treated consistently; the treatment is applicable throughout a transonic wind, and allows for the presence of background continuum sources and sinks in the transfer. An equivalent-two-level-atoms approach provides the solution for the equations. Calculations for simplified He (+)-like model atoms in parameterized isothermal wind models indicate that subordinate line profiles are sensitive to the assumed mass-loss rate, and to the assumed structure of the velocity law in the atmospheres.

  11. Revisiting the social cost of carbon

    PubMed Central

    Nordhaus, William D.

    2017-01-01

    The social cost of carbon (SCC) is a central concept for understanding and implementing climate change policies. This term represents the economic cost caused by an additional ton of carbon dioxide emissions or its equivalent. The present study presents updated estimates based on a revised DICE model (Dynamic Integrated model of Climate and the Economy). The study estimates that the SCC is $31 per ton of CO2 in 2010 US$ for the current period (2015). For the central case, the real SCC grows at 3% per year over the period to 2050. The paper also compares the estimates with those from other sources. PMID:28143934
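The central-case growth rule implies straightforward compound growth from the 2015 value. An illustrative projection (the actual DICE model outputs differ in detail):

```python
# Compound growth of the central-case SCC estimate: $31/tCO2 in 2015,
# growing at 3% per year in real terms through 2050.
scc_2015 = 31.0          # $/tCO2, in 2010 US$
growth = 0.03

scc = {year: scc_2015 * (1 + growth) ** (year - 2015) for year in (2015, 2030, 2050)}
print(scc)   # the 2050 value is roughly $87/tCO2 under these assumptions
```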

  12. Nature of size effects in compact models of field effect transistors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Torkhov, N. A., E-mail: trkf@mail.ru; Scientific-Research Institute of Semiconductor Devices, Tomsk 634050; Tomsk State University of Control Systems and Radioelectronics, Tomsk 634050

    Investigations have shown that in the local approximation (for sizes L < 100 μm), AlGaN/GaN high electron mobility transistor (HEMT) structures satisfy all the properties of chaotic systems and can be described in the language of fractal geometry of fractional dimensions. For such objects, the values of their electrophysical characteristics depend on the linear sizes of the examined regions, which explains the presence of the so-called size effects: dependences of the electrophysical and instrumental characteristics on the linear sizes of the active elements of semiconductor devices. In the present work, a relationship has been established for the linear model parameters of the equivalent circuit elements of internal transistors with fractal geometry of the heteroepitaxial structure, manifested through a dependence of its relative electrophysical characteristics on the linear sizes of the examined surface areas. For the HEMTs, this implies dependences of their relative static (A/mm, mA/V/mm, Ω/mm, etc.) and microwave characteristics (W/mm) on the width d of the drain-source channel and on the number of sections n, which leads to a nonlinear dependence of the retrieved parameter values of equivalent circuit elements of linear internal transistor models on n and d. Thus, it has been demonstrated that size effects in semiconductors determined by the fractal geometry must be taken into account when investigating the properties of semiconductor objects on levels below the local approximation limit and when designing and manufacturing field effect transistors. In general, the suggested approach allows a complex of problems to be solved on designing, optimizing, and retrieving the parameters of equivalent circuits of linear and nonlinear models of not only field effect transistors but also any arbitrary semiconductor devices with nonlinear instrumental characteristics.

  13. Visual signal detection in structured backgrounds. II. Effects of contrast gain control, background variations, and white noise

    NASA Technical Reports Server (NTRS)

    Eckstein, M. P.; Ahumada, A. J. Jr; Watson, A. B.

    1997-01-01

    Studies of visual detection of a signal superimposed on one of two identical backgrounds show performance degradation when the background has high contrast and is similar in spatial frequency and/or orientation to the signal. To account for this finding, models include a contrast gain control mechanism that pools activity across spatial frequency, orientation and space to inhibit (divisively) the response of the receptor sensitive to the signal. In tasks in which the observer has to detect a known signal added to one of M different backgrounds in the presence of added visual noise, the main sources of degradation are the stochastic noise in the image and the suboptimal visual processing. We investigate how these two sources of degradation (contrast gain control and variations in the background) interact in a task in which the signal is embedded in one of M locations in a complex spatially varying background (structured background). We use backgrounds extracted from patient digital medical images. To isolate effects of the fixed deterministic background (the contrast gain control) from the effects of the background variations, we conduct detection experiments with three different background conditions: (1) uniform background, (2) a repeated sample of structured background, and (3) different samples of structured background. Results show that human visual detection degrades from the uniform background condition to the repeated background condition and degrades even further in the different backgrounds condition. These results suggest that both the contrast gain control mechanism and the background random variations degrade human performance in detection of a signal in a complex, spatially varying background. A filter model and added white noise are used to generate estimates of sampling efficiencies, an equivalent internal noise, an equivalent contrast-gain-control-induced noise, and an equivalent noise due to the variations in the structured background.

  14. The broad-band x ray spectral variability of Mkn 841

    NASA Technical Reports Server (NTRS)

    George, I. M.; Nandra, K.; Fabian, A. C.; Turner, T. J.; Done, C.; Day, C. S. R.

    1992-01-01

    The results of a detailed spectral analysis of four X-ray observations of the luminous Seyfert 1.5 galaxy Mkn 841 performed using the EXOSAT and Ginga satellites over the period June 1984 to July 1990 are reported. Preliminary results from a short ROSAT PSPC observation of Mkn 841 in July 1990 are also presented. Variability is apparent in both the soft (0.1-1.0 keV) and medium (1-20 keV) energy bands. Above 1 keV, the spectra are adequately modelled by a power-law with a strong emission line of equivalent width approximately 450 eV. The energy of the line (approximately 6.4 keV) is indicative of K-shell fluorescence from neutral iron, leading to the interpretation that the line arises via X-ray illumination of cold material surrounding the source. In addition to the flux variability, the continuum shape also changes in a dramatic fashion, with variations in the apparent photon index ΔΓ ≈ 0.6. The large equivalent width of the emission line clearly indicates a strongly enhanced reflection component in the source, compared to other Seyferts observed with Ginga. The spectral changes are interpreted in terms of a variable power-law continuum superimposed on a flatter reflection component. For one Ginga observation, the reflected flux appears to dominate the medium energy X-ray emission, resulting in an unusually flat slope (Γ ≈ 1.0). The soft X-ray excess is found to be highly variable by a factor of approximately 10. These variations are not correlated with the hard flux, but it seems likely that the soft component arises via reprocessing of the hard X-rays. We find no evidence for intrinsic absorption, with the equivalent hydrogen column density constrained to be ≤ a few × 10^20 cm^-2. The implications of these results for physical models for the emission regions in this and other X-ray bright Seyferts are briefly discussed.

  15. First steps of integrated spatial modeling of titanium, zirconium, and rare earth element resources within the Coastal Plain sediments of the southeastern United States

    USGS Publications Warehouse

    Ellefsen, Karl J.; Van Gosen, Bradley S.; Fey, David L.; Budahn, James R.; Smith, Steven M.; Shah, Anjana K.

    2015-01-01

    The Coastal Plain of the southeastern United States has extensive, unconsolidated sedimentary deposits that are enriched in heavy minerals containing titanium, zirconium, and rare earth element resources. Areas favorable for exploration and development of these resources are being identified by geochemical data, which are supplemented with geological, geophysical, hydrological, and geographical data. The first steps of this analysis have been completed. The concentrations of lanthanum, yttrium, and titanium tend to decrease as distance from the Piedmont (which is the likely source of these resources) increases and are moderately correlated with airborne measurements of equivalent thorium concentration. The concentrations of lanthanum, yttrium, and titanium are relatively high in those watersheds that adjoin the Piedmont, south of the Cape Fear Arch. Although this relation suggests that the concentrations are related to the watersheds, it may be simply an independent regional trend. The concentration of zirconium is unrelated to the distance from the Piedmont, the equivalent thorium concentration, and the watershed. These findings establish a foundation for more sophisticated analyses using integrated spatial modeling.

  16. Modelling of aircrew radiation exposure from galactic cosmic rays and solar particle events.

    PubMed

    Takada, M; Lewis, B J; Boudreau, M; Al Anid, H; Bennett, L G I

    2007-01-01

    Correlations have been developed for implementation into the semi-empirical Predictive Code for Aircrew Radiation Exposure (PCAIRE) to account for effects of extremum conditions of solar modulation and low altitude based on transport code calculations. An improved solar modulation model, as proposed by NASA, has been further adopted to interpolate between the bounding correlations for solar modulation. The conversion ratio of effective dose to ambient dose equivalent, as applied to the PCAIRE calculation (based on measurements) for the legal regulation of aircrew exposure, was re-evaluated in this work to take into consideration new ICRP-92 radiation-weighting factors and different possible irradiation geometries of the source cosmic-radiation field. A computational analysis with Monte Carlo N-Particle eXtended Code was further used to estimate additional aircrew exposure that may result from sporadic solar energetic particle events considering real-time monitoring by the Geosynchronous Operational Environmental Satellite. These predictions were compared with the ambient dose equivalent rates measured on-board an aircraft and to count rate data observed at various ground-level neutron monitors.

  17. Research on the equivalence between digital core and rock physics models

    NASA Astrophysics Data System (ADS)

    Yin, Xingyao; Zheng, Ying; Zong, Zhaoyun

    2017-06-01

    In this paper, we calculate the elastic modulus of 3D digital cores using the finite element method, systematically study the equivalence between the digital core model and various rock physics models, and carefully analyze the conditions of the equivalence relationships. The influences of the pore aspect ratio and consolidation coefficient on the equivalence relationships are also further refined. Theoretical analysis indicates that the finite element simulation based on the digital core is equivalent to the boundary theory and Gassmann model. For pure sandstones, effective medium theory models (SCA and DEM) and the digital core models are equivalent in cases when the pore aspect ratio is within a certain range, and dry frame models (Nur and Pride model) and the digital core model are equivalent in cases when the consolidation coefficient is a specific value. According to the equivalence relationships, the comparison of the elastic modulus results of the effective medium theory and digital rock physics is an effective approach for predicting the pore aspect ratio. Furthermore, the traditional digital core models with two components (pores and matrix) are extended to multiple minerals to more precisely characterize the features and mineral compositions of rocks in underground reservoirs. This paper studies the effects of shale content on the elastic modulus in shaly sandstones. When structural shale is present in the sandstone, the elastic moduli of the digital cores are in reasonable agreement with the DEM model. However, when dispersed shale is present in the sandstone, the Hill model cannot describe the changes in the stiffness of the pore space precisely. Digital rock physics describes the rock features such as pore aspect ratio, consolidation coefficient and rock stiffness. Therefore, digital core technology can, to some extent, replace the theoretical rock physics models because the results are more accurate than those of the theoretical models.
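The Gassmann model referred to above has a standard closed form for the saturated bulk modulus. A minimal sketch with illustrative quartz/water values:

```python
def gassmann_ksat(k_dry, k_min, k_fl, phi):
    """Saturated bulk modulus from the Gassmann relation (all moduli in GPa)."""
    num = (1.0 - k_dry / k_min) ** 2
    den = phi / k_fl + (1.0 - phi) / k_min - k_dry / k_min**2
    return k_dry + num / den

# Water-saturated quartz sandstone at 20% porosity (illustrative values)
k_sat = gassmann_ksat(k_dry=10.0, k_min=37.0, k_fl=2.25, phi=0.2)
print(k_sat)   # fluid stiffens the rock: k_dry < k_sat < k_min
```

The shear modulus is unchanged by fluid substitution in Gassmann's theory, which is why only the bulk modulus appears here.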

  18. The importance of being equivalent: Newton's two models of one-body motion

    NASA Astrophysics Data System (ADS)

    Pourciau, Bruce

    2004-05-01

    As an undergraduate at Cambridge, Newton entered into his "Waste Book" an assumption that we have named the Equivalence Assumption (The Younger): "If a body move progressively in some crooked line [about a center of motion] ..., [then this] crooked line may bee conceived to consist of an infinite number of streight lines. Or else in any point of the croked line the motion may bee conceived to be on in the tangent". In this assumption, Newton somewhat imprecisely describes two mathematical models, a "polygonal limit model" and a "tangent deflected model", for "one-body motion", that is, for the motion of a "body in orbit about a fixed center", and then claims that these two models are equivalent. In the first part of this paper, we study the Principia to determine how the elder Newton would more carefully describe the polygonal limit and tangent deflected models. From these more careful descriptions, we then create Equivalence Assumption (The Elder), a precise interpretation of Equivalence Assumption (The Younger) as it might have been restated by Newton, after say 1687. We then review certain portions of the Waste Book and the Principia to make the case that, although Newton never restates nor even alludes to the Equivalence Assumption after his youthful Waste Book entry, still the polygonal limit and tangent deflected models, as well as an unspoken belief in their equivalence, infuse Newton's work on orbital motion. In particular, we show that the persuasiveness of the argument for the Area Property in Proposition 1 of the Principia depends crucially on the validity of Equivalence Assumption (The Elder). After this case is made, we present the mathematical analysis required to establish the validity of the Equivalence Assumption (The Elder). 
Finally, to illustrate the fundamental nature of the resulting theorem, the Equivalence Theorem as we call it, we present three significant applications: we use the Equivalence Theorem first to clarify and resolve questions related to Leibniz's "polygonal model" of one-body motion; then to repair Newton's argument for the Area Property in Proposition 1; and finally to clarify and resolve questions related to the transition from impulsive to continuous forces in "De motu" and the Principia.

  19. Solving transient acoustic boundary value problems with equivalent sources using a lumped parameter approach.

    PubMed

    Fahnline, John B

    2016-12-01

    An equivalent source method is developed for solving transient acoustic boundary value problems. The method assumes the boundary surface is discretized in terms of triangular or quadrilateral elements and that the solution is represented using the acoustic fields of discrete sources placed at the element centers. Also, the boundary condition is assumed to be specified for the normal component of the surface velocity as a function of time, and the source amplitudes are determined to match the known elemental volume velocity vector at a series of discrete time steps. Equations are given for marching-on-in-time schemes to solve for the source amplitudes at each time step for simple, dipole, and tripole source formulations. Several example problems are solved to illustrate the results and to validate the formulations, including problems with closed boundary surfaces where long-time numerical instabilities typically occur. A simple relationship between the simple and dipole source amplitudes in the tripole source formulation is derived so that the source radiates primarily in the direction of the outward surface normal. The tripole source formulation is shown to eliminate interior acoustic resonances and long-time numerical instabilities.

  20. Source-receptor matrix calculation with a backward mode

    NASA Astrophysics Data System (ADS)

    Seibert, P.; Frank, A.

    2003-08-01

    The possibility to calculate linear source-receptor relationships for the transport of atmospheric trace substances with a Lagrangian particle dispersion model (LPDM) running in backward mode is shown and presented with many tests and examples. The derivation includes the action of sources and of any first-order processes (transformation with prescribed rates, dry and wet deposition, radioactive decay, ...). The backward mode is computationally advantageous if the number of receptors is less than the number of sources considered. The combination of an LPDM with the backward (adjoint) methodology is especially attractive for the application to point measurements, which can be handled without artificial numerical diffusion. Practical hints are provided for source-receptor calculations with different settings, both in forward and backward mode. The equivalence of forward and backward calculations is shown in simple tests for release and sampling of particles, pure wet deposition, pure convective redistribution and realistic transport over a short distance. Furthermore, an application example explaining measurements of Cs-137 in Stockholm as transport from areas contaminated heavily in the Chernobyl disaster is included.
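The linearity at the heart of the argument can be illustrated with a toy source-receptor matrix: a forward run fills one column of M per source, a backward (adjoint) run fills one row per receptor, and both yield identical receptor values. The dispersion model itself is replaced here by a random matrix:

```python
import numpy as np

# With linear transport, receptor values are c = M @ s for source strengths s.
rng = np.random.default_rng(2)
M = rng.random((3, 8))          # 3 receptors, 8 sources (stand-in for the LPDM)
s = rng.random(8)               # source strengths

c_forward = M @ s                                          # one run per source
c_backward = np.array([M.T[:, r] @ s for r in range(3)])   # one adjoint run per receptor
print(c_forward, c_backward)
```

With 3 receptors and 8 sources, the backward mode needs fewer model runs, which is exactly the computational advantage the abstract describes.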

  1. External dose-rate conversion factors of radionuclides for air submersion, ground surface contamination and water immersion based on the new ICRP dosimetric setting.

    PubMed

    Yoo, Song Jae; Jang, Han-Ki; Lee, Jai-Ki; Noh, Siwan; Cho, Gyuseong

    2013-01-01

    For the assessment of external doses due to contaminated environment, the dose-rate conversion factors (DCFs) prescribed in Federal Guidance Report 12 (FGR 12) and FGR 13 have been widely used. Recently, there were significant changes in dosimetric models and parameters, which include the use of the Reference Male and Female Phantoms and the revised tissue weighting factors, as well as the updated decay data of radionuclides. In this study, the DCFs for effective and equivalent doses were calculated for three exposure settings: skyshine, groundshine and water immersion. Doses to the Reference Phantoms were calculated by Monte Carlo simulations with the MCNPX 2.7.0 radiation transport code for 26 mono-energy photons between 0.01 and 10 MeV. The transport calculations were performed for the source volume within the cut-off distances practically contributing to the dose rates, which were determined by a simplified calculation model. For small tissues for which the reduction of variances are difficult, the equivalent dose ratios to a larger tissue (with lower statistical errors) nearby were employed to make the calculation efficient. Empirical response functions relating photon energies, and the organ equivalent doses or the effective doses were then derived by the use of cubic-spline fitting of the resulting doses for 26 energy points. The DCFs for all radionuclides considered important were evaluated by combining the photon emission data of the radionuclide and the empirical response functions. Finally, contributions of accompanied beta particles to the skin equivalent doses and the effective doses were calculated separately and added to the DCFs. For radionuclides considered in this study, the new DCFs for the three exposure settings were within ±10 % when compared with DCFs in FGR 13.
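The final combination step, weighting an interpolated mono-energy response function by a nuclide's photon emission data, can be sketched as follows. Linear interpolation stands in for the paper's cubic-spline fit, the response values are made up, and only the Cs-137 line data (0.662 MeV, 85.1% emission probability) are standard:

```python
import numpy as np

E_grid = np.array([0.05, 0.1, 0.3, 0.662, 1.0, 1.5, 3.0])       # photon energies (MeV)
R_grid = np.array([0.02, 0.05, 0.17, 0.36, 0.52, 0.74, 1.30])   # DCF per photon (illustrative)

def dcf(lines):
    """Dose-rate conversion factor for a nuclide given its
    (energy_MeV, yield_per_decay) photon lines."""
    return sum(y * np.interp(E, E_grid, R_grid) for E, y in lines)

# Cs-137: a single 0.662 MeV gamma emitted in 85.1% of decays
print(dcf([(0.662, 0.851)]))   # 0.851 * R(0.662)
```

Beta contributions, handled separately in the paper, would be added to this photon-only value.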

  2. External dose-rate conversion factors of radionuclides for air submersion, ground surface contamination and water immersion based on the new ICRP dosimetric setting

    PubMed Central

    Yoo, Song Jae; Jang, Han-Ki; Lee, Jai-Ki; Noh, Siwan; Cho, Gyuseong

    2013-01-01

    For the assessment of external doses due to a contaminated environment, the dose-rate conversion factors (DCFs) prescribed in Federal Guidance Report 12 (FGR 12) and FGR 13 have been widely used. Recently, there have been significant changes in dosimetric models and parameters, including the use of the Reference Male and Female Phantoms and the revised tissue weighting factors, as well as updated decay data of radionuclides. In this study, the DCFs for effective and equivalent doses were calculated for three exposure settings: skyshine, groundshine and water immersion. Doses to the Reference Phantoms were calculated by Monte Carlo simulations with the MCNPX 2.7.0 radiation transport code for 26 mono-energy photons between 0.01 and 10 MeV. The transport calculations were performed for the source volume within the cut-off distances practically contributing to the dose rates, which were determined by a simplified calculation model. For small tissues for which variance reduction is difficult, the equivalent-dose ratios to a nearby larger tissue (with lower statistical errors) were employed to make the calculation efficient. Empirical response functions relating photon energies to the organ equivalent doses or the effective doses were then derived by cubic-spline fitting of the resulting doses at the 26 energy points. The DCFs for all radionuclides considered important were evaluated by combining the photon emission data of each radionuclide with the empirical response functions. Finally, contributions of accompanying beta particles to the skin equivalent doses and the effective doses were calculated separately and added to the DCFs. For the radionuclides considered in this study, the new DCFs for the three exposure settings were within ±10 % of the DCFs in FGR 13. PMID:23542764
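    The evaluation pipeline the abstract describes, fit a response function over the 26 energy points and then sum over a nuclide's photon emission lines, can be sketched as follows. The energy grid, response values, and emission line below are illustrative placeholders (not the paper's fitted data), and simple linear interpolation stands in for the cubic-spline fit.

```python
from bisect import bisect_left

# Illustrative energy grid (MeV) and per-photon response values; the paper
# fits 26 points with cubic splines, linear interpolation stands in here.
ENERGIES = [0.01, 0.1, 0.5, 1.0, 5.0, 10.0]
RESPONSE = [0.02, 0.15, 0.60, 1.10, 4.50, 8.00]  # arbitrary units

def response_at(e_mev):
    """Interpolate the empirical response function at energy e_mev."""
    if not ENERGIES[0] <= e_mev <= ENERGIES[-1]:
        raise ValueError("energy outside fitted range")
    i = bisect_left(ENERGIES, e_mev)
    if ENERGIES[i] == e_mev:
        return RESPONSE[i]
    e0, e1 = ENERGIES[i - 1], ENERGIES[i]
    r0, r1 = RESPONSE[i - 1], RESPONSE[i]
    return r0 + (r1 - r0) * (e_mev - e0) / (e1 - e0)

def dcf(emission_lines):
    """Combine photon emission data [(energy MeV, photons per decay), ...]
    with the response function to obtain a dose-rate conversion factor."""
    return sum(y * response_at(e) for e, y in emission_lines)
```

    A beta-particle contribution, computed separately as in the abstract, would simply be added to the photon sum.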

  3. The oxidative stability of omega-3 oil-in-water nanoemulsion systems suitable for functional food enrichment: A systematic review of the literature.

    PubMed

    Bush, Linda; Stevenson, Leo; Lane, Katie E

    2017-10-23

    There is growing demand for functional food products enriched with long chain omega-3 polyunsaturated fatty acids (LCω3PUFA). Nanoemulsions, systems with extremely small droplet sizes, have been shown to increase LCω3PUFA bioavailability. However, nanoemulsion creation and processing methods may affect the oxidative stability of these systems. The present systematic review collates information from studies that evaluated the oxidative stability of LCω3PUFA nanoemulsions suitable for use in functional foods. The systematic search identified seventeen articles published during the last 10 years. Researchers used a range of surfactants and antioxidants to create systems that were evaluated over 7 to 100 days of storage. Nanoemulsions were created using synthetic and natural emulsifiers, with natural sources offering equivalent or increased oxidative stability compared to synthetic sources, which is useful as consumers are demanding natural, cleaner-label food products. Vegetarian sources of the LCω3PUFA found in fish oils, such as algal oils, are promising equivalents as they provide these fatty acids directly, without the need for conversion in the human metabolic pathway. Quillaja saponin is a promising natural emulsifier that can produce nanoemulsion systems with equivalent or increased oxidative stability in comparison to other emulsifiers. Further studies to evaluate the oxidative stability of quillaja saponin nanoemulsions combined with algal sources of LCω3PUFA are warranted.

  4. Measuring noise equivalent irradiance of a digital short-wave infrared imaging system using a broadband source to simulate the night spectrum

    NASA Astrophysics Data System (ADS)

    Green, John R.; Robinson, Timothy

    2015-05-01

    There is a growing interest in developing helmet-mounted digital imaging systems (HMDIS) for integration into military aircraft cockpits. This interest stems from the multiple advantages of digital over analog imaging, such as image fusion from multiple sensors, data processing to enhance image contrast, superposition of non-imaging data over the image, and sending images to a remote location for analysis. There are several properties an HMDIS must have in order to aid the pilot during night operations. In addition to the resolution, image refresh rate, dynamic range, and sensor uniformity over the entire Focal Plane Array (FPA), the imaging system must have the sensitivity to detect the limited night light available filtered through cockpit transparencies. Digital sensor sensitivity is generally measured monochromatically using a laser with a wavelength near the peak detector quantum efficiency, and is generally reported as either the Noise Equivalent Power (NEP) or Noise Equivalent Irradiance (NEI). This paper proposes a test system that measures the NEI of Short-Wave Infrared (SWIR) digital imaging systems using a broadband source that simulates the night spectrum. This method has a few advantages over a monochromatic method: the test conditions provide a spectrum closer to what is experienced by the end-user, and the resulting NEI may be compared directly to modeled night-glow irradiance calculations. This comparison may be used to assess the Technology Readiness Level of the imaging system for the application. The test system is being developed under a Cooperative Research and Development Agreement (CRADA) with the Air Force Research Laboratory.
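    In outline, an NEI measurement of this kind reduces to dividing the sensor's temporal noise by its responsivity to the (here broadband) source. A minimal single-pixel sketch, with purely illustrative counts and irradiance values, not the paper's test procedure:

```python
import statistics

dark_frames = [100.0, 101.0, 99.0, 100.0, 100.0]  # counts, shutter closed (illustrative)
lit_frames = [300.0, 301.0, 299.0, 300.0, 300.0]  # counts under the broadband source
irradiance = 5.0e-10                              # source irradiance at the FPA, W/cm^2 (illustrative)

# Responsivity: counts produced per unit irradiance.
responsivity = (statistics.mean(lit_frames) - statistics.mean(dark_frames)) / irradiance

# NEI: the irradiance that would produce a signal equal to the temporal noise.
nei = statistics.stdev(dark_frames) / responsivity
```

    Because the responsivity here is measured under the night-spectrum source itself, the resulting NEI is directly comparable to a modeled night-glow irradiance, which is the advantage the paper claims over a monochromatic laser measurement.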

  5. Virtual welding equipment for simulation of GMAW processes with integration of power source regulation

    NASA Astrophysics Data System (ADS)

    Reisgen, Uwe; Schleser, Markus; Mokrov, Oleg; Zabirov, Alexander

    2011-06-01

    A two-dimensional transient numerical analysis and computational module for simulation of electrical and thermal characteristics during electrode melting and metal transfer involved in Gas-Metal-Arc-Welding (GMAW) processes is presented. Solution of the non-linear transient heat transfer equation is carried out using a control-volume finite difference technique. The computational module also includes the controlling and regulation algorithms of industrial welding power sources. The simulation results are the current and voltage waveforms, mean voltage drops at different parts of the circuit, total electric power, cathode, anode and arc powers, and arc length. We describe application of the model to the normal process (constant voltage) and to pulsed processes with U/I and I/I modulation modes. Comparisons with experimental waveforms of current and voltage show that the model predicts current, voltage and electric power with high accuracy. The model is used in the simulation package SimWeld for calculation of the heat flux into the work-piece and the weld seam formation. From the calculated heat flux and weld pool sizes, an equivalent volumetric heat source according to the Goldak model can be generated. The method was implemented and investigated with the simulation software SimWeld developed by the ISF at RWTH Aachen University.
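    The Goldak model mentioned at the end is the standard double-ellipsoid analytic heat source. A sketch of one half of it (front or rear), with the function name and parameter values being our own illustrative choices rather than SimWeld's interface:

```python
import math

def goldak_q(x, y, z, power, a, b, c, f):
    """Power density (W/m^3) of one half of Goldak's double-ellipsoid heat
    source; a, b, c are the semi-axes (m), power is the absorbed heat input
    (W), and f is the energy fraction of this half (f_front + f_rear = 2).
    """
    coeff = 6.0 * math.sqrt(3.0) * f * power / (a * b * c * math.pi * math.sqrt(math.pi))
    return coeff * math.exp(-3.0 * (x / a) ** 2 - 3.0 * (y / b) ** 2 - 3.0 * (z / c) ** 2)
```

    At one semi-axis distance the density falls to exp(-3) of its peak, a quick sanity check when calibrating a, b and c against a measured melted-zone shape.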

  6. Simulation-Based Prediction of Equivalent Continuous Noises during Construction Processes

    PubMed Central

    Zhang, Hong; Pei, Yun

    2016-01-01

    Quantitative prediction of construction noise is crucial to evaluate construction plans to help make decisions to address noise levels. Considering limitations of existing methods for measuring or predicting the construction noise and particularly the equivalent continuous noise level over a period of time, this paper presents a discrete-event simulation method for predicting the construction noise in terms of equivalent continuous level. The noise-calculating models regarding synchronization, propagation and equivalent continuous level are presented. The simulation framework for modeling the noise-affected factors and calculating the equivalent continuous noise by incorporating the noise-calculating models into simulation strategy is proposed. An application study is presented to demonstrate and justify the proposed simulation method in predicting the equivalent continuous noise during construction. The study contributes to provision of a simulation methodology to quantitatively predict the equivalent continuous noise of construction by considering the relevant uncertainties, dynamics and interactions. PMID:27529266

  7. Simulation-Based Prediction of Equivalent Continuous Noises during Construction Processes.

    PubMed

    Zhang, Hong; Pei, Yun

    2016-08-12

    Quantitative prediction of construction noise is crucial to evaluate construction plans to help make decisions to address noise levels. Considering limitations of existing methods for measuring or predicting the construction noise and particularly the equivalent continuous noise level over a period of time, this paper presents a discrete-event simulation method for predicting the construction noise in terms of equivalent continuous level. The noise-calculating models regarding synchronization, propagation and equivalent continuous level are presented. The simulation framework for modeling the noise-affected factors and calculating the equivalent continuous noise by incorporating the noise-calculating models into simulation strategy is proposed. An application study is presented to demonstrate and justify the proposed simulation method in predicting the equivalent continuous noise during construction. The study contributes to provision of a simulation methodology to quantitatively predict the equivalent continuous noise of construction by considering the relevant uncertainties, dynamics and interactions.
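    For reference, the equivalent continuous level that such a simulation predicts is the energy-average of the time-varying level; for piecewise-constant levels over known intervals it reduces to a few lines (function and variable names here are our own):

```python
import math

def leq(levels_db, durations):
    """Equivalent continuous sound level L_eq over the whole period:
    10*log10 of the duration-weighted mean of the sound energies."""
    total = sum(durations)
    energy = sum(t * 10.0 ** (level / 10.0) for level, t in zip(levels_db, durations))
    return 10.0 * math.log10(energy / total)
```

    A constant 80 dB gives back 80 dB, while the loud interval dominates a mixture: one hour at 90 dB plus one hour at 60 dB averages to roughly 87 dB, not 75 dB, which is why arithmetic averaging of decibel levels understates construction noise.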

  8. A Model for Temperature Fluctuations in a Buoyant Plume

    NASA Astrophysics Data System (ADS)

    Bisignano, A.; Devenish, B. J.

    2015-11-01

    We present a hybrid Lagrangian stochastic model for buoyant plume rise from an isolated source that includes the effects of temperature fluctuations. The model is based on that of Webster and Thomson (Atmos Environ 36:5031-5042, 2002) in that it is a coupling of a classical plume model in a crossflow with stochastic differential equations for the vertical velocity and temperature (which are themselves coupled). The novelty lies in the addition of the latter stochastic differential equation. Parametrizations of the plume turbulence are presented that are used as inputs to the model. The root-mean-square temperature is assumed to be proportional to the difference between the centreline temperature of the plume and the ambient temperature. The constant of proportionality is tuned by comparison with equivalent statistics from large-eddy simulations (LES) of buoyant plumes in a uniform crossflow and linear stratification. We compare plume trajectories for a wide range of crossflow velocities and find that the model generally compares well with the equivalent LES results particularly when added mass is included in the model. The exception occurs when the crossflow velocity component becomes very small. Comparison of the scalar concentration, both in terms of the height of the maximum concentration and its vertical spread, shows similar behaviour. The model is extended to allow for realistic profiles of ambient wind and temperature and the results are compared with LES of the plume that emanated from the explosion and fire at the Buncefield oil depot in 2005.

  9. 40 CFR 430.57 - Pretreatment standards for new sources (PSNS).

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... when POTWs find it necessary to impose mass effluent standards, equivalent mass standards are provided... 40 Protection of Environment 31 2012-07-01 2012-07-01 false Pretreatment standards for new sources...) EFFLUENT GUIDELINES AND STANDARDS (CONTINUED) THE PULP, PAPER, AND PAPERBOARD POINT SOURCE CATEGORY...

  10. 40 CFR 430.57 - Pretreatment standards for new sources (PSNS).

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... when POTWs find it necessary to impose mass effluent standards, equivalent mass standards are provided... 40 Protection of Environment 30 2014-07-01 2014-07-01 false Pretreatment standards for new sources...) EFFLUENT GUIDELINES AND STANDARDS (CONTINUED) THE PULP, PAPER, AND PAPERBOARD POINT SOURCE CATEGORY...

  11. 40 CFR 430.57 - Pretreatment standards for new sources (PSNS).

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... when POTWs find it necessary to impose mass effluent standards, equivalent mass standards are provided... 40 Protection of Environment 31 2013-07-01 2013-07-01 false Pretreatment standards for new sources...) EFFLUENT GUIDELINES AND STANDARDS (CONTINUED) THE PULP, PAPER, AND PAPERBOARD POINT SOURCE CATEGORY...

  12. Evaluation of Earth's Geobiosphere Emergy Baseline and the Emergy of Crustal Cycling

    NASA Astrophysics Data System (ADS)

    De Vilbiss, Chris

    This dissertation quantitatively analyzed the exergy supporting the nucleosynthesis of the heavy isotopes, Earth's geobiosphere, and its crustal cycling. Exergy is that portion of energy that is available to drive work. The exergy sources that drive the geobiosphere are sunlight, Earth's rotational kinetic energy and relic heat, and radionuclides in Earth's interior. These four exergy sources were used to compute the Earth's geobiosphere emergy baseline (GEB), expressed in a single unit, solar equivalent joules (seJ). The seJ of radionuclides were computed by determining the quantity of gravitational exergy that dissipated in the production of both sunlight and heavy isotopes. This new method of computing solar equivalences was also applied to Earth's relic heat and rotational energy. The equivalent quantities of these four exergy sources were then added to express the GEB. This new baseline was compared with several other contemporary GEB methods. The new GEB is modeled as the support to Earth's crustal cycle and ultimately to the economical mineral deposits used in the US economy. Given the average annual cycling of crustal material and its average composition, specific emergies were calculated to express the average emergy per mass of particular crustal minerals. Chemical exergies of the minerals were used to develop transformities and specific emergies of minerals at heightened concentrations, i.e. minable concentrations. The effect of these new mineral emergy values was examined using the US economy as an example. The final result is an 83% reduction in the emergy of limestone, a 91% reduction in the aggregated emergy of all other minerals, and a 23% reduction in the emergy of the US economy. This dissertation explored three unique and innovative methods to compute the emergy of Earth's exergy sources and resources. First was a method for computing the emergy of radionuclides. Second was a method to evaluate the Earth's relic heat and dissipation of gravitational exergy using forward computation. Third was a more consistent method to compute the emergy value of crustal minerals based on their chemical exergy.

  13. Linear models for sound from supersonic reacting mixing layers

    NASA Astrophysics Data System (ADS)

    Chary, P. Shivakanth; Samanta, Arnab

    2016-12-01

    We perform a linearized reduced-order modeling of the aeroacoustic sound sources in supersonic reacting mixing layers to explore their sensitivities to some of the flow parameters in radiating sound. Specifically, we investigate the role of outer modes as the effective flow compressibility is raised, when some of these are expected to dominate over the traditional Kelvin-Helmholtz (K-H) -type central mode. Although the outer modes are known to be of lesser importance in the near-field mixing, how they radiate to the far-field is uncertain, which is our focus. Keeping the flow compressibility fixed, the outer modes are realized by biasing the respective mean densities of the fast (oxidizer) or slow (fuel) side. Here the mean flows are laminar solutions of two-dimensional compressible boundary layers with an imposed composite (turbulent) spreading rate, which we show to significantly alter the growth of instability waves by saturating them earlier, as in nonlinear calculations, achieved here via solving the linear parabolized stability equations. As the flow parameters are varied, instability of the slow modes is shown to be more sensitive to heat release, potentially exceeding equivalent central modes, as these modes yield relatively compact sound sources with lesser spreading of the mixing layer when compared to the corresponding fast modes. In contrast, the radiated sound seems to be relatively unaffected when the mixture equivalence ratio is varied, except for a lean mixture, which is shown to yield a pronounced effect on the slow-mode radiation by reducing its modal growth.

  14. Progress Toward Improving Jet Noise Predictions in Hot Jets

    NASA Technical Reports Server (NTRS)

    Khavaran, Abbas; Kenzakowski, Donald C.

    2007-01-01

    An acoustic analogy methodology for improving noise predictions in hot round jets is presented. Past approaches have often neglected the impact of temperature fluctuations on the predicted sound spectral density, which could be significant for heated jets, and this has yielded noticeable acoustic under-predictions in such cases. The governing acoustic equations adopted here are a set of linearized, inhomogeneous Euler equations. These equations are combined into a single third order linear wave operator when the base flow is considered as a locally parallel mean flow. The remaining second-order fluctuations are regarded as the equivalent sources of sound and are modeled. It is shown that the hot jet effect may be introduced primarily through a fluctuating velocity/enthalpy term. Modeling this additional source requires specialized inputs from a RANS-based flowfield simulation. The information is supplied using an extension to a baseline two equation turbulence model that predicts total enthalpy variance in addition to the standard parameters. Preliminary application of this model to a series of unheated and heated subsonic jets shows significant improvement in the acoustic predictions at the 90 degree observer angle.

  15. The numerical simulation of heat transfer during a hybrid laser-MIG welding using equivalent heat source approach

    NASA Astrophysics Data System (ADS)

    Bendaoud, Issam; Matteï, Simone; Cicala, Eugen; Tomashchuk, Iryna; Andrzejewski, Henri; Sallamand, Pierre; Mathieu, Alexandre; Bouchaud, Fréderic

    2014-03-01

    The present study is dedicated to the numerical simulation of an industrial case of hybrid laser-MIG welding of high thickness duplex steel UR2507Cu with Y-shaped chamfer geometry. It consists in simulation of heat transfer phenomena using heat equivalent source approach and implementing in finite element software COMSOL Multiphysics. A numerical exploratory designs method is used to identify the heat sources parameters in order to obtain a minimal required difference between the numerical results and the experiment which are the shape of the welded zone and the temperature evolution in different locations. The obtained results were found in good correspondence with experiment, both for melted zone shape and thermal history.

  16. Infrasound radiated by the Gerdec and Chelopechene explosions: propagation along unexpected paths

    NASA Astrophysics Data System (ADS)

    Green, David N.; Vergoz, Julien; Gibson, Robert; Le Pichon, Alexis; Ceranna, Lars

    2011-05-01

    Infrasound propagation paths through the atmosphere are controlled by the temporally and spatially varying sound speed and wind speed amplitudes. Because of the complexity of atmospheric acoustic propagation, it is often difficult to reconcile observed infrasonic arrivals with the sound speed profiles predicted by meteorological specifications. This paper provides analyses of unexpected arrivals recorded in Europe and North Africa from two series of accidental munitions dump explosions, recorded at ranges greater than 1000 km: two explosions at Gerdec, Albania, on 2008 March 15 and four explosions at Chelopechene, Bulgaria, on 2008 July 3. The recorded signal characteristics include multiple pulsed arrivals, celerities between 0.24 and 0.34 km s-1 and some signal frequency content above 1 Hz. Often such characteristics are associated with waves that have propagated within a ground-to-stratosphere waveguide, although the observed celerities extend both above and below the conventional range for stratospheric arrivals. However, state-of-the-art meteorological specifications indicate that either weak, or no, ground-to-stratosphere waveguides are present along the source-to-receiver paths. By incorporating realistic gravity-wave-induced horizontal velocity fluctuations into time-domain Parabolic Equation models, the pulsed nature of the signals is simulated, and arrival times are predicted to within 30 s of the observed values (<1 per cent of the source-to-receiver transit time). Modelling amplitudes is highly dependent upon estimates of the unknown acoustic source strength (or equivalent chemical explosive yield). Current empirical explosive yield relationships, derived from infrasonic amplitude measurements from point-source chemical explosions, suggest that the equivalent chemical yield of the largest Gerdec explosion was of the order of 1 kt and the largest Chelopechene explosion was of the order of 100 t.
When incorporating these assumed yields, the Parabolic Equation simulations predict peak signal amplitudes to within an order of magnitude of the observed values. As gravity wave velocity perturbations can significantly influence both infrasonic arrival times and signal amplitudes they need to be accounted for in source location and yield estimation routines, both of which are important for explosion monitoring, especially in the context of the Comprehensive Nuclear-Test-Ban Treaty.

  17. Non-Gaussian probabilistic MEG source localisation based on kernel density estimation

    PubMed Central

    Mohseni, Hamid R.; Kringelbach, Morten L.; Woolrich, Mark W.; Baker, Adam; Aziz, Tipu Z.; Probert-Smith, Penny

    2014-01-01

    There is strong evidence to suggest that data recorded from magnetoencephalography (MEG) follows a non-Gaussian distribution. However, existing standard methods for source localisation model the data using only second order statistics, and therefore use the inherent assumption of a Gaussian distribution. In this paper, we present a new general method for non-Gaussian source estimation of stationary signals for localising brain activity from MEG data. By providing a Bayesian formulation for MEG source localisation, we show that the source probability density function (pdf), which is not necessarily Gaussian, can be estimated using multivariate kernel density estimators. In the case of Gaussian data, the solution of the method is equivalent to that of widely used linearly constrained minimum variance (LCMV) beamformer. The method is also extended to handle data with highly correlated sources using the marginal distribution of the estimated joint distribution, which, in the case of Gaussian measurements, corresponds to the null-beamformer. The proposed non-Gaussian source localisation approach is shown to give better spatial estimates than the LCMV beamformer, both in simulations incorporating non-Gaussian signals, and in real MEG measurements of auditory and visual evoked responses, where the highly correlated sources are known to be difficult to estimate. PMID:24055702
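    The core ingredient, a kernel density estimate of the source pdf, is easiest to illustrate in one dimension: with a Gaussian kernel, the estimate at a point is the mean of Gaussians centred on the samples. A minimal sketch (bandwidth choice is the usual tuning knob, and the paper works with multivariate estimators):

```python
import math

def kde_pdf(x, samples, bandwidth):
    """Gaussian kernel density estimate of a pdf at point x."""
    norm = 1.0 / (len(samples) * bandwidth * math.sqrt(2.0 * math.pi))
    return norm * sum(math.exp(-0.5 * ((x - s) / bandwidth) ** 2) for s in samples)
```

    For Gaussian-distributed samples the estimate tends toward a Gaussian pdf, which is consistent with the paper's result that the method reduces to the LCMV beamformer in the Gaussian case.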

  18. Radiogenic heat production in sedimentary rocks of the Gulf of Mexico Basin, south Texas

    USGS Publications Warehouse

    McKenna, T.E.; Sharp, J.M.

    1998-01-01

    Radiogenic heat production within the sedimentary section of the Gulf of Mexico basin is a significant source of heat. Radiogenic heat should be included in thermal models of this basin (and perhaps other sedimentary basins). We calculate that radiogenic heat may contribute up to 26% of the overall surface heat-flow density for an area in south Texas. Based on measurements of the radioactive decay rate of α-particles, potassium concentration, and bulk density, we calculate radiogenic heat production for Stuart City (Lower Cretaceous) limestones, Wilcox (Eocene) sandstones and mudrocks, and Frio (Oligocene) sandstones and mudrocks from south Texas. Heat production rates range from a low of 0.07 ± 0.01 µW/m3 in clean Stuart City limestones to 2.21 ± 0.24 µW/m3 in Frio mudrocks. Mean heat production rates for Wilcox sandstones, Frio sandstones, Wilcox mudrocks, and Frio mudrocks are 0.88, 1.19, 1.50, and 1.72 µW/m3, respectively. In general, the mudrocks produce about 30-40% more heat than stratigraphically equivalent sandstones. Frio rocks produce about 15% more heat than Wilcox rocks per unit volume of clastic rock (sandstone/mudrock). A one-dimensional heat-conduction model indicates that this radiogenic heat source has a significant effect on subsurface temperatures. If a thermal model were calibrated to observed temperatures by optimizing basal heat-flow density and ignoring sediment heat production, the extrapolated present-day temperature of a deeply buried source rock would be overestimated.
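    When element concentrations rather than direct decay-rate measurements are available, volumetric heat production is commonly estimated with Rybach's (1976) empirical relation; this is a standard substitute for illustration, not the measurement-based procedure the authors used:

```python
def heat_production(rho, c_u, c_th, c_k):
    """Radiogenic heat production A (microW/m^3) from rock density rho
    (kg/m^3), U and Th concentrations (ppm), and K concentration (wt%),
    after Rybach (1976)."""
    return 1e-5 * rho * (9.52 * c_u + 2.56 * c_th + 3.48 * c_k)
```

    Typical upper-crustal inputs (rho of about 2700 kg/m3, 2.7 ppm U, 10.5 ppm Th, 2.5 wt% K) give roughly 1.65 µW/m3, the same order as the mudrock values reported above.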

  19. 77 FR 11039 - Proposed Confidentiality Determinations for the Petroleum and Natural Gas Systems Source Category...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-02-24

    ... CO2 carbon dioxide CO2e carbon dioxide equivalent CBI confidential business information CFR Code... RFA Regulatory Flexibility Act T-D transmission--distribution UIC Underground Injection Control UMRA... to or greater than 25,000 metric tons carbon dioxide equivalent (mtCO2e). The proposed...

  20. Whatever Gave You That Idea? False Memories Following Equivalence Training: A Behavioral Account of the Misinformation Effect

    PubMed Central

    Challies, Danna M; Hunt, Maree; Garry, Maryanne; Harper, David N

    2011-01-01

    The misinformation effect is a term used in the cognitive psychological literature to describe both experimental and real-world instances in which misleading information is incorporated into an account of an historical event. In many real-world situations, it is not possible to identify a distinct source of misinformation, and it appears that the witness may have inferred a false memory by integrating information from a variety of sources. In a stimulus equivalence task, a small number of trained relations between some members of a class of arbitrary stimuli result in a large number of untrained, or emergent relations, between all members of the class. Misleading information was introduced into a simple memory task between a learning phase and a recognition test by means of a match-to-sample stimulus equivalence task that included both stimuli from the original learning task and novel stimuli. At the recognition test, participants given equivalence training were more likely to misidentify patterns than those who were not given such training. The misinformation effect was distinct from the effects of prior stimulus exposure, or partial stimulus control. In summary, stimulus equivalence processes may underlie some real-world manifestations of the misinformation effect. PMID:22084495

  1. A Note on the Equivalence between Observed and Expected Information Functions with Polytomous IRT Models

    ERIC Educational Resources Information Center

    Magis, David

    2015-01-01

    The purpose of this note is to study the equivalence of observed and expected (Fisher) information functions with polytomous item response theory (IRT) models. It is established that observed and expected information functions are equivalent for the class of divide-by-total models (including partial credit, generalized partial credit, rating…

  2. Alternative Fuels Data Center: Delaware Transportation Data for Alternative

    Science.gov Websites

    local stakeholders. Gasoline Diesel Natural Gas Transportation Fuel Consumption Source: State Energy Plants 1 Renewable Power Plant Capacity (nameplate, MW) 2 Source: BioFuels Atlas from the National /gallon $2.66/GGE Source: Average prices per gasoline gallon equivalent (GGE) for the Central Atlantic

  3. Jewish Studies: A Guide to Reference Sources.

    ERIC Educational Resources Information Center

    McGill Univ., Montreal (Quebec). McLennan Library.

    An annotated bibliography to the reference sources for Jewish Studies in the McLennan Library of McGill University (Canada) is presented. Any titles in Hebrew characters are listed by their transliterated equivalents. There is also a list of relevant Library of Congress Subject Headings. General reference sources listed are: encyclopedias,…

  4. Cost-effectiveness Analysis with Influence Diagrams.

    PubMed

    Arias, M; Díez, F J

    2015-01-01

    Cost-effectiveness analysis (CEA) is used increasingly in medicine to determine whether the health benefit of an intervention is worth the economic cost. Decision trees, the standard decision modeling technique for non-temporal domains, can only perform CEA for very small problems. Our objective was to develop a method for CEA in problems involving several dozen variables. We explain how to build influence diagrams (IDs) that explicitly represent cost and effectiveness. We propose an algorithm for evaluating cost-effectiveness IDs directly, i.e., without expanding an equivalent decision tree. The evaluation of an ID returns a set of intervals for the willingness to pay, separated by cost-effectiveness thresholds, and, for each interval, the cost, the effectiveness, and the optimal intervention. The algorithm that evaluates the ID directly is in general much more efficient than the brute-force method, which is in turn more efficient than the expansion of an equivalent decision tree. Using OpenMarkov, an open-source software tool that implements this algorithm, we have been able to perform CEAs on several IDs whose equivalent decision trees contain millions of branches. IDs can perform CEA on large problems that cannot be analyzed with decision trees.
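    The kind of output described, the optimal intervention per willingness-to-pay interval, is what a brute-force net-monetary-benefit sweep produces on a tiny problem; a toy version with made-up costs and effectiveness values (this is the naive baseline, not the ID evaluation algorithm of the paper):

```python
# (cost in currency units, effectiveness in QALYs), illustrative numbers only
INTERVENTIONS = {
    "no_treatment": (0.0, 5.0),
    "drug_A": (20_000.0, 5.8),
    "drug_B": (70_000.0, 6.1),
}

def optimal_intervention(wtp):
    """Intervention maximising net monetary benefit wtp*effectiveness - cost."""
    return max(INTERVENTIONS,
               key=lambda name: wtp * INTERVENTIONS[name][1] - INTERVENTIONS[name][0])
```

    Sweeping wtp reproduces the threshold structure the evaluation returns: below the first incremental cost-effectiveness ratio (25,000 per QALY here) no_treatment wins, between the two thresholds drug_A wins, and above about 166,667 per QALY drug_B wins.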

  5. Analyzing and modeling gravity and magnetic anomalies using the SPHERE program and Magsat data

    NASA Technical Reports Server (NTRS)

    Braile, L. W.; Hinze, W. J.; Vonfrese, R. R. B. (Principal Investigator)

    1981-01-01

    Computer codes were completed, tested, and documented for analyzing magnetic anomaly vector components by equivalent point dipole inversion. The codes are intended for use in inverting the magnetic anomaly due to a spherical prism in a horizontal geomagnetic field and for recomputing the anomaly in a vertical geomagnetic field. Modeling of potential fields at satellite elevations that are derived from three dimensional sources by program SPHERE was made significantly more efficient by improving the input routines. A preliminary model of the Andean subduction zone was used to compute the anomaly at satellite elevations using both actual geomagnetic parameters and vertical polarization. Program SPHERE is also being used to calculate satellite level magnetic and gravity anomalies from the Amazon River Aulacogen.
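The equivalent point dipole modeling mentioned above rests on the standard point-dipole field expression. The sketch below evaluates that expression only; it is not the SPHERE program, and the inputs are hypothetical.

```python
import numpy as np

# Point-dipole magnetic field: B = (mu0 / 4*pi) * (3 (m.rhat) rhat - m) / r^3.
MU0_OVER_4PI = 1e-7  # T*m/A

def dipole_field(m, r):
    """Field (tesla) at position r (metres) of a point dipole with moment m (A*m^2)."""
    m, r = np.asarray(m, float), np.asarray(r, float)
    rn = np.linalg.norm(r)
    rhat = r / rn
    return MU0_OVER_4PI * (3.0 * np.dot(m, rhat) * rhat - m) / rn**3
```

A quick sanity check of the formula: the on-axis field of a dipole is twice the equatorial field in magnitude.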

  6. The extended Beer-Lambert theory for ray tracing modeling of LED chip-scaled packaging application with multiple luminescence materials

    NASA Astrophysics Data System (ADS)

    Yuan, Cadmus C. A.

    2015-12-01

Optical ray tracing modeling has applied the Beer-Lambert method to single-luminescence-material systems to model the white light pattern from a blue LED light source. This paper extends the algorithm to a mixed system of multiple luminescence materials by introducing the equivalent excitation and emission spectra of the individual luminescence materials. The quantum efficiencies of the individual materials and the self-absorption of the multiple-material system are considered as well. With this combination, researchers can model the luminescence characteristics of LED chip-scaled packaging (CSP), which offers simple process steps and freedom in the geometrical dimensions of the luminescence material. The method is first validated against experimental results; a parametric investigation is then conducted.
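The underlying Beer-Lambert attenuation along a ray, extended to several absorbing materials, can be sketched as below. This is a minimal illustration of the attenuation law only, not the paper's full excitation/emission algorithm; all coefficient and concentration values are hypothetical.

```python
import math

def transmitted(I0, path_length, materials):
    """Beer-Lambert transmission through a mixture along one ray.

    materials: list of (mu_i, c_i) pairs, where mu_i is the absorption
    coefficient of material i and c_i its concentration; the exponents
    of the individual materials simply add."""
    attenuation = sum(mu * c for mu, c in materials)
    return I0 * math.exp(-attenuation * path_length)
```

With a single material whose product mu*c*L equals ln 2, exactly half the intensity is transmitted, which is a convenient check.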

  7. Localization from near-source quasi-static electromagnetic fields

    NASA Astrophysics Data System (ADS)

    Mosher, J. C.

    1993-09-01

A wide range of research has been published on the problem of estimating the parameters of electromagnetic and acoustical sources from signals measured at an array of sensors. In the quasi-static electromagnetic cases examined here, the signal variation from a point source is relatively slow with respect to the signal propagation and the spacing of the array of sensors. As such, the location of the point sources can only be determined from the spatial diversity of the received signal across the array. The inverse source localization problem is complicated by unknown model order and strong local minima. The nonlinear optimization problem is posed for solving for the parameters of the quasi-static source model. The transient nature of the sources can be exploited to allow subspace approaches to separate out the signal portion of the spatial correlation matrix. Decomposition techniques are examined for improved processing, and an adaptation of MUltiple SIgnal Characterization (MUSIC) is presented for solving the source localization problem. Recent results on calculating the Cramer-Rao error lower bounds are extended to the multidimensional problem here. This thesis focuses on the problem of source localization in magnetoencephalography (MEG), with a secondary application to thunderstorm source localization. Comparisons are also made between MEG and its electrical equivalent, electroencephalography (EEG). The error lower bounds are examined in detail for several MEG and EEG configurations, as well as localizing thunderstorm cells over Cape Canaveral and Kennedy Space Center. Time-eigenspectrum is introduced as a parsing technique for improving the performance of the optimization problem.
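The subspace idea behind the MUSIC adaptation can be illustrated with a toy scan: candidate source gain vectors are projected onto the noise subspace of the spatial covariance matrix, and true source locations give near-zero projections. This sketch is generic array processing, not the thesis's MEG formulation; the gain vectors and dimensions are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sensors, n_candidates, true_idx = 8, 50, 17

# Hypothetical gain (lead-field) vectors, one column per candidate location.
A = rng.standard_normal((n_sensors, n_candidates))
a_true = A[:, true_idx]

# Spatial covariance for one active source plus uncorrelated sensor noise.
R = np.outer(a_true, a_true) + 0.01 * np.eye(n_sensors)

# Noise subspace: eigenvectors orthogonal to the (rank-1) signal subspace.
w, V = np.linalg.eigh(R)          # eigenvalues ascending
En = V[:, :-1]                    # drop the largest (signal) eigenvector

# MUSIC pseudospectrum: large where a candidate is orthogonal to En.
proj = En.T @ A
P = 1.0 / np.einsum("ij,ij->j", proj, proj)
best = int(np.argmax(P))
```

The pseudospectrum peaks at the true candidate because its gain vector lies entirely in the signal subspace, so its noise-subspace projection is numerically tiny.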

  8. Localization from near-source quasi-static electromagnetic fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mosher, John Compton

    1993-09-01

A wide range of research has been published on the problem of estimating the parameters of electromagnetic and acoustical sources from signals measured at an array of sensors. In the quasi-static electromagnetic cases examined here, the signal variation from a point source is relatively slow with respect to the signal propagation and the spacing of the array of sensors. As such, the location of the point sources can only be determined from the spatial diversity of the received signal across the array. The inverse source localization problem is complicated by unknown model order and strong local minima. The nonlinear optimization problem is posed for solving for the parameters of the quasi-static source model. The transient nature of the sources can be exploited to allow subspace approaches to separate out the signal portion of the spatial correlation matrix. Decomposition techniques are examined for improved processing, and an adaptation of MUltiple SIgnal Characterization (MUSIC) is presented for solving the source localization problem. Recent results on calculating the Cramer-Rao error lower bounds are extended to the multidimensional problem here. This thesis focuses on the problem of source localization in magnetoencephalography (MEG), with a secondary application to thunderstorm source localization. Comparisons are also made between MEG and its electrical equivalent, electroencephalography (EEG). The error lower bounds are examined in detail for several MEG and EEG configurations, as well as localizing thunderstorm cells over Cape Canaveral and Kennedy Space Center. Time-eigenspectrum is introduced as a parsing technique for improving the performance of the optimization problem.

  9. Satellite and surface geophysical expression of anomalous crustal structure in Kentucky and Tennessee

    NASA Technical Reports Server (NTRS)

    Mayhew, M. A.; Thomas, H. H.; Wasilewski, P. J.

    1981-01-01

An equivalent layer magnetization model is discussed. Inversion of long wavelength satellite magnetic anomaly data indicates a very magnetic source region centered in south central Kentucky. Refraction profiles suggest that the source of the gravity anomaly is a large mass of rock occupying much of the crustal thickness. The outline of the source delineated by gravity contours is also discernible in aeromagnetic anomaly patterns. The source is interpreted as a mafic plutonic complex, and several lines of evidence are consistent with a rift association. The body is, however, clearly related to the inferred position of the Grenville Front. It is bounded on the north by the fault zones of the 38th Parallel Lineament. It is suggested that such magnetization levels are achieved with magnetic mineralogies produced by normal oxidation and metamorphic processes and enhanced by viscous build-up, especially in mafic rocks of alkaline character.

  10. A Model for Semantic Equivalence Discovery for Harmonizing Master Data

    NASA Astrophysics Data System (ADS)

    Piprani, Baba

IT projects often face the challenge of harmonizing metadata and data so as to have a "single" version of the truth. Determining equivalency of multiple data instances against the given type, or set of types, is mandatory in establishing master data legitimacy in a data set that contains multiple incarnations of instances belonging to the same semantic data record. The results of a real-life application define how measuring criteria and equivalence path determination were established via a set of "probes" in conjunction with a score-card approach. There is a need for a suite of supporting models to help determine master data equivalency towards entity resolution—including mapping models, transform models, selection models, match models, an audit and control model, a scorecard model, and a rating model. An ORM schema defines the set of supporting models along with their incarnation into an attribute-based model as implemented in an RDBMS.

  11. Suzaku Observation of Two Ultraluminous X-ray Sources in NGC 1313

    NASA Technical Reports Server (NTRS)

    Mizuno, T.; Miyawaki, R.; Ebisawa, K.; Kubota, A.; Miyamoto, M.; Winter, L.; Ueda, Y.; Isobe, N.; Dewangan, G.; Mushotzky, R.F.; hide

    2007-01-01

A study was made of two ultraluminous X-ray sources (ULXs) in the nearby face-on, late-type Sb galaxy NGC 1313 using data from Suzaku, the 5th Japanese X-ray satellite. Within the 90 ks observation, both sources, named X-1 and X-2, exhibited luminosity changes of about 50%. The 0.4-10 keV X-ray luminosity was measured. For X-1, the spectrum exhibited a strong power-law component with a high energy cutoff, which is thought to arise from strong Comptonization by a disk corona, suggesting the source was in a very high state. Absorption line features with equivalent widths of 40-80 eV found at 7.00 keV and 7.8 keV in the X-1 spectrum support the presence of a highly ionized plasma and a high mass accretion rate in the system. The spectrum of X-2 in its fainter phase is represented by a multicolor disk blackbody model.

  12. Families of miocene monterey crude oil, seep, and tarball samples, coastal California

    USGS Publications Warehouse

    Peters, K.E.; Hostettler, F.D.; Lorenson, T.D.; Rosenbauer, R.J.

    2008-01-01

    Biomarker and stable carbon isotope ratios were used to infer the age, lithology, organic matter input, and depositional environment of the source rocks for 388 samples of produced crude oil, seep oil, and tarballs to better assess their origins and distributions in coastal California. These samples were used to construct a chemometric (multivariate statistical) decision tree to classify 288 additional samples. The results identify three tribes of 13C-rich oil samples inferred to originate from thermally mature equivalents of the clayey-siliceous, carbonaceous marl and lower calcareous-siliceous members of the Monterey Formation at Naples Beach near Santa Barbara. An attempt to correlate these families to rock extracts from these members in the nearby COST (continental offshore stratigraphic test) (OCS-Cal 78-164) well failed, at least in part because the rocks are thermally immature. Geochemical similarities among the oil tribes and their widespread distribution support the prograding margin model or the banktop-slope-basin model instead of the ridge-and-basin model for the deposition of the Monterey Formation. Tribe 1 contains four oil families having geochemical traits of clay-rich marine shale source rock deposited under suboxic conditions with substantial higher plant input. Tribe 2 contains four oil families with traits intermediate between tribes 1 and 3, except for abundant 28,30-bisnorhopane, indicating suboxic to anoxic marine marl source rock with hemipelagic input. Tribe 3 contains five oil families with traits of distal marine carbonate source rock deposited under anoxic conditions with pelagic but little or no higher plant input. Tribes 1 and 2 occur mainly south of Point Conception in paleogeographic settings where deep burial of the Monterey source rock favored petroleum generation from all three members or their equivalents. 
In this area, oil from the clayey-siliceous and carbonaceous marl members (tribes 1 and 2) may overwhelm that from the lower calcareous-siliceous member (tribe 3) because the latter is thinner and less oil-prone than the overlying members. Tribe 3 occurs mainly north of Point Conception where shallow burial caused preferential generation from the underlying lower calcareous-siliceous member or another unit with similar characteristics. In a test of the decision tree, 10 tarball samples collected from beaches in Monterey and San Mateo counties in early 2007 were found to originate from natural seeps representing different organofacies of Monterey Formation source rock instead of from a single anthropogenic pollution event. The seeps apparently became more active because of increased storm activity. Copyright © 2008. The American Association of Petroleum Geologists. All rights reserved.

  13. Equivalent-Continuum Modeling With Application to Carbon Nanotubes

    NASA Technical Reports Server (NTRS)

    Odegard, Gregory M.; Gates, Thomas S.; Nicholson, Lee M.; Wise, Kristopher E.

    2002-01-01

A method has been proposed for developing structure-property relationships of nano-structured materials. This method serves as a link between computational chemistry and solid mechanics by substituting discrete molecular structures with equivalent-continuum models. It has been shown that this substitution may be accomplished by equating the vibrational potential energy of a nano-structured material with the strain energy of representative truss and continuum models. As important examples with direct application to the development and characterization of single-walled carbon nanotubes and the design of nanotube-based devices, the modeling technique has been applied to determine the effective-continuum geometry and bending rigidity of a graphene sheet. A representative volume element of the chemical structure of graphene has been substituted with equivalent-truss and equivalent-continuum models. As a result, an effective thickness of the continuum model has been determined; this effective thickness has been shown to be significantly larger than the inter-planar spacing of graphite. The effective bending rigidity of the equivalent-continuum model of a graphene sheet was determined by equating the vibrational potential energy of the molecular model of a graphene sheet subjected to cylindrical bending with the strain energy of an equivalent continuum plate subjected to cylindrical bending.
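The link between an effective bending rigidity and an effective thickness uses the standard plate relation D = E t³ / (12 (1 − ν²)). The sketch below simply inverts that relation; it is not the paper's energy-equivalence procedure, and the material values in the test are hypothetical.

```python
def effective_thickness(D, E, nu):
    """Plate thickness implied by bending rigidity D, Young's modulus E,
    and Poisson's ratio nu, from D = E * t**3 / (12 * (1 - nu**2))."""
    return (12.0 * D * (1.0 - nu**2) / E) ** (1.0 / 3.0)
```

A round trip (compute D from a known thickness, then recover the thickness) is a convenient consistency check on the inversion.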

  14. Equivalent Electromagnetic Constants for Microwave Application to Composite Materials for the Multi-Scale Problem

    PubMed Central

    Fujisaki, Keisuke; Ikeda, Tomoyuki

    2013-01-01

    To connect different scale models in the multi-scale problem of microwave use, equivalent material constants were researched numerically by a three-dimensional electromagnetic field, taking into account eddy current and displacement current. A volume averaged method and a standing wave method were used to introduce the equivalent material constants; water particles and aluminum particles are used as composite materials. Consumed electrical power is used for the evaluation. Water particles have the same equivalent material constants for both methods; the same electrical power is obtained for both the precise model (micro-model) and the homogeneous model (macro-model). However, aluminum particles have dissimilar equivalent material constants for both methods; different electric power is obtained for both models. The varying electromagnetic phenomena are derived from the expression of eddy current. For small electrical conductivity such as water, the macro-current which flows in the macro-model and the micro-current which flows in the micro-model express the same electromagnetic phenomena. However, for large electrical conductivity such as aluminum, the macro-current and micro-current express different electromagnetic phenomena. The eddy current which is observed in the micro-model is not expressed by the macro-model. Therefore, the equivalent material constant derived from the volume averaged method and the standing wave method is applicable to water with a small electrical conductivity, although not applicable to aluminum with a large electrical conductivity. PMID:28788395
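The volume-averaged method mentioned above reduces, in its simplest form, to a volume-fraction-weighted mean of the constituent material constants. This is only that arithmetic rule, not the paper's three-dimensional electromagnetic computation; the permittivity values in the test are hypothetical.

```python
def volume_averaged_eps(eps_inclusion, eps_host, volume_fraction):
    """Equivalent permittivity of a two-phase composite by simple
    volume averaging of inclusion and host permittivities."""
    return volume_fraction * eps_inclusion + (1.0 - volume_fraction) * eps_host
```

As the abstract notes, such homogenized constants are only trustworthy when the micro-model and macro-model express the same phenomena, which fails for highly conductive inclusions with strong eddy currents.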

  15. High precision test of the equivalence principle

    NASA Astrophysics Data System (ADS)

    Schlamminger, Stephan; Wagner, Todd; Choi, Ki-Young; Gundlach, Jens; Adelberger, Eric

    2007-05-01

The equivalence principle is the underlying foundation of General Relativity. Many modern quantum theories of gravity predict violations of the equivalence principle. We are using a rotating torsion balance to search for a new equivalence-principle-violating, long range interaction. A sensitive torsion balance is mounted on a turntable rotating with constant angular velocity. On the torsion pendulum, beryllium and titanium test bodies are installed in a composition dipole configuration. A violation of the equivalence principle would lead to a differential acceleration of the two materials towards a source mass. I will present measurements with a differential acceleration sensitivity of 3x10^-15 m/s^2.

  16. Skyshine at neutron energies less than or equal to 400 MeV

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alsmiller, A.G. Jr.; Barish, J.; Childs, R.L.

    1980-10-01

The dose equivalent at an air-ground interface as a function of distance from an assumed azimuthally symmetric point source of neutrons can be calculated as a double integral. The integration is over the source strength as a function of energy and polar angle, weighted by an importance function that depends on the source variables and on the distance from the source to the field point. The neutron importance function for a source 15 m above the ground emitting only into the upper hemisphere has been calculated using the two-dimensional discrete ordinates code, DOT, and the first collision source code, GRTUNCL, in the adjoint mode. This importance function is presented for neutron energies less than or equal to 400 MeV, for source cosine intervals of 1 to 0.8, 0.8 to 0.6, 0.6 to 0.4, 0.4 to 0.2, and 0.2 to 0, and for various distances from the source to the field point. As part of the adjoint calculations a photon importance function is also obtained. This importance function for photon energies less than or equal to 14 MeV and for various source cosine intervals and source-to-field point distances is also presented. These importance functions may be used to obtain skyshine dose equivalent estimates for any known source energy-angle distribution.
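In discretized form, the double integral described above becomes a double sum over energy bins and cosine intervals of source strength times the tabulated importance function. The sketch below shows only that folding step; the tabulated values in the test are hypothetical, not data from DOT/GRTUNCL.

```python
def skyshine_dose(source, importance):
    """Fold a binned source with a tabulated importance function.

    source[e][a]: source neutrons in energy bin e and cosine interval a.
    importance[e][a]: dose equivalent per source neutron at the field point.
    Returns the skyshine dose equivalent at that field point."""
    return sum(s * i
               for s_row, i_row in zip(source, importance)
               for s, i in zip(s_row, i_row))
```

A separate importance table would be folded in for each source-to-field-point distance of interest.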

  17. Lineal energy calibration of mini tissue-equivalent gas-proportional counters (TEPC)

    NASA Astrophysics Data System (ADS)

    Conte, V.; Moro, D.; Grosswendt, B.; Colautti, P.

    2013-07-01

Mini TEPCs are cylindrical gas proportional counters with a sensitive volume diameter of 1 mm or less. The lineal energy calibration of these tiny counters can be performed with an external gamma-ray source. However, to do that, a method to obtain a simple and precise spectral mark must first be found, and then the keV/μm value of that mark. A precise method (less than 1% uncertainty) to identify this mark is described here, and the lineal energy value of the mark has been measured for different simulated site sizes by using a 137Cs gamma source and a cylindrical TEPC equipped with a precision internal 244Cm alpha-particle source and filled with a propane-based tissue-equivalent gas mixture. Mini TEPCs can be calibrated in terms of lineal energy, by exposing them to 137Cs sources, with an overall uncertainty of about 5%.

  18. Urea and urine are a viable and cost-effective nitrogen source for Yarrowia lipolytica biomass and lipid accumulation.

    PubMed

    Brabender, Matthew; Hussain, Murtaza Shabbir; Rodriguez, Gabriel; Blenner, Mark A

    2018-03-01

    Yarrowia lipolytica is an industrial yeast that has been used in the sustainable production of fatty acid-derived and lipid compounds due to its high growth capacity, genetic tractability, and oleaginous properties. This investigation examines the possibility of utilizing urea or urine as an alternative to ammonium sulfate as a nitrogen source to culture Y. lipolytica. The use of a stoichiometrically equivalent concentration of urea in lieu of ammonium sulfate significantly increased cell growth when glucose was used as the carbon source. Furthermore, Y. lipolytica growth was equally improved when grown with synthetic urine and real human urine. Equivalent or better lipid production was achieved when cells are grown on urea or urine. The successful use of urea and urine as nitrogen sources for Y. lipolytica growth highlights the potential of using cheaper media components as well as exploiting and recycling non-treated human waste streams for biotechnology processes.

  19. New equivalent-electrical circuit model and a practical measurement method for human body impedance.

    PubMed

    Chinen, Koyu; Kinjo, Ichiko; Zamami, Aki; Irei, Kotoyo; Nagayama, Kanako

    2015-01-01

Human body impedance analysis is an effective tool to extract electrical information from tissues in the human body. This paper presents a new method for measuring impedance using armpit electrodes and a new equivalent circuit model for the human body. The lowest impedance was measured by using an LCR meter and six electrodes, including the armpit electrodes. The electrical equivalent circuit model for the cell consists of a resistance R and a capacitance C. The R represents the electrical resistance of the liquid inside and outside the cell, and the C represents the high-frequency conductance of the cell membrane. We propose an equivalent circuit model which consists of five parallel high-frequency-passing CR circuits. The proposed equivalent circuit reproduces the alpha dispersion in the impedance measured at a lower frequency range, due to ion current outside the cell, and the beta dispersion at a high frequency range, due to the cell membrane and the liquid inside the cell. The values calculated by using the proposed equivalent circuit model were consistent with the measured values for the human body impedance.
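Reading the description above as five parallel branches, each a capacitor in series with a resistor (so each branch passes high frequencies), the total impedance is the parallel combination of the branch impedances. That topology reading and all component values below are assumptions for illustration, not the paper's fitted model.

```python
import cmath

def body_impedance(freq_hz, branches):
    """Complex impedance of parallel series-CR branches.

    branches: list of (R_ohms, C_farads), one pair per branch.
    Each branch impedance is R + 1/(j*w*C); branch admittances add."""
    w = 2.0 * cmath.pi * freq_hz
    admittance = sum(1.0 / (R + 1.0 / (1j * w * C)) for R, C in branches)
    return 1.0 / admittance
```

At high frequency the capacitors approach short circuits, so the total impedance tends to the parallel combination of the resistances alone, which is the expected high-frequency-passing behavior.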

  20. Can the Equivalent Sphere Model Approximate Organ Doses in Space?

    NASA Technical Reports Server (NTRS)

    Lin, Zi-Wei

    2007-01-01

For space radiation protection it is often useful to calculate dose or dose equivalent in blood forming organs (BFO). It has been customary to use a 5 cm equivalent sphere to simulate the BFO dose. However, many previous studies have concluded that a 5 cm sphere gives very different dose values from the exact BFO values. One study [1] concludes that a 9 cm sphere is a reasonable approximation for BFO doses in solar particle event environments. In this study we use a deterministic radiation transport code [2] to investigate the reason behind these observations and to extend earlier studies. We take different space radiation environments, including seven galactic cosmic ray environments and six large solar particle events, and calculate the dose and dose equivalent in the skin, eyes and BFO using their thickness distribution functions from the CAM (Computerized Anatomical Man) model [3]. The organ doses have been evaluated with a water or aluminum shielding of an areal density from 0 to 20 g/sq cm. We then compare with results from the equivalent sphere model and determine in which cases and at what radius parameters the equivalent sphere model is a reasonable approximation. Furthermore, we address why the equivalent sphere model is not a good approximation in some cases. For solar particle events, we find that the radius parameters for the organ dose equivalent increase significantly with the shielding thickness, and the model works marginally for BFO but is unacceptable for the eye or the skin. For galactic cosmic ray environments, the equivalent sphere model with an organ-specific constant radius parameter works well for the BFO dose equivalent, marginally well for the BFO dose and the dose equivalent of the eye or the skin, but is unacceptable for the dose of the eye or the skin.
The ranges of the radius parameters have also been investigated, and the BFO radius parameters are found to be significantly larger than 5 cm in all cases, consistent with the conclusion of an earlier study [1]. The radius parameters for the dose equivalent in GCR environments are approximately between 10 and 11 cm for the BFO, 3.7 to 4.8 cm for the eye, and 3.5 to 5.6 cm for the skin, while the radius parameters are between 10 and 13 cm for the BFO dose.

  1. Technical note: Equivalent genomic models with a residual polygenic effect.

    PubMed

    Liu, Z; Goddard, M E; Hayes, B J; Reinhardt, F; Reents, R

    2016-03-01

Routine genomic evaluations in animal breeding are usually based on either a BLUP with genomic relationship matrix (GBLUP) or a single nucleotide polymorphism (SNP) BLUP model. For a multi-step genomic evaluation, these 2 alternative genomic models were proven to give equivalent predictions for genomic reference animals. The model equivalence was verified also for young genotyped animals without phenotypes. Due to incomplete linkage disequilibrium of SNP markers to genes or causal mutations responsible for genetic inheritance of quantitative traits, SNP markers cannot explain all the genetic variance. A residual polygenic effect is normally fitted in the genomic model to account for the incomplete linkage disequilibrium. In this study, we first prove that the multi-step GBLUP and SNP BLUP models are equivalent for the reference animals when they have a residual polygenic effect included. Second, the equivalence of both multi-step genomic models with a residual polygenic effect was also verified for young genotyped animals without phenotypes. Additionally, we derived formulas to convert genomic estimated breeding values of the GBLUP model to its components, direct genomic values and residual polygenic effect. Third, we prove that the equivalence of these 2 genomic models with a residual polygenic effect holds also for single-step genomic evaluation. Both the single-step GBLUP and SNP BLUP models lead to equal predictions for genotyped animals with phenotypes (e.g., reference animals), as well as for (young) genotyped animals without phenotypes. Finally, these 2 single-step genomic models with a residual polygenic effect were proven to be equivalent for estimation of SNP effects, too. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
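The basic GBLUP / SNP BLUP equivalence for reference animals can be demonstrated on toy data: with genomic relationship matrix G = ZZ', both models yield identical genomic predictions (a ridge-regression push-through identity). For brevity this sketch omits the residual polygenic effect that is the paper's focus; all data are simulated and hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
n_animals, n_snps, lam = 10, 50, 2.0

Z = rng.standard_normal((n_animals, n_snps))   # centred SNP genotypes (toy)
y = rng.standard_normal(n_animals)             # phenotypes (toy)

# SNP BLUP: estimate SNP effects by ridge regression, then sum them
# across the genome into genomic breeding values.
snp_effects = np.linalg.solve(Z.T @ Z + lam * np.eye(n_snps), Z.T @ y)
gebv_snp = Z @ snp_effects

# GBLUP: predict directly from the genomic relationship matrix G = Z Z'.
G = Z @ Z.T
gebv_gblup = G @ np.linalg.solve(G + lam * np.eye(n_animals), y)
```

The two predictions agree because (ZZ' + λI)Z = Z(Z'Z + λI), so Z(Z'Z + λI)⁻¹Z'y = ZZ'(ZZ' + λI)⁻¹y.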

  2. Open source data assimilation framework for hydrological modeling

    NASA Astrophysics Data System (ADS)

    Ridler, Marc; Hummel, Stef; van Velzen, Nils; Katrine Falk, Anne; Madsen, Henrik

    2013-04-01

An open-source data assimilation framework is proposed for hydrological modeling. Data assimilation (DA) in hydrodynamic and hydrological forecasting systems has great potential to improve predictions. The basic principle is to incorporate measurement information into a model with the aim of improving model results by error minimization. Great strides have been made to assimilate traditional in-situ measurements such as discharge, soil moisture, hydraulic head and snowpack into hydrologic models. More recently, remotely sensed data retrievals of soil moisture, snow water equivalent or snow cover area, surface water elevation, terrestrial water storage and land surface temperature have been successfully assimilated in hydrological models. The assimilation algorithms have become increasingly sophisticated to manage measurement and model bias, non-linear systems, data sparsity (time & space) and undetermined system uncertainty. It is therefore useful to use a pre-existing DA toolbox such as OpenDA. OpenDA is an open interface standard for (and free implementation of) a set of tools to quickly implement DA and calibration for arbitrary numerical models. The basic design philosophy of OpenDA is to break down DA into a set of building blocks programmed in object oriented languages. To implement DA, a model must interact with OpenDA to create model instances, propagate the model, get/set variables (or parameters) and free the model once DA is completed. An open-source interface for hydrological models exists that is capable of all these tasks: OpenMI. OpenMI is an open source standard interface already adopted by key hydrological model providers. It defines a universal approach for interacting with hydrological models during simulation to exchange data during runtime, thus facilitating the interactions between models and data sources.
The interface is flexible enough that models can interact even if they are coded in different languages, represent processes from different domains, or have different spatial and temporal resolutions. An open source framework that bridges OpenMI and OpenDA is presented. The framework provides a generic and easy means for any OpenMI compliant model to assimilate observation measurements. An example test case will be presented using MikeSHE, an OpenMI compliant, fully coupled integrated hydrological model that can accurately simulate the feedback dynamics of overland flow, the unsaturated zone and the saturated zone.
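The error-minimization principle behind DA can be sketched with the simplest possible case: a scalar Kalman-style update that blends a model forecast with an observation, weighted by their error variances. This is a generic illustration of the principle only, not OpenDA's API or the framework's algorithm.

```python
def analysis_update(forecast, obs, var_forecast, var_obs):
    """One scalar analysis step: blend forecast and observation.

    The Kalman gain weights the observation by the relative size of the
    forecast error variance; the analysis variance is always reduced."""
    gain = var_forecast / (var_forecast + var_obs)
    state = forecast + gain * (obs - forecast)
    var = (1.0 - gain) * var_forecast
    return state, var
```

With equal forecast and observation variances the analysis lands exactly halfway between the two, and the uncertainty is halved, which matches the intuition that two equally trustworthy estimates average out.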

  3. Equivalent mechanical model of large-amplitude liquid sloshing under time-dependent lateral excitations in low-gravity conditions

    NASA Astrophysics Data System (ADS)

    Nan, Miao; Junfeng, Li; Tianshu, Wang

    2017-01-01

    Subjected to external lateral excitations, large-amplitude sloshing may take place in propellant tanks, especially for spacecraft in low-gravity conditions, such as landers in the process of hover and obstacle avoidance during lunar soft landing. Due to lateral force of the order of gravity in magnitude, the amplitude of liquid sloshing becomes too big for the traditional equivalent model to be accurate. Therefore, a new equivalent mechanical model, denominated the "composite model", that can address large-amplitude lateral sloshing in partially filled spherical tanks is established in this paper, with both translational and rotational excitations considered. The hypothesis of liquid equilibrium position following equivalent gravity is first proposed. By decomposing the large-amplitude motion of a liquid into bulk motion following the equivalent gravity and additional small-amplitude sloshing, a better simulation of large-amplitude liquid sloshing is presented. The effectiveness and accuracy of the model are verified by comparing the slosh forces and moments to results of the traditional model and CFD software.
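The hypothesis that the liquid equilibrium position follows the equivalent gravity can be illustrated with a small sketch: under a lateral acceleration, the equilibrium free surface tilts to stay normal to the equivalent gravity vector. This geometric relation is an illustration of the hypothesis only, not the paper's composite model; the values in the test are hypothetical.

```python
import math

def equilibrium_tilt_deg(gravity, lateral_accel):
    """Tilt of the equilibrium free surface from horizontal (degrees),
    for gravity acting downward and a lateral acceleration of the tank:
    the surface stays normal to the equivalent gravity g_eq."""
    return math.degrees(math.atan2(lateral_accel, gravity))
```

When the lateral force is of the order of gravity, as in lunar-landing hover, the tilt approaches 45 degrees or more, which is why small-amplitude slosh models break down and the bulk-motion decomposition above is needed.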

  4. A note on the revised galactic neutron spectrum of the Ames collaborative study

    NASA Technical Reports Server (NTRS)

    Schaefer, H. J.

    1980-01-01

Energy distributions of the neutron dose equivalents in the 0.1 to 300 MeV interval for the Ames and Hess spectra are compared. The Ames spectrum shows no evaporation peak, moves the bulk of the flux away from the region of elastic collision and spreads it more evenly over higher energies. The neutron spectrum in space does not seem to bear out the Ames model. Emulsion findings on all manned missions of the past consistently indicate that evaporation events are a prolific source of neutrons in space.

  5. Modeling of Imploded Annular Plasmas.

    DTIC Science & Technology

    1981-05-01

Magnetic field penetration into the imploding, time-varying plasma influences the thickness of the current-carrying region and the ratio of classical (core) resistivity. Using the emissivity ε(n, Te) as a source term, the net radiative loss from the plasma is computed and compared with Prad from an equivalent black body of the same size.

  6. MEMS 3-DoF gyroscope design, modeling and simulation through equivalent circuit lumped parameter model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mian, Muhammad Umer, E-mail: umermian@gmail.com; Khir, M. H. Md.; Tang, T. B.

Pre-fabrication behavioural and performance analysis with computer-aided design (CAD) tools is a common and cost-effective practice. In light of this we present a simulation methodology for a dual-mass-oscillator-based 3 Degree of Freedom (3-DoF) MEMS gyroscope. The 3-DoF gyroscope is modeled through lumped parameter models using equivalent circuit elements. These equivalent circuits consist of elementary components which are counterparts of the respective mechanical components used to design and fabricate the 3-DoF MEMS gyroscope. The complete design of the equivalent circuit model, the mathematical modeling and the simulation are presented in this paper. Behaviors of the equivalent lumped models derived for the proposed device design are simulated in MEMSPRO T-SPICE software. Simulations are carried out with design specifications following the design rules of the MetalMUMPS fabrication process. Drive mass resonant frequencies simulated by this technique are 1.59 kHz and 2.05 kHz, respectively, which are close to the resonant frequencies found by the analytical formulation of the gyroscope. The lumped equivalent circuit modeling technique proved to be a time-efficient technique for the analysis of complex MEMS devices like 3-DoF gyroscopes, and an alternative approach to the complex and time-consuming coupled-field Finite Element Analysis (FEA) used previously.
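The analytical check mentioned above rests on the lumped mass-spring relation f = √(k/m) / 2π; in the electrical equivalent, mass maps to inductance and compliance to capacitance, giving the same resonance. The sketch below evaluates only that relation, with hypothetical stiffness and mass values, not the MetalMUMPS design parameters.

```python
import math

def resonant_frequency(stiffness_n_per_m, mass_kg):
    """Undamped natural frequency (Hz) of a lumped mass-spring stage,
    f = sqrt(k / m) / (2 * pi)."""
    return math.sqrt(stiffness_n_per_m / mass_kg) / (2.0 * math.pi)
```

Comparing this hand calculation against the T-SPICE simulation of the equivalent circuit is exactly the kind of cross-check the abstract reports for the two drive-mode frequencies.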

  7. Make Your Own Paint Chart: A Realistic Context for Developing Proportional Reasoning with Ratios

    ERIC Educational Resources Information Center

    Beswick, Kim

    2011-01-01

    Proportional reasoning has been recognised as a crucial focus of mathematics in the middle years and also as a frequent source of difficulty for students (Lamon, 2007). Proportional reasoning concerns the equivalence of pairs of quantities that are related multiplicatively; that is, equivalent ratios including those expressed as fractions and…

  8. A Complete Multimode Equivalent-Circuit Theory for Electrical Design

    PubMed Central

    Williams, Dylan F.; Hayden, Leonard A.; Marks, Roger B.

    1997-01-01

    This work presents a complete equivalent-circuit theory for lossy multimode transmission lines. Its voltages and currents are based on general linear combinations of standard normalized modal voltages and currents. The theory includes new expressions for transmission line impedance matrices, symmetry and lossless conditions, source representations, and the thermal noise of passive multiports. PMID:27805153

  9. Equivalent modeling of PMSG-based wind power plants considering LVRT capabilities: electromechanical transients in power systems.

    PubMed

    Ding, Ming; Zhu, Qianlong

    2016-01-01

    Hardware protection and control action are two kinds of low-voltage ride-through (LVRT) technical proposals widely used in permanent magnet synchronous generators (PMSGs). This paper proposes an innovative clustering concept for the equivalent modeling of a PMSG-based wind power plant (WPP), in which the impacts of both the chopper protection and the coordinated control of active and reactive powers are taken into account. First, the post-fault DC link voltage is selected as a concentrated expression of unit parameters, incoming wind and electrical distance to a fault point, to reflect the transient characteristics of PMSGs. Next, we provide an effective method for calculating the post-fault DC link voltage based on the pre-fault wind energy and the terminal voltage dip. Third, PMSGs are divided into groups by analyzing the calculated DC link voltages, without any clustering algorithm. Finally, the PMSGs of each group are aggregated into one rescaled PMSG to realize the transient equivalent modeling of the PMSG-based WPP. Using the DIgSILENT PowerFactory simulation platform, the efficiency and accuracy of the proposed equivalent model are tested against the traditional equivalent WPP and the detailed WPP. The simulation results show the proposed equivalent model can be used to analyze offline electromechanical transients in power systems.
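
    The grouping step can be sketched as follows. The energy-balance expression for the post-fault DC link voltage is a simplified stand-in for the paper's method, and all unit parameters are hypothetical; the point is that units with similar computed voltages fall into the same bin without any clustering algorithm:

```python
import math

def post_fault_dc_voltage(v_dc0, p_wind, p_grid, dt, c_dc):
    """Illustrative energy balance: surplus power charges the DC-link capacitor
    during the fault, 0.5*C*(V^2 - V0^2) = (P_wind - P_grid)*dt.
    A simplified stand-in, not the paper's exact expression."""
    return math.sqrt(v_dc0**2 + 2.0 * (p_wind - p_grid) * dt / c_dc)

def group_by_voltage(voltages, bin_width=0.05):
    """Units whose computed DC voltages (p.u.) fall in the same bin form one group."""
    groups = {}
    for i, v in enumerate(voltages):
        groups.setdefault(round(v / bin_width), []).append(i)
    return list(groups.values())

# Hypothetical fleet: (pre-fault DC voltage, pre-fault wind power, fault-on grid power), all p.u.
units = [(1.0, 0.9, 0.2), (1.0, 0.88, 0.22), (1.0, 0.4, 0.3)]
vdc = [post_fault_dc_voltage(v0, pw, pg, dt=0.1, c_dc=1.0) for v0, pw, pg in units]
print(group_by_voltage(vdc))
```

    The first two units see a similar power surplus during the fault and land in one group; the lightly loaded third unit forms its own.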

  10. Mechanistic equivalent circuit modelling of a commercial polymer electrolyte membrane fuel cell

    NASA Astrophysics Data System (ADS)

    Giner-Sanz, J. J.; Ortega, E. M.; Pérez-Herranz, V.

    2018-03-01

    Electrochemical impedance spectroscopy (EIS) has been widely used in the fuel cell field, since it allows deconvolving the different physicochemical processes that affect fuel cell performance. Typically, EIS spectra are modelled using electric equivalent circuits. In this work, EIS spectra of an individual cell of a commercial PEM fuel cell stack were obtained experimentally. The goal was to obtain a mechanistic electric equivalent circuit to model the experimental EIS spectra. Mechanistic electric equivalent circuit modelling is a semiempirical technique based on obtaining an equivalent circuit that not only correctly fits the experimental spectra, but whose elements have a mechanistic physical meaning. In order to obtain such a circuit, 12 different models with defined physical meanings were proposed, and these equivalent circuits were fitted to the obtained EIS spectra. A two-step selection process was performed. In the first step, a group of 4 circuits was preselected out of the initial list of 12, based on general fitting indicators such as the determination coefficient and the fitted-parameter uncertainty. In the second step, one of the 4 preselected circuits was selected on account of the consistency of the fitted parameter values with the physical meaning of each parameter.
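
    As an illustration of why fitted elements can carry physical meaning, consider the simplest mechanistic circuit of this kind: a series resistance with a parallel resistance-capacitance branch (a Randles-type cell; the component values below are hypothetical, not one of the paper's 12 candidates):

```python
import math

def randles_impedance(f_hz, r_s, r_ct, c_dl):
    """Impedance of a minimal mechanistic equivalent circuit: series resistance
    R_s plus a charge-transfer resistance R_ct in parallel with a double-layer
    capacitance C_dl."""
    jw = 1j * 2.0 * math.pi * f_hz
    z_par = r_ct / (1.0 + jw * r_ct * c_dl)
    return r_s + z_par

# High-frequency limit -> R_s; low-frequency limit -> R_s + R_ct.
# This is how each fitted element acquires a physical interpretation.
z_hi = randles_impedance(1e6, r_s=0.01, r_ct=0.1, c_dl=0.5)
z_lo = randles_impedance(1e-3, r_s=0.01, r_ct=0.1, c_dl=0.5)
print(abs(z_hi), abs(z_lo))
```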

  11. Tc(VII) and Cr(VI) Interaction with Naturally Reduced Ferruginous Smectite from a Redox Transition Zone

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qafoku, Odeta; Pearce, Carolyn I.; Neumann, Anke

    Fe(II)-rich clay minerals found in subsurface redox transition zones (RTZs) can serve as an important source of electron equivalents limiting the transport of redox-active contaminants. While most laboratory reactivity studies are based on reduced model clays, the reactivity of naturally reduced clays in field samples remains poorly explored. Characterization of the clay size fraction of a fine-grained unit from the RTZ interface at the Hanford site, Washington, including mineralogy, crystal chemistry, and Fe(II)/(III) content, indicates that ferruginous montmorillonite is the dominant mineralogical component. Oxic and anoxic fractions differ significantly in Fe(II) concentration, but FeTOTAL remains constant, demonstrating no Fe loss during reduction-oxidation cycling. At its native pH of 8.6, the anoxic fraction, despite its significant Fe(II) (~23% of FeTOTAL), exhibits minimal reactivity with TcO4- and CrO42- and much slower reaction kinetics than those measured in studies with biologically/chemically reduced model clays. Reduction capacity is enhanced by added Fe(II) (if Fe(II)SORBED > 8% clay Fe(II)LABILE); however, the kinetics of this conceptually surface-mediated reaction remain sluggish. Surface-sensitive Fe L-edge X-ray absorption spectroscopy shows that Fe(II)SORBED and the resulting reducing equivalents are not available in the outermost few nanometers of clay surfaces. Slow kinetics thus appear related to diffusion-limited access to electron equivalents retained within the clay mineral.

  12. Net emissions of CH4 and CO2 in Alaska: Implications for the region's greenhouse gas budget

    USGS Publications Warehouse

    Zhuang, Q.; Melillo, J.M.; McGuire, A.D.; Kicklighter, D.W.; Prinn, R.G.; Steudler, P.A.; Felzer, B.S.; Hu, S.

    2007-01-01

    We used a biogeochemistry model, the Terrestrial Ecosystem Model (TEM), to study the net methane (CH4) fluxes between Alaskan ecosystems and the atmosphere. We estimated that the current net emissions of CH4 (emissions minus consumption) from Alaskan soils are ~3 Tg CH4/yr. Wet tundra ecosystems are responsible for 75% of the region's net emissions, while dry tundra and upland boreal forests are responsible for 50% and 45% of total consumption over the region, respectively. In response to climate change over the 21st century, our simulations indicated that CH4 emissions from wet soils would be enhanced more than consumption by dry soils of tundra and boreal forests. As a consequence, we projected that net CH4 emissions will almost double by the end of the century in response to high-latitude warming and associated climate changes. When we placed these CH4 emissions in the context of the projected carbon budget (carbon dioxide [CO2] and CH4) for Alaska at the end of the 21st century, we estimated that Alaska will be a net source of greenhouse gases to the atmosphere of 69 Tg CO2 equivalents/yr, that is, a balance between net methane emissions of 131 Tg CO2 equivalents/yr and carbon sequestration of 17 Tg C/yr (62 Tg CO2 equivalents/yr). © 2007 by the Ecological Society of America.
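
    The abstract's bookkeeping can be checked directly: converting the carbon sink to CO2 equivalents with the molar-mass ratio 44/12 and subtracting it from the methane source reproduces the stated net balance:

```python
# Check the abstract's greenhouse-gas bookkeeping for Alaska.
C_TO_CO2 = 44.0 / 12.0          # molar-mass ratio CO2/C

ch4_emissions_co2eq = 131.0     # Tg CO2-eq/yr, from the abstract
c_sequestration = 17.0          # Tg C/yr, from the abstract

sink_co2eq = c_sequestration * C_TO_CO2   # ~62 Tg CO2-eq/yr
net_source = ch4_emissions_co2eq - round(sink_co2eq)
print(round(sink_co2eq), net_source)
```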

  13. Tc(VII) and Cr(VI) Interaction with Naturally Reduced Ferruginous Smectite from a Redox Transition Zone.

    PubMed

    Qafoku, Odeta; Pearce, Carolyn I; Neumann, Anke; Kovarik, Libor; Zhu, Mengqiang; Ilton, Eugene S; Bowden, Mark E; Resch, Charles T; Arey, Bruce W; Arenholz, Elke; Felmy, Andrew R; Rosso, Kevin M

    2017-08-15

    Fe(II)-rich clay minerals found in subsurface redox transition zones (RTZs) can serve as important sources of electron equivalents limiting the transport of redox-active contaminants. While most laboratory reactivity studies are based on reduced model clays, the reactivity of naturally reduced field samples remains poorly explored. Characterization of the clay size fraction of a fine-grained unit from the RTZ interface at the Hanford site, Washington, including mineralogy, crystal chemistry, and Fe(II)/(III) content, indicates that ferruginous montmorillonite is the dominant mineralogical component. Oxic and anoxic fractions differ significantly in natural Fe(II) content, but FeTOTAL remains constant, demonstrating no Fe loss during reduction-oxidation cycling. At its native pH of 8.6, the anoxic fraction, despite its significant Fe(II) (∼23% of FeTOTAL), exhibits minimal reactivity with TcO4- and CrO42- and much slower reaction kinetics than those measured in studies with biologically/chemically reduced model clays. Reduction capacity is enhanced by added/sorbed Fe(II) (if Fe(II)SORBED > 8% clay Fe(II)LABILE); however, the kinetics of this conceptually surface-mediated reaction remain sluggish. Surface-sensitive Fe L-edge X-ray absorption spectroscopy shows that Fe(II)SORBED and the resulting reducing equivalents are not available in the outermost few nanometers of clay surfaces. Slow kinetics thus appear related to diffusion-limited access to electron equivalents retained within the clay mineral structure.

  14. Generalized cable equation model for myelinated nerve fiber.

    PubMed

    Einziger, Pinchas D; Livshitz, Leonid M; Mizrahi, Joseph

    2005-10-01

    Herein, the well-known cable equation for the nonmyelinated axon model is extended analytically to a myelinated axon formulation. The myelinated membrane conductivity is represented via a Fourier series expansion. The classical cable equation is thereby modified into a linear second-order ordinary differential equation with periodic coefficients, known as Hill's equation. The general internal source response, expressed via repeated convolutions, converges uniformly provided that the entire periodic membrane is passive. The solution can be interpreted as an extended source response in an equivalent nonmyelinated axon (i.e., the response is governed by the classical cable equation). The extended source consists of the original source and a novel activation function, replacing the periodic membrane in the myelinated axon model. Hill's equation is explicitly integrated for the specific choice of a piecewise-constant membrane conductivity profile, resulting in an explicit closed-form expression for the transmembrane potential in terms of trigonometric functions. The Floquet modes are recognized as the nerve fiber activation modes, which are conventionally associated with the nonlinear Hodgkin-Huxley formulation. They can also be incorporated in our linear model, provided that the periodic membrane point-wise passivity constraint is properly modified. Indeed, the modified condition, enforcing the passivity constraint on the average conductivity only, leads for the first time to the inclusion of the nerve fiber activation modes in our novel model. The validity of the generalized transmission-line and cable equation models for a myelinated nerve fiber is verified through a rigorous Green's function formulation and numerical simulations of the transmembrane potential induced in a three-dimensional myelinated cylindrical cell. It is shown that the dominant pole contribution of the exact modal expansion is the transmembrane potential solution of our generalized model.
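
    The reduction described above can be written schematically. The symbols below (axial resistance per unit length r_i, periodic membrane conductance g(x) with internodal period L, source current i_s) are generic stand-ins, not the paper's exact notation:

```latex
\frac{1}{r_i}\,\frac{d^{2}V}{dx^{2}} - g(x)\,V = -\,i_s(x),
\qquad
g(x) = g_0 + \sum_{n=1}^{\infty} g_n \cos\!\left(\frac{2\pi n x}{L}\right)
```

    Because the coefficient g(x) is periodic in x with period L, the homogeneous equation is of Hill type, and its Floquet modes play the role of the activation modes discussed in the abstract.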

  15. Impedance loading and radiation of finite aperture multipole sources in fluid filled boreholes

    NASA Astrophysics Data System (ADS)

    Geerits, Tim W.; Kranz, Burkhard

    2017-04-01

    In oil and gas exploration, finite-aperture multipole borehole acoustic sources are commonly used to excite borehole modes in a fluid-filled borehole surrounded by a (poro-)elastic formation. Due to the mutual interaction of the constituent sources and their immediate proximity to the formation, it has been unclear how and to what extent these effects influence radiator performance. We present a theory, based on the equivalent surface source formulation for fluid-solid systems, that incorporates these 'loading' effects and allows for swift computation of the multipole source dimensionless impedance, the associated radiator motion, and the resulting radiated wave field in the borehole fluid and formation. Dimensionless impedance results are verified through comparison with finite element modeling results for a logging-while-drilling tool submersed in an unbounded fluid and a logging-while-drilling tool submersed in a fluid-filled borehole surrounded by a fast and a slow formation. In all these cases we consider monopole, dipole and quadrupole excitation, as these cases are relevant to many borehole acoustic applications. Overall, we obtain very good agreement.

  16. Open-Source Software for Modeling of Nanoelectronic Devices

    NASA Technical Reports Server (NTRS)

    Oyafuso, Fabiano; Hua, Hook; Tisdale, Edwin; Hart, Don

    2004-01-01

    The Nanoelectronic Modeling 3-D (NEMO 3-D) computer program has been upgraded to open-source status through elimination of license-restricted components. The present version functions equivalently to the version reported in "Software for Numerical Modeling of Nanoelectronic Devices" (NPO-30520), NASA Tech Briefs, Vol. 27, No. 11 (November 2003), page 37. To recapitulate: NEMO 3-D performs numerical modeling of the electronic transport and structural properties of a semiconductor device that has overall dimensions of the order of tens of nanometers. The underlying mathematical model represents the quantum-mechanical behavior of the device resolved to the atomistic level of granularity. NEMO 3-D solves the applicable quantum matrix equation on a Beowulf-class cluster computer by use of a parallel-processing matrix vector multiplication algorithm coupled to a Lanczos and/or Rayleigh-Ritz algorithm that solves for eigenvalues. A prior upgrade of NEMO 3-D incorporated a capability for a strain treatment, parameterized for bulk material properties of GaAs and InAs, for two tight-binding submodels. NEMO 3-D has been demonstrated in atomistic analyses of effects of disorder in alloys and, in particular, in bulk In(x)Ga(1-x)As and in In(0.6)Ga(0.4)As quantum dots.

  17. Alternative Fuels Data Center: Maine Transportation Data for Alternative

    Science.gov Websites

    connect with other local stakeholders. Gasoline Diesel Natural Gas Transportation Fuel Consumption Source Renewable Power Plants 58 Renewable Power Plant Capacity (nameplate, MW) 984 Source: BioFuels Atlas from the $2.96/gallon $2.66/GGE Source: Average prices per gasoline gallon equivalent (GGE) for the New England

  18. Alternative Fuels Data Center: West Virginia Transportation Data for

    Science.gov Websites

    Transportation Fuel Consumption Source: State Energy Data System based on beta data converted to gasoline gallon (bbl/day) 20,000 Renewable Power Plants 13 Renewable Power Plant Capacity (nameplate, MW) 751 Source Source: Average prices per gasoline gallon equivalent (GGE) for the Lower Atlantic PADD from the

  19. Alternative Fuels Data Center: Hawaii Transportation Data for Alternative

    Science.gov Websites

    Diesel Natural Gas Transportation Fuel Consumption Source: State Energy Data System based on beta data Plant Capacity (nameplate, MW) 145 Source: BioFuels Atlas from the National Renewable Energy Laboratory $2.96/gallon $2.66/GGE Source: Average prices per gasoline gallon equivalent (GGE) for the West Coast

  20. Alternative Fuels Data Center: Oklahoma Transportation Data for Alternative

    Science.gov Websites

    Fuel Consumption Source: State Energy Data System based on beta data converted to gasoline gallon ) 2,573 Source: BioFuels Atlas from the National Renewable Energy Laboratory Case Studies Video thumbnail Source: Average prices per gasoline gallon equivalent (GGE) for the Midwest PADD from the Alternative

  1. Alternative Fuels Data Center: Nevada Transportation Data for Alternative

    Science.gov Websites

    . Gasoline Diesel Natural Gas Electricity Transportation Fuel Consumption Source: State Energy Data System Renewable Power Plant Capacity (nameplate, MW) 1,684 Source: BioFuels Atlas from the National Renewable Source: Average prices per gasoline gallon equivalent (GGE) for the West Coast PADD from the Alternative

  2. Alternative Fuels Data Center: Montana Transportation Data for Alternative

    Science.gov Websites

    . Gasoline Diesel Natural Gas Transportation Fuel Consumption Source: State Energy Data System based on beta Renewable Power Plant Capacity (nameplate, MW) 2,955 Source: BioFuels Atlas from the National Renewable /gallon $2.66/GGE Source: Average prices per gasoline gallon equivalent (GGE) for the Rocky Mountain PADD

  3. 40 CFR 63.55 - Maximum achievable control technology (MACT) determinations for affected sources subject to case...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 10 2013-07-01 2013-07-01 false Maximum achievable control technology (MACT) determinations for affected sources subject to case-by-case determination of equivalent emission... Requirements for Control Technology Determinations for Major Sources in Accordance With Clean Air Act Sections...

  4. 40 CFR 63.55 - Maximum achievable control technology (MACT) determinations for affected sources subject to case...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 10 2012-07-01 2012-07-01 false Maximum achievable control technology (MACT) determinations for affected sources subject to case-by-case determination of equivalent emission... Requirements for Control Technology Determinations for Major Sources in Accordance With Clean Air Act Sections...

  5. 40 CFR 63.55 - Maximum achievable control technology (MACT) determinations for affected sources subject to case...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 9 2011-07-01 2011-07-01 false Maximum achievable control technology (MACT) determinations for affected sources subject to case-by-case determination of equivalent emission... Requirements for Control Technology Determinations for Major Sources in Accordance With Clean Air Act Sections...

  6. 40 CFR 63.55 - Maximum achievable control technology (MACT) determinations for affected sources subject to case...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 10 2014-07-01 2014-07-01 false Maximum achievable control technology (MACT) determinations for affected sources subject to case-by-case determination of equivalent emission... Requirements for Control Technology Determinations for Major Sources in Accordance With Clean Air Act Sections...

  7. Qualification tests for {sup 192}Ir sealed sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Iancso, Georgeta, E-mail: georgetaiancso@yahoo.com; Iliescu, Elena, E-mail: georgetaiancso@yahoo.com; Iancu, Rodica, E-mail: georgetaiancso@yahoo.com

    This paper describes the results of qualification tests for ¹⁹²Ir sealed sources, carried out in the Testing and Nuclear Expertise Laboratory of the National Institute for Physics and Nuclear Engineering 'Horia Hulubei' (I.F.I.N.-HH), Romania. These sources were to be produced in I.F.I.N.-HH and were tested in order to obtain authorization from The National Commission for Nuclear Activities Control (CNCAN). The sources are used in gammagraphy procedures and in gamma-defectoscopy equipment. The tests, measurement methods and equipment used comply with CNCAN, IAEA and international quality standards and regulations. The qualification tests are: 1. Radiological tests and measurements: dose equivalent rate at 1 m; tightness; dose equivalent rate at the surface of the transport and storage container; external unfixed contamination of the container surface. 2. Mechanical and climatic tests: thermal shock; external pressure; mechanical shock; vibrations; boring; thermal conditions for storage and transportation. After passing all tests, the Radiological Security Authorization for producing the ¹⁹²Ir sealed sources was obtained. IFIN-HH can now meet demand for these sealed sources as the only manufacturer in Romania.

  8. 1-D/3-D geologic model of the Western Canada Sedimentary Basin

    USGS Publications Warehouse

    Higley, D.K.; Henry, M.; Roberts, L.N.R.; Steinshouer, D.W.

    2005-01-01

    The 3-D geologic model of the Western Canada Sedimentary Basin comprises 18 stacked intervals from the base of the Devonian Woodbend Group and age-equivalent formations to the ground surface; it includes an estimated thickness of eroded sediments based on 1-D burial history reconstructions for 33 wells across the study area. Each interval for the construction of the 3-D model was chosen on the basis of whether it is primarily composed of petroleum system elements of reservoir, hydrocarbon source, seal, overburden, or underburden strata, as well as the quality and areal distribution of well and other data. Preliminary results of the modeling support the following interpretations. Long-distance migration of hydrocarbons east of the Rocky Mountains is indicated by oil and gas accumulations in areas within which source rocks are thermally immature for oil and (or) gas. Petroleum systems in the basin are segmented by the northeast-trending Sweetgrass Arch; hydrocarbons west of the arch were from source rocks lying near or beneath the Rocky Mountains, whereas oil and gas east of the arch were sourced from the Williston Basin. Hydrocarbon generation and migration are primarily due to increased burial associated with the Laramide Orogeny. Hydrocarbon sources and migration were also influenced by the Lower Cretaceous sub-Mannville unconformity. In the Peace River Arch area of northern Alberta, Jurassic and older formations exhibit high-angle truncations against the unconformity. Potential Paleozoic through Mesozoic hydrocarbon source rocks are in contact with overlying Mannville Group reservoir facies. In contrast, in Saskatchewan and southern Alberta the contacts are parallel to sub-parallel, with the result that hydrocarbon source rocks are separated from the Mannville Group by seal-forming strata within the Jurassic.
Vertical and lateral movement of hydrocarbons along the faults in the Rocky Mountains deformed belt probably also resulted in mixing of oil and gas from numerous source rocks in Alberta.

  9. Modelling Ni-mH battery using Cauer and Foster structures

    NASA Astrophysics Data System (ADS)

    Kuhn, E.; Forgez, C.; Lagonotte, P.; Friedrich, G.

    This paper deals with dynamic models of the Ni-MH battery and focuses on the development of equivalent electric models. We propose two equivalent electric models, using Cauer and Foster structures, able to capture both the dynamic and energetic behavior of the battery. These structures are well adapted to real-time applications (e.g. battery management systems) and system simulations. Special attention is paid to the influence of the complexity of the equivalent electric scheme on the precision of the model. Experimental validations allow the performance of the proposed models to be assessed.
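
    For a single cell the two structures coincide algebraically: the Foster (partial-fraction) form R/(1 + jωRC) equals the Cauer (continued-fraction) form 1/(jωC + 1/R). A sketch with hypothetical component values:

```python
import math

def z_foster(w, r_s, r1, c1):
    """Foster form: series resistor plus one parallel R-C cell (partial fraction)."""
    return r_s + r1 / (1.0 + 1j * w * r1 * c1)

def z_cauer(w, r_s, r1, c1):
    """Cauer form: the same one-cell network written as a continued fraction."""
    return r_s + 1.0 / (1j * w * c1 + 1.0 / r1)

w = 2.0 * math.pi * 0.1          # 0.1 Hz, a typical battery-dynamics band
zf = z_foster(w, 0.01, 0.05, 100.0)
zc = z_cauer(w, 0.01, 0.05, 100.0)
print(abs(zf - zc) < 1e-9)
```

    With more cells the two forms remain equivalent but with different element values, related by partial-fraction versus continued-fraction expansion of the same rational impedance.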

  10. Equivalent circuit model of Ge/Si separate absorption charge multiplication avalanche photodiode

    NASA Astrophysics Data System (ADS)

    Wang, Wei; Chen, Ting; Yan, Linshu; Bao, Xiaoyuan; Xu, Yuanyuan; Wang, Guang; Wang, Guanyu; Yuan, Jun; Li, Junfeng

    2018-03-01

    An equivalent circuit model of the Ge/Si separate absorption charge multiplication avalanche photodiode (SACM-APD) is proposed. Starting from the carrier rate equations in the different regions of the device, and considering the influences of the non-uniform electric field, noise, parasitic effects and other factors, an equivalent circuit model of the SACM-APD device is established in which the steady-state and transient current-voltage characteristics can be described exactly. In addition, the proposed Ge/Si SACM-APD equivalent circuit model is embedded in the PSpice simulator. Important characteristics of the Ge/Si SACM-APD, such as dark current, frequency response and shot noise, are simulated; the results obtained with the proposed model are in good agreement with the experimental results.
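
    The avalanche gain such a circuit must reproduce is often approximated by the empirical Miller formula M = 1/(1 - (V/V_br)^n). This is a standard textbook relation, not the paper's rate-equation model, and the values below are hypothetical:

```python
def miller_gain(v: float, v_br: float, n: float) -> float:
    """Empirical Miller approximation for avalanche multiplication,
    M = 1 / (1 - (V/V_br)^n), valid for V below the breakdown voltage V_br.
    A standard approximation, not the paper's full circuit model."""
    return 1.0 / (1.0 - (v / v_br) ** n)

# Hypothetical bias point: 20 V bias, 25 V breakdown, exponent n = 3.
print(round(miller_gain(v=20.0, v_br=25.0, n=3.0), 2))
```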

  11. Thermal comparison of buried-heterostructure and shallow-ridge lasers

    NASA Astrophysics Data System (ADS)

    Rustichelli, V.; Lemaître, F.; Ambrosius, H. P. M. M.; Brenot, R.; Williams, K. A.

    2018-02-01

    We present finite difference thermal modeling to predict the temperature distribution, heat flux, and thermal resistance inside lasers with different waveguide geometries. We provide a quantitative experimental and theoretical comparison of the thermal behavior of shallow-ridge (SR) and buried-heterostructure (BH) lasers. We investigate the influence of a split heat source to describe p-layer Joule heating and nonradiative energy loss in the active layer, and the heat-sinking from the top as well as the bottom, when quantifying thermal impedance. From both measured values and numerical modeling we can quantify the thermal resistance for BH and SR lasers, showing an improved thermal performance from 50 K/W to 30 K/W for otherwise equivalent BH laser designs.

  12. Assessing Knowledge of Mathematical Equivalence: A Construct-Modeling Approach

    ERIC Educational Resources Information Center

    Rittle-Johnson, Bethany; Matthews, Percival G.; Taylor, Roger S.; McEldoon, Katherine L.

    2011-01-01

    Knowledge of mathematical equivalence, the principle that 2 sides of an equation represent the same value, is a foundational concept in algebra, and this knowledge develops throughout elementary and middle school. Using a construct-modeling approach, we developed an assessment of equivalence knowledge. Second through sixth graders (N = 175)…

  13. Improving gridded snow water equivalent products in British Columbia, Canada: multi-source data fusion by neural network models

    NASA Astrophysics Data System (ADS)

    Snauffer, Andrew M.; Hsieh, William W.; Cannon, Alex J.; Schnorbus, Markus A.

    2018-03-01

    Estimates of surface snow water equivalent (SWE) in mixed alpine environments with seasonal melts are particularly difficult in areas of high vegetation density, topographic relief, and snow accumulations. These three confounding factors dominate much of the province of British Columbia (BC), Canada. An artificial neural network (ANN) was created using as predictors six gridded SWE products previously evaluated for BC. Relevant spatiotemporal covariates were also included as predictors, and observations from manual snow surveys at stations located throughout BC were used as target data. Mean absolute errors (MAEs) and interannual correlations for April surveys were found using cross-validation. The ANN using the three best-performing SWE products (ANN3) had the lowest mean station MAE across the province. ANN3 outperformed each product as well as product means and multiple linear regression (MLR) models in all of BC's five physiographic regions except for the BC Plains. Subsequent comparisons with predictions generated by the Variable Infiltration Capacity (VIC) hydrologic model found ANN3 to better estimate SWE over the VIC domain and within most regions. The superior performance of ANN3 over the individual products, product means, MLR, and VIC was found to be statistically significant across the province.
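
    The evaluation metrics named above, station-level mean absolute error and interannual correlation, can be stated directly; the SWE values below are hypothetical:

```python
import math

def mae(pred, obs):
    """Mean absolute error, the station-level skill metric used for April surveys."""
    return sum(abs(p - o) for p, o in zip(pred, obs)) / len(obs)

def pearson_r(x, y):
    """Interannual (Pearson) correlation between predicted and observed SWE."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical SWE values (mm) at one station over five April surveys.
obs = [320.0, 280.0, 410.0, 150.0, 300.0]
pred = [300.0, 290.0, 390.0, 170.0, 280.0]
print(mae(pred, obs), round(pearson_r(pred, obs), 3))
```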

  14. Flexural bending of the Zagros foreland basin

    NASA Astrophysics Data System (ADS)

    Pirouz, Mortaza; Avouac, Jean-Philippe; Gualandi, Adriano; Hassanzadeh, Jamshid; Sternai, Pietro

    2017-09-01

    We constrain and model the geometry of the Zagros foreland to assess the equivalent elastic thickness of the northern edge of the Arabian plate and the loads that have originated from the Arabia-Eurasia collision. The Oligo-Miocene Asmari Formation and its equivalents in Iraq and Syria are used to estimate the post-collisional subsidence, as they separate passive margin sediments from the younger foreland deposits. The depth to these formations is obtained by synthesizing a large database of well logs, seismic profiles and structural sections from the Mesopotamian basin and the Persian Gulf. The foreland depth varies along strike of the Zagros wedge between 1 and 6 km. The foreland is deepest beneath the Dezful embayment, in southwest Iran, and becomes shallower towards both ends. We investigate how the geometry of the foreland relates to loading by the range topography based on simple flexural models. Deflection of the Arabian plate is modelled using a point-load distribution and a convolution technique. The results show that the foreland depth is well predicted with a flexural model which assumes loading by the basin sedimentary fill and the thickened crust of the Zagros. The model also predicts a Moho depth consistent with free-air anomalies over the foreland and Zagros wedge. The equivalent elastic thickness of the flexed Arabian lithosphere is estimated to be ca. 50 km. We conclude that other sources of loading of the lithosphere, either related to density variations (e.g. due to a possible lithospheric root) or of dynamic origin (e.g. due to sublithospheric mantle flow or lithospheric buckling), have a negligible influence on the foreland geometry, Moho depth and topography of the Zagros. We calculate the shortening across the Zagros assuming conservation of crustal mass during deformation, trapping of all the sediments eroded from the range in the foreland, and an initial crustal thickness of 38 km.
This calculation implies a minimum of 126 ± 18 km of crustal shortening due to ophiolite obduction and post-collisional shortening.
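
    The flexural calculation sketched above rests on the standard thin-elastic-plate relations; the form below is the textbook 1-D flexure equation, with symbols (flexural rigidity D, equivalent elastic thickness T_e, Young's modulus E, Poisson's ratio ν, mantle and infill densities) as generic stand-ins rather than the paper's notation:

```latex
D\,\frac{d^{4}w}{dx^{4}} + (\rho_m - \rho_{\mathrm{infill}})\,g\,w = q(x),
\qquad
D = \frac{E\,T_e^{3}}{12\,(1-\nu^{2})}
```

    The deflection w under the distributed topographic and sedimentary load q(x) is then obtained by convolving q with the point-load response of the plate, as in the abstract's point-load distribution and convolution technique.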

  15. The AMSR2 Satellite-based Microwave Snow Algorithm (SMSA) to estimate regional to global snow depth and snow water equivalent

    NASA Astrophysics Data System (ADS)

    Kelly, R. E. J.; Saberi, N.; Li, Q.

    2017-12-01

    With moderate to high spatial resolution (<1 km) regional to global snow water equivalent (SWE) observation approaches yet to be fully scoped and developed, the long-term satellite passive microwave record remains an important tool for cryosphere-climate diagnostics. A new satellite microwave remote sensing approach is described for estimating snow depth (SD) and snow water equivalent (SWE). The algorithm, called the Satellite-based Microwave Snow Algorithm (SMSA), uses Advanced Microwave Scanning Radiometer - 2 (AMSR2) observations aboard the Global Change Observation Mission - Water mission launched by the Japan Aerospace Exploration Agency in 2012. The approach is unique since it leverages observed brightness temperatures (Tb) with static ancillary data to parameterize a physically-based retrieval without requiring parameter constraints from in situ snow depth observations or historical snow depth climatology. After screening snow from non-snow surface targets (water bodies [including freeze/thaw state], rainfall, high altitude plateau regions [e.g. Tibetan plateau]), moderate and shallow snow depths are estimated by minimizing the difference between Dense Media Radiative Transfer model estimates (Tsang et al., 2000; Picard et al., 2011) and AMSR2 Tb observations to retrieve SWE and SD. Parameterization of the model combines a parsimonious snow grain size and density approach originally developed by Kelly et al. (2003). Evaluation of the SMSA performance is achieved using in situ snow depth data from a variety of standard and experiment data sources. Results presented from winter seasons 2012-13 to 2016-17 illustrate the improved performance of the new approach in comparison with the baseline AMSR2 algorithm estimates and approach the performance of the model assimilation-based approach of GlobSnow. 
Given the variation in estimation power of SWE by different land surface/climate models and selected satellite-derived passive microwave approaches, SMSA provides SWE estimates that are independent of real or near real-time in situ and model data.
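The retrieval step described in this record (minimizing the difference between forward-modeled and observed brightness temperatures) can be sketched as a one-dimensional grid search. The forward model below is a hypothetical stand-in for DMRT; its functional form and coefficients are illustrative, not taken from the SMSA algorithm.

```python
import numpy as np

def forward_tb(snow_depth_m, a=250.0, b=45.0):
    """Hypothetical stand-in for a DMRT forward model: brightness
    temperature (K) decreases with snow depth as volume scattering
    grows. Coefficients a and b are illustrative, not SMSA values."""
    return a - b * (1.0 - np.exp(-snow_depth_m))

def retrieve_depth(tb_obs, depths=np.linspace(0.0, 2.0, 2001)):
    """Grid-search retrieval: pick the candidate depth whose modeled
    Tb is closest to the observed Tb (cost = squared residual)."""
    cost = (forward_tb(depths) - tb_obs) ** 2
    return depths[np.argmin(cost)]

# Simulate an observation from a 0.8 m snowpack, then invert it:
tb = forward_tb(0.8)
depth_hat = retrieve_depth(tb)
```

SWE then follows from the retrieved depth and an assumed snowpack density (SWE = SD x rho_snow / rho_water).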

  16. Main Sources and Doses of Space Radiation during Mars Missions and Total Radiation Risk for Cosmonauts

    NASA Astrophysics Data System (ADS)

    Mitrikas, Victor; Aleksandr, Shafirkin; Shurshakov, Vyacheslav

This work contains calculated generalized doses and dose equivalents in critical organs and tissues of cosmonauts produced by galactic cosmic rays (GCR), solar cosmic rays (SCR) and the Earth’s radiation belts (ERB) that will impact crewmembers during a flight to Mars, while staying in the landing module and on the Martian surface, and during the return to Earth. Total radiation risk values over the cosmonauts' whole lifetimes after the flight are also presented. Radiation risk (RR) calculations are performed on the basis of a radiobiological model of radiation damage to living organisms, taking into account reparation processes acting during continuous long-term exposure at various dose rates and under acute recurrent radiation impact. The RR calculations are performed for crewmembers of various ages undertaking a 2-3 year flight to Mars at the maximum and minimum of the solar cycle. The total carcinogenic and non-carcinogenic RR and possible life-span shortening are estimated on the basis of a model of the radiation death probability for mammals. This model takes into account the decrease in the compensatory reserve of an organism as well as the increase in mortality rate and the decrease in the cosmonaut's subsequent lifetime. The analyzed dose distributions in the shielding and body areas are applied to model calculations for tissue-equivalent spherical and anthropomorphic phantoms.

  17. Numerical dissipation vs. subgrid-scale modelling for large eddy simulation

    NASA Astrophysics Data System (ADS)

    Dairay, Thibault; Lamballais, Eric; Laizet, Sylvain; Vassilicos, John Christos

    2017-05-01

This study presents an alternative way to perform large eddy simulation based on a targeted numerical dissipation introduced by the discretization of the viscous term. It is shown that this regularisation technique is equivalent to the use of spectral vanishing viscosity. The flexibility of the method ensures high-order accuracy while controlling the level and spectral features of this purely numerical viscosity. A Pao-like spectral closure based on physical arguments is used to scale this numerical viscosity a priori. It is shown that this way of approaching large eddy simulation is more efficient and accurate than the use of the very popular Smagorinsky model in both its standard and dynamic versions. The main strength of being able to correctly calibrate numerical dissipation is the possibility of regularising the solution at the mesh scale. Thanks to this property, it is shown that the solution can be seen as numerically converged. Conversely, the two versions of the Smagorinsky model are found to be unable to ensure regularisation while showing a strong sensitivity to numerical errors. The originality of the present approach is that it can be viewed as implicit large eddy simulation, in the sense that the numerical error is the source of artificial dissipation, but also as explicit subgrid-scale modelling, because of the equivalence with spectral viscosity prescribed on a physical basis.

  18. Discharge processes and an electrical model of atmospheric pressure plasma jets in argon

    NASA Astrophysics Data System (ADS)

    Fang, Zhi; Shao, Tao; Yang, Jing; Zhang, Cheng

    2016-01-01

In this paper, an atmospheric pressure plasma discharge in argon was generated using a needle-to-ring electrode configuration driven by a sinusoidal excitation voltage. The electric discharge processes and discharge characteristics were investigated by inspecting the voltage-current waveforms, Lissajous curves and light-emission images. The change in discharge mode with applied voltage amplitude was studied and characterised, and three modes, corona discharge, dielectric barrier discharge (DBD) and jet discharge, were identified; these appeared in turn with increasing applied voltage and can be distinguished clearly from the measured voltage-current waveforms, light-emission images and the changing gradient of discharge power with applied voltage. Based on the experimental results and discharge mechanism analysis, an equivalent electrical model and the corresponding equivalent circuit for accurately characterising the whole discharge process were proposed, and the three discharge stages were characterised separately. A voltage-controlled current source (VCCS) associated with a resistance and a capacitance was used to represent the DBD stage, and the plasma plume and corona discharge were modelled by a variable capacitor in series with a variable resistor. Other factors that can influence the discharge, such as lead and stray capacitance values of the circuit, were also considered in the proposed model. Contribution to the Topical Issue "Recent Breakthroughs in Microplasma Science and Technology", edited by Kurt Becker, Jose Lopez, David Staack, Klaus-Dieter Weltmann and Wei Dong Zhu.

  19. Constraining the Dust Opacity Law in Three Small and Isolated Molecular Clouds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Webb, K. A.; Thanjavur, K.; Di Francesco, J.

Density profiles of isolated cores derived from thermal dust continuum emission rely on models of dust properties, such as mass opacity, that are poorly constrained. With complementary measures from near-infrared extinction maps, we can assess the reliability of commonly used dust models. In this work, we compare Herschel-derived maps of the optical depth with equivalent maps derived from CFHT WIRCAM near-infrared observations for three isolated cores: CB 68, L 429, and L 1552. We assess the dust opacities provided from four models: OH1a, OH5a, Orm1, and Orm4. Although the consistency of the models differs between the three sources, the results suggest that the optical properties of dust in the envelopes of the cores are best described by either silicate and bare graphite grains (e.g., Orm1) or carbonaceous grains with some coagulation and either thin or no ice mantles (e.g., OH5a). None of the models, however, individually produced the most consistent optical depth maps for every source. The results suggest that either the dust in the cores is not well-described by any one dust property model, the application of the dust models cannot be extended beyond the very center of the cores, or more complex SED fitting functions are necessary.

  20. The carbon footprint of Australian ambulance operations.

    PubMed

    Brown, Lawrence H; Canyon, Deon V; Buettner, Petra G; Crawford, J Mac; Judd, Jenni

    2012-12-01

    To determine the greenhouse gas emissions associated with the energy consumption of Australian ambulance operations, and to identify the predominant energy sources that contribute to those emissions. A two-phase study of operational and financial data from a convenience sample of Australian ambulance operations to inventory their energy consumption and greenhouse gas emissions for 1 year. State- and territory-based ambulance systems serving 58% of Australia's population and performing 59% of Australia's ambulance responses provided data for the study. Emissions for the participating systems totalled 67 390 metric tons of carbon dioxide equivalents. For ground ambulance operations, emissions averaged 22 kg of carbon dioxide equivalents per ambulance response, 30 kg of carbon dioxide equivalents per patient transport and 3 kg of carbon dioxide equivalents per capita. Vehicle fuels accounted for 58% of the emissions from ground ambulance operations, with the remainder primarily attributable to electricity consumption. Emissions from air ambulance transport were nearly 200 times those for ground ambulance transport. On a national level, emissions from Australian ambulance operations are estimated to be between 110 000 and 120 000 tons of carbon dioxide equivalents each year. Vehicle fuels are the primary source of emissions for ground ambulance operations. Emissions from air ambulance transport are substantially higher than those for ground ambulance transport. © 2012 The Authors. EMA © 2012 Australasian College for Emergency Medicine and Australasian Society for Emergency Medicine.
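The per-response and per-transport intensities reported above are simple ratios of an emissions inventory to activity counts. A minimal sketch under stated assumptions: the diesel factor of 2.68 kg CO2e/L is a commonly cited default, while the grid factor and all activity figures are purely illustrative, not the study's data.

```python
DIESEL_KG_CO2E_PER_L = 2.68   # commonly cited default emission factor (assumption)
GRID_KG_CO2E_PER_KWH = 0.9    # illustrative grid electricity factor (assumption)

def fleet_emissions_kg(diesel_litres, electricity_kwh):
    """Inventory scope mirrors the study: vehicle fuel plus electricity."""
    return (diesel_litres * DIESEL_KG_CO2E_PER_L
            + electricity_kwh * GRID_KG_CO2E_PER_KWH)

def intensity_per_response(total_kg, responses):
    """kg CO2e per ambulance response."""
    return total_kg / responses

# Illustrative activity data only:
total = fleet_emissions_kg(diesel_litres=5_000_000, electricity_kwh=6_000_000)
per_response = intensity_per_response(total, responses=850_000)
```

With these placeholder inputs the intensity lands near the ~22 kg CO2e per response reported for ground operations, which is simply the inventory total divided by response volume.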

  1. Age-dependence of the average and equivalent refractive indices of the crystalline lens

    PubMed Central

    Charman, W. Neil; Atchison, David A.

    2013-01-01

    Lens average and equivalent refractive indices are required for purposes such as lens thickness estimation and optical modeling. We modeled the refractive index gradient as a power function of the normalized distance from lens center. Average index along the lens axis was estimated by integration. Equivalent index was estimated by raytracing through a model eye to establish ocular refraction, and then backward raytracing to determine the constant refractive index yielding the same refraction. Assuming center and edge indices remained constant with age, at 1.415 and 1.37 respectively, average axial refractive index increased (1.408 to 1.411) and equivalent index decreased (1.425 to 1.420) with age increase from 20 to 70 years. These values agree well with experimental estimates based on different techniques, although the latter show considerable scatter. The simple model of index gradient gives reasonable estimates of average and equivalent lens indices, although refinements in modeling and measurements are required. PMID:24466474
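The abstract's gradient model can be sketched directly: assume a power-law index profile n(rho) = n_c - (n_c - n_e) * rho^p with the center and edge values quoted above (1.415 and 1.37), and obtain the average axial index by integration. The exponents chosen below are illustrative, not the paper's fitted age-dependent values.

```python
import numpy as np

N_CENTER, N_EDGE = 1.415, 1.37   # center and edge indices quoted in the abstract

def index_profile(rho, p):
    """Power-law gradient: refractive index vs. normalized distance
    rho from lens center (0) to edge (1)."""
    return N_CENTER - (N_CENTER - N_EDGE) * rho ** p

def average_axial_index(p, n=200001):
    """Average index along the axis: trapezoidal integral of n(rho) on [0, 1]."""
    rho = np.linspace(0.0, 1.0, n)
    f = index_profile(rho, p)
    h = rho[1] - rho[0]
    return h * (f.sum() - 0.5 * (f[0] + f[-1]))

avg_young = average_axial_index(p=3.0)   # illustrative exponents
avg_old = average_axial_index(p=6.0)
```

Analytically the integral is n_c - (n_c - n_e)/(p + 1), so a larger exponent (a flatter central plateau, as in older lenses) pushes the average toward the center value, consistent with the reported increase from 1.408 to 1.411 with age.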

  2. In vitro assessment of thyroid hormone disrupting activities in drinking water sources along the Yangtze River.

    PubMed

    Hu, Xinxin; Shi, Wei; Zhang, Fengxian; Cao, Fu; Hu, Guanjiu; Hao, Yingqun; Wei, Si; Wang, Xinru; Yu, Hongxia

    2013-02-01

The thyroid hormone disrupting activities of drinking water sources from the lower reaches of the Yangtze River were examined using a reporter gene assay based on African green monkey kidney fibroblast (CV-1) cells. None of the eleven tested samples showed thyroid receptor (TR) agonist activity. Nine water samples exhibited TR antagonist activities, with equivalents referring to di-n-butyl phthalate (DNBP) (TR antagonist activity equivalents, ATR-EQ50s) ranging from 6.92 × 10^1 to 2.85 × 10^2 μg DNBP/L. The ATR-EQ50s and TR antagonist equivalent ranges (ATR-EQ30-80 ranges) for TR antagonist activities indicated that the water sample from site WX-8 posed the greatest health risks. The ATR-EQ80s of the water samples, ranging from 1.56 × 10^3 to 6.14 × 10^3 μg DNBP/L, were higher than the NOEC of DNBP. The results from instrumental analysis showed that DNBP might be responsible for the TR antagonist activities in these water samples. Water sources along the Yangtze River had thyroid hormone disrupting potential. Copyright © 2012 Elsevier Ltd. All rights reserved.

  3. A Probabilistic Tsunami Hazard Study of the Auckland Region, Part I: Propagation Modelling and Tsunami Hazard Assessment at the Shoreline

    NASA Astrophysics Data System (ADS)

    Power, William; Wang, Xiaoming; Lane, Emily; Gillibrand, Philip

    2013-09-01

    Regional source tsunamis represent a potentially devastating threat to coastal communities in New Zealand, yet are infrequent events for which little historical information is available. It is therefore essential to develop robust methods for quantitatively estimating the hazards posed, so that effective mitigation measures can be implemented. We develop a probabilistic model for the tsunami hazard posed to the Auckland region of New Zealand from the Kermadec Trench and the southern New Hebrides Trench subduction zones. An innovative feature of our model is the systematic analysis of uncertainty regarding the magnitude-frequency distribution of earthquakes in the source regions. The methodology is first used to estimate the tsunami hazard at the coastline, and then used to produce a set of scenarios that can be applied to produce probabilistic maps of tsunami inundation for the study region; the production of these maps is described in part II. We find that the 2,500 year return period regional source tsunami hazard for the densely populated east coast of Auckland is dominated by events originating in the Kermadec Trench, while the equivalent hazard to the sparsely populated west coast is approximately equally due to events on the Kermadec Trench and the southern New Hebrides Trench.

  4. Mission Connect Mild TBI Translational Research Consortium

    DTIC Science & Technology

    2009-08-01

and only minimally effective in-vivo. Initially, we identified carbon nanomaterials as potent antioxidants using a chemical ORAC assay. We...radical absorbance capacity (ORAC) assay using a chemical source for the oxygen radical (2,2′-azodiisobutyramidine dihydrochloride) (Table 1). We...nanomaterials determined with a chemically based ORAC assay.a Nanomaterial: p-SWCNT; Trolox Equivalents (TE): 14046; Trolox Mass Equivalents (TME): 5.02

  5. Development of a traffic noise prediction model for an urban environment.

    PubMed

    Sharma, Asheesh; Bodhe, G L; Schimak, G

    2014-01-01

The objective of this study is to develop a traffic noise model under diverse traffic conditions in metropolitan cities. The model has been developed to calculate equivalent traffic noise based on four input variables: equivalent traffic flow (Qe), equivalent vehicle speed (Se), distance (d) and honking (h). Traffic data were collected and statistically analyzed in three different cases for 15-min intervals during morning and evening rush hours. Case I represents congested traffic, where equivalent vehicle speed is <30 km/h; case II represents free-flowing traffic, where equivalent vehicle speed is >30 km/h; and case III represents calm traffic, where no honking is recorded. The noise model showed better results than earlier noise models developed for Indian traffic conditions; a comparative assessment between the present and earlier models is also presented in the study. The model is validated with measured noise levels, and the correlation coefficients between measured and predicted noise levels were found to be 0.75, 0.83 and 0.86 for cases I, II and III, respectively. The noise model performs reasonably well under different traffic conditions and could be implemented for traffic noise prediction in other regions as well.
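A model of this shape can be sketched as a regression on the four inputs. The functional form below (logarithmic in flow and distance, linear in speed and honking) and every coefficient are hypothetical placeholders, not the fitted model from this study:

```python
import math

def predicted_leq(q_e, s_e, d, h, coeffs=(55.0, 10.0, 0.12, 10.0, 2.0)):
    """Hypothetical equivalent-noise regression (dBA):
        Leq = c0 + c1*log10(Qe) + c2*Se - c3*log10(d) + c4*h
    q_e: equivalent traffic flow (veh/h), s_e: equivalent speed (km/h),
    d: source-receiver distance (m), h: honks per 15-min interval.
    Form and coefficients are illustrative placeholders only."""
    c0, c1, c2, c3, c4 = coeffs
    return c0 + c1 * math.log10(q_e) + c2 * s_e - c3 * math.log10(d) + c4 * h

leq_case1 = predicted_leq(q_e=1800, s_e=25, d=10, h=3)   # congested, with honking
leq_case3 = predicted_leq(q_e=600, s_e=40, d=10, h=0)    # calm, no honking
```

With such a form, case I (congested, honking) predicts a higher Leq than case III (calm), matching the qualitative behaviour described above.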

  6. Natural products, their derivatives, mimics and synthetic equivalents: role in agrochemical discovery.

    PubMed

    Sparks, Thomas C; Hahn, Donald R; Garizi, Negar V

    2017-04-01

Natural products (NPs) have a long history as a source of, and inspiration for, novel agrochemicals. Many of the existing herbicides, fungicides, and insecticides have their origins in a wide range of NPs from a variety of sources. Owing to the changing needs of agriculture, shifts in pest spectrum, development of resistance, and evolving regulatory requirements, the need for new agrochemical tools remains as critical as ever. As such, NPs continue to be an important source of models and templates for the development of new agrochemicals, demonstrated by the fact that NP models exist for many of the pest control agents that were discovered by other means. Interestingly, there appear to be distinct differences in the success of different NP sources for different pesticide uses. Although a few microbial NPs have been important starting points in recent discoveries of some insecticidal agrochemicals, historically plant sources have contributed the most to the discovery of new insecticides. In contrast, fungi have been the most important NP sources for new fungicides. Like insecticides, plant-sourced NPs have made the largest contribution to herbicide discovery. Available data on 2014 global sales and numbers of compounds in each class of pesticides indicate that the overall impact of NPs to the discovery of herbicides has been relatively modest compared to the impact observed for fungicides and insecticides. However, as new sourcing and approaches to NP discovery evolve, the impact of NPs in all agrochemical arenas will continue to expand. © 2016 Society of Chemical Industry.

  7. Efficient techniques for wave-based sound propagation in interactive applications

    NASA Astrophysics Data System (ADS)

    Mehra, Ravish

    Sound propagation techniques model the effect of the environment on sound waves and predict their behavior from point of emission at the source to the final point of arrival at the listener. Sound is a pressure wave produced by mechanical vibration of a surface that propagates through a medium such as air or water, and the problem of sound propagation can be formulated mathematically as a second-order partial differential equation called the wave equation. Accurate techniques based on solving the wave equation, also called the wave-based techniques, are too expensive computationally and memory-wise. Therefore, these techniques face many challenges in terms of their applicability in interactive applications including sound propagation in large environments, time-varying source and listener directivity, and high simulation cost for mid-frequencies. In this dissertation, we propose a set of efficient wave-based sound propagation techniques that solve these three challenges and enable the use of wave-based sound propagation in interactive applications. Firstly, we propose a novel equivalent source technique for interactive wave-based sound propagation in large scenes spanning hundreds of meters. It is based on the equivalent source theory used for solving radiation and scattering problems in acoustics and electromagnetics. Instead of using a volumetric or surface-based approach, this technique takes an object-centric approach to sound propagation. The proposed equivalent source technique generates realistic acoustic effects and takes orders of magnitude less runtime memory compared to prior wave-based techniques. Secondly, we present an efficient framework for handling time-varying source and listener directivity for interactive wave-based sound propagation. The source directivity is represented as a linear combination of elementary spherical harmonic sources. 
This spherical harmonic-based representation of source directivity can support analytical, data-driven, rotating or time-varying directivity functions at runtime. Unlike previous approaches, the listener directivity approach can be used to compute spatial audio (3D audio) for a moving, rotating listener at interactive rates. Lastly, we propose an efficient GPU-based time-domain solver for the wave equation that enables wave simulation up to the mid-frequency range in tens of minutes on a desktop computer. It is demonstrated that by carefully mapping all the components of the wave simulator to the parallel processing capabilities of graphics processors, significant performance improvements can be achieved compared to CPU-based simulators, while maintaining numerical accuracy. We validate these techniques with offline numerical simulations and measured data recorded in an outdoor scene. We present results of preliminary user evaluations conducted to study the impact of these techniques on users' immersion in virtual environments. We have integrated these techniques with the Half-Life 2 game engine, Oculus Rift head-mounted display, and Xbox game controller to enable users to experience high-quality acoustic effects and spatial audio in the virtual environment.
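The equivalent source idea running through this record can be illustrated in the frequency domain with a least-squares solve: replace the real source with point sources inside the body, then fit their strengths so that free-field Green's functions reproduce the measured pressures. This is a minimal numpy sketch, not the dissertation's formulation; the geometry, frequency, and source count are arbitrary choices for illustration.

```python
import numpy as np

def greens_matrix(field_pts, src_pts, k):
    """Free-field Green's function G(r) = exp(i*k*r) / (4*pi*r) between
    each equivalent-source position and each field (microphone) point."""
    r = np.linalg.norm(field_pts[:, None, :] - src_pts[None, :, :], axis=-1)
    return np.exp(1j * k * r) / (4.0 * np.pi * r)

def solve_equivalent_sources(p_meas, mic_pts, src_pts, k):
    """Least-squares estimate of equivalent source strengths from
    measured pressures (a frequency-domain analogue of the iterative
    time-domain solution described in record 1 of this listing)."""
    G = greens_matrix(mic_pts, src_pts, k)
    q, *_ = np.linalg.lstsq(G, p_meas, rcond=None)
    return q

rng = np.random.default_rng(0)
k = 2 * np.pi * 500.0 / 343.0                    # wavenumber at 500 Hz in air
src_pts = rng.uniform(-0.05, 0.05, size=(8, 3))  # candidate sources inside body
mic_pts = rng.uniform(-0.5, 0.5, size=(32, 3)) + np.array([0.0, 0.0, 1.0])

# Synthesize "measured" pressures from known strengths, then invert:
q_true = rng.normal(size=8) + 1j * rng.normal(size=8)
p_meas = greens_matrix(mic_pts, src_pts, k) @ q_true
q_hat = solve_equivalent_sources(p_meas, mic_pts, src_pts, k)
```

Once q_hat is known, the same Green's matrix evaluated at arbitrary field points models the radiated sound, which is what makes equivalent sources attractive for both source identification and propagation modeling.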

  8. Seasonal changes, identification and source apportionment of PAH in PM1.0

    NASA Astrophysics Data System (ADS)

    Agudelo-Castañeda, Dayana Milena; Teixeira, Elba Calesso

    2014-10-01

The objective of this research was to evaluate the seasonal variation of PAHs in PM1.0, as well as to identify and quantify the contributions of each source profile using the PMF receptor model. PM1.0 samples were collected on PTFE filters from August 2011 to July 2013 in the Metropolitan Area of Porto Alegre, Rio Grande do Sul, Brazil. The samples were extracted using the EPA method TO-13A and 16 Polycyclic Aromatic Hydrocarbons (PAHs) were analyzed using a gas chromatograph coupled with a mass spectrometer (GC-MS). The data were also analyzed to identify the relations of PAH concentrations with NOx, NO, O3 and meteorological parameters (temperature, solar radiation, wind speed, relative humidity). The results showed that concentrations of total PAHs were significantly higher in winter than in summer, demonstrating their seasonal variation. The identification of emission sources by applying diagnostic ratios confirmed that PAHs in the study area originate from mobile sources, especially diesel and gasoline emissions. The PMF receptor model analysis also showed the contribution of these two main sources, followed by coal combustion, incomplete combustion/unburned petroleum and wood combustion. Toxic equivalency factors were used to characterize the cancer risk from exposure to PAHs in the PM1.0 samples; BaP and DahA dominated BaPeq levels.
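The BaPeq calculation mentioned at the end is a weighted sum of species concentrations, with benzo[a]pyrene as the reference compound (TEF = 1 by definition). The TEF table and sample concentrations below are illustrative placeholders, not the study's data:

```python
# Toxic equivalency factors (TEFs): BaP = 1 by definition; the other
# values here are illustrative of commonly tabulated factors.
TEF = {"BaP": 1.0, "DahA": 1.0, "BaA": 0.1, "BbF": 0.1, "Chr": 0.01, "Phe": 0.001}

def bap_equivalent(concentrations_ng_m3):
    """BaPeq = sum over species of (concentration x TEF)."""
    return sum(c * TEF[s] for s, c in concentrations_ng_m3.items())

# Hypothetical PM1.0 concentrations in ng/m3:
sample = {"BaP": 0.6, "DahA": 0.2, "BaA": 0.4, "BbF": 0.8, "Chr": 1.0, "Phe": 5.0}
bapeq = bap_equivalent(sample)
```

Because BaP and DahA carry the largest TEFs, they dominate BaPeq even when lighter PAHs such as phenanthrene are far more abundant, which is the pattern the abstract reports.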

  9. Assessing Measurement Equivalence in Ordered-Categorical Data

    ERIC Educational Resources Information Center

    Elosua, Paula

    2011-01-01

    Assessing measurement equivalence in the framework of the common factor linear models (CFL) is known as factorial invariance. This methodology is used to evaluate the equivalence among the parameters of a measurement model among different groups. However, when dichotomous, Likert, or ordered responses are used, one of the assumptions of the CFL is…

  10. Heliophysics Data and Modeling Research Using VSPO

    NASA Technical Reports Server (NTRS)

    Roberts, D. Aaron; Hesse, Michael; Cornwell, Carl

    2007-01-01

The primary advantage of Virtual Observatories in scientific research is efficiency: rapid searches for and access to data in convenient forms make it possible to explore scientific questions without spending days or weeks on ancillary tasks. The Virtual Space Physics Observatory (VSPO) provides a general portal to Heliophysics data for this task. Here we will illustrate the advantages of the VO approach by examining specific geomagnetically active times and tracing the activity through the Sun-Earth system. In addition to previous and additional data sources, we will demonstrate an extension of the capabilities to allow searching for model run results from the range of CCMC models. This approach allows the user to quickly compare models and observations at a qualitative level; considerably more work will be needed to develop more seamless connections to data streams and the equivalent numerical output from simulations.

  11. Lumped-parameters equivalent circuit for condenser microphones modeling.

    PubMed

    Esteves, Josué; Rufer, Libor; Ekeom, Didace; Basrour, Skandar

    2017-10-01

    This work presents a lumped parameters equivalent model of condenser microphone based on analogies between acoustic, mechanical, fluidic, and electrical domains. Parameters of the model were determined mainly through analytical relations and/or finite element method (FEM) simulations. Special attention was paid to the air gap modeling and to the use of proper boundary condition. Corresponding lumped-parameters were obtained as results of FEM simulations. Because of its simplicity, the model allows a fast simulation and is readily usable for microphone design. This work shows the validation of the equivalent circuit on three real cases of capacitive microphones, including both traditional and Micro-Electro-Mechanical Systems structures. In all cases, it has been demonstrated that the sensitivity and other related data obtained from the equivalent circuit are in very good agreement with available measurement data.

  12. Active room compensation for sound reinforcement using sound field separation techniques.

    PubMed

    Heuchel, Franz M; Fernandez-Grande, Efren; Agerkvist, Finn T; Shabalina, Elena

    2018-03-01

    This work investigates how the sound field created by a sound reinforcement system can be controlled at low frequencies. An indoor control method is proposed which actively absorbs the sound incident on a reflecting boundary using an array of secondary sources. The sound field is separated into incident and reflected components by a microphone array close to the secondary sources, enabling the minimization of reflected components by means of optimal signals for the secondary sources. The method is purely feed-forward and assumes constant room conditions. Three different sound field separation techniques for the modeling of the reflections are investigated based on plane wave decomposition, equivalent sources, and the Spatial Fourier transform. Simulations and an experimental validation are presented, showing that the control method performs similarly well at enhancing low frequency responses with the three sound separation techniques. Resonances in the entire room are reduced, although the microphone array and secondary sources are confined to a small region close to the reflecting wall. Unlike previous control methods based on the creation of a plane wave sound field, the investigated method works in arbitrary room geometries and primary source positions.
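The incident/reflected split at the heart of this method can be illustrated with its simplest one-dimensional special case, plane-wave pressure-velocity separation; the array-based techniques in the abstract generalize this idea. A minimal sketch, assuming collocated pressure and normal-velocity signals:

```python
RHO_C = 413.0  # characteristic impedance of air (kg m^-2 s^-1), near 20 degC

def separate_plane_waves(p, u):
    """1-D plane-wave separation from collocated pressure p and particle
    velocity u (positive toward the boundary): from p = p_i + p_r and
    RHO_C * u = p_i - p_r, the two components follow directly. A
    simplified stand-in for the array-based separation techniques."""
    p_inc = 0.5 * (p + RHO_C * u)
    p_ref = 0.5 * (p - RHO_C * u)
    return p_inc, p_ref

# Rigid-wall sanity check: full reflection forces u = 0 at the wall,
# so the incident and reflected components each carry half the pressure.
p_inc, p_ref = separate_plane_waves(p=2.0, u=0.0)
```

The secondary sources are then driven to cancel the reflected component p_ref, which is the active-absorption goal the abstract describes.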

  13. Alternative Fuels Data Center: District of Columbia Transportation Data for

    Science.gov Websites

Transportation fuel consumption data source: State Energy Data System. Biofuels capacity (nameplate, MW) source: BioFuels Atlas from the National Renewable Energy Laboratory. Fuel prices: $2.96/gallon, $2.66/GGE; source: average prices per gasoline gallon equivalent (GGE) for the Central

  14. 78 FR 53020 - Branch Technical Position on the Import of Non-U.S. Origin Radioactive Sources

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-28

    ... produced radioisotopes or Radium- 226 which can be disposed of in non-Part 61 or equivalent facilities'' as... Import of Non-U.S. Origin Radioactive Sources AGENCY: U.S. Nuclear Regulatory Commission. ACTION: Final... Non-U.S. Origin Sources to provide additional guidance on the application of this exclusion in the...

  15. Verification and Validation of the New Dynamic Mooring Modules Available in FAST v8: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wendt, Fabian; Robertson, Amy; Jonkman, Jason

    2016-08-01

The open-source aero-hydro-servo-elastic wind turbine simulation software, FAST v8, was recently coupled to two newly developed mooring dynamics modules: MoorDyn and FEAMooring. MoorDyn is a lumped-mass-based mooring dynamics module developed by the University of Maine, and FEAMooring is a finite-element-based mooring dynamics module developed by Texas A&M University. This paper summarizes the work performed to verify and validate these modules against other mooring models and measured test data to assess their reliability and accuracy. The quality of the fairlead load predictions by the open-source mooring modules MoorDyn and FEAMooring appears to be largely equivalent to what is predicted by the commercial tool OrcaFlex. Both mooring dynamic model predictions agree well with the experimental data, considering the given limitations in the accuracy of the platform hydrodynamic load calculation and the quality of the measurement data.

  16. Verification and Validation of the New Dynamic Mooring Modules Available in FAST v8

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wendt, Fabian F.; Andersen, Morten T.; Robertson, Amy N.

    2016-07-01

The open-source aero-hydro-servo-elastic wind turbine simulation software, FAST v8, was recently coupled to two newly developed mooring dynamics modules: MoorDyn and FEAMooring. MoorDyn is a lumped-mass-based mooring dynamics module developed by the University of Maine, and FEAMooring is a finite-element-based mooring dynamics module developed by Texas A&M University. This paper summarizes the work performed to verify and validate these modules against other mooring models and measured test data to assess their reliability and accuracy. The quality of the fairlead load predictions by the open-source mooring modules MoorDyn and FEAMooring appears to be largely equivalent to what is predicted by the commercial tool OrcaFlex. Both mooring dynamic model predictions agree well with the experimental data, considering the given limitations in the accuracy of the platform hydrodynamic load calculation and the quality of the measurement data.

  17. Heavy ion contributions to organ dose equivalent for the 1977 galactic cosmic ray spectrum

    NASA Astrophysics Data System (ADS)

    Walker, Steven A.; Townsend, Lawrence W.; Norbury, John W.

    2013-05-01

    Estimates of organ dose equivalents for the skin, eye lens, blood forming organs, central nervous system, and heart of female astronauts from exposures to the 1977 solar minimum galactic cosmic radiation spectrum for various shielding geometries involving simple spheres and locations within the Space Transportation System (space shuttle) and the International Space Station (ISS) are made using the HZETRN 2010 space radiation transport code. The dose equivalent contributions are broken down by charge groups in order to better understand the sources of the exposures to these organs. For thin shields, contributions from ions heavier than alpha particles comprise at least half of the organ dose equivalent. For thick shields, such as the ISS locations, heavy ions contribute less than 30% and in some cases less than 10% of the organ dose equivalent. Secondary neutron production contributions in thick shields also tend to be as large, or larger, than the heavy ion contributions to the organ dose equivalents.
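The charge-group breakdown reported here follows from dose equivalent being a quality-factor-weighted sum of absorbed doses, H = sum_i Q_i * D_i. The absorbed doses and mean quality factors below are illustrative placeholders, not HZETRN 2010 output:

```python
# Illustrative absorbed doses (mGy) and mean quality factors by charge
# group; these numbers are placeholders, not results from the abstract.
GROUPS = {
    "protons":  {"dose_mGy": 60.0, "Q": 1.5},
    "alphas":   {"dose_mGy": 15.0, "Q": 4.0},
    "heavy":    {"dose_mGy": 10.0, "Q": 15.0},
    "neutrons": {"dose_mGy": 8.0,  "Q": 8.0},
}

def dose_equivalent_mSv(groups):
    """H = sum of quality factor times absorbed dose over charge groups."""
    return sum(g["Q"] * g["dose_mGy"] for g in groups.values())

def heavy_ion_fraction(groups):
    """Fraction of the organ dose equivalent contributed by heavy ions."""
    return groups["heavy"]["Q"] * groups["heavy"]["dose_mGy"] / dose_equivalent_mSv(groups)

h_total = dose_equivalent_mSv(GROUPS)
frac_heavy = heavy_ion_fraction(GROUPS)
```

Because heavy ions carry large quality factors, even a small absorbed dose can dominate H behind thin shields, while thicker shielding fragments the heavy ions and shifts the balance toward protons and secondary neutrons, as the abstract describes.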

  18. An open ecosystem engagement strategy through the lens of global food safety

    PubMed Central

    Stacey, Paul; Fons, Garin; Bernardo, Theresa M

    2015-01-01

The Global Food Safety Partnership (GFSP) is a public/private partnership established through the World Bank to improve food safety systems through a globally coordinated and locally-driven approach. This concept paper aims to establish a framework to help GFSP fully leverage the potential of open models. In preparing this paper the authors spoke to many different GFSP stakeholders who asked questions about open models such as: What is it? What's in it for me? Why use an open rather than a proprietary model? How will open models generate equivalent or greater sustainable revenue streams compared to the current "traditional" approaches? This last question came up many times, with assertions that traditional service providers need to see the opportunity for equivalent or greater revenue before they will buy in. This paper identifies open value propositions for GFSP stakeholders and proposes a framework for creating and structuring that value. Open Educational Resources (OER) were the primary open practice GFSP partners spoke to us about, as they provide a logical entry point for collaboration. Going forward, funders should consider requiring that educational resources and concomitant data resulting from their sponsorship be open, as a public good. There are, however, many other forms of open practice that bring value to the GFSP. Nine different open strategies and tactics (Appendix A) are described, including: open content (including OER and open courseware), open data, open access (research), open government, open source software, open standards, open policy, open licensing and open hardware. It is recommended that all stakeholders proactively pursue "openness" as an operating principle. This paper presents an overall GFSP Open Ecosystem Engagement Strategy within which specific local case examples can be situated.
Two different case examples, China and Colombia, are presented to show both project-based and crowd-sourced, direct-to-public paths through this ecosystem. PMID:26213614

  19. X-ray reflection from cold white dwarfs in magnetic cataclysmic variables

    NASA Astrophysics Data System (ADS)

    Hayashi, Takayuki; Kitaguchi, Takao; Ishida, Manabu

    2018-02-01

    We model X-ray reflection from white dwarfs (WDs) in magnetic cataclysmic variables (mCVs) using a Monte Carlo simulation. A point source with a power-law spectrum or a realistic post-shock accretion column (PSAC) source irradiates a cool and spherical WD. The PSAC source emits thermal spectra of various temperatures stratified along the column according to the PSAC model. In the point-source simulation, we confirm the following: a source harder and nearer to the WD enhances the reflection; higher iron abundance enhances the equivalent widths (EWs) of fluorescent iron Kα1, 2 lines and their Compton shoulder, and increases the cut-off energy of a Compton hump; significant reflection appears from an area that is more than 90° apart from the position right under the point X-ray source because of the WD curvature. The PSAC simulation reveals the following: a more massive WD basically enhances the intensities of the fluorescent iron Kα1, 2 lines and the Compton hump, except for some specific accretion rate, because the more massive WD makes a hotter PSAC from which higher-energy X-rays are preferentially emitted; a larger specific accretion rate monotonically enhances the reflection because it makes a hotter and shorter PSAC; the intrinsic thermal component hardens by occultation of the cool base of the PSAC by the WD. We quantitatively estimate the influences of the parameters on the EWs and the Compton hump with both types of source. We also calculate X-ray modulation profiles brought about by the WD spin. These depend on the angles of the spin axis from the line of sight and from the PSAC, and on whether the two PSACs can be seen. The reflection spectral model and the modulation model involve the fluorescent lines and the Compton hump and can directly be compared to the data, which allows us to estimate these geometrical parameters with unprecedented accuracy.

  20. Lidar cross-sections of soot fractal aggregates: Assessment of equivalent-sphere models

    NASA Astrophysics Data System (ADS)

    Ceolato, Romain; Gaudfrin, Florian; Pujol, Olivier; Riviere, Nicolas; Berg, Matthew J.; Sorensen, Christopher M.

    2018-06-01

This work assesses the ability of equivalent-sphere models to reproduce the optical properties of soot aggregates relevant for lidar remote sensing, i.e. the backscattering and extinction cross sections. Lidar cross-sections are computed with a spectral discrete dipole approximation model over the visible-to-infrared (400-5000 nm) spectrum and compared with equivalent-sphere approximations. It is shown that the equivalent-sphere approximation, applied to fractal aggregates, has only a limited ability to reproduce such cross-sections. The approximation should thus be used with caution for the computation of broadband lidar cross-sections, especially backscattering, at small and intermediate wavelengths (e.g. UV to visible).
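For context, the equivalent-sphere idea can be sketched in the small-particle (Rayleigh) limit, where closed-form cross-sections exist. This is a simplified illustration, not the spectral discrete dipole approximation used in the record: the refractive index and monomer geometry below are assumed, illustrative values.

```python
import numpy as np

def rayleigh_lidar_xsections(wavelength, monomer_radius, n_monomers, m):
    """Backscattering and extinction cross-sections of the sphere whose
    volume equals that of N monomers, in the Rayleigh (kR << 1) limit."""
    k = 2 * np.pi / wavelength
    r_eq = monomer_radius * n_monomers ** (1.0 / 3.0)   # volume-equivalent radius
    pol = (m**2 - 1) / (m**2 + 2)                       # Clausius-Mossotti factor
    c_back = k**4 * r_eq**6 * abs(pol)**2               # dC_sca/dOmega at 180 deg
    c_abs = 4 * np.pi * k * r_eq**3 * pol.imag
    c_sca = (8.0 / 3.0) * np.pi * k**4 * r_eq**6 * abs(pol)**2
    return c_back, c_abs + c_sca                        # backscatter, extinction

# Assumed soot-like refractive index at 532 nm; 50 monomers of 15 nm radius.
cb, ce = rayleigh_lidar_xsections(532e-9, 15e-9, 50, 1.75 + 0.63j)
```

Because `c_back` scales as the sixth power of the equivalent radius (i.e. as N²), small errors in the equivalent size translate into large backscatter errors, which is one reason the approximation degrades for lidar applications.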

  1. TU-D-209-07: Monte Carlo Assessment of Dose to the Lens of the Eye of Radiologist Using Realistic Phantoms and Eyeglass Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, X; Lin, H; Gao, Y

Purpose: To study how eyeglass design features and postures of the interventional radiologist affect the radiation dose to the lens of the eye. Methods: A mesh-based deformable phantom, consisting of an ultra-fine eye model, was used to simulate postures of a radiologist in fluoroscopically guided interventional procedures (facing the patient, 45 degrees to the left, and 45 degrees to the right). Various eyewear design features were studied, including the shape, lead-equivalent thickness, and separation from the face. The MCNPX Monte Carlo code was used to simulate the X-ray source used for the transcatheter arterial chemoembolization procedure (the X-ray tube is located 35 cm from the ground, emitting X-rays toward the ceiling; field size is 40 cm × 40 cm; X-ray tube voltage is 90 kVp). Experiments were also performed using a dosimeter placed on a physical phantom behind eyeglasses. Results: Without protective eyewear, the radiologist’s eye lens can receive an annual dose equivalent of about 80 mSv. When wearing a pair of lead eyeglasses with a lead equivalence of 0.5-mm Pb, the annual dose equivalent of the eye lens is reduced to 31.47 mSv, but both values exceed the new ICRP limit of 20 mSv. A face shield with a lead equivalence of 0.125-mm Pb in the shape of a semi-cylinder (13 cm in radius and 20 cm in height) would further reduce the exposure to the lens of the eye. Examination of postures and eyeglass features reveals surprising information, including that the glass-to-eye separation also plays an important role in the dose to the eye lens from X-rays scattered from underneath and from the side. Results are in general agreement with measurements. Conclusion: There is an urgent need to further understand the relationship between the radiation environment and the radiologist’s eyewear and posture in order to provide necessary protection to interventional radiologists under the newly reduced dose limits.

  2. Cloud-In-Cell modeling of shocked particle-laden flows at a ``SPARSE'' cost

    NASA Astrophysics Data System (ADS)

    Taverniers, Soren; Jacobs, Gustaaf; Sen, Oishik; Udaykumar, H. S.

    2017-11-01

    A common tool for enabling process-scale simulations of shocked particle-laden flows is Eulerian-Lagrangian Particle-Source-In-Cell (PSIC) modeling where each particle is traced in its Lagrangian frame and treated as a mathematical point. Its dynamics are governed by Stokes drag corrected for high Reynolds and Mach numbers. The computational burden is often reduced further through a ``Cloud-In-Cell'' (CIC) approach which amalgamates groups of physical particles into computational ``macro-particles''. CIC does not account for subgrid particle fluctuations, leading to erroneous predictions of cloud dynamics. A Subgrid Particle-Averaged Reynolds-Stress Equivalent (SPARSE) model is proposed that incorporates subgrid interphase velocity and temperature perturbations. A bivariate Gaussian source distribution, whose covariance captures the cloud's deformation to first order, accounts for the particles' momentum and energy influence on the carrier gas. SPARSE is validated by conducting tests on the interaction of a particle cloud with the accelerated flow behind a shock. The cloud's average dynamics and its deformation over time predicted with SPARSE converge to their counterparts computed with reference PSIC models as the number of Gaussians is increased from 1 to 16. This work was supported by AFOSR Grant No. FA9550-16-1-0008.

  3. Properties of transported African mineral dust aerosols in the Mediterranean region

    NASA Astrophysics Data System (ADS)

    Denjean, Cyrielle; Chevaillier, Servanne; Gaimoz, Cécile; Grand, Noel; Triquet, Sylvain; Zapf, Pascal; Loisil, Rodrigue; Bourrianne, Thierry; Freney, Evelyn; Dupuy, Regis; Sellegri, Karine; Schwarzenbock, Alfons; Torres, Benjamin; Mallet, Marc; Cassola, Federico; Prati, Paolo; Formenti, Paola

    2015-04-01

The transport of mineral dust aerosols is a global phenomenon with strong climate implications. Depending on the travel distance over source regions, the atmospheric conditions and the residence time in the atmosphere, various transformation processes (size-selective sedimentation, mixing, condensation of gaseous species, and weathering) can modify the physical and chemical properties of mineral dust, which, in turn, can change the dust's optical properties. The model predictions of the radiative effect of mineral dust still suffer from the lack of certainty in these properties and their temporal evolution with transport time. Within the frame of the ChArMex project (Chemistry-Aerosol Mediterranean experiment, http://charmex.lsce.ipsl.fr/), one intensive airborne campaign (ADRIMED, Aerosol Direct Radiative Impact in the regional climate in the MEDiterranean region, 06 June - 08 July 2013) has been performed over the Central and Western Mediterranean, one of the two major transport pathways of African mineral dust. In this study we have set up a systematic strategy to determine the optical, physical and chemical properties of mineral dust to be compared to an equivalent dataset for dust close to source regions in Africa. This study is based on airborne observations onboard the SAFIRE ATR-42 aircraft, equipped with state-of-the-art in situ instrumentation to measure the particle scattering and backscattering coefficients (nephelometer at 450, 550, and 700 nm), the absorption coefficient (PSAP at 467, 530, and 660 nm), the extinction coefficient (CAPS at 530 nm), the aerosol optical depth (PLASMA at 340 to 1640 nm), the size distribution in the extended range 40 nm - 30 µm by the combination of different particle counters (SMPS, USHAS, FSSP, GRIMM) and the chemical composition obtained by filter sampling. The chemistry and transport model CHIMERE-Dust has been used to classify the air masses according to the dust origin and transport.
Case studies of dust transport from known but differing origins (source regions in Tunisia, Algeria, and Mauritania) and at different times after transport, will be presented. Results will be compared to equivalent measurements over source regions interpreted in terms of the evolution of the particle size distribution, chemical composition and optical properties.

  4. Development of an Algorithm for Automatic Analysis of the Impedance Spectrum Based on a Measurement Model

    NASA Astrophysics Data System (ADS)

    Kobayashi, Kiyoshi; Suzuki, Tohru S.

    2018-03-01

    A new algorithm for the automatic estimation of an equivalent circuit and the subsequent parameter optimization is developed by combining the data-mining concept and complex least-squares method. In this algorithm, the program generates an initial equivalent-circuit model based on the sampling data and then attempts to optimize the parameters. The basic hypothesis is that the measured impedance spectrum can be reproduced by the sum of the partial-impedance spectra presented by the resistor, inductor, resistor connected in parallel to a capacitor, and resistor connected in parallel to an inductor. The adequacy of the model is determined by using a simple artificial-intelligence function, which is applied to the output function of the Levenberg-Marquardt module. From the iteration of model modifications, the program finds an adequate equivalent-circuit model without any user input to the equivalent-circuit model.
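The complex least-squares step at the heart of such an algorithm can be illustrated with a minimal sketch (not the authors' program): fitting a hypothetical series resistor plus parallel-RC element to a synthetic impedance spectrum, stacking real and imaginary parts so a standard real-valued solver applies. The circuit topology and parameter values are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import least_squares

def z_model(p, w):
    """Impedance of a series resistor plus one parallel RC element."""
    r0, r1, c1 = p
    return r0 + r1 / (1.0 + 1j * w * r1 * c1)

def residuals(p, w, z_meas):
    """Stack real and imaginary parts so the solver sees real residuals."""
    diff = z_model(p, w) - z_meas
    return np.concatenate([diff.real, diff.imag])

# Synthetic "measured" spectrum from known parameters.
w = np.logspace(-1, 5, 60)           # angular frequency, rad/s
true = (10.0, 100.0, 1e-4)           # R0 [ohm], R1 [ohm], C1 [F]
z_meas = z_model(true, w)

fit = least_squares(residuals, x0=[1.0, 10.0, 1e-5], args=(w, z_meas))
print(fit.x)  # should recover approximately (10, 100, 1e-4)
```

In the full algorithm, the artificial-intelligence function would inspect the residual returned by such a fit and decide whether to add or remove circuit elements before refitting.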

  5. Limits to Mercury's Magnesium Exosphere from MESSENGER Second Flyby Observations

    NASA Technical Reports Server (NTRS)

    Sarantos, Menelaos; Killen, Rosemary M.; McClintock, William E.; Bradley, E. Todd; Vervack, Ronald J., Jr.; Benna, Mehdi; Slavin, James A.

    2011-01-01

The discovery measurements of Mercury's exospheric magnesium, obtained by the MErcury Surface, Space ENvironment, GEochemistry, and Ranging (MESSENGER) probe during its second Mercury flyby, are modeled to constrain the source and loss processes for this neutral species. Fits to a Chamberlain exosphere reveal that at least two source temperatures are required to reconcile the distribution of magnesium measured far from and near the planet: a hot ejection process at an equivalent temperature of several tens of thousands of kelvin, and a competing, cooler source at temperatures as low as 400 K. For the energetic component, our models indicate that the column abundance that can be attributed to sputtering under constant southward interplanetary magnetic field (IMF) conditions is at least a factor of five less than the rate dictated by the measurements. Although highly uncertain, this result suggests that another energetic process, such as the rapid dissociation of exospheric MgO, may be the main source of the distant neutral component. If meteoroid and micrometeoroid impacts eject mainly molecules, the total amount of magnesium at altitudes exceeding approximately 100 km is found to be consistent with predictions by impact vaporization models for molecule lifetimes of no more than two minutes. Though a sharp increase in emission observed near the dawn terminator region can be reproduced if a single meteoroid enhanced the impact vapor at equatorial dawn, it is much more likely that observations in this region, which probe heights increasingly near the surface, indicate a reservoir of volatile Mg being acted upon by lower-energy source processes.

  6. Estimation of ambient dose equivalent distribution in the 18F-FDG administration room using Monte Carlo simulation.

    PubMed

    Nagamine, Shuji; Fujibuchi, Toshioh; Umezu, Yoshiyuki; Himuro, Kazuhiko; Awamoto, Shinichi; Tsutsui, Yuji; Nakamura, Yasuhiko

    2017-03-01

    In this study, we estimated the ambient dose equivalent rate (hereafter "dose rate") in the fluoro-2-deoxy-D-glucose (FDG) administration room in our hospital using Monte Carlo simulations, and examined the appropriate medical-personnel locations and a shielding method to reduce the dose rate during FDG injection using a lead glass shield. The line source was assumed to be the FDG feed tube and the patient a cube source. The dose rate distribution was calculated with a composite source that combines the line and cube sources. The dose rate distribution was also calculated when a lead glass shield was placed in the rear section of the lead-acrylic shield. The dose rate behind the automatic administration device decreased by 87 % with respect to that behind the lead-acrylic shield. Upon positioning a 2.8-cm-thick lead glass shield, the dose rate behind the lead-acrylic shield decreased by 67 %.
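The line-source geometry can be sketched with a bare point-kernel calculation (no shielding, no buildup, unit source strength): summing inverse-square contributions along a hypothetical feed tube. The geometry below is illustrative, not the hospital layout from the study.

```python
import numpy as np

def line_source_rate(p, a, b, n=1000):
    """Relative dose rate at point p from a uniform line source running from
    a to b, by summing inverse-square contributions of n segments."""
    p, a, b = map(np.asarray, (p, a, b))
    t = (np.arange(n) + 0.5) / n
    seg = a + t[:, None] * (b - a)            # segment midpoints
    r2 = np.sum((seg - p) ** 2, axis=1)       # squared distances to p
    return np.sum(1.0 / r2) / n               # per unit total source strength

# Hypothetical layout: 1 m feed tube along x, observers beside it (metres).
rate_near = line_source_rate([0.5, 0.5, 0.0], [0, 0, 0], [1, 0, 0])
rate_far = line_source_rate([0.5, 2.0, 0.0], [0, 0, 0], [1, 0, 0])
```

For the perpendicular midpoint case the analytic value is (2/d)·atan(L/2d), which for L = 1 m and d = 0.5 m equals π, giving a quick check on the discretisation. A full treatment, as in the study, adds the patient as a volume source and attenuation through the lead-acrylic and lead glass shields.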

  7. CORRECTIONS ASSOCIATED WITH ON-PHANTOM CALIBRATIONS OF NEUTRON PERSONAL DOSEMETERS.

    PubMed

    Hawkes, N P; Thomas, D J; Taylor, G C

    2016-09-01

    The response of neutron personal dosemeters as a function of neutron energy and angle of incidence is typically measured by mounting the dosemeters on a slab phantom and exposing them to neutrons from an accelerator-based or radionuclide source. The phantom is placed close to the source (75 cm) so that the effect of scattered neutrons is negligible. It is usual to mount several dosemeters on the phantom together. Because the source is close, the source distance and the neutron incidence angle vary significantly over the phantom face, and each dosemeter may receive a different dose equivalent. This is particularly important when the phantom is angled away from normal incidence. With accelerator-produced neutrons, the neutron energy and fluence vary with emission angle relative to the charged particle beam that produces the neutrons, contributing further to differences in dose equivalent, particularly when the phantom is located at other than the straight-ahead position (0° to the beam). Corrections for these effects are quantified and discussed in this article. © Crown copyright 2015.

  8. Identification of Low Order Equivalent System Models From Flight Test Data

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    2000-01-01

    Identification of low order equivalent system dynamic models from flight test data was studied. Inputs were pilot control deflections, and outputs were aircraft responses, so the models characterized the total aircraft response including bare airframe and flight control system. Theoretical investigations were conducted and related to results found in the literature. Low order equivalent system modeling techniques using output error and equation error parameter estimation in the frequency domain were developed and validated on simulation data. It was found that some common difficulties encountered in identifying closed loop low order equivalent system models from flight test data could be overcome using the developed techniques. Implications for data requirements and experiment design were discussed. The developed methods were demonstrated using realistic simulation cases, then applied to closed loop flight test data from the NASA F-18 High Alpha Research Vehicle.
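The equation-error idea in the frequency domain can be shown with a minimal sketch (a generic illustration, not the paper's code): for a second-order low-order-equivalent form G(s) = b0/(s² + a1·s + a0), the relation b0 − a1·sG − a0·G = s²G is linear in the unknowns, so the fit reduces to ordinary least squares on stacked real and imaginary parts. All numerical values are assumed for illustration.

```python
import numpy as np

# Synthetic frequency response of a known second-order system.
b0_true, a1_true, a0_true = 4.0, 1.2, 9.0
w = np.linspace(0.1, 20.0, 100)       # rad/s
s = 1j * w
G = b0_true / (s**2 + a1_true * s + a0_true)

# Equation-error formulation: b0 - a1*(s*G) - a0*G = s^2*G,
# linear in (b0, a1, a0).
A = np.column_stack([np.ones_like(s), -s * G, -G])
y = s**2 * G
A_ri = np.vstack([A.real, A.imag])    # stack so the problem is real-valued
y_ri = np.concatenate([y.real, y.imag])
params, *_ = np.linalg.lstsq(A_ri, y_ri, rcond=None)
print(params)  # recovers [4.0, 1.2, 9.0] on noise-free data
```

With noisy flight data the equation-error estimate is biased, which is why it is typically used to initialise an output-error (maximum-likelihood) refinement of the same model.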

  9. A geostatistical approach for quantification of contaminant mass discharge uncertainty using multilevel sampler measurements

    NASA Astrophysics Data System (ADS)

    Li, K. Betty; Goovaerts, Pierre; Abriola, Linda M.

    2007-06-01

Contaminant mass discharge across a control plane downstream of a dense nonaqueous phase liquid (DNAPL) source zone has great potential to serve as a metric for the assessment of the effectiveness of source zone treatment technologies and for the development of risk-based source-plume remediation strategies. However, too often the uncertainty of mass discharge estimated in the field is not accounted for in the analysis. In this paper, a geostatistical approach is proposed to estimate mass discharge and to quantify its associated uncertainty using multilevel transect measurements of contaminant concentration (C) and hydraulic conductivity (K). The approach adapts the p-field simulation algorithm to propagate and upscale the uncertainty of mass discharge from the local uncertainty models of C and K. Application of this methodology to numerically simulated transects shows that, with a regular sampling pattern, geostatistics can provide an accurate model of uncertainty for the transects that are associated with low levels of source mass removal (i.e., transects that have a large percentage of contaminated area). For high levels of mass removal (i.e., transects with a few hot spots and large areas of near-zero concentration), a total sampling area equivalent to 6-7% of the transect is required to achieve accurate uncertainty modeling. A comparison of the results for different measurement supports indicates that samples taken with longer screen lengths may lead to less accurate models of mass discharge uncertainty. The quantification of mass discharge uncertainty, in the form of a probability distribution, will facilitate risk assessment associated with various remediation strategies.
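The uncertainty-propagation step can be sketched in a much-simplified form (independent draws, no spatial correlation, so not the p-field algorithm itself): sample C and K at each measurement point from assumed lognormal local-uncertainty models, and accumulate a Monte Carlo distribution of mass discharge. All grid dimensions and distribution parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical multilevel-sampler transect: 20 points, each with lognormal
# local uncertainty for concentration C [g/m^3] and conductivity K [m/d].
n_pts, cell_area, gradient = 20, 0.25, 0.01     # m^2 per point, gradient [-]
mu_c, sd_c = np.log(5.0), 0.8
mu_k, sd_k = np.log(1.0), 0.5

def sample_discharge(n_draws=5000):
    """Propagate local C and K uncertainty to a mass-discharge distribution.
    Unlike p-field simulation, draws here are independent across points."""
    c = rng.lognormal(mu_c, sd_c, size=(n_draws, n_pts))
    k = rng.lognormal(mu_k, sd_k, size=(n_draws, n_pts))
    # Darcy flux q = K * gradient; discharge = sum over the plane of q*C*area.
    return (k * gradient * c * cell_area).sum(axis=1)   # g/d

j = sample_discharge()
print(j.mean(), np.percentile(j, [5, 95]))
```

The geostatistical version replaces the independent draws with spatially correlated probability fields, which widens the discharge distribution when C and K are correlated over distances comparable to the sampling spacing.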

  10. Folk Theorems on the Correspondence between State-Based and Event-Based Systems

    NASA Astrophysics Data System (ADS)

    Reniers, Michel A.; Willemse, Tim A. C.

    Kripke Structures and Labelled Transition Systems are the two most prominent semantic models used in concurrency theory. Both models are commonly believed to be equi-expressive. One can find many ad-hoc embeddings of one of these models into the other. We build upon the seminal work of De Nicola and Vaandrager that firmly established the correspondence between stuttering equivalence in Kripke Structures and divergence-sensitive branching bisimulation in Labelled Transition Systems. We show that their embeddings can also be used for a range of other equivalences of interest, such as strong bisimilarity, simulation equivalence, and trace equivalence. Furthermore, we extend the results by De Nicola and Vaandrager by showing that there are additional translations that allow one to use minimisation techniques in one semantic domain to obtain minimal representatives in the other semantic domain for these equivalences.

  11. Contactless magnetocardiographic mapping in anesthetized Wistar rats: evidence of age-related changes of cardiac electrical activity.

    PubMed

    Brisinda, Donatella; Caristo, Maria Emiliana; Fenici, Riccardo

    2006-07-01

    Magnetocardiography (MCG) is the recording of the magnetic field (MF) generated by cardiac electrophysiological activity. Because it is a contactless method, MCG is ideal for noninvasive cardiac mapping of small experimental animals. The aim of this study was to assess age-related changes of cardiac intervals and ventricular repolarization (VR) maps in intact rats by means of MCG mapping. Twenty-four adult Wistar rats (12 male and 12 female) were studied, under anesthesia, with the same unshielded 36-channel MCG instrumentation used for clinical recordings. Two sets of measurements were obtained from each animal: 1) at 5 mo of age (297.5 +/- 21 g body wt) and 2) at 14 mo of age (516.8 +/- 180 g body wt). RR and PR intervals, QRS segment, and QTpeak, QTend, JTpeak, JTend, and Tpeak-end were measured from MCG waveforms. MCG imaging was automatically obtained as MF maps and as inverse localization of cardiac sources with equivalent current dipole and effective magnetic dipole models. After 300 s of continuous recording were averaged, the signal-to-noise ratio was adequate for study of atrial and ventricular MF maps and for three-dimensional localization of the underlying cardiac sources. Clear-cut age-related differences in VR duration were demonstrated by significantly longer QTend, JTend, and Tpeak-end in older Wistar rats. Reproducible multisite noninvasive cardiac mapping of anesthetized rats is simpler with MCG methodology than with ECG recording. In addition, MCG mapping provides new information based on quantitative analysis of MF and equivalent sources. In this study, statistically significant age-dependent variations in VR intervals were found.

  12. An Equivalent cross-section Framework for improving computational efficiency in Distributed Hydrologic Modelling

    NASA Astrophysics Data System (ADS)

    Khan, Urooj; Tuteja, Narendra; Ajami, Hoori; Sharma, Ashish

    2014-05-01

While the potential uses and benefits of distributed catchment simulation models are undeniable, their practical usage is often hindered by the computational resources they demand. To reduce the computational time/effort in distributed hydrological modelling, a new approach of modelling over an equivalent cross-section is investigated where topographical and physiographic properties of first-order sub-basins are aggregated to constitute modelling elements. To formulate an equivalent cross-section, a homogenization test is conducted to assess the loss in accuracy when averaging topographic and physiographic variables, i.e. length, slope, soil depth and soil type. The homogenization test indicates that the accuracy lost in weighting the soil type is greatest, therefore it needs to be weighted in a systematic manner to formulate equivalent cross-sections. If the soil type remains the same within the sub-basin, a single equivalent cross-section is formulated for the entire sub-basin. If the soil type follows a specific pattern, i.e. different soil types near the centre of the river, middle of hillslope and ridge line, three equivalent cross-sections (left bank, right bank and head water) are required. If the soil types are complex and do not follow any specific pattern, multiple equivalent cross-sections are required based on the number of soil types. The equivalent cross-sections are formulated for a series of first order sub-basins by implementing different weighting methods of topographic and physiographic variables of landforms within the entire or part of a hillslope. The formulated equivalent cross-sections are then simulated using a 2-dimensional, Richards' equation based distributed hydrological model. The simulated fluxes are multiplied by the weighted area of each equivalent cross-section to calculate the total fluxes from the sub-basins. The simulated fluxes include horizontal flow, transpiration, soil evaporation, deep drainage and soil moisture.
To assess the accuracy of the equivalent cross-section approach, the sub-basins are also divided into equally spaced multiple hillslope cross-sections. These cross-sections are simulated in a fully distributed setting using the 2-dimensional, Richards' equation based distributed hydrological model. The simulated fluxes are multiplied by the contributing area of each cross-section to get total fluxes from each sub-basin, referred to as reference fluxes. The equivalent cross-section approach is investigated for seven first order sub-basins of the McLaughlin catchment of the Snowy River, NSW, Australia, and evaluated in the Wagga-Wagga experimental catchment. Our results show that the simulated fluxes using an equivalent cross-section approach are very close to the reference fluxes, whereas computational time is reduced by a factor of ~4 to ~22 in comparison to the fully distributed setting. The transpiration and soil evaporation are the dominant fluxes and constitute ~85% of actual rainfall. Overall, the accuracy achieved in the dominant fluxes is higher than in the other fluxes. The simulated soil moistures from the equivalent cross-section approach are compared with the in-situ soil moisture observations in the Wagga-Wagga experimental catchment in NSW, and the results were found to be consistent. Our results illustrate that the equivalent cross-section approach reduces the computational time significantly while maintaining the same order of accuracy in predicting the hydrological fluxes. As a result, this approach provides great potential for the implementation of distributed hydrological models at regional scales.
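The final upscaling step, multiplying each equivalent cross-section's simulated fluxes by its weighted area and summing to sub-basin totals, can be sketched as follows. The flux values, component names, and areas are hypothetical placeholders, not results from the study.

```python
# Hypothetical per-cross-section fluxes [mm] from the 2-D hillslope model,
# and the sub-basin area [m^2] each equivalent cross-section represents.
fluxes = {
    "left_bank":  {"transpiration": 310.0, "soil_evap": 120.0, "drainage": 25.0},
    "right_bank": {"transpiration": 280.0, "soil_evap": 140.0, "drainage": 30.0},
    "head_water": {"transpiration": 200.0, "soil_evap": 90.0,  "drainage": 15.0},
}
area_m2 = {"left_bank": 4.0e5, "right_bank": 3.5e5, "head_water": 1.5e5}

def total_fluxes(fluxes, area_m2):
    """Scale each equivalent cross-section's fluxes by its weighted area
    and sum to sub-basin totals (m^3, from mm * m^2 / 1000)."""
    totals = {}
    for name, f in fluxes.items():
        for comp, mm in f.items():
            totals[comp] = totals.get(comp, 0.0) + mm / 1000.0 * area_m2[name]
    return totals

totals = total_fluxes(fluxes, area_m2)
print(totals)
```

The computational saving comes from running the expensive Richards'-equation model only once per equivalent cross-section instead of once per hillslope cross-section; this aggregation step itself is trivial.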

  13. Combining virtual observatory and equivalent source dipole approaches to describe the geomagnetic field with Swarm measurements

    NASA Astrophysics Data System (ADS)

    Saturnino, Diana; Langlais, Benoit; Amit, Hagay; Civet, François; Mandea, Mioara; Beucler, Éric

    2018-03-01

    A detailed description of the main geomagnetic field and of its temporal variations (i.e., the secular variation or SV) is crucial to understanding the geodynamo. Although the SV is known with high accuracy at ground magnetic observatory locations, the globally uneven distribution of the observatories hampers the determination of a detailed global pattern of the SV. Over the past two decades, satellites have provided global surveys of the geomagnetic field which have been used to derive global spherical harmonic (SH) models through some strict data selection schemes to minimise external field contributions. However, discrepancies remain between ground measurements and field predictions by these models; indeed the global models do not reproduce small spatial scales of the field temporal variations. To overcome this problem we propose to directly extract time series of the field and its temporal variation from satellite measurements as it is done at observatory locations. We follow a Virtual Observatory (VO) approach and define a global mesh of VOs at satellite altitude. For each VO and each given time interval we apply an Equivalent Source Dipole (ESD) technique to reduce all measurements to a unique location. Synthetic data are first used to validate the new VO-ESD approach. Then, we apply our scheme to data from the first two years of the Swarm mission. For the first time, a 2.5° resolution global mesh of VO time series is built. The VO-ESD derived time series are locally compared to ground observations as well as to satellite-based model predictions. Our approach is able to describe detailed temporal variations of the field at local scales. The VO-ESD time series are then used to derive global spherical harmonic models. For a simple SH parametrization the model describes well the secular trend of the magnetic field both at satellite altitude and at the surface. 
As more data become available, longer VO-ESD time series can be derived and consequently used to study sharp temporal variation features, such as geomagnetic jerks.

  14. Spontaneous cell sorting of fibroblasts and keratinocytes creates an organotypic human skin equivalent.

    PubMed

    Wang, C K; Nelson, C F; Brinkman, A M; Miller, A C; Hoeffler, W K

    2000-04-01

    We show that an inherent ability of two distinct cell types, keratinocytes and fibroblasts, can be relied upon to accurately reconstitute full-thickness human skin including the dermal-epidermal junction by a cell-sorting mechanism. A cell slurry containing both cell types added to silicone chambers implanted on the backs of severe combined immunodeficient mice sorts out to reconstitute a clearly defined dermis and stratified epidermis within 2 wk, forming a cell-sorted skin equivalent. Immunostaining of the cell-sorted skin equivalent with human cell markers showed patterns similar to those of normal full-thickness skin. We compared the cell-sorted skin equivalent model with a composite skin model also made on severe combined immunodeficient mice. The composite grafts were constructed from partially differentiated keratinocyte sheets placed on top of a dermal equivalent constructed of devitalized dermis. Electron microscopy revealed that both models formed ample numbers of normal appearing hemidesmosomes. The cell-sorted skin equivalent model, however, had greater numbers of keratin intermediate filaments within the basal keratinocytes that connected to hemidesmosomes, and on the dermal side both collagen filaments and anchoring fibril connections to the lamina densa were more numerous compared with the composite model. Our results may provide some insight into why, in clinical applications for treating burns and other wounds, composite grafts may exhibit surface instability and blistering for up to a year following grafting, and suggest the possible usefulness of the cell-sorted skin equivalent in future grafting applications.

  15. Estimating the sources of global sea level rise with data assimilation techniques.

    PubMed

    Hay, Carling C; Morrow, Eric; Kopp, Robert E; Mitrovica, Jerry X

    2013-02-26

    A rapidly melting ice sheet produces a distinctive geometry, or fingerprint, of sea level (SL) change. Thus, a network of SL observations may, in principle, be used to infer sources of meltwater flux. We outline a formalism, based on a modified Kalman smoother, for using tide gauge observations to estimate the individual sources of global SL change. We also report on a series of detection experiments based on synthetic SL data that explore the feasibility of extracting source information from SL records. The Kalman smoother technique iteratively calculates the maximum-likelihood estimate of Greenland ice sheet (GIS) and West Antarctic ice sheet (WAIS) melt at each time step, and it accommodates data gaps while also permitting the estimation of nonlinear trends. Our synthetic tests indicate that when all tide gauge records are used in the analysis, it should be possible to estimate GIS and WAIS melt rates greater than ∼0.3 and ∼0.4 mm of equivalent eustatic sea level rise per year, respectively. We have also implemented a multimodel Kalman filter that allows us to account rigorously for additional contributions to SL changes and their associated uncertainty. The multimodel filter uses 72 glacial isostatic adjustment models and 3 ocean dynamic models to estimate the most likely models for these processes given the synthetic observations. We conclude that our modified Kalman smoother procedure provides a powerful method for inferring melt rates in a warming world.
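The filter-plus-smoother machinery can be illustrated with a toy one-dimensional version (a generic Kalman filter with a Rauch-Tung-Striebel backward pass, not the paper's multimodel fingerprint formulation): infer a slowly varying rate from a single noisy sea-level series. State, noise levels, and the true rate are assumed, illustrative values.

```python
import numpy as np

rng = np.random.default_rng(1)

n, dt = 100, 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])          # level integrates rate
Q = np.diag([1e-4, 1e-4])                      # process noise
H = np.array([[1.0, 0.0]])                     # only the level is observed
R = np.array([[0.5]])                          # observation noise variance

true_rate = 0.3                                # mm/yr equivalent eustatic rise
truth = true_rate * np.arange(n) * dt
obs = truth + rng.normal(0.0, np.sqrt(R[0, 0]), n)

x, P = np.zeros(2), np.eye(2) * 10.0
xs_f, Ps_f, xs_p, Ps_p = [], [], [], []
for z in obs:                                   # forward Kalman filter
    x, P = F @ x, F @ P @ F.T + Q               # predict
    xs_p.append(x); Ps_p.append(P)
    S = H @ P @ H.T + R                         # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (np.atleast_1d(z) - H @ x)      # update
    P = (np.eye(2) - K @ H) @ P
    xs_f.append(x); Ps_f.append(P)

xs = list(xs_f); Ps = list(Ps_f)                # RTS backward smoothing pass
for t in range(n - 2, -1, -1):
    C = Ps_f[t] @ F.T @ np.linalg.inv(Ps_p[t + 1])
    xs[t] = xs_f[t] + C @ (xs[t + 1] - xs_p[t + 1])
    Ps[t] = Ps_f[t] + C @ (Ps[t + 1] - Ps_p[t + 1]) @ C.T

print(xs[n // 2][1])  # smoothed rate estimate, close to 0.3
```

The paper's method generalises this in two directions: the observation operator maps many ice-sheet melt sources to a network of tide gauges via sea-level fingerprints, and a bank of such filters (the multimodel filter) weighs competing glacial isostatic adjustment and ocean dynamic models.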

  16. Estimating the sources of global sea level rise with data assimilation techniques

    PubMed Central

    Hay, Carling C.; Morrow, Eric; Kopp, Robert E.; Mitrovica, Jerry X.

    2013-01-01

    A rapidly melting ice sheet produces a distinctive geometry, or fingerprint, of sea level (SL) change. Thus, a network of SL observations may, in principle, be used to infer sources of meltwater flux. We outline a formalism, based on a modified Kalman smoother, for using tide gauge observations to estimate the individual sources of global SL change. We also report on a series of detection experiments based on synthetic SL data that explore the feasibility of extracting source information from SL records. The Kalman smoother technique iteratively calculates the maximum-likelihood estimate of Greenland ice sheet (GIS) and West Antarctic ice sheet (WAIS) melt at each time step, and it accommodates data gaps while also permitting the estimation of nonlinear trends. Our synthetic tests indicate that when all tide gauge records are used in the analysis, it should be possible to estimate GIS and WAIS melt rates greater than ∼0.3 and ∼0.4 mm of equivalent eustatic sea level rise per year, respectively. We have also implemented a multimodel Kalman filter that allows us to account rigorously for additional contributions to SL changes and their associated uncertainty. The multimodel filter uses 72 glacial isostatic adjustment models and 3 ocean dynamic models to estimate the most likely models for these processes given the synthetic observations. We conclude that our modified Kalman smoother procedure provides a powerful method for inferring melt rates in a warming world. PMID:22543163

  17. Spatiotemporal reconstruction of auditory steady-state responses to acoustic amplitude modulations: Potential sources beyond the auditory pathway.

    PubMed

    Farahani, Ehsan Darestani; Goossens, Tine; Wouters, Jan; van Wieringen, Astrid

    2017-03-01

    Investigating the neural generators of auditory steady-state responses (ASSRs), i.e., auditory evoked brain responses with a wide range of screening and diagnostic applications, has been the focus of various studies for many years. Most of these studies employed a priori assumptions regarding the number and location of the neural generators. The aim of this study is to reconstruct ASSR sources with minimal assumptions in order to gain in-depth insight into the number and location of brain regions that are activated in response to low- as well as high-frequency acoustically amplitude-modulated signals. In order to reconstruct ASSR sources, we applied independent component analysis with subsequent equivalent dipole modeling to single-subject EEG data (young adults, 20-30 years of age). These data were based on white noise stimuli, amplitude modulated at 4, 20, 40, or 80 Hz. The independent components that exhibited a significant ASSR were clustered across all participants by means of a probabilistic clustering method based on a Gaussian mixture model. Results suggest that a widely distributed network of sources, located in cortical as well as subcortical regions, is active in response to 4, 20, 40, and 80 Hz amplitude-modulated noise. Some of these sources are located beyond the central auditory pathway. Comparison of brain sources in response to the different modulation frequencies suggested that the identified sources in the brainstem and the left and right auditory cortices show a higher responsiveness to 40 Hz than to the other modulation frequencies. Copyright © 2017 Elsevier Inc. All rights reserved.
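
    As a small illustration of one step in such a pipeline, deciding whether a component "exhibits a significant ASSR", a common proxy is the spectral power at the modulation frequency relative to neighbouring frequency bins. The sampling rate, duration, and synthetic "component" below are arbitrary assumptions; the full ICA-plus-dipole-fitting chain is not reproduced here.

```python
import numpy as np

def assr_snr(x, fs, f_mod, n_neighbors=10):
    """SNR proxy: spectral power at the modulation frequency divided by the
    mean power of the neighbouring frequency bins."""
    spec = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    k = int(np.argmin(np.abs(freqs - f_mod)))
    neigh = np.r_[spec[k - n_neighbors:k], spec[k + 1:k + 1 + n_neighbors]]
    return spec[k] / neigh.mean()

fs, f_mod = 1000, 40                        # Hz; arbitrary recording settings
t = np.arange(0, 4.0, 1.0 / fs)
rng = np.random.default_rng(1)
# toy independent component: a 40 Hz steady-state response buried in noise
eeg = 0.5 * np.sin(2 * np.pi * f_mod * t) + rng.standard_normal(t.size)
snr = assr_snr(eeg, fs, f_mod)
```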

  18. Flexible feature interface for multimedia sources

    DOEpatents

    Coffland, Douglas R [Livermore, CA

    2009-06-09

    A flexible feature interface for multimedia sources system that includes a single interface for the addition of features and functions to multimedia sources and for accessing those features and functions from remote hosts. The interface utilizes the export statement: extern "C" DllExport void FunctionName(int argc, char **argv, char *result, SecureSession *ctrl) or the binary equivalent of the export statement.

  19. Natural hybridization within seed sources of shortleaf pine (Pinus echinata Mill.) and loblolly pine (Pinus taeda L.)

    Treesearch

    Shiqin Xu; C.G. Tauer; C. Dana Nelson

    2008-01-01

    Shortleaf and loblolly pine trees (n=93 and 102, respectively) from 22 seed sources of the Southwide Southern Pine Seed Source Study plantings or equivalent origin were evaluated for amplified fragment length polymorphism (AFLP) variation. These sampled trees represent shortleaf pine and loblolly pine, as they existed across their native geographic ranges before...

  20. 32 CFR 806.20 - Records of non-U.S. government source.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 32 National Defense 6 2012-07-01 2012-07-01 false Records of non-U.S. government source. 806.20... ADMINISTRATION AIR FORCE FREEDOM OF INFORMATION ACT PROGRAM § 806.20 Records of non-U.S. government source. (a... notify their MAJCOM (or equivalent) FOIA office, in writing, via fax or e-mail when the Department of...

  1. 32 CFR 806.20 - Records of non-U.S. government source.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 32 National Defense 6 2014-07-01 2014-07-01 false Records of non-U.S. government source. 806.20... ADMINISTRATION AIR FORCE FREEDOM OF INFORMATION ACT PROGRAM § 806.20 Records of non-U.S. government source. (a... notify their MAJCOM (or equivalent) FOIA office, in writing, via fax or e-mail when the Department of...

  2. Measurements of soot formation and hydroxyl concentration in near critical equivalence ratio premixed ethylene flame

    NASA Technical Reports Server (NTRS)

    Inbody, Michael Andrew

    1993-01-01

    The testing and development of existing global and detailed chemical kinetic models for soot formation requires measurements of soot and radical concentrations in flames. A clearer understanding of soot particle inception relies upon the evaluation and refinement of these models in comparison with such measurements. We present measurements of soot formation and hydroxyl (OH) concentration in sequences of flat premixed atmospheric-pressure C2H4/O2/N2 flames and 80-torr C2H4/O2 flames for a unique range of equivalence ratios bracketing the critical equivalence ratio (phi_c) and extending to more heavily sooting conditions. Soot volume fraction and number density profiles are measured using a laser scattering-extinction apparatus capable of resolving a 0.1 percent absorption. Hydroxyl number density profiles are measured using laser-induced fluorescence (LIF) with broadband detection. Temperature profiles are obtained from Rayleigh scattering measurements. The relative volume fraction and number density profiles of the richer sooting flames exhibit the expected trends in soot formation. In near-phi_c visibility sooting flames, particle scattering and extinction are not detected, but an LIF signal due to polycyclic aromatic hydrocarbons (PAHs) can be detected upon excitation with an argon-ion laser. A linear correlation between the argon-ion LIF and the soot volume fraction implies a common mechanistic source for the growth of PAHs and soot particles. The peak OH number density in both the atmospheric and 80-torr flames declines with increasing equivalence ratio, but the profile shape remains unchanged in the transition to sooting, implying that the primary reaction pathways for OH remain unchanged over this transition. Chemical kinetic modeling is demonstrated by comparing predictions using two current reaction mechanisms with the atmospheric flame data. The measured and predicted OH number density profiles show good agreement. The predicted benzene number density profiles correlate with the measured trends in soot formation, although anomalies in the benzene profiles for the richer and cooler sooting flames suggest a need for the inclusion of benzene oxidation reactions.

  3. Simulating the Snow Water Equivalent and its changing pattern over Nepal

    NASA Astrophysics Data System (ADS)

    Niroula, S.; Joseph, J.; Ghosh, S.

    2016-12-01

    Snowfall in the Himalayan region is one of the primary sources of fresh water and accounts for around 10% of the total precipitation of Nepal. Snow water is difficult to estimate at global and regional scales because of the spatial variability associated with rugged topography. This study is primarily focused on simulating Snow Water Equivalent (SWE) with a macroscale hydrologic model, the Variable Infiltration Capacity (VIC) model. As the whole of Nepal, including its Himalayas, lies within the catchment of the Ganga River in India, contributing at least 40% of the annual discharge of the Ganges, the model was run over the entire watershed, which also covers parts of Tibet and Bangladesh. Meteorological inputs for 29 years (1979-2007) are drawn from the ERA-INTERIM and APHRODITE datasets at a horizontal resolution of 0.25 degrees. The analysis was performed to study the temporal variability of SWE in the Himalayan region of Nepal. The model was calibrated against observed stream flows of the tributaries of the Gandaki River in Nepal, which ultimately feeds the river Ganga. Further, the simulated SWE is used to estimate stream flow in this river basin. Since Nepal accumulates more snow cover at high altitudes in the monsoon season than in winter, seasonal fluctuations in SWE are known to affect the stream flows. Statistical analysis indicates that the model provided fair estimates of SWE and stream flow. Stream flows are known to be sensitive to changes in snow water, which can negatively affect power generation in a country with huge hydroelectric potential. In addition, our results on simulated SWE in the second-largest snow-fed catchment of the country will be helpful for reservoir management, flood forecasting and other water resource management issues. Keywords: Hydrology, Snow Water Equivalent, Variable Infiltration Capacity, Gandaki River Basin, Stream Flow

  4. Electrothermal Equivalent Three-Dimensional Finite-Element Model of a Single Neuron.

    PubMed

    Cinelli, Ilaria; Destrade, Michel; Duffy, Maeve; McHugh, Peter

    2018-06-01

    We propose a novel approach for modelling the interdependence of electrical and mechanical phenomena in nervous cells, by using electrothermal equivalences in finite element (FE) analysis so that existing thermomechanical tools can be applied. First, the equivalence between electrical and thermal properties of the nerve materials is established, and results of a pure heat conduction analysis performed in Abaqus CAE Software 6.13-3 are validated with analytical solutions for a range of steady and transient conditions. This validation includes the definition of equivalent active membrane properties that enable prediction of the action potential. Then, as a step toward fully coupled models, electromechanical coupling is implemented through the definition of equivalent piezoelectric properties of the nerve membrane using the thermal expansion coefficient, enabling prediction of the mechanical response of the nerve to the action potential. Results of the coupled electromechanical model are validated with previously published experimental results of deformation for squid giant axon, crab nerve fibre, and garfish olfactory nerve fibre. A simplified coupled electromechanical modelling approach is established through an electrothermal equivalent FE model of a nervous cell for biomedical applications. One of the key findings is the mechanical characterization of the neural activity in a coupled electromechanical domain, which provides insights into the electromechanical behaviour of nervous cells, such as thinning of the membrane. This is a first step toward modelling three-dimensional electromechanical alteration induced by trauma at nerve bundle, tissue, and organ levels.
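
    The electrothermal equivalence exploited above rests on the fact that the passive cable equation and heat conduction with surface loss share the same mathematical form, so a thermal solver fed with mapped parameters reproduces the electrical solution. A minimal numeric sketch of this idea, with made-up membrane and axial parameters and a plain explicit finite-difference scheme standing in for the FE tool:

```python
import numpy as np

def diffuse(u0, D, decay, dx, dt, steps):
    """Explicit finite differences for u_t = D*u_xx - decay*u, the shared
    form of the passive cable equation and lossy heat conduction."""
    u = u0.copy()
    for _ in range(steps):
        lap = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx ** 2
        u[1:-1] += dt * (D * lap - decay * u[1:-1])
        u[0], u[-1] = u[1], u[-2]          # sealed-end boundary conditions
    return u

# hypothetical electrical parameters: membrane resistance/capacitance, axial resistance
r_m, c_m, r_i = 1e4, 1e-2, 1e2
# thermal analogues: conductivity k <-> 1/r_i, heat capacity <-> c_m, loss h <-> 1/r_m
k, rho_c, h = 1.0 / r_i, c_m, 1.0 / r_m

v0 = np.zeros(50); v0[25] = 1.0            # localized depolarization / heat pulse
v = diffuse(v0, D=1/(r_i*c_m), decay=1/(r_m*c_m), dx=1.0, dt=1e-3, steps=2000)
T = diffuse(v0, D=k/rho_c,     decay=h/rho_c,     dx=1.0, dt=1e-3, steps=2000)
```

    The two runs coincide bin for bin, which is precisely the property that lets an existing thermomechanical solver stand in for an electrical one once the parameters are mapped.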

  5. A Statistical Review of Alternative Zinc and Copper Extraction from Mineral Fertilizers and Industrial By-Products.

    PubMed

    Cenciani de Souza, Camila Prado; Aparecida de Abreu, Cleide; Coscione, Aline Renée; Alberto de Andrade, Cristiano; Teixeira, Luiz Antonio Junqueira; Consolini, Flavia

    2018-01-01

    Rapid, accurate, and low-cost alternative analytical methods for micronutrient quantification in fertilizers are fundamental in quality control. The purpose of this study was to evaluate whether zinc (Zn) and copper (Cu) contents in mineral fertilizers and industrial by-products determined by the alternative methods USEPA 3051a, 10% HCl, and 10% H2SO4 are statistically equivalent to the standard method, consisting of hot-plate digestion using concentrated HCl. The Zn and Cu sources commercially marketed in Brazil consisted of oxide, carbonate, and sulfate fertilizers and of by-products consisting of galvanizing ash, galvanizing sludge, brass ash, and brass or scrap slag. The contents of the sources, as determined with the concentrated HCl method (Table 1), ranged from 15 to 82% for Zn and from 10 to 45% for Cu. A protocol based on the following criteria was used for the statistical assessment of the methods: the F-test modified by Graybill, the t-test for the mean error, and linear correlation coefficient analysis. The 10% HCl extraction was equivalent to the standard method for Zn, and both the USEPA 3051a and 10% HCl methods were equivalent to the standard method for Cu. Therefore, these methods can be considered viable alternatives to the standard method for determining Cu and Zn in mineral fertilizers and industrial by-products, pending future research for their complete validation.
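
    Two of the three criteria in the protocol above, the t-test on the mean error between methods and the linear correlation check, reduce to a few lines; the Graybill F-test on the regression coefficients is omitted, and the concentration values and critical t below are invented for illustration, not the study's data.

```python
import numpy as np

def method_equivalence(std, alt, t_crit):
    """Paired t-test on the mean error plus Pearson correlation between a
    standard and an alternative extraction method (Graybill F-test omitted)."""
    d = alt - std
    t = d.mean() / (d.std(ddof=1) / np.sqrt(len(d)))
    r = np.corrcoef(std, alt)[0, 1]
    return abs(t) < t_crit, r

zn_std = np.array([15.2, 30.1, 45.3, 60.2, 82.0])   # invented Zn %, concentrated HCl
zn_alt = np.array([15.0, 30.5, 44.8, 60.6, 81.5])   # invented Zn %, 10% HCl
ok, r = method_equivalence(zn_std, zn_alt, t_crit=2.776)  # two-sided 5%, df = 4
```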

  6. Origin of soluble chemical species in bulk precipitation collected in Tokyo, Japan: Statistical evaluation of source materials

    NASA Astrophysics Data System (ADS)

    Tsurumi, Makoto; Takahashi, Akira; Ichikuni, Masami

    An iterative least-squares method with a receptor model was applied to the analytical data of precipitation samples collected at 23 points in the suburban area of Tokyo, and the number and composition of the source materials were determined. Thirty-nine monthly bulk precipitation samples were collected in the spring and summer of 1987 from the hilly and mountainous area of Tokyo and analyzed for Na+, K+, NH4+, Mg2+, Ca2+, F-, Cl-, Br-, NO3- and SO4^2- by atomic absorption spectrometry and ion chromatography. The pH of the samples was also measured. A multivariate ion balance approach (Tsurumi, 1982, Anal. Chim. Acta 138, 177-182) showed that the solutes in the precipitation were derived from just three major sources: sea salt, acid substance (a mixture of 53% HNO3, 39% H2SO4 and 8% HCl in equivalents) and CaSO4. The contributions of each source to the precipitation were calculated for every sampling site. Variations of the contributions with distance from the coast were also discussed.
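
    The receptor-model step, expressing each sample's ion concentrations as a linear mix of a few source profiles and solving for the contributions, can be sketched with a single least-squares solve; the source profiles and units are invented for illustration (only the acid-mix fractions echo the abstract), and the paper's iterative scheme is not reproduced.

```python
import numpy as np

# invented source profiles (columns) over five ions: Na+, Cl-, NO3-, SO4^2-, Ca2+;
# the acid column mimics the 53% HNO3 / 39% H2SO4 / 8% HCl mix in equivalents
sea_salt = np.array([0.45, 0.52, 0.00, 0.03, 0.00])
acid     = np.array([0.00, 0.08, 0.53, 0.39, 0.00])
caso4    = np.array([0.00, 0.00, 0.00, 0.50, 0.50])
A = np.column_stack([sea_salt, acid, caso4])

true_contrib = np.array([2.0, 1.5, 0.8])    # source strengths (ueq/L, invented)
obs = A @ true_contrib                      # a noise-free synthetic sample
contrib, *_ = np.linalg.lstsq(A, obs, rcond=None)
```

    With real, noisy samples the solve is repeated per site and iterated against the ion balance, but the apportionment kernel is this overdetermined linear system.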

  7. Expression of ZO-1 and claudin-1 in a 3D epidermal equivalent using canine progenitor epidermal keratinocytes.

    PubMed

    Teramoto, Keiji; Asahina, Ryota; Nishida, Hidetaka; Kamishina, Hiroaki; Maeda, Sadatoshi

    2018-05-21

    Previous studies indicate that tight junctions are involved in the pathogenesis of canine atopic dermatitis (cAD). An in vitro skin model is needed to elucidate the specific role of tight junctions in cAD. A 3D epidermal equivalent model using canine progenitor epidermal keratinocytes (CPEK) has been established; the expression of tight junctions within this model is uncharacterized. To investigate the expression of tight junctions in the 3D epidermal equivalent. Two normal laboratory beagle dogs served as donors of full-thickness skin biopsy samples for comparison to the in vitro model. Immunohistochemical techniques were employed to investigate the expression of tight junctions including zonula occludens (ZO)-1 and claudin-1 in normal canine skin, and in the CPEK 3D epidermal equivalent. Results demonstrated the expression of ZO-1 and claudin-1 in the CPEK 3D epidermal equivalent, with staining patterns that were similar to those in normal canine skin. The CPEK 3D epidermal equivalent has the potential to be a suitable in vitro research tool for clarifying the specific role of tight junctions in cAD. © 2018 ESVD and ACVD.

  8. On the Relation between the Linear Factor Model and the Latent Profile Model

    ERIC Educational Resources Information Center

    Halpin, Peter F.; Dolan, Conor V.; Grasman, Raoul P. P. P.; De Boeck, Paul

    2011-01-01

    The relationship between linear factor models and latent profile models is addressed within the context of maximum likelihood estimation based on the joint distribution of the manifest variables. Although the two models are well known to imply equivalent covariance decompositions, in general they do not yield equivalent estimates of the…

  9. 40 CFR 466.24 - Pretreatment standards for existing sources.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... pretreatment standards the following equivalent mass standards are provided. (1) There shall be no discharge of... 40 Protection of Environment 30 2014-07-01 2014-07-01 false Pretreatment standards for existing...) EFFLUENT GUIDELINES AND STANDARDS (CONTINUED) PORCELAIN ENAMELING POINT SOURCE CATEGORY Cast Iron Basis...

  10. 40 CFR 466.24 - Pretreatment standards for existing sources.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... pretreatment standards the following equivalent mass standards are provided. (1) There shall be no discharge of... 40 Protection of Environment 29 2010-07-01 2010-07-01 false Pretreatment standards for existing...) EFFLUENT GUIDELINES AND STANDARDS PORCELAIN ENAMELING POINT SOURCE CATEGORY Cast Iron Basis Material...

  11. 40 CFR 466.24 - Pretreatment standards for existing sources.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... pretreatment standards the following equivalent mass standards are provided. (1) There shall be no discharge of... 40 Protection of Environment 31 2012-07-01 2012-07-01 false Pretreatment standards for existing...) EFFLUENT GUIDELINES AND STANDARDS (CONTINUED) PORCELAIN ENAMELING POINT SOURCE CATEGORY Cast Iron Basis...

  12. 40 CFR 466.24 - Pretreatment standards for existing sources.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... pretreatment standards the following equivalent mass standards are provided. (1) There shall be no discharge of... 40 Protection of Environment 30 2011-07-01 2011-07-01 false Pretreatment standards for existing...) EFFLUENT GUIDELINES AND STANDARDS PORCELAIN ENAMELING POINT SOURCE CATEGORY Cast Iron Basis Material...

  13. 40 CFR 466.24 - Pretreatment standards for existing sources.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... pretreatment standards the following equivalent mass standards are provided. (1) There shall be no discharge of... 40 Protection of Environment 31 2013-07-01 2013-07-01 false Pretreatment standards for existing...) EFFLUENT GUIDELINES AND STANDARDS (CONTINUED) PORCELAIN ENAMELING POINT SOURCE CATEGORY Cast Iron Basis...

  14. Noise analysis in air-coupled PVDF ultrasonic sensors.

    PubMed

    Fiorillo, A S

    2000-01-01

    In this paper we analyze the noise generated in a piezo-polymer-based sensor for low-frequency ultrasound in air. The sensor includes two curved PVDF transducers for medium- and short-range applications. A lumped RLC equivalent circuit was derived from measurements of the transducer's electrical admittance in air, taking into account both mechanical and dielectric losses, which we suppose are the major sources of noise in such devices. The electrical model was used to study and optimize the noise performance of a 61 kHz transducer and to simulate the electrical behavior of the complete transmitter-receiver system. The validity of the overall low-noise electrical model was confirmed by verifying, with PSpice, the agreement between practical and theoretical results.
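
    A lumped model of the kind described is commonly a motional R-L-C branch (mechanical losses) in parallel with the clamped capacitance (dielectric path), fitted to the measured admittance. The component values below are invented, chosen only so the series resonance lands near the 61 kHz operating frequency; they are not the paper's fitted values.

```python
import numpy as np

def admittance(f, R, L, C, C0):
    """Butterworth-Van Dyke-style lumped model: motional R-L-C branch in
    parallel with the clamped capacitance C0."""
    w = 2 * np.pi * f
    Zm = R + 1j * w * L + 1.0 / (1j * w * C)
    return 1.0 / Zm + 1j * w * C0

R, L, C0 = 500.0, 0.1, 1e-9            # invented component values
f0 = 61e3
C = 1.0 / ((2 * np.pi * f0) ** 2 * L)  # choose C so series resonance sits at 61 kHz
f = np.linspace(40e3, 80e3, 4001)
f_res = f[np.argmax(np.abs(admittance(f, R, L, C, C0)))]
```

    Once fitted, the resistive elements of such a model carry the thermal and dielectric noise sources, which is what makes it usable for noise optimization in a circuit simulator.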

  15. Optimization of decoupling performance of underwater acoustic coating with cavities via equivalent fluid model

    NASA Astrophysics Data System (ADS)

    Huang, Lingzhi; Xiao, Yong; Wen, Jihong; Zhang, Hao; Wen, Xisen

    2018-07-01

    Acoustic coatings with periodically arranged internal cavities have been successfully applied in submarines for the purpose of decoupling water from vibration of underwater structures, and thus reducing underwater sound radiation. Previous publications on decoupling acoustic coatings with cavities are mainly focused on the case of coatings with specific shaped cavities, including cylindrical and conical cavities. To explore better decoupling performance, an optimal design of acoustic coating with complex shaped cavities is attempted in this paper. An equivalent fluid model is proposed to characterize coatings with general axisymmetrical cavities. By employing the equivalent fluid model, an analytical vibroacoustic model is further developed for the prediction of sound radiation from an infinite plate covered with an equivalent fluid layer (as a replacement of original coating) and immersed in water. Numerical examples are provided to verify the equivalent fluid model. Based on a combining use of the analytical vibroacoustic model and a differential evolution algorithm, optimal designs for acoustic coatings with cavities are conducted. Numerical results demonstrate that the decoupling performance of acoustic coating can be significantly improved by employing special axisymmetrical cavities as compared to traditional cylindrical cavities.
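
    The optimization loop described pairs the fast equivalent-fluid vibroacoustic model with a differential evolution search over cavity-shape parameters. A minimal DE/rand/1/bin implementation is sketched below, with the expensive radiated-power objective replaced by a toy quadratic stand-in (an assumption for illustration only):

```python
import numpy as np

def differential_evolution(f, bounds, pop=20, gens=100, F=0.7, CR=0.9, seed=0):
    """Minimal DE/rand/1/bin minimizer over box bounds."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    X = lo + rng.random((pop, len(lo))) * (hi - lo)
    fit = np.array([f(x) for x in X])
    for _ in range(gens):
        for i in range(pop):
            idx = rng.choice([j for j in range(pop) if j != i], 3, replace=False)
            a, b, c = X[idx]
            mutant = np.clip(a + F * (b - c), lo, hi)    # mutation + bound handling
            cross = rng.random(len(lo)) < CR             # binomial crossover
            trial = np.where(cross, mutant, X[i])
            f_trial = f(trial)
            if f_trial < fit[i]:                         # greedy selection
                X[i], fit[i] = trial, f_trial
    best = int(np.argmin(fit))
    return X[best], fit[best]

# stand-in for the radiated-power objective over two cavity-shape parameters
obj = lambda p: (p[0] - 0.3) ** 2 + (p[1] - 0.7) ** 2
x_best, f_best = differential_evolution(obj, [(0.0, 1.0), (0.0, 1.0)])
```

    The design point is that each objective evaluation only requires the cheap equivalent-fluid plate model, so a population-based global search like DE becomes affordable.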

  16. Equivalent Air Spring Suspension Model for Quarter-Passive Model of Passenger Vehicles.

    PubMed

    Abid, Haider J; Chen, Jie; Nassar, Ameen A

    2015-01-01

    This paper investigates the GENSIS air spring suspension system equivalence to a passive suspension system. The SIMULINK simulation together with the OptiY optimization is used to obtain the air spring suspension model equivalent to passive suspension system, where the car body response difference from both systems with the same road profile inputs is used as the objective function for optimization (OptiY program). The parameters of air spring system such as initial pressure, volume of bag, length of surge pipe, diameter of surge pipe, and volume of reservoir are obtained from optimization. The simulation results show that the air spring suspension equivalent system can produce responses very close to the passive suspension system.

  17. State-space reduction and equivalence class sampling for a molecular self-assembly model.

    PubMed

    Packwood, Daniel M; Han, Patrick; Hitosugi, Taro

    2016-07-01

    Direct simulation of a model with a large state space will generate enormous volumes of data, much of which is not relevant to the questions under study. In this paper, we consider a molecular self-assembly model as a typical example of a large state-space model, and present a method for selectively retrieving 'target information' from this model. This method partitions the state space into equivalence classes, as identified by an appropriate equivalence relation. The set of equivalence classes H, which serves as a reduced state space, contains none of the superfluous information of the original model. After construction and characterization of a Markov chain with state space H, the target information is efficiently retrieved via Markov chain Monte Carlo sampling. This approach represents a new breed of simulation techniques which are highly optimized for studying molecular self-assembly and, moreover, serves as a valuable guideline for analysis of other large state-space models.

  18. Mathematical Fluid Dynamics of Store and Stage Separation

    DTIC Science & Technology

    2005-05-01

    coordinates; r = stretched inner radius; S(x) = effective source strength; Re_t = transition Reynolds number; t = time; r = reflection coefficient; T = temperature ... wave drag due to lift integral has the same form as that due to thickness, the source strength of the equivalent body depends on streamwise derivatives ... revolution in which the source strength S(x) is proportional to the x rate of change of cross-sectional area, the source strength depends on the streamwise

  19. Long-range transport of pollutants to the Falkland Islands and Antarctica: evidence from lake sediment fly ash particle records.

    PubMed

    Rose, Neil L; Jones, Vivienne J; Noon, Philippa E; Hodgson, Dominic A; Flower, Roger J; Appleby, Peter G

    2012-09-18

    (210)Pb-dated sediment cores taken from lakes on the Falkland Islands, the South Orkney Islands, and the Larsemann Hills in Antarctica were analyzed for fly ash particles to assess the temporal record of contamination from high temperature fossil-fuel combustion sources. Very low, but detectable, levels were observed in the Antarctic lakes. In the Falkland Island lakes, the record of fly ash extended back to the late-19th century and the scale of contamination was considerably higher. These data, in combination with meteorological, modeling, and fossil-fuel consumption data, indicate most likely sources are in South America, probably Chile and Brazil. Other southern hemisphere sources, notably from Australia, contribute to a background contamination and were more important historically. Comparing southern polar data with the equivalent from the northern hemisphere emphasizes the difference in contamination of the two circumpolar regions, with the Falkland Island sites only having a level of contamination similar to that of northern Svalbard.

  20. Robust Programming Problems Based on the Mean-Variance Model Including Uncertainty Factors

    NASA Astrophysics Data System (ADS)

    Hasuike, Takashi; Ishii, Hiroaki

    2009-01-01

    This paper considers robust programming problems based on the mean-variance model, including uncertainty sets and fuzzy factors. Since these problems are not well defined due to the fuzzy factors, it is hard to solve them directly. Therefore, by introducing chance constraints, fuzzy goals and possibility measures, the proposed models are transformed into deterministic equivalent problems. Furthermore, in order to solve these equivalent problems efficiently, a solution method is constructed by introducing the mean absolute deviation and performing equivalent transformations.
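
    The key transformation, replacing a probabilistic (chance) constraint by its deterministic equivalent, is easy to state concretely: under a Gaussian return model, P(r'x >= r0) >= alpha becomes mu'x - z_alpha * sigma(x) >= r0, with z_alpha the standard-normal quantile. The mean vector, covariance, and portfolio below are invented, and the fuzzy goals and possibility measures of the paper are omitted.

```python
import numpy as np

def chance_ok(x, mu, cov, r0, z):
    """Deterministic equivalent of the chance constraint
    P(portfolio return >= r0) >= alpha under Gaussian returns."""
    mean = mu @ x
    sigma = np.sqrt(x @ cov @ x)
    return mean - z * sigma >= r0

mu = np.array([0.08, 0.05])                     # invented expected returns
cov = np.array([[0.04, 0.01], [0.01, 0.02]])    # invented covariance
z = 1.645                                       # standard-normal quantile, alpha = 0.95
x = np.array([0.3, 0.7])                        # a candidate portfolio
feasible = chance_ok(x, mu, cov, r0=-0.3, z=z)
```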

  1. Source apportionment and sensitivity analysis: two methodologies with two different purposes

    NASA Astrophysics Data System (ADS)

    Clappier, Alain; Belis, Claudio A.; Pernigotti, Denise; Thunis, Philippe

    2017-11-01

    This work reviews the existing methodologies for source apportionment and sensitivity analysis to identify key differences and stress their implicit limitations. The emphasis is laid on the differences between source impacts (sensitivity analysis) and contributions (source apportionment) obtained by using four different methodologies: brute-force top-down, brute-force bottom-up, tagged species and the decoupled direct method (DDM). A simple theoretical example is used to compare these approaches, highlighting their differences and potential implications for policy. When the relationship between concentrations and emissions is linear, impacts and contributions are equivalent concepts. In this case, source apportionment and sensitivity analysis may be used interchangeably for both air quality planning purposes and quantifying source contributions. However, this study demonstrates that when the relationship between emissions and concentrations is nonlinear, sensitivity approaches are not suitable for retrieving source contributions, and source apportionment methods are not appropriate for evaluating the impact of abatement strategies. A quantification of the potential nonlinearities should therefore be the first step prior to source apportionment or planning applications, to prevent any limitations in their use. When nonlinearity is mild, these limitations may, however, be acceptable in the context of the other uncertainties inherent to complex models. Moreover, when using sensitivity analysis for planning, it is important to note that, under nonlinear circumstances, the calculated impacts will only provide information for the exact conditions (e.g. emission reduction share) that are simulated.
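
    The central caveat, that brute-force impacts only sum to the total concentration when the emission-concentration relationship is linear, can be reproduced with a two-source toy model; the functional form is invented purely to exhibit the effect.

```python
import numpy as np

def conc(e1, e2, nonlinear=True):
    """Toy concentration response to two emission sources; the square-root
    term stands in for a nonlinear chemical interaction."""
    c = 2.0 * e1 + 1.0 * e2
    return (c + 1.5 * np.sqrt(e1 * e2)) if nonlinear else c

e1, e2 = 4.0, 1.0
total = conc(e1, e2)
impacts = [total - conc(0.0, e2), total - conc(e1, 0.0)]   # brute-force zero-out
lin_total = conc(e1, e2, nonlinear=False)
lin_impacts = [lin_total - conc(0.0, e2, False),
               lin_total - conc(e1, 0.0, False)]
```

    With the interaction term the two "impacts" over-count the total by the shared nonlinear contribution, which is why sensitivity results cannot be read as source contributions in the nonlinear case.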

  2. Petroleum systems of the Northwest Java Province, Java and offshore southeast Sumatra, Indonesia

    USGS Publications Warehouse

    Bishop, Michele G.

    2000-01-01

    Mature, synrift lacustrine shales of Eocene to Oligocene age and mature, late-rift coals and coaly shales of Oligocene to Miocene age are source rocks for oil and gas in two important petroleum systems of the onshore and offshore areas of the Northwest Java Basin. Biogenic gas and carbonate-sourced gas have also been identified. These hydrocarbons are trapped primarily in anticlines and fault blocks involving sandstone and carbonate reservoirs. These source rocks and reservoir rocks were deposited in a complex of Tertiary rift basins formed from single or multiple half-grabens on the south edge of the Sunda Shelf plate. The overall transgressive succession was punctuated by clastic input from the exposed Sunda Shelf and marine transgressions from the south. The Northwest Java province may contain more than 2 billion barrels of oil equivalent in addition to the 10 billion barrels of oil equivalent already identified.

  3. Modification of the TASMIP x-ray spectral model for the simulation of microfocus x-ray sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sisniega, A.; Vaquero, J. J., E-mail: juanjose.vaquero@uc3m.es; Instituto de Investigación Sanitaria Gregorio Marañón, Madrid ES28007

    2014-01-15

    Purpose: The availability of accurate and simple models for the estimation of x-ray spectra is of great importance for system simulation, optimization, or the inclusion of photon energy information into data processing. There is a variety of publicly available tools for the estimation of x-ray spectra in radiology and mammography. However, most of these models cannot be used directly for modeling microfocus x-ray sources due to differences in inherent filtration, energy range and/or anode material. For this reason the authors propose in this work a new model for the simulation of microfocus spectra, based on existing models for mammography and radiology, modified to compensate for the effects of inherent filtration and energy range. Methods: The authors used the radiology and mammography versions of an existing empirical model [tungsten anode spectral model interpolating polynomials (TASMIP)] as the basis of the microfocus model. First, the authors estimated the inherent filtration included in the radiology model by comparing the shape of the spectra with spectra from the mammography model. Afterwards, the authors built a unified spectral dataset by combining both models and, finally, they estimated the parameters of the new version of TASMIP for microfocus sources by calibrating against experimental exposure data from a microfocus x-ray source. The model was validated by comparing estimated and experimental exposure and attenuation data for different attenuating materials and x-ray beam peak energy values, using two different x-ray tubes. Results: Inherent filtration for the radiology spectra from TASMIP was found to be equivalent to 1.68 mm Al, as compared to spectra obtained from the mammography model. Matching the experimentally measured exposure data required applying a negative filtration of about 0.21 mm Al and an anode roughness of 0.003 mm W to the combined dataset.
    The validation of the model against real acquired data showed errors in exposure and attenuation in line with those reported for other models for radiology or mammography. Conclusions: A new version of the TASMIP model for the estimation of x-ray spectra of microfocus x-ray sources has been developed and validated experimentally. As with other versions of TASMIP, the estimation of spectra is very simple, involving only the evaluation of polynomial expressions.
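
    The "very simple" estimation the conclusions refer to is the hallmark of TASMIP-type models: the photon fluence in each energy bin is an interpolating polynomial in the tube potential, so generating a spectrum reduces to polynomial evaluation. A sketch with invented low-order coefficients (real TASMIP tables fit one polynomial per 1-keV bin from measured data):

```python
import numpy as np
from numpy.polynomial import polynomial as poly

def tasmip_spectrum(kvp, coeffs):
    """TASMIP-style spectrum: photon fluence in each 1-keV energy bin is an
    interpolating polynomial in the tube potential (coefficients low-order first)."""
    phi = np.array([poly.polyval(kvp, c) for c in coeffs])
    phi[np.arange(len(coeffs)) >= kvp] = 0.0   # no fluence above the tube potential
    return np.clip(phi, 0.0, None)             # negative fits are clipped to zero

# invented third-order coefficients for three low-energy bins (not real TASMIP data)
coeffs = np.array([[0.0, 0.5, 0.02, 0.0],
                   [0.0, 0.3, 0.03, 0.0],
                   [-1.0, 0.1, 0.04, 0.0]])
spec = tasmip_spectrum(40, coeffs)
```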

  4. An equivalent n-source for WGPu derived from a spectrum-shifted PuBe source

    NASA Astrophysics Data System (ADS)

    Ghita, Gabriel; Sjoden, Glenn; Baciak, James; Walker, Scotty; Cornelison, Spring

    2008-04-01

    We have designed, built, and laboratory-tested a unique shield design that transforms the complex neutron spectrum from PuBe source neutrons, generated at high energies, to nearly exactly the neutron signature leaking from a significant spherical mass of weapons grade plutonium (WGPu). This equivalent "X-material shield assembly" (Patent Pending) enables the harder PuBe source spectrum (average energy of 4.61 MeV) from a small encapsulated standard 1-Ci PuBe source to be transformed, through interactions in the shield, so that leakage neutrons are shifted in energy and yield to become a close reproduction of the neutron spectrum leaking from a large subcritical mass of WGPu metal (mean energy 2.11 MeV). The utility of this shielded PuBe surrogate for WGPu is clear, since it directly enables detector field testing without the expense and risk of handling large amounts of Special Nuclear Materials (SNM) as WGPu. Also, conventional sources using Cf-252, which is difficult to produce, and decays with a 2.7 year half life, could be replaced by this shielded PuBe technology in order to simplify operational use, since a sealed PuBe source relies on Pu-239 (T½=24,110 y), and remains viable for more than hundreds of years.

  5. SU-F-19A-06: Experimental Investigation of the Energy Dependence of TLD Sensitivity in Low-Energy Photon Beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Z; Nath, R

    Purpose: To measure the energy dependence of TLD sensitivity in low-energy photon beams with equivalent mono-energetic energies matching those of 103Pd, 125I and 131Cs brachytherapy sources. Methods: A Pantek DXT 300 x-ray unit (Precision X-ray, Branford, CT), with stable digital voltage control down to 20 kV, was used to establish three low-energy photon beams with narrow energy spread and equivalent mono-energetic energies matching those of 103Pd, 125I and 131Cs brachytherapy sources. The low-energy x-ray beams and a reference 6 MV photon beam were calibrated according to the AAPM TG-61 and TG-51 protocols, respectively, using a parallel-plate low-energy chamber and a Farmer cylindrical chamber with NIST-traceable calibration factors. The dose response of model TLD-100 micro-cubes (1×1×1 mm³) in each beam was measured for five different batches of TLDs (each containing approximately 100 TLDs) that have different histories of irradiation and usage. Relative absorbed-dose sensitivity was determined as the quotient of the slope of the dose response for a beam-of-interest to that of the reference beam. Results: Equivalent mono-energetic photon energies of the low-energy beams established for 103Pd, 125I and 131Cs sources were 20.5, 27.5, and 30.1 keV, respectively. Each beam exhibited narrow spectral spread with an energy homogeneity index close to 90%. The relative absorbed-dose sensitivity was found to vary between different batches of TLD with maximum differences of up to 8%. The means and standard deviations determined from the five TLD batches were 1.453 ± 0.026, 1.541 ± 0.035 and 1.529 ± 0.051 for the simulated 103Pd, 125I and 131Cs beams, respectively. Conclusion: Our measured relative absorbed-dose sensitivities are greater than the historically measured value of 1.41. We find that the relative absorbed-dose sensitivity of TLD in the 103Pd beam is approximately 5% lower than that in the 125I and 131Cs beams. Comparison of our results with other studies will be presented.
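
    The relative absorbed-dose sensitivity described in the Methods is a quotient of dose-response slopes. A sketch with hypothetical TLD readings; the 1.5 ratio is injected for illustration and is not the study's data:

```python
import numpy as np

# Hypothetical TLD readings (nC) vs delivered dose (cGy) for a reference 6 MV
# beam and a low-energy test beam. Small perturbations stand in for readout noise.
dose = np.array([50.0, 100.0, 200.0, 400.0])
tl_ref  = 1.00 * dose + np.array([0.3, -0.2, 0.1, -0.1])   # reference-beam response
tl_test = 1.50 * dose + np.array([-0.4, 0.2, -0.1, 0.3])   # low-energy-beam response

def slope(x, y):
    """Least-squares slope of the dose-response line (with free intercept)."""
    return np.polyfit(x, y, 1)[0]

# Relative absorbed-dose sensitivity: slope of the beam of interest
# divided by the slope of the reference beam.
rel_sensitivity = slope(dose, tl_test) / slope(dose, tl_ref)
```

    With real data, one such ratio per batch would yield the per-beam mean and standard deviation reported in the Results.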

  6. Higher-order gravity in higher dimensions: geometrical origins of four-dimensional cosmology?

    NASA Astrophysics Data System (ADS)

    Troisi, Antonio

    2017-03-01

    The form of the cosmological field equations is still very much debated and has led to wide discussion of different theoretical proposals. A suitable conceptual scheme could be represented by gravity models that naturally generalize Einstein theory, such as higher-order gravity theories and higher-dimensional ones. Both of these approaches allow one to define, at the effective level, Einstein field equations equipped with source-like energy-momentum tensors of geometrical origin. In this paper, the possibility of developing a five-dimensional fourth-order gravity model whose lower-dimensional reduction could provide an interpretation of cosmological four-dimensional matter-energy components is discussed. We describe the basic concepts of the model, the complete field-equation formalism and the 5-D to 4-D reduction procedure. Five-dimensional f(R) field equations turn out to be equivalent, on the four-dimensional hypersurfaces orthogonal to the extra coordinate, to an Einstein-like cosmological model with three matter-energy tensors related to higher-derivative and higher-dimensional counter-terms. For the gravity model with f(R)=f_0R^n, the possibility of obtaining five-dimensional power-law solutions is investigated. The effective four-dimensional picture and the behaviour of the geometrically induced sources are finally outlined for simple cases of such higher-dimensional solutions.

  7. Immortalized N/TERT keratinocytes as an alternative cell source in 3D human epidermal models.

    PubMed

    Smits, Jos P H; Niehues, Hanna; Rikken, Gijs; van Vlijmen-Willems, Ivonne M J J; van de Zande, Guillaume W H J F; Zeeuwen, Patrick L J M; Schalkwijk, Joost; van den Bogaard, Ellen H

    2017-09-19

    The strong societal urge to reduce the use of experimental animals, and the biological differences between rodent and human skin, have led to the development of alternative models for healthy and diseased human skin. However, the limited availability of primary keratinocytes to generate such models hampers large-scale implementation of skin models in biomedical, toxicological, and pharmaceutical research. Immortalized cell lines may overcome these issues; however, few immortalized human keratinocyte cell lines are available and most do not form a fully stratified epithelium. In this study we compared two immortalized keratinocyte cell lines (N/TERT1, N/TERT2G) to human primary keratinocytes based on epidermal differentiation, response to inflammatory mediators, and the development of normal and inflammatory human epidermal equivalents (HEEs). Stratum corneum permeability, epidermal morphology, and expression of epidermal differentiation and host defence genes and proteins in N/TERT-HEE cultures were similar to those of primary human keratinocytes. We successfully generated N/TERT-HEEs with psoriasis or atopic dermatitis features and validated these models for drug-screening purposes. We conclude that the N/TERT keratinocyte cell lines are useful substitutes for primary human keratinocytes, thereby providing a biologically relevant, unlimited cell source for in vitro studies on epidermal biology, inflammatory skin disease pathogenesis and therapeutics.

  8. Uncertainties in Hydrologic and Climate Change Impact Analysis in Major Watersheds in British Columbia, Canada

    NASA Astrophysics Data System (ADS)

    Bennett, K. E.; Schnorbus, M.; Werner, A. T.; Music, B.; Caya, D.; Rodenhuis, D. R.

    2009-12-01

    Uncertainties in the projections of future hydrologic change can be assessed using a suite of tools, thereby allowing researchers to focus on improving identifiable sources of uncertainty. A Pareto set of optimal hydrologic parameterizations was run for three BC watersheds (Fraser, Peace and Columbia) for a range of downscaled Global Climate Model (GCM) emission scenarios to illustrate the uncertainty in hydrologic response to climate change. Results show varying responses of hydrologic regimes across geographic landscapes. Uncertainties in streamflow and water balance (runoff, evapo-transpiration, snow water equivalent, soil moisture) were analysed by forcing the Variable Infiltration Capacity (VIC) hydrologic model, run under twenty-five optimal parameter solution sets, with six Bias-Corrected Statistically Downscaled (BCSD) GCM emission scenario projections for the 2050s and the 2080s. Projected changes by the 2050s include increased winter flows, increases and decreases in freshet magnitude depending on the scenario, and decreases in summer flows persisting until September. Winter runoff had the greatest range between GCM emission scenarios, while the hydrologic parameters within individual GCM emission scenarios produced a winter runoff range an order of magnitude smaller. Evapo-transpiration, snow water equivalent and soil moisture exhibited a spread of ~10% or less. Streamflow changes by the 2080s lie outside the natural range of historic variability over the winter and spring. Results indicate that the changes projected between GCM emission scenarios are greater than the differences between the hydrologic model parameterizations. An alternative tool, the Canadian Regional Climate Model (CRCM), has been set up for these watersheds, and various runs have been analysed to determine the range and variability present and to compare these results with the hydrologic model projections.
The CRCM range and variability are an improvement over the Canadian GCM and thus require less bias correction. However, without downscaling, the CRCM results are still coarser than what is required to drive macroscale hydrologic models such as VIC. Applying these tools has illustrated the importance of focusing on improved downscaling efforts, including downscaling CRCM results rather than CGCM data. Tools for decision-making in the face of uncertainty are emerging as a priority for the climate change impacts community, and there is a need to focus on incorporating uncertainty information along with the projection of impacts. Assessing uncertainty across a range of regimes and geographic regions can help to identify the main sources of uncertainty and allow researchers to focus on improving those sources using more robust methodological approaches and tools.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sisniega, A.; Vaquero, J. J., E-mail: juanjose.vaquero@uc3m.es; Instituto de Investigación Sanitaria Gregorio Marañón, Madrid ES28007

    Purpose: The availability of accurate and simple models for the estimation of x-ray spectra is of great importance for system simulation, optimization, or the inclusion of photon energy information into data processing. There is a variety of publicly available tools for estimation of x-ray spectra in radiology and mammography. However, most of these models cannot be used directly for modeling microfocus x-ray sources due to differences in inherent filtration, energy range and/or anode material. For this reason, the authors propose in this work a new model for the simulation of microfocus spectra based on existing models for mammography and radiology, modified to compensate for the effects of inherent filtration and energy range. Methods: The authors used the radiology and mammography versions of an existing empirical model [tungsten anode spectral model interpolating polynomials (TASMIP)] as the basis of the microfocus model. First, the authors estimated the inherent filtration included in the radiology model by comparing the shape of its spectra with spectra from the mammography model. Afterwards, the authors built a unified spectra dataset by combining both models and, finally, estimated the parameters of the new version of TASMIP for microfocus sources by calibrating against experimental exposure data from a microfocus x-ray source. The model was validated by comparing estimated and experimental exposure and attenuation data for different attenuating materials and x-ray beam peak energy values, using two different x-ray tubes. Results: Inherent filtration for the radiology spectra from TASMIP was found to be equivalent to 1.68 mm Al, as compared to spectra obtained from the mammography model. To match the experimentally measured exposure data, the combined dataset required applying a negative filtration of about 0.21 mm Al and an anode roughness of 0.003 mm W. The validation of the model against real acquired data showed errors in exposure and attenuation in line with those reported for other models for radiology or mammography. Conclusions: A new version of the TASMIP model for the estimation of x-ray spectra from microfocus x-ray sources has been developed and validated experimentally. As with other versions of TASMIP, the estimation of spectra is very simple, involving only the evaluation of polynomial expressions.

  10. What Makes Hydrologic Models Differ? Using SUMMA to Systematically Explore Model Uncertainty and Error

    NASA Astrophysics Data System (ADS)

    Bennett, A.; Nijssen, B.; Chegwidden, O.; Wood, A.; Clark, M. P.

    2017-12-01

    Model intercomparison experiments have been conducted to quantify the variability introduced during the model development process, but have had limited success in identifying the sources of this model variability. The Structure for Unifying Multiple Modeling Alternatives (SUMMA) has been developed as a framework which defines a general set of conservation equations for mass and energy as well as a common core of numerical solvers along with the ability to set options for choosing between different spatial discretizations and flux parameterizations. SUMMA can be thought of as a framework for implementing meta-models which allows for the investigation of the impacts of decisions made during the model development process. Through this flexibility we develop a hierarchy of definitions which allows for models to be compared to one another. This vocabulary allows us to define the notion of weak equivalence between model instantiations. Through this weak equivalence we develop the concept of model mimicry, which can be used to investigate the introduction of uncertainty and error during the modeling process as well as provide a framework for identifying modeling decisions which may complement or negate one another. We instantiate SUMMA instances that mimic the behaviors of the Variable Infiltration Capacity (VIC) model and the Precipitation Runoff Modeling System (PRMS) by choosing modeling decisions which are implemented in each model. We compare runs from these models and their corresponding mimics across the Columbia River Basin located in the Pacific Northwest of the United States and Canada. From these comparisons, we are able to determine the extent to which model implementation has an effect on the results, as well as determine the changes in sensitivity of parameters due to these implementation differences. 
By examining these changes in results and sensitivities we can attempt to postulate changes in the modeling decisions which may provide better estimation of state variables.

  11. Global Distribution of Human-Associated Fecal Genetic Markers in Reference Samples from Six Continents.

    PubMed

    Mayer, René E; Reischer, Georg H; Ixenmaier, Simone K; Derx, Julia; Blaschke, Alfred Paul; Ebdon, James E; Linke, Rita; Egle, Lukas; Ahmed, Warish; Blanch, Anicet R; Byamukama, Denis; Savill, Marion; Mushi, Douglas; Cristóbal, Héctor A; Edge, Thomas A; Schade, Margit A; Aslan, Asli; Brooks, Yolanda M; Sommer, Regina; Masago, Yoshifumi; Sato, Maria I; Taylor, Huw D; Rose, Joan B; Wuertz, Stefan; Shanks, Orin C; Piringer, Harald; Mach, Robert L; Savio, Domenico; Zessner, Matthias; Farnleitner, Andreas H

    2018-05-01

    Numerous bacterial genetic markers are available for the molecular detection of human sources of fecal pollution in environmental waters. However, widespread application is hindered by a lack of knowledge regarding geographical stability, limiting implementation to a small number of well-characterized regions. This study investigates the geographic distribution of five human-associated genetic markers (HF183/BFDrev, HF183/BacR287, BacHum-UCD, BacH, and Lachno2) in municipal wastewaters (raw and treated) from 29 urban and rural wastewater treatment plants (750–4,400,000 population equivalents) from 13 countries spanning six continents. In addition, genetic markers were tested against 280 human and nonhuman fecal samples from domesticated, agricultural and wild animal sources. Findings revealed that all genetic markers are present in consistently high concentrations in raw (median log10 7.2–8.0 marker equivalents (ME) per 100 mL) and biologically treated wastewater samples (median log10 4.6–6.0 ME per 100 mL) regardless of location and population. The false positive rates of the various markers in nonhuman fecal samples ranged from 5% to 47%. Results suggest that several genetic markers have considerable potential for measuring human-associated contamination in polluted environmental waters. This will be helpful in water quality monitoring, pollution modeling and health risk assessment (as demonstrated by QMRAcatch) to guide target-oriented water safety management across the globe.

  12. Mercury deposition in snow near an industrial emission source in the western U.S. and comparison to ISC3 model predictions

    USGS Publications Warehouse

    Abbott, M.L.; Susong, D.D.; Krabbenhoft, D.P.; Rood, A.S.

    2002-01-01

    Mercury (total and methyl) was evaluated in snow samples collected near a major mercury emission source on the Idaho National Engineering and Environmental Laboratory (INEEL) in southeastern Idaho and 160 km downwind in the Teton Range in western Wyoming. The sampling was done to assess near-field (<12 km) deposition rates around the source, compare them to those measured in a relatively remote, pristine downwind location, and to use the measurements to develop improved, site-specific model input parameters for the precipitation scavenging coefficient and the fraction of Hg emissions deposited locally. Measured snow water concentrations (ng L-1) were converted to deposition (µg m-2) using the sample location snow water equivalent. The deposition was then compared to that predicted by the ISC3 air dispersion/deposition model, which was run with a range of particle and vapor scavenging coefficient input values. Accepted model statistical performance measures (fractional bias and normalized mean square error) were calculated for the different modeling runs, and the best model performance was selected. Measured concentrations close to the source (average = 5.3 ng L-1) were about twice those measured in the Teton Range (average = 2.7 ng L-1), which were within the expected range of values for remote background areas. For most of the sampling locations, the ISC3 model predicted within a factor of two of the observed deposition. The best modeling performance was obtained using a scavenging coefficient value for 0.25 µm diameter particulate and the assumption that all of the mercury is reactive Hg(II) and subject to local deposition. A 0.1 µm particle assumption provided conservative overprediction of the data, while a vapor assumption resulted in highly variable predictions. Partitioning a fraction of the Hg emissions to elemental Hg(0) (a U.S. EPA default assumption for combustion facility risk assessments) would have underpredicted the observed fallout.
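
    The concentration-to-deposition conversion and the two performance measures named in the abstract (fractional bias and normalized mean square error) can be sketched as follows; all numbers here are illustrative, not the study's data:

```python
import numpy as np

# Hypothetical snow measurements: Hg concentration in melted snow and the
# snow water equivalent (SWE) at each sampling location.
conc_ng_per_L = np.array([5.3, 4.1, 2.7])   # ng L^-1
swe_m = np.array([0.30, 0.25, 0.40])        # SWE in meters of water

# 1 m of SWE over 1 m^2 holds 1000 L of water, so:
# ng L^-1 * (SWE_m * 1000 L m^-2) gives ng m^-2; dividing by 1000 gives ug m^-2.
deposition_ug_m2 = conc_ng_per_L * swe_m * 1000.0 / 1000.0

observed = deposition_ug_m2
predicted = np.array([2.0, 1.2, 0.9])       # invented model output for illustration

# Fractional bias (bounded in [-2, 2]) and normalized mean square error,
# the standard dispersion-model performance measures named in the abstract.
fb = 2.0 * (observed.mean() - predicted.mean()) / (observed.mean() + predicted.mean())
nmse = np.mean((observed - predicted) ** 2) / (observed.mean() * predicted.mean())
```

    Runs with different scavenging coefficients would each get an (fb, nmse) pair, and the run closest to (0, 0) would be selected as best-performing.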

  13. 40 CFR 70.2 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... section 183(f) of the Act; (11) Any standard or other requirement of the program to control air pollution... emissions which could not reasonably pass through a stack, chimney, vent, or other functionally-equivalent... means any stationary source (or any group of stationary sources that are located on one or more...

  14. Exploring cover crops as carbon sources for anaerobic soil disinfestation in a vegetable production system

    USDA-ARS?s Scientific Manuscript database

    In a raised-bed plasticulture vegetable production system utilizing anaerobic soil disinfestation (ASD) in Florida field trials, pathogen, weed, and parasitic nematode control was equivalent to or better than the methyl bromide control. Molasses was used as the labile carbon source to stimulate micr...

  15. 40 CFR 430.107 - Pretreatment standards for new sources (PSNS).

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Secondary Fiber Non-Deink Subcategory § 430.107 Pretreatment standards for new sources (PSNS). Except as... biocides: Subpart J [PSNS for secondary fiber non-deink facilities where paperboard from wastepaper is....00030 y = wastewater discharged in kgal per ton of product. a The following equivalent mass limitations...

  16. 40 CFR 430.106 - Pretreatment standards for existing sources (PSES).

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... CATEGORY Secondary Fiber Non-Deink Subcategory § 430.106 Pretreatment standards for existing sources (PSES... [PSES for secondary fiber non-deink facilities where paperboard from wastepaper is produced] Pollutant... = wastewater discharged in kgal per ton of product. a The following equivalent mass limitations are provided as...

  17. Differences in staining intensities affect reported occurrences and concentrations of Giardia spp. in surface drinking water sources

    EPA Science Inventory

    Aim USEPA Method 1623, or its equivalent, is currently used to monitor for protozoan contamination of surface drinking water sources worldwide. At least three approved staining kits used for detecting Cryptosporidium and Giardia are commercially available. This study focuses on ...

  18. 40 CFR 430.106 - Pretreatment standards for existing sources (PSES).

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Fiber Non-Deink Subcategory § 430.106 Pretreatment standards for existing sources (PSES). Except as... secondary fiber non-deink facilities where paperboard from wastepaper is produced] Pollutant or pollutant... per ton of product. a The following equivalent mass limitations are provided as guidance in cases when...

  19. 40 CFR 430.107 - Pretreatment standards for new sources (PSNS).

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... Secondary Fiber Non-Deink Subcategory § 430.107 Pretreatment standards for new sources (PSNS). Except as... biocides: Subpart J [PSNS for secondary fiber non-deink facilities where paperboard from wastepaper is....00030 y = wastewater discharged in kgal per ton of product. a The following equivalent mass limitations...

  20. 40 CFR 430.106 - Pretreatment standards for existing sources (PSES).

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Fiber Non-Deink Subcategory § 430.106 Pretreatment standards for existing sources (PSES). Except as... secondary fiber non-deink facilities where paperboard from wastepaper is produced] Pollutant or pollutant... per ton of product. a The following equivalent mass limitations are provided as guidance in cases when...

  1. 40 CFR 430.106 - Pretreatment standards for existing sources (PSES).

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... CATEGORY Secondary Fiber Non-Deink Subcategory § 430.106 Pretreatment standards for existing sources (PSES... [PSES for secondary fiber non-deink facilities where paperboard from wastepaper is produced] Pollutant... = wastewater discharged in kgal per ton of product. a The following equivalent mass limitations are provided as...

  2. 40 CFR 430.107 - Pretreatment standards for new sources (PSNS).

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...) EFFLUENT GUIDELINES AND STANDARDS THE PULP, PAPER, AND PAPERBOARD POINT SOURCE CATEGORY Secondary Fiber Non...: Subpart J [PSNS for secondary fiber non-deink facilities where paperboard from wastepaper is produced... = wastewater discharged in kgal per ton of product. a The following equivalent mass limitations are provided as...

  3. 40 CFR 430.107 - Pretreatment standards for new sources (PSNS).

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...) EFFLUENT GUIDELINES AND STANDARDS THE PULP, PAPER, AND PAPERBOARD POINT SOURCE CATEGORY Secondary Fiber Non...: Subpart J [PSNS for secondary fiber non-deink facilities where paperboard from wastepaper is produced... = wastewater discharged in kgal per ton of product. a The following equivalent mass limitations are provided as...

  4. 40 CFR 430.57 - Pretreatment standards for new sources (PSNS).

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... compounds as biocides. In cases when POTWs find it necessary to impose mass effluent standards, equivalent mass standards are provided as guidance: Subpart E Pollutant or pollutant property Supplemental PSNS... 40 Protection of Environment 29 2010-07-01 2010-07-01 false Pretreatment standards for new sources...

  5. 40 CFR 430.57 - Pretreatment standards for new sources (PSNS).

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... compounds as biocides. In cases when POTWs find it necessary to impose mass effluent standards, equivalent mass standards are provided as guidance: Subpart E Pollutant or pollutant property Supplemental PSNS... 40 Protection of Environment 30 2011-07-01 2011-07-01 false Pretreatment standards for new sources...

  6. Lower Cody Shale (Niobrara equivalent) in the Bighorn Basin, Wyoming and Montana: thickness, distribution, and source rock potential

    USGS Publications Warehouse

    Finn, Thomas M.

    2014-01-01

    The lower shaly member of the Cody Shale in the Bighorn Basin, Wyoming and Montana is Coniacian to Santonian in age and is equivalent to the upper part of the Carlile Shale and the basal part of the Niobrara Formation in the Powder River Basin to the east. The lower Cody ranges in thickness from 700 to 1,200 feet and underlies much of the central part of the basin. It is composed of gray to black shale, calcareous shale, bentonite, and minor amounts of siltstone and sandstone. Sixty-six samples of the lower Cody Shale, collected from well cuttings, were analyzed using Rock-Eval and total organic carbon analysis to determine the source rock potential. Total organic carbon content averages 2.28 weight percent for the Carlile equivalent interval and reaches a maximum of nearly 5 weight percent. The Niobrara equivalent interval averages about 1.5 weight percent and reaches a maximum of over 3 weight percent, indicating that both intervals are good to excellent source rocks. S2 values from pyrolysis analysis also indicate that both intervals have good to excellent source rock potential. Plots of hydrogen index versus oxygen index, hydrogen index versus Tmax, and S2/S3 ratios indicate that the organic matter contains both Type II and Type III kerogen capable of generating oil and gas. Maps showing the distribution of kerogen types and organic richness for the lower shaly member of the Cody Shale show that it is more organic-rich and more oil-prone in the eastern and southeastern parts of the basin. Thermal maturity based on vitrinite reflectance (Ro), ranging from 0.60–0.80 percent around the margins of the basin and increasing to greater than 2.0 percent in the deepest part of the basin, indicates that the lower Cody is mature to overmature with respect to hydrocarbon generation.

  7. Inverse and Forward Modeling of The 2014 Iquique Earthquake with Run-up Data

    NASA Astrophysics Data System (ADS)

    Fuentes, M.

    2015-12-01

    The April 1, 2014 Mw 8.2 Iquique earthquake excited a moderate tsunami which triggered the national tsunami threat alert. This earthquake was located in the well-known seismic gap in northern Chile, which had a high seismic potential (~Mw 9.0) after the two main large historic events of 1868 and 1877. Nonetheless, studies of the seismic source performed with seismic data inversions suggest that the event exhibited a main patch located around 19.8° S at 40 km depth with a seismic moment equivalent to Mw = 8.2. Thus, a large seismic deficit remains in the gap, capable of releasing an event of Mw = 8.8–8.9. To understand the importance of the tsunami threat in this zone, a seismic source modeling of the Iquique earthquake is performed. A new approach based on stochastic k² seismic sources is presented. A set of such sources is generated and, for each one, a full numerical tsunami model is run in order to obtain the run-up heights along the coastline. The results are compared with the available field run-up measurements and with the tide gauges that registered the signal. The comparison is not uniform; it penalizes discrepancies more heavily close to the peak run-up location. This criterion identifies the seismic source from the set of scenarios that best explains the observations from a statistical point of view. On the other hand, an L2-norm minimization is used to invert the seismic source by comparing the peak nearshore tsunami amplitude (PNTA) with the run-up observations. This method searches a space of solutions for the best seismic configuration by retrieving the Green's function coefficients that explain the field measurements. The results obtained confirm that a concentrated down-dip slip patch adequately models the run-up data.
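
    The PNTA-based L2-norm inversion described above amounts to a linear least-squares fit of subfault contributions to the observations. A toy sketch under that assumption, with invented Green's functions and slip values:

```python
import numpy as np

# Toy linear inversion: observed peak amplitudes are modeled as G @ slip,
# where column j of G holds the unit-slip response of subfault j at each
# observation point. All values are invented for illustration.
rng = np.random.default_rng(1)
n_obs, n_subfaults = 12, 4
G = rng.uniform(0.0, 1.0, size=(n_obs, n_subfaults))     # Green's function matrix
true_slip = np.array([0.2, 1.8, 0.9, 0.1])               # concentrated slip patch
runup_obs = G @ true_slip + rng.normal(0.0, 0.01, n_obs) # noisy "observations"

# L2-norm minimization: recover the slip coefficients by least squares.
slip_est, *_ = np.linalg.lstsq(G, runup_obs, rcond=None)
```

    In the real problem the Green's functions come from tsunami simulations of unit sources, and physical constraints (e.g. non-negative slip) may be added to the minimization.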

  8. Future Change of Snow Water Equivalent over Japan

    NASA Astrophysics Data System (ADS)

    Hara, M.; Kawase, H.; Kimura, F.; Fujita, M.; Ma, X.

    2012-12-01

    The western side of Honshu Island and Hokkaido Island in Japan are among the heaviest snowfall areas in the world. Although heavy snowfall often brings disaster, snow is one of the major water sources for agricultural, industrial, and household use in Japan. Even during the winter, the monthly mean surface air temperature often exceeds 0 °C in large parts of the heavy snow areas along the Sea of Japan. Thus, snow cover may be seriously reduced in these areas as a result of global warming caused by an increase in greenhouse gases. Changes in the seasonal march of snow water equivalent, e.g., in snowmelt timing and amount, will strongly influence socio-economic activities. We performed a series of numerical experiments, including present and future climate simulations and much-snow and less-snow cases, using a regional climate model. The Pseudo-Global-Warming (PGW) method (Kimura and Kitoh, 2008) is applied for the future climate simulations. MIROC 3.2 medres 2070s output under the IPCC SRES A2 scenario and 1990s output under the 20c3m scenario were used for the PGW method. The precipitation, snow depth, and surface air temperature of the hindcast simulations show good agreement with the AMeDAS station data. In much-snow cases, the maximum total snow water equivalent over Japan decreased by 49% due to climate change. The main cause of the decrease in total snow water equivalent is the air temperature rise due to global climate change. The difference in precipitation amount between the present and future simulations is small.

  9. Symbolic computation of equivalence transformations and parameter reduction for nonlinear physical models

    NASA Astrophysics Data System (ADS)

    Cheviakov, Alexei F.

    2017-11-01

    An efficient systematic procedure is provided for symbolic computation of Lie groups of equivalence transformations and generalized equivalence transformations of systems of differential equations that contain arbitrary elements (arbitrary functions and/or arbitrary constant parameters), using the software package GeM for Maple. Application of equivalence transformations to the reduction of the number of arbitrary elements in a given system of equations is discussed, and several examples are considered. The first computational example of generalized equivalence transformations where the transformation of the dependent variable involves an arbitrary constitutive function is presented. As a detailed physical example, a three-parameter family of nonlinear wave equations describing finite anti-plane shear displacements of an incompressible hyperelastic fiber-reinforced medium is considered. Equivalence transformations are computed and employed to radically simplify the model for an arbitrary fiber direction, invertibly reducing the model to a simple form that corresponds to a special fiber direction, and involves no arbitrary elements. The presented computation algorithm is applicable to wide classes of systems of differential equations containing arbitrary elements.

  10. The Evaluation of the 0.07 and 3 mm Dose Equivalent with a Portable Beta Spectrometer

    NASA Astrophysics Data System (ADS)

    Hoshi, Katsuya; Yoshida, Tadayoshi; Tsujimura, Norio; Okada, Kazuhiko

    Beta spectra of various nuclide species were measured using a commercially available compact spectrometer. The shapes of the spectra obtained with the spectrometer closely matched those of the theoretical spectra. The beta dose equivalent at any depth was obtained as the product of the measured pulse-height spectra and the appropriate conversion coefficients of ICRP Publication 74. The dose rates evaluated from the spectra were comparable with the reference dose rates of standard beta calibration sources. In addition, we were able to determine the dose equivalents with a relative error of indication of 10% without the need for complicated corrections.
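
    The dose evaluation described, a product of the measured pulse-height spectrum and depth-specific conversion coefficients, reduces to a weighted sum over energy bins. A sketch with placeholder coefficients (invented values, not the ICRP Publication 74 data):

```python
import numpy as np

# Binned beta spectrum (counts per energy bin) and per-count conversion
# coefficients for the 0.07 mm depth. Both arrays are invented placeholders;
# real coefficients would be interpolated from ICRP Publication 74 tables.
energy_MeV = np.array([0.2, 0.5, 1.0, 1.5, 2.0])
counts = np.array([120.0, 300.0, 260.0, 90.0, 20.0])   # measured pulse-height spectrum
h_coeff_007 = np.array([0.5, 1.2, 1.6, 1.8, 1.9])      # conversion coefficients (made up)

# Dose equivalent at the chosen depth: sum over bins of counts x coefficient.
dose_equivalent = float(np.sum(counts * h_coeff_007))
```

    Swapping in the coefficient set for a different depth (e.g. 3 mm) yields the corresponding dose equivalent from the same measured spectrum.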

  11. A novel fully-humanised 3D skin equivalent to model early melanoma invasion

    PubMed Central

    Hill, David S; Robinson, Neil D P; Caley, Matthew P; Chen, Mei; O’Toole, Edel A; Armstrong, Jane L; Przyborski, Stefan; Lovat, Penny E

    2015-01-01

    Metastatic melanoma remains incurable, emphasising the acute need for improved research models to investigate the underlying biological mechanisms mediating tumour invasion and metastasis, and to develop more effective targeted therapies to improve clinical outcome. Available animal models of melanoma do not accurately reflect human disease and current in vitro human skin equivalent models incorporating melanoma cells are not fully representative of the human skin microenvironment. We have developed a robust and reproducible, fully-humanised 3D skin equivalent comprising a stratified, terminally differentiated epidermis and a dermal compartment consisting of fibroblast-generated extracellular matrix. Melanoma cells incorporated into the epidermis were able to invade through the basement membrane and into the dermis, mirroring early tumour invasion in vivo. Comparison of our novel 3D melanoma skin equivalent with melanoma in situ and metastatic melanoma indicates this model accurately recreates features of disease pathology, making it a physiologically representative model of early radial and vertical growth phase melanoma invasion. PMID:26330548

  12. Comment on ''Equivalence between the Thirring model and a derivative-coupling model''

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Banerjee, R.

    1988-06-15

    An operator equivalence between the Thirring model and the fermionic sector of a Dirac field interacting via derivative coupling with two scalar fields is established in the path-integral framework. Relations between the coupling parameters of the two models, as found by Gomes and da Silva, can be reproduced.

  13. The Equivalence of Two Methods of Parameter Estimation for the Rasch Model.

    ERIC Educational Resources Information Center

    Blackwood, Larry G.; Bradley, Edwin L.

    1989-01-01

    Two methods of estimating parameters in the Rasch model are compared. The equivalence of likelihood estimations from the model of G. J. Mellenbergh and P. Vijn (1981) and from usual unconditional maximum likelihood (UML) estimation is demonstrated. Mellenbergh and Vijn's model is a convenient method of calculating UML estimates. (SLD)

  14. Threshold flux-controlled memristor model and its equivalent circuit implementation

    NASA Astrophysics Data System (ADS)

    Wu, Hua-Gan; Bao, Bo-Cheng; Chen, Mo

    2014-11-01

    Modeling a memristor is an effective way to explore memristor properties because memristor devices are still not commercially available to most researchers. In this paper, a physical memristive device is assumed to exist whose ionic drift direction is perpendicular to the direction of the applied voltage; based on this assumption, and as a counterpart to the HP charge-controlled memristor model, a novel threshold flux-controlled memristor model with a window function is proposed. The fingerprints of the proposed model are analyzed. In particular, a practical equivalent circuit of the proposed model is realized, from which the corresponding experimental fingerprints are captured. The equivalent circuit of the threshold memristor model is suitable for various memristor-based breadboard experiments.

  15. Using open source data for flood risk mapping and management in Brazil

    NASA Astrophysics Data System (ADS)

    Whitley, Alison; Malloy, James; Chirouze, Manuel

    2013-04-01

    Worldwide, the frequency and severity of major natural disasters, particularly flooding, have increased. Concurrently, countries such as Brazil are experiencing rapid socio-economic development with growing and increasingly concentrated populations, particularly in urban areas. Hence, it is unsurprising that Brazil has experienced a number of major floods in the past 30 years, such as the January 2011 floods which killed 900 people and resulted in significant economic losses of approximately 1 billion US dollars. Understanding, mitigating and even preventing flood risk is a high priority. There is a demand for flood models in many developing economies worldwide for a range of uses including risk management, emergency planning and provision of insurance solutions. However, developing them can be expensive. With an increasing supply of freely-available, open source data, the costs can be significantly reduced, making the tools required for natural hazard risk assessment more accessible. By presenting a flood model developed for eight urban areas of Brazil as part of a collaboration between JBA Risk Management and Guy Carpenter, we explore the value of open source data and demonstrate its usability in a business context within the insurance industry. We begin by detailing the open source data available and compare its suitability to commercially-available equivalents for datasets including digital terrain models and river gauge records. We present flood simulation outputs in order to demonstrate the impact of the choice of dataset on the results obtained and its use in a business context. Via use of the 2D hydraulic model JFlow+, our examples also show how advanced modelling techniques can be used on relatively crude datasets to obtain robust and good quality results. 
In combination with accessible, standard specification GPU technology and open source data, use of JFlow+ has enabled us to produce large-scale hazard maps suitable for business use and emergency planning such as those we show for Brazil.

  16. Equivalent Air Spring Suspension Model for Quarter-Passive Model of Passenger Vehicles

    PubMed Central

    Abid, Haider J.; Chen, Jie; Nassar, Ameen A.

    2015-01-01

    This paper investigates the equivalence of the GENSIS air spring suspension system to a passive suspension system. SIMULINK simulation together with OptiY optimization is used to obtain the air spring suspension model equivalent to the passive suspension system, where the difference between the car-body responses of the two systems under the same road profile inputs is used as the objective function for the optimization (OptiY program). The parameters of the air spring system, such as initial pressure, volume of bag, length of surge pipe, diameter of surge pipe, and volume of reservoir, are obtained from the optimization. The simulation results show that the equivalent air spring suspension system can produce responses very close to those of the passive suspension system. PMID:27351020

  17. Wind Tunnel Testing of Various Disk-Gap-Band Parachutes

    NASA Technical Reports Server (NTRS)

    Cruz, Juan R.; Mineck, Raymond E.; Keller, Donald F.; Bobskill, Maria V.

    2003-01-01

    Two Disk-Gap-Band model parachute designs were tested in the NASA Langley Transonic Dynamics Tunnel. The purposes of these tests were to determine the drag and static stability coefficients of these two model parachutes at various subsonic Mach numbers in support of the Mars Exploration Rover mission. The two model parachute designs were designated 1.6 Viking and MPF. These model parachute designs were chosen to investigate the tradeoff between drag and static stability. Each of the parachute designs was tested with models fabricated from MIL-C-7020 Type III or F-111 fabric. The reason for testing model parachutes fabricated with different fabrics was to evaluate the effect of fabric permeability on the drag and static stability coefficients. Several improvements over the Viking-era wind tunnel tests were implemented in the testing procedures and data analyses. Among these improvements were corrections for test fixture drag interference and blockage effects, and use of an improved test fixture for measuring static stability coefficients. The 1.6 Viking model parachutes had drag coefficients from 0.440 to 0.539, while the MPF model parachutes had drag coefficients from 0.363 to 0.428. The 1.6 Viking model parachutes had drag coefficients 18 to 22 percent higher than the MPF model parachutes for equivalent fabric materials and test conditions. Model parachutes of the same design tested at the same conditions had drag coefficients approximately 11 to 15 percent higher when manufactured from F-111 fabric as compared to those fabricated from MIL-C-7020 Type III fabric. The lower fabric permeability of the F-111 fabric was the source of this difference. The MPF model parachutes had smaller absolute statically stable trim angles of attack as compared to the 1.6 Viking model parachutes for equivalent fabric materials and test conditions. This was attributed to the MPF model parachutes' larger band height to nominal diameter ratio. 
For both designs, model parachutes fabricated from F-111 fabric had significantly greater statically stable absolute trim angles of attack at equivalent test conditions as compared to those fabricated from MIL-C-7020 Type III fabric. This reduction in static stability exhibited by model parachutes fabricated from F-111 fabric was attributed to the lower permeability of the F-111 fabric. The drag and static stability coefficient results were interpolated to obtain their values at Mars flight conditions using total porosity as the interpolating parameter.

  18. Water Isotopes in Precipitation: Data/Model Comparison for Present-Day and Past Climates

    NASA Technical Reports Server (NTRS)

    Jouzel, J.; Hoffmann, G.; Masson, V.

    1998-01-01

    Variations of HDO and H2O-18 concentrations are observed in precipitation both on a geographical and on a temporal basis. These variations, resulting from successive isotopic fractionation processes at each phase change of water during its atmospheric cycle, are well documented through the IAEA/WMO network and other sources. Isotope concentrations are, in middle and high latitudes, linearly related to the annual mean temperature at the precipitation site. Paleoclimatologists have used this relationship to infer paleotemperatures from isotope paleodata extractable from ice cores, deep groundwater and other such sources. For this application to be valid, however, the spatial relationship must also hold in time at a given location as the location undergoes a series of climatic changes. Progress in water isotope modeling aimed at examining and evaluating this assumption has recently been reviewed, with a focus on polar regions and, more specifically, on Greenland. That review was largely based on the results obtained using the isotopic version of the NASA/GISS Atmospheric General Circulation Model (AGCM) fitted with isotope tracer diagnostics. We extend this review by comparing the results of two different isotopic AGCMs (NASA/GISS and ECHAM) and by examining, with a more global perspective, the validity of the above assumption, i.e., the equivalence of the spatial and temporal isotope-temperature relationships. We also examine recent progress made in modeling the relationship between the conditions prevailing in moisture source regions for precipitation and the deuterium-excess of that precipitation.

  19. Equivalence principle and bound kinetic energy.

    PubMed

    Hohensee, Michael A; Müller, Holger; Wiringa, R B

    2013-10-11

    We consider the role of the internal kinetic energy of bound systems of matter in tests of the Einstein equivalence principle. Using the gravitational sector of the standard model extension, we show that stringent limits on equivalence principle violations in antimatter can be indirectly obtained from tests using bound systems of normal matter. We estimate the bound kinetic energy of nucleons in a range of light atomic species using Green's function Monte Carlo calculations, and for heavier species using a Woods-Saxon model. We survey the sensitivities of existing and planned experimental tests of the equivalence principle, and report new constraints at the level of between a few parts in 10^6 and parts in 10^8 on violations of the equivalence principle for matter and antimatter.

  20. SU-C-9A-04: Alternative Analytic Solution to the Paralyzable Detector Model to Calculate Deadtime and Deadtime Loss

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Siman, W; Kappadath, S

    2014-06-01

    Purpose: Some common methods to solve for deadtime are (1) the dual-source method, which assumes two equal activities; (2) model fitting, which requires multiple acquisitions as the source decays; and (3) the lossless model, which assumes no deadtime loss at low count rates. We propose a new analytic alternative solution to calculate deadtime for a paralyzable gamma camera. Methods: Deadtime T can be calculated analytically from two distinct observed count rates M1 and M2 when the ratio of the true count rates alpha=N2/N1 is known. Alpha can be measured as a ratio of two measured activities using dose calibrators or via radioactive decay. Knowledge of alpha creates a system with 2 equations and 2 unknowns, i.e., T and N1. To verify the validity of the proposed method, projections of a non-uniform phantom (4GBq 99mTc) were acquired using a Siemens Symbia S multiple times over 48 hours. Each projection has >100kcts. The deadtime for each projection was calculated by fitting the data to a paralyzable model and also by using the proposed 2-acquisition method. The two estimates of deadtime were compared using the Bland-Altman method. In addition, the dependency of uncertainty in T on uncertainty in alpha was investigated for several imaging conditions. Results: The results strongly suggest that the 2-acquisition method is equivalent to the fitting method. The Bland-Altman analysis yielded a mean difference in deadtime estimate of ∼0.076us (95%CI: -0.049us, 0.103us) between the 2-acquisition and model fitting methods. The 95% limits of agreement were calculated to be -0.104 to 0.256us. The uncertainty in deadtime calculated using the proposed method is highly dependent on the uncertainty in the ratio alpha. Conclusion: The 2-acquisition method was found to be equivalent to the parameter fitting method. The proposed method offers a simpler and more practical way to analytically solve for a paralyzable detector deadtime, especially during physics testing.
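    The abstract does not spell out the algebra, but under the paralyzable model M = N·exp(-N·T) the 2-acquisition idea admits a closed-form solution. A minimal sketch (function and variable names are ours, not the authors'):

```python
import math

def deadtime_two_acquisitions(m1, m2, alpha):
    """Solve the paralyzable model m = n * exp(-n * tau) for the deadtime
    tau, given two observed rates m1, m2 and the known true-rate ratio
    alpha = n2 / n1 (e.g. from a dose-calibrator activity ratio)."""
    # Dividing the two model equations eliminates n1:
    #   m2 / m1 = alpha * exp(-(alpha - 1) * n1 * tau)
    # so with x = n1 * tau:
    x = math.log(alpha * m1 / m2) / (alpha - 1.0)
    # Back-substitute into m1 = (x / tau) * exp(-x):
    tau = x * math.exp(-x) / m1
    return tau, x / tau  # deadtime and the recovered true rate n1

# Synthetic check: tau = 2 us, n1 = 1e5 cps, alpha = 2
tau_true, n1_true, alpha = 2e-6, 1e5, 2.0
m1 = n1_true * math.exp(-n1_true * tau_true)
m2 = alpha * n1_true * math.exp(-alpha * n1_true * tau_true)
tau_est, n1_est = deadtime_two_acquisitions(m1, m2, alpha)
```

    As the abstract notes, the accuracy of tau hinges on the accuracy of alpha, since alpha enters through the logarithm above.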

  1. Considering the reversibility of passive and reactive transport problems: Are forward-in-time and backward-in-time models ever equivalent?

    NASA Astrophysics Data System (ADS)

    Engdahl, N.

    2017-12-01

    Backward in time (BIT) simulations of passive tracers are often used for capture zone analysis, source area identification, and generation of travel time and age distributions. The BIT approach has the potential to become an immensely powerful tool for direct inverse modeling, but the necessary relationships between the processes modeled in the forward and backward models have yet to be formally established. This study explores the time reversibility of passive and reactive transport models in a variety of 2D heterogeneous domains using particle-based random walk methods for the transport and nonlinear reaction steps. Distributed forward models are used to generate synthetic observations that form the initial conditions for the backward in time models, and we consider both linear-flood and point injections. The results for passive travel time distributions show that forward and backward models are not exactly equivalent but that the linear-flood BIT models are reasonable approximations. Point-based BIT models fall within the travel time range of the forward models, though their distributions can be distinctive in some cases. The BIT approximation is not as robust when nonlinear reactive transport is considered, and we find that this reaction system is only exactly reversible under uniform flow conditions. We use a series of simplified, longitudinally symmetric, but heterogeneous, domains to illustrate the causes of these discrepancies between the two model types. Many of the discrepancies arise because diffusion is a "self-adjoint" operator, which causes mass to spread in both the forward and backward models. This allows particles to enter low velocity regions in both models, which has opposite effects in the forward and reverse models. It may be possible to circumvent some of these limitations using an anti-diffusion model to undo mixing when time is reversed, but this is beyond the capabilities of the existing Lagrangian methods.

  2. The Equivalence of Regression Models Using Difference Scores and Models Using Separate Scores for Each Informant: Implications for the Study of Informant Discrepancies

    ERIC Educational Resources Information Center

    Laird, Robert D.; Weems, Carl F.

    2011-01-01

    Research on informant discrepancies has increasingly utilized difference scores. This article demonstrates the statistical equivalence of regression models using difference scores (raw or standardized) and regression models using separate scores for each informant to show that interpretations should be consistent with both models. First,…

  3. Equivalency of the DINA Model and a Constrained General Diagnostic Model. Research Report. ETS RR-11-37

    ERIC Educational Resources Information Center

    von Davier, Matthias

    2011-01-01

    This report shows that the deterministic-input noisy-AND (DINA) model is a special case of more general compensatory diagnostic models by means of a reparameterization of the skill space and the design (Q-) matrix of item by skills associations. This reparameterization produces a compensatory model that is equivalent to the (conjunctive) DINA…

  4. Modeling the Footprint and Equivalent Radiance Transfer Path Length for Tower-Based Hemispherical Observations of Chlorophyll Fluorescence

    PubMed Central

    Liu, Xinjie; Liu, Liangyun; Hu, Jiaochan; Du, Shanshan

    2017-01-01

    The measurement of solar-induced chlorophyll fluorescence (SIF) is a new tool for estimating gross primary production (GPP). Continuous tower-based spectral observations together with flux measurements are an efficient way of linking the SIF to the GPP. Compared to conical observations, hemispherical observations made with a cosine-corrected foreoptic have a much larger field of view and can better match the footprint of the tower-based flux measurements. However, estimating the equivalent radiation transfer path length (ERTPL) for hemispherical observations is more complex than for conical observations, and this is a key problem that needs to be addressed before accurate retrieval of SIF can be made. In this paper, we first modeled the footprint of hemispherical spectral measurements and found that, under convective conditions with light winds, 90% of the total radiation came from an FOV of width 72°, which in turn covered 75.68% of the source area of the flux measurements. In contrast, conical spectral observations covered only 1.93% of the flux footprint. Secondly, using theoretical considerations, we modeled the ERTPL of the hemispherical spectral observations made with a cosine-corrected foreoptic and found that the ERTPL was approximately equal to twice the sensor height above the canopy. Finally, the modeled ERTPL was evaluated using a simulated dataset. The ERTPL calculated using the simulated data was about 1.89 times the sensor's height above the target surface, which was quite close to the modeled ERTPL. Furthermore, the SIF retrieved from atmospherically corrected spectra using the modeled ERTPL fitted well with the reference values, giving a relative root mean square error of 18.22%. These results show that the modeled ERTPL was reasonable and that this method is applicable to tower-based hemispherical observations of SIF. PMID:28509843

  5. Low Luminosity States of the Black Hole Candidate GX 339-4. I. ASCA and Simultaneous Radio/RXTE Observations

    NASA Technical Reports Server (NTRS)

    Wilms, Joern; Nowak, Michael A.; Dove, James B.; Fender, Robert P.; DiMatteo, Tiziana

    1998-01-01

    We discuss a series of observations of the black hole candidate GX 339-4 in low luminosity, spectrally hard states. We present spectral analysis of three separate archival Advanced Satellite for Cosmology and Astrophysics (ASCA) data sets and eight separate Rossi X-ray Timing Explorer (RXTE) data sets. Three of the RXTE observations were strictly simultaneous with 843 MHz and 8.3-9.1 GHz radio observations. All of these observations have 3-9 keV flux approximately less than 10^-9 ergs s^-1 cm^-2. The ASCA data show evidence for an approximately 6.4 keV Fe line with equivalent width of approximately 40 eV, as well as evidence for a soft excess that is well-modeled by a power law plus a multicolor blackbody spectrum with peak temperature of approximately 150-200 eV. The RXTE data sets also show evidence of an Fe line with equivalent widths of approximately 20-100 eV. Reflection models show a hardening of the RXTE spectra with decreasing X-ray flux; however, these models do not exhibit evidence of a correlation between the photon index of the incident power law flux and the solid angle subtended by the reflector. 'Sphere+disk' Comptonization models and Advection Dominated Accretion Flow (ADAF) models also provide reasonable descriptions of the RXTE data. The former models yield coronal temperatures in the range 20-50 keV and optical depths of tau approximately equal to 3. The model fits to the X-ray data, however, do not simultaneously explain the observed radio properties. The most likely source of the radio flux is synchrotron emission from an extended outflow of extent greater than O(10^7 GM/c^2).

  6. Equivalence of Einstein and Jordan frames in quantized anisotropic cosmological models

    NASA Astrophysics Data System (ADS)

    Pandey, Sachin; Pal, Sridip; Banerjee, Narayan

    2018-06-01

    The present work shows that the mathematical equivalence of the Jordan frame and its conformally transformed version, the Einstein frame, as far as Brans-Dicke theory is concerned, survives quantization of cosmological models arising as solutions to the Brans-Dicke theory. We work within the Wheeler-DeWitt quantization scheme and take up quite a few anisotropic cosmological models as examples. We effectively show that the transformation from the Jordan to the Einstein frame is a canonical one and hence the two frames furnish equivalent descriptions of the same physical scenario.

  7. Equivalence of MAXENT and Poisson point process models for species distribution modeling in ecology.

    PubMed

    Renner, Ian W; Warton, David I

    2013-03-01

    Modeling the spatial distribution of a species is a fundamental problem in ecology. A number of modeling methods have been developed, an extremely popular one being MAXENT, a maximum entropy modeling approach. In this article, we show that MAXENT is equivalent to a Poisson regression model and hence is related to a Poisson point process model, differing only in the intercept term, which is scale-dependent in MAXENT. We illustrate a number of improvements to MAXENT that follow from these relations. In particular, a point process model approach facilitates methods for choosing the appropriate spatial resolution, assessing model adequacy, and choosing the LASSO penalty parameter, all currently unavailable to MAXENT. The equivalence result represents a significant step in the unification of the species distribution modeling literature. Copyright © 2013, The International Biometric Society.

  8. Equivalent linear damping characterization in linear and nonlinear force-stiffness muscle models.

    PubMed

    Ovesy, Marzieh; Nazari, Mohammad Ali; Mahdavian, Mohammad

    2016-02-01

    In the current research, the muscle equivalent linear damping coefficient, which is introduced as the force-velocity relation in a muscle model, and the corresponding time constant are investigated. In order to reach this goal, a 1D skeletal muscle model was used. Two characterizations of this model, using a linear force-stiffness relationship (Hill-type model) and a nonlinear one, have been implemented. The OpenSim platform was used for verification of the model. Isometric activation was used for the simulation. The equivalent linear damping and the time constant of each model were extracted from the simulation results. The results provide a better insight into the characteristics of each model. It is found that the nonlinear models had a response rate closer to reality than the Hill-type models.

  9. Multilevel Optimization Framework for Hierarchical Stiffened Shells Accelerated by Adaptive Equivalent Strategy

    NASA Astrophysics Data System (ADS)

    Wang, Bo; Tian, Kuo; Zhao, Haixin; Hao, Peng; Zhu, Tianyu; Zhang, Ke; Ma, Yunlong

    2017-06-01

    In order to improve the post-buckling optimization efficiency of hierarchical stiffened shells, a multilevel optimization framework accelerated by an adaptive equivalent strategy is presented in this paper. Firstly, the Numerical-based Smeared Stiffener Method (NSSM) for hierarchical stiffened shells is derived by means of the numerical implementation of asymptotic homogenization (NIAH) method. Based on the NSSM, a reasonable adaptive equivalent strategy for hierarchical stiffened shells is developed from the concept of hierarchy reduction. Its core idea is to decide self-adaptively which hierarchy of the structure should be treated as equivalent, according to the critical buckling mode rapidly predicted by NSSM. Compared with the detailed model, the high prediction accuracy and efficiency of the proposed model are highlighted. On the basis of this adaptive equivalent model, a multilevel optimization framework is then established by decomposing the complex entire optimization process into major-stiffener-level and minor-stiffener-level sub-optimizations, during which Fixed Point Iteration (FPI) is employed to accelerate convergence. Finally, illustrative examples of the multilevel framework are presented to demonstrate its efficiency and effectiveness in searching for the global optimum, in contrast with the single-level optimization method. Remarkably, the high efficiency and flexibility of the adaptive equivalent strategy are demonstrated by comparison with the single equivalent strategy.

  10. Skin photosensitivity as a model in photodynamic therapy

    NASA Astrophysics Data System (ADS)

    Richter, Anna M.; Jain, Ashok K.; Canaan, Alice J.; Meadows, Howard; Levy, Julia G.

    1996-01-01

    Skin photosensitivity is the most common side effect of photodynamic therapy (PDT) and in clinical situations needs to be avoided or at least minimized. However, because of the accessibility of skin tissue, skin photosensitivity represents a useful in vivo test system for evaluating the pharmacokinetics of photosensitizers and light sources. Pig skin resembles human skin in many aspects and is therefore most suitable for these tests. Using pig skin photosensitivity as an end point, we evaluated the effect of cell loading with a photosensitizer, benzoporphyrin derivative (BPD verteporfin), following its intravenous administration either as a rapid bolus or as a slow infusion. Skin response to light activation indicated a very similar cellular content of BPD. These results were in agreement with those obtained in an in vitro model. In addition, in the same pig skin photosensitivity model we compared the efficiency of activation of BPD with either laser (690 plus or minus 3 nm) or light-emitting diode (LED; 690 plus or minus 12 nm) light. Results indicated the equivalency of the two light sources in this test system, with LED light being slightly more efficient, possibly due to a fluence rate lower than that of laser light.

  11. Fault slip and seismic moment of the 1700 Cascadia earthquake inferred from Japanese tsunami descriptions

    USGS Publications Warehouse

    Satake, K.; Wang, K.; Atwater, B.F.

    2003-01-01

    The 1700 Cascadia earthquake attained moment magnitude 9 according to new estimates based on effects of its tsunami in Japan, computed coseismic seafloor deformation for hypothetical ruptures in Cascadia, and tsunami modeling in the Pacific Ocean. Reports of damage and flooding show that the 1700 Cascadia tsunami reached 1-5 m heights at seven shoreline sites in Japan. Three sets of estimated heights express uncertainty about location and depth of reported flooding, landward decline in tsunami heights from shorelines, and post-1700 land-level changes. We compare each set with tsunami heights computed from six Cascadia sources. Each source is a vertical seafloor displacement calculated with a three-dimensional elastic dislocation model; for three sources the rupture extends the 1100 km length of the subduction zone and differs in width and shallow dip; for the other sources, ruptures of ordinary width extend 360-670 km. To compute tsunami waveforms, we use a linear long-wave approximation with a finite difference method, and we employ modern bathymetry with nearshore grid spacing as small as 0.4 km. The various combinations of Japanese tsunami heights and Cascadia sources give seismic moment of 1-9 × 10^22 N m, equivalent to moment magnitude 8.7-9.2. This range excludes several unquantified uncertainties. The most likely earthquake, of moment magnitude 9.0, has 19 m of coseismic slip on an offshore, full-slip zone 1100 km long with linearly decreasing slip on a downdip partial-slip zone. The shorter rupture models require up to 40 m of offshore slip and predict land-level changes inconsistent with coastal paleoseismological evidence. Copyright 2003 by the American Geophysical Union.
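    The moment-to-magnitude conversion quoted above can be checked with the standard Hanks-Kanamori relation Mw = (2/3)(log10 M0 - 9.1), with M0 in N·m; the paper's exact convention may differ slightly at the low end of the range:

```python
import math

def moment_magnitude(m0):
    """Hanks-Kanamori moment magnitude from seismic moment m0 in N*m."""
    return (2.0 / 3.0) * (math.log10(m0) - 9.1)

# The paper's seismic moment range of 1-9 x 10^22 N*m:
low, high = moment_magnitude(1e22), moment_magnitude(9e22)
# gives roughly 8.6 to 9.2, consistent with the reported 8.7-9.2
```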

  12. Development of FWIGPR, an open-source package for full-waveform inversion of common-offset GPR data

    NASA Astrophysics Data System (ADS)

    Jazayeri, S.; Kruse, S.

    2017-12-01

    We introduce a package for full-waveform inversion (FWI) of Ground Penetrating Radar (GPR) data based on a combination of open-source programs. The FWI requires a good starting model, based on direct knowledge of field conditions or on traditional ray-based inversion methods. With a good starting model, the FWI can improve resolution of selected subsurface features. The package will be made available for general use in educational and research activities. The FWIGPR package consists of four main components: 3D to 2D data conversion, source wavelet estimation, forward modeling, and inversion. (These four components additionally require the development, by the user, of a good starting model.) A major challenge with GPR data is the unknown form of the waveform emitted by the transmitter held close to the ground surface. We apply a blind deconvolution method to estimate the source wavelet, based on a sparsity assumption about the reflectivity series of the subsurface model (Gholami and Sacchi 2012). The estimated wavelet is deconvolved from the data, yielding the sparsest reflectivity series with the fewest reflectors. The gprMax code (www.gprmax.com) is used as the forward modeling tool and the PEST parameter estimation package (www.pesthomepage.com) for the inversion. To reduce computation time, the field data are converted to an effective 2D equivalent, and the gprMax code can be run in 2D mode. In the first step, the user must create a good starting model of the data, presumably using ray-based methods. This estimated model is introduced to the FWI process as an initial model. Next, the 3D data are converted to 2D, and the user then estimates the source wavelet that best fits the observed data under the sparsity assumption about the earth's response. 
Last, PEST runs gprMax with the initial model and calculates the misfit between the synthetic and observed data, and, using an iterative algorithm that calls gprMax several times in each iteration, finds successive models that better fit the data. To gauge whether the iterative process has arrived at a local or a global minimum, the process can be repeated with a range of starting models. Tests have shown that this package can successfully improve estimates of selected subsurface model parameters for simple synthetic and real data. Ongoing research will focus on FWI of more complex scenarios.

  13. Predicting the digestible energy of corn determined with growing swine from nutrient composition and cross-species measurements.

    PubMed

    Smith, B; Hassen, A; Hinds, M; Rice, D; Jones, D; Sauber, T; Iiams, C; Sevenich, D; Allen, R; Owens, F; McNaughton, J; Parsons, C

    2015-03-01

    The DE values of corn grain for pigs differ among corn sources. More accurate prediction of DE may improve diet formulation and reduce diet cost. Corn grain sources (n = 83) were assayed with growing swine (20 kg) in DE experiments with total collection of feces, with 3-wk-old broiler chicks in nitrogen-corrected apparent ME (AME) trials, and with cecectomized adult roosters in nitrogen-corrected true ME (TME) studies. Additional AME data for the corn grain source set were generated based on an existing near-infrared transmittance prediction model (near-infrared transmittance-predicted AME [NIT-AME]). Corn source nutrient composition was determined by wet chemistry methods. These data were then used to 1) test the accuracy of predicting swine DE of individual corn sources based on available literature equations and nutrient composition and 2) develop models for predicting DE of sources from nutrient composition and the cross-species information gathered above (AME, NIT-AME, and TME). The overall measured DE, AME, NIT-AME, and TME values were 4,105 ± 11, 4,006 ± 10, 4,004 ± 10, and 4,086 ± 12 kcal/kg DM, respectively. Prediction models were developed using 80% of the corn grain sources; the remaining 20% was reserved for validation of the developed prediction equation. Literature equations based on nutrient composition proved imprecise for predicting corn DE; the root mean square error of prediction ranged from 105 to 331 kcal/kg, an equivalent of 2.6 to 8.8% error. Yet among the corn composition traits, 4-variable models developed in the current study provided adequate prediction of DE (model R² ranging from 0.76 to 0.79 and root mean square error [RMSE] of 50 kcal/kg). When prediction equations were tested using the validation set, these models had a 1 to 1.2% error of prediction. Simple linear equations from AME, NIT-AME, or TME provided an accurate prediction of DE for individual sources (R² ranged from 0.65 to 0.73 and RMSE ranged from 50 to 61 kcal/kg).
Percentage error of prediction based on the validation data set was greater (1.4%) for the TME model than for the NIT-AME or AME models (1 and 1.2%, respectively), indicating that swine DE values could be accurately predicted by using AME or NIT-AME. In conclusion, regression equations developed from broiler measurements or from analyzed nutrient composition proved adequate to reliably predict the DE of commercially available corn hybrids for growing pigs.
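The single-predictor calibration the study reports (DE regressed on AME, fit on 80% of sources, validated on the held-out 20%) can be sketched as follows. All data here are synthetic and the regression coefficients are assumed for illustration; only the overall means and the 80/20 split mirror the abstract:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for per-source energy values (kcal/kg DM);
# the real study measured AME and DE for 83 corn sources.
n = 83
ame = rng.normal(4006.0, 40.0, n)
de = 1.02 * ame + 20.0 + rng.normal(0.0, 30.0, n)  # assumed linear relation + noise

# 80/20 calibration/validation split, as in the study design
idx = rng.permutation(n)
n_cal = int(0.8 * n)
cal, val = idx[:n_cal], idx[n_cal:]

# Fit DE = a + b * AME by ordinary least squares on the calibration set
X = np.column_stack([np.ones(cal.size), ame[cal]])
coef, *_ = np.linalg.lstsq(X, de[cal], rcond=None)

# Evaluate on the validation set: RMSE and mean percentage error,
# the two accuracy measures quoted in the abstract
pred = coef[0] + coef[1] * ame[val]
rmse = float(np.sqrt(np.mean((pred - de[val]) ** 2)))
pct_err = float(100.0 * np.mean(np.abs(pred - de[val]) / de[val]))
```

Holding out a validation set, rather than judging fit on the calibration data alone, is what allows the 1 to 1.4% error figures in the abstract to be read as genuine predictive accuracy.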

  14. Comment on "Metric-affine approach to teleparallel gravity"

    NASA Astrophysics Data System (ADS)

    Formiga, J. B.

    2013-09-01

    It is well known that the teleparallel equivalent of general relativity yields the same vacuum solutions as general relativity does, which ensures that this particular teleparallel model is in good agreement with experiments. A lesser known result concerns the existence of a wider class of teleparallel models which also admits these solutions when the spacetime is diagonalizable by means of a coordinate change. However, it is stated by Obukhov and Pereira [Phys. Rev. D 67, 044016 (2003)] that the teleparallel equivalent of general relativity is the only teleparallel model which admits black holes. To show that this statement is not true, I present the result of Hayashi and Shirafuji [Phys. Rev. D 19, 3524 (1979)], which proves the existence of this wider class by showing the equivalence between two Lagrangians. It turns out that this equivalence also holds for plane-wave metrics. In addition, I update the constraints on the parameters of the teleparallel models.

  15. Requirements to Design to Code: Towards a Fully Formal Approach to Automatic Code Generation

    NASA Technical Reports Server (NTRS)

    Hinchey, Michael G.; Rash, James L.; Rouff, Christopher A.

    2004-01-01

    A general-purpose method to mechanically transform system requirements into a provably equivalent model has yet to appear. Such a method represents a necessary step toward high-dependability system engineering for numerous possible application domains, including sensor networks and autonomous systems. Currently available tools and methods that start with a formal model of a system and mechanically produce a provably equivalent implementation are valuable but not sufficient. The gap that current tools and methods leave unfilled is that their formal models cannot be proven to be equivalent to the system requirements as originated by the customer. For the classes of systems whose behavior can be described as a finite (but significant) set of scenarios, we offer a method for mechanically transforming requirements (expressed in restricted natural language, or in other appropriate graphical notations) into a provably equivalent formal model that can be used as the basis for code generation and other transformations.

  16. A simple model of space radiation damage in GaAs solar cells

    NASA Technical Reports Server (NTRS)

    Wilson, J. W.; Stith, J. J.; Stock, L. V.

    1983-01-01

    A simple model is derived for the radiation damage of shallow junction gallium arsenide (GaAs) solar cells. Reasonable agreement is found between the model and specific experimental studies of radiation effects with electron and proton beams. In particular, the extreme sensitivity of the cell to protons stopping near the cell junction is predicted by the model. The equivalent fluence concept is of questionable validity for monoenergetic proton beams. Angular factors are quite important in establishing the cell sensitivity to incident particle types and energies. A fluence of isotropic incidence 1 MeV electrons (assuming infinite backing) is equivalent to four times the fluence of normal incidence 1 MeV electrons. Spectral factors common to the space radiations are considered, and cover glass thickness required to minimize the initial damage for a typical cell configuration is calculated. Rough equivalence between the geosynchronous environment and an equivalent 1 MeV electron fluence (normal incidence) is established.

  17. A Formal Approach to Requirements-Based Programming

    NASA Technical Reports Server (NTRS)

    Hinchey, Michael G.; Rash, James L.; Rouff, Christopher A.

    2005-01-01

    No significant general-purpose method is currently available to mechanically transform system requirements into a provably equivalent model. The widespread use of such a method represents a necessary step toward high-dependability system engineering for numerous application domains. Current tools and methods that start with a formal model of a system and mechanically produce a provably equivalent implementation are valuable but not sufficient. The "gap" unfilled by such tools and methods is that the formal models cannot be proven to be equivalent to the requirements. We offer a method for mechanically transforming requirements into a provably equivalent formal model that can be used as the basis for code generation and other transformations. This method is unique in offering full mathematical tractability while using notations and techniques that are well known and well trusted. Finally, we describe further application areas we are investigating for use of the approach.

  18. Towards an Automated Development Methodology for Dependable Systems with Application to Sensor Networks

    NASA Technical Reports Server (NTRS)

    Hinchey, Michael G.; Rash, James L.; Rouff, Christopher A.

    2005-01-01

    A general-purpose method to mechanically transform system requirements into a provably equivalent model has yet to appear. Such a method represents a necessary step toward high-dependability system engineering for numerous possible application domains, including sensor networks and autonomous systems. Currently available tools and methods that start with a formal model of a system and mechanically produce a provably equivalent implementation are valuable but not sufficient. The "gap" unfilled by such tools and methods is that their formal models cannot be proven to be equivalent to the system requirements as originated by the customer. For the classes of systems whose behavior can be described as a finite (but significant) set of scenarios, we offer a method for mechanically transforming requirements (expressed in restricted natural language, or in other appropriate graphical notations) into a provably equivalent formal model that can be used as the basis for code generation and other transformations.

  19. Shale characterization in mass transport complex as a potential source rock: An example from onshore West Java Basin, Indonesia

    NASA Astrophysics Data System (ADS)

    Nugraha, A. M. S.; Widiarti, R.; Kusumah, E. P.

    2017-12-01

    This study describes a deep-water slump facies shale of the Early Miocene Jatiluhur/Cibulakan Formation to understand its potential as a source rock in an active tectonic region, onshore West Java. The formation is equivalent to the Gumai Formation, which is well known as another prolific source rock besides the Oligocene Talang Akar Formation in the North West Java Basin, Indonesia. The equivalent shale formation is expected to have the same source rock potential toward onshore Central Java. The shale samples were taken onshore, 150 km away from the basin. To be categorized as a potential source rock, the shale must be rich in organic matter, contain good-quality kerogen, and be thermally mature. Investigations by petrography, X-ray diffraction (XRD), and backscattered electron imaging show heterogeneous mineralogy in the shales. The mineralogy consists of clay minerals, minor quartz, muscovite, calcite, chlorite, clinopyroxene, and other weathered minerals. This composition makes the shale more brittle. Scanning Electron Microscope (SEM) analysis indicates secondary porosities and microstructures. Total Organic Carbon (TOC) is 0.8-1.1 wt%, compared to 1.5-8 wt% for the basinal shale. The shale properties from this outcropped formation indicate a good potential source rock that may be found in the subsurface with better quality and maturity.

  20. 78 FR 31315 - Kraft Pulp Mills NSPS Review

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-23

    ... furnaces to levels equivalent to the new source PM limits in the NESHAP for chemical recovery combustion... will enable a broader understanding of condensable PM emissions from pulp and paper combustion sources... for 0.5 seconds (no ppmdv limit). 2. Use non-combustion control device with a limit of 5 ppmdv. 3. It...

  1. 40 CFR 63.602 - Standards for existing sources.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...) National Emission Standards for Hazardous Air Pollutants From Phosphoric Acid Manufacturing Plants § 63.602 Standards for existing sources. (a) Wet process phosphoric acid process line. On and after the date on which... of equivalent P2O5 feed (0.020 lb/ton). (b) Superphosphoric acid process line—(1) Vacuum evaporation...

  2. 40 CFR 63.602 - Standards for existing sources.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...) National Emission Standards for Hazardous Air Pollutants From Phosphoric Acid Manufacturing Plants § 63.602 Standards for existing sources. (a) Wet process phosphoric acid process line. On and after the date on which... of equivalent P2O5 feed (0.020 lb/ton). (b) Superphosphoric acid process line—(1) Vacuum evaporation...

  3. 40 CFR 63.602 - Standards for existing sources.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...) National Emission Standards for Hazardous Air Pollutants From Phosphoric Acid Manufacturing Plants § 63.602 Standards for existing sources. (a) Wet process phosphoric acid process line. On and after the date on which... of equivalent P2O5 feed (0.020 lb/ton). (b) Superphosphoric acid process line—(1) Vacuum evaporation...

  4. 21 CFR 573.140 - Ammoniated cottonseed meal.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... required by the act, the following: (1) The name of the additive. (2) The maximum percentage of equivalent crude protein from the nonprotein nitrogen. (3) Directions for use to provide not more than 20 percent... source of protein and/or as a source of nonprotein nitrogen in an amount not to exceed 20 percent of the...

  5. 21 CFR 573.140 - Ammoniated cottonseed meal.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... required by the act, the following: (1) The name of the additive. (2) The maximum percentage of equivalent crude protein from the nonprotein nitrogen. (3) Directions for use to provide not more than 20 percent... source of protein and/or as a source of nonprotein nitrogen in an amount not to exceed 20 percent of the...

  6. 21 CFR 573.140 - Ammoniated cottonseed meal.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... required by the act, the following: (1) The name of the additive. (2) The maximum percentage of equivalent crude protein from the nonprotein nitrogen. (3) Directions for use to provide not more than 20 percent... source of protein and/or as a source of nonprotein nitrogen in an amount not to exceed 20 percent of the...

  7. 21 CFR 573.140 - Ammoniated cottonseed meal.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... required by the act, the following: (1) The name of the additive. (2) The maximum percentage of equivalent crude protein from the nonprotein nitrogen. (3) Directions for use to provide not more than 20 percent... source of protein and/or as a source of nonprotein nitrogen in an amount not to exceed 20 percent of the...

  8. 21 CFR 573.140 - Ammoniated cottonseed meal.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... required by the act, the following: (1) The name of the additive. (2) The maximum percentage of equivalent crude protein from the nonprotein nitrogen. (3) Directions for use to provide not more than 20 percent... source of protein and/or as a source of nonprotein nitrogen in an amount not to exceed 20 percent of the...

  9. The Scaling of Broadband Shock-Associated Noise with Increasing Temperature

    NASA Technical Reports Server (NTRS)

    Miller, Steven A. E.

    2013-01-01

    A physical explanation for the saturation of broadband shock-associated noise (BBSAN) intensity with increasing jet stagnation temperature has eluded investigators. An explanation is proposed for this phenomenon with the use of an acoustic analogy. To isolate the relevant physics, the scaling of the BBSAN peak intensity level at the sideline observer location is examined. The equivalent source within the framework of an acoustic analogy for BBSAN is based on local field quantities at shock wave shear layer interactions. The equivalent source, combined with accurate calculations of the propagation of sound through the jet shear layer using an adjoint vector Green's function solver of the linearized Euler equations, allows for predictions that retain the scaling with respect to stagnation pressure and allows for saturation of BBSAN with increasing stagnation temperature. The sources and vector Green's function have arguments involving the steady Reynolds-Averaged Navier-Stokes solution of the jet. It is proposed that saturation of BBSAN with increasing jet temperature occurs due to a balance between the amplification of the sound propagation through the shear layer and the source term scaling.

  10. Lithium-ion battery cell-level control using constrained model predictive control and equivalent circuit models

    NASA Astrophysics Data System (ADS)

    Xavier, Marcelo A.; Trimboli, M. Scott

    2015-07-01

    This paper introduces a novel application of model predictive control (MPC) to cell-level charging of a lithium-ion battery utilizing an equivalent circuit model of battery dynamics. The approach employs a modified form of the MPC algorithm that caters for direct feed-through signals in order to model near-instantaneous battery ohmic resistance. The implementation utilizes a 2nd-order equivalent circuit discrete-time state-space model based on actual cell parameters; the control methodology is used to compute a fast charging profile that respects input, output, and state constraints. Results show that MPC is well-suited to the dynamics of the battery control problem and further suggest significant performance improvements might be achieved by extending the result to electrochemical models.
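A minimal discrete-time sketch of a 2nd-order equivalent circuit model with the direct feed-through term (the R0·i ohmic drop that appears instantly in the output) is below. All parameter values are assumed, not taken from the paper, and the crude current back-off merely stands in for the constrained MPC solve, which would plan the charging profile ahead of time:

```python
import numpy as np

# Illustrative 2nd-order equivalent-circuit parameters (assumed values).
# R0 is the series resistance whose R0*i term appears instantly in the
# terminal voltage: the direct feed-through.
R0, R1, C1, R2, C2 = 0.01, 0.015, 2500.0, 0.02, 60000.0
Q = 2.5 * 3600.0              # cell capacity in coulombs (2.5 Ah)
dt = 1.0                      # sample period, s
a1 = np.exp(-dt / (R1 * C1))  # discrete RC decay factors
a2 = np.exp(-dt / (R2 * C2))

def ocv(soc):
    """Crude linear open-circuit-voltage curve (assumed)."""
    return 3.3 + 0.9 * soc

def step(x, i_chg):
    """One state-space update; x = (soc, v1, v2), charge current in amps."""
    soc, v1, v2 = x
    soc = soc + dt * i_chg / Q
    v1 = a1 * v1 + R1 * (1.0 - a1) * i_chg
    v2 = a2 * v2 + R2 * (1.0 - a2) * i_chg
    v_term = ocv(soc) + v1 + v2 + R0 * i_chg   # feed-through acts instantly
    return (soc, v1, v2), v_term

# Constant-current charge with a naive current back-off whenever the
# 4.2 V terminal-voltage constraint would be violated -- the constraint
# an MPC controller would enforce by optimization instead.
x, i_chg, v_max = (0.2, 0.0, 0.0), 2.5, 4.2
for _ in range(3600):
    x_next, v = step(x, i_chg)
    if v > v_max:
        i_chg *= 0.5          # reduce current rather than violate the limit
        continue
    x = x_next
soc_final = x[0]
```

An MPC formulation replaces the reactive back-off with a lookahead optimization over this same state-space model, which is why the equivalent circuit's low state dimension makes the approach tractable in real time.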

  11. Research on the time-temperature-damage superposition principle of NEPE propellant

    NASA Astrophysics Data System (ADS)

    Han, Long; Chen, Xiong; Xu, Jin-sheng; Zhou, Chang-sheng; Yu, Jia-quan

    2015-11-01

    To describe the relaxation behavior of NEPE (Nitrate Ester Plasticized Polyether) propellant, we analyzed the equivalent relationships between time, temperature, and damage. We conducted a series of uniaxial tensile tests and employed a cumulative damage model to calculate the damage values for relaxation tests at different strain levels. The damage evolution curve of the tensile test at 100 mm/min was obtained through numerical analysis. Relaxation tests were conducted over a range of temperature and strain levels, and the equivalent relationship between time, temperature, and damage was deduced based on free volume theory. The equivalent relationship was then used to generate predictions of the long-term relaxation behavior of the NEPE propellant. Subsequently, the equivalent relationship between time and damage was introduced into the linear viscoelastic model to establish a nonlinear model which is capable of describing the mechanical behavior of composite propellants under a uniaxial tensile load. The comparison between model prediction and experimental data shows that the presented model provides a reliable forecast of the mechanical behavior of propellants.

  12. Research on Equivalent Tests of Dynamics of On-orbit Soft Contact Technology Based on On-Orbit Experiment Data

    NASA Astrophysics Data System (ADS)

    Yang, F.; Dong, Z. H.; Ye, X.

    2018-05-01

    Currently, space robots have become a very important means of on-orbit maintenance and support, and many countries are conducting in-depth research and experiments in this area. Because space operation attitudes are very complicated, they are difficult to model in a research laboratory. This paper builds a complete equivalent experiment framework according to the requirements of the proposed space soft-contact technology. It also carries out flexible multi-body dynamics parameter verification for the on-orbit soft-contact mechanism, combining on-orbit experiment data, the soft-contact mechanism equivalent model, and a flexible multi-body dynamics equivalent model based on the KANE equation. The experiment results confirm the correctness of the on-orbit soft-contact flexible multi-body dynamics model.

  13. In vitro 3D full thickness skin equivalent tissue model using silk and collagen biomaterials

    PubMed Central

    Bellas, Evangelia; Seiberg, Miri; Garlick, Jonathan; Kaplan, David L.

    2013-01-01

    Current approaches to develop skin equivalents often only include the epidermal and dermal components. Yet, full thickness skin includes the hypodermis, a layer below the dermis of adipose tissue containing vasculature, nerves and fibroblasts, necessary to support the epidermis and dermis. In the present study, we developed a full thickness skin equivalent including an epidermis, dermis and hypodermis that could serve as an in vitro model for studying skin development, disease or as a platform for consumer product testing as a means to avoid animal testing. The full thickness skin equivalent was easy to handle and was maintained in culture for greater than 14 days while expressing physiologically relevant morphologies of both the epidermis and dermis, as seen by keratin 10, collagen I and collagen IV expression. The skin equivalent produced glycerol and leptin, markers of adipose tissue metabolism. This work serves as a foundation for our understanding of some of the necessary factors needed to develop a stable, functional model of full-thickness skin. PMID:23161763

  14. Measurement equivalence of the German Job Satisfaction Survey used in a multinational organization: implications of Schwartz's culture model.

    PubMed

    Liu, Cong; Borg, Ingwer; Spector, Paul E

    2004-12-01

    The authors tested measurement equivalence of the German Job Satisfaction Survey (GJSS) using structural equation modeling methodology. Employees from 18 countries and areas provided data on 5 job satisfaction facets. The effects of language and culture on measurement equivalence were examined. A cultural distance hypothesis, based on S. H. Schwartz's (1999) theory, was tested with 4 cultural groups: West Europe, English speaking, Latin America, and Far East. Findings indicated the robustness of the GJSS in terms of measurement equivalence across countries. The survey maintained high transportability across countries speaking the same language and countries sharing similar cultural backgrounds. Consistent with Schwartz's model, a cultural distance effect on scale transportability among scales used in maximally dissimilar cultures was detected. Scales used in the West Europe group showed greater equivalence to scales used in the English-speaking and Latin America groups than scales used in the Far East group. 2004 APA, all rights reserved

  15. Subspace-based analysis of the ERT inverse problem

    NASA Astrophysics Data System (ADS)

    Ben Hadj Miled, Mohamed Khames; Miller, Eric L.

    2004-05-01

    In a previous work, we proposed a source-type formulation to the electrical resistance tomography (ERT) problem. Specifically, we showed that inhomogeneities in the medium can be viewed as secondary sources embedded in the homogeneous background medium and located at positions associated with variation in electrical conductivity. Assuming a piecewise constant conductivity distribution, the support of equivalent sources is equal to the boundary of the inhomogeneity. The estimation of the anomaly shape takes the form of an inverse source-type problem. In this paper, we explore the use of subspace methods to localize the secondary equivalent sources associated with discontinuities in the conductivity distribution. Our first alternative is the multiple signal classification (MUSIC) algorithm, which is commonly used in the localization of multiple sources. The idea is to project a finite collection of plausible pole (or dipole) sources onto an estimated signal subspace and select those with the largest correlations. In ERT, secondary sources are excited simultaneously but in different ways, i.e. with distinct amplitude patterns, depending on the locations and amplitudes of primary sources. If the number of receivers is "large enough", different source configurations can lead to a set of observation vectors that span the data subspace. However, since sources that are spatially close to each other have highly correlated signatures, separation of such signals becomes very difficult in the presence of noise. To overcome this problem we consider iterative MUSIC algorithms like R-MUSIC and RAP-MUSIC. These recursive algorithms pose a computational burden as they require multiple large combinatorial searches. Results obtained with these algorithms using simulated data of different conductivity patterns are presented.
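The MUSIC step itself is generic: estimate a noise subspace from the eigendecomposition of the data covariance, then keep the candidate source signatures that are nearly orthogonal to it. A direction-of-arrival sketch with a uniform linear array follows; this is a stand-in for the ERT secondary-source geometry, and every numeric value is illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
M, d, n_src = 8, 0.5, 2                  # sensors, spacing (wavelengths), sources
angles_true = np.deg2rad([-20.0, 35.0])

def steering(theta):
    """Array response vectors for candidate directions theta (radians)."""
    theta = np.atleast_1d(theta)
    return np.exp(2j * np.pi * d * np.arange(M)[:, None] * np.sin(theta))

# Simulated snapshots X = A s + noise
T = 500
A = steering(angles_true)
S = rng.standard_normal((n_src, T)) + 1j * rng.standard_normal((n_src, T))
X = A @ S + 0.1 * (rng.standard_normal((M, T)) + 1j * rng.standard_normal((M, T)))

# Sample covariance -> eigendecomposition -> noise subspace
R = X @ X.conj().T / T
_, V = np.linalg.eigh(R)                 # eigenvalues in ascending order
En = V[:, : M - n_src]                   # noise-subspace eigenvectors

# MUSIC pseudospectrum: peaks where steering vectors are ~orthogonal to En
grid = np.deg2rad(np.linspace(-90.0, 90.0, 1801))
proj = En.conj().T @ steering(grid)
P = 1.0 / np.sum(np.abs(proj) ** 2, axis=0)

# Take the n_src strongest local maxima as the source-direction estimates
peaks = [g for g in range(1, grid.size - 1) if P[g - 1] < P[g] > P[g + 1]]
peaks.sort(key=lambda g: P[g], reverse=True)
est = np.sort(np.rad2deg(grid[peaks[:n_src]]))
```

The failure mode the abstract mentions is visible in this sketch: as the true angles move closer together, the corresponding steering vectors correlate, the pseudospectrum peaks merge, and recursive variants like R-MUSIC/RAP-MUSIC (which deflate one found source before searching for the next) become necessary.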

  16. Towards integrating tracer studies in conceptual rainfall-runoff models: recent insights from a sub-arctic catchment in the Cairngorm Mountains, Scotland

    NASA Astrophysics Data System (ADS)

    Soulsby, Chris; Dunn, Sarah M.

    2003-02-01

    Hydrochemical tracers (alkalinity and silica) were used in an end-member mixing analysis (EMMA) of runoff sources in the 10 km² Allt a' Mharcaidh catchment. A three-component mixing model was used to separate the hydrograph and estimate, to a first approximation, the range of likely contributions of overland flow, shallow subsurface storm flow, and groundwater to the annual hydrograph. A conceptual, catchment-scale rainfall-runoff model (DIY) was also used to separate the annual hydrograph into an equivalent set of flow paths. The two approaches produced independent representations of catchment hydrology that exhibited reasonable agreement. This showed the dominance of overland flow in generating storm runoff and the important role of groundwater inputs throughout the hydrological year. Moreover, DIY was successfully adapted to simulate stream chemistry (alkalinity) at daily time steps. Sensitivity analysis showed that whilst a distinct groundwater source at the catchment scale could be identified, there was considerable uncertainty in differentiating between overland flow and subsurface storm flow in both the EMMA and DIY applications. Nevertheless, the study indicated that the complementary use of tracer analysis in EMMA can increase the confidence in conceptual model structure. However, conclusions are restricted to the specific spatial and temporal scales examined.
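At each time step, a three-component separation with two tracers reduces to a small linear system: one concentration-balance row per tracer plus a mass-balance row forcing the flow fractions to sum to one. A sketch with illustrative end-member signatures (the numbers are assumed, not the Allt a' Mharcaidh values):

```python
import numpy as np

# Assumed end-member tracer signatures (alkalinity, silica) -- purely
# illustrative concentrations for the three runoff components.
end_members = np.array([
    [10.0, 0.5],     # overland flow
    [40.0, 2.0],     # shallow subsurface storm flow
    [120.0, 4.5],    # groundwater
])

def unmix(stream_sample):
    """Three-component mixing model: two tracer balances + mass balance.

    Returns the flow fractions (f_overland, f_subsurface, f_groundwater)
    that reproduce the observed stream tracer concentrations.
    """
    A = np.vstack([end_members.T, np.ones(3)])   # rows: alkalinity, silica, sum(f) = 1
    b = np.array([stream_sample[0], stream_sample[1], 1.0])
    return np.linalg.solve(A, b)

f = unmix([55.0, 2.3])   # hypothetical stream sample during a storm event
```

In practice the uncertainty the authors report arises because the overland-flow and subsurface end members have similar signatures, making the matrix nearly singular and the solved fractions sensitive to measurement error.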

  17. IC 3639—a New Bona Fide Compton-Thick AGN Unveiled by NuSTAR

    NASA Astrophysics Data System (ADS)

    Boorman, Peter G.; Gandhi, P.; Alexander, D. M.; Annuar, A.; Ballantyne, D. R.; Bauer, F.; Boggs, S. E.; Brandt, W. N.; Brightman, M.; Christensen, F. E.; Craig, W. W.; Farrah, D.; Hailey, C. J.; Harrison, F. A.; Hönig, S. F.; Koss, M.; LaMassa, S. M.; Masini, A.; Ricci, C.; Risaliti, G.; Stern, D.; Zhang, W. W.

    2016-12-01

    We analyze high-quality NuSTAR observations of the local (z = 0.011) Seyfert 2 active galactic nucleus (AGN) IC 3639, in conjunction with archival Suzaku and Chandra data. This provides the first broadband X-ray spectral analysis of the source, spanning nearly two decades in energy (0.5-30 keV). Previous X-ray observations of the source below 10 keV indicated strong reflection/obscuration on the basis of a pronounced iron fluorescence line at 6.4 keV. The hard X-ray energy coverage of NuSTAR, together with self-consistent toroidal reprocessing models, enables direct broadband constraints on the obscuring column density of the source. We find the source to be heavily Compton-thick (CTK) with an obscuring column in excess of 3.6 × 10^24 cm^-2, unconstrained at the upper end. We further find an intrinsic 2-10 keV luminosity of log10(L(2-10 keV) [erg s^-1]) = 43.4 (+0.6/-1.1) at 90% confidence, almost 400 times the observed flux, and consistent with various multiwavelength diagnostics. Such a high ratio of intrinsic to observed flux, in addition to an Fe-Kα fluorescence line equivalent width exceeding 2 keV, is extreme among known bona fide CTK AGNs, which we suggest are both due to the high level of obscuration present around IC 3639. Our study demonstrates that broadband spectroscopic modeling with NuSTAR enables large corrections for obscuration to be carried out robustly and emphasizes the need for improved modeling of AGN tori showing intense iron fluorescence.

  18. Association between Refractive Errors and Ocular Biometry in Iranian Adults

    PubMed Central

    Hashemi, Hassan; Khabazkhoob, Mehdi; Emamian, Mohammad Hassan; Shariati, Mohammad; Miraftab, Mohammad; Yekta, Abbasali; Ostadimoghaddam, Hadi; Fotouhi, Akbar

    2015-01-01

    Purpose: To investigate the association of ocular biometrics such as axial length (AL), anterior chamber depth (ACD), lens thickness (LT), vitreous chamber depth (VCD) and corneal power (CP) with different refractive errors. Methods: In a cross-sectional study of the 40 to 64-year-old population of Shahroud, random cluster sampling was performed. Ocular biometrics were measured using the Allegro Biograph (WaveLight AG, Erlangen, Germany) for all participants. Refractive errors were determined using cycloplegic refraction. Results: In the first model, spherical equivalent correlated most strongly with axial length and corneal power. Spherical equivalent was strongly correlated with axial length in high myopic and high hyperopic cases, and with corneal power in high hyperopic cases; 69.5% of the variability in spherical equivalent was attributed to changes in these variables. In the second model, the correlations of vitreous chamber depth and corneal power with spherical equivalent were stronger in myopes than in hyperopes, while the correlations of lens thickness and anterior chamber depth with spherical equivalent were stronger in hyperopic cases than in myopic ones. In the third model, anterior chamber depth + lens thickness correlated with spherical equivalent only in moderate and severe cases of hyperopia; this index was not correlated with spherical equivalent in moderate to severe myopia. Conclusion: In individuals aged 40-64 years, corneal power and axial length make the greatest contribution to spherical equivalent in high hyperopia and high myopia. Anterior segment biometric components play a more important role in hyperopia than in myopia. PMID:26730304
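The spherical equivalent used throughout this analysis is the standard clinical summary of a spherocylindrical refraction, the sphere plus half the cylinder:

```python
def spherical_equivalent(sphere, cylinder):
    """Standard clinical formula: SE = sphere + cylinder / 2 (diopters)."""
    return sphere + cylinder / 2.0

# A -3.00 D sphere with a -1.00 D cylinder gives SE = -3.50 D
se = spherical_equivalent(-3.0, -1.0)
```

Collapsing the two-axis refraction to this single number is what allows the study to regress refractive error on scalar biometric components such as axial length and corneal power.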

  19. Polarimetry With Phased Array Antennas: Theoretical Framework and Definitions

    NASA Astrophysics Data System (ADS)

    Warnick, Karl F.; Ivashina, Marianna V.; Wijnholds, Stefan J.; Maaskant, Rob

    2012-01-01

    For phased array receivers, the accuracy with which the polarization state of a received signal can be measured depends on the antenna configuration, array calibration process, and beamforming algorithms. A signal and noise model for a dual-polarized array is developed and related to standard polarimetric antenna figures of merit, and the ideal polarimetrically calibrated, maximum-sensitivity beamforming solution for a dual-polarized phased array feed is derived. A practical polarimetric beamformer solution that does not require exact knowledge of the array polarimetric response is shown to be equivalent to the optimal solution in the sense that when the practical beamformers are calibrated, the optimal solution is obtained. To provide a rough initial polarimetric calibration for the practical beamformer solution, an approximate single-source polarimetric calibration method is developed. The modeled instrumental polarization error for a dipole phased array feed with the practical beamformer solution and single-source polarimetric calibration was -10 dB or lower over the array field of view for elements with alignments perturbed by random rotations with 5 degree standard deviation.
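The maximum-sensitivity beamformer underlying this discussion has a familiar closed form: w proportional to Rn⁻¹a, where Rn is the noise covariance and a the array's response to the signal of interest. A sketch with a simulated noise covariance and an assumed response vector (all values illustrative; a dual-polarized polarimeter would form one such beamformer per polarization):

```python
import numpy as np

rng = np.random.default_rng(2)
M = 6                                     # array elements

# Assumed embedded-element signal-response vector for one polarization
a = np.exp(2j * np.pi * 0.5 * np.arange(M) * np.sin(np.deg2rad(10.0)))

# Noise covariance estimated from signal-free calibration snapshots
N = rng.standard_normal((M, 2000)) + 1j * rng.standard_normal((M, 2000))
Rn = N @ N.conj().T / 2000

# Maximum-sensitivity (max-SNR) beamformer: w = Rn^-1 a, normalized so
# the beamformer has unit response to the signal of interest
w = np.linalg.solve(Rn, a)
w = w / np.vdot(a, w)                     # a^H w = 1 after this step
response = np.vdot(w, a)                  # w^H a, ~= 1 by construction
```

The paper's point about practical beamformers can be read against this sketch: even if `a` is only approximately known, a subsequent polarimetric calibration of the two beamformed outputs recovers the optimal solution.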

  20. Development of guidelines for the definition of the relevant information content in data classes

    NASA Technical Reports Server (NTRS)

    Schmitt, E.

    1973-01-01

    The problem of experiment design is defined as an information system consisting of an information source, a measurement unit, environmental disturbances, data handling and storage, and the mathematical analysis and usage of data. Based on today's concept of effective computability, general guidelines for the definition of the relevant information content in data classes are derived. The lack of a universally applicable information theory and corresponding mathematical or system structure restricts the solvable problem classes to a small set. It is expected that a new relativity theory of information, generally described by a universal algebra of relations, will lead to new mathematical models and system structures capable of modeling any well-defined practical problem isomorphic to an equivalence relation at any corresponding level of abstractness.
