Accuracy-preserving source term quadrature for third-order edge-based discretization
NASA Astrophysics Data System (ADS)
Nishikawa, Hiroaki; Liu, Yi
2017-09-01
In this paper, we derive a family of source term quadrature formulas for preserving third-order accuracy of the node-centered edge-based discretization for conservation laws with source terms on arbitrary simplex grids. A three-parameter family of source term quadrature formulas is derived, and as a subset, a one-parameter family of economical formulas is identified that does not require second derivatives of the source term. Among the economical formulas, a unique formula is then derived that does not require gradients of the source term at neighbor nodes, thus leading to a significantly smaller discretization stencil for source terms. None of the formulas derived in this paper requires a boundary closure, and therefore all can be applied directly at boundary nodes. Numerical results are presented to demonstrate third-order accuracy at interior and boundary nodes for one-dimensional grids and linear triangular/tetrahedral grids over straight and curved geometries.
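Third-order accuracy claims like the one above are conventionally verified by measuring the observed order of accuracy on a sequence of refined grids. As a minimal, generic sketch (not the paper's discretization), the least-squares slope of log error versus log mesh spacing recovers the order:

```python
import numpy as np

def observed_order(errors, hs):
    """Least-squares estimate of p in the model error(h) ~ C * h**p,
    from errors measured on grids with spacings hs."""
    return np.polyfit(np.log(hs), np.log(errors), 1)[0]
```

A discretization whose error scales like h^3 on the refinement sequence yields an observed order close to 3.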
High-order scheme for the source-sink term in a one-dimensional water temperature model
Jing, Zheng; Kang, Ling
2017-01-01
The source-sink term in water temperature models represents the net heat absorbed or released by a water system. This term is very important because it accounts for solar radiation that can significantly affect water temperature, especially in lakes. However, existing numerical methods for discretizing the source-sink term are very simplistic, causing significant deviations between simulation results and measured data. To address this problem, we present a numerical method specific to the source-sink term. A vertical one-dimensional heat conduction equation was chosen to describe water temperature changes. A two-step operator-splitting method was adopted as the numerical solution. In the first step, using the undetermined coefficient method, a high-order scheme was adopted for discretizing the source-sink term. In the second step, the diffusion term was discretized using the Crank-Nicolson scheme. The effectiveness and capability of the numerical method were assessed by performing numerical tests. Then, the proposed numerical method was applied to a simulation of Guozheng Lake (located in central China). The modeling results were in excellent agreement with measured data. PMID:28264005
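The two-step operator splitting described above can be sketched for a vertical 1D heat equation T_t = α T_zz + S. This is a rough sketch only: the paper's high-order source scheme built by undetermined coefficients is not reproduced (a plain explicit source update stands in for it), and zero-flux boundaries are assumed.

```python
import numpy as np

def split_step(T, dt, dz, alpha, source):
    """One operator-splitting step: explicit source-sink update, then a
    Crank-Nicolson diffusion solve with zero-flux (Neumann) boundaries."""
    n = T.size
    # Step 1: source-sink term (simple explicit stand-in for the paper's
    # high-order undetermined-coefficient scheme)
    T_star = T + dt * source(T)

    # Step 2: Crank-Nicolson for T_t = alpha * T_zz
    r = alpha * dt / dz**2
    L = np.zeros((n, n))
    for i in range(1, n - 1):
        L[i, i - 1], L[i, i], L[i, i + 1] = 1.0, -2.0, 1.0
    # zero-flux boundaries via mirrored ghost nodes
    L[0, 0], L[0, 1] = -2.0, 2.0
    L[-1, -2], L[-1, -1] = 2.0, -2.0
    A = np.eye(n) - 0.5 * r * L
    B = np.eye(n) + 0.5 * r * L
    return np.linalg.solve(A, B @ T_star)
```

With zero source and insulated boundaries, a uniform temperature profile is preserved exactly, which is a quick sanity check on the discrete operators.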
Observation-based source terms in the third-generation wave model WAVEWATCH
NASA Astrophysics Data System (ADS)
Zieger, Stefan; Babanin, Alexander V.; Erick Rogers, W.; Young, Ian R.
2015-12-01
Measurements collected during the AUSWEX field campaign at Lake George (Australia) resulted in new insights into the processes of wind-wave interaction and whitecapping dissipation, and consequently new parameterizations of the input and dissipation source terms. The new nonlinear wind input term accounts for the dependence of growth on wave steepness, for airflow separation, and for negative growth rates under adverse winds. The new dissipation terms feature an inherent breaking term, a cumulative dissipation term, and a term due to production of turbulence by waves, which is particularly relevant for decaying seas and for swell. The latter is consistent with the observed decay rate of ocean swell. This paper describes these source terms implemented in WAVEWATCH III® and evaluates their performance against existing source terms in academic duration-limited tests, against buoy measurements for windsea-dominated conditions, under conditions of extreme wind forcing (Hurricane Katrina), and against altimeter data in global hindcasts. Results show agreement by means of growth curves as well as integral and spectral parameters in the simulations and hindcasts.
Auditing the multiply-related concepts within the UMLS
Mougin, Fleur; Grabar, Natalia
2014-01-01
Objective: This work focuses on multiply-related Unified Medical Language System (UMLS) concepts, that is, concepts associated through multiple relations. The relations involved in such situations are audited to determine whether they are provided by source vocabularies or result from the integration of these vocabularies within the UMLS. Methods: We study the compatibility of the multiple relations which associate the concepts under investigation and try to explain the reason why they co-occur. Towards this end, we analyze the relations both at the concept and term levels. In addition, we randomly select 288 concepts associated through contradictory relations and manually analyze them. Results: At the UMLS scale, only 0.7% of combinations of relations are contradictory, while homogeneous combinations are observed in one-third of situations. At the scale of source vocabularies, one-third do not contain more than one relation between the concepts under investigation. Among the remaining source vocabularies, seven of them mainly present multiple non-homogeneous relations between terms. Analysis at the term level also shows that only in a quarter of cases are the source vocabularies responsible for the presence of multiply-related concepts in the UMLS. These results are available at: http://www.isped.u-bordeaux2.fr/ArticleJAMIA/results_multiply_related_concepts.aspx. Discussion: Manual analysis was useful to explain the conceptualization difference in relations between terms across source vocabularies. The exploitation of source relations was helpful for understanding why some source vocabularies describe multiple relations between a given pair of terms. PMID:24464853
Erratum to Surface‐wave Green's tensors in the near field
Haney, Matthew M.; Nakahara, Hisashi
2016-01-01
Haney and Nakahara (2014) derived expressions for surface‐wave Green’s tensors that included near‐field behavior. Building on the result for a force source, Haney and Nakahara (2014) further derived expressions for a general point moment tensor source using the exact Green’s tensors. However, it has come to our attention that, although the Green’s tensors were correct, the resulting expressions for a general point moment tensor source were missing some terms. In this erratum, we provide updated expressions with these missing terms. The inclusion of the missing terms changes the example given in Haney and Nakahara (2014).
Source term model evaluations for the low-level waste facility performance assessment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yim, M.S.; Su, S.I.
1995-12-31
The estimation of release of radionuclides from various waste forms to the bottom boundary of the waste disposal facility (source term) is one of the most important aspects of LLW facility performance assessment. In this work, several currently used source term models are comparatively evaluated for the release of carbon-14 based on a test case problem. The models compared include PRESTO-EPA-CPG, IMPACTS, DUST and NEFTRAN-II. Major differences in assumptions and approaches between the models are described and key parameters are identified through sensitivity analysis. The source term results from different models are compared and other concerns or suggestions are discussed.
NASA Astrophysics Data System (ADS)
Perez, Pedro B.; Hamawi, John N.
2017-09-01
Nuclear power plant radiation protection design features are based on radionuclide source terms derived from conservative assumptions that envelope expected operating experience. Two parameters that significantly affect the radionuclide concentrations in the source term are the failed fuel fraction and the effective fission product appearance rate coefficients. The failed fuel fraction may be a regulatory assumption, as in the U.S. Appearance rate coefficients are not specified in regulatory requirements, but have been referenced to experimental data that are over 50 years old. No doubt the source terms are conservative, as demonstrated by operating experience that has included failed fuel, but they may be too conservative, leading, for example, to over-designed shielding for normal operations. Design basis source term methodologies for normal operations had not advanced until EPRI published an updated ANSI/ANS 18.1 source term basis document in 2015. Our paper revisits the fission product appearance rate coefficients as applied in the derivation of source terms following the original U.S. NRC NUREG-0017 methodology. New coefficients have been calculated based on recent EPRI results, which demonstrate the conservatism in nuclear power plant shielding design.
Nonlinearly driven harmonics of Alfvén modes
NASA Astrophysics Data System (ADS)
Zhang, B.; Breizman, B. N.; Zheng, L. J.; Berk, H. L.
2014-01-01
In order to study the leading order nonlinear magneto-hydrodynamic (MHD) harmonic response of a plasma in realistic geometry, the AEGIS code has been generalized to account for inhomogeneous source terms. These source terms are expressed in terms of the quadratic corrections that depend on the functional form of a linear MHD eigenmode, such as the Toroidal Alfvén Eigenmode. The solution of the resultant equation gives the second order harmonic response. Preliminary results are presented here.
Final safety analysis report for the Galileo Mission: Volume 2: Book 1, Accident model document
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
The Accident Model Document (AMD) is the second volume of the three-volume Final Safety Analysis Report (FSAR) for the Galileo outer planetary space science mission. This mission employs Radioisotope Thermoelectric Generators (RTGs) as the prime electrical power sources for the spacecraft. Galileo will be launched into Earth orbit using the Space Shuttle and will use the Inertial Upper Stage (IUS) booster to place the spacecraft into an Earth escape trajectory. The RTGs employ silicon-germanium thermoelectric couples to produce electricity from the heat energy that results from the decay of the radioisotope fuel, Plutonium-238, used in the RTG heat source. The heat source configuration used in the RTGs is termed the General Purpose Heat Source (GPHS), and the RTGs are designated GPHS-RTGs. The use of radioactive material in these missions necessitates evaluations of the radiological risks that may be encountered by launch complex personnel as well as by the Earth's general population as a result of postulated malfunctions or failures occurring in the mission operations. The FSAR presents the results of a rigorous safety assessment, including substantial analyses and testing, of the launch and deployment of the RTGs for the Galileo mission. This AMD is a summary of the potential accident and failure sequences which might result in fuel release, the analysis and testing methods employed, and the predicted source terms. Each source term consists of a quantity of fuel released, the location of release, and the physical characteristics of the fuel released. Each source term has an associated probability of occurrence. 27 figs., 11 tabs.
ERIC Educational Resources Information Center
Mitchell, Karen J.; Raye, Carol L.; McGuire, Joseph T.; Frankel, Hillary; Greene, Erich J.; Johnson, Marcia K.
2008-01-01
A short-term source monitoring procedure with functional magnetic resonance imaging assessed neural activity when participants made judgments about the format of 1 of 4 studied items (picture, word), the encoding task performed (cost, place), or whether an item was old or new. The results support findings from long-term memory studies showing that…
Radiological analysis of plutonium glass batches with natural/enriched boron
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rainisch, R.
2000-06-22
The disposition of surplus plutonium inventories by the US Department of Energy (DOE) includes the immobilization of certain plutonium materials in a borosilicate glass matrix, also referred to as vitrification. This paper addresses source terms of plutonium masses immobilized in a borosilicate glass matrix where the glass components include both natural boron and enriched boron. The calculated source terms pertain to neutron and gamma source strength (particles per second) and source spectrum changes. The calculated source terms corresponding to natural boron and enriched boron are compared to determine the benefits (decrease in radiation source terms) of using enriched boron. The analysis of plutonium glass source terms shows that a large component of the neutron source terms is due to (α, n) reactions. The americium-241 and plutonium present in the glass emit alpha particles (α). These alpha particles interact with low-Z nuclides like B-11, B-10, and O-17 in the glass to produce neutrons. The low-Z nuclides are referred to as target particles. The reference glass contains 9.4 wt percent B2O3. Boron-11 was found to strongly support the (α, n) reactions in the glass matrix. B-11 has a natural abundance of over 80 percent. The (α, n) reaction rates for B-10 are lower than for B-11, and the analysis shows that the plutonium glass neutron source terms can be reduced by artificially enriching natural boron with B-10. The natural abundance of B-10 is 19.9 percent. Boron enriched to 96 wt percent B-10 or above can be obtained commercially. Since lower source terms imply lower dose rates to radiation workers handling the plutonium glass materials, it is important to know the achievable decrease in source terms as a result of boron enrichment. Plutonium materials are normally handled in glove boxes with shielded glass windows, and the work entails both extremity and whole-body exposures. Lowering the source terms of the plutonium batches will make the handling of these materials less difficult and will reduce radiation exposure to operating workers.
Time-frequency approach to underdetermined blind source separation.
Xie, Shengli; Yang, Liu; Yang, Jun-Mei; Zhou, Guoxu; Xiang, Yong
2012-02-01
This paper presents a new time-frequency (TF) underdetermined blind source separation approach based on the Wigner-Ville distribution (WVD) and the Khatri-Rao product to separate N non-stationary sources from M (M < N) mixtures. First, an improved method is proposed for estimating the mixing matrix, where the negative values of the auto WVDs of the sources are fully taken into account. Then, after extracting all the auto-term TF points, the auto WVD value of the sources at every auto-term TF point can be determined exactly with the proposed approach, no matter how many active sources there are, as long as N ≤ 2M - 1. Further discussion of the extraction of auto-term TF points is given, and finally numerical simulation results are presented to show the superiority of the proposed algorithm by comparing it with existing ones.
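The building block of the approach above is the discrete Wigner-Ville distribution. A textbook discretization (not the authors' implementation) forms the instantaneous autocorrelation at each time index and Fourier-transforms over the lag; note the well-known frequency doubling of the discrete WVD:

```python
import numpy as np

def wigner_ville(x):
    """Discrete pseudo Wigner-Ville distribution of a complex signal x:
    W[n, k] = Re sum_m x[n+m] * conj(x[n-m]) * exp(-2j*pi*k*m/N),
    with boundary lags truncated. A common discretization, used here
    only to illustrate the TF representation underlying the paper."""
    N = len(x)
    W = np.zeros((N, N))
    for n in range(N):
        mmax = min(n, N - 1 - n)          # largest lag fitting in the record
        kern = np.zeros(N, dtype=complex)
        for m in range(-mmax, mmax + 1):
            kern[m % N] = x[n + m] * np.conj(x[n - m])
        W[n] = np.fft.fft(kern).real
    return W
```

For a complex exponential occupying frequency bin f, this discretization concentrates energy at bin 2f near the centre of the record, which is the frequency-doubling artifact mentioned above.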
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mueller, C.; Nabelssi, B.; Roglans-Ribas, J.
1995-04-01
This report contains the Appendices for the Analysis of Accident Sequences and Source Terms at Waste Treatment and Storage Facilities for Waste Generated by the U.S. Department of Energy Waste Management Operations. The main report documents the methodology, computational framework, and results of facility accident analyses performed as a part of the U.S. Department of Energy (DOE) Waste Management Programmatic Environmental Impact Statement (WM PEIS). The accident sequences potentially important to human health risk are specified, their frequencies are assessed, and the resultant radiological and chemical source terms are evaluated. A personal computer-based computational framework and database have been developed that provide these results as input to the WM PEIS for calculation of human health risk impacts. This report summarizes the accident analyses and aggregates the key results for each of the waste streams. Source terms are estimated and results are presented for each of the major DOE sites and facilities by WM PEIS alternative for each waste stream. The appendices identify the potential atmospheric release of each toxic chemical or radionuclide for each accident scenario studied. They also provide discussion of specific accident analysis data and guidance used or consulted in this report.
Henry, S B; Holzemer, W L; Reilly, C A; Campbell, K E
1994-01-01
OBJECTIVE: To analyze the terms used by nurses in a variety of data sources and to test the feasibility of using SNOMED III to represent nursing terms. DESIGN: Prospective research design with manual matching of terms to the SNOMED III vocabulary. MEASUREMENTS: The terms used by nurses to describe patient problems during 485 episodes of care for 201 patients hospitalized for Pneumocystis carinii pneumonia were identified. Problems from four data sources (nurse interview, intershift report, nursing care plan, and nurse progress note/flowsheet) were classified based on the substantive area of the problem and on the terminology used to describe the problem. A test subset of the 25 most frequently used terms from the two written data sources (nursing care plan and nurse progress note/flowsheet) were manually matched to SNOMED III terms to test the feasibility of using that existing vocabulary to represent nursing terms. RESULTS: Nurses most frequently described patient problems as signs/symptoms in the verbal nurse interview and intershift report. In the written data sources, problems were recorded as North American Nursing Diagnosis Association (NANDA) terms and signs/symptoms with similar frequencies. Of the nursing terms in the test subset, 69% were represented using one or more SNOMED III terms. PMID:7719788
Sample Based Unit Liter Dose Estimates
DOE Office of Scientific and Technical Information (OSTI.GOV)
JENSEN, L.
The Tank Waste Characterization Program has taken many core samples, grab samples, and auger samples from the single-shell and double-shell tanks during the past 10 years. Consequently, the amount of sample data available has increased, both in terms of the quantity of sample results and the number of tanks characterized. More and better data are available than when the current radiological and toxicological source terms used in the Basis for Interim Operation (BIO) (FDH 1999a) and the Final Safety Analysis Report (FSAR) (FDH 1999b) were developed. The Nuclear Safety and Licensing (NS&L) organization wants to use the new data to upgrade the radiological and toxicological source terms used in the BIO and FSAR. The NS&L organization requested assistance in producing a statistically based process for developing the source terms. This report describes the statistical techniques used and the assumptions made to support the development of a new radiological source term for liquid and solid wastes stored in single-shell and double-shell tanks. The results given in this report are a revision of similar results given in an earlier version of the document (Jensen and Wilmarth 1999). The main difference between the results in this document and the earlier version is that the dose conversion factors (DCFs) for converting μCi/g or μCi/L to Sv/L (sieverts per liter) have changed. There are now two DCFs, one based on ICRP-68 and one based on ICW-71 (Brevick 2000).
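Applying a dose conversion factor as described, turning activity concentrations (μCi/L) into a unit liter dose (Sv/L), is a weighted sum over radionuclides. The nuclide names and factor values in this sketch are purely illustrative, not taken from the report:

```python
def dose_per_liter(activities_uCi_per_L, dcf_Sv_per_uCi):
    """Unit liter dose (Sv/L): sum over radionuclides of the activity
    concentration (uCi/L) times its dose conversion factor (Sv/uCi).
    Keys and values are hypothetical placeholders."""
    return sum(activities_uCi_per_L[n] * dcf_Sv_per_uCi[n]
               for n in activities_uCi_per_L)
```

Swapping in the ICRP-68-based versus the alternative DCF set then amounts to passing a different factor dictionary.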
High Order Finite Difference Methods with Subcell Resolution for 2D Detonation Waves
NASA Technical Reports Server (NTRS)
Wang, W.; Shu, C. W.; Yee, H. C.; Sjogreen, B.
2012-01-01
In simulating hyperbolic conservation laws in conjunction with an inhomogeneous stiff source term, if the solution is discontinuous, spurious numerical results may be produced due to different time scales of the transport part and the source term. This numerical issue often arises in combustion and high speed chemical reacting flows.
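The spurious behaviour described above can be reproduced in a few lines: split a first-order upwind advection of a step from an infinitely stiff reaction that snaps the solution to its nearest equilibrium (a classic demonstration in the spirit of LeVeque and Yee's stiff detonation test; this sketch is not the subcell-resolution method of the paper). Depending on the CFL number, the numerical front either freezes or moves exactly one cell per step instead of at the physical speed:

```python
import numpy as np

def stiff_advect_demo(cfl, nsteps, n=50):
    """Upwind advection of a step at unit speed, operator-split with an
    infinitely stiff reaction that relaxes u to the nearest equilibrium
    (0 or 1). Returns the front position (index of the first 0)."""
    u = np.where(np.arange(n) < n // 2, 1.0, 0.0)
    for _ in range(nsteps):
        u[1:] -= cfl * (u[1:] - u[:-1])   # first-order upwind, speed 1
        u = np.where(u >= 0.5, 1.0, 0.0)  # stiff-limit reaction step
    return int(np.argmin(u))
```

Starting from a front at cell 25, ten steps at CFL 0.4 leave the front frozen at cell 25 (the exact front would have moved 4 cells), while ten steps at CFL 0.8 push it 10 cells to 35 (the exact answer is 8 cells): both are wrong propagation speeds driven purely by the grid.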
Efficient Development of High Fidelity Structured Volume Grids for Hypersonic Flow Simulations
NASA Technical Reports Server (NTRS)
Alter, Stephen J.
2003-01-01
A new technique for the control of grid line spacing and intersection angles of a structured volume grid, using elliptic partial differential equations (PDEs), is presented. Existing structured grid generation algorithms make use of source term hybridization to provide control of grid lines, imposing orthogonality implicitly at the boundary and explicitly on the interior of the domain. A bridging function between the two types of grid line control is typically used to blend the different orthogonality formulations. It is shown that utilizing such a bridging function with source term hybridization can result in excessive use of computational resources and diminished robustness. A new approach, Anisotropic Lagrange-Based Trans-Finite Interpolation (ALBTFI), is offered as a replacement for source term hybridization. The ALBTFI technique captures the essence of the desired grid controls while improving the convergence rate of the elliptic PDEs compared with source term hybridization. Grid generation on a blunt cone and a Shuttle Orbiter is used to demonstrate and assess the ALBTFI technique, which is shown to be as much as 50% faster and more robust than source term hybridization, and to produce higher quality grids.
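Plain Lagrange-based transfinite interpolation, the classical construction that ALBTFI extends, builds an interior grid directly from four boundary curves: blend the bottom/top curves in one direction, the left/right curves in the other, and subtract the doubly counted bilinear corner contribution. A minimal 2D sketch (the anisotropic weighting of the paper is not reproduced):

```python
import numpy as np

def tfi_2d(bottom, top, left, right):
    """Bilinear transfinite interpolation: interior grid of shape
    (ni, nj, 2) from four boundary curves given as arrays of (x, y)
    points, with matching corner points assumed."""
    ni, nj = bottom.shape[0], left.shape[0]
    u = np.linspace(0.0, 1.0, ni)[:, None, None]
    v = np.linspace(0.0, 1.0, nj)[None, :, None]
    return ((1 - v) * bottom[:, None, :] + v * top[:, None, :]
            + (1 - u) * left[None, :, :] + u * right[None, :, :]
            - ((1 - u) * (1 - v) * bottom[0]       # corner corrections
               + u * (1 - v) * bottom[-1]
               + (1 - u) * v * top[0]
               + u * v * top[-1]))
```

On the unit square with uniformly parameterized edges this reproduces the uniform Cartesian grid, which is the standard consistency check for a TFI implementation.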
Common Calibration Source for Monitoring Long-term Ozone Trends
NASA Technical Reports Server (NTRS)
Kowalewski, Matthew
2004-01-01
Accurate long-term satellite measurements are crucial for monitoring the recovery of the ozone layer. The slow pace of the recovery and the limited lifetimes of satellite monitoring instruments demand that datasets from multiple observation systems be combined to provide the long-term accuracy needed. A fundamental component of accurately monitoring long-term trends is the calibration of these various instruments. NASA's Radiometric Calibration and Development Facility at the Goddard Space Flight Center has provided resources to minimize calibration biases between multiple instruments through the use of a common calibration source and standardized procedures traceable to national standards. The Facility's 50 cm barium sulfate integrating sphere has been used as a common calibration source for both US and international satellite instruments, including the Total Ozone Mapping Spectrometer (TOMS), Solar Backscatter Ultraviolet 2 (SBUV/2) instruments, Shuttle SBUV (SSBUV), Ozone Monitoring Instrument (OMI), Global Ozone Monitoring Experiment (GOME) (ESA), Scanning Imaging SpectroMeter for Atmospheric ChartographY (SCIAMACHY) (ESA), and others. We will discuss the advantages of using a common calibration source and its effects on long-term ozone data sets. In addition, sphere calibration results from various instruments will be presented to demonstrate the accuracy of the long-term characterization of the source itself.
NASA Astrophysics Data System (ADS)
Bonhoff, H. A.; Petersson, B. A. T.
2010-08-01
For the characterization of structure-borne sound sources with multi-point or continuous interfaces, substantial simplifications and physical insight can be obtained by incorporating the concept of interface mobilities. The applicability of interface mobilities, however, relies on the admissibility of neglecting the so-called cross-order terms. Hence, the objective of the present paper is to clarify the importance and significance of cross-order terms for the characterization of vibrational sources. From previous studies, four conditions have been identified under which the cross-order terms can become more influential: non-circular interface geometries, structures with distinctly differing transfer paths, suppression of the zero-order motion, and cases where the contact forces are either in phase or out of phase. In a theoretical study, these four conditions are investigated with regard to the frequency range and magnitude of a possible strengthening of the cross-order terms. For an experimental analysis, two source-receiver installations are selected, suitably designed to obtain strong cross-order terms. The transmitted power and the source descriptors are predicted by the approximations of the interface mobility approach and compared with the complete calculations. Neglecting the cross-order terms can result in large misinterpretations at certain frequencies. On average, however, the cross-order terms are found to be insignificant and can be neglected to a good approximation. The general applicability of interface mobilities for structure-borne sound source characterization, and for the description of the transmission process thereby, is confirmed.
NASA Astrophysics Data System (ADS)
Roustan, Yelva; Duhanyan, Nora; Bocquet, Marc; Winiarek, Victor
2013-04-01
A sensitivity study of the numerical model and an inverse modelling approach, both applied to the atmospheric dispersion issues after the Chernobyl disaster, are presented in this paper. On the one hand, the robustness of the source term reconstruction through advanced data assimilation techniques was tested. On the other hand, the classical approaches for sensitivity analysis were enhanced by the use of an optimised forcing field which otherwise is known to be strongly uncertain. The POLYPHEMUS air quality system was used to perform the simulations of radionuclide dispersion. Activity concentrations in air and deposited to the ground of iodine-131, caesium-137 and caesium-134 were considered. The impact of the implemented parameterizations of the physical processes (dry and wet deposition, vertical turbulent diffusion), of the forcing fields (meteorology and source terms) and of the numerical configuration (horizontal resolution) was investigated for the sensitivity study of the model. A four-dimensional variational scheme (4D-Var) based on the approximate adjoint of the chemistry transport model was used to invert the source term. The data assimilation is performed with measurements of activity concentrations in air extracted from the Radioactivity Environmental Monitoring (REM) database. For most of the investigated configurations (sensitivity study), the statistics comparing the model results to the field measurements for concentrations in air are clearly improved when using a reconstructed source term. As regards the ground-deposited concentrations, an improvement can only be seen in the case of a satisfactorily modelled episode. Through these studies, the source term and the meteorological fields are shown to have a major impact on the activity concentrations in air. These studies also reinforce the use of a reconstructed source term instead of the usual estimate. A more detailed parameterization of the deposition process also seems able to improve the simulation results. For deposited activities the results are more complex, probably due to a strong sensitivity to some of the meteorological fields, which remain quite uncertain.
Bayesian source term determination with unknown covariance of measurements
NASA Astrophysics Data System (ADS)
Belal, Alkomiet; Tichý, Ondřej; Šmídl, Václav
2017-04-01
Determination of the source term of a release of hazardous material into the atmosphere is a very important task for emergency response. We are concerned with the problem of estimating the source term in the conventional linear inverse problem y = Mx, where the relationship between the vector of observations y and the unknown source term x is described by the source-receptor-sensitivity (SRS) matrix M. Since the system is typically ill-conditioned, the problem is recast as the optimization problem min_x (y - Mx)^T R^{-1} (y - Mx) + x^T B^{-1} x. The first term minimizes the error of the measurements with covariance matrix R, and the second term is a regularization of the source term. Different types of regularization arise for different choices of the matrices R and B; for example, Tikhonov regularization takes the covariance matrix B to be the identity matrix multiplied by a scalar parameter. In this contribution, we adopt a Bayesian approach to make inference on the unknown source term x as well as the unknown R and B. We assume the prior on x to be Gaussian with zero mean and unknown diagonal covariance matrix B. The covariance matrix R of the likelihood is also unknown. We consider two potential choices for the structure of the matrix R: first, a diagonal matrix, and second, a locally correlated structure using information on the topology of the measuring network. Since exact inference of the model is intractable, an iterative variational Bayes algorithm is used for simultaneous estimation of all model parameters. The practical usefulness of our contribution is demonstrated by applying the resulting algorithm to real data from the European Tracer Experiment (ETEX). This research is supported by the EEA/Norwegian Financial Mechanism under project MSMT-28477/2014, Source-Term Determination of Radionuclide Releases by Inverse Atmospheric Dispersion Modelling (STRADI).
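When R and B are taken as known, the quadratic objective above has the closed-form minimizer x = (Mᵀ R⁻¹ M + B⁻¹)⁻¹ Mᵀ R⁻¹ y, which is also the Gaussian posterior mean. The abstract's variational Bayes algorithm additionally infers R and B; this sketch does not attempt that and only illustrates the regularized solve:

```python
import numpy as np

def map_source_term(M, y, R, B):
    """Minimizer of (y - Mx)^T R^{-1} (y - Mx) + x^T B^{-1} x for known
    covariances R (measurement) and B (prior). Tikhonov regularization
    is the special case B = lambda * I."""
    Ri = np.linalg.inv(R)
    A = M.T @ Ri @ M + np.linalg.inv(B)   # posterior precision
    return np.linalg.solve(A, M.T @ Ri @ y)
```

As expected, a very diffuse prior (large B) reproduces the unregularized least-squares answer, while a tight prior shrinks the estimate toward zero.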
Low birth weight and air pollution in California: Which sources and components drive the risk?
Laurent, Olivier; Hu, Jianlin; Li, Lianfa; Kleeman, Michael J; Bartell, Scott M; Cockburn, Myles; Escobedo, Loraine; Wu, Jun
2016-01-01
Intrauterine growth restriction has been associated with exposure to air pollution, but there is a need to clarify which sources and components are most likely responsible. This study investigated the associations between low birth weight (LBW, <2500 g) in term-born infants (≥37 gestational weeks) and air pollution by source and composition in California over the period 2001-2008. Complementary exposure models were used: an empirical Bayesian kriging model for the interpolation of ambient pollutant measurements, a source-oriented chemical transport model (using California emission inventories) that estimated fine and ultrafine particulate matter (PM2.5 and PM0.1, respectively) mass concentrations (4 km × 4 km) by source and composition, a line-source roadway dispersion model at fine resolution, and traffic index estimates. Birth weight was obtained from California birth certificate records. A case-cohort design was used. Five controls per term LBW case were randomly selected (without covariate matching or stratification) from among term births. The resulting datasets were analyzed by logistic regression with a random effect by hospital, using generalized additive mixed models adjusted for race/ethnicity, education, maternal age and household income. In total, 72,632 singleton term LBW cases were included. Term LBW was positively and significantly associated with interpolated measurements of ozone but not total fine PM or nitrogen dioxide. No significant association was observed between term LBW and primary PM from all sources grouped together. A positive significant association was observed for secondary organic aerosols. Exposure to elemental carbon (EC), nitrates and ammonium was also positively and significantly associated with term LBW, but only for exposure during the third trimester of pregnancy. Significant positive associations were observed between term LBW risk and primary PM emitted by on-road gasoline and diesel or by commercial meat cooking sources.
Primary PM from wood burning was inversely associated with term LBW. Significant positive associations were also observed between term LBW and ultrafine particle numbers modeled with the line-source roadway dispersion model, traffic density and proximity to roadways. This large study based on complementary exposure metrics suggests that not only primary pollution sources (traffic and commercial meat cooking) but also EC and secondary pollutants are risk factors for term LBW. Copyright © 2016 Elsevier Ltd. All rights reserved.
Horowitz, A.J.; Elrick, K.A.; Smith, J.J.
2005-01-01
In cooperation with the City of Atlanta, Georgia, the US Geological Survey has designed and implemented a water-quantity and quality monitoring network that measures a variety of biological and chemical constituents in water and suspended sediment. The network consists of 20 long-term monitoring sites and is intended to assess water-quality trends in response to planned infrastructural improvements. Initial results from the network indicate that nonpoint-source contributions may be more significant than point-source contributions for selected sediment associated trace elements and nutrients. There also are indications of short-term discontinuous point-source contributions of these same constituents during baseflow.
Laceby, J Patrick; Huon, Sylvain; Onda, Yuichi; Vaury, Veronique; Evrard, Olivier
2016-12-01
The Fukushima Daiichi Nuclear Power Plant (FDNPP) accident resulted in radiocesium fallout contaminating coastal catchments of the Fukushima Prefecture. As the decontamination effort progresses, the potential downstream migration of radiocesium contaminated particulate matter from forests, which cover over 65% of the most contaminated region, requires investigation. Carbon and nitrogen elemental concentrations and stable isotope ratios are thus used to model the relative contributions of forest, cultivated and subsoil sources to deposited particulate matter in three contaminated coastal catchments. Samples were taken from the main identified sources: cultivated (n = 28), forest (n = 46), and subsoils (n = 25). Deposited particulate matter (n = 82) was sampled during four fieldwork campaigns from November 2012 to November 2014. A distribution modelling approach quantified relative source contributions with multiple combinations of element parameters (carbon only, nitrogen only, and four parameters) for two particle size fractions (<63 μm and <2 mm). Although there was significant particle size enrichment for the particulate matter parameters, these differences only resulted in a 6% (SD 3%) mean difference in relative source contributions. Further, the three different modelling approaches only resulted in a 4% (SD 3%) difference between relative source contributions. For each particulate matter sample, six models (i.e. <63 μm and <2 mm from the three modelling approaches) were used to incorporate a broader definition of potential uncertainty into model results. Forest sources were modelled to contribute 17% (SD 10%) of particulate matter indicating they present a long term potential source of radiocesium contaminated material in fallout impacted catchments. Subsoils contributed 45% (SD 26%) of particulate matter and cultivated sources contributed 38% (SD 19%). 
The reservoir of radiocesium in forested landscapes in the Fukushima region represents a potential long-term source of particulate contaminated matter that will require diligent management for the foreseeable future. Copyright © 2016 Elsevier Ltd. All rights reserved.
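The study's distribution modelling approach is more involved, but the core mass-balance idea of tracer un-mixing can be sketched deterministically: with two tracers and three sources, the fractional contributions solve a 3×3 linear system. All tracer values below are hypothetical stand-ins for the carbon and nitrogen parameters, not data from the paper.

```python
# Illustrative sketch only (the paper uses a distribution modelling approach):
# deterministic two-tracer, three-source un-mixing solved exactly with Cramer's rule.
# Rows: mass balance sum(f) = 1, tracer-1 balance, tracer-2 balance.

def unmix(sources, mixture):
    """Solve source fractions f1..f3 from two tracers plus the mass balance."""
    a = [[1.0, 1.0, 1.0],
         [sources[0][0], sources[1][0], sources[2][0]],
         [sources[0][1], sources[1][1], sources[2][1]]]
    b = [1.0, mixture[0], mixture[1]]

    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    d = det3(a)
    f = []
    for col in range(3):  # Cramer's rule: replace one column with b at a time
        ac = [row[:] for row in a]
        for r in range(3):
            ac[r][col] = b[r]
        f.append(det3(ac) / d)
    return f

# Hypothetical (tracer-1, tracer-2) signatures for forest, cultivated, subsoil sources
forest, cultivated, subsoil = (40.0, 2.0), (20.0, 1.5), (5.0, 0.3)
# Synthetic mixture built from 17% forest, 38% cultivated, 45% subsoil
mix = tuple(0.17 * f + 0.38 * c + 0.45 * s
            for f, c, s in zip(forest, cultivated, subsoil))
fracs = unmix([forest, cultivated, subsoil], mix)
```

Real data require the source signatures to be sufficiently distinct (a well-conditioned system) and constraints keeping fractions in [0, 1], which is part of what the distribution modelling handles.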
Modeling Vortex Generators in the Wind-US Code
NASA Technical Reports Server (NTRS)
Dudek, Julianne C.
2010-01-01
A source term model which simulates the effects of vortex generators was implemented into the Wind-US Navier Stokes code. The source term added to the Navier-Stokes equations simulates the lift force which would result from a vane-type vortex generator in the flowfield. The implementation is user-friendly, requiring the user to specify only three quantities for each desired vortex generator: the range of grid points over which the force is to be applied and the planform area and angle of incidence of the physical vane. The model behavior was evaluated for subsonic flow in a rectangular duct with a single vane vortex generator, supersonic flow in a rectangular duct with a counterrotating vortex generator pair, and subsonic flow in an S-duct with 22 co-rotating vortex generators. The validation results indicate that the source term vortex generator model provides a useful tool for screening vortex generator configurations and gives comparable results to solutions computed using a gridded vane.
NASA Technical Reports Server (NTRS)
Jung, Y. K.; Udalski, A.; Yee, J. C.; Sumi, T.; Gould, A.; Han, C.; Albrow, M. D.; Lee, C.-U.; Bennett, D. P.; Suzuki, D.
2017-01-01
In the process of analyzing an observed light curve, one often confronts various scenarios that can mimic planetary signals, causing difficulties in the accurate interpretation of the lens system. In this paper, we present the analysis of the microlensing event OGLE-2016-BLG-0733. The light curve of the event shows a long-term asymmetric perturbation that would appear to be due to a planet. From detailed modeling of the lensing light curve, however, we find that the perturbation originates from the binarity of the source rather than the lens. This result demonstrates that binary sources with roughly equal-luminosity components can mimic long-term perturbations induced by planets with projected separations near the Einstein ring. The result also demonstrates the importance of considering various interpretations of planet-like perturbations and of high-cadence observations for ensuring the unambiguous detection of planets.
On the application of subcell resolution to conservation laws with stiff source terms
NASA Technical Reports Server (NTRS)
Chang, Shih-Hung
1989-01-01
LeVeque and Yee recently investigated a one-dimensional scalar conservation law with stiff source terms modeling reacting flow problems and discovered that, for the very stiff case, most of the current finite difference methods developed for non-reacting flows produce wrong solutions when there is a propagating discontinuity. A numerical scheme, essentially nonoscillatory/subcell resolution - characteristic direction (ENO/SRCD), is proposed for solving conservation laws with stiff source terms. This scheme is a modification of Harten's ENO scheme with subcell resolution, ENO/SR. The locations of the discontinuities and the characteristic directions are essential in the design. Strang's time-splitting method is used, and time evolutions are carried out by advancing along the characteristics. Numerical experiments using this scheme show excellent results on the model problem of LeVeque and Yee. Comparisons of the results of ENO, ENO/SR, and ENO/SRCD are also presented.
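Strang's time-splitting, the backbone the abstract mentions, alternates half steps of the source with a full step of the advection. The sketch below applies it to the linear problem u_t + a u_x = -λu (a non-stiff stand-in for the reacting source; ENO/SRCD adds subcell resolution on top of this). The grid sizes and coefficients are illustrative.

```python
import math

# Strang splitting for u_t + a u_x = -lam * u with first-order upwind advection
# and an exact ODE solve for the source half steps, on a periodic domain [0, 1).
def strang_step(u, a, lam, dx, dt):
    decay = math.exp(-lam * dt / 2.0)
    u = [v * decay for v in u]                    # half step: source (exact)
    c = a * dt / dx                               # CFL number, a > 0
    n = len(u)
    u = [u[i] - c * (u[i] - u[i - 1]) for i in range(n)]  # full step: upwind advection
    return [v * decay for v in u]                 # half step: source

n, a, lam = 200, 1.0, 3.0
dx = 1.0 / n
dt = 0.5 * dx / a                                 # CFL = 0.5
u = [math.sin(2 * math.pi * i * dx) for i in range(n)]
steps = 200
for _ in range(steps):
    u = strang_step(u, a, lam, dx, dt)

t = steps * dt
# Exact solution: the initial profile advected by a*t and damped by exp(-lam*t)
exact = [math.exp(-lam * t) * math.sin(2 * math.pi * ((i * dx - a * t) % 1.0))
         for i in range(n)]
err = max(abs(ui - ei) for ui, ei in zip(u, exact))
```

For this linear problem the advection and source operators commute, so the splitting itself is exact and the remaining error is the first-order upwind diffusion; stiff nonlinear sources are where the wrong-propagation-speed pathology studied by LeVeque and Yee appears.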
A well-balanced scheme for Ten-Moment Gaussian closure equations with source term
NASA Astrophysics Data System (ADS)
Meena, Asha Kumari; Kumar, Harish
2018-02-01
In this article, we consider the Ten-Moment equations with source term, which occurs in many applications related to plasma flows. We present a well-balanced second-order finite volume scheme. The scheme is well-balanced for general equation of state, provided we can write the hydrostatic solution as a function of the space variables. This is achieved by combining hydrostatic reconstruction with contact preserving, consistent numerical flux, and appropriate source discretization. Several numerical experiments are presented to demonstrate the well-balanced property and resulting accuracy of the proposed scheme.
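The Ten-Moment system itself is beyond a short sketch, but the well-balancing ingredient the abstract names (hydrostatic reconstruction combined with a matching source discretization) can be illustrated on 1-D shallow water, where the lake-at-rest state h + b = const, u = 0 should be preserved to machine precision. The scheme below is an Audusse-type first-order sketch with a Rusanov flux, not the authors' second-order method.

```python
import math

# Minimal 1-D shallow-water sketch of well-balancing: hydrostatic reconstruction
# at interfaces plus a matching pressure correction acting as the source term.
g = 9.81

def rusanov(hl, ql, hr, qr):
    def flux(h, q):
        u = q / h if h > 0 else 0.0
        return (q, q * u + 0.5 * g * h * h)
    def speed(h, q):
        u = q / h if h > 0 else 0.0
        return abs(u) + math.sqrt(g * h)
    fl, fr = flux(hl, ql), flux(hr, qr)
    s = max(speed(hl, ql), speed(hr, qr))
    return (0.5 * (fl[0] + fr[0]) - 0.5 * s * (hr - hl),
            0.5 * (fl[1] + fr[1]) - 0.5 * s * (qr - ql))

def step(h, q, b, dx, dt):
    n = len(h)
    hn, qn = h[:], q[:]
    for i in range(n):                             # loop over interfaces i+1/2
        j = (i + 1) % n                            # periodic neighbor
        bs = max(b[i], b[j])                       # interface bottom elevation
        hl = max(0.0, h[i] + b[i] - bs)            # hydrostatic reconstruction
        hr = max(0.0, h[j] + b[j] - bs)
        ul = q[i] / h[i] if h[i] > 0 else 0.0
        ur = q[j] / h[j] if h[j] > 0 else 0.0
        f = rusanov(hl, hl * ul, hr, hr * ur)
        # pressure corrections: the well-balanced bed-slope source discretization
        hn[i] -= dt / dx * f[0]
        qn[i] -= dt / dx * (f[1] + 0.5 * g * (h[i] ** 2 - hl ** 2))
        hn[j] += dt / dx * f[0]
        qn[j] += dt / dx * (f[1] + 0.5 * g * (h[j] ** 2 - hr ** 2))
    return hn, qn

n, dx = 100, 0.01
b = [0.2 * math.sin(2 * math.pi * i * dx) ** 2 for i in range(n)]  # bumpy bottom
h = [1.0 - bi for bi in b]                                          # lake at rest
q = [0.0] * n
for _ in range(50):
    h, q = step(h, q, b, dx, dt=0.2 * dx)
drift = max(abs(h[i] + b[i] - 1.0) for i in range(n)) + max(abs(qi) for qi in q)
```

A naive cell-centered source discretization would generate spurious currents on this steady state; the reconstruction makes the flux differences cancel the source exactly, which is the property the paper establishes for hydrostatic solutions of the Ten-Moment system.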
NASA Astrophysics Data System (ADS)
Guinot, Vincent
2017-11-01
The validity of flux and source term formulae used in shallow water models with porosity for urban flood simulations is assessed by solving the two-dimensional shallow water equations over computational domains representing periodic building layouts. The models under assessment are the Single Porosity (SP), the Integral Porosity (IP) and the Dual Integral Porosity (DIP) models. Nine different geometries are considered. Eighteen two-dimensional initial value problems and six two-dimensional boundary value problems are defined. This results in a set of 96 fine grid simulations. Analysing the simulation results leads to the following conclusions: (i) the DIP flux and source term models outperform those of the SP and IP models when the Riemann problem is aligned with the main street directions, (ii) all models give erroneous flux closures when the Riemann problem is not aligned with one of the main street directions or when the main street directions are not orthogonal, (iii) the solution of the Riemann problem is self-similar in space-time when the street directions are orthogonal and the Riemann problem is aligned with one of them, (iv) a momentum balance confirms the existence of the transient momentum dissipation model presented in the DIP model, (v) none of the source term models presented so far in the literature allows all flow configurations to be accounted for, and (vi) future laboratory experiments aiming at the validation of flux and source term closures should focus on the high-resolution, two-dimensional monitoring of both water depth and flow velocity fields.
Numerical modeling of heat transfer in the fuel oil storage tank at thermal power plant
NASA Astrophysics Data System (ADS)
Kuznetsova, Svetlana A.
2015-01-01
This paper presents results of mathematical modeling of convection of a viscous incompressible fluid in a rectangular cavity with conducting walls of finite thickness, with a local heat source in the bottom of the cavity and convective heat exchange with the environment. A mathematical model is formulated in terms of the dimensionless variables stream function, vorticity, and temperature in a Cartesian coordinate system. The results show the distributions of hydrodynamic parameters and temperature obtained using different boundary conditions at the local heat source.
PFLOTRAN-RepoTREND Source Term Comparison Summary.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frederick, Jennifer M.
Code inter-comparison studies are useful exercises to verify and benchmark independently developed software to ensure proper function, especially when the software is used to model high-consequence systems which cannot be physically tested in a fully representative environment. This summary describes the results of the first portion of the code inter-comparison between PFLOTRAN and RepoTREND, which compares the radionuclide source term used in a typical performance assessment.
Defense RDT&E Online System (DROLS) Handbook
1993-07-01
A search of the descriptor TROPICAL DISEASES hierarchically will produce the same results as a cumulated search of the following terms: CHOLERA DENGUE ...Header List The Source Header List is a two-volume listing of all source names arranged in alphabetical order. Each entry consists of: Source Name...BB Belgium ................................................................ BE Belize
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grabaskas, David; Bucknor, Matthew; Jerden, James
2016-10-01
The potential release of radioactive material during a plant incident, referred to as the source term, is a vital design metric and will be a major focus of advanced reactor licensing. The U.S. Nuclear Regulatory Commission has stated an expectation for advanced reactor vendors to present a mechanistic assessment of the potential source term in their license applications. The mechanistic source term presents an opportunity for vendors to realistically assess the radiological consequences of an incident, and may allow reduced emergency planning zones and smaller plant sites. However, the development of a mechanistic source term for advanced reactors is not without challenges, as there are often numerous phenomena impacting the transportation and retention of radionuclides. This project sought to evaluate U.S. capabilities regarding the mechanistic assessment of radionuclide release from core damage incidents at metal fueled, pool-type sodium fast reactors (SFRs). The purpose of the analysis was to identify, and prioritize, any gaps regarding computational tools or data necessary for the modeling of radionuclide transport and retention phenomena. To accomplish this task, a parallel-path analysis approach was utilized. One path, led by Argonne and Sandia National Laboratories, sought to perform a mechanistic source term assessment using available codes, data, and models, with the goal to identify gaps in the current knowledge base. The second path, performed by an independent contractor, performed sensitivity analyses to determine the importance of particular radionuclides and transport phenomena in regards to offsite consequences. The results of the two pathways were combined to prioritize gaps in current capabilities.
Prediction of discretization error using the error transport equation
NASA Astrophysics Data System (ADS)
Celik, Ismail B.; Parsons, Don Roscoe
2017-06-01
This study focuses on an approach to quantify the discretization error associated with numerical solutions of partial differential equations by solving an error transport equation (ETE). The goal is to develop a method that can be used to adequately predict the discretization error using the numerical solution on only one grid/mesh. The primary problem associated with solving the ETE is the formulation of the error source term which is required for accurately predicting the transport of the error. In this study, a novel approach is considered which involves fitting the numerical solution with a series of locally smooth curves and then blending them together with a weighted spline approach. The result is a continuously differentiable analytic expression that can be used to determine the error source term. Once the source term has been developed, the ETE can easily be solved using the same solver that is used to obtain the original numerical solution. The new methodology is applied to the two-dimensional Navier-Stokes equations in the laminar flow regime. A simple unsteady flow case is also considered. The discretization error predictions based on the methodology presented in this study are in good agreement with the 'true error'. While in most cases the error predictions are not quite as accurate as those from Richardson extrapolation, the results are reasonable and only require one numerical grid. The current results indicate that there is much promise going forward with the newly developed error source term evaluation technique and the ETE.
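The core idea scales down to an ODE analogue, which makes the mechanics concrete: reconstruct the numerical solution smoothly, take the residual of the governing operator applied to that reconstruction as the error source term, and advance the error transport equation with the same solver. The sketch below does this for u' = -u with forward Euler and a local quadratic (central-difference) reconstruction; it is a simplified stand-in, not the paper's 2-D Navier-Stokes treatment with weighted splines.

```python
import math

# ODE analogue of the error transport equation (ETE): solve u' = -u with forward
# Euler, estimate the error source r as the residual of a local reconstruction,
# then advance the error equation E' = -E + r with the same solver.
h, N = 0.01, 100
u = [1.0]
for _ in range(N):
    u.append(u[-1] * (1.0 - h))                  # forward Euler for u' = -u

# Residual of the reconstruction: r_i = p'(x_i) + p(x_i), with p' from a
# central difference through the numerical solution (quadratic fit derivative).
r = [0.0] * (N + 1)
for i in range(1, N):
    r[i] = (u[i + 1] - u[i - 1]) / (2 * h) + u[i]
r[0], r[N] = r[1], r[N - 1]                      # crude endpoint closure

E = [0.0]                                        # error is zero at the start
for i in range(N):
    E.append(E[-1] + h * (-E[-1] + r[i]))        # same forward Euler, source r

true_err = u[N] - math.exp(-1.0)                 # actual discretization error at x = 1
```

The ETE estimate captures the leading O(h) truncation error using only the single-grid solution, which is the selling point over Richardson extrapolation; the accuracy of the estimate is set by how well the reconstruction's residual approximates the true truncation error.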
Modeling Vortex Generators in a Navier-Stokes Code
NASA Technical Reports Server (NTRS)
Dudek, Julianne C.
2011-01-01
A source-term model that simulates the effects of vortex generators was implemented into the Wind-US Navier-Stokes code. The source term added to the Navier-Stokes equations simulates the lift force that would result from a vane-type vortex generator in the flowfield. The implementation is user-friendly, requiring the user to specify only three quantities for each desired vortex generator: the range of grid points over which the force is to be applied and the planform area and angle of incidence of the physical vane. The model behavior was evaluated for subsonic flow in a rectangular duct with a single vane vortex generator, subsonic flow in an S-duct with 22 corotating vortex generators, and supersonic flow in a rectangular duct with a counter-rotating vortex-generator pair. The model was also used to successfully simulate microramps in supersonic flow by treating each microramp as a pair of vanes with opposite angles of incidence. The validation results indicate that the source-term vortex-generator model provides a useful tool for screening vortex-generator configurations and gives comparable results to solutions computed using gridded vanes.
Analysis and Synthesis of Tonal Aircraft Noise Sources
NASA Technical Reports Server (NTRS)
Allen, Matthew P.; Rizzi, Stephen A.; Burdisso, Ricardo; Okcu, Selen
2012-01-01
Fixed and rotary wing aircraft operations can have a significant impact on communities in proximity to airports. Simulation of predicted aircraft flyover noise, paired with listening tests, is useful to noise reduction efforts since it allows direct annoyance evaluation of aircraft or operations currently in the design phase. This paper describes efforts to improve the realism of synthesized source noise by including short term fluctuations, specifically for inlet-radiated tones resulting from the fan stage of turbomachinery. It details analysis performed on an existing set of recorded turbofan data to isolate inlet-radiated tonal fan noise, then extract and model short term tonal fluctuations using the analytic signal. Methodologies for synthesizing time-variant tonal and broadband turbofan noise sources using measured fluctuations are also described. Finally, subjective listening test results are discussed which indicate that time-variant synthesized source noise is perceived to be very similar to recordings.
Water Source Pollution and Disease Diagnosis in a Nigerian Rural Community.
ERIC Educational Resources Information Center
Sangodoyin, A. Y.
1991-01-01
Samples from five water sources (spring, borehole, pond, stream, and well) in rural Nigerian communities were tested. Results include source reliabilities in terms of water quality and quantity, pollution effects upon water quality, epidemiological effects related to water quantity and waste disposal, and impact of water quality improvement upon…
Rey-Martinez, Jorge; Pérez-Fernández, Nicolás
2016-12-01
The proposed validation goal of an intra-class correlation coefficient of 0.9 was reached with the results of this study. With the obtained results we consider the developed software (RombergLab) to be validated balance assessment software; its reliability depends on the technical specifications of the force platform used. The aim was to develop and validate posturography software and to share its source code in open-source terms. In this prospective non-randomized validation study, 20 consecutive adults underwent two balance assessment tests: six-condition posturography was performed using clinically approved software and force platform, and the same conditions were measured using the newly developed open-source software with a low-cost force platform. The intra-class correlation index of the sway area, obtained from the center-of-pressure variations on both devices for the six conditions, was the main variable used for validation. Excellent concordance between RombergLab and the clinically approved force platform was obtained (intra-class correlation coefficient = 0.94). A Bland and Altman concordance plot was also obtained. The source code used to develop RombergLab was published in open-source terms.
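The validation statistic can be sketched directly. The code below computes a two-way-ANOVA consistency ICC, i.e. ICC(3,1); the abstract does not state which ICC variant was used, so treat this as an illustrative choice, and the sway-area values are hypothetical.

```python
# Consistency intra-class correlation ICC(3,1) from a two-way ANOVA decomposition.
# The study's exact ICC variant is not specified; this is one common choice.
def icc_consistency(ratings):
    """ratings: one row per subject, each a list of k scores (one per device)."""
    n, k = len(ratings), len(ratings[0])
    grand = sum(sum(row) for row in ratings) / (n * k)
    row_means = [sum(row) / k for row in ratings]
    col_means = [sum(ratings[i][j] for i in range(n)) / n for j in range(k)]
    ss_total = sum((x - grand) ** 2 for row in ratings for x in row)
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)    # between subjects
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)    # between devices
    ss_err = ss_total - ss_rows - ss_cols                     # residual
    ms_rows = ss_rows / (n - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

# Hypothetical sway areas (cm^2) from the two platforms for six subjects
paired = [[1.2, 1.3], [2.4, 2.5], [0.8, 0.8], [3.1, 3.0], [1.9, 2.0], [2.7, 2.8]]
icc = icc_consistency(paired)
# Sanity check: identical ratings must give perfect agreement
identical = icc_consistency([[x, x] for x in [1.0, 2.0, 3.0, 4.0]])
```

An absolute-agreement ICC would additionally penalize a systematic offset between the two platforms (the device mean square), which matters when validating a low-cost platform against a clinical one.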
Comparing different types of source memory attributes in dementia of Alzheimer's type.
Mammarella, Nicola; Fairfield, Beth; Di Domenico, Alberto
2012-04-01
Source monitoring (SM) refers to our ability to discriminate between memories from different sources. Twenty healthy high-cognitive functioning older adults, 20 healthy low-cognitive functioning older adults, and 20 older adults with dementia of Alzheimer's type (DAT) were asked to perform a series of SM tasks that varied in terms of the to-be-remembered source attribute (perceptual, spatial, temporal, semantic, social, and affective details). Results indicated that older DAT adults had greater difficulty in SM compared to the healthy control groups, especially with spatial and semantic details. Data are discussed in terms of the SM framework and suggest that poor memory for some types of source information may be considered as an important indicator of clinical memory function when assessing for the presence and severity of dementia.
Wang, Liqin; Bray, Bruce E.; Shi, Jianlin; Fiol, Guilherme Del; Haug, Peter J.
2017-01-01
Objective Disease-specific vocabularies are fundamental to many knowledge-based intelligent systems and applications such as text annotation, cohort selection, disease diagnostic modeling, and therapy recommendation. Reference standards are critical in the development and validation of automated methods for disease-specific vocabularies. The goal of the present study is to design and test a generalizable method for the development of vocabulary reference standards from expert-curated, disease-specific biomedical literature resources. Methods We formed disease-specific corpora from literature resources such as textbooks, evidence-based synthesized online sources, clinical practice guidelines, and journal articles. Medical experts annotated and adjudicated disease-specific terms in four classes (i.e., causes or risk factors, signs or symptoms, diagnostic tests or results, and treatment). Annotations were mapped to UMLS concepts. We assessed source variation, the contribution of each source to building disease-specific vocabularies, the saturation of the vocabularies with respect to the number of sources used, and the generalizability of the method across diseases. Results The study resulted in 2588 string-unique annotations for heart failure in the four classes, and 193 and 425, respectively, for pulmonary embolism and rheumatoid arthritis in the treatment class. Approximately 80% of the annotations were mapped to UMLS concepts. The agreement among heart failure sources ranged between 0.28 and 0.46. The contribution of these sources to the final vocabulary ranged between 18% and 49%. With the sources explored, the heart failure vocabulary reached near saturation in all four classes with the inclusion of a minimum of six sources (or between four and seven sources if counting only terms that occurred in two or more sources). It took fewer sources to reach near saturation for the other two diseases in the treatment class.
Conclusions We developed a method for the development of disease-specific reference vocabularies. Expert-curated biomedical literature resources are substantial for acquiring disease-specific medical knowledge. It is feasible to reach near saturation in a disease-specific vocabulary using a relatively small number of literature sources. PMID:26971304
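The saturation analysis reduces to a simple computation: add sources one at a time and track how many new vocabulary terms each contributes. The sketch below uses a toy set of hypothetical heart-failure sign/symptom terms; the counting logic, not the terms, is the point.

```python
# Illustrative saturation analysis: cumulative unique-term counts as sources are
# added, plus a simple rule for when the vocabulary is "near saturation".
def saturation_curve(sources):
    """sources: list of sets of normalized terms; returns cumulative unique counts."""
    seen, curve = set(), []
    for terms in sources:
        seen |= terms
        curve.append(len(seen))
    return curve

def sources_to_saturation(curve, tol=0.02):
    """Smallest number of sources after which every later source adds fewer than
    tol * (final vocabulary size) new terms."""
    for i in range(len(curve)):
        if all(curve[j] - curve[j - 1] < tol * curve[-1]
               for j in range(i + 1, len(curve))):
            return i + 1
    return len(curve)

# Hypothetical term sets extracted from five literature sources
sources = [
    {"dyspnea", "edema", "orthopnea", "fatigue"},
    {"dyspnea", "rales", "edema", "jvd"},
    {"fatigue", "rales", "s3 gallop"},
    {"dyspnea", "edema", "s3 gallop"},
    {"orthopnea", "jvd", "rales"},
]
curve = saturation_curve(sources)
needed = sources_to_saturation(curve)
```

In this toy run the curve flattens after the third source, mirroring the paper's finding that a relatively small number of well-chosen sources captures most of a disease-specific vocabulary.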
NASA Astrophysics Data System (ADS)
Luu, Thomas; Brooks, Eugene D.; Szőke, Abraham
2010-03-01
In the difference formulation for the transport of thermally emitted photons the photon intensity is defined relative to a reference field, the black body at the local material temperature. This choice of reference field combines the separate emission and absorption terms that nearly cancel, thereby removing the dominant cause of noise in the Monte Carlo solution of thick systems, but introduces time and space derivative source terms that cannot be determined until the end of the time step. The space derivative source term can also lead to noise induced crashes under certain conditions where the real physical photon intensity differs strongly from a black body at the local material temperature. In this paper, we consider a difference formulation relative to the material temperature at the beginning of the time step, or in cases where an alternative temperature better describes the radiation field, that temperature. The result is a method where iterative solution of the material energy equation is efficient and noise induced crashes are avoided. We couple our generalized reference field scheme with an ad hoc interpolation of the space derivative source, resulting in an algorithm that produces the correct flux between zones as the physical system approaches the thick limit.
Long-term variability in bright hard X-ray sources: 5+ years of BATSE data
NASA Technical Reports Server (NTRS)
Robinson, C. R.; Harmon, B. A.; McCollough, M. L.; Paciesas, W. S.; Sahi, M.; Scott, D. M.; Wilson, C. A.; Zhang, S. N.; Deal, K. J.
1997-01-01
The operation of the Compton Gamma Ray Observatory (CGRO)/burst and transient source experiment (BATSE) continues to provide data for inclusion into a data base for the analysis of long term variability in bright, hard X-ray sources. The all-sky capability of BATSE provides up to 30 flux measurements/day for each source. The long baseline and the various rising and setting occultation flux measurements allow searches for periodic and quasi-periodic signals with periods of between several hours to hundreds of days to be conducted. The preliminary results from an analysis of the hard X-ray variability in 24 of the brightest BATSE sources are presented. Power density spectra are computed for each source and profiles are presented of the hard X-ray orbital modulations in some X-ray binaries, together with amplitude modulations and variations in outburst durations and intensities in recurrent X-ray transients.
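A periodic-signal search of the kind described can be sketched with a plain discrete-Fourier periodogram. Real occultation light curves are unevenly sampled, so a Lomb-Scargle periodogram would be used in practice; the evenly sampled synthetic series below (a 30-day modulation plus a weak linear trend) is a simplified stand-in.

```python
import math

# Plain DFT periodogram for an evenly sampled flux time series (a simplified
# stand-in for the period searches on BATSE occultation data, which are uneven).
def periodogram(flux):
    n = len(flux)
    mean = sum(flux) / n
    x = [v - mean for v in flux]
    power = []
    for k in range(1, n // 2):
        re = sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum(x[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        power.append((re * re + im * im) / n)
    return power  # power[k-1] corresponds to a period of n/k samples

# Synthetic source: 30-day orbital modulation sampled once per day for 300 days,
# plus a small secular trend (hypothetical numbers, not BATSE data)
n = 300
flux = [5.0 + 2.0 * math.sin(2 * math.pi * t / 30.0) + 0.001 * t for t in range(n)]
power = periodogram(flux)
k_peak = max(range(len(power)), key=lambda i: power[i]) + 1
period = n / k_peak   # recovered modulation period in days
```

The long BATSE baseline matters here: frequency resolution scales as one over the series length, so periods of hundreds of days need years of flux measurements to pin down.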
NASA Astrophysics Data System (ADS)
Hasnain, Shahid; Saqib, Muhammad; Mashat, Daoud Suleiman
2017-07-01
This paper presents a numerical approximation to the non-linear three-dimensional reaction-diffusion equation with a non-linear source term from population genetics. Since various initial and boundary value problems arising in three-dimensional reaction-diffusion phenomena are studied numerically by different numerical methods, here we use finite difference schemes (Alternating Direction Implicit and Fourth Order Douglas Implicit) to approximate the solution. Accuracy is studied in terms of L2, L∞ and relative error norms on randomly selected grids along time levels for comparison with analytical results. The test example demonstrates the accuracy, efficiency and versatility of the proposed schemes. Numerical results showed that the Fourth Order Douglas Implicit scheme is very efficient and reliable for solving the 3-D non-linear reaction-diffusion equation.
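The error-norm machinery used for such comparisons can be shown on a deliberately simple case: a 1-D explicit scheme with a manufactured source term so the exact solution is known everywhere. This is not the ADI or Douglas scheme of the paper, just an illustration of how the L2, L∞, and relative norms are evaluated against an analytical solution.

```python
import math

# Manufactured-solution test: choose the source so that u = exp(-t) sin(pi x)
# solves u_t = u_xx + s exactly, i.e. s = u_t - u_xx = (pi^2 - 1) u, then
# measure the discrete solution against it in the L2, L-infinity and relative norms.
nx, nt, T = 51, 4000, 0.5
dx, dt = 1.0 / (nx - 1), T / nt          # dt/dx^2 ~ 0.31, within FTCS stability

def exact(x, t):
    return math.exp(-t) * math.sin(math.pi * x)

u = [exact(i * dx, 0.0) for i in range(nx)]   # initial condition; u(0)=u(1)=0 held
for n in range(nt):
    t = n * dt
    un = u[:]
    for i in range(1, nx - 1):
        lap = (un[i - 1] - 2 * un[i] + un[i + 1]) / dx ** 2
        source = (math.pi ** 2 - 1.0) * exact(i * dx, t)
        u[i] = un[i] + dt * (lap + source)

err = [u[i] - exact(i * dx, T) for i in range(nx)]
l2 = math.sqrt(sum(e * e for e in err) * dx)          # discrete L2 norm
linf = max(abs(e) for e in err)                        # L-infinity norm
rel = linf / max(abs(exact(i * dx, T)) for i in range(nx))  # relative error
```

Manufactured solutions make the norm comparison exact by construction, which is why they are a standard verification device; the paper's randomly selected grid points serve the same purpose of sampling the error field against analytical values.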
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zachara, John M.; Chen, Xingyuan; Murray, Chris
In this study, a well-field within a uranium (U) plume in the groundwater-surface water transition zone was monitored for a 3 year period for water table elevation and dissolved solutes. The plume discharges to the Columbia River, which displays a dramatic spring stage surge resulting from snowmelt. Groundwater exhibits a low hydrologic gradient and chemical differences with river water. River water intrudes the site in spring. Specific aims were to assess the impacts of river intrusion on dissolved uranium (U aq), specific conductance (SpC), and other solutes, and to discriminate between transport, geochemical, and source term heterogeneity effects. Time series trends for U aq and SpC were complex and displayed large temporal and well-to-well variability as a result of water table elevation fluctuations, river water intrusion, and changes in groundwater flow directions. The wells were clustered into subsets exhibiting common behaviors resulting from the intrusion dynamics of river water and the location of source terms. Hot-spots in U aq varied in location with increasing water table elevation through the combined effects of advection and source term location. Heuristic reactive transport modeling with PFLOTRAN demonstrated that mobilized U aq was transported between wells and source terms in complex trajectories, and was diluted as river water entered and exited the groundwater system. While U aq time-series concentration trends varied significantly from year-to-year as a result of climate-caused differences in the spring hydrograph, common and partly predictable response patterns were observed that were driven by water table elevation, and the extent and duration of river water intrusion.
NASA Astrophysics Data System (ADS)
Conti, P.; Testi, D.; Grassi, W.
2017-11-01
This work reviews and compares suitable models for the thermal analysis of forced convection over a heat source in a porous medium. The set of available models refers to an infinite medium in which a fluid moves over three different heat source geometries: the moving infinite line source, the moving finite line source, and the moving infinite cylindrical source. The present work thus provides a plain and handy compendium of the above-mentioned models for forced external convection in porous media; in addition, we propose a dimensionless analysis to quantify the reciprocal deviation among the available models, helping to select the most suitable one for the specific case of interest. Under specific conditions, the advection term becomes ineffective in terms of heat transfer performance, allowing the use of purely conductive models. For that reason, available analytical and numerical solutions for purely conductive media are also reviewed and compared, again by dimensionless criteria. Therefore, one can choose the simplest solution, with significant benefits in terms of computational effort and interpretation of the results. The main outcomes presented in the paper are: the conditions under which the system can be considered subject to a Darcy flow, the minimal distance beyond which the finite dimension of the heat source does not affect the thermal field, and the critical fluid velocity needed to have a significant contribution of the advection term in the overall heat transfer process.
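The simplest of the three models, the steady moving infinite line source, has the classical closed form ΔT(x, y) = q/(2πk) · exp(vx/2a) · K0(vr/2a), with K0 the modified Bessel function of the second kind. The sketch below evaluates it with K0 computed from its integral representation K0(z) = ∫₀^∞ exp(-z cosh t) dt; all parameter values are illustrative, not taken from the paper.

```python
import math

# Steady moving infinite line source (MILS):
#   dT(x, y) = q / (2 pi k) * exp(v x / (2 a)) * K0(v r / (2 a))
# with K0 from its integral representation, evaluated by composite Simpson.
def k0(z, n=4000, t_max=12.0):
    """Modified Bessel function K0(z) = int_0^inf exp(-z cosh t) dt, z > 0."""
    h = t_max / n
    def f(t):
        return math.exp(-z * math.cosh(t))
    s = f(0.0) + f(t_max)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(i * h)
    return s * h / 3.0

def mils_dT(x, y, q=50.0, k=2.0, a=1e-6, v=1e-6):
    """Temperature rise at (x, y) around a line source with uniform flow along +x.
    q: heat rate per unit length [W/m], k: conductivity, a: diffusivity, v: velocity.
    Parameter values are illustrative defaults, not from the paper."""
    r = math.hypot(x, y)
    return q / (2.0 * math.pi * k) * math.exp(v * x / (2.0 * a)) * k0(v * r / (2.0 * a))

downstream = mils_dT(1.0, 0.0)    # warm plume carried downstream by advection
upstream = mils_dT(-1.0, 0.0)     # upstream spreading suppressed
```

The downstream/upstream asymmetry is exactly the advection contribution the paper's dimensionless criteria quantify: as v → 0 the exponential factor tends to one and the solution degenerates toward the purely conductive line-source field.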
An extension of the Lighthill theory of jet noise to encompass refraction and shielding
NASA Technical Reports Server (NTRS)
Ribner, Herbert S.
1995-01-01
A formalism for jet noise prediction is derived that includes the refractive 'cone of silence' and other effects; outside the cone it approximates the simple Lighthill format. A key step is deferral of the simplifying assumption of uniform density in the dominant 'source' term. The result is conversion to a convected wave equation retaining the basic Lighthill source term. The main effect is to amend the Lighthill solution to allow for refraction by mean flow gradients, achieved via a frequency-dependent directional factor. A general formula for power spectral density emitted from unit volume is developed as the Lighthill-based value multiplied by a squared 'normalized' Green's function (the directional factor), referred to a stationary point source. The convective motion of the sources, with its powerful amplifying effect, also directional, is already accounted for in the Lighthill format: wave convection and source convection are decoupled. The normalized Green's function appears to be near unity outside the refraction-dominated 'cone of silence'; this validates our long-term practice of using Lighthill-based approaches outside the cone, with extension inside via the Green's function. The function is obtained either experimentally (injected 'point' source) or numerically (computational aeroacoustics). Approximation by unity seems adequate except near the cone and except when there are shrouding jets: in that case the difference from unity quantifies the shielding effect. Further extension yields dipole and monopole source terms (cf. Morfey, Mani, and others) when the mean flow possesses density gradients (e.g., hot jets).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Paul L. Wichlacz
2003-09-01
This source-term summary document is intended to describe the current understanding of contaminant source terms and the conceptual model for potential source-term release to the environment at the Idaho National Engineering and Environmental Laboratory (INEEL), as presented in published INEEL reports. The document presents a generalized conceptual model of the sources of contamination and describes the general categories of source terms, primary waste forms, and factors that affect the release of contaminants from the waste form into the vadose zone and Snake River Plain Aquifer. Where the information has previously been published and is readily available, summaries of the inventory of contaminants are also included. Uncertainties that affect the estimation of the source term release are also discussed where they have been identified by the Source Term Technical Advisory Group. Areas in which additional information is needed (i.e., research needs) are also identified.
Effect of source location and listener location on ILD cues in a reverberant room
NASA Astrophysics Data System (ADS)
Ihlefeld, Antje; Shinn-Cunningham, Barbara G.
2004-05-01
Short-term interaural level differences (ILDs) were analyzed for simulations of the signals that would reach a listener in a reverberant room. White noise was convolved with manikin head-related impulse responses measured in a classroom to simulate different locations of the source relative to the manikin and different manikin positions in the room. The ILDs of the signals were computed within each third-octave band over a relatively short time window to investigate how reliably ILD cues encode source laterality. Overall, the mean of the ILD magnitude increases with lateral angle and decreases with distance, as expected. Increasing reverberation decreases the mean ILD magnitude and increases the variance of the short-term ILD, so that the spatial information carried by ILD cues is degraded by reverberation. These results suggest that the mean ILD is not a reliable cue for determining source laterality in a reverberant room. However, by taking into account both the mean and variance, the distribution of high-frequency short-term ILDs provides some spatial information. This analysis suggests that, in order to use ILDs to judge source direction in reverberant space, listeners must accumulate information about how the short-term ILD varies over time. [Work supported by NIDCD and AFOSR.]
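The band-limited short-term ILD analysis described above can be sketched as follows. This is a simplified stand-in, not the authors' code: it uses FFT band energies over fixed windows rather than true third-octave filters, and all signal parameters are hypothetical.

```python
import numpy as np

def short_term_ild(left, right, fs, f_lo, f_hi, win=0.01):
    """Mean and variance of the short-term ILD (dB) in one frequency band.
    ILD per window = 10*log10(E_left / E_right) from FFT band energies."""
    n = int(win * fs)  # samples per analysis window
    f = np.fft.rfftfreq(n, 1.0 / fs)
    band = (f >= f_lo) & (f < f_hi)
    ilds = []
    for start in range(0, min(len(left), len(right)) - n + 1, n):
        e_l = np.sum(np.abs(np.fft.rfft(left[start:start + n])[band]) ** 2) + 1e-12
        e_r = np.sum(np.abs(np.fft.rfft(right[start:start + n])[band]) ** 2) + 1e-12
        ilds.append(10.0 * np.log10(e_l / e_r))
    ilds = np.asarray(ilds)
    return ilds.mean(), ilds.var()
```

Adding an uncorrelated "reverberant" component to one ear shrinks the mean ILD toward zero and inflates its variance, mirroring the degradation of the spatial cue reported in the abstract.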
NASA Astrophysics Data System (ADS)
García-Mayordomo, J.; Gaspar-Escribano, J. M.; Benito, B.
2007-10-01
A probabilistic seismic hazard assessment of the Province of Murcia in terms of peak ground acceleration (PGA) and spectral accelerations [SA(T)] is presented in this paper. In contrast to most of the previous studies in the region, which were performed for PGA making use of intensity-to-PGA relationships, hazard is here calculated in terms of magnitude and using European spectral ground-motion models. Moreover, we have considered the most important faults in the region as specific seismic sources, and also comprehensively reviewed the earthquake catalogue. Hazard calculations are performed following the Probabilistic Seismic Hazard Assessment (PSHA) methodology using a logic tree, which accounts for three different seismic source zonings and three different ground-motion models. Hazard maps in terms of PGA and SA(0.1, 0.2, 0.5, 1.0 and 2.0 s) and coefficient of variation (COV) for the 475-year return period are shown. Subsequent analysis is focused on three sites of the province, namely, the cities of Murcia, Lorca and Cartagena, which are important industrial and tourism centres. Results at these sites have been analysed to evaluate the influence of the different input options. The most important factor affecting the results is the choice of the attenuation relationship, whereas the influence of the selected seismic source zonings appears strongly site-dependent. Finally, we have performed an analysis of source contribution to hazard at each of these cities to provide preliminary guidance in devising specific risk scenarios. We have found that local source zones control the hazard for PGA and SA(T ≤ 1.0 s), although the contribution from specific fault sources and long-distance north Algerian sources becomes significant from SA(0.5 s) onwards.
NASA Astrophysics Data System (ADS)
Evangeliou, Nikolaos; Hamburger, Thomas; Cozic, Anne; Balkanski, Yves; Stohl, Andreas
2017-07-01
This paper describes the results of an inverse modeling study for the determination of the source term of the radionuclides 134Cs, 137Cs and 131I released after the Chernobyl accident. The accident occurred on 26 April 1986 in the Former Soviet Union and released about 10^19 Bq of radioactive materials that were transported as far away as the USA and Japan. Thereafter, several attempts to assess the magnitude of the emissions were made that were based on the knowledge of the core inventory and the levels of the spent fuel. More recently, when modeling tools were further developed, inverse modeling techniques were applied to the Chernobyl case for source term quantification. However, because radioactivity is a sensitive topic for the public and attracts a lot of attention, high-quality measurements, which are essential for inverse modeling, were not made available except for a few sparse activity concentration measurements far from the source and far from the main direction of the radioactive fallout. For the first time, we apply Bayesian inversion of the Chernobyl source term using not only activity concentrations but also deposition measurements from the most recent public data set. These observations refer to a data rescue attempt that started more than 10 years ago, with a final goal to provide available measurements to anyone interested. Regarding our inverse modeling results, emissions of 134Cs were estimated to be 80 PBq, or 30-50 % higher than what was previously published. From the released amount of 134Cs, about 70 PBq were deposited all over Europe. Similar to 134Cs, emissions of 137Cs were estimated as 86 PBq, on the same order as previously reported results. Finally, 131I emissions of 1365 PBq were found, which are about 10 % less than the prior total releases.
The inversion pushes the injection heights of the three radionuclides to higher altitudes (up to about 3 km) than previously assumed (≈ 2.2 km) in order to better match both concentration and deposition observations over Europe. The results of the present inversion were confirmed using an independent Eulerian model, for which deposition patterns were also improved when using the estimated posterior releases. Although the independent model tends to underestimate deposition in countries that are not in the main direction of the plume, it reproduces country levels of deposition very efficiently. The results were also tested for robustness against different setups of the inversion through sensitivity runs. The source term data from this study are publicly available.
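The Bayesian update at the core of such an inversion can be illustrated with a linear-Gaussian toy problem. Here a small synthetic source-receptor matrix stands in for the atmospheric transport model, and the posterior release vector is the standard best-linear-unbiased correction of a first-guess source term; every number below is synthetic, chosen only to show the mechanics.

```python
import numpy as np

rng = np.random.default_rng(0)
n_src, n_obs = 5, 40                      # release segments, observations
M = rng.random((n_obs, n_src))            # synthetic source-receptor sensitivities
x_true = np.array([3.0, 8.0, 5.0, 1.0, 6.0])        # "true" releases (arbitrary units)
y = M @ x_true + 0.05 * rng.standard_normal(n_obs)  # noisy concentration/deposition obs

x_a = np.full(n_src, 4.0)                 # prior (first-guess) source term
B = np.eye(n_src) * 4.0                   # prior error covariance
R = np.eye(n_obs) * 0.05**2               # observation error covariance

# Linear-Gaussian posterior mean: x_a + B M^T (M B M^T + R)^-1 (y - M x_a)
K = B @ M.T @ np.linalg.inv(M @ B @ M.T + R)
x_post = x_a + K @ (y - M @ x_a)
```

With many observations and small noise, the posterior moves the prior strongly toward the truth; real applications differ mainly in the size of M (built from dispersion model runs) and in log-space or non-negativity constraints on the releases.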
ERIC Educational Resources Information Center
Newhagen, John E.
1994-01-01
Analyzes television news stories broadcast during the Persian Gulf War for censorship disclaimers, the censoring source, and the producing network. Discusses results in terms of both production- and viewer-based differences. Considers the question of whether censorship "works" in terms of unanticipated results related to story…
Temporal X-ray astronomy with a pinhole camera [Cygnus and Scorpius constellations]
NASA Technical Reports Server (NTRS)
Holt, S. S.
1975-01-01
Preliminary results from the Ariel-5 all-sky X-ray monitor are presented, along with sufficient experiment details to define the experiment sensitivity. Periodic modulation of the X-ray emission was investigated from three sources with which specific periods were associated, with the results that the 4.8 hour variation from Cyg X-3 was confirmed, a long-term average 5.6 day variation from Cyg X-1 was discovered, and no detectable 0.787 day modulation of Sco X-1 was observed. Consistency of the long-term Sco X-1 emission with a shot-noise model is discussed, wherein the source behavior is shown to be interpretable as approximately 100 flares per day, each with a duration of several hours. A sudden increase in the Cyg X-1 intensity by almost a factor of three on 22 April 1975 is reported, after 5 months of relative source constancy. The light curve of a bright nova-like transient source in Triangulum is presented, and compared with previously observed transient sources. Preliminary evidence for the existence of X-ray bursts with duration less than 1 hour is offered.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Genn Saji
2006-07-01
The term 'ultimate risk' is used here to describe the probabilities and radiological consequences that should be incorporated in siting, containment design and accident management of nuclear power plants for hypothetical accidents. It is closely related to the source terms specified in siting criteria, which assure an adequate separation of the radioactive inventories of the plants from the public in the event of a hypothetical and severe accident situation. The author would like to point out that current source terms, which are based on the information from the Windscale accident (1957) through TID-14844, are very outdated and do not incorporate lessons learned from either the Three Mile Island (TMI, 1979) or the Chernobyl accident (1986), two of the most severe accidents ever experienced. As a result of the observations of benign radionuclides released at TMI, the technical community in the US felt that a more realistic evaluation of severe reactor accident source terms was necessary. Against this background, the 'source term research project' was organized in 1984 to respond to these challenges. Unfortunately, soon after the final report from this project was released, the Chernobyl accident occurred. Due to the enormous consequences induced by the accident, the once-optimistic prospects of establishing a more realistic source term were completely shattered. The Chernobyl accident, with its human death toll and dispersion of a large part of the fission fragment inventories into the environment, created a significant degradation in the public's acceptance of nuclear energy throughout the world. In spite of this, nuclear communities have been prudent in responding to the public's anxiety towards the ultimate safety of nuclear plants, since there still remain many unknown points revolving around the mechanism of the Chernobyl accident.
In order to resolve some of these mysteries, the author has performed a scoping study of the dispersion and deposition mechanisms of fuel particles and fission fragments during the initial phase of the Chernobyl accident. Through this study, it is now possible to generally reconstruct the radiological consequences by using a dispersion calculation technique, combined with the meteorological data at the time of the accident and land contamination densities of 137Cs measured and reported around the Chernobyl area. Although it is challenging to incorporate lessons learned from the Chernobyl accident into the source term issues, the author has already developed an example of safety goals by incorporating the radiological consequences of the accident. The example provides safety goals by specifying source term releases in a graded approach in combination with probabilities, i.e. risks. The author believes that the future source term specification should be directly linked with safety goals. (author)
Rebound of a coal tar creosote plume following partial source zone treatment with permanganate.
Thomson, N R; Fraser, M J; Lamarche, C; Barker, J F; Forsey, S P
2008-11-14
The long-term management of dissolved plumes originating from a coal tar creosote source is a technical challenge. For some sites stabilization of the source may be the best practical solution to decrease the contaminant mass loading to the plume and associated off-site migration. At the bench-scale, the deposition of manganese oxides, a permanganate reaction byproduct, has been shown to cause pore plugging and the formation of a manganese oxide layer adjacent to the non-aqueous phase liquid creosote which reduces post-treatment mass transfer and hence mass loading from the source. The objective of this study was to investigate the potential of partial permanganate treatment to reduce the ability of a coal tar creosote source zone to generate a multi-component plume at the pilot-scale over both the short-term (weeks to months) and the long-term (years) at a site where there is >10 years of comprehensive synoptic plume baseline data available. A series of preliminary bench-scale experiments were conducted to support this pilot-scale investigation. The results from the bench-scale experiments indicated that if sufficient mass removal of the reactive compounds is achieved then the effective solubility, aqueous concentration and rate of mass removal of the more abundant non-reactive coal tar creosote compounds such as biphenyl and dibenzofuran can be increased. Manganese oxide formation and deposition caused an order-of-magnitude decrease in hydraulic conductivity. Approximately 125 kg of permanganate were delivered into the pilot-scale source zone over 35 days, and based on mass balance estimates <10% of the initial reactive coal tar creosote mass in the source zone was oxidized. Mass discharge estimated at a down-gradient fence line indicated >35% reduction for all monitored compounds except for biphenyl, dibenzofuran and fluoranthene 150 days after treatment, which is consistent with the bench-scale experimental results. 
Pre- and post-treatment soil core data indicated a highly variable and random spatial distribution of mass within the source zone and provided no insight into the mass removed of any of the monitored species. The down-gradient plume was monitored approximately 1, 2 and 4 years following treatment. The data collected at 1 and 2 years post-treatment showed a decrease in mass discharge (10 to 60%) and/or total plume mass (0 to 55%); however, by 4 years post-treatment there was a rebound in both mass discharge and total plume mass for all monitored compounds to pre-treatment values or higher. The variability of the data collected was too large to resolve subtle changes in plume morphology, particularly near the source zone, that would provide insight into the impact of the formation and deposition of manganese oxides that occurred during treatment on mass transfer and/or flow by-passing. Overall, the results from this pilot-scale investigation indicate that there was a significant but short-term (months) reduction of mass emanating from the source zone as a result of permanganate treatment but there was no long-term (years) impact on the ability of this coal tar creosote source zone to generate a multi-component plume.
Poggi, L A; Malizia, A; Ciparisse, J F; Gaudio, P
2016-10-01
An open issue still under investigation by several international entities working in the safety and security field for the foreseen nuclear fusion reactors is the estimation of source terms that are a hazard for the operators and public, and for the machine itself in terms of efficiency and integrity in case of severe accident scenarios. Source term estimation is a crucial key safety issue to be addressed in future reactor safety assessments, and the estimates available at this time are not sufficiently satisfactory. The lack of neutronic data, along with the insufficiently accurate methodologies used until now, calls for an integrated methodology for source term estimation that can provide predictions with an adequate accuracy. This work proposes a complete methodology to estimate dust source terms starting from a broad information gathering. The wide number of parameters that can influence dust source term production is reduced with statistical tools using a combination of screening, sensitivity analysis, and uncertainty analysis. Finally, a preliminary and simplified methodology for dust source term production prediction for future devices is presented.
Effect of Americium-241 Content on Plutonium Radiation Source Terms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rainisch, R.
1998-12-28
The management of excess plutonium by the US Department of Energy includes a number of storage and disposition alternatives. Savannah River Site (SRS) is supporting DOE with plutonium disposition efforts, including the immobilization of certain plutonium materials in a borosilicate glass matrix. Surplus plutonium inventories slated for vitrification include materials with elevated levels of Americium-241. The Am-241 content of plutonium materials generally reflects in-growth of the isotope due to decay of plutonium and is age-dependent. However, select plutonium inventories have Am-241 levels considerably above the age-based levels. Elevated levels of americium significantly impact radiation source terms of plutonium materials and will make handling of the materials more difficult. Plutonium materials are normally handled in shielded glove boxes, and the work entails both extremity and whole body exposures. This paper reports results of an SRS analysis of plutonium materials source terms vs. the Americium-241 content of the materials. Data on the dependence and magnitude of source terms vs. Am-241 levels are presented and discussed. The investigation encompasses both vitrified and un-vitrified plutonium oxide (PuO2) batches.
Numerical Simulations of Reacting Flows Using Asynchrony-Tolerant Schemes for Exascale Computing
NASA Astrophysics Data System (ADS)
Cleary, Emmet; Konduri, Aditya; Chen, Jacqueline
2017-11-01
Communication and data synchronization between processing elements (PEs) are likely to pose a major challenge in scalability of solvers at the exascale. Recently developed asynchrony-tolerant (AT) finite difference schemes address this issue by relaxing communication and synchronization between PEs at a mathematical level while preserving accuracy, resulting in improved scalability. The performance of these schemes has been validated for simple linear and nonlinear homogeneous PDEs. However, many problems of practical interest are governed by highly nonlinear PDEs with source terms, whose solution may be sensitive to perturbations caused by communication asynchrony. The current work applies the AT schemes to combustion problems with chemical source terms, yielding a stiff system of PDEs with nonlinear source terms highly sensitive to temperature. Examples shown will use single-step and multi-step CH4 mechanisms for 1D premixed and nonpremixed flames. Error analysis will be discussed both in physical and spectral space. Results show that additional errors introduced by the AT schemes are negligible and the schemes preserve their accuracy. We acknowledge funding from the DOE Computational Science Graduate Fellowship administered by the Krell Institute.
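The stiffness and temperature sensitivity of chemical source terms mentioned above can be illustrated with a single-step Arrhenius model. This is a hedged stand-in, not the CH4 mechanisms or the AT schemes of the abstract (asynchrony is not modeled here); it only shows why such source terms are stiff and why small temperature perturbations matter.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative (non-physical) constants: pre-exponential factor,
# activation temperature (K), and heat release per unit fuel mass fraction.
A, Ta, q = 1.0e8, 1.5e4, 1500.0

def rhs(t, s):
    """Fuel mass fraction Y and temperature T with an Arrhenius source term."""
    Y, T = s
    w = A * Y * np.exp(-Ta / T)   # reaction rate: extremely sensitive to T
    return [-w, q * w]

# Two ignitions differing only by a 20 K perturbation in initial temperature.
sol1 = solve_ivp(rhs, (0.0, 5.0), [1.0, 1000.0], method="BDF", rtol=1e-8, atol=1e-10)
sol2 = solve_ivp(rhs, (0.0, 5.0), [1.0, 1020.0], method="BDF", rtol=1e-8, atol=1e-10)
```

Both cases burn to completion (final temperature ≈ T0 + q), but the slightly hotter case ignites measurably earlier; a perturbation of this kind is exactly what communication asynchrony could introduce, which is why the error behavior of AT schemes must be verified for such systems.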
Bayesian source term estimation of atmospheric releases in urban areas using LES approach.
Xue, Fei; Kikumoto, Hideki; Li, Xiaofeng; Ooka, Ryozo
2018-05-05
The estimation of source information from limited measurements of a sensor network is a challenging inverse problem, which can be viewed as an assimilation process of the observed concentration data and the predicted concentration data. When dealing with releases in built-up areas, the predicted data are generally obtained by the Reynolds-averaged Navier-Stokes (RANS) equations, which yield building-resolving results; however, RANS-based models are outperformed by large-eddy simulation (LES) in the predictions of both airflow and dispersion. Therefore, it is important to explore the possibility of improving the estimation of the source parameters by using the LES approach. In this paper, a novel source term estimation method is proposed based on the LES approach using Bayesian inference. The source-receptor relationship is obtained by solving the adjoint equations constructed using the time-averaged flow field simulated by the LES approach based on the gradient diffusion hypothesis. A wind tunnel experiment with a constant point source downwind of a single building model is used to evaluate the performance of the proposed method, which is compared with that of the existing method using a RANS model. The results show that the proposed method reduces the errors of source location and releasing strength by 77% and 28%, respectively.
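A toy version of this source term estimation problem is sketched below. A hypothetical distance-decay function replaces the adjoint/LES source-receptor machinery, and the maximum a posteriori source location and strength are found by grid search under a flat prior and Gaussian measurement error; everything here (sensor layout, decay law, noise level) is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(1)
sensors = rng.uniform(0.0, 10.0, size=(8, 2))     # hypothetical sensor network
true_src, true_q = np.array([3.0, 4.0]), 5.0      # unknown location and strength

def srm(src):
    """Toy source-receptor coefficients (stand-in for the adjoint solution)."""
    d = np.linalg.norm(sensors - src, axis=1)
    return 1.0 / (1.0 + d**2)

sigma = 0.01
y = true_q * srm(true_src) + sigma * rng.standard_normal(8)  # observed concentrations

# Grid search for the MAP estimate: for each candidate location, the best
# strength has a closed form, and the log-posterior is minus the misfit.
xs = np.linspace(0.0, 10.0, 101)
best, best_q, best_lp = None, None, -np.inf
for gx in xs:
    for gy in xs:
        h = srm(np.array([gx, gy]))
        q_hat = h @ y / (h @ h)                   # least-squares strength
        lp = -np.sum((y - q_hat * h)**2) / (2.0 * sigma**2)
        if lp > best_lp:
            best, best_q, best_lp = np.array([gx, gy]), q_hat, lp
```

Real methods replace the grid search with MCMC or optimization and the toy decay law with adjoint concentrations, but the structure of the posterior is the same.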
LONG TERM HYDROLOGICAL IMPACT ASSESSMENT (LTHIA)
LTHIA is a universal Urban Sprawl analysis tool that is available to all at no charge through the Internet. It estimates impacts on runoff, recharge and nonpoint source pollution resulting from past or proposed land use changes. It gives long-term average annual runoff for a lan...
NASA Astrophysics Data System (ADS)
Chamakuri, Nagaiah; Engwer, Christian; Kunisch, Karl
2014-09-01
Optimal control for cardiac electrophysiology based on the bidomain equations in conjunction with the Fenton-Karma ionic model is considered. This generic ventricular model approximates well the restitution properties and spiral wave behavior of more complex ionic models of cardiac action potentials. However, it is challenging due to the appearance of state-dependent discontinuities in the source terms. A computational framework for the numerical realization of optimal control problems is presented. Essential ingredients are a shape calculus based treatment of the sensitivities of the discontinuous source terms and a marching cubes algorithm to track iso-surface of excitation wavefronts. Numerical results exhibit successful defibrillation by applying an optimally controlled extracellular stimulus.
Development of surrogate models for the prediction of the flow around an aircraft propeller
NASA Astrophysics Data System (ADS)
Salpigidou, Christina; Misirlis, Dimitris; Vlahostergios, Zinon; Yakinthos, Kyros
2018-05-01
In the present work, the derivation of two surrogate models (SMs) for modelling the flow around a propeller for small aircrafts is presented. Both methodologies use derived functions based on computations with the detailed propeller geometry. The computations were performed using k-ω shear stress transport for modelling turbulence. In the SMs, the modelling of the propeller was performed in a computational domain of disk-like geometry, where source terms were introduced in the momentum equations. In the first SM, the source terms were polynomial functions of swirl and thrust, mainly related to the propeller radius. In the second SM, regression analysis was used to correlate the source terms with the velocity distribution through the propeller. The proposed SMs achieved faster convergence, in relation to the detail model, by providing also results closer to the available operational data. The regression-based model was the most accurate and required less computational time for convergence.
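The disk-with-source-terms idea behind both surrogate models can be made concrete as follows: the axial momentum source density in each annular cell of the disk is set from an assumed radial loading shape and normalized so that its volume integral returns the prescribed thrust. The loading shape and all numbers are illustrative assumptions, not the paper's fitted polynomial or regression functions.

```python
import numpy as np

T, R_hub, R_tip, dz = 800.0, 0.1, 0.9, 0.05   # thrust (N), radii (m), disk thickness (m)

# Annular cells of the actuator disk
r_edges = np.linspace(R_hub, R_tip, 41)
r = 0.5 * (r_edges[:-1] + r_edges[1:])               # cell-center radii
area = np.pi * (r_edges[1:]**2 - r_edges[:-1]**2)    # annulus areas

# Illustrative radial loading shape ~ r * sqrt(1 - (r/R_tip)^2): zero at the tip
shape = r * np.sqrt(1.0 - (r / R_tip)**2)
loading = shape / np.sum(shape * area)               # normalized over the disk area

# Axial momentum source density (N/m^3) to add to the momentum equation
S_axial = T * loading / dz

total = np.sum(S_axial * area * dz)                  # volume integral of the source
```

By construction the integral recovers the thrust exactly, which is the consistency check any such source-term surrogate should pass before tuning the radial (and swirl) distributions against the resolved-propeller computations.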
Filtered Mass Density Function for Design Simulation of High Speed Airbreathing Propulsion Systems
NASA Technical Reports Server (NTRS)
Drozda, T. G.; Sheikhi, R. M.; Givi, Peyman
2001-01-01
The objective of this research is to develop and implement new methodology for large eddy simulation (LES) of high-speed reacting turbulent flows. We have just completed two (2) years of Phase I of this research. This annual report provides a brief and up-to-date summary of our activities during the period September 1, 2000 through August 31, 2001. In the work within the past year, a methodology termed "velocity-scalar filtered density function" (VSFDF) is developed and implemented for large eddy simulation of turbulent flows. In this methodology the effects of the unresolved subgrid scales (SGS) are taken into account by considering the joint probability density function (PDF) of all of the components of the velocity and scalar vectors. An exact transport equation is derived for the VSFDF in which the effects of the unresolved SGS convection, SGS velocity-scalar source, and SGS scalar-scalar source terms appear in closed form. The remaining unclosed terms in this equation are modeled. A system of stochastic differential equations (SDEs) which yields statistically equivalent results to the modeled VSFDF transport equation is constructed. These SDEs are solved numerically by a Lagrangian Monte Carlo procedure. The consistency of the proposed SDEs and the convergence of the Monte Carlo solution are assessed by comparison with results obtained by an Eulerian LES procedure in which the corresponding transport equations for the first two SGS moments are solved. The unclosed SGS convection, SGS velocity-scalar source, and SGS scalar-scalar source in the Eulerian LES are replaced by corresponding terms from the VSFDF equation. The consistency of the results is then analyzed for the case of a two-dimensional mixing layer.
ORIGAMI Automator Primer. Automated ORIGEN Source Terms and Spent Fuel Storage Pool Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wieselquist, William A.; Thompson, Adam B.; Bowman, Stephen M.
2016-04-01
Source terms and spent nuclear fuel (SNF) storage pool decay heat load analyses for operating nuclear power plants require a large number of Oak Ridge Isotope Generation and Depletion (ORIGEN) calculations. SNF source term calculations also require a significant amount of bookkeeping to track quantities such as core and assembly operating histories, spent fuel pool (SFP) residence times, heavy metal masses, and enrichments. The ORIGEN Assembly Isotopics (ORIGAMI) module in the SCALE code system provides a simple scheme for entering these data. However, given the large scope of the analysis, extensive scripting is necessary to convert formats and process data to create thousands of ORIGAMI input files (one per assembly) and to process the results into formats readily usable by follow-on analysis tools. This primer describes a project within the SCALE Fulcrum graphical user interface (GUI) called ORIGAMI Automator that was developed to automate the scripting and bookkeeping in large-scale source term analyses. The ORIGAMI Automator enables the analyst to (1) easily create, view, and edit the reactor site and assembly information, (2) automatically create and run ORIGAMI inputs, and (3) analyze the results from ORIGAMI. ORIGAMI Automator uses the standard ORIGEN binary concentrations files produced by ORIGAMI, with concentrations available at all time points in each assembly's life. The GUI plots results such as mass, concentration, activity, and decay heat using a powerful new ORIGEN Post-Processing Utility for SCALE (OPUS) GUI component. This document includes a description and user guide for the GUI, a step-by-step tutorial for a simplified scenario, and appendices that document the file structures used.
Source and identification of heavy ions in the equatorial F layer.
NASA Technical Reports Server (NTRS)
Hanson, W. B.; Sterling, D. L.; Woodman, R. F.
1972-01-01
Further evidence is presented to show that the interpretation of some Ogo 6 retarding potential analyzer (RPA) results in terms of ambient Fe+ ions is correct. The Fe+ ions are observed only within dip latitudes of plus or minus 30 deg, and the reason for this latitudinal specificity is discussed in terms of a low-altitude source region and F region diffusion and electrodynamic drift. It is shown that the polarization field associated with the equatorial electrojet will raise ions to 160 km out of a chemical source region below 100 km but it will do so only in a narrow region centered on the dip equator. Subsequent vertical ExB drift, coupled with motions along the magnetic fields, can move the ions to greater heights and greater latitudes. There should be a resultant fountain of metallic ions rising near the equator that subsequently descends back to the E and D layers at tropical latitudes.
NASA Astrophysics Data System (ADS)
Smith, R. A.; Moore, R. B.; Shanley, J. B.; Miller, E. K.; Kamman, N. C.; Nacci, D.
2009-12-01
Mercury (Hg) concentrations in fish and aquatic wildlife are complex functions of atmospheric Hg deposition rate, terrestrial and aquatic watershed characteristics that influence Hg methylation and export, and food chain characteristics determining Hg bioaccumulation. Because of the complexity and incomplete understanding of these processes, regional-scale models of fish tissue Hg concentration are necessarily empirical in nature, typically constructed through regression analysis of fish tissue Hg concentration data from many sampling locations on a set of potential explanatory variables. Unless the data sets are unusually long and show clear time trends, the empirical basis for model building must be based solely on spatial correlation. Predictive regional scale models are highly useful for improving understanding of the relevant biogeochemical processes, as well as for practical fish and wildlife management and human health protection. Mechanistically, the logical arrangement of explanatory variables is to multiply each of the individual Hg source terms (e.g. dry, wet, and gaseous deposition rates, and residual watershed Hg) for a given fish sampling location by source-specific terms pertaining to methylation, watershed transport, and biological uptake for that location (e.g. SO4 availability, hill slope, lake size). This mathematical form has the desirable property that predicted tissue concentration will approach zero as all individual source terms approach zero. One complication with this form, however, is that it is inconsistent with the standard linear multiple regression equation in which all terms (including those for sources and physical conditions) are additive. 
An important practical disadvantage of a model in which the Hg source terms are additive (rather than multiplicative) with their modifying factors is that predicted concentration is not zero when all sources are zero, making it unreliable for predicting the effects of large future reductions in Hg deposition. In this paper we compare the results of using several different linear and non-linear models in an analysis of watershed and fish Hg data for 450 New England lakes. The differences in model results pertain to both their utility in interpreting methylation and export processes as well as in fisheries management.
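The distinction between the additive and multiplicative regression forms can be made concrete with hypothetical coefficients (all values below are invented for illustration, not fitted to the New England lakes data): only the multiplicative form predicts zero fish tissue concentration when all Hg source terms vanish.

```python
# Hypothetical source coefficients and modifying factors (methylation,
# watershed transport, uptake) -- illustration only, not fitted values.
b = {"wet": 0.8, "dry": 0.5, "watershed": 1.2}     # per-source coefficients
mod = {"wet": 1.3, "dry": 0.9, "watershed": 1.1}   # per-source modifying factors

def additive(src):
    """Standard linear regression form: sources and site terms all additive.
    The intercept and the hypothetical site term (e.g., hill slope) remain
    even when every Hg source is zero."""
    intercept, slope_term = 0.05, 0.02 * 7.0
    return intercept + sum(b[k] * src[k] for k in src) + slope_term

def multiplicative(src):
    """Each source term multiplied by its own modifying factor, then summed:
    the prediction necessarily goes to zero as all sources go to zero."""
    return sum(b[k] * mod[k] * src[k] for k in src)

zero = {"wet": 0.0, "dry": 0.0, "watershed": 0.0}
```

This is exactly the practical disadvantage described above: extrapolating the additive model to a large deposition reduction leaves a spurious nonzero floor, while the multiplicative model behaves physically at the zero-source limit.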
Comparing the contributions of ionospheric outflow and high-altitude production to O+ loss at Mars
NASA Astrophysics Data System (ADS)
Liemohn, Michael; Curry, Shannon; Fang, Xiaohua; Johnson, Blake; Fraenz, Markus; Ma, Yingjuan
2013-04-01
The Mars total O+ escape rate is highly dependent on both the ionospheric and high-altitude source terms. Because of their different source locations, they appear in velocity space distributions as distinct populations. The Mars Test Particle model is used (with background parameters from the BATS-R-US magnetohydrodynamic code) to simulate the transport of ions in the near-Mars space environment. Because it is a collisionless model, the MTP's inner boundary is placed at 300 km altitude for this study. The MHD values at this altitude are used to define an ionospheric outflow source of ions for the MTP. The resulting loss distributions (in both real and velocity space) from this ionospheric source term are compared against those from high-altitude ionization mechanisms, in particular photoionization, charge exchange, and electron impact ionization, each of which have their own (albeit overlapping) source regions. In subsequent simulations, the MHD values defining the ionospheric outflow are systematically varied to parametrically explore possible ionospheric outflow scenarios. For the nominal MHD ionospheric outflow settings, this source contributes only 10% to the total O+ loss rate, nearly all via the central tail region. There is very little dependence of this percentage on the initial temperature, but a change in the initial density or bulk velocity directly alters this loss through the central tail. However, a density or bulk velocity increase of a factor of 10 makes the ionospheric outflow loss comparable in magnitude to the loss from the combined high-altitude sources. The spatial and velocity space distributions of escaping O+ are examined and compared for the various source terms, identifying features specific to each ion source mechanism. These results are applied to a specific Mars Express orbit and used to interpret high-altitude observations from the ion mass analyzer onboard MEX.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carle, Steven F.
2011-05-04
This report describes the development, processes, and results of a hydrologic source term (HST) model for the CLEARWATER (U12q) and WINESKIN (U12r) tests located on Rainier Mesa, Nevada National Security Site, Nevada (Figure 1.1). Of the 61 underground tests (involving 62 unique detonations) conducted on Rainier Mesa (Area 12) between 1957 and 1992 (USDOE, 2015), the CLEARWATER and WINESKIN tests present many unique features that warrant a separate HST modeling effort from other Rainier Mesa tests.
An Exact Form of Lilley's Equation with a Velocity Quadrupole/Temperature Dipole Source Term
NASA Technical Reports Server (NTRS)
Goldstein, Marvin E.
2001-01-01
There have been several attempts to introduce approximations into the exact form of Lilley's equation in order to express the source term as the sum of a quadrupole whose strength is quadratic in the fluctuating velocities and a dipole whose strength is proportional to the temperature fluctuations. The purpose of this note is to show that it is possible to choose the dependent (i.e., the pressure) variable so that this type of result can be derived directly from the Euler equations without introducing any additional approximations.
77 FR 19740 - Water Sources for Long-Term Recirculation Cooling Following a Loss-of-Coolant Accident
Federal Register 2010, 2011, 2012, 2013, 2014
2012-04-02
... NUCLEAR REGULATORY COMMISSION [NRC-2010-0249] Water Sources for Long-Term Recirculation Cooling... Regulatory Guide (RG) 1.82, ``Water Sources for Long-Term Recirculation Cooling Following a Loss-of-Coolant... regarding the sumps and suppression pools that provide water sources for emergency core cooling, containment...
Classification of light sources and their interaction with active and passive environments
NASA Astrophysics Data System (ADS)
El-Dardiry, Ramy G. S.; Faez, Sanli; Lagendijk, Ad
2011-03-01
Emission from a molecular light source depends on its optical and chemical environment, and this dependence differs between types of sources. We present a general classification in terms of constant-amplitude and constant-power sources. Using this classification, we describe the response to both changes in the local density of states and stimulated emission. The unforeseen consequences of this classification are illustrated for photonic studies by random laser experiments, which agree well with the theory we develop. Our results call for a revision of studies on sources in complex media.
Snow, Mathew S.; Snyder, Darin C.; Delmore, James E.
2016-01-18
Source term attribution of environmental contamination following the Fukushima Daiichi Nuclear Power Plant (FDNPP) disaster is complicated by a large number of possible similar emission source terms (e.g. FDNPP reactor cores 1–3 and spent fuel ponds 1–4). Cesium isotopic analyses can be utilized to discriminate between environmental contamination from different FDNPP source terms and, if samples are sufficiently temporally resolved, potentially provide insights into the extent of reactor core damage at a given time. Rice, soil, mushroom, and soybean samples taken 100–250 km from the FDNPP site were dissolved using microwave digestion. Radiocesium was extracted and purified using two sequential ammonium molybdophosphate-polyacrylonitrile columns, following which 135Cs/137Cs isotope ratios were measured using thermal ionization mass spectrometry (TIMS). Results were compared with data reported previously from locations to the northwest of FDNPP and 30 km to the south of FDNPP. 135Cs/137Cs isotope ratios from samples 100–250 km to the southwest of the FDNPP site show a consistent value of 0.376 ± 0.008. 135Cs/137Cs versus 134Cs/137Cs correlation plots suggest that radiocesium to the southwest is derived from a mixture of FDNPP reactor cores 1, 2, and 3. Conclusions from the cesium isotopic data are in agreement with those derived independently based upon the event chronology combined with meteorological conditions at the time of the disaster. In conclusion, cesium isotopic analyses provide a powerful tool for source term discrimination of environmental radiocesium contamination at the FDNPP site. For higher precision source term attribution and forensic determination of the FDNPP core conditions based upon cesium, analyses of a larger number of samples from locations to the north and south of the FDNPP site (particularly time-resolved air filter samples) are needed. Published in 2016. This article is a U.S. Government work and is in the public domain in the USA.
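The mixing logic behind the isotope-ratio correlation plots can be sketched as a two-endmember balance; the endmember ratios below are illustrative placeholders, not measured core inventories:

```python
# Two-endmember mixing: the 135Cs/137Cs ratio of a mixture is the
# 137Cs-abundance-weighted combination of the endmember ratios, not a
# simple average of the ratios themselves.
def mixed_ratio(f_a, ratio_a, ratio_b):
    """135Cs/137Cs of a mixture, where f_a is the fraction of total
    137Cs contributed by source A (hypothetical endmember ratios)."""
    return f_a * ratio_a + (1.0 - f_a) * ratio_b

print(mixed_ratio(0.5, 0.35, 0.40))  # equal 137Cs contributions from A and B
```

On a 135Cs/137Cs versus 134Cs/137Cs plot, such mixtures fall on a line between the endmembers, which is what allows attribution to a combination of reactor cores.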
Ancient Glass: A Literature Search and its Role in Waste Management
DOE Office of Scientific and Technical Information (OSTI.GOV)
Strachan, Denis M.; Pierce, Eric M.
2010-07-01
When developing a performance assessment model for the long-term disposal of immobilized low-activity waste (ILAW) glass, it is desirable to determine the durability of glass forms over very long periods of time. However, testing is limited to short time spans, so experiments are performed under conditions that accelerate the key geochemical processes that control weathering. Verification that the models currently being used can reliably calculate the long-term behavior of ILAW glass is a key component of the overall PA strategy. Therefore, Pacific Northwest National Laboratory was contracted by Washington River Protection Solutions, LLC to evaluate alternative strategies that can be used for PA source term model validation. One viable alternative strategy is the use of independent experimental data from archaeological studies of ancient or natural glass reported in the literature. These materials represent potential independent experiments dating back approximately 3600 years, or 1600 before the current era (bce), in the case of ancient glass, and 10^6 years or more in the case of natural glass. The results of this literature review suggest that additional experimental data may be needed before the results from archaeological studies can be used as a tool for model validation of glass weathering and, more specifically, disposal facility performance. This is largely because none of the existing data sets contains all of the information required to conduct PA source term calculations. For example, in many cases the sediment surrounding the glass was not collected and analyzed, so the data required to compare against computer simulations of concentration flux are unavailable. This type of information is important for understanding the element release profile from the glass to the surrounding environment and provides a metric that can be used to calibrate source term models.
Although useful, the available literature sources do not contain the information required to simulate the long-term performance of nuclear waste glasses in near-surface or deep geologic repositories. The information that will be required includes (1) experimental measurements to quantify the model parameters, (2) detailed analyses of altered glass samples, and (3) detailed analyses of the sediment surrounding the ancient glass samples.
Performance evaluation of WAVEWATCH III model in the Persian Gulf using different wind resources
NASA Astrophysics Data System (ADS)
Kazeminezhad, Mohammad Hossein; Siadatmousavi, Seyed Mostafa
2017-07-01
The third-generation wave model WAVEWATCH III was employed to simulate bulk wave parameters in the Persian Gulf using three different wind sources: ERA-Interim, CCMP, and GFS-Analysis. Different formulations for the whitecapping term and the energy transfer from wind to waves were used, namely the Tolman and Chalikov (J Phys Oceanogr 26:497-518, 1996), WAM cycle 4 (BJA and WAM4), and Ardhuin et al. (J Phys Oceanogr 40(9):1917-1941, 2010) (TEST405 and TEST451 parameterizations) source term packages. The results of the numerical simulations were compared to altimeter-derived significant wave heights and to measured wave parameters at two stations in the northern part of the Persian Gulf using statistical indicators and the Taylor diagram. Comparison of the bulk wave parameters with measured values showed underestimation of wave height for all wind sources; however, the model performed best when GFS-Analysis wind data were used. In general, when the wind veered from southeast to northwest and wind speed was high during the rotation, the model's underestimation of wave height was severe. Except for the Tolman and Chalikov (J Phys Oceanogr 26:497-518, 1996) source term package, which severely underestimated the bulk wave parameters during stormy conditions, the performances of the other formulations were practically similar. In terms of statistics, however, the Ardhuin et al. (J Phys Oceanogr 40(9):1917-1941, 2010) source terms with the TEST405 parameterization were the most successful formulation for the Persian Gulf when compared to in situ and altimeter-derived observations.
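The statistical indicators and Taylor-diagram quantities used in such model-observation comparisons can be sketched as follows; the wave-height values are illustrative placeholders, not data from the study:

```python
import numpy as np

# Bias, RMSE, correlation, and the Taylor-diagram coordinates (normalised
# standard deviation and centred RMS difference) for model vs observations.
def taylor_stats(model, obs):
    bias = np.mean(model - obs)
    rmse = np.sqrt(np.mean((model - obs) ** 2))
    corr = np.corrcoef(model, obs)[0, 1]
    std_ratio = np.std(model) / np.std(obs)      # radial coordinate
    crmsd = np.sqrt(np.mean(((model - model.mean()) - (obs - obs.mean())) ** 2))
    return bias, rmse, corr, std_ratio, crmsd

obs = np.array([1.2, 0.8, 2.1, 1.5, 0.9])        # significant wave heights (m)
model = np.array([1.0, 0.7, 1.8, 1.4, 0.8])      # a typical underestimating run
stats = taylor_stats(model, obs)
print(f"bias = {stats[0]:.2f} m")                # -> bias = -0.16 m
```

A negative bias corresponds to the systematic wave-height underestimation reported above; on a Taylor diagram, std_ratio and corr fix a point whose distance from the reference is crmsd.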
NASA Astrophysics Data System (ADS)
Marques, G.; Fraga, C. C. S.; Medellin-Azuara, J.
2016-12-01
The expansion and operation of urban water supply systems under growing demands, hydrologic uncertainty, and water scarcity requires a strategic combination of supply sources for reliability, reduced costs, and improved operational flexibility. Designing and operating such a portfolio of water supply sources involves integrating long- and short-term planning to determine what and when to expand, and how much of each supply source to use, accounting for interest rates, economies of scale, and hydrologic variability. This research presents an integrated methodology coupling dynamic programming optimization with quadratic programming to optimize the expansion (long term) and operation (short term) of multiple water supply alternatives. Lagrange multipliers produced by the short-term model provide a signal of the marginal opportunity cost of expansion to the long-term model, in an iterative procedure. A simulation model hosts the water supply infrastructure and hydrologic conditions. Results allow (a) identification of trade-offs between cost and reliability of different expansion paths and water use decisions; (b) evaluation of water transfers between urban supply systems; and (c) evaluation of potential gains from reducing water system losses as a portfolio component. The latter is critical in several developing countries where water supply system losses are high and often neglected in favor of further system expansion.
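The short-term allocation step and its marginal-cost signal can be sketched with a toy two-source problem; all demands, capacities, and cost curves here are illustrative assumptions, not the study's data:

```python
import numpy as np
from scipy.optimize import minimize

# Meet demand D from two sources with convex operating costs, subject to
# current capacities. The multiplier on the demand constraint plays the role
# of the marginal opportunity cost signal passed up to the long-term model.
D = 10.0
cap = [6.0, 8.0]                      # current capacities (illustrative)

def cost(q):
    return 0.5 * q[0] ** 2 + 1.5 * q[1] ** 2   # convex operating costs

res = minimize(cost, x0=[5.0, 5.0], method="SLSQP",
               bounds=[(0.0, cap[0]), (0.0, cap[1])],
               constraints=[{"type": "eq", "fun": lambda q: q[0] + q[1] - D}])
q1, q2 = res.x
# The cheap source hits its capacity; the remainder comes from source 2.
print(round(q1, 3), round(q2, 3))  # -> 6.0 4.0
```

Because source 1's capacity binds, its shadow price is positive: this is exactly the kind of signal that tells the long-term model that expanding source 1 has value.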
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brunett, Acacia J.; Bucknor, Matthew; Grabaskas, David
A vital component of the U.S. reactor licensing process is an integrated safety analysis in which a source term representing the release of radionuclides during normal operation and accident sequences is analyzed. Historically, source term analyses have utilized bounding, deterministic assumptions regarding radionuclide release. However, advancements in technical capabilities and the knowledge state have enabled the development of more realistic and best-estimate retention and release models such that a mechanistic source term assessment can be expected to be a required component of future licensing of advanced reactors. Recently, as part of a Regulatory Technology Development Plan effort for sodium-cooled fast reactors (SFRs), Argonne National Laboratory has investigated the current state of knowledge of potential source terms in an SFR via an extensive review of previous domestic experiments, accidents, and operation. As part of this work, the significant sources and transport processes of radionuclides in an SFR have been identified and characterized. This effort examines all stages of release and source term evolution, beginning with release from the fuel pin and ending with retention in containment. Radionuclide sources considered in this effort include releases originating both in-vessel (e.g. in-core fuel, primary sodium, cover gas cleanup system, etc.) and ex-vessel (e.g. spent fuel storage, handling, and movement). Releases resulting from a primary sodium fire are also considered as a potential source. For each release group, dominant transport phenomena are identified and qualitatively discussed. The key product of this effort was the development of concise, inclusive diagrams that illustrate the release and retention mechanisms at a high level, where unique schematics have been developed for in-vessel, ex-vessel and sodium fire releases.
This review effort has also found that despite the substantial range of phenomena affecting radionuclide release, the current state of knowledge is extensive, and in most areas may be sufficient. Several knowledge gaps were identified, such as uncertainty in release from molten fuel and availability of thermodynamic data for lanthanides and actinides in liquid sodium. However, the overall findings suggest that high retention rates can be expected within the fuel and primary sodium for all radionuclides other than noble gases.
Energy Spectra of Abundant Cosmic-ray Nuclei in Sources, According to the ATIC Experiment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Panov, A. D.; Sokolskaya, N. V.; Zatsepin, V. I., E-mail: panov@dec1.sinp.msu.ru
One of the main results of the ATIC (Advanced Thin Ionization Calorimeter) experiment is a collection of energy spectra of abundant cosmic-ray nuclei: protons, He, C, O, Ne, Mg, Si, and Fe measured in terms of energy per particle in the energy range from 50 GeV to tens of teraelectronvolts. In this paper, the ATIC energy spectra of abundant primary nuclei are back-propagated to the spectra in sources in terms of magnetic rigidity using a leaky-box approximation of three different GALPROP-based diffusion models of propagation that fit the latest B/C data of the AMS-02 experiment. It is shown that the results of a comparison of the slopes of the spectra in sources are weakly model dependent; therefore the differences of spectral indices are reliable data. A regular growth of the steepness of spectra in sources in the range of magnetic rigidity of 50–1350 GV is found for a charge range from helium to iron. This conclusion is statistically reliable with significance better than 3.2 standard deviations. The results are discussed and compared to the data of other modern experiments.
Long Term Temporal and Spectral Evolution of Point Sources in Nearby Elliptical Galaxies
NASA Astrophysics Data System (ADS)
Durmus, D.; Guver, T.; Hudaverdi, M.; Sert, H.; Balman, Solen
2016-06-01
We present the results of an archival study of all the point sources detected in the lines of sight of the elliptical galaxies NGC 4472, NGC 4552, NGC 4649, M32, Maffei 1, NGC 3379, IC 1101, M87, NGC 4477, NGC 4621, and NGC 5128, with both the Chandra and XMM-Newton observatories. Specifically, we studied the temporal and spectral evolution of these point sources over the course of the observations of the galaxies, mostly covering the 2000 - 2015 period. In this poster we present the first results of this study, which allows us to further constrain the X-ray source population in nearby elliptical galaxies and also better understand the nature of individual point sources.
Huang, Weidong; Li, Kun; Wang, Gan; Wang, Yingzhe
2013-11-01
In this article, we present a newly designed inverse umbrella surface aerator and test its performance in driving the flow of an oxidation ditch. Results show that it drives the oxidation ditch better than the original design, with a higher average velocity and a more uniform flow field. We also present a computational fluid dynamics model for predicting the flow field in an oxidation ditch driven by a surface aerator. An improved momentum source term approach to simulating the flow field of an oxidation ditch driven by an inverse umbrella surface aerator was developed and validated through experiments. Four turbulence models were investigated with this approach: the standard k-ε model, the RNG k-ε model, the realizable k-ε model, and the Reynolds stress model, and the predicted data were compared with those calculated with the multiple rotating reference frame (MRF) and sliding mesh (SM) approaches. Results of the momentum source term approach are in good agreement with the experimental data, and its prediction accuracy is better than MRF and close to SM. The momentum source term approach also has lower computational expense, is simpler to preprocess, and is easier to use.
A simple mass-conserved level set method for simulation of multiphase flows
NASA Astrophysics Data System (ADS)
Yuan, H.-Z.; Shu, C.; Wang, Y.; Shu, S.
2018-04-01
In this paper, a modified level set method is proposed for simulation of multiphase flows with large density ratio and high Reynolds number. The present method simply introduces a source or sink term into the level set equation to compensate for mass loss or offset mass increase. The source or sink term is derived analytically by applying the mass conservation principle to the level set equation and the continuity equation of the flow field. Since only a source term is introduced, applying the present method is as simple as the original level set method, but it guarantees overall mass conservation. To validate the present method, the vortex flow problem is first considered. The simulation results are compared with those from the original level set method, which demonstrates that the modified level set method accurately captures the interface while conserving mass. The proposed method is then further validated by simulating the Laplace law, the merging of two bubbles, a bubble rising with high density ratio, and the Rayleigh-Taylor instability at high Reynolds number. Numerical results show that mass is well conserved by the present method.
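The mass-compensation idea can be sketched in one dimension. This is a minimal illustration of the principle (a uniform shift of the level set field chosen to restore the enclosed mass), not the paper's analytically derived source term:

```python
import numpy as np

# 1-D sketch: after numerical drift shrinks the phi < 0 phase, solve for a
# uniform shift s of the level set field so that the enclosed mass is restored.

def heaviside(phi, eps=0.05):
    # smoothed indicator of the phi < 0 phase
    return 0.5 * (1 - np.tanh(phi / eps))

x = np.linspace(0, 1, 401)
dx = x[1] - x[0]
phi0 = np.abs(x - 0.5) - 0.2           # interfaces at x = 0.3 and x = 0.7
m0 = np.sum(heaviside(phi0)) * dx      # initial mass of the phase

phi = phi0 + 0.01                      # mimic mass-losing numerical drift
s = 0.0
for _ in range(20):                    # Newton iteration on mass(phi - s) = m0
    m = np.sum(heaviside(phi - s)) * dx
    dm = np.sum(heaviside(phi - s - 1e-6) - heaviside(phi - s)) / 1e-6 * dx
    s += (m0 - m) / dm
phi -= s                               # corrected field conserves mass again
print(round(s, 6))  # -> 0.01 (exactly the drift we introduced)
```

In the paper the compensation is instead a source term in the level set transport equation itself, but the goal is the same: force the enclosed mass back to its conserved value.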
Mohammadi, A; Hassanzadeh, M; Gharib, M
2016-02-01
In this study, shielding calculations and a criticality safety analysis were carried out for interim storage at general material testing reactor (MTR) research reactors and the relevant transportation cask. Three major tasks were considered in the process: source term, shielding, and criticality calculations. The Monte Carlo transport code MCNP5 was used for the shielding calculation and criticality safety analysis, and the ORIGEN2.1 code for the source term calculation. According to the results obtained, a cylindrical cask with body, top, and bottom thicknesses of 18, 13, and 13 cm, respectively, was accepted as the dual-purpose cask. Furthermore, it is shown that the total dose rates are below the normal transport criteria specified in the standards. Copyright © 2015 Elsevier Ltd. All rights reserved.
Source Credibility in Tobacco Control Messaging
Schmidt, Allison M.; Ranney, Leah M.; Pepper, Jessica K.; Goldstein, Adam O.
2016-01-01
Objectives Perceived credibility of a message’s source can affect persuasion. This paper reviews how beliefs about the source of tobacco control messages may encourage attitude and behavior change. Methods We conducted a series of searches of the peer-reviewed literature using terms from communication and public health fields. We reviewed research on source credibility, its underlying concepts, and its relation to the persuasiveness of tobacco control messages. Results We recommend an agenda for future research to bridge the gaps between communication literature on source credibility and tobacco control research. Our recommendations are to study the impact of source credibility on persuasion with long-term behavior change outcomes, in different populations and demographic groups, by developing new credibility measures that are topic- and organization-specific, by measuring how credibility operates across media platforms, and by identifying factors that enhance credibility and persuasion. Conclusions This manuscript reviews the state of research on source credibility and identifies gaps that are maximally relevant to tobacco control communication. Knowing first whether a source is perceived as credible, and second, how to enhance perceived credibility, can inform the development of future tobacco control campaigns and regulatory communications. PMID:27525298
26 CFR 1.737-3 - Basis adjustments; Recovery rules.
Code of Federal Regulations, 2012 CFR
2012-04-01
... Properties A1, A2, and A3 is long-term, U.S.-source capital gain or loss. The character of gain on Property A4 is long-term, foreign-source capital gain. B contributes Property B, nondepreciable real property...-term, foreign-source capital gain ($3,000 total gain under section 737 × $2,000 net long-term, foreign...
26 CFR 1.737-3 - Basis adjustments; Recovery rules.
Code of Federal Regulations, 2013 CFR
2013-04-01
... Properties A1, A2, and A3 is long-term, U.S.-source capital gain or loss. The character of gain on Property A4 is long-term, foreign-source capital gain. B contributes Property B, nondepreciable real property...-term, foreign-source capital gain ($3,000 total gain under section 737 × $2,000 net long-term, foreign...
26 CFR 1.737-3 - Basis adjustments; Recovery rules.
Code of Federal Regulations, 2011 CFR
2011-04-01
... Properties A1, A2, and A3 is long-term, U.S.-source capital gain or loss. The character of gain on Property A4 is long-term, foreign-source capital gain. B contributes Property B, nondepreciable real property...-term, foreign-source capital gain ($3,000 total gain under section 737 × $2,000 net long-term, foreign...
26 CFR 1.737-3 - Basis adjustments; Recovery rules.
Code of Federal Regulations, 2014 CFR
2014-04-01
... Properties A1, A2, and A3 is long-term, U.S.-source capital gain or loss. The character of gain on Property A4 is long-term, foreign-source capital gain. B contributes Property B, nondepreciable real property...-term, foreign-source capital gain ($3,000 total gain under section 737 × $2,000 net long-term, foreign...
26 CFR 1.737-3 - Basis adjustments; Recovery rules.
Code of Federal Regulations, 2010 CFR
2010-04-01
... Properties A1, A2, and A3 is long-term, U.S.-source capital gain or loss. The character of gain on Property A4 is long-term, foreign-source capital gain. B contributes Property B, nondepreciable real property...-term, foreign-source capital gain ($3,000 total gain under section 737 × $2,000 net long-term, foreign...
Further development of a global pollution model for CO, CH4, and CH2O
NASA Technical Reports Server (NTRS)
Peters, L. K.
1975-01-01
Global tropospheric pollution models are developed that describe the transport and the physical and chemical processes occurring between the principal sources and sinks of CH4 and CO. Results are given of long-term static chemical kinetic computer simulations and preliminary short-term dynamic simulations.
NuSTAR observations of M31: globular cluster candidates found to be Z sources
NASA Astrophysics Data System (ADS)
Maccarone, Thomas J.; Yukita, Mihoko; Hornschemeier, Ann E.; Lehmer, Bret; Antoniou, Vallia; Ptak, Andrew; Wik, Daniel R.; Zezas, Andreas; Boyd, Patricia T.; Kennea, Jamie A.; Page, Kim; Eracleous, Michael; Williams, Benjamin F.; NuSTAR mission Team
2016-01-01
We present the results of Swift + NuSTAR observations of four bright globular cluster sources in M31. Three of these had previously been suggested to be black holes on the basis of their spectra. We show that all are well fit by models indicating a Z-source nature. We also discuss some reasons why the long-term light curves of these objects indicate that they are more likely to be neutron stars, and discuss the discrepancy between the empirical understanding of persistent sources and theoretical predictions.
Bayesian estimation of a source term of radiation release with approximately known nuclide ratios
NASA Astrophysics Data System (ADS)
Tichý, Ondřej; Šmídl, Václav; Hofman, Radek
2016-04-01
We are concerned with estimation of a source term in the case of an accidental release from a known location, e.g. a power plant. Usually, the source term of an accidental release of radiation comprises a mixture of nuclides. The gamma dose rate measurements do not provide direct information on the source term composition. However, physical properties of the respective nuclides (deposition properties, decay half-life) can be used when uncertain information on nuclide ratios is available, e.g. from a known reactor inventory. The proposed method is based on a linear inverse model where the observation vector y arises as a linear combination y = Mx of a source-receptor-sensitivity (SRS) matrix M and the source term x. The task is to estimate the unknown source term x. The problem is ill-conditioned, and regularization is needed to obtain a reasonable solution. In this contribution, we assume that the nuclide ratios of the release are known with some degree of uncertainty. This knowledge is used to form the prior covariance matrix of the source term x. Due to uncertainty in the ratios, the diagonal elements of the covariance matrix are considered unknown. Positivity of the source term estimate is guaranteed by using a multivariate truncated Gaussian distribution. Following the Bayesian approach, we estimate all parameters of the model from the data, so that y, M, and the known ratios are the only inputs of the method. Since the inference of the model is intractable, we follow the variational Bayes method, yielding an iterative algorithm for estimation of all model parameters. The performance of the method is studied on a simulated 6-hour power plant release in which 3 nuclides are released and 2 nuclide ratios are approximately known. A comparison with the method with unknown nuclide ratios is given to demonstrate the usefulness of the proposed approach.
This research is supported by EEA/Norwegian Financial Mechanism under project MSMT-28477/2014 Source-Term Determination of Radionuclide Releases by Inverse Atmospheric Dispersion Modelling (STRADI).
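The linear inverse formulation y = Mx above can be illustrated with a minimal sketch. The projected-gradient solver below is a stand-in for the authors' Variational Bayes algorithm, not a reproduction of it: it only shows how a per-element prior variance (derived from approximately known nuclide ratios) acts as a regularizer and how positivity can be enforced by projection. All names and parameter choices are illustrative.

```python
import numpy as np

def estimate_source_term(M, y, prior_var, n_iter=500):
    """Estimate a nonnegative source term x from observations y = M x.

    Sketch only: the prior variance of each source-term element acts as
    a Tikhonov-style penalty (tighter prior -> stronger penalty), and
    positivity is enforced by projection onto x >= 0. The paper's
    actual method is an iterative Variational Bayes algorithm."""
    M = np.asarray(M, dtype=float)
    y = np.asarray(y, dtype=float)
    reg = 1.0 / np.asarray(prior_var, dtype=float)
    x = np.zeros(M.shape[1])
    step = 1.0 / (np.linalg.norm(M, 2) ** 2 + reg.max())  # safe step size
    for _ in range(n_iter):
        grad = M.T @ (M @ x - y) + reg * x
        x = np.maximum(x - step * grad, 0.0)  # truncated-Gaussian-style positivity
    return x
```

With a well-determined toy SRS matrix the solver recovers the source term up to the small bias introduced by the prior penalty.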
NASA Astrophysics Data System (ADS)
Delpueyo, D.; Balandraud, X.; Grédiac, M.
2013-09-01
The aim of this paper is to present a post-processing technique based on a derivative Gaussian filter to reconstruct heat source fields from temperature fields measured by infrared thermography. Heat sources can be deduced from temperature variations thanks to the heat diffusion equation. Filtering and differentiating are key issues which are closely related here, because the temperature fields being processed are unavoidably noisy. We focus here only on the diffusion term because it is the most difficult term to estimate in the procedure, the reason being that it involves spatial second derivatives (a Laplacian for isotropic materials). This quantity can be reasonably estimated using a convolution of the temperature variation fields with second derivatives of a Gaussian function. The study is first based on synthetic temperature variation fields corrupted by added noise. The filter is optimised to reconstruct the heat source fields as accurately as possible. The influence of both the dimension and the level of a localised heat source is discussed. The results obtained are also compared with another type of processing based on an averaging filter. The second part of this study presents an application to experimental temperature fields measured with an infrared camera on a thin aluminium-alloy plate. Heat sources are generated with an electric heating patch glued on the specimen surface. Heat source fields reconstructed from measured temperature fields are compared with the imposed heat sources. The results obtained illustrate the relevance of the derivative Gaussian filter for reliably extracting heat sources from noisy temperature fields in the experimental thermomechanics of materials.
Preparing aircraft propulsion for a new era in energy and the environment
NASA Technical Reports Server (NTRS)
Stewart, W. L.; Nored, D. L.; Grobman, J. S.; Feiler, C. E.; Petrash, D. A.
1980-01-01
Improving fuel efficiency, new sources of jet fuel, and noise and emission control are subjects of NASA's aeronautics program. Projects aimed at attaining a 5% fuel savings for existing engines and a 13-22% savings for the next generation of turbofan engines using advanced components, and establishing a basis for turboprop-powered commercial air transports with 30-40% savings over conventional turbofan aircraft at comparable speeds and altitudes, are discussed. Fuel sources are considered in terms of reduced hydrogen and higher aromatic contents and resultant higher liner temperatures, and attention is given to lean burning, improved fuel atomization, higher freezing-point fuel, and deriving jet fuel from shale oil or coal. Noise sources including the fan, turbine, combustion process, and flow over internal struts, and attenuation using acoustic treatment, are discussed, while near-term reduction of polluting gaseous emissions at both low and high power, and far-term defining of the minimum gaseous-pollutant levels possible from turbine engines are also under study.
10 CFR 50.67 - Accident source term.
Code of Federal Regulations, 2014 CFR
2014-01-01
... occupancy of the control room under accident conditions without personnel receiving radiation exposures in... 10 Energy 1 2014-01-01 2014-01-01 false Accident source term. 50.67 Section 50.67 Energy NUCLEAR... Conditions of Licenses and Construction Permits § 50.67 Accident source term. (a) Applicability. The...
10 CFR 50.67 - Accident source term.
Code of Federal Regulations, 2012 CFR
2012-01-01
... occupancy of the control room under accident conditions without personnel receiving radiation exposures in... 10 Energy 1 2012-01-01 2012-01-01 false Accident source term. 50.67 Section 50.67 Energy NUCLEAR... Conditions of Licenses and Construction Permits § 50.67 Accident source term. (a) Applicability. The...
10 CFR 50.67 - Accident source term.
Code of Federal Regulations, 2010 CFR
2010-01-01
... occupancy of the control room under accident conditions without personnel receiving radiation exposures in... 10 Energy 1 2010-01-01 2010-01-01 false Accident source term. 50.67 Section 50.67 Energy NUCLEAR... Conditions of Licenses and Construction Permits § 50.67 Accident source term. (a) Applicability. The...
10 CFR 50.67 - Accident source term.
Code of Federal Regulations, 2013 CFR
2013-01-01
... occupancy of the control room under accident conditions without personnel receiving radiation exposures in... 10 Energy 1 2013-01-01 2013-01-01 false Accident source term. 50.67 Section 50.67 Energy NUCLEAR... Conditions of Licenses and Construction Permits § 50.67 Accident source term. (a) Applicability. The...
10 CFR 50.67 - Accident source term.
Code of Federal Regulations, 2011 CFR
2011-01-01
... occupancy of the control room under accident conditions without personnel receiving radiation exposures in... 10 Energy 1 2011-01-01 2011-01-01 false Accident source term. 50.67 Section 50.67 Energy NUCLEAR... Conditions of Licenses and Construction Permits § 50.67 Accident source term. (a) Applicability. The...
Microstructure of the combustion zone: Thin-binder AP-polymer sandwiches
NASA Technical Reports Server (NTRS)
Price, E. W.; Panyam, R. R.; Sigman, R. K.
1980-01-01
Experimental results are summarized for systematic quench-burning tests on ammonium perchlorate-HC binder sandwiches with binder thicknesses in the range 10 - 150 microns. Tests included three binders (polysulfide, polybutadiene-acrylonitrile, and hydroxy terminated polybutadiene), and pressures from 1.4 to 14 MPa. In addition, deflagration limits were determined in terms of binder thickness and pressure. Results are discussed in terms of a qualitative theory of sandwich burning consolidated from various sources. Some aspects of the observed results are explained only speculatively.
Rotational and X-ray luminosity evolution of high-B radio pulsars
NASA Astrophysics Data System (ADS)
Benli, Onur; Ertan, Ünal
2018-05-01
In continuation of our earlier work on the long-term evolution of the so-called high-B radio pulsars (HBRPs) with measured braking indices, we have investigated the long-term evolution of the remaining five HBRPs for which braking indices have not been measured yet. This completes our source-by-source analyses of HBRPs in the fallback disc model that was also applied earlier to anomalous X-ray pulsars (AXPs), soft gamma repeaters (SGRs), and dim isolated neutron stars (XDINs). Our results show that the X-ray luminosities and the rotational properties of these rather different neutron star populations can be acquired by neutron stars with fallback discs as a result of differences in their initial conditions, namely the initial disc mass, initial period and the dipole field strength. For the five HBRPs, unlike for AXPs, SGRs and XDINs, our results do not constrain the dipole field strengths of the sources. We obtain evolutionary paths leading to the properties of HBRPs in the propeller phase with dipole fields sufficiently strong to produce pulsed radio emission.
An Improved Elastic and Nonelastic Neutron Transport Algorithm for Space Radiation
NASA Technical Reports Server (NTRS)
Clowdsley, Martha S.; Wilson, John W.; Heinbockel, John H.; Tripathi, R. K.; Singleterry, Robert C., Jr.; Shinn, Judy L.
2000-01-01
A neutron transport algorithm including both elastic and nonelastic particle interaction processes for use in space radiation protection for arbitrary shield material is developed. The algorithm is based upon a multiple energy grouping and analysis of the straight-ahead Boltzmann equation by using a mean value theorem for integrals. The algorithm is then coupled to the Langley HZETRN code through a bidirectional neutron evaporation source term. Evaluation of the neutron fluence generated by the solar particle event of February 23, 1956, for an aluminum water shield-target configuration is then compared with MCNPX and LAHET Monte Carlo calculations for the same shield-target configuration. With the Monte Carlo calculation as a benchmark, the algorithm developed in this paper showed a great improvement in results over the unmodified HZETRN solution. In addition, a high-energy bidirectional neutron source based on a formula by Ranft showed even further improvement of the fluence results over previous results near the front of the water target where diffusion out the front surface is important. Effects of improved interaction cross sections are modest compared with the addition of the high-energy bidirectional source terms.
National Air Toxic Assessments (NATA) Results
The National Air Toxics Assessment was conducted by EPA in 2002 to assess air toxics emissions in order to identify and prioritize air toxics, emission source types and locations which are of greatest potential concern in terms of contributing to population risk. This data source provides downloadable information on emissions at the state, county and census tract level.
Washburn, Richard A.; Szabo, Amanda N.; Lambourne, Kate; Willis, Erik A.; Ptomey, Lauren T.; Honas, Jeffery J.; Herrmann, Stephen D.; Donnelly, Joseph E.
2014-01-01
Background Differences in biological changes from weight loss by energy restriction and/or exercise may be associated with differences in long-term weight loss/regain. Objective To assess the effect of weight loss method on long-term changes in weight, body composition and chronic disease risk factors. Data Sources PubMed and Embase were searched (January 1990-October 2013) for studies with data on the effect of energy restriction and/or exercise (aerobic and resistance) on long-term weight loss. Twenty articles were included in this review. Study Eligibility Criteria Primary source, peer-reviewed randomized trials published in English with an active weight loss period of >6 months, or active weight loss with a follow-up period of any duration, conducted in overweight or obese adults were included. Study Appraisal and Synthesis Methods Considerable heterogeneity across trials existed for important study parameters; therefore, a meta-analysis was considered inappropriate. Results were synthesized and grouped by comparisons (e.g. diet vs. aerobic exercise, diet vs. diet + aerobic exercise, etc.) and study design (long-term or weight loss/follow-up). Results Forty percent of trials reported significantly greater long-term weight loss with diet compared with aerobic exercise, while results for differences in weight regain were inconclusive. Diet+aerobic exercise resulted in significantly greater weight loss than diet alone in 50% of trials. However, weight regain (∼55% of loss) was similar in diet and diet+aerobic exercise groups. Fat-free mass tended to be preserved when interventions included exercise. PMID:25333384
A new traffic model with a lane-changing viscosity term
NASA Astrophysics Data System (ADS)
Ko, Hung-Tang; Liu, Xiao-He; Guo, Ming-Min; Wu, Zheng
2015-09-01
In this paper, a new continuum traffic flow model is proposed, with a lane-changing source term in the continuity equation and a lane-changing viscosity term in the acceleration equation. Based on previous literature, the source term addresses the impact of the speed difference and density difference between adjacent lanes, which provides better precision for free lane-changing simulation; the viscosity term turns lane-changing behavior into a “force” that may influence the speed distribution. Using a flux-splitting scheme for the model discretization, two cases are investigated numerically. The case under a homogeneous initial condition shows that the numerical results from our model agree well with the analytical ones; the case with a small initial disturbance shows that our model can simulate the evolution of perturbation, including propagation, dissipation, the cluster effect and the stop-and-go phenomenon. Project supported by the National Natural Science Foundation of China (Grant Nos. 11002035 and 11372147) and Hui-Chun Chin and Tsung-Dao Lee Chinese Undergraduate Research Endowment (Grant No. CURE 14024).
He, Xiaowei; Liang, Jimin; Wang, Xiaorui; Yu, Jingjing; Qu, Xiaochao; Wang, Xiaodong; Hou, Yanbin; Chen, Duofang; Liu, Fang; Tian, Jie
2010-11-22
In this paper, we present an incomplete variables truncated conjugate gradient (IVTCG) method for bioluminescence tomography (BLT). Considering the sparse characteristic of the light source and insufficient surface measurement in BLT scenarios, we combine a sparseness-inducing (ℓ1 norm) regularization term with a quadratic error term in the IVTCG-based framework for solving the inverse problem. By limiting the number of variables updated at each iteration and employing a variable splitting strategy to find the search direction more efficiently, the method obtains fast and stable source reconstruction, even without a priori information on the permissible source region and multispectral measurements. Numerical experiments on a mouse atlas validate the effectiveness of the method. In vivo mouse experimental results further indicate its potential for a practical BLT system.
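The combination of an ℓ1 sparsity term with a quadratic error term can be sketched with plain iterative soft-thresholding (ISTA). This is not the authors' IVTCG solver, only an illustration of the objective it minimizes; all names are illustrative.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of the l1 norm: shrink toward zero by t."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sparse_reconstruct(A, y, lam=0.1, n_iter=300):
    """Minimize 0.5 * ||A x - y||^2 + lam * ||x||_1 by ISTA.

    Gradient step on the quadratic error term, then soft-thresholding
    for the sparsity term -- the same objective structure as in the
    abstract, solved by a much simpler method than IVTCG."""
    A = np.asarray(A, dtype=float)
    y = np.asarray(y, dtype=float)
    x = np.zeros(A.shape[1])
    L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the gradient
    for _ in range(n_iter):
        x = soft_threshold(x - (A.T @ (A @ x - y)) / L, lam / L)
    return x
```

For an identity forward operator the solution reduces to soft-thresholding the data, which makes the sparsifying effect easy to see: small entries are driven exactly to zero.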
The sound of moving bodies. Ph.D. Thesis - Cambridge Univ.
NASA Technical Reports Server (NTRS)
Brentner, Kenneth Steven
1990-01-01
The importance of the quadrupole source term in the Ffowcs Williams and Hawkings (FW-H) equation was addressed. The quadrupole source contains fundamental components of the complete fluid mechanics problem, which are ignored only at the risk of error. The results made it clear that any application of the acoustic analogy should begin with all of the source terms in the FW-H theory. The direct calculation of the acoustic field as part of the complete unsteady fluid mechanics problem using CFD is considered. It was shown that aeroacoustic calculations can indeed be made with CFD codes. The results indicate that the acoustic field is the component of the computation most susceptible to numerical error. Therefore, the ability to measure the damping of acoustic waves is absolutely essential to the development of acoustic computations. Essential groundwork for a new approach to the problem of sound generation by moving bodies is presented. This new computational acoustic approach holds the promise of solving many problems hitherto pushed aside.
NASA Astrophysics Data System (ADS)
Park, Junghyun; Hayward, Chris; Stump, Brian W.
2018-06-01
Ground truth sources in Utah during 2003-2013 are used to assess the contribution of temporal atmospheric conditions to infrasound detection and the predictive capabilities of atmospheric models. Ground truth sources consist of 28 long duration static rocket motor burn tests and 28 impulsive rocket body demolitions. Automated infrasound detections from a hybrid of regional seismometers and infrasound arrays use a combination of short-term time average/long-term time average ratios and spectral analyses. These detections are grouped into station triads using a Delaunay triangulation network and then associated to estimate phase velocity and azimuth to filter signals associated with a particular source location. The resulting range and azimuth distribution from sources to detecting stations varies seasonally and is consistent with predictions based on seasonal atmospheric models. Impulsive signals from rocket body detonations are observed at greater distances (>700 km) than the extended duration signals generated by the rocket burn test (up to 600 km). Infrasound energy attenuation associated with the two source types is quantified as a function of range and azimuth from infrasound amplitude measurements. Ray-tracing results using Ground-to-Space atmospheric specifications are compared to these observations and illustrate the degree to which the time variations in characteristics of the observations can be predicted over a multiple year time period.
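The short-term-average over long-term-average (STA/LTA) ratio used in the automated detections is a standard ingredient that can be sketched directly. The windowing choices and the use of signal energy below are generic conventions, not necessarily those of the study.

```python
def sta_lta(signal, n_sta, n_lta):
    """Classic STA/LTA detection ratio on a 1-D trace.

    For each sample position (once the long window is full), compute the
    mean signal energy over a short trailing window and over a long
    trailing window ending at the same sample; their ratio spikes when a
    transient arrives. Sketch only: real detectors add tapering,
    filtering and trigger/de-trigger logic."""
    e = [s * s for s in signal]  # instantaneous energy
    out = []
    for i in range(n_lta, len(e) + 1):
        sta = sum(e[i - n_sta:i]) / n_sta
        lta = sum(e[i - n_lta:i]) / n_lta
        out.append(sta / max(lta, 1e-12))
    return out
```

On a synthetic trace with a sudden amplitude increase, the ratio sits near 1 during the quiet background and rises sharply at the onset.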
Arnold, Anne; Sajitz-Hermstein, Max; Nikoloski, Zoran
2015-01-01
Plants as sessile organisms cannot escape their environment and have to adapt to any changes in the availability of sunlight and nutrients. The quantification of synthesis costs of metabolites, in terms of consumed energy, is a prerequisite to understand trade-offs arising from energetic limitations. Here, we examine the energy consumption of amino acid synthesis in Arabidopsis thaliana. To quantify these costs in terms of the energy equivalent ATP, we introduce an improved cost measure based on flux balance analysis and apply it to three state-of-the-art metabolic reconstructions to ensure robust results. We present the first systematic in silico analysis of the effect of nitrogen supply (nitrate/ammonium) on individual amino acid synthesis costs as well as of the effect of photoautotrophic and heterotrophic growth conditions, integrating day/night-specific regulation. Our results identify nitrogen supply as a key determinant of amino acid costs, in agreement with experimental evidence. In addition, the association of the determined costs with experimentally observed growth patterns suggests that metabolite synthesis costs are involved in shaping regulation of plant growth. Finally, we find that simultaneous uptake of both nitrogen sources can lead to efficient utilization of the energy source, which may be the result of evolutionary optimization. PMID:25706533
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pawloski, G A; Tompson, A F B; Carle, S F
The objectives of this report are to develop, summarize, and interpret a series of detailed unclassified simulations that forecast the nature and extent of radionuclide release and near-field migration in groundwater away from the CHESHIRE underground nuclear test at Pahute Mesa at the NTS over 1000 yrs. Collectively, these results are called the CHESHIRE Hydrologic Source Term (HST). The CHESHIRE underground nuclear test was one of 76 underground nuclear tests that were fired below or within 100 m of the water table between 1965 and 1992 in Areas 19 and 20 of the NTS. These areas now comprise the Pahute Mesa Corrective Action Unit (CAU), for which a separate subregional-scale flow and transport model is being developed by the UGTA Project to forecast the larger-scale migration of radionuclides from underground tests on Pahute Mesa. The current simulations are being developed, on one hand, to more fully understand the complex coupled processes involved in radionuclide migration, with a specific focus on the CHESHIRE test. While remaining unclassified, they are as site specific as possible and involve a level of modeling detail that is commensurate with the most fundamental processes, conservative assumptions, and representative data sets available. However, the simulation results are also being developed so that they may be simplified and interpreted for use as a source term boundary condition at the CHESHIRE location in the Pahute Mesa CAU model. In addition, the processes of simplification and interpretation will provide generalized insight as to how the source term behavior at other tests may be considered or otherwise represented in the Pahute Mesa CAU model.
Sun, Feng-xia; Zhang, Wei-hua; Xu, Ming-gang; Zhang, Wen-ju; Li, Zhao-qiang; Zhang, Jing-ye
2010-11-01
In order to explore the effects of long-term fertilization on the microbiological characteristics of red soil, soil samples were collected from a 19-year long-term experimental field in Qiyang, Hunan, and their microbial biomass carbon (MBC) and nitrogen (MBN) and microbial utilization ratio of carbon sources were analyzed. The results showed that after 19 years of fertilization, the soil MBC and MBN under the application of organic manure and of organic manure plus inorganic fertilizers were 231 and 81 mg x kg(-1) soil, and 148 and 73 mg x kg(-1) soil, respectively, being significantly higher than those under non-fertilization, inorganic fertilization, and inorganic fertilization plus straw incorporation. The ratio of soil MBN to total N under the application of organic manure and of organic manure plus inorganic fertilizers was on average 6.0%, significantly higher than that under non-fertilization and inorganic fertilization. Biolog-ECO analysis showed that the average well color development (AWCD) value was in the order of applying organic manure plus inorganic fertilizers = applying organic manure > non-fertilization > inorganic fertilization = inorganic fertilization plus straw incorporation. Under the application of organic manure or of organic manure plus inorganic fertilizers, the microbial utilization rate of carbon sources, including carbohydrates, carboxylic acids, amino acids, polymers, phenols, and amines, increased; while under inorganic fertilization plus straw incorporation, the utilization rate of polymers was the highest, and that of carbohydrates was the lowest. Our results suggested that long-term application of organic manure could increase red soil MBC, MBN, and microbial utilization rate of carbon sources, improve soil fertility, and maintain better crop productivity.
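The AWCD value reported from Biolog-ECO plates is a simple blank-corrected mean of optical density over the substrate wells. A minimal sketch, assuming the common convention of flooring negative corrected readings at zero (the study's exact computation is not specified in the abstract):

```python
def awcd(well_ods, control_od):
    """Average well colour development for one Biolog-ECO plate reading.

    Mean blank-corrected optical density over the substrate wells;
    corrected values below zero are floored at zero (an assumed, though
    common, convention)."""
    return sum(max(od - control_od, 0.0) for od in well_ods) / len(well_ods)
```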
NASA Technical Reports Server (NTRS)
Hughes, Eric J.; Krotkov, Nickolay; da Silva, Arlindo; Colarco, Peter
2015-01-01
Simulation of volcanic emissions in climate models requires information that describes the eruption of the emissions into the atmosphere. While the total amount of gases and aerosols released from a volcanic eruption can be readily estimated from satellite observations, information about the source parameters, like injection altitude, eruption time, and duration, is often not directly known. The AeroCOM volcanic emissions inventory provides estimates of eruption source parameters and has been used to initialize volcanic emissions in reanalysis projects like MERRA. The AeroCOM volcanic emission inventory provides an eruption's daily SO2 flux and plume-top altitude, yet an eruption can be very short lived, lasting only a few hours, and emit clouds at multiple altitudes. Case studies comparing the satellite-observed dispersal of volcanic SO2 clouds to simulations in MERRA have shown mixed results. Some cases, e.g. Okmok (2008), show good agreement with observations, while for other eruptions, e.g. Sierra Negra (2005), the observed initial SO2 mass is half of that in the simulations. In other cases, e.g. Soufriere Hills (2006), the initial SO2 amount agrees with the observations but the dispersal rates differ considerably. In the aviation hazards community, deriving accurate source terms is crucial for monitoring and short-term forecasting (24-h) of volcanic clouds. Back-trajectory methods have been developed which use satellite observations and transport models to estimate the injection altitude, eruption time, and eruption duration of observed volcanic clouds. These methods can provide eruption timing estimates at a 2-hour temporal resolution and estimate the altitude and depth of a volcanic cloud. To better understand the differences between MERRA simulations and volcanic SO2 observations, back-trajectory methods are used to estimate the source term parameters for a few volcanic eruptions and compared to their corresponding entries in the AeroCOM volcanic emission inventory.
The nature of these mixed results is discussed with respect to the source term estimates.
Piecewise synonyms for enhanced UMLS source terminology integration.
Huang, Kuo-Chuan; Geller, James; Halper, Michael; Cimino, James J
2007-10-11
The UMLS contains more than 100 source vocabularies and is growing via the integration of others. When integrating a new source, the source terms already in the UMLS must first be found. The easiest approach to this is simple string matching. However, string matching usually does not find all concepts that should be found. A new methodology, based on the notion of piecewise synonyms, for enhancing the process of concept discovery in the UMLS is presented. This methodology is supported by first creating a general synonym dictionary based on the UMLS. Each multi-word source term is decomposed into its component words, allowing for the generation of separate synonyms for each word from the general synonym dictionary. The recombination of these synonyms into new terms creates an expanded pool of matching candidates for terms from the source. The methodology is demonstrated with respect to an existing UMLS source. It shows a 34% improvement over simple string matching.
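The recombination of per-word synonyms into an expanded pool of string-match candidates can be sketched as a cross product over word options. The tiny synonym dictionary below is a toy stand-in for the UMLS-derived general synonym dictionary the methodology builds first.

```python
from itertools import product

# Toy general synonym dictionary (illustrative only; the real one is
# derived from UMLS synonymy).
SYNONYMS = {
    "kidney": ["renal"],
    "failure": ["insufficiency"],
}

def piecewise_candidates(term):
    """Expand a multi-word source term into its piecewise-synonym pool.

    Each word is replaced by itself or any of its word-level synonyms;
    the cross product of these options yields the candidate strings to
    match against concepts already in the terminology."""
    options = [[w] + SYNONYMS.get(w, []) for w in term.lower().split()]
    return {" ".join(combo) for combo in product(*options)}
```

For "Kidney Failure" this produces four candidates, including "renal insufficiency", which plain string matching against the original term would never find.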
[Schizophrenia and psychosis on the internet].
Schrank, Beate; Seyringer, Michaela-Elena; Berger, Peter; Katschnig, Heinz; Amering, Michaela
2006-09-01
The internet is an increasingly important source of information for patients concerning their illness. Its growing influence on communication between patients and clinicians has to be borne in mind. The aim of this study is to assess the quality of German-language information on schizophrenia on the internet. Two searches, for the terms schizophrenia and psychosis, were conducted using the Google search engine set to produce only German hits. The quality of the first hundred resulting sites was assessed according to a range of criteria, including diagnosis and therapy, links, and interactive offers. Evidence-based medical information was provided by more than half of the sites resulting from the search term schizophrenia and by less than one third of the psychosis hits. Information and discussion on the relationship between drugs and psychosis appeared almost exclusively under the term psychosis. It is suggested that mental health care professionals can use knowledge of the information their patients encounter on the internet to assist them in profiting from this source of information.
DOE Office of Scientific and Technical Information (OSTI.GOV)
J.C. Ryman
This calculation is a revision of a previous calculation (Ref. 7.5) that bears the same title and has the document identifier BBAC00000-01717-0210-00006 REV 01. The purpose of this revision is to remove TBV (to-be-verified)-4110 associated with the output files of the previous version (Ref. 7.30). The purpose of this and the previous calculation is to generate source terms for a representative boiling water reactor (BWR) spent nuclear fuel (SNF) assembly for the first one million years after the SNF is discharged from the reactors. This calculation includes an examination of several ways to represent BWR assemblies and operating conditions in SAS2H in order to quantify the effects these representations may have on source terms. These source terms provide information characterizing the neutron and gamma spectra in particles per second, the decay heat in watts, and radionuclide inventories in curies. Source terms are generated for a range of burnups and enrichments (see Table 2) that are representative of the waste stream and stainless steel (SS) clad assemblies. During this revision, it was determined that the burnups used for the computer runs of the previous revision were actually about 1.7% less than the stated, or nominal, burnups. See Section 6.6 for a discussion of how to account for this effect before using any source terms from this calculation. The source term due to the activation of corrosion products deposited on the surfaces of the assembly from the coolant is also calculated. The results of this calculation support many areas of the Monitored Geologic Repository (MGR), which include thermal evaluation, radiation dose determination, radiological safety analyses, surface and subsurface facility designs, and total system performance assessment. This includes MGR items classified as Quality Level 1, for example, the Uncanistered Spent Nuclear Fuel Disposal Container (Ref. 7.27, page 7).
Therefore, this calculation is subject to the requirements of the Quality Assurance Requirements and Description (Ref. 7.28). The performance of the calculation and development of this document are carried out in accordance with AP-3.124, ''Design Calculation and Analyses'' (Ref. 7.29).
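The inventories in such source-term calculations are produced by a depletion/decay code (SAS2H here), but the elementary decay law is useful for scaling or sanity-checking tabulated activities between time steps. A one-line sketch (the function name is ours):

```python
def activity(a0_curies, half_life_years, t_years):
    """Radionuclide activity after decay: A(t) = A0 * 2**(-t / T_half).

    Single-nuclide closed form only; chained parent/daughter inventories
    of the kind SAS2H produces require the Bateman equations instead."""
    return a0_curies * 2.0 ** (-t_years / half_life_years)
```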
NASA Astrophysics Data System (ADS)
Gica, E.
2016-12-01
The Short-term Inundation Forecasting for Tsunamis (SIFT) tool, developed by NOAA Center for Tsunami Research (NCTR) at the Pacific Marine Environmental Laboratory (PMEL), is used in forecast operations at the Tsunami Warning Centers in Alaska and Hawaii. The SIFT tool relies on a pre-computed tsunami propagation database, real-time DART buoy data, and an inversion algorithm to define the tsunami source. The tsunami propagation database is composed of 50 × 100 km unit sources, simulated basin-wide for at least 24 hours. Different combinations of unit sources, DART buoys, and length of real-time DART buoy data can generate a wide range of results within the defined tsunami source. For an inexperienced SIFT user, the primary challenge is to determine which solution, among multiple solutions for a single tsunami event, would provide the best forecast in real time. This study investigates how the use of different tsunami sources affects simulated tsunamis at tide gauge locations. Using the tide gauge at Hilo, Hawaii, a total of 50 possible solutions for the 2011 Tohoku tsunami are considered. Maximum tsunami wave amplitude and root mean square error results are used to compare tide gauge data and the simulated tsunami time series. Results of this study will facilitate SIFT users' efforts to determine if the simulated tide gauge tsunami time series from a specific tsunami source solution would be within the range of possible solutions. This study will serve as the basis for investigating more historical tsunami events and tide gauge locations.
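The comparison described above, scoring candidate source solutions against a tide-gauge record by root-mean-square error and maximum amplitude, can be sketched as follows. The candidate time series would come from SIFT inversions in practice; all names here are illustrative.

```python
def rmse(observed, simulated):
    """Root mean square error between two equal-length time series."""
    n = len(observed)
    return (sum((o - s) ** 2 for o, s in zip(observed, simulated)) / n) ** 0.5

def rank_solutions(observed, solutions):
    """Rank candidate source-term solutions against a tide-gauge record.

    Returns (rmse, max amplitude, name) tuples sorted by RMSE, so the
    best-fitting candidate comes first. Sketch only: the study also
    compares maximum amplitudes directly, not just via this report."""
    scored = [(rmse(observed, sim), max(abs(v) for v in sim), name)
              for name, sim in solutions.items()]
    return sorted(scored)
```

A candidate identical to the record scores an RMSE of zero and ranks first; one with doubled amplitude ranks behind it.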
Scarton, Lou Ann; Del Fiol, Guilherme; Oakley-Girvan, Ingrid; Gibson, Bryan; Logan, Robert; Workman, T. Elizabeth
2018-01-01
Objective The research examined complementary and alternative medicine (CAM) information-seeking behaviors and preferences from short- to long-term cancer survival, including goals, motivations, and information sources. Methods A mixed-methods approach was used with cancer survivors from the “Assessment of Patients’ Experience with Cancer Care” 2004 cohort. Data collection included a mail survey and phone interviews using the critical incident technique (CIT). Results Seventy survivors from the 2004 study responded to the survey, and eight participated in the CIT interviews. Quantitative results showed that CAM usage did not change significantly between 2004 and 2015. The following themes emerged from the CIT: families’ and friends’ provision of the initial introduction to a CAM, use of CAM to manage the emotional and psychological impact of cancer, utilization of trained CAM practitioners, and online resources as a prominent source for CAM information. The majority of participants expressed an interest in an online information-sharing portal for CAM. Conclusion Patients continue to use CAM well into long-term cancer survivorship. Finding trustworthy sources for information on CAM presents many challenges such as reliability of source, conflicting information on efficacy, and unknown interactions with conventional medications. Study participants expressed interest in an online portal to meet these needs through patient testimonials and linkage of claims to the scientific literature. Such a portal could also aid medical librarians and clinicians in locating and evaluating CAM information on behalf of patients. PMID:29339938
NASA Astrophysics Data System (ADS)
Barrett, Steven R. H.; Britter, Rex E.
Predicting long-term mean pollutant concentrations in the vicinity of airports, roads and other industrial sources is frequently of concern in regulatory and public health contexts. Many emissions are represented geometrically as ground-level line or area sources. Well-developed modelling tools such as AERMOD and ADMS are able to model dispersion from finite (i.e. non-point) sources with considerable accuracy, drawing upon an up-to-date understanding of boundary layer behaviour. Due to mathematical difficulties associated with line and area sources, computationally expensive numerical integration schemes have been developed. For example, some models decompose area sources into a large number of line sources orthogonal to the mean wind direction, for which an analytical (Gaussian) solution exists. Models also employ a time-series approach, which involves computing mean pollutant concentrations for every hour over one or more years of meteorological data. This can give rise to computer runtimes of several days for the assessment of a site. While this may be acceptable for the assessment of a single industrial complex, airport, etc., this level of computational cost precludes national or international policy assessments at the level of detail available with dispersion modelling. In this paper, we extend previous work [S.R.H. Barrett, R.E. Britter, 2008. Development of algorithms and approximations for rapid operational air quality modelling. Atmospheric Environment 42 (2008) 8105-8111] to line and area sources. We introduce approximations which allow for the development of new analytical solutions for long-term mean dispersion from line and area sources, based on hypergeometric functions. We describe how these solutions can be parameterized from a single point-source run of an existing advanced dispersion model, thereby accounting for all processes modelled in the more costly algorithms.
The parameterization method combined with the analytical solutions for long-term mean dispersion is shown to produce results several orders of magnitude more efficiently, with a loss of accuracy that is small compared to the absolute accuracy of advanced dispersion models near sources. The method can be readily incorporated into existing dispersion models, and may allow additional computation time to be spent on modelling dispersion processes more accurately in future, rather than on accounting for source geometry.
Porous elastic system with nonlinear damping and source terms
NASA Astrophysics Data System (ADS)
Freitas, Mirelson M.; Santos, M. L.; Langa, José A.
2018-02-01
We study the long-time behavior of a porous-elastic system, focusing on the interplay between nonlinear damping and source terms. The sources may represent restoring forces, but may also be focusing, thus potentially amplifying the total energy, which is the primary scenario of interest. By employing nonlinear semigroups and the theory of monotone operators, we obtain several results on the existence of local and global weak solutions, and on the uniqueness of weak solutions. Moreover, we prove that such unique solutions depend continuously on the initial data. Under some restrictions on the parameters, we also prove that every weak solution to our system blows up in finite time, provided the initial energy is negative and the sources are more dominant than the damping in the system. Additional results are obtained via careful analysis involving the Nehari manifold. Specifically, we prove the existence of a unique global weak solution with initial data coming from the "good" part of the potential well. For such a global solution, we prove that the total energy of the system decays exponentially or algebraically, depending on the behavior of the dissipation in the system near the origin. We also prove the existence of a global attractor.
Ihme, Matthias; Marsden, Alison L; Pitsch, Heinz
2008-02-01
A pattern search optimization method is applied to the generation of optimal artificial neural networks (ANNs). Optimization is performed using a mixed-variable extension of the generalized pattern search method. This method offers the advantage that categorical variables, such as neural transfer functions and nodal connectivities, can be used as parameters in the optimization. When used together with a surrogate, the resulting algorithm is highly efficient for expensive objective functions. Results demonstrate the effectiveness of this method in optimizing an ANN for the number of neurons, the type of transfer function, and the connectivity among neurons. The optimization method is applied to a chemistry approximation of practical relevance. In this application, temperature and a chemical source term are approximated as functions of two independent parameters using optimal ANNs. Comparison of the performance of optimal ANNs with conventional tabulation methods demonstrates equivalent accuracy with considerable savings in memory storage. The architecture of the optimal ANN for the approximation of the chemical source term consists of a fully connected feedforward network having four nonlinear hidden layers and 117 synaptic weights. An equivalent representation of the chemical source term using tabulation techniques would require a 500 × 500 grid-point discretization of the parameter space.
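The quoted storage comparison can be sanity-checked with quick arithmetic, assuming one stored value per grid point of the tabulation:

```python
table_entries = 500 * 500     # 500 x 500 grid-point tabulation of the source term
ann_weights = 117             # synaptic weights of the optimal ANN
reduction = table_entries / ann_weights
print(f"{reduction:.0f}x fewer stored parameters")  # prints: 2137x fewer stored parameters
```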
Huang, Weidong; Li, Kun; Wang, Gan; Wang, Yingzhe
2013-01-01
In this article, we present a newly designed inverse umbrella surface aerator and test its performance in driving the flow of an oxidation ditch. Results show that it drives the oxidation ditch better than the original design, with higher average velocity and a more uniform flow field. We also present a computational fluid dynamics model for predicting the flow field in an oxidation ditch driven by a surface aerator. An improved momentum source term approach to simulate the flow field of the oxidation ditch driven by an inverse umbrella surface aerator was developed and validated through experiments. Four turbulence models were investigated with the approach, including the standard k−ɛ model, RNG k−ɛ model, realizable k−ɛ model, and Reynolds stress model, and the predicted data were compared with those calculated with the multiple rotating reference frame (MRF) and sliding mesh (SM) approaches. Results of the momentum source term approach are in good agreement with the experimental data, and its prediction accuracy is better than MRF and close to SM. The momentum source term approach also has lower computational expense, is simpler to preprocess, and is easier to use. PMID:24302850
NASA Astrophysics Data System (ADS)
Ross, Z. E.; Ben-Zion, Y.; Zhu, L.
2015-02-01
We analyse source tensor properties of seven Mw > 4.2 earthquakes in the complex trifurcation area of the San Jacinto Fault Zone, CA, with a focus on isotropic radiation that may be produced by rock damage in the source volumes. The earthquake mechanisms are derived with generalized `Cut and Paste' (gCAP) inversions of three-component waveforms typically recorded by >70 stations at regional distances. The gCAP method includes parameters ζ and χ representing, respectively, the relative strength of the isotropic and CLVD source terms. The possible errors in the isotropic and CLVD components due to station variability are quantified with bootstrap resampling for each event. The results indicate statistically significant explosive isotropic components for at least six of the events, corresponding to ˜0.4-8 per cent of the total potency/moment of the sources. In contrast, the CLVD components for most events are not found to be statistically significant. Trade-off and correlation between the isotropic and CLVD components are studied using synthetic tests with realistic station configurations. The associated uncertainties are found to be generally smaller than the observed isotropic components. Two different tests with velocity model perturbation are conducted to quantify the uncertainty due to inaccuracies in the Green's functions. Applications of the Mann-Whitney U test indicate statistically significant explosive isotropic terms for most events, consistent with brittle damage production at the source.
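Station-resampling bootstrap of the kind described can be sketched as follows; `component_fn` is a hypothetical stand-in for a full gCAP inversion run on the resampled station set, not the authors' code:

```python
import numpy as np

def bootstrap_uncertainty(component_fn, stations, n_boot=1000, seed=0):
    """Bootstrap over the station set to estimate the uncertainty of an
    inverted source-tensor component (e.g. the isotropic fraction).

    component_fn: callable returning the component value for a station sample
    stations: array of station identifiers (or data) to resample from
    Returns (mean, standard deviation) over the bootstrap replicates.
    """
    rng = np.random.default_rng(seed)
    vals = np.array([
        component_fn(rng.choice(stations, size=len(stations), replace=True))
        for _ in range(n_boot)
    ])
    return vals.mean(), vals.std()
```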
Source Term Model for Vortex Generator Vanes in a Navier-Stokes Computer Code
NASA Technical Reports Server (NTRS)
Waithe, Kenrick A.
2004-01-01
A source term model for an array of vortex generators was implemented into a non-proprietary Navier-Stokes computer code, OVERFLOW. The source term models the side force created by a vortex generator vane. The model is obtained by introducing a side force to the momentum and energy equations that can adjust its strength automatically based on the local flow. The model was tested and calibrated by comparing data from numerical simulations and experiments of a single low profile vortex generator vane on a flat plate. In addition, the model was compared to experimental data of an S-duct with 22 co-rotating, low profile vortex generators. The source term model allowed a grid reduction of about seventy percent when compared with the numerical simulations performed on a fully gridded vortex generator on a flat plate without adversely affecting the development and capture of the vortex created. The source term model was able to predict the shape and size of the stream-wise vorticity and velocity contours very well when compared with both numerical simulations and experimental data. The peak vorticity and its location were also predicted very well when compared to numerical simulations and experimental data. The circulation predicted by the source term model matches the prediction of the numerical simulation. The source term model predicted the engine fan face distortion and total pressure recovery of the S-duct with 22 co-rotating vortex generators very well. The source term model allows a researcher to quickly investigate different locations of individual or a row of vortex generators. The researcher is able to conduct a preliminary investigation with minimal grid generation and computational time.
Comparison of advanced rechargeable batteries for autonomous underwater vehicles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Descroix, J.P.; Chagnon, G.
1994-12-31
For AUVs to fulfil their promise in military, oceanic, and scientific missions, their power sources must meet the system needs. With this in view, this article addresses the present and near-term options for electric power sources. The evaluation is based on a hypothetical AUV and considers the feasible options and the costs involved in manufacturing such power sources. 5 refs.
NASA Technical Reports Server (NTRS)
Palosz, W.
2003-01-01
The amounts and composition of residual gases formed in sealed ampoules loaded with different sources (elements and II-VI and IV-VI compounds) after consecutive annealings were investigated. A given source was subjected to a series of heat treatments, with intermediate measurements and removal of the gas accumulated in the system. The results of these experiments are discussed in terms of the underlying thermochemical and kinetic phenomena and practical limitations of reducing the amount of residual gases in sealed ampoules.
Tuning Into Brown Dwarfs: Long-Term Radio Monitoring of Two Very Low Mass Dwarfs
NASA Astrophysics Data System (ADS)
Van Linge, Russell; Burgasser, Adam J.; Melis, Carl; Williams, Peter K. G.
2017-01-01
The very lowest-mass (VLM) stars and brown dwarfs, with effective temperatures T < 3000 K, exhibit mixed magnetic activity trends, with H-alpha and X-ray emission that declines rapidly beyond type M7/M8, but persistent radio emission in roughly 10-20% of sources. The dozen or so known VLM radio emitters show a broad range of emission characteristics and time-dependent behavior, including steady persistent emission, periodic oscillations, periodic polarized bursts, and aperiodic flares. Understanding the evolution of these variability patterns, and in particular whether they undergo solar-like cycles, requires long-term monitoring. We report the results of a long-term JVLA monitoring program of two magnetically-active VLM dwarf binaries, the young M7 2MASS 1314+1320AB and the older L5 2MASS 1315-2649AB. At the bi-weekly cadence, 2MASS 1314 continues to show regular flaring, while 2MASS 1315 remains a quiescent emitter. On daily time scales, both sources show mean flux densities that can vary significantly over just a few days. These results suggest that the long-term radio behavior of radio-emitting VLM dwarfs is just as diverse and complex as their short-term behavior.
English-Russian, Russian-English glossary of coal-cleaning terms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pekar, J.
1987-09-01
The document is an English-Russian, Russian-English glossary of coal-cleaning terms, compiled as a joint U.S./Soviet effort. The need for the glossary resulted from the growing number of language-specific terms used during information exchanges within the framework of the U.S./U.S.S.R. Working Group on Stationary Source Air Pollution Control Technology, under the U.S./U.S.S.R. Agreement of Cooperation in the Field of Environmental Protection.
Inverse modelling of radionuclide release rates using gamma dose rate observations
NASA Astrophysics Data System (ADS)
Hamburger, Thomas; Stohl, Andreas; von Haustein, Christoph; Thummerer, Severin; Wallner, Christian
2014-05-01
Severe accidents in nuclear power plants, such as the historical accident in Chernobyl in 1986 or the more recent disaster at the Fukushima Dai-ichi nuclear power plant in 2011, have drastic impacts on the population and environment. The hazardous consequences extend to national and continental scales. Environmental measurements and methods to model the transport and dispersion of the released radionuclides serve as a platform to assess the regional impact of nuclear accidents - both for research purposes and, more importantly, to determine the immediate threat to the population. However, assessments of the regional radionuclide activity concentrations and the individual exposure to radiation dose are subject to several uncertainties, for example the accurate model representation of wet and dry deposition. One of the most significant uncertainties, however, results from the estimation of the source term, that is, the time-dependent quantification of the released spectrum of radionuclides during the course of the nuclear accident. The quantification of the source terms of severe nuclear accidents may either remain uncertain (e.g. Chernobyl, Devell et al., 1995) or rely on rather rough estimates of released key radionuclides given by the operators. Precise measurements are mostly missing due to practical limitations during the accident. Inverse modelling can be used to realise a feasible estimation of the source term (Davoine and Bocquet, 2007). Existing point measurements of radionuclide activity concentrations are therefore combined with atmospheric transport models. The release rates of radionuclides at the accident site are then obtained by improving the agreement between the modelled and observed concentrations (Stohl et al., 2012). The accuracy of the method, and hence of the resulting source term, depends amongst others on the availability, reliability and the resolution in time and space of the observations.
Radionuclide activity concentrations are observed on a relatively sparse grid, and the temporal resolution of available data may be low, on the order of hours or a day. Gamma dose rates, on the other hand, are observed routinely on a much denser grid and at higher temporal resolution. Gamma dose rate measurements contain no explicit information on the observed spectrum of radionuclides and have to be interpreted carefully. Nevertheless, they provide valuable information for the inverse evaluation of the source term due to their availability (Saunier et al., 2013). We present a new inversion approach combining an atmospheric dispersion model and observations of radionuclide activity concentrations and gamma dose rates to obtain the source term of radionuclides. We use the Lagrangian particle dispersion model FLEXPART (Stohl et al., 1998; Stohl et al., 2005) to model the atmospheric transport of the released radionuclides. The gamma dose rates are calculated from the modelled activity concentrations. The inversion method uses a Bayesian formulation considering uncertainties for the a priori source term and the observations (Eckhardt et al., 2008). The a priori information on the source term is a first guess. The gamma dose rate observations are used with inverse modelling to improve this first guess and to retrieve a reliable source term. The details of this method will be presented at the conference. This work is funded by the Bundesamt für Strahlenschutz BfS, Forschungsvorhaben 3612S60026. References Davoine, X. and Bocquet, M., Atmos. Chem. Phys., 7, 1549-1564, 2007. Devell, L., et al., OCDE/GD(96)12, 1995. Eckhardt, S., et al., Atmos. Chem. Phys., 8, 3881-3897, 2008. Saunier, O., et al., Atmos. Chem. Phys., 13, 11403-11421, 2013. Stohl, A., et al., Atmos. Environ., 32, 4245-4264, 1998. Stohl, A., et al., Atmos. Chem. Phys., 5, 2461-2474, 2005. Stohl, A., et al., Atmos. Chem. Phys., 12, 2313-2343, 2012.
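As a minimal illustration of the Bayesian formulation (not the FLEXPART-based system itself), Gaussian observation errors and a Gaussian prior around the first guess reduce the source-term estimate to a regularized least-squares problem; all names and the final nonnegativity projection below are illustrative assumptions:

```python
import numpy as np

def bayesian_source_inversion(M, y, x_a, sigma_o, sigma_b):
    """Least-squares Bayesian estimate of release rates x.

    M: (n_obs, n_src) source-receptor matrix from the dispersion model
    y: observations (activity concentrations and/or gamma dose rates)
    x_a: a priori (first-guess) source term
    sigma_o, sigma_b: observation and a priori standard deviations
    Minimizes |Mx - y|^2 / sigma_o^2 + |x - x_a|^2 / sigma_b^2.
    """
    n = len(x_a)
    A = M.T @ M / sigma_o**2 + np.eye(n) / sigma_b**2
    b = M.T @ y / sigma_o**2 + x_a / sigma_b**2
    x = np.linalg.solve(A, b)
    return np.maximum(x, 0.0)  # releases cannot be negative (ad hoc projection)
```

With equal weights, the estimate falls between the prior and the observation-implied value, as expected of a Bayesian compromise.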
A Well-Balanced Path-Integral f-Wave Method for Hyperbolic Problems with Source Terms
2014-01-01
Systems of hyperbolic partial differential equations with source terms (balance laws) arise in many applications where it is important to compute accurate time-dependent solutions modeling small perturbations of equilibrium solutions in which the source terms balance the hyperbolic part. The f-wave version of the wave-propagation algorithm is one approach, but requires the use of a particular averaged value of the source terms at each cell interface in order to be “well balanced” and exactly maintain steady states. A general approach to choosing this average is developed using the theory of path conservative methods. A scalar advection equation with a decay or growth term is introduced as a model problem for numerical experiments. PMID:24563581
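For the scalar model problem q_t + u q_x = psi(q) mentioned above, the role of the interface source average can be sketched as follows. This is a simplified upwind f-wave step assuming u > 0 and a midpoint source average, an assumption for illustration rather than the paper's path-conservative choice:

```python
import numpy as np

def fwave_step(q, u, dx, dt, psi):
    """One f-wave update for q_t + u q_x = psi(q), with u > 0 (sketch).

    The flux difference at each interface is reduced by the integrated
    interface source dx * psi(q_mid); when the source exactly balances the
    flux gradient, the f-waves vanish and the steady state is preserved.
    """
    qL, qR = q[:-1], q[1:]
    # f-wave: jump in flux minus integrated source across the interface
    beta = u * (qR - qL) - dx * psi(0.5 * (qL + qR))
    qnew = q.copy()
    qnew[1:] -= dt / dx * beta   # upwind (u > 0): right-going wave only
    return qnew
```

For a discrete steady state whose flux jump is exactly matched by the chosen source average (e.g. a linear profile with a constant source), the f-waves are zero and the state is maintained, which is the well-balancing property discussed in the abstract.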
NASA Astrophysics Data System (ADS)
Hu, Minpeng; Liu, Yanmei; Wang, Jiahui; Dahlgren, Randy A.; Chen, Dingjiang
2018-06-01
Source apportionment is critical for guiding development of efficient watershed nitrogen (N) pollution control measures. The ReNuMa (Regional Nutrient Management) model, a semi-empirical, semi-process-oriented model with modest data requirements, has been widely used for riverine N source apportionment. However, the ReNuMa model contains limitations for addressing long-term N dynamics by ignoring temporal changes in atmospheric N deposition rates and N-leaching lag effects. This work modified the ReNuMa model by revising the source code to allow yearly changes in atmospheric N deposition and incorporation of N-leaching lag effects into N transport processes. The appropriate N-leaching lag time was determined from cross-correlation analysis between annual watershed individual N source inputs and riverine N export. Accuracy of the modified ReNuMa model was demonstrated through analysis of a 31-year water quality record (1980-2010) from the Yongan watershed in eastern China. The revisions considerably improved the accuracy (Nash-Sutcliffe coefficient increased by ∼0.2) of the modified ReNuMa model for predicting riverine N loads. The modified model explicitly identified annual and seasonal changes in contributions of various N sources (i.e., point vs. nonpoint source, surface runoff vs. groundwater) to riverine N loads as well as the fate of watershed anthropogenic N inputs. Model results were consistent with previously modeled or observed lag time length as well as changes in riverine chloride and nitrate concentrations during the low-flow regime and available N levels in agricultural soils of this watershed. The modified ReNuMa model is applicable for addressing long-term changes in riverine N sources, providing decision-makers with critical information for guiding watershed N pollution control strategies.
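The lag selection via cross-correlation can be sketched as follows; the function name, variable names and the maximum lag searched are assumptions of this illustration, not the authors' implementation:

```python
import numpy as np

def leaching_lag(source_input, river_export, max_lag=10):
    """Pick the N-leaching lag (in years) that maximizes the correlation
    between annual watershed N inputs and lagged riverine N export."""
    x = np.asarray(source_input, dtype=float)
    y = np.asarray(river_export, dtype=float)
    best_lag, best_r = 0, -np.inf
    for lag in range(max_lag + 1):
        xi, yi = x[:len(x) - lag], y[lag:]
        r = np.corrcoef(xi, yi)[0, 1]
        if r > best_r:
            best_lag, best_r = lag, r
    return best_lag, best_r
```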
Hamid, Laith; Al Farawn, Ali; Merlet, Isabelle; Japaridze, Natia; Heute, Ulrich; Stephani, Ulrich; Galka, Andreas; Wendling, Fabrice; Siniatchkin, Michael
2017-07-01
The clinical routine of non-invasive electroencephalography (EEG) is usually performed with 8-40 electrodes, especially in long-term monitoring, infants or emergency care. There is a need in clinical and scientific brain imaging to develop inverse solution methods that can reconstruct brain sources from these low-density EEG recordings. In this proof-of-principle paper we investigate the performance of the spatiotemporal Kalman filter (STKF) in EEG source reconstruction with 9, 19 and 32 electrodes. We used simulated EEG data of epileptic spikes generated from lateral frontal and lateral temporal brain sources using state-of-the-art neuronal population models. For validation of source reconstruction, we compared STKF results to the location of the simulated source and to the results of the standard low-resolution brain electromagnetic tomography (LORETA) inverse solution. The STKF consistently showed less localization bias compared to LORETA, especially when the number of electrodes was decreased. The results encourage further research into the application of the STKF in source reconstruction of brain activity from low-density EEG recordings.
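The STKF builds on the standard Kalman predict/update cycle. A generic linear Kalman step is sketched below as background; it is not the spatiotemporal formulation of the paper, and all matrix names are the usual textbook conventions rather than the authors' notation:

```python
import numpy as np

def kalman_step(x, P, F, Q, H, R, y):
    """One predict/update step of a linear Kalman filter.

    x, P: state estimate and its covariance
    F, Q: state transition matrix and process noise covariance
    H, R: observation matrix and observation noise covariance
    y:    current observation vector
    """
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update
    S = H @ P @ H.T + R                      # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
    x = x + K @ (y - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```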
Long-Term Temporal Trends of Polychlorinated Biphenyls and Their Controlling Sources in China.
Zhao, Shizhen; Breivik, Knut; Liu, Guorui; Zheng, Minghui; Jones, Kevin C; Sweetman, Andrew J
2017-03-07
Polychlorinated biphenyls (PCBs) are industrial organic contaminants identified as persistent, bioaccumulative, toxic (PBT), and subject to long-range transport (LRT) with global-scale significance. This study focuses on a reconstruction and prediction for China of long-term emission trends of intentionally produced (IP) and unintentionally produced (UP) ∑7PCBs (UP-PCBs deriving from the manufacture of steel, cement and sinter iron) and their re-emissions from secondary sources (e.g., soils and vegetation) using a dynamic fate model (BETR-Global). Contemporary emission estimates combined with predictions from the multimedia fate model suggest that primary sources still dominate, although unintentional sources are predicted to become a main contributor from 2035 for PCB-28. Imported e-waste is predicted to play an increasing role until 2020-2030 on a national scale due to the decline of IP emissions. Hypothetical emission scenarios suggest that China could become a potential source to neighboring regions, with a net output of ∼0.4 t year⁻¹ by around 2050. However, future emission scenarios and hence model results will be dictated by the efficiency of control measures.
26 CFR 1.737-1 - Recognition of precontribution gain.
Code of Federal Regulations, 2012 CFR
2012-04-01
... Property A1 and Property A2 is long-term, U.S.-source capital gain or loss. The character of gain on Property A3 is long-term, foreign-source capital gain. B contributes Property B, nondepreciable real... long-term, U.S.-source capital gain ($10,000 gain on Property A1 and $8,000 loss on Property A2) and $1...
Source term identification in atmospheric modelling via sparse optimization
NASA Astrophysics Data System (ADS)
Adam, Lukas; Branda, Martin; Hamburger, Thomas
2015-04-01
Inverse modelling plays an important role in identifying the amount of harmful substances released into the atmosphere during major incidents such as power plant accidents or volcano eruptions. Another possible application of inverse modelling lies in monitoring CO2 emission limits, where only observations at certain places are available and the task is to estimate the total releases at given locations. This gives rise to minimizing the discrepancy between the observations and the model predictions. There are two standard ways of solving such problems. In the first one, this discrepancy is regularized by adding additional terms. Such terms may include Tikhonov regularization, distance from a priori information or a smoothing term. The resulting, usually quadratic, problem is then solved via standard optimization solvers. The second approach assumes that the error term has a (normal) distribution and makes use of Bayesian modelling to identify the source term. Instead of following the above-mentioned approaches, we utilize techniques from the field of compressive sensing. Such techniques look for the sparsest solution (the solution with the smallest number of nonzeros) of a linear system, where a maximal allowed error term may be added to this system. Even though this is a well-developed field with many possible solution techniques, most of them do not consider even the simplest constraints which are naturally present in atmospheric modelling. One such example is the nonnegativity of release amounts. We believe that the concept of a sparse solution is natural in both the problem of identifying the source location and that of identifying the time process of the source release. In the first case, it is usually assumed that there are only a few release points and the task is to find them. In the second case, the time window is usually much longer than the duration of the actual release.
In both cases, the optimal solution should contain a large number of zeros, giving rise to the concept of sparsity. In the paper, we summarize several optimization techniques which are used for finding sparse solutions and propose modifications to handle selected constraints such as nonnegativity constraints and simple linear constraints, for example the minimal or maximal amount of total release. These techniques range from successive convex approximations to the solution of one nonconvex problem. On simple examples, we explain these techniques and compare them in terms of implementation simplicity, approximation capability and convergence properties. Finally, these methods are applied to the European Tracer Experiment (ETEX) data and the results are compared with current state-of-the-art techniques such as regularized least squares or the Bayesian approach. The obtained results show the surprisingly good performance of these techniques. This research is supported by the EEA/Norwegian Financial Mechanism under project 7F14287 STRADI.
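The simplest technique of the kind alluded to, a projected proximal-gradient (ISTA-type) iteration for a nonnegative sparse release estimate, can be sketched as follows; this is an illustration of the idea, not one of the specific methods compared in the paper:

```python
import numpy as np

def sparse_source_term(A, y, lam, n_iter=500):
    """Nonnegative sparse release estimate via projected ISTA (sketch).

    Minimizes 0.5*|Ax - y|^2 + lam*sum(x) subject to x >= 0.
    For x >= 0 the l1 penalty is just sum(x), so the proximal step
    reduces to a shift followed by projection onto the nonnegative orthant.
    """
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)
        x = np.maximum(x - (grad + lam) / L, 0.0)
    return x
```

For an orthonormal A the iteration converges to the soft-thresholded data, zeroing releases smaller than the penalty, which is the sparsifying behaviour the abstract relies on.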
NASA Astrophysics Data System (ADS)
Xia, Xilin; Liang, Qiuhua; Ming, Xiaodong; Hou, Jingming
2017-05-01
Numerical models solving the full 2-D shallow water equations (SWEs) have been increasingly used to simulate overland flows and better understand the transient flow dynamics of flash floods in a catchment. However, there still exist key challenges that have not yet been resolved for the development of fully dynamic overland flow models, related to (1) the difficulty of maintaining numerical stability and accuracy in the limit of disappearing water depth and (2) inaccurate estimation of velocities and discharges on slopes as a result of the strong nonlinearity of friction terms. This paper aims to tackle these key research challenges and presents a new numerical scheme for accurately and efficiently modeling large-scale transient overland flows over complex terrains. The proposed scheme features a novel surface reconstruction method (SRM) to correctly compute slope source terms and maintain numerical stability at small water depth, and a new implicit discretization method to handle the highly nonlinear friction terms. The resulting shallow water overland flow model is first validated against analytical and experimental test cases and then applied to simulate a hypothetical rainfall event in the 42 km² Haltwhistle Burn catchment, UK.
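The benefit of treating the friction source implicitly can be illustrated with a point-implicit Manning-friction update for the unit discharge. This is a commonly used linearization shown as a sketch under stated assumptions, not the paper's exact scheme:

```python
def implicit_friction_update(q, h, n_manning, g, dt):
    """Point-implicit update of unit discharge q for the Manning friction
    source term dq/dt = -g * n^2 * |q| * q / h^(7/3) (sketch).

    Freezing |q| at the current value gives the unconditionally stable
    update q_new = q / (1 + dt * g * n^2 * |q| / h^(7/3)), which damps q
    toward zero as h -> 0 instead of overshooting and flipping sign as an
    explicit step would at small depth.
    """
    if h <= 0.0:
        return 0.0                       # dry cell: no discharge
    cf = g * n_manning**2 * abs(q) / h**(7.0 / 3.0)
    return q / (1.0 + dt * cf)
```

Because the denominator exceeds one, the updated discharge always keeps its sign and shrinks in magnitude, mirroring the physical role of friction.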
Homogenization of the Brush Problem with a Source Term in L1
NASA Astrophysics Data System (ADS)
Gaudiello, Antonio; Guibé, Olivier; Murat, François
2017-07-01
We consider a domain which has the form of a brush in 3D or the form of a comb in 2D, i.e. an open set which is composed of cylindrical vertical teeth distributed over a fixed basis. All the teeth have a similar fixed height; their cross sections can vary from one tooth to another and are not supposed to be smooth; moreover the teeth can be adjacent, i.e. they can share parts of their boundaries. The diameter of every tooth is supposed to be less than or equal to ɛ, and the asymptotic volume fraction of the teeth (as ɛ tends to zero) is supposed to be bounded from below away from zero, but no periodicity is assumed on the distribution of the teeth. In this domain we study the asymptotic behavior (as ɛ tends to zero) of the solution of a second order elliptic equation with a zeroth order term which is bounded from below away from zero, when the homogeneous Neumann boundary condition is satisfied on the whole of the boundary. First, we revisit the problem where the source term belongs to L2. This is a classical problem, but our homogenization result takes place in a geometry which is more general than the ones which have been considered before. Moreover we prove a corrector result which is new. Then, we study the case where the source term belongs to L1. Working in the framework of renormalized solutions and introducing a definition of renormalized solutions for degenerate elliptic equations where only the vertical derivative is involved (such a definition is new), we identify the limit problem and prove a corrector result.
Fermi Large Area Telescope Second Source Catalog
NASA Technical Reports Server (NTRS)
Nolan, P. L.; Abdo, A. A.; Ackermann, M.; Ajello, M; Allafort, A.; Antolini, E; Bonnell, J.; Cannon, A.; Celik O.; Corbet, R.;
2012-01-01
We present the second catalog of high-energy gamma-ray sources detected by the Large Area Telescope (LAT), the primary science instrument on the Fermi Gamma-ray Space Telescope (Fermi), derived from data taken during the first 24 months of the science phase of the mission, which began on 2008 August 4. Source detection is based on the average flux over the 24-month period. The Second Fermi-LAT catalog (2FGL) includes source location regions, defined in terms of elliptical fits to the 95% confidence regions and spectral fits in terms of power-law, exponentially cutoff power-law, or log-normal forms. Also included are flux measurements in 5 energy bands and light curves on monthly intervals for each source. Twelve sources in the catalog are modeled as spatially extended. We provide a detailed comparison of the results from this catalog with those from the first Fermi-LAT catalog (1FGL). Although the diffuse Galactic and isotropic models used in the 2FGL analysis are improved compared to the 1FGL catalog, we attach caution flags to 162 of the sources to indicate possible confusion with residual imperfections in the diffuse model. The 2FGL catalog contains 1873 sources detected and characterized in the 100 MeV to 100 GeV range, of which we consider 127 as being firmly identified and 1171 as being reliably associated with counterparts of known or likely gamma-ray-producing source classes.
Source Term Model for Steady Micro Jets in a Navier-Stokes Computer Code
NASA Technical Reports Server (NTRS)
Waithe, Kenrick A.
2005-01-01
A source term model for steady micro jets was implemented into a non-proprietary Navier-Stokes computer code, OVERFLOW. The source term models the mass flow and momentum created by a steady blowing micro jet. The model is obtained by adding the momentum and mass flow created by the jet to the Navier-Stokes equations. The model was tested by comparing with data from numerical simulations of a single, steady micro jet on a flat plate in two and three dimensions. The source term model predicted the velocity distribution well compared to the two-dimensional plate using a steady mass flow boundary condition, which was used to simulate a steady micro jet. The model was also compared to two three-dimensional flat plate cases using a steady mass flow boundary condition to simulate a steady micro jet. The three-dimensional comparison included a case with a grid generated to capture the circular shape of the jet and a case without a grid generated for the micro jet. The case without the jet grid mimics the application of the source term. The source term model compared well with both of the three-dimensional cases. Comparisons of velocity distribution were made before and after the jet and Mach and vorticity contours were examined. The source term model allows a researcher to quickly investigate different locations of individual or several steady micro jets. The researcher is able to conduct a preliminary investigation with minimal grid generation and computational time.
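The general idea of injecting a jet's mass and momentum through source terms can be sketched as follows; the data structures and the uniform distribution over marked cells are assumptions of this illustration, not the OVERFLOW implementation:

```python
def add_jet_source(rhs_mass, rhs_mom, cell_ids, mdot, u_jet, cell_volume):
    """Distribute a steady micro-jet's mass flow and momentum over the
    marked cells as volumetric source terms (sketch).

    rhs_mass: per-cell right-hand side of the continuity equation
    rhs_mom:  per-cell right-hand sides of the momentum equations (3 components)
    cell_ids: indices of the cells covering the jet orifice
    mdot:     jet mass flow rate; u_jet: jet velocity vector
    """
    per_cell = mdot / len(cell_ids)
    for c in cell_ids:
        rhs_mass[c] += per_cell / cell_volume                       # mass source
        for d in range(3):
            rhs_mom[c][d] += per_cell * u_jet[d] / cell_volume      # momentum source
    return rhs_mass, rhs_mom
```

No grid refinement around the orifice is needed, which is what enables the rapid placement studies mentioned in the abstract.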
Regularized Dual Averaging Image Reconstruction for Full-Wave Ultrasound Computed Tomography.
Matthews, Thomas P; Wang, Kun; Li, Cuiping; Duric, Neb; Anastasio, Mark A
2017-05-01
Ultrasound computed tomography (USCT) holds great promise for breast cancer screening. Waveform inversion-based image reconstruction methods account for higher order diffraction effects and can produce high-resolution USCT images, but are computationally demanding. Recently, a source encoding technique has been combined with stochastic gradient descent (SGD) to greatly reduce image reconstruction times. However, this method bundles the stochastic data fidelity term with the deterministic regularization term. This limitation can be overcome by replacing SGD with a structured optimization method, such as the regularized dual averaging method, that exploits knowledge of the composition of the cost function. In this paper, the dual averaging method is combined with source encoding techniques to improve the effectiveness of regularization while maintaining the reduced reconstruction times afforded by source encoding. It is demonstrated that each iteration can be decomposed into a gradient descent step based on the data fidelity term and a proximal update step corresponding to the regularization term. Furthermore, the regularization term is never explicitly differentiated, allowing nonsmooth regularization penalties to be naturally incorporated. The wave equation is solved by the use of a time-domain method. The effectiveness of this approach is demonstrated through computer simulation and experimental studies. The results suggest that the dual averaging method can produce images with less noise and comparable resolution to those obtained by the use of SGD.
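The decomposition the abstract describes, a gradient step on the smooth data-fidelity term followed by a proximal step that never differentiates the regularizer, can be illustrated on a toy one-dimensional problem. The quadratic fidelity term, L1 penalty, and all parameters below are assumptions for illustration, not the USCT cost function.

```python
# Minimal sketch of a split (proximal-style) update: gradient descent on
# the data-fidelity term, then a proximal step on a nonsmooth L1 penalty.

def soft_threshold(x, t):
    """Proximal operator of t * |x| (the L1 penalty)."""
    if x > t:
        return x - t
    if x < -t:
        return x + t
    return 0.0

def prox_gradient(a, b, lam, step, iters):
    """Minimize 0.5*(a*x - b)^2 + lam*|x| without differentiating |x|."""
    x = 0.0
    for _ in range(iters):
        grad = a * (a * x - b)                            # fidelity gradient
        x = soft_threshold(x - step * grad, step * lam)   # proximal update
    return x

# For a=1, b=2, lam=0.5 the exact minimizer is b - lam = 1.5.
x_hat = prox_gradient(a=1.0, b=2.0, lam=0.5, step=0.5, iters=200)
```

The key point mirrored from the abstract is that the nonsmooth term enters only through its proximal operator (here, soft thresholding), so nondifferentiable penalties are handled naturally.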
The Scaling of Broadband Shock-Associated Noise with Increasing Temperature
NASA Technical Reports Server (NTRS)
Miller, Steven A.
2012-01-01
A physical explanation for the saturation of broadband shock-associated noise (BBSAN) intensity with increasing jet stagnation temperature has eluded investigators. An explanation is proposed for this phenomenon with the use of an acoustic analogy. For this purpose the acoustic analogy of Morris and Miller is examined. To isolate the relevant physics, the scaling of BBSAN at the peak intensity level at the sideline (psi = 90 degrees) observer location is examined. Scaling terms are isolated from the acoustic analogy and the result is compared using a convergent nozzle with the experiments of Bridges and Brown and using a convergent-divergent nozzle with the experiments of Kuo, McLaughlin, and Morris at four nozzle pressure ratios in increments of total temperature ratios from one to four. The equivalent source within the framework of the acoustic analogy for BBSAN is based on local field quantities at shock wave shear layer interactions. The equivalent source combined with accurate calculations of the propagation of sound through the jet shear layer, using an adjoint vector Green's function solver of the linearized Euler equations, allows for predictions that retain the scaling with respect to stagnation pressure and allows for the accurate saturation of BBSAN with increasing stagnation temperature. This is a minor change to the source model relative to the previously developed models. The full development of the scaling term is shown. The sources and vector Green's function solver are informed by steady Reynolds-Averaged Navier-Stokes solutions. These solutions are examined as a function of stagnation temperature at the first shock wave shear layer interaction.
It is discovered that saturation of BBSAN with increasing jet stagnation temperature occurs due to a balance between the amplification of the sound propagation through the shear layer and the source term scaling.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bixler, Nathan E.; Osborn, Douglas M.; Sallaberry, Cedric Jean-Marie
2014-02-01
This paper describes the convergence of MELCOR Accident Consequence Code System, Version 2 (MACCS2) probabilistic results of offsite consequences for the uncertainty analysis of the State-of-the-Art Reactor Consequence Analyses (SOARCA) unmitigated long-term station blackout scenario at the Peach Bottom Atomic Power Station. The consequence metrics evaluated are individual latent-cancer fatality (LCF) risk and individual early fatality risk. Consequence results are presented as conditional risk (i.e., assuming the accident occurs, risk per event) to individuals of the public as a result of the accident. In order to verify convergence for this uncertainty analysis, as recommended by the Nuclear Regulatory Commission’s Advisory Committee on Reactor Safeguards, a ‘high’ source term from the original population of Monte Carlo runs has been selected to be used for: (1) a study of the distribution of consequence results stemming solely from epistemic uncertainty in the MACCS2 parameters (i.e., separating the effect from the source term uncertainty), and (2) a comparison between Simple Random Sampling (SRS) and Latin Hypercube Sampling (LHS) in order to validate the original results obtained with LHS. Three replicates (each using a different random seed) of size 1,000 each using LHS and another set of three replicates of size 1,000 using SRS are analyzed. The results show that the LCF risk results are well converged with either LHS or SRS sampling. The early fatality risk results are less well converged at radial distances beyond 2 miles, and this is expected due to the sparse data (predominance of “zero” results).
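The two sampling schemes compared in the study differ as sketched below: SRS draws each sample independently, while LHS stratifies the unit interval so each of n equal-probability bins receives exactly one sample, which reduces the variance of sample statistics. This toy one-dimensional version is an illustration of the schemes themselves, not the MACCS2 sampling machinery.

```python
import random

def srs(n, rng):
    """Simple Random Sampling: n independent uniform draws."""
    return [rng.random() for _ in range(n)]

def lhs_1d(n, rng):
    """Latin Hypercube Sampling in 1-D: one uniform draw inside each of
    the n equal-width strata, then shuffled to randomize the order."""
    points = [(i + rng.random()) / n for i in range(n)]
    rng.shuffle(points)
    return points

rng = random.Random(42)
lhs_sample = lhs_1d(1000, rng)
srs_sample = srs(1000, rng)
# Each LHS stratum [i/1000, (i+1)/1000) contains exactly one point:
strata = sorted(min(999, int(x * 1000)) for x in lhs_sample)
```

In higher dimensions LHS applies this stratification independently per parameter and pairs the columns randomly, which is why it covers the parameter space more evenly than SRS for the same number of code runs.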
Estuarine turbidity, flushing, salinity, and circulation
NASA Technical Reports Server (NTRS)
Pritchard, D. W.
1972-01-01
The effects of estuarine turbidity, flushing, salinity, and circulation on the ecology of the Chesapeake Bay are discussed. The sources of fresh water, the variations in salinity, and the circulation patterns created by temperature and salinity changes are analyzed. The application of remote sensors for long term observation of water temperatures is described. The sources of sediment and the biological effects resulting from increased sediments and siltation are identified.
Numerical and experimental evaluations of the flow past nested chevrons
NASA Technical Reports Server (NTRS)
Foss, J. F.; Foss, J. K.; Spalart, P. R.
1989-01-01
An effort is made to contribute to the development of CFD by relating the successful use of vortex dynamics in the computation of the pressure drop past a planar array of chevron-shaped obstructions. An ensemble of results was used to compute the loss coefficient k, stimulating an experimental program for the assessment of the measured loss coefficient for the same geometry. The most provocative result of this study has been the representation of kinetic energy production in terms of vorticity source terms.
Effort-reward imbalance and its association with health among permanent and fixed-term workers
2010-01-01
Background In the past decade, the changing labor market seems to have rejected the traditional standards of employment and has begun to support a variety of non-standard forms of work in their place. The purpose of our study was to compare the degree of job stress, sources of job stress, and the association of high job stress with health among permanent and fixed-term workers. Methods Our study subjects were 709 male workers aged 30 to 49 years in a suburb of Tokyo, Japan. In 2008, we conducted a cross-sectional study to compare job stress using an effort-reward imbalance (ERI) model questionnaire. Lifestyles, subjective symptoms, and body mass index were also observed from the 2008 health check-up data. Results The rate of job stress of the high-risk group measured by the ERI questionnaire was not different between permanent and fixed-term workers. However, the content of the ERI components differed. Permanent workers were distressed more by effort, overwork, or job demand, while fixed-term workers were distressed more by their job insecurity. Moreover, higher ERI was associated with the existence of subjective symptoms (OR = 2.07, 95% CI: 1.42-3.03) and obesity (OR = 2.84, 95% CI: 1.78-4.53) in fixed-term workers, while this tendency was not found in permanent workers. Conclusions Our study showed that workers with different employment types, permanent and fixed-term, have dissimilar sources of job stress even though their degree of job stress seems to be the same. High ERI was associated with existing subjective symptoms and obesity in fixed-term workers. Therefore, understanding the different sources of job stress and their association with health among permanent and fixed-term workers should be considered to prevent further health problems. PMID:21054838
Hynds, Paul D; Misstear, Bruce D; Gill, Laurence W
2013-09-30
While the safety of public drinking water supplies in the Republic of Ireland is governed and monitored at both local and national levels, there are currently no legislative tools in place relating to private supplies. It is therefore paramount that private well owners (and users) be aware of source specifications and potential contamination risks, to ensure adequate water quality. The objective of this study was to investigate the level of awareness among private well owners in the Republic of Ireland relating to source characterisation and groundwater contamination issues. This was undertaken through interviews with 245 private well owners. Statistical analysis indicates that respondents' source type significantly influences owner awareness, particularly regarding well construction and design parameters. Water treatment, source maintenance and regular water quality testing are considered the three primary "protective actions" (or "stewardship activities") against consumption of contaminated groundwater and were reported as being absent in 64%, 72% and 40% of cases, respectively. Results indicate that the level of awareness exhibited by well users did not significantly affect the likelihood of their source being contaminated (source susceptibility); increased awareness on behalf of well users was associated with increased levels of protective action, particularly among borehole owners. Hence, lower levels of awareness may result in increased contraction of waterborne illnesses where contaminants have entered the well. Accordingly, focused educational strategies to increase awareness among private groundwater users are advocated in the short term; the development and introduction of formal legislation is recommended in the long term, including an integrated programme of well inspections and risk assessments. Copyright © 2013 Elsevier Ltd. All rights reserved.
Greenberg, Jacob A.; Lujan, Daniel A.; DiMenna, Mark A.; Wearing, Helen J.; Hofkin, Bruce V.
2013-01-01
Culex quinquefasciatus Say (Diptera: Culicidae) and Aedes vexans Meigen are two of the most abundant mosquitoes in Bernalillo County, New Mexico, USA. In this study, a polymerase chain reaction based methodology was used to identify the sources of blood meals taken by these two species. Ae. vexans was found to take a large proportion of its meals from mammals. Although less specific in terms of its blood meal preferences, Cx. quinquefasciatus was found to feed more commonly on birds. The results for Ae. vexans are similar to those reported for this species in other parts of their geographic range. Cx. quinquefasciatus appears to be more variable in terms of its host feeding under different environmental or seasonal circumstances. The implications of these results for arbovirus transmission are discussed. PMID:24224615
Numerical study of supersonic combustion using a finite rate chemistry model
NASA Technical Reports Server (NTRS)
Chitsomboon, T.; Tiwari, S. N.; Kumar, A.; Drummond, J. P.
1986-01-01
The governing equations of two-dimensional chemically reacting flows are presented together with a global two-step chemistry model for H2-air combustion. The explicit unsplit MacCormack finite difference algorithm is used to advance the discrete system of the governing equations in time until convergence is attained. The source terms in the species equations are evaluated implicitly to alleviate stiffness associated with fast reactions. With implicit source terms, the species equations give rise to a block-diagonal system which can be solved very efficiently on vector-processing computers. A supersonic reacting flow in an inlet-combustor configuration is calculated for the case where H2 is injected into the flow from the side walls and the strut. Results of the calculation are compared against the results obtained by using a complete reaction model.
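The benefit of evaluating stiff source terms implicitly, as the abstract describes for the species equations, can be seen on a single-species decay model: linearizing the source at the new time level turns the update into a small solve per cell and removes the explicit stability limit. The rate constant and time step below are illustrative assumptions, not the H2-air chemistry of the paper.

```python
# Sketch of point-implicit source-term treatment for a stiff decay
# source dY/dt = -k*Y, compared with the explicit evaluation.

def explicit_step(y, k, dt):
    # source evaluated at the old time level; unstable when k*dt > 2
    return y + dt * (-k * y)

def point_implicit_step(y, k, dt):
    # source evaluated at the new time level:
    # y_new = y + dt * (-k * y_new)  ->  (1 + k*dt) * y_new = y
    return y / (1.0 + k * dt)

k, dt = 1.0e4, 1.0e-3                  # stiff case: k*dt = 10
y_exp = explicit_step(1.0, k, dt)      # overshoots to a negative value
y_imp = point_implicit_step(1.0, k, dt)  # stays positive and bounded
```

With multiple species the implicit source Jacobian couples only the unknowns within each cell, which yields the block-diagonal system the abstract notes is efficient on vector-processing computers.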
Source terms, shielding calculations and soil activation for a medical cyclotron.
Konheiser, J; Naumann, B; Ferrari, A; Brachem, C; Müller, S E
2016-12-01
Calculations of the shielding and estimates of soil activation for a medical cyclotron are presented in this work. Based on the neutron source term from the ¹⁸O(p,n)¹⁸F reaction produced by a 28 MeV proton beam, neutron and gamma dose rates outside the building were estimated with the Monte Carlo code MCNP6 (Goorley et al 2012 Nucl. Technol. 180 298-315). The neutron source term was calculated with the MCNP6 code and the FLUKA (Ferrari et al 2005 INFN/TC_05/11, SLAC-R-773) code, as well as with data supplied by the manufacturer. MCNP and FLUKA calculations yielded comparable results, while the neutron yield obtained using the manufacturer-supplied information is about a factor of 5 smaller. The difference is attributed to missing channels in the manufacturer-supplied neutron source term, which considers only the ¹⁸O(p,n)¹⁸F reaction, whereas the MCNP and FLUKA calculations include additional neutron reaction channels. Soil activation was calculated using the FLUKA code. The estimated dose rate based on MCNP6 calculations in the public area is about 0.035 µSv h⁻¹ and thus significantly below the reference value of 0.5 µSv h⁻¹ (2011 Strahlenschutzverordnung, 9. Auflage vom 01.11.2011, Bundesanzeiger Verlag). After 5 years of continuous beam operation and a subsequent decay time of 30 d, the activity concentration of the soil is about 0.34 Bq g⁻¹.
Di Benedetto, Cristiano; Barbaglio, Alice; Martinello, Tiziana; Alongi, Valentina; Fassini, Dario; Cullorà, Emanuele; Patruno, Marco; Bonasoro, Francesco; Barbosa, Mario Adolfo; Candia Carnevali, Maria Daniela; Sugni, Michela
2014-01-01
Collagen has become a key molecule in cell culture studies and in the tissue engineering field. Industrially, the principal sources of collagen are calf skin and bones, which, however, could be associated with risks of serious disease transmission. Collagen derived from alternative, risk-free sources is therefore required, and marine organisms are among the safest and most recently exploited ones. Sea urchins possess a circular area of soft tissue surrounding the mouth, the peristomial membrane (PM), mainly composed of mammalian-like collagen. The PM of the edible sea urchin Paracentrotus lividus therefore represents a potential unexploited collagen source, easily obtainable as a food industry waste product. Our results demonstrate that it is possible to extract native collagen fibrils from the PM and produce suitable substrates for in vitro systems. The obtained matrices appear as a homogeneous fibrillar network (mean fibril diameter 30–400 nm and mesh < 2 μm) and display remarkable mechanical properties in terms of stiffness (146 ± 48 MPa) and viscosity (60.98 ± 52.07 GPa·s). In vitro tests with horse pbMSC show good biocompatibility in terms of overall cell growth. The obtained results indicate that the sea urchin P. lividus can be a valuable low-cost collagen source for mechanically resistant biomedical devices. PMID:25255130
The role of long-term familiarity and attentional maintenance in short-term memory for timbre.
Siedenburg, Kai; McAdams, Stephen
2017-04-01
We study short-term recognition of timbre using familiar recorded tones from acoustic instruments and unfamiliar transformed tones that do not readily evoke sound-source categories. Participants indicated whether the timbre of a probe sound matched one of three previously presented sounds (item recognition). In Exp. 1, musicians recognised familiar acoustic sounds better than unfamiliar synthetic ones, and this advantage was particularly large in the medial serial position. There was a strong correlation between the correct rejection rate and the mean perceptual dissimilarity of the probe to the tones from the sequence. Exp. 2 compared musicians' and non-musicians' performance with concurrent articulatory suppression, visual interference, and a silent control condition. Both suppression tasks disrupted performance by a similar margin, regardless of the musical training of participants or the type of sounds. Our results suggest that familiarity with sound-source categories and attention play important roles in short-term memory for timbre, which rules out accounts based solely on sensory persistence.
Li, Chunhui; Sun, Lian; Jia, Junxiang; Cai, Yanpeng; Wang, Xuan
2016-07-01
Source water areas are facing many potential water pollution risks. Risk assessment is an effective method to evaluate such risks. In this paper, an integrated model based on k-means clustering analysis and set pair analysis was established for evaluating the risks associated with water pollution in source water areas, in which the weights of indicators were determined through the entropy weight method. The proposed model was then applied to assess water pollution risks in the region of Shiyan, in which China's key source water area, the Danjiangkou Reservoir, the water source for the middle route of the South-to-North Water Diversion Project, is located. The results showed that eleven sources with relatively high risk values were identified. At the regional scale, Shiyan City and Danjiangkou City would have a high risk value in terms of industrial discharge. Comparatively, Danjiangkou City and Yunxian County would have a high risk value in terms of agricultural pollution. Overall, the risk values of the northern regions close to the main stream and reservoir of the region of Shiyan were higher than those in the south. The risk-level results indicated that five sources were at a lower risk level (i.e., level II), two at a moderate risk level (i.e., level III), one at a higher risk level (i.e., level IV) and three at the highest risk level (i.e., level V). Also, the risks of industrial discharge are higher than those of the agricultural sector. It is thus essential to manage the pillar industry of the region of Shiyan and certain agricultural companies in the vicinity of the reservoir to reduce the water pollution risks of source water areas. Copyright © 2016 Elsevier B.V. All rights reserved.
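The entropy weight method used to set the indicator weights can be sketched as follows: indicators whose values vary more across the assessed sources carry more information (lower entropy) and receive larger weights. The 3-source, 2-indicator matrix below is a made-up illustration, not the study's data.

```python
import math

def entropy_weights(matrix):
    """Entropy weight method: rows are sources, columns are indicators."""
    n = len(matrix)                          # number of sources
    m = len(matrix[0])                       # number of indicators
    weights = []
    for j in range(m):
        col = [row[j] for row in matrix]
        total = sum(col)
        p = [x / total for x in col]         # normalized column shares
        # entropy normalized to [0, 1] by dividing by log(n)
        e = -sum(pi * math.log(pi) for pi in p if pi > 0) / math.log(n)
        weights.append(1.0 - e)              # divergence = 1 - entropy
    s = sum(weights)
    return [w / s for w in weights]          # normalize weights to sum 1

# Indicator 1 is uniform across sources (no information); indicator 2 varies.
w = entropy_weights([[1.0, 0.1], [1.0, 0.5], [1.0, 0.4]])
```

A uniform indicator reaches maximum entropy and gets (near-)zero weight, so after normalization the varying indicator carries essentially all the weight, which is the behavior the method is designed to produce.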
Household food waste separation behavior and the importance of convenience.
Bernstad, Anna
2014-07-01
Two different strategies aiming at increasing household source-separation of food waste were assessed through a case study in a Swedish residential area: (a) use of written information, distributed as leaflets amongst households, and (b) installation of equipment for source-segregation of waste with the aim of increasing the convenience of food waste sorting in kitchens. Weighings of separately collected food waste before and after distribution of the written information suggest that it resulted in neither a significantly increased amount of separately collected food waste nor an increased source-separation ratio. After installation of sorting equipment in households, both the amount of separately collected food waste and the source-separation ratio increased vastly. Long-term monitoring shows that the results were long-lasting. The results emphasize the importance of convenience and the existence of the infrastructure necessary for source-segregation of waste as important factors for household waste recycling, but also highlight the need to address these aspects where waste is generated, i.e., already inside the household. Copyright © 2014 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Ozawa, T.; Miyagi, Y.
2017-12-01
Shinmoe-dake, located in SW Japan, erupted in January 2011 and lava accumulated in the crater (e.g., Ozawa and Kozono, EPS, 2013). The last Vulcanian eruption occurred in September 2011, and no eruption has occurred since. Miyagi et al. (GRL, 2014) analyzed TerraSAR-X and Radarsat-2 SAR data acquired after the last eruption and found continuous inflation in the crater. The inflation decayed with time but had not terminated by May 2013. Since the time series of the inflation volume change rate fitted well to an exponential function with a constant term, we suggested that lava extrusion had continued over the long term due to deflation of a shallow magma source and to magma supply from a deeper source. To investigate the deformation after that period, we applied InSAR to Sentinel-1 and ALOS-2 SAR data. The inflation decayed further and had almost terminated by the end of 2016, meaning that this deformation continued for more than five years after the last eruption. We have found that the time series of the inflation volume change rate fits better to a double-exponential function than to a single-exponential function with a constant term. The exponential component with the short time constant had almost settled within one year of the last eruption. Although an InSAR result from TerraSAR-X data of November 2011 and May 2013 indicated deflation of a shallow source under the crater, such deformation has not been obtained from recent SAR data. This suggests that this component was due to deflation of a shallow magma source with excess pressure. In this study, we found that the long-term component may also have decayed exponentially; this factor may be deflation of a deep source or delayed vesiculation.
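The model comparison described above, a single exponential plus a constant versus a double exponential, can be sketched numerically: when the underlying signal really contains two decay processes, the double-exponential form fits it exactly while the single-exponential-plus-constant form leaves a residual. The time constants and amplitudes below are illustrative, not fitted values for Shinmoe-dake.

```python
import math

def single_exp_const(t, a, tau, c):
    """Single exponential with a constant term."""
    return a * math.exp(-t / tau) + c

def double_exp(t, a1, tau1, a2, tau2):
    """Sum of a fast and a slow exponential component."""
    return a1 * math.exp(-t / tau1) + a2 * math.exp(-t / tau2)

# Synthetic "observed" rates from a fast (0.3 yr) and slow (2 yr) process.
times = [0.25 * i for i in range(21)]       # 0 to 5 years
obs = [double_exp(t, 1.0, 0.3, 0.4, 2.0) for t in times]

def sse(model_vals):
    """Sum of squared errors against the observations."""
    return sum((m - o) ** 2 for m, o in zip(model_vals, obs))

fit_double = sse([double_exp(t, 1.0, 0.3, 0.4, 2.0) for t in times])
fit_single = sse([single_exp_const(t, 1.0, 0.3, 0.4) for t in times])
```

The single-exponential-plus-constant model never decays to zero, whereas a slow second exponential does, so long records like the five-year series above are what discriminate between the two forms.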
Economic dispatch optimization for system integrating renewable energy sources
NASA Astrophysics Data System (ADS)
Jihane, Kartite; Mohamed, Cherkaoui
2018-05-01
Nowadays, the use of energy is growing, especially in the transportation and electricity industries. However, this energy is based on conventional sources which pollute the environment. Multi-source systems are seen as the best solution for sustainable development. This paper proposes the Economic Dispatch (ED) of a hybrid renewable power system. The hybrid system is composed of ten thermal generators, a photovoltaic (PV) generator, and a wind turbine generator. To show the importance of renewable energy sources (RES) in the energy mix, we ran the simulation for a system integrating PV only and PV plus wind. The results show that the system with renewable energy sources (RES) is more promising than the system without RES in terms of fuel cost.
Degenerative meniscus: Pathogenesis, diagnosis, and treatment options
Howell, Richard; Kumar, Neil S; Patel, Nimit; Tom, James
2014-01-01
The symptomatic degenerative meniscus continues to be a source of discomfort for a significant number of patients. With vascular penetration of less than one-third of the adult meniscus, healing potential in the setting of chronic degeneration remains low. Continued hoop and shear stresses upon the degenerative meniscus result in gross failure, often in the form of complex tears in the posterior horn and midbody. Patient history and physical examination are critical to determine the true source of pain, particularly with the significant incidence of simultaneous articular pathology. Joint line tenderness, a positive McMurray test, and mechanical catching or locking can be highly suggestive of a meniscal source of knee pain and dysfunction. Radiographs and magnetic resonance imaging are frequently utilized to examine for osteoarthritis and to verify the presence of meniscal tears, in addition to ruling out other sources of pain. Non-operative therapy focused on non-steroidal anti-inflammatory drugs and physical therapy may be able to provide pain relief as well as improve mechanical function of the knee joint. For patients refractory to conservative therapy, arthroscopic partial meniscectomy can provide short-term gains regarding pain relief, especially when combined with an effective, regular physiotherapy program. Patients with clear mechanical symptoms and meniscal pathology may benefit from arthroscopic partial meniscectomy, but surgery is not a guaranteed success, especially with concomitant articular pathology. Ultimately, the long-term outcomes of either treatment arm provide similar results for most patients. Further study is needed regarding the short and long-term outcomes of conservative and surgical therapy, with a particular focus on the economic impact of treatment as well. PMID:25405088
A Semi-implicit Treatment of Porous Media in Steady-State CFD.
Domaingo, Andreas; Langmayr, Daniel; Somogyi, Bence; Almbauer, Raimund
There are many situations in computational fluid dynamics which require the definition of source terms in the Navier-Stokes equations. These source terms not only allow to model the physics of interest but also have a strong impact on the reliability, stability, and convergence of the numerics involved. Therefore, sophisticated numerical approaches exist for the description of such source terms. In this paper, we focus on the source terms present in the Navier-Stokes or Euler equations due to porous media, in particular the Darcy-Forchheimer equation. We introduce a method for the numerical treatment of the source term which is independent of the spatial discretization and based on linearization. In this description, the source term is treated in a fully implicit way whereas the other flow variables can be computed in an implicit or explicit manner. This leads to a more robust description in comparison with a fully explicit approach. The method is well suited to be combined with coarse-grid-CFD on Cartesian grids, which makes it especially favorable for accelerated solution of coupled 1D-3D problems. To demonstrate the applicability and robustness of the proposed method, a proof-of-concept example in 1D, as well as more complex examples in 2D and 3D, is presented.
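A minimal sketch of the linearization idea, assuming a scalar Darcy-Forchheimer-type source S(u) = -(d + f|u|)u: freezing |u| at the old iterate makes the source linear in the new velocity, so a fully implicit treatment reduces to a scalar solve per cell. The coefficients and time step below are illustrative assumptions, not values from the paper.

```python
# Compare explicit and linearized-implicit treatment of a stiff
# Darcy-Forchheimer-type sink S(u) = -(d + f*|u|) * u.

def explicit_update(u, d, f, dt):
    # source evaluated at the old velocity; diverges for large dt*(d+f|u|)
    return u + dt * (-(d + f * abs(u)) * u)

def implicit_update(u, d, f, dt):
    # u_new = u + dt * (-(d + f*|u_old|) * u_new), with |u| frozen:
    # (1 + dt*(d + f*|u|)) * u_new = u
    return u / (1.0 + dt * (d + f * abs(u)))

u, d, f, dt = 10.0, 50.0, 2.0, 0.1          # stiff case: dt*(d + f*|u|) = 7
u_exp = explicit_update(u, d, f, dt)        # large, sign-flipping overshoot
u_imp = implicit_update(u, d, f, dt)        # damped, keeps the sign
```

Because the implicit update divides rather than subtracts, it can never overshoot through zero, which is the robustness advantage over the fully explicit approach that the abstract emphasizes.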
Development of a Hard X-ray Beam Position Monitor for Insertion Device Beams at the APS
NASA Astrophysics Data System (ADS)
Decker, Glenn; Rosenbaum, Gerd; Singh, Om
2006-11-01
Long-term pointing stability requirements at the Advanced Photon Source (APS) are very stringent, at the level of 500 nanoradians peak-to-peak or better over a one-week time frame. Conventional rf beam position monitors (BPMs) close to the insertion device source points are incapable of assuring this level of stability, owing to mechanical, thermal, and electronic stability limitations. Insertion device gap-dependent systematic errors associated with the present ultraviolet photon beam position monitors similarly limit their ability to control long-term pointing stability. We report on the development of a new BPM design sensitive only to hard x-rays. Early experimental results will be presented.
Rodriguez-Falces, Javier
2015-03-01
A concept of major importance in human electrophysiology studies is the process by which activation of an excitable cell results in a rapid rise and fall of the electrical membrane potential, the so-called action potential. Hodgkin and Huxley proposed a model to explain the ionic mechanisms underlying the formation of action potentials. However, this model is unsuitably complex for teaching purposes. In addition, the Hodgkin and Huxley approach describes the shape of the action potential only in terms of ionic currents, i.e., it is unable to explain the electrical significance of the action potential or describe the electrical field arising from this source using basic concepts of electromagnetic theory. The goal of the present report was to propose a new model to describe the electrical behaviour of the action potential in terms of elementary electrical sources (in particular, dipoles). The efficacy of this model was tested through a closed-book written exam. The proposed model increased the ability of students to appreciate the distributed character of the action potential and also to recognize that this source spreads out along the fiber as function of space. In addition, the new approach allowed students to realize that the amplitude and sign of the extracellular electrical potential arising from the action potential are determined by the spatial derivative of this intracellular source. The proposed model, which incorporates intuitive graphical representations, has improved students' understanding of the electrical potentials generated by bioelectrical sources and has heightened their interest in bioelectricity. Copyright © 2015 The American Physiological Society.
Geocoronal hydrogen studies using Fabry Perot interferometers, part 2: Long-term observations
NASA Astrophysics Data System (ADS)
Nossal, S. M.; Mierkiewicz, E. J.; Roesler, F. L.; Reynolds, R. J.; Haffner, L. M.
2006-09-01
Long-term data sets are required to investigate sources of natural variability in the upper atmosphere. Understanding the influence of sources of natural variability such as the solar cycle is needed to characterize the thermosphere + exosphere, to understand coupling processes between atmospheric regions, and to isolate signatures of natural variability from those due to human-caused change. Multi-year comparisons of thermospheric + exospheric Balmer α emissions require cross-calibrated and well-understood instrumentation, a stable calibration source, reproducible observing conditions, separation of the terrestrial from the Galactic emission line, and consistent data analysis accounting for differences in viewing geometry. We discuss how we address these criteria in the acquisition and analysis of a mid-latitude geocoronal Balmer α column emission data set now spanning two solar cycles and taken mainly from Wisconsin and Kitt Peak, Arizona. We also discuss results and outstanding challenges for increasing the accuracy and use of these observations.
NASA Astrophysics Data System (ADS)
Winiarek, Victor; Bocquet, Marc; Duhanyan, Nora; Roustan, Yelva; Saunier, Olivier; Mathieu, Anne
2014-01-01
Inverse modelling techniques can be used to estimate the amount of radionuclides and the temporal profile of the source term released into the atmosphere during the accident at the Fukushima Daiichi nuclear power plant in March 2011. In Winiarek et al. (2012b), the lower bounds of the caesium-137 and iodine-131 source terms were estimated with such techniques, using activity concentration measurements. The importance of an objective assessment of prior errors (the observation errors and the background errors) was emphasised for a reliable inversion. In such a critical context, where the meteorological conditions can make the source term partly unobservable and only a few observations are available, such prior estimation techniques are mandatory, as the retrieved source term is very sensitive to this estimation. We propose to extend the use of these techniques to the estimation of prior errors when assimilating observations from several data sets. The aim is to compute an estimate of the caesium-137 source term jointly using all available data about this radionuclide, such as activity concentrations in the air, but also daily fallout measurements and total cumulated fallout measurements. It is crucial to properly and simultaneously estimate the background errors and the prior errors relative to each data set. A proper estimation of prior errors is also a necessary condition to reliably estimate the a posteriori uncertainty of the estimated source term. Using such techniques, we retrieve a total released quantity of caesium-137 in the interval 11.6-19.3 PBq with an estimated standard deviation range of 15-20% depending on the method and the data sets. The "blind" time intervals of the source term have also been substantially reduced compared to the first estimations based only on activity concentration data.
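The joint estimation described above rests on a Bayesian inversion whose prior-error amplitudes are tuned by maximum likelihood. The following is a minimal Gaussian sketch of that mechanics; the source-receptor matrix, release profile, and hyper-parameter grids are invented stand-ins, not the Fukushima configuration (and the positivity constraint of the actual semi-Gaussian method is omitted):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: 24 hourly release rates, 40 activity observations.
n_src, n_obs = 24, 40
H = rng.exponential(1.0, (n_obs, n_src))       # source-receptor (dispersion) matrix
x_true = np.zeros(n_src)
x_true[8:14] = 5.0                             # true release profile (total = 30)
y = H @ x_true + rng.normal(0.0, 0.5, n_obs)   # synthetic observations

def neg_log_likelihood(r, b):
    """Marginal likelihood of y given obs-error variance r, background variance b."""
    S = b * (H @ H.T) + r * np.eye(n_obs)      # innovation covariance
    _, logdet = np.linalg.slogdet(S)
    return 0.5 * (logdet + y @ np.linalg.solve(S, y))

def retrieve(r, b):
    """Gaussian (BLUE) estimate of the source given the error variances."""
    P = np.linalg.inv(H.T @ H / r + np.eye(n_src) / b)
    return P @ (H.T @ y) / r

# Maximum-likelihood selection of the two prior-error amplitudes by grid search.
grid = [(r, b) for r in (0.1, 0.25, 0.5, 1.0) for b in (1.0, 5.0, 10.0, 25.0)]
r_ml, b_ml = min(grid, key=lambda rb: neg_log_likelihood(*rb))
x_hat = retrieve(r_ml, b_ml)
print(f"retrieved total release: {x_hat.sum():.1f} (true total: 30.0)")
```

In the work above, several data sets (air concentrations, daily fallout, cumulated deposition) each carry their own error variance, so the grid search becomes a joint optimization over one amplitude per data set.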
NASA Technical Reports Server (NTRS)
Cogley, A. C.
1975-01-01
A Green's function formulation is used to derive basic reciprocity relations for planar radiative transfer in a general medium with internal illumination. Reciprocity (or functional symmetry) allows an explicit and generalized development of the equivalence between source and probability functions. Assuming similar symmetry in three-dimensional space, a general relationship is derived between planar-source intensity and point-source total directional energy. These quantities are expressed in terms of standard (universal) functions associated with the planar medium, while all results are derived from the differential equation of radiative transfer.
Expectancy Disconfirmation and Attitude Change.
McPeek, Robert W; Edwards, John D
1975-08-01
An experiment was conducted testing the hypothesis that sources delivering unexpected communications (long-haired males arguing against marijuana usage and seminarians arguing in its favor) would be more persuasive than communicators of expected messages (promarijuana hippies and antimarijuana seminarians). Greater attitude change for unexpected sources was found only when the message was antimarijuana. Unexpected communicators also were rated as more sincere and honest than expected sources. Possible reasons for the failure of the expectancy effect to hold for promarijuana communications were suggested, and the results were discussed in terms of a variety of social-psychological theories.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Usang, M. D., E-mail: mark-dennis@nuclearmalaysia.gov.my; Hamzah, N. S., E-mail: mark-dennis@nuclearmalaysia.gov.my; Abi, M. J. B., E-mail: mark-dennis@nuclearmalaysia.gov.my
ORIGEN 2.2 is employed to obtain data on the γ source term and the radioactivity of irradiated TRIGA fuel. The fuel composition is specified in grams for use as input data. Three types of fuel are irradiated in the reactor, each differing from the others in the amount of uranium relative to the total weight. Each fuel is irradiated for 365 days with a 50-day time step. We obtain results on the total radioactivity of the fuel, the composition of activated materials, the composition of fission products, and the photon spectrum of the burned fuel. We investigate the differences in results using the BWR and PWR libraries for ORIGEN. Finally, we compare the composition of major nuclides after 1 year of irradiation for both ORIGEN libraries with results from WIMS. We found only minor disagreements between the yields of the PWR and BWR libraries. In comparison with WIMS, the errors are somewhat more pronounced. To overcome these errors, the irradiation power used in ORIGEN could be increased slightly, so that the differences in the yields of ORIGEN and WIMS could be reduced. A more permanent solution is to use a different code altogether to simulate burnup, such as DRAGON or ORIGEN-S. The results of this study are essential for the design of radiation shielding for the fuel.
NASA Astrophysics Data System (ADS)
Koliopanos, Filippos; Vasilopoulos, Georgios
2018-06-01
Aims: We study the temporal and spectral characteristics of SMC X-3 during its recent (2016) outburst to probe accretion onto highly magnetized neutron stars (NSs) at the Eddington limit. Methods: We obtained XMM-Newton observations of SMC X-3 and combined them with long-term observations by Swift. We performed a detailed analysis of the temporal and spectral behavior of the source, as well as its short- and long-term evolution. We have also constructed a simple toy-model (based on robust theoretical predictions) in order to gain insight into the complex emission pattern of SMC X-3. Results: We confirm the pulse period of the system that has been derived by previous works and note that the pulse has a complex three-peak shape. We find that the pulsed emission is dominated by hard photons, while at energies below 1 keV, the emission does not pulsate. We furthermore find that the shape of the pulse profile and the short- and long-term evolution of the source light-curve can be explained by invoking a combination of a "fan" and a "polar" beam. The results of our temporal study are supported by our spectroscopic analysis, which reveals a two-component emission, comprised of a hard power law and a soft thermal component. We find that the latter produces the bulk of the non-pulsating emission and is most likely the result of reprocessing the primary hard emission by optically thick material that partly obscures the central source. We also detect strong emission lines from highly ionized metals. The strength of the emission lines strongly depends on the phase. Conclusions: Our findings are in agreement with previous works. 
The energy and temporal evolution as well as the shape of the pulse profile and the long-term spectral evolution of the source are consistent with the expected emission pattern of the accretion column in the super-critical regime, while the large reprocessing region is consistent with the analysis of previously studied X-ray pulsars observed at high accretion rates. This reprocessing region is also consistent with recent theoretical and observational work suggesting that highly magnetized NSs occupy a considerable fraction of ultraluminous X-ray sources.
NASA Astrophysics Data System (ADS)
Bartlett, Rachel E.; Bollasina, Massimo A.; Booth, Ben B. B.; Dunstone, Nick J.; Marenco, Franco; Messori, Gabriele; Bernie, Dan J.
2018-03-01
Anthropogenic aerosols could dominate over greenhouse gases in driving near-term hydroclimate change, especially in regions with high present-day aerosol loading such as Asia. Uncertainties in near-future aerosol emissions represent a potentially large, yet unexplored, source of ambiguity in climate projections for the coming decades. We investigated the near-term sensitivity of the Asian summer monsoon to aerosols by means of transient modelling experiments using HadGEM2-ES under two existing climate change mitigation scenarios selected to have similar greenhouse gas forcing, but to span a wide range of plausible global sulfur dioxide emissions. Increased sulfate aerosols, predominantly from East Asian sources, lead to large regional dimming through aerosol-radiation and aerosol-cloud interactions. This results in surface cooling and anomalous anticyclonic flow over land, while abating the western Pacific subtropical high. The East Asian monsoon circulation weakens and precipitation stagnates over Indochina, resembling the observed southern-flood-northern-drought pattern over China. Large-scale circulation adjustments drive suppression of the South Asian monsoon and a westward extension of the Maritime Continent convective region. Remote impacts across the Northern Hemisphere are also generated, including a northwestward shift of West African monsoon rainfall induced by the westward displacement of the Indian Ocean Walker cell, and temperature anomalies in northern midlatitudes linked to propagation of Rossby waves from East Asia. These results indicate that aerosol emissions are a key source of uncertainty in near-term projection of regional and global climate; a careful examination of the uncertainties associated with aerosol pathways in future climate assessments must be highly prioritised.
Bernsmann, K; Rosenthal, A; Sati, M; Ansari, B; Wiese, M
2001-01-01
The anterior cruciate ligament (ACL) is of great importance for knee joint function. In the case of a complete ligament injury there is hardly any chance of complete recovery. The clear advantages of an operative reconstruction replacing the ACL have been shown in many trials. The accurate placement of the graft's insertions has a significant effect on the mid- and probably long-term outcome of this procedure. Reviewing the literature, there are poor long-term results of ACL replacement in 5 to 52% of all cases, depending on the score system. One of the main reasons for unacceptable results is graft misplacement. This led to the construction of a CAS system for ACL replacement. The system assists this surgical procedure by navigating the exact position of the drilling holes. The potential deformation of the transplant can be controlled by this system in real time. Forty computer-assisted ACL replacements have been performed under active use of the CAS system. The short-term results are encouraging, and no special complications have been seen so far. Prospective long-term follow-up studies are ongoing. ACL reconstruction by manual devices has many sources of error. The CAS system is able to give the surgeon reasonable views that are unachievable by conventional surgery. The surgeon is therefore able to control a source of error and to optimise the results. The feasibility of this device in clinical routine use has been proven.
NASA Astrophysics Data System (ADS)
Winiarek, Victor; Bocquet, Marc; Duhanyan, Nora; Roustan, Yelva; Saunier, Olivier; Mathieu, Anne
2013-04-01
A major difficulty when inverting the source term of an atmospheric tracer dispersion problem is the estimation of the prior errors: those of the atmospheric transport model, those ascribed to the representativeness of the measurements, the instrumental errors, and those attached to the prior knowledge on the variables one seeks to retrieve. In the case of an accidental release of pollutant, and especially in a situation of sparse observability, the reconstructed source is sensitive to these assumptions. This sensitivity makes the quality of the retrieval dependent on the methods used to model and estimate the prior errors of the inverse modeling scheme. In Winiarek et al. (2012), we proposed to use an estimation method for the errors' amplitude based on the maximum likelihood principle. Under semi-Gaussian assumptions, it takes into account, without approximation, the positivity assumption on the source. We applied the method to the estimation of the Fukushima Daiichi cesium-137 and iodine-131 source terms using activity concentrations in the air. The results were compared to an L-curve estimation technique and to Desroziers's scheme. In addition to the estimates of released activities, we provided the related uncertainties (12 PBq with a std. of 15-20% for cesium-137 and 190-380 PBq with a std. of 5-10% for iodine-131). We also showed that, because of the low number of available observations (a few hundred), the reconstructed activities depended significantly on the method used to estimate the prior errors, even though the orders of magnitude were consistent. In order to use more data, we propose to extend the methods to the use of several data types, such as activity concentrations in the air and fallout measurements. The idea is to simultaneously estimate the prior errors related to each dataset, in order to fully exploit the information content of each one.
Using the activity concentration measurements, but also daily fallout data from prefectures and cumulated deposition data over a region lying approximately 150 km around the nuclear power plant, we can use a few thousand data points in our inverse modeling algorithm to reconstruct the cesium-137 source term. To improve the parameterization of removal processes, rainfall fields have also been corrected using outputs from the mesoscale meteorological model WRF and ground-station rainfall data. As expected, the different methods yield closer results as the number of data increases. Reference: Winiarek, V., M. Bocquet, O. Saunier, A. Mathieu (2012), Estimation of errors in the inverse modeling of accidental release of atmospheric pollutant: Application to the reconstruction of the cesium-137 and iodine-131 source terms from the Fukushima Daiichi power plant, J. Geophys. Res., 117, D05122, doi:10.1029/2011JD016932.
NASA Technical Reports Server (NTRS)
Yee, Helen M. C.; Kotov, D. V.; Wang, Wei; Shu, Chi-Wang
2013-01-01
The goal of this paper is to relate the numerical dissipation inherent in high-order shock-capturing schemes to the onset of wrong propagation speeds of discontinuities. For pointwise evaluation of the source term, previous studies indicated that the phenomenon of wrong propagation speed of discontinuities is connected with the smearing of the discontinuity caused by the discretization of the advection term. The smearing introduces a nonequilibrium state into the calculation. Thus as soon as a nonequilibrium value is introduced in this manner, the source term turns on and immediately restores equilibrium, while at the same time shifting the discontinuity to a cell boundary. The present study shows that the degree of wrong propagation speed of discontinuities is highly dependent on the accuracy of the numerical method. The manner in which the smearing of discontinuities is contained by the numerical method and the overall amount of numerical dissipation being employed play major roles. Moreover, employing finite time steps and grid spacings that are below the standard Courant-Friedrichs-Lewy (CFL) limit on shock-capturing methods for compressible Euler and Navier-Stokes equations containing stiff reacting source terms and discontinuities reveals surprising counter-intuitive results. Unlike non-reacting flows, for stiff reactions with discontinuities, employing a time step and grid spacing that are below the CFL limit (based on the homogeneous or non-reacting part of the governing equations) does not guarantee a correct solution of the chosen governing equations. Instead, depending on the numerical method, time step and grid spacing, the numerical simulation may lead to (a) the correct solution (within the truncation error of the scheme), (b) a divergent solution, (c) a solution with the wrong propagation speed of discontinuities, or (d) other spurious solutions that are solutions of the discretized counterparts but are not solutions of the governing equations.
The present investigation for three very different stiff system cases confirms some of the findings of Lafon & Yee (1996) and LeVeque & Yee (1990) for a model scalar PDE. The findings might shed some light on the reported difficulties in numerical combustion and problems with stiff nonlinear (homogeneous) source terms and discontinuities in general.
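The scalar model problem of LeVeque & Yee (1990) cited above reproduces the wrong-speed phenomenon in a few lines. The sketch below is illustrative: the grid size, CFL number, initial front location, and stiffness values are arbitrary choices, not those of the paper, and the source step is sub-stepped purely to keep the split ODE solve stable:

```python
import numpy as np

def stiff_front_position(mu, cfl=0.8, nx=200, t_end=0.3):
    """Model problem u_t + u_x = -mu*u*(u-1)*(u-0.5) on [0,1].
    First-order upwind advection plus a pointwise (operator-split) source;
    returns the numerical front location (where u first drops below 0.5)."""
    dx, dt = 1.0 / nx, cfl / nx
    x = (np.arange(nx) + 0.5) * dx
    u = np.where(x < 0.25, 1.0, 0.0)        # step data; exact front speed = 1
    n_sub = max(1, int(10 * mu * dt))       # keep the explicit source step stable
    for _ in range(round(t_end / dt)):
        u[1:] -= cfl * (u[1:] - u[:-1])     # upwind advection (inflow u[0] = 1)
        for _ in range(n_sub):              # sub-stepped stiff source update
            u -= (dt / n_sub) * mu * u * (u - 1.0) * (u - 0.5)
    return x[np.argmax(u < 0.5)]

# Exact front position at t = 0.3 is 0.25 + 0.3 = 0.55.
print("mild source  (mu = 1e0):", stiff_front_position(1.0))
print("stiff source (mu = 1e4):", stiff_front_position(1e4))
```

With the stiff source the smeared intermediate values are pushed back to the equilibria 0 or 1 each step, so the front locks to the grid and advances one cell per time step (spurious speed Δx/Δt = 1/CFL ≈ 1.25 here) even though every split step is stable and accurate.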
NASA Astrophysics Data System (ADS)
Parazoo, Nicholas C.; Koven, Charles D.; Lawrence, David M.; Romanovsky, Vladimir; Miller, Charles E.
2018-01-01
Thaw and release of permafrost carbon (C) due to climate change is likely to offset increased vegetation C uptake in northern high-latitude (NHL) terrestrial ecosystems. Models project that this permafrost C feedback may act as a slow leak, in which case detection and attribution of the feedback may be difficult. The formation of talik, a subsurface layer of perennially thawed soil, can accelerate permafrost degradation and soil respiration, ultimately shifting the C balance of permafrost-affected ecosystems from long-term C sinks to long-term C sources. It is imperative to understand and characterize mechanistic links between talik, permafrost thaw, and respiration of deep soil C to detect and quantify the permafrost C feedback. Here, we use the Community Land Model (CLM) version 4.5, a permafrost and biogeochemistry model, in comparison to long-term deep borehole data along North American and Siberian transects, to investigate thaw-driven C sources in NHL ( > 55° N) from 2000 to 2300. Widespread talik at depth is projected across most of the NHL permafrost region (14 million km2) by 2300, 6.2 million km2 of which is projected to become a long-term C source, emitting 10 Pg C by 2100, 50 Pg C by 2200, and 120 Pg C by 2300, with few signs of slowing. Roughly half of the projected C source region is in predominantly warm sub-Arctic permafrost following talik onset. This region emits only 20 Pg C by 2300, but the CLM4.5 estimate may be biased low by not accounting for deep C in yedoma. Accelerated decomposition of deep soil C following talik onset shifts the ecosystem C balance away from surface dominant processes (photosynthesis and litter respiration), but sink-to-source transition dates are delayed by 20-200 years by high ecosystem productivity, such that talik peaks early ( ˜ 2050s, although borehole data suggest sooner) and C source transition peaks late ( ˜ 2150-2200). 
The remaining C source region in cold northern Arctic permafrost, which shifts to a net source early (late 21st century), emits 5 times more C (95 Pg C) by 2300, and prior to talik formation due to the high decomposition rates of shallow, young C in organic-rich soils coupled with low productivity. Our results provide important clues signaling imminent talik onset and C source transition, including (1) late cold-season (January-February) soil warming at depth ( ˜ 2 m), (2) increasing cold-season emissions (November-April), and (3) enhanced respiration of deep, old C in warm permafrost and young, shallow C in organic-rich cold permafrost soils. Our results suggest a mosaic of processes that govern carbon source-to-sink transitions at high latitudes and emphasize the urgency of monitoring soil thermal profiles, organic C age and content, cold-season CO2 emissions, and atmospheric 14CO2 as key indicators of the permafrost C feedback.
Long Term 2 Second Round Source Water Monitoring and Bin Placement Memo
The Long Term 2 Enhanced Surface Water Treatment Rule (LT2ESWTR) applies to all public water systems served by a surface water source or public water systems served by a ground water source under the direct influence of surface water.
Uncertainty, variability, and earthquake physics in ground‐motion prediction equations
Baltay, Annemarie S.; Hanks, Thomas C.; Abrahamson, Norm A.
2017-01-01
Residuals between ground‐motion data and ground‐motion prediction equations (GMPEs) can be decomposed into terms representing earthquake source, path, and site effects. These terms can be cast in terms of repeatable (epistemic) residuals and the random (aleatory) components. Identifying the repeatable residuals leads to a GMPE with reduced uncertainty for a specific source, site, or path location, which in turn can yield a lower hazard level at small probabilities of exceedance. We illustrate a schematic framework for this residual partitioning with a dataset from the ANZA network, which straddles the central San Jacinto fault in southern California. The dataset consists of more than 3200 1.15≤M≤3 earthquakes and their peak ground accelerations (PGAs), recorded at close distances (R≤20 km). We construct a small‐magnitude GMPE for these PGA data, incorporating VS30 site conditions and geometrical spreading. Identification and removal of the repeatable source, path, and site terms yield an overall reduction in the standard deviation from 0.97 (in ln units) to 0.44, for a nonergodic assumption, that is, for a single‐source location, single site, and single path. We give examples of relationships between independent seismological observables and the repeatable terms. We find a correlation between location‐based source terms and stress drops in the San Jacinto fault zone region; an explanation of the site term as a function of kappa, the near‐site attenuation parameter; and a suggestion that the path component can be related directly to elastic structure. These correlations allow the repeatable source location, site, and path terms to be determined a priori using independent geophysical relationships. Those terms could be incorporated into location‐specific GMPEs for more accurate and precise ground‐motion prediction.
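The residual partitioning described above can be sketched with synthetic data. The group sizes and variances below are invented stand-ins (the study uses a mixed-effects regression and also a path term grouped by source-site geometry, omitted here); the sketch only shows the variance-reduction mechanics of removing repeatable terms:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic residual dataset: n_rec records from n_eq events at n_st stations.
n_eq, n_st, n_rec = 60, 15, 3000
eq = rng.integers(0, n_eq, n_rec)             # event index of each record
st = rng.integers(0, n_st, n_rec)             # station index of each record
dB_eq = rng.normal(0.0, 0.5, n_eq)            # repeatable (epistemic) source terms
dB_st = rng.normal(0.0, 0.4, n_st)            # repeatable site terms
eps = rng.normal(0.0, 0.45, n_rec)            # remaining aleatory residual
resid = dB_eq[eq] + dB_st[st] + eps           # total GMPE residual (ln units)

# Partition by successive group means: remove the repeatable event term,
# then the repeatable site term, leaving the aleatory component.
event_term = np.array([resid[eq == i].mean() for i in range(n_eq)])
r1 = resid - event_term[eq]
site_term = np.array([r1[st == j].mean() for j in range(n_st)])
r2 = r1 - site_term[st]

print(f"total sigma   : {resid.std():.2f}")
print(f"residual sigma: {r2.std():.2f}")      # reduced, as in the nonergodic GMPE
```

The standard deviation drops from the combined value to roughly the aleatory level once the repeatable terms are identified, which is the mechanism behind the 0.97-to-0.44 reduction reported in the abstract.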
Interlaboratory study of the ion source memory effect in 36Cl accelerator mass spectrometry
NASA Astrophysics Data System (ADS)
Pavetich, Stefan; Akhmadaliev, Shavkat; Arnold, Maurice; Aumaître, Georges; Bourlès, Didier; Buchriegler, Josef; Golser, Robin; Keddadouche, Karim; Martschini, Martin; Merchel, Silke; Rugel, Georg; Steier, Peter
2014-06-01
Understanding and minimizing contamination in the ion source due to cross-contamination and the long-term memory effect is one of the key issues for accurate accelerator mass spectrometry (AMS) measurements of volatile elements. The focus of this work is on the investigation of the long-term memory effect for the volatile element chlorine, and the minimization of this effect in the ion source of the Dresden accelerator mass spectrometry facility (DREAMS). For this purpose, one of the two original HVE ion sources at the DREAMS facility was modified, allowing the use of larger sample holders having individual target apertures. Additionally, a more open geometry was used to improve the vacuum level. To evaluate this improvement in comparison to other up-to-date ion sources, an interlaboratory comparison was initiated. The long-term memory effect of the four Cs sputter ion sources at DREAMS (two sources: original and modified), ASTER (Accélérateur pour les Sciences de la Terre, Environnement, Risques) and VERA (Vienna Environmental Research Accelerator) was investigated by measuring samples of natural 35Cl/37Cl ratio and samples highly enriched in 35Cl (35Cl/37Cl ∼ 999). Besides investigating and comparing the individual levels of long-term memory, recovery time constants could be calculated. The tests show that all four sources suffer from long-term memory, but the modified DREAMS ion source showed the lowest level of contamination. The recovery times of the four ion sources were widely spread between 61 and 1390 s, where the modified DREAMS ion source, with values between 156 and 262 s, showed the fastest recovery in 80% of the measurements.
Fermi large area telescope second source catalog
Nolan, P. L.; Abdo, A. A.; Ackermann, M.; ...
2012-03-28
Here, we present the second catalog of high-energy γ-ray sources detected by the Large Area Telescope (LAT), the primary science instrument on the Fermi Gamma-ray Space Telescope (Fermi), derived from data taken during the first 24 months of the science phase of the mission, which began on 2008 August 4. Source detection is based on the average flux over the 24 month period. The second Fermi-LAT catalog (2FGL) includes source location regions, defined in terms of elliptical fits to the 95% confidence regions and spectral fits in terms of power-law, exponentially cutoff power-law, or log-normal forms. Also included are flux measurements in five energy bands and light curves on monthly intervals for each source. Twelve sources in the catalog are modeled as spatially extended. Furthermore, we provide a detailed comparison of the results from this catalog with those from the first Fermi-LAT catalog (1FGL). Although the diffuse Galactic and isotropic models used in the 2FGL analysis are improved compared to the 1FGL catalog, we attach caution flags to 162 of the sources to indicate possible confusion with residual imperfections in the diffuse model. Finally, the 2FGL catalog contains 1873 sources detected and characterized in the 100 MeV to 100 GeV range of which we consider 127 as being firmly identified and 1171 as being reliably associated with counterparts of known or likely γ-ray-producing source classes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barry, Kenneth
The Nuclear Energy Institute (NEI) Small Modular Reactor (SMR) Licensing Task Force (TF) has been evaluating licensing issues unique and important to iPWRs, ranking these issues, and developing NEI position papers for submittal to the U.S. Nuclear Regulatory Commission (NRC) during the past three years. Papers have been developed and submitted to the NRC in a range of areas including: Price-Anderson Act, NRC annual fees, security, modularity, and staffing. In December 2012, NEI completed a draft position paper on SMR source terms and participated in an NRC public meeting presenting a summary of this paper, which was subsequently submitted to the NRC. One important conclusion of the source term paper was the evaluation and selection of high importance areas where additional research would have a significant impact on source terms. The highest ranked research area was iPWR containment aerosol natural deposition. The NRC accepts the use of existing aerosol deposition correlations in Regulatory Guide 1.183, but these were developed for large light water reactor (LWR) containments. Application of these correlations to an iPWR design has resulted in greater than a ten-fold reduction of containment airborne aerosol inventory as compared to large LWRs. Development and experimental justification of containment aerosol natural deposition correlations specifically for the unique iPWR containments is expected to result in a large reduction of design basis and beyond-design-basis accident source terms with concomitantly smaller dose to workers and the public. Therefore, NRC acceptance of iPWR containment aerosol natural deposition correlations will directly support the industry's goal of reducing the Emergency Planning Zone (EPZ) for SMRs. Based on the results in this work, it is clear that thermophoresis is relatively unimportant for iPWRs. Gravitational settling is well understood, and may be the dominant process for a dry environment.
Diffusiophoresis and enhanced settling by particle growth are the dominant processes for determining DFs for expected conditions in an iPWR containment. These processes are dependent on the area-to-volume (A/V) ratio, which should benefit iPWR designs because these reactors have higher A/Vs compared to existing LWRs.
NASA Technical Reports Server (NTRS)
Yee, H. C.; Shinn, J. L.
1986-01-01
Some numerical aspects of finite-difference algorithms for nonlinear multidimensional hyperbolic conservation laws with stiff nonhomogeneous (source) terms are discussed. If the stiffness is entirely dominated by the source term, a semi-implicit shock-capturing method is proposed, provided that the Jacobian of the source terms possesses certain properties. The proposed semi-implicit method can be viewed as a variant of the Bussing and Murman point-implicit scheme with a more appropriate numerical dissipation for the computation of strong shock waves. However, if the stiffness is not solely dominated by the source terms, a fully implicit method would be a better choice. The situation is more complicated for problems in more than one dimension, and the presence of stiff source terms further complicates the solution procedures for alternating direction implicit (ADI) methods. Several alternatives are discussed. The primary motivation for constructing these schemes was to address thermally and chemically nonequilibrium flows in the hypersonic regime. Due to the unique structure of the eigenvalues and eigenvectors for fluid flows of this type, the computation can be simplified, thus providing a more efficient solution procedure than one might have anticipated.
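The point-implicit idea described above, treating only the stiff source implicitly through its Jacobian while the rest stays explicit, can be sketched on a scalar relaxation problem. The rate constant and time step below are illustrative values chosen to make the stiffness obvious, not taken from the report:

```python
# Point-implicit (semi-implicit) update for du/dt = L(u) + S(u) with stiff S:
# linearize the source about u^n and solve
#   (1 - dt * dS/du) * delta = dt * (L(u) + S(u)),   u^{n+1} = u^n + delta.
# Hypothetical scalar test with no transport, L(u) = 0, and S(u) = -k*(u - 1).

def point_implicit_step(u, dt, k):
    S = -k * (u - 1.0)        # stiff relaxation toward the equilibrium u = 1
    dSdu = -k                 # source Jacobian (a scalar here)
    return u + dt * S / (1.0 - dt * dSdu)

k, dt = 1.0e6, 1.0e-2         # k*dt = 1e4: explicit Euler would diverge here
u = 0.0
for _ in range(5):
    u = point_implicit_step(u, dt, k)
print(u)                      # relaxes stably to the equilibrium u = 1
```

For systems, the scalar divide becomes a linear solve with the matrix (I - Δt ∂S/∂u) in each cell, which is the cell-by-cell linearization underlying the Bussing and Murman point-implicit scheme mentioned in the abstract.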
Watershed nitrogen and phosphorus balance: The upper Potomac River basin
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jaworski, N.A.; Groffman, P.M.; Keller, A.A.
1992-01-01
Nitrogen and phosphorus mass balances were estimated for the portion of the Potomac River basin watershed located above Washington, D.C. The total nitrogen (N) balance included seven input source terms, six sinks, and one 'change-in-storage' term, but was simplified to five input terms and three output terms. The phosphorus (P) balance had four input and three output terms. The estimated balances are based on watershed data from seven information sources. Major sources of nitrogen are animal waste and atmospheric deposition. The major sources of phosphorus are animal waste and fertilizer. The major sink for nitrogen is combined denitrification, volatilization, and change-in-storage. The major sink for phosphorus is change-in-storage. River exports of N and P were 17% and 8%, respectively, of the total N and P inputs. Over 60% of the N and P were volatilized or stored. The major input and output terms in the budget are estimated from direct measurements, but the change-in-storage term is calculated by difference. The factors regulating retention and storage processes are discussed and research needs are identified.
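The budget closure described in the abstract, where the change-in-storage term is calculated by difference rather than measured, can be illustrated with a short arithmetic sketch. All numbers below are invented placeholders (chosen so the export fraction matches the 17% quoted for nitrogen), not the paper's values:

```python
# Hypothetical watershed nitrogen budget (units: kg N/ha/yr, invented values).
inputs = {
    "animal_waste": 40.0, "atmospheric_deposition": 25.0, "fertilizer": 20.0,
    "biological_fixation": 10.0, "point_sources": 5.0,
}
outputs = {
    "river_export": 17.0, "crop_harvest": 23.0,
}

# The change-in-storage term (with denitrification and volatilization lumped
# in) is not measured directly; it is computed by difference to close the budget.
total_in = sum(inputs.values())
total_out = sum(outputs.values())
storage_and_losses = total_in - total_out

print(f"river export fraction: {outputs['river_export'] / total_in:.0%}")
print(f"closed by difference : {storage_and_losses} kg N/ha/yr")
```

Computing the residual this way means every measurement error in the other terms accumulates in the change-in-storage estimate, which is why the abstract singles out retention and storage processes as a research need.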
Validation of a mixture-averaged thermal diffusion model for premixed lean hydrogen flames
NASA Astrophysics Data System (ADS)
Schlup, Jason; Blanquart, Guillaume
2018-03-01
The mixture-averaged thermal diffusion model originally proposed by Chapman and Cowling is validated using multiple flame configurations. Simulations using detailed hydrogen chemistry are done on one-, two-, and three-dimensional flames. The analysis spans flat and stretched, steady and unsteady, and laminar and turbulent flames. Quantitative and qualitative results using the thermal diffusion model compare very well with the more complex multicomponent diffusion model. Comparisons are made using flame speeds, surface areas, species profiles, and chemical source terms. Once validated, this model is applied to three-dimensional laminar and turbulent flames. For these cases, thermal diffusion causes an increase in the propagation speed of the flames as well as increased product chemical source terms in regions of high positive curvature. The results illustrate the necessity for including thermal diffusion, and the accuracy and computational efficiency of the mixture-averaged thermal diffusion model.
Computational study of radiation doses at UNLV accelerator facility
NASA Astrophysics Data System (ADS)
Hodges, Matthew; Barzilov, Alexander; Chen, Yi-Tung; Lowe, Daniel
2017-09-01
A Varian K15 electron linear accelerator (linac) has been considered for installation at University of Nevada, Las Vegas (UNLV). Before experiments can be performed, it is necessary to evaluate the photon and neutron spectra as generated by the linac, as well as the resulting dose rates within the accelerator facility. A computational study using MCNPX was performed to characterize the source terms for the bremsstrahlung converter. The 15 MeV electron beam available in the linac is above the photoneutron threshold energy for several materials in the linac assembly, and as a result, neutrons must be accounted for. The angular and energy distributions for bremsstrahlung flux generated by the interaction of the 15 MeV electron beam with the linac target were determined. This source term was used in conjunction with the K15 collimators to determine the dose rates within the facility.
The Plant Research Unit: Long-Term Plant Growth Support for Space Station
NASA Technical Reports Server (NTRS)
Heathcote, D. G.; Brown, C. S.; Goins, G. D.; Kliss, M.; Levine, H.; Lomax, P. A.; Porter, R. L.; Wheeler, R.
1996-01-01
The specifications of the plant research unit (PRU) plant habitat, designed for space station operations, are presented. A prototype brassboard model of the PRU is described, and the results of the subsystems tests are outlined. The effects of the long term red light emitting diode (LED) illumination as the sole source for plant development were compared with red LEDs supplemented with blue wavelengths, and white fluorescent sources. It was found that wheat and Arabidopsis were able to complete a life cycle under red LEDs alone, but with differences in physiology and morphology. The differences noted were greatest for the Arabidopsis, where the time to flowering was increased under red illumination. The addition of 10 percent of blue light was effective in eliminating the observed differences. The results of the comparative testing of three nutrient delivery systems for the PRU are discussed.
Supersonic jet noise - Its generation, prediction and effects on people and structures
NASA Technical Reports Server (NTRS)
Preisser, J. S.; Golub, R. A.; Seiner, J. M.; Powell, C. A.
1990-01-01
This paper presents the results of a study aimed at quantifying the effects of jet source noise reduction, increases in aircraft lift, and reduced aircraft thrust on the take-off noise associated with supersonic civil transports. Supersonic jet noise sources are first described, and their frequency and directivity dependence are defined. The study utilizes NASA's Aircraft Noise Prediction Program in a parametric study to weigh the relative benefits of several approaches to low noise. The baseline aircraft concept used in these predictions is the AST-205-1 powered by GE21/J11-B14A scaled engines. Noise assessment is presented in terms of effective perceived noise levels at the FAA's centerline and sideline measuring locations for current subsonic aircraft, and in terms of audiologically perceived sound of people and other indirect effects. The results show that significant noise benefit can be achieved through proper understanding and utilization of all available approaches.
Hybrid BEM/empirical approach for scattering of correlated sources in rocket noise prediction
NASA Astrophysics Data System (ADS)
Barbarino, Mattia; Adamo, Francesco P.; Bianco, Davide; Bartoccini, Daniele
2017-09-01
Empirical models such as the Eldred standard model are commonly used for rocket noise prediction. Such models directly provide a definition of the Sound Pressure Level through the quadratic pressure term by uncorrelated sources. In this paper, an improvement of the Eldred standard model has been formulated. This new formulation contains an explicit expression for the acoustic pressure of each noise source, in terms of amplitude and phase, in order to investigate the source correlation effects and to propagate them through a wave equation. In particular, the correlation effects between adjacent and non-adjacent sources have been modeled and analyzed. The noise prediction obtained with the revised Eldred-based model has then been used for formulating an empirical/BEM (Boundary Element Method) hybrid approach that allows an evaluation of the scattering effects. In the framework of the European Space Agency funded programme VECEP (VEga Consolidation and Evolution Programme), these models have been applied for the prediction of the aeroacoustic loads of the VEGA (Vettore Europeo di Generazione Avanzata - Advanced Generation European Carrier Rocket) launch vehicle at lift-off and the results have been compared with experimental data.
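The distinction the abstract draws, between a model that carries only the quadratic (mean-square) pressure of uncorrelated sources and one with explicit amplitude and phase per source, is the difference between incoherent and coherent summation. A minimal sketch (source amplitudes and phases are illustrative, not values from the paper):

```python
import cmath
import math

def spl_incoherent(amplitudes, p_ref=20e-6):
    """Energy sum: mean-square pressures of uncorrelated sources add."""
    p2 = sum(a ** 2 / 2.0 for a in amplitudes)
    return 10.0 * math.log10(p2 / p_ref ** 2)

def spl_coherent(amplitudes, phases, p_ref=20e-6):
    """Phasor sum of fully correlated sources with explicit phase."""
    p = sum(a * cmath.exp(1j * ph) for a, ph in zip(amplitudes, phases))
    return 10.0 * math.log10((abs(p) ** 2 / 2.0) / p_ref ** 2)

amps = [1.0, 1.0]                                # two equal sources (Pa)
print(round(spl_incoherent(amps), 2))            # energy sum: +3 dB over one
print(round(spl_coherent(amps, [0.0, 0.0]), 2))  # in phase: +6 dB over one
```

Two equal in-phase sources are 3 dB louder than the uncorrelated estimate (and antiphase sources can cancel entirely), which is why correlation effects matter for lift-off load prediction.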
NASA Astrophysics Data System (ADS)
Camero-Arranz, Ascension; Finger, M. H.; Wilson-Hodge, C.; Caballero, I.; Kretschmar, P.; Jenke, P. A.; Beklen, E.
2010-03-01
We present a long-term timing analysis of the accreting X-ray pulsar A 0535+26 using data from Fermi/GBM, RXTE and Swift/BAT. A new orbital ephemeris is obtained from normal outbursts experienced by this source since 2005, and a long-term pulse profile study is carried out. In this study we include results from the current outburst. This outburst is believed to be much larger than the previous ones.
NASA Astrophysics Data System (ADS)
Yang, Yang; Li, Xiukun
2016-06-01
Separation of the components of rigid acoustic scattering by underwater objects is essential in obtaining the structural characteristics of such objects. To overcome the problem of rigid structures appearing to have the same spectral structure in the time domain, time-frequency Blind Source Separation (BSS) can be used in combination with image morphology to separate the rigid scattering components of different objects. Based on a highlight model, the separation of the rigid scattering structure of objects with a time-frequency distribution is deduced. Using a morphological filter, the different characteristics that auto terms and cross terms exhibit in a Wigner-Ville Distribution (WVD) can be exploited to remove cross-term interference. By selecting the time and frequency points of the auto terms, the accuracy of BSS can be improved. An experimental simulation has been used, with changes in the pulse width of the transmitted signal, the relative amplitude and the time delay parameter, in order to analyze the feasibility of this new method. Simulation results show that the new method is not only able to separate rigid scattering components, but can also separate the components when elastic scattering and rigid scattering exist at the same time. Experimental results confirm that the new method can be used to separate the rigid scattering structure of underwater objects.
Widdowson, M.A.; Chapelle, F.H.; Brauner, J.S.; ,
2003-01-01
A method is developed for optimizing monitored natural attenuation (MNA) and the reduction in the aqueous source zone concentration (ΔC) required to meet a site-specific regulatory target concentration. The mathematical model consists of two one-dimensional equations of mass balance for the aqueous phase contaminant, to coincide with up to two distinct zones of transformation, and appropriate boundary and intermediate conditions. The solution is written in terms of zone-dependent Peclet and Damköhler numbers. The model is illustrated at a chlorinated solvent site where MNA was implemented following source treatment using in-situ chemical oxidation. The results demonstrate that by not taking into account a variable natural attenuation capacity (NAC), a lower target ΔC is predicted, resulting in unnecessary source concentration reduction and cost with little benefit to achieving site-specific remediation goals.
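A drastically simplified steady-state version of the two-zone framework above can be written with a zone Damköhler number Da_i = k_i L_i / v (plug flow, first-order attenuation per zone): the receptor concentration is the source concentration attenuated by exp(-ΣDa), and the required source reduction ΔC follows by back-calculation. This is a hedged sketch of the idea, not the paper's model (which includes dispersion and intermediate conditions); all parameter values are hypothetical.

```python
import math

def receptor_conc(c_source, Da):
    """Plug-flow concentration at the receptor after zones with Damkohler Da."""
    return c_source * math.exp(-sum(Da))

def required_source_reduction(c_source, c_target, Da):
    """Reduction in source concentration needed to meet c_target downgradient."""
    c_allowed = c_target * math.exp(sum(Da))   # invert the total attenuation
    return max(0.0, c_source - c_allowed)

Da = [1.0, 0.5]            # two zones of transformation (hypothetical)
c0, target = 10.0, 0.005   # source and regulatory target (mg/L, hypothetical)
print(receptor_conc(c0, Da))
print(required_source_reduction(c0, target, Da))
```

Ignoring the second, weaker zone (treating Da as [1.0, 0.0]) inflates the required reduction, which mirrors the paper's point that a constant-NAC assumption leads to unnecessary source treatment.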
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zachara, John M.; Chen, Xingyuan; Murray, Chris
A tightly spaced well-field within a groundwater uranium (U) plume in the groundwater-surface water transition zone was monitored for a three year period for groundwater elevation and dissolved solutes. The plume discharges to the Columbia River, which displays a dramatic spring stage surge resulting from mountain snowmelt. Groundwater exhibits a low hydrologic gradient and chemical differences with river water. River water intrudes the site in spring. Specific aims were to assess the impacts of river intrusion on dissolved uranium (Uaq), specific conductance (SpC), and other solutes, and to discriminate between transport, geochemical, and source term heterogeneity effects. Time series trends for Uaq and SpC were complex and displayed large temporal well-to-well variability as a result of water table elevation fluctuations, river water intrusion, and changes in groundwater flow directions. The wells were clustered into subsets exhibiting common temporal behaviors resulting from the intrusion dynamics of river water and the location of source terms. Concentration hot spots were observed in groundwater that varied in location with increasing water table elevation. Heuristic reactive transport modeling with PFLOTRAN demonstrated that mobilized U was transported between wells and source terms in complex trajectories, and was diluted as river water entered and exited the groundwater system. While uranium time-series concentration trends varied significantly from year to year as a result of climate-caused differences in the spring hydrograph, common and partly predictable response patterns were observed that were driven by water table elevation, and the extent and duration of the river water intrusion event.
A glossary for biometeorology
NASA Astrophysics Data System (ADS)
Gosling, Simon N.; Bryce, Erin K.; Dixon, P. Grady; Gabriel, Katharina M. A.; Gosling, Elaine Y.; Hanes, Jonathan M.; Hondula, David M.; Liang, Liang; Bustos Mac Lean, Priscilla Ayleen; Muthers, Stefan; Nascimento, Sheila Tavares; Petralli, Martina; Vanos, Jennifer K.; Wanka, Eva R.
2014-03-01
Here we present, for the first time, a glossary of biometeorological terms. The glossary aims to address the need for a reliable source of biometeorological definitions, thereby facilitating communication and mutual understanding in this rapidly expanding field. A total of 171 terms are defined, with reference to 234 citations. It is anticipated that the glossary will be revisited in coming years, updating terms and adding new terms, as appropriate. The glossary is intended to provide a useful resource to the biometeorology community, and to this end, readers are encouraged to contact the lead author to suggest additional terms for inclusion in later versions of the glossary as a result of new and emerging developments in the field.
Reducing mortality risk by targeting specific air pollution sources: Suva, Fiji.
Isley, C F; Nelson, P F; Taylor, M P; Stelcer, E; Atanacio, A J; Cohen, D D; Mani, F S; Maata, M
2018-01-15
Health implications of air pollution vary dependent upon pollutant sources. This work determines the value, in terms of reduced mortality, of reducing ambient particulate matter (PM2.5: effective aerodynamic diameter 2.5 μm or less) concentration due to different emission sources. Suva, a Pacific Island city with substantial input from combustion sources, is used as a case-study. Elemental concentration was determined, by ion beam analysis, for PM2.5 samples from Suva, spanning one year. Sources of PM2.5 have been quantified by positive matrix factorisation. A review of recent literature has been carried out to delineate the mortality risk associated with these sources. Risk factors have then been applied for Suva, to calculate the possible mortality reduction that may be achieved through reduction in pollutant levels. Higher risk ratios for black carbon and sulphur resulted in mortality predictions for PM2.5 from fossil fuel combustion, road vehicle emissions and waste burning that surpass predictions for these sources based on health risk of PM2.5 mass alone. Predicted mortality for Suva from fossil fuel smoke exceeds the national toll from road accidents in Fiji. The greatest benefit for Suva, in terms of reduced mortality, is likely to be accomplished by reducing emissions from fossil fuel combustion (diesel), vehicles and waste burning. Copyright © 2017. Published by Elsevier B.V.
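Source-specific mortality burdens of the kind estimated above are commonly computed with a log-linear health impact function, dM = y0 · pop · (1 − exp(−β·ΔC)) with β = ln(RR)/10 for a risk ratio RR per 10 μg/m³. The sketch below uses this standard form with entirely illustrative numbers (baseline rate, population, source contributions and risk ratios are assumptions, not Suva values); it shows how source-specific risk ratios reorder the attribution relative to mass alone.

```python
import math

def attributable_deaths(y0, pop, rr_per_10, delta_c):
    """Log-linear impact function; rr_per_10 is the risk ratio per 10 ug/m3."""
    beta = math.log(rr_per_10) / 10.0
    return y0 * pop * (1.0 - math.exp(-beta * delta_c))

# Hypothetical source contributions (ug/m3) and source-specific risk ratios,
# reflecting the point that combustion-rich sources carry higher risk:
sources = {"fossil_fuel": (8.0, 1.14), "vehicles": (5.0, 1.12),
           "waste_burning": (4.0, 1.10), "sea_salt": (3.0, 1.00)}
y0, pop = 0.007, 90_000    # baseline mortality rate and exposed population
for name, (dc, rr) in sources.items():
    print(name, round(attributable_deaths(y0, pop, rr, dc), 1))
```

A source with RR = 1.00 (e.g. sea salt here) contributes zero attributable deaths regardless of its mass contribution, which is the core of the paper's argument for targeting specific sources.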
BWR ASSEMBLY SOURCE TERMS FOR WASTE PACKAGE DESIGN
DOE Office of Scientific and Technical Information (OSTI.GOV)
T.L. Lotz
1997-02-15
This analysis is prepared by the Mined Geologic Disposal System (MGDS) Waste Package Development Department (WPDD) to provide boiling water reactor (BWR) assembly radiation source term data for use during Waste Package (WP) design. The BWR assembly radiation source terms are to be used for evaluation of radiolysis effects at the WP surface, and for personnel shielding requirements during assembly or WP handling operations. The objectives of this evaluation are to generate BWR assembly radiation source terms that bound selected groupings of BWR assemblies, with regard to assembly average burnup and cooling time, which comprise the anticipated MGDS BWR commercial spent nuclear fuel (SNF) waste stream. The source term data is to be provided in a form which can easily be utilized in subsequent shielding/radiation dose calculations. Since these calculations may also be used for Total System Performance Assessment (TSPA), with appropriate justification provided by TSPA, or radionuclide release rate analysis, the grams of each element and additional cooling times out to 25 years will also be calculated and the data included in the output files.
The use of mud as an alternative source for bioelectricity using microbial fuel cells
NASA Astrophysics Data System (ADS)
Darmawan, Raden; Widjaja, Arief; Juliastuti, Sri Rachmania; Hendrianie, Nuniek; Hidaya, Chanifah; Sari, Dessy Rosita; Suwito; Morimura, Shigeru; Tominaga, Masato
2017-05-01
Alternative energy sources to substitute fossil-based energy are needed, as fossil energy reserves are decreasing every day. Mud is considered an economical source material for generating electricity, as it can be found easily and abundantly in Indonesia. The existence of a large amount of mud containing organic material has great potential as a source of electrical energy using microbial fuel cells (MFCs), which provide a promising technology for degrading organic compounds to yield sustainable energy. Different sampling sites were chosen to determine the electricity production, i.e. mud from soil water, brackish water and sea water, using an immersed anode of 10 cm2. The results suggest that the electricity generation of the three areas is 0.331, 0.327 and 0.398 V (in terms of voltage) and 0.221, 0.050 and 0.325 mA (in terms of electric current), respectively. The mud obtained from sea water exhibits the highest power potential compared to that obtained from brackish and soil water.
Assessment of macroseismic intensity in the Nile basin, Egypt
NASA Astrophysics Data System (ADS)
Fergany, Elsayed
2018-01-01
This work assesses deterministic seismic hazard and risk in terms of the maximum expected intensity map of the Egyptian Nile basin sector. A seismic source zone model of Egypt was delineated based on an updated compatible earthquake catalog in 2015, focal mechanisms, and the common tectonic elements. Four effective seismic source zones were identified along the Nile basin. The observed macroseismic intensity data along the basin were used to develop an intensity prediction equation defined in terms of moment magnitude. A maximum expected intensity map was then produced based on the developed intensity prediction equation, the identified effective seismic source zones, and the maximum expected magnitude for each zone along the basin. The earthquake hazard and risk were discussed and analyzed in view of the maximum expected moment magnitude and the maximum expected intensity values for each effective source zone. Moderate expected magnitudes pose a high risk to the Cairo and Aswan regions. The results of this study could serve as a recommendation for the planners in charge of mitigating the seismic risk at these strategic zones of Egypt.
Screening and validation of EXTraS data products
NASA Astrophysics Data System (ADS)
Carpano, Stefania; Haberl, F.; De Luca, A.; Tiengo, A.; Israel, G.; Rodriguez, G.; Belfiore, A.; Rosen, S.; Read, A.; Wilms, J.; Kreikenbohm, A.; Law-Green, D.
2015-09-01
The EXTraS project (Exploring the X-ray Transient and variable Sky) is aimed at fully exploring the serendipitous content of the XMM-Newton EPIC database in the time domain. The project is funded within the EU/FP7-Cooperation Space framework and is carried out by a collaboration including INAF (Italy), IUSS (Italy), CNR/IMATI (Italy), University of Leicester (UK), MPE (Germany) and ECAP (Germany). Its several tasks consist in characterising aperiodic variability for all 3XMM sources, searching for short-term periodic variability in hundreds of thousands of sources, detecting new transient sources that are missed by standard source detection and hence do not belong to the 3XMM catalogue, searching for long-term variability by measuring fluxes or upper limits for both pointed and slew observations, and finally performing multiwavelength characterisation and classification. Screening and validation of the different products is essential in order to reject flawed results generated by the automatic pipelines. We present here the screening tool we developed in the form of a Graphical User Interface and our plans for a systematic screening of the different catalogues.
NASA Astrophysics Data System (ADS)
Ferro, Andrea R.; Klepeis, Neil E.; Ott, Wayne R.; Nazaroff, William W.; Hildemann, Lynn M.; Switzer, Paul
Residential interior door positions influence the pollutant concentrations that result from short-term indoor sources, such as cigarettes, candles, and incense. To elucidate this influence, we reviewed past studies and conducted new experiments in three residences: a single-story 714 m³ ranch-style house, a 510 m³ two-story split-level house, and a 200 m³ two-story house. During the experiments, we released sulfur hexafluoride or carbon monoxide tracer gas over short periods (≤30 min) and measured concentrations in the source room and at least one other (receptor) room for various interior door opening positions. We found that closing a door between rooms effectively prevented transport of air pollutants, reducing the average concentration in the receptor room relative to the source room by 57-100% over exposure periods of 1-8 h. When intervening doors were partially or fully open, the reduction in average concentrations ranged from 3% to 99%, varying as a function of door opening width and the distance between source and receptor rooms.
On the scale dependence of earthquake stress drop
NASA Astrophysics Data System (ADS)
Cocco, Massimo; Tinti, Elisa; Cirella, Antonella
2016-10-01
We discuss the debated issue of scale dependence in earthquake source mechanics with the goal of providing supporting evidence to foster the adoption of a coherent interpretative framework. We examine the heterogeneous distribution of source and constitutive parameters during individual ruptures and their scaling with earthquake size. We discuss evidence that slip, slip-weakening distance and breakdown work scale with seismic moment and are interpreted as scale dependent parameters. We integrate our estimates of earthquake stress drop, computed through a pseudo-dynamic approach, with many others available in the literature for both point sources and finite fault models. We obtain a picture of the earthquake stress drop scaling with seismic moment over an exceptionally broad range of earthquake sizes (-8 < MW < 9). Our results confirm that stress drop values are scattered over three orders of magnitude and emphasize the lack of corroborating evidence that stress drop scales with seismic moment. We discuss these results in terms of scale invariance of stress drop with source dimension to analyse the interpretation of this outcome in terms of self-similarity. Geophysicists are presently unable to provide physical explanations of dynamic self-similarity relying on deterministic descriptions of micro-scale processes. We conclude that the interpretation of the self-similar behaviour of stress drop scaling is strongly model dependent. We emphasize that it relies on a geometric description of source heterogeneity through the statistical properties of initial stress or fault-surface topography, in which only the latter is constrained by observations.
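The scale-invariance argument above can be made concrete with the classical circular-crack relation Δσ = (7/16)·M0/a³ (Eshelby): constant stress drop implies M0 ∝ a³, i.e. self-similarity. A small sketch with illustrative values (the formula is the standard point-source estimate, not the pseudo-dynamic approach of the paper):

```python
import math

def stress_drop(m0, radius):
    """Static stress drop (Pa) for a circular crack: (7/16) * M0 / a^3."""
    return (7.0 / 16.0) * m0 / radius ** 3

def moment_magnitude(m0):
    """Mw from seismic moment M0 in N*m (Hanks & Kanamori)."""
    return (2.0 / 3.0) * (math.log10(m0) - 9.1)

m0, a = 1.0e18, 3000.0                      # ~Mw 5.9 event, 3 km radius
print(round(stress_drop(m0, a) / 1e6, 1))   # stress drop in MPa
print(round(moment_magnitude(m0), 2))
```

The cubic dependence on the (poorly resolved) source radius is one reason stress drop estimates scatter over orders of magnitude: a factor of 2 uncertainty in a maps to a factor of 8 in Δσ.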
Modeling the contribution of point sources and non-point sources to Thachin River water pollution.
Schaffner, Monika; Bader, Hans-Peter; Scheidegger, Ruth
2009-08-15
Major rivers in developing and emerging countries suffer increasingly from severe degradation of water quality. The current study uses a mathematical Material Flow Analysis (MMFA) as a complementary approach to address the degradation of river water quality due to nutrient pollution in the Thachin River Basin in Central Thailand. This paper gives an overview of the origins and flow paths of the various point- and non-point pollution sources in the Thachin River Basin (in terms of nitrogen and phosphorus) and quantifies their relative importance within the system. The key parameters influencing the main nutrient flows are determined and possible mitigation measures discussed. The results show that aquaculture (as a point source) and rice farming (as a non-point source) are the key nutrient sources in the Thachin River Basin. Other point sources such as pig farms, households and industries, which were previously cited as the most relevant pollution sources in terms of organic pollution, play less significant roles in comparison. This order of importance shifts when considering the model results for the provincial level. Crosschecks with secondary data and field studies confirm the plausibility of our simulations. Specific nutrient loads for the pollution sources are derived; these can be used for a first broad quantification of nutrient pollution in comparable river basins. Based on an identification of the sensitive model parameters, possible mitigation scenarios are determined and their potential to reduce the nutrient load evaluated. A comparison of simulated nutrient loads with measured nutrient concentrations shows that nutrient retention in the river system may be significant. Sedimentation in the slow flowing surface water network as well as nitrogen emission to the air from the warm oxygen deficient waters are certainly partly responsible, but also wetlands along the river banks could play an important role as nutrient sinks.
NASA Astrophysics Data System (ADS)
Nagai, Haruyasu; Terada, Hiroaki; Tsuduki, Katsunori; Katata, Genki; Ota, Masakazu; Furuno, Akiko; Akari, Shusaku
2017-09-01
In order to assess the radiological dose to the public resulting from the Fukushima Daiichi Nuclear Power Station (FDNPS) accident in Japan, especially for the early phase of the accident when no measured data are available for that purpose, the spatial and temporal distribution of radioactive materials in the environment are reconstructed by computer simulations. In this study, by refining the source term of radioactive materials discharged into the atmosphere and modifying the atmospheric transport, dispersion and deposition model (ATDM), the atmospheric dispersion simulation of radioactive materials is improved. Then, a database of spatiotemporal distribution of radioactive materials in the air and on the ground surface is developed from the output of the simulation. This database is used in other studies for the dose assessment by coupling with the behavioral pattern of evacuees from the FDNPS accident. By the improvement of the ATDM simulation to use a new meteorological model and sophisticated deposition scheme, the ATDM simulations reproduced well the 137Cs and 131I deposition patterns. For the better reproducibility of dispersion processes, further refinement of the source term was carried out by optimizing it to the improved ATDM simulation by using new monitoring data.
Relaxation approximations to second-order traffic flow models by high-resolution schemes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nikolos, I.K.; Delis, A.I.; Papageorgiou, M.
2015-03-10
A relaxation-type approximation of second-order non-equilibrium traffic models, written in conservation or balance law form, is considered. Using the relaxation approximation, the nonlinear equations are transformed to a semi-linear diagonalizable problem with linear characteristic variables and stiff source terms, with the attractive feature that neither Riemann solvers nor characteristic decompositions are needed. In particular, it is only necessary to provide the flux and source term functions and an estimate of the characteristic speeds. To discretize the resulting relaxation system, high-resolution reconstructions in space are considered. Emphasis is given to a fifth-order WENO scheme and its performance. The computations reported demonstrate the simplicity and versatility of relaxation schemes as numerical solvers.
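The relaxation idea can be sketched in its relaxed (ε → 0) limit for a scalar LWR-type traffic flux f(u) = u(1 − u): the relaxation system u_t + v_x = 0, v_t + a²u_x = (f(u) − v)/ε has linear characteristics ±a, so only f and a bound a on the characteristic speed are needed, exactly as the abstract notes. This first-order sketch is an illustrative stand-in for the paper's fifth-order WENO scheme; the flux, speed bound and grid are assumptions.

```python
import numpy as np

def f(u):
    """LWR traffic flux (Greenshields); |f'(u)| <= 1 on [0, 1]."""
    return u * (1.0 - u)

def relaxation_step(u, dx, dt, a=1.0):
    """One relaxed first-order step: central flux plus a/2 * jump dissipation."""
    up = np.roll(u, -1)                 # periodic right neighbour
    um = np.roll(u, 1)                  # periodic left neighbour
    flux_r = 0.5 * (f(u) + f(up)) - 0.5 * a * (up - u)
    flux_l = 0.5 * (f(um) + f(u)) - 0.5 * a * (u - um)
    return u - dt / dx * (flux_r - flux_l)

n = 200
x = np.linspace(0.0, 1.0, n, endpoint=False)
u = np.where(x < 0.5, 0.8, 0.2)         # dense traffic upstream of x = 0.5
dx, dt = 1.0 / n, 0.4 / n               # CFL number a*dt/dx = 0.4
for _ in range(200):
    u = relaxation_step(u, dx, dt)
print(round(float(u.mean()), 10))       # density is conserved
```

No Riemann problem is ever solved: the nonlinear flux enters only through pointwise evaluations, which is the practical appeal of relaxation schemes.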
EXPERIENCES FROM THE SOURCE-TERM ANALYSIS OF A LOW AND INTERMEDIATE LEVEL RADWASTE DISPOSAL FACILITY
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, Jin Beak; Park, Joo-Wan; Lee, Eun-Young
2003-02-27
Enhancement of a computer code SAGE for evaluation of the Korean concept for a LILW waste disposal facility is discussed. Several features of source term analysis are embedded into SAGE to analyze: (1) effects of the degradation mode of an engineered barrier, (2) effects of dispersion phenomena in the unsaturated zone and (3) effects of a time-dependent sorption coefficient in the unsaturated zone. IAEA's Vault Safety Case (VSC) approach is used to demonstrate the ability of this assessment code. Results of MASCOT are used for comparison purposes. These enhancements of the safety assessment code, SAGE, can contribute to a realistic evaluation of the Korean concept of the LILW disposal project in the near future.
Tang, Xiao-Bin; Meng, Jia; Wang, Peng; Cao, Ye; Huang, Xi; Wen, Liang-Sheng; Chen, Da
2016-04-01
A small-sized UAV (NH-UAV) airborne system with two gamma spectrometers (LaBr3 detector and HPGe detector) was developed to monitor activity concentration in serious nuclear accidents, such as the Fukushima nuclear accident. The efficiency calibration and determination of minimum detectable activity concentration (MDAC) of the specific system were studied by MC simulations at different flight altitudes, different horizontal distances from the detection position to the source term center and different source term sizes. Both air and ground radiation were considered in the models. The results obtained may provide instructive suggestions for in-situ radioactivity measurements of NH-UAV. Copyright © 2016 Elsevier Ltd. All rights reserved.
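A minimum detectable activity figure of merit like the MDAC studied above is conventionally built on the Currie detection limit, MDA = (2.71 + 4.65√B) / (ε · y · t), where B is the background count, ε the full-energy peak efficiency (here, the quantity the MC simulations would supply as a function of altitude, stand-off distance and source size), y the gamma yield, and t the live time. This is a hedged sketch of the standard formula, not necessarily the paper's exact MDAC definition; the numbers are illustrative.

```python
import math

def mda(background_counts, efficiency, gamma_yield, live_time_s):
    """Currie minimum detectable activity (Bq) for a counting measurement."""
    ld = 2.71 + 4.65 * math.sqrt(background_counts)  # detection-limit counts
    return ld / (efficiency * gamma_yield * live_time_s)

# Illustrative values: 400 background counts, 0.1% peak efficiency at the
# flight altitude, 85% gamma yield, 10 min of live time.
print(round(mda(400.0, 1.0e-3, 0.85, 600.0), 1))    # Bq
```

Because ε falls off with flight altitude and horizontal stand-off, the MDA rises correspondingly, which is why the abstract reports MDAC as a function of those geometry parameters.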
26 CFR 31.3401(a)(14)-1 - Group-term life insurance.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 26 Internal Revenue 15 2010-04-01 2010-04-01 false Group-term life insurance. 31.3401(a)(14)-1... SOURCE Collection of Income Tax at Source § 31.3401(a)(14)-1 Group-term life insurance. (a) The cost of group-term life insurance on the life of an employee is excepted from wages, and hence is not subject to...
26 CFR 31.3401(a)(14)-1 - Group-term life insurance.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 26 Internal Revenue 15 2011-04-01 2011-04-01 false Group-term life insurance. 31.3401(a)(14)-1... SOURCE Collection of Income Tax at Source § 31.3401(a)(14)-1 Group-term life insurance. (a) The cost of group-term life insurance on the life of an employee is excepted from wages, and hence is not subject to...
26 CFR 31.3401(a)(14)-1 - Group-term life insurance.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 26 Internal Revenue 15 2012-04-01 2012-04-01 false Group-term life insurance. 31.3401(a)(14)-1... SOURCE Collection of Income Tax at Source § 31.3401(a)(14)-1 Group-term life insurance. (a) The cost of group-term life insurance on the life of an employee is excepted from wages, and hence is not subject to...
26 CFR 31.3401(a)(14)-1 - Group-term life insurance.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 26 Internal Revenue 15 2014-04-01 2014-04-01 false Group-term life insurance. 31.3401(a)(14)-1... SOURCE Collection of Income Tax at Source § 31.3401(a)(14)-1 Group-term life insurance. (a) The cost of group-term life insurance on the life of an employee is excepted from wages, and hence is not subject to...
26 CFR 31.3401(a)(14)-1 - Group-term life insurance.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 26 Internal Revenue 15 2013-04-01 2013-04-01 false Group-term life insurance. 31.3401(a)(14)-1... SOURCE Collection of Income Tax at Source § 31.3401(a)(14)-1 Group-term life insurance. (a) The cost of group-term life insurance on the life of an employee is excepted from wages, and hence is not subject to...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-08
... and marketing evaluation and strategies; and outreach and implementation of the project results. The... devise strategies and means to efficiently harvest the redfish resource in the Gulf of Maine (GOM) while... in terms of their potential effects on results. Sources of variability include: Area fished; seasonal...
NASA Astrophysics Data System (ADS)
Navas-Montilla, A.; Murillo, J.
2016-07-01
In this work, an arbitrary order HLL-type numerical scheme is constructed using the flux-ADER methodology. The proposed scheme is based on an augmented Derivative Riemann solver that was used for the first time in Navas-Montilla and Murillo (2015) [1]. This solver, hereafter referred to as the Flux-Source (FS) solver, was conceived as a high order extension of the augmented Roe solver and led to the generation of a novel numerical scheme called the AR-ADER scheme. Here, we provide a general definition of the FS solver independently of the Riemann solver used in it. Moreover, a simplified version of the solver, referred to as the Linearized-Flux-Source (LFS) solver, is presented. This novel version of the FS solver makes it possible to compute the solution without requiring reconstruction of derivatives of the fluxes, although some drawbacks become evident. In contrast to other previously defined Derivative Riemann solvers, the proposed FS and LFS solvers take into account the presence of the source term in the resolution of the Derivative Riemann Problem (DRP), which is of particular interest when dealing with geometric source terms. When applied to the shallow water equations, the proposed HLLS-ADER and AR-ADER schemes can be constructed to fulfill the exactly well-balanced property, showing that an arbitrary quadrature of the integral of the source inside the cell does not ensure energy balanced solutions. As a result of this work, energy balanced flux-ADER schemes are constructed that provide the exact solution for steady cases and that converge to the exact solution with arbitrary order for transient cases.
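The "exactly well-balanced" property invoked above can be demonstrated with the classical hydrostatic-reconstruction treatment of the bed-slope source term (a different, first-order construction than the authors' AR-ADER schemes, used here only as an accessible sketch): for the shallow water equations, a lake at rest over a bump (h + z = const, q = 0) must remain steady to machine precision, which a naive pointwise quadrature of the source does not guarantee. Grid, bump shape and depth are illustrative.

```python
import numpy as np

g = 9.81

def step(h, q, z, dx, dt):
    """One first-order well-balanced step (hydrostatic reconstruction),
    Rusanov interface flux, periodic in x."""
    u = np.where(h > 1e-12, q / np.maximum(h, 1e-12), 0.0)
    zn, hn, un = np.roll(z, -1), np.roll(h, -1), np.roll(u, -1)
    zI = np.maximum(z, zn)                   # interface bed level at j+1/2
    hm = np.maximum(h + z - zI, 0.0)         # reconstructed left depth
    hp = np.maximum(hn + zn - zI, 0.0)       # reconstructed right depth
    qm, qp = hm * u, hp * un
    a = np.maximum(np.abs(u) + np.sqrt(g * hm), np.abs(un) + np.sqrt(g * hp))
    F0 = 0.5 * (qm + qp) - 0.5 * a * (hp - hm)
    F1 = 0.5 * (qm * u + 0.5 * g * hm**2 + qp * un + 0.5 * g * hp**2) \
         - 0.5 * a * (qp - qm)
    # interface source corrections restore the balance within each cell
    Fr1 = F1 + 0.5 * g * (h**2 - hm**2)
    Fl1 = np.roll(F1, 1) + 0.5 * g * (h**2 - np.roll(hp, 1)**2)
    h_new = h - dt / dx * (F0 - np.roll(F0, 1))
    q_new = q - dt / dx * (Fr1 - Fl1)
    return h_new, q_new

n = 100
x = np.linspace(0.0, 1.0, n, endpoint=False)
z = 0.25 * np.exp(-60.0 * (x - 0.5) ** 2)    # smooth bump (near-periodic)
h = 1.0 - z                                  # lake at rest: h + z = 1
q = np.zeros(n)
for _ in range(100):
    h, q = step(h, q, z, dx=1.0 / n, dt=1.0e-3)
print(float(np.abs(h + z - 1.0).max()), float(np.abs(q).max()))
```

Replacing the interface corrections with a centered source term g·h·z_x destroys this balance: spurious waves of the order of the truncation error appear over the bump, which is exactly the failure mode the abstract attributes to arbitrary source quadratures.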
Unsupervised Segmentation of Head Tissues from Multi-modal MR Images for EEG Source Localization.
Mahmood, Qaiser; Chodorowski, Artur; Mehnert, Andrew; Gellermann, Johanna; Persson, Mikael
2015-08-01
In this paper, we present and evaluate an automatic unsupervised segmentation method, hierarchical segmentation approach (HSA)-Bayesian-based adaptive mean shift (BAMS), for use in the construction of a patient-specific head conductivity model for electroencephalography (EEG) source localization. It is based on an HSA and BAMS for segmenting the tissues from multi-modal magnetic resonance (MR) head images. The proposed method was evaluated both directly, in terms of segmentation accuracy, and indirectly, in terms of source localization accuracy. The direct evaluation was performed relative to a commonly used reference method, brain extraction tool (BET)-FMRIB's automated segmentation tool (FAST), and four variants of the HSA, using both synthetic data and real data from ten subjects. The synthetic data include multiple realizations of four different noise levels and several realizations of typical noise with a 20% bias field level. The Dice index and Hausdorff distance were used to measure the segmentation accuracy. The indirect evaluation was performed relative to the reference method BET-FAST using synthetic two-dimensional (2D) multimodal MR data with 3% noise and synthetic EEG (generated for a prescribed source). The source localization accuracy was determined in terms of localization error and relative error of potential. The experimental results demonstrate the efficacy of HSA-BAMS, its robustness to noise and the bias field, and that it provides better segmentation accuracy than the reference method and the variants of the HSA. They also show that it yields more accurate source localization than the commonly used reference method, and suggest that it has potential as a surrogate for expert manual segmentation in the EEG source localization problem.
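The Dice index used above as a segmentation accuracy measure has a one-line definition worth making concrete. A minimal sketch (the function name and toy masks are illustrative, not from the authors' pipeline):

```python
import numpy as np

def dice_index(a, b):
    """Dice similarity between two binary segmentation masks:
    2|A ∩ B| / (|A| + |B|); 1.0 means perfect overlap, 0.0 none."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```

Applied per tissue class, it rewards overlap relative to the total size of the two masks, which is why it is a common headline metric for tissue segmentation.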
Weber, Samuel R.; Lomax, James W.; Pargament, Kenneth I.
2017-01-01
Research into religion and mental health is increasing, but religious nonbelievers are often overlooked. Research has shown that nonbelievers experience various forms of psychological distress and that the negative perception of nonbelievers by others is a potential source of distress. This review builds on that research by identifying another potential source of psychological distress for nonbelievers: engagement with the healthcare system. Poor understanding of nonbelievers by healthcare professionals may lead to impaired communication in the healthcare setting, resulting in distress. Attempts by nonbelievers to avoid distress may result in different patterns of healthcare utilization. Awareness of these concerns may help healthcare providers to minimize distress among their nonbelieving patients. PMID:28379161
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schiefelbein, C.; Ho, T.
Changes in the physical properties (measured in terms of vitrinite reflectance, elemental analysis, and C-13 nuclear magnetic resonance) of an immature coal (0.46% Ro) from Craig County, Colorado, that was thermally altered using hydrous pyrolysis were used to establish a correspondence between hydrous pyrolysis time/temperature reaction conditions and relative maturity (expressed in terms of vitrinite reflectance). This correspondence was used to determine the oil generation maturity limits for an immature hydrogen-rich (Type I fluorescing amorphous oil-prone kerogen) source rock from an offshore Congo well that was thermally altered using the same reaction conditions as applied to the immature coal. The resulting changes in the physical properties of the altered source rock, measured in terms of decreasing reactive carbon content (from Rock-Eval pyrolysis), were used to construct a hydrocarbon yield curve from which the relative maturity associated with the onset, main phase, and peak of oil generation was determined. Results, substantiated by anhydrous pyrolysis techniques, indicate that the source rock from Congo has a late onset of appreciable (>10% transformation) oil generation (0.9% Ro ± 0.1%), generates maximum quantities of oil from about 1.1 to 1.3% Ro, and reaches the end (or peak) of the primary oil generating window at approximately 1.4% Ro (± 0.1%) when secondary cracking reactions become important. However, the bottom of the oil window can be extended to about 1.6% Ro because the heavy molecular weight degradation by-products (asphaltenes) that are not efficiently expelled from source rocks continue to degrade into progressively lower molecular weight hydrocarbons.
NASA Astrophysics Data System (ADS)
Gelmez Burakgazi, Sevinc; Yildirim, Ali; Weeth Feinstein, Noah
2016-04-01
Rooted in science education and science communication studies, this study examines 4th and 5th grade students' perceptions of science information sources (SIS) and their use in communicating science to students. It combines situated learning theory with uses and gratifications theory in a qualitative phenomenological analysis. Data were gathered through classroom observations and interviews in four Turkish elementary schools. Focus group interviews with 47 students and individual interviews with 17 teachers and 10 parents were conducted. Participants identified a wide range of SIS, including TV, magazines, newspapers, internet, peers, teachers, families, science centers/museums, science exhibitions, textbooks, science books, and science camps. Students reported using various SIS in school-based and non-school contexts to satisfy their cognitive, affective, personal, and social integrative needs. SIS were used for science courses, homework/project assignments, examination/test preparations, and individual science-related research. Students assessed SIS in terms of the perceived accessibility of the sources, the quality of the content, and the content presentation. In particular, some sources such as teachers, families, TV, science magazines, textbooks, and science centers/museums ("directive sources") predictably led students to other sources such as teachers, families, internet, and science books ("directed sources"). A small number of sources crossed context boundaries, being useful in both school and out. Results shed light on the connection between science education and science communication in terms of promoting science learning.
Xu, Peng; Tian, Yin; Lei, Xu; Hu, Xiao; Yao, Dezhong
2008-12-01
How to localize neural electric activity within the brain effectively and precisely from scalp electroencephalogram (EEG) recordings is a critical issue in clinical neurology and cognitive neuroscience. In this paper, based on the charge source model and the iterative re-weighted strategy, we propose a new maximum-neighbor-weight-based iterative sparse source imaging method, termed CMOSS (Charge source model based Maximum neighbOr weight Sparse Solution). Different from the weight used in the focal underdetermined system solver (FOCUSS), where the weight for each point in the discrete solution space is updated independently across iterations, the newly designed weight for each point in each iteration is determined by the previous iteration's source solution at both the point and its neighbors. Using such a weight, the next iteration has a better chance of rectifying the local source location bias present in the previous iteration's solution. Simulation studies comparing CMOSS with FOCUSS and LORETA for various source configurations were conducted on a realistic 3-shell head model, and the results confirmed the validity of CMOSS for sparse EEG source localization. Finally, CMOSS was applied to localize sources elicited in a visual stimuli experiment, and the result was consistent with the source areas involved in visual processing reported in previous studies.
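For context, the classic FOCUSS re-weighting that CMOSS modifies can be sketched compactly. The code below implements plain FOCUSS (each point weighted by its own previous magnitude only); the neighbor-maximum weighting that distinguishes CMOSS is described in the abstract but not reproduced here:

```python
import numpy as np

def focuss(A, b, n_iter=30, eps=1e-6):
    """Classic FOCUSS: repeatedly solve a re-weighted minimum-norm problem
    for the underdetermined system Ax = b. Each entry is weighted by its own
    previous magnitude, so energy concentrates on a few entries; `eps` keeps
    the weight matrix nonsingular."""
    x = np.linalg.pinv(A) @ b                 # minimum-norm initial guess
    for _ in range(n_iter):
        W = np.diag(np.abs(x) + eps)          # re-weighting from last iterate
        x = W @ np.linalg.pinv(A @ W) @ b
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((8, 30))              # 8 sensors, 30 candidate sources
x_true = np.zeros(30)
x_true[[4, 21]] = [1.0, -2.0]                 # a sparse "source" configuration
x_hat = focuss(A, A @ x_true)
```

Because each iterate exactly satisfies the data while shrinking weakly supported entries, the fixed points are sparse (basic) solutions; CMOSS's neighbor-aware weight is intended to keep this sharpening from locking onto slightly misplaced points.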
12 CFR 201.4 - Availability and terms of credit.
Code of Federal Regulations, 2014 CFR
2014-01-01
... overnight, as a backup source of funding to a depository institution that is in generally sound financial... to a few weeks as a backup source of funding to a depository institution if, in the judgment of the... very short-term basis, usually overnight, as a backup source of funding to a depository institution...
NASA Astrophysics Data System (ADS)
Davoine, X.; Bocquet, M.
2007-03-01
The reconstruction of the Chernobyl accident source term has previously been carried out using core inventories, as well as back-and-forth comparisons between model simulations and measurements of activity concentration or deposited activity. The approach presented in this paper is based on inverse modelling techniques. It relies both on the activity concentration measurements and on the adjoint of a chemistry-transport model. The location of the release is assumed to be known, and one looks for a source term, available for long-range transport, that depends both on time and altitude. The method relies on the maximum entropy on the mean principle and exploits source positivity. The inversion results are mainly sensitive to two tuning parameters: a mass scale and the scale of the prior errors in the inversion. To overcome this difficulty, we resort to the statistical L-curve method to estimate balanced values for these two parameters. Once this is done, many of the retrieved features of the source are robust within a reasonable range of parameter values. Our results favour the acknowledged three-step scenario, with a strong initial release (26 to 27 April), followed by a weak emission period of four days (28 April-1 May) and again a release, longer but less intense than the initial one (2 May-6 May). The retrieved quantities of iodine-131, caesium-134 and caesium-137 that were released are in good agreement with the latest reported estimations. Yet a stronger apportionment of the total released activity is ascribed to the first period and less to the third one. Finer chronological details are obtained, such as a sequence of eruptive episodes in the first two days, likely related to the modulation of the boundary layer diurnal cycle. In addition, the first two-day release surges are found to have effectively reached altitudes up to the top of the domain (5000 m).
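The L-curve criterion used to balance the tuning parameters can be illustrated on a generic Tikhonov problem. This sketch (illustrative names and toy operator; the paper's functional is maximum entropy on the mean, not plain Tikhonov) picks the corner as the log-log point farthest from the chord joining the curve's endpoints, a simple surrogate for maximum curvature:

```python
import numpy as np

def l_curve_choice(A, b, lambdas):
    """Solve Tikhonov problems over a grid of regularization weights and
    choose the 'corner' of the (residual norm, solution norm) L-curve."""
    pts = []
    for lam in lambdas:
        x = np.linalg.solve(A.T @ A + lam**2 * np.eye(A.shape[1]), A.T @ b)
        pts.append([np.log10(np.linalg.norm(A @ x - b) + 1e-15),
                    np.log10(np.linalg.norm(x) + 1e-15)])
    pts = np.array(pts)
    chord = pts[-1] - pts[0]
    chord /= np.linalg.norm(chord)
    normal = np.array([-chord[1], chord[0]])
    dist = np.abs((pts - pts[0]) @ normal)    # distance from the endpoint chord
    return float(lambdas[int(np.argmax(dist))])

rng = np.random.default_rng(0)
n = 40
# mildly ill-posed smoothing operator plus noisy data (a toy inverse problem)
A = np.array([[np.exp(-0.5 * ((i - j) / 3.0)**2) for j in range(n)]
              for i in range(n)])
x_true = np.sin(np.linspace(0, 3 * np.pi, n))
b = A @ x_true + 0.01 * rng.standard_normal(n)
lambdas = np.logspace(-6, 1, 30)
lam = l_curve_choice(A, b, lambdas)
```

Small weights under-regularize (large solution norm), large weights over-smooth (large residual); the corner sits where neither term dominates, which is the balance the paper seeks for its two parameters.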
Acoustic constituents of prosodic typology
NASA Astrophysics Data System (ADS)
Komatsu, Masahiko
Different languages sound different, and a considerable part of that difference derives from the typological difference of prosody. Although such differences are often described in terms of lexical accent types (stress accent, pitch accent, and tone; e.g., English, Japanese, and Chinese respectively) and rhythm types (stress-, syllable-, and mora-timed rhythms; e.g., English, Spanish, and Japanese respectively), it is unclear whether these types are determined in terms of acoustic properties. The thesis intends to provide a potential basis for the description of prosody in terms of acoustics. It argues, through several experimental-phonetic studies, for the hypothesis that the source component of the source-filter model (acoustic features) approximately corresponds to prosody (linguistic features). The study consists of four parts. (1) Preliminary experiment: Perceptual language identification tests were performed using English and Japanese speech samples whose frequency spectral information (i.e., the non-source component) was heavily reduced. The results indicated that humans can discriminate languages with such signals. (2) Discussion of the linguistic information that the source component contains: This part constitutes the foundation of the argument of the thesis. Perception tests of consonants with the source signal indicated that the source component carries information on broad categories of phonemes that contributes to the creation of rhythm. (3) Acoustic analysis: Speech samples of Chinese, English, Japanese, and Spanish, which differ in prosodic type, were analyzed. These languages showed differences in the acoustic characteristics of the source component. (4) Perceptual experiment: A language identification test for the above four languages was performed using the source signal with its acoustic features parameterized. It revealed that humans can discriminate prosodic types solely from the source features and that discrimination becomes easier as acoustic information increases.
The series of studies showed the correspondence of the source component to prosodic features. In linguistics, prosodic types have not been discussed purely in terms of acoustics; they are usually related to the function of prosody or phonological units such as phonemes. The present thesis focuses on acoustics and makes a contribution to establishing the crosslinguistic description system of prosody.
NASA Astrophysics Data System (ADS)
Bieringer, Paul E.; Rodriguez, Luna M.; Vandenberghe, Francois; Hurst, Jonathan G.; Bieberbach, George; Sykes, Ian; Hannan, John R.; Zaragoza, Jake; Fry, Richard N.
2015-12-01
Accurate simulations of the atmospheric transport and dispersion (AT&D) of hazardous airborne materials rely heavily on the source term parameters necessary to characterize the initial release and the meteorological conditions that drive the downwind dispersion. In many cases the source parameters are not known and are consequently based on rudimentary assumptions. This is particularly true of accidental releases and the intentional releases associated with terrorist incidents. When available, meteorological observations are often not representative of the conditions at the location of the release, and the use of these non-representative meteorological conditions can result in significant errors in the hazard assessments downwind of the sensors, even when the other source parameters are accurately characterized. Here, we describe a computationally efficient methodology to characterize both the release source parameters and the low-level winds (e.g., winds near the surface) required to produce a refined downwind hazard. This methodology, known as the Variational Iterative Refinement Source Term Estimation (STE) Algorithm (VIRSA), consists of a combination of modeling systems. These systems include a back-trajectory based source inversion method, a forward Gaussian puff dispersion model, and a variational refinement algorithm that uses both a simple forward AT&D model (a surrogate for the more complex Gaussian puff model) and a formal adjoint of this surrogate model. The back-trajectory based method is used to calculate a "first guess" source estimate based on the available observations of the airborne contaminant plume and atmospheric conditions. The variational refinement algorithm is then used to iteratively refine the first-guess STE parameters and meteorological variables. The algorithm has been evaluated across a wide range of scenarios of varying complexity.
It has been shown to improve the source parameters for location by several hundred percent (normalized by the distance from source to the closest sampler), and improve mass estimates by several orders of magnitude. Furthermore, it also has the ability to operate in scenarios with inconsistencies between the wind and airborne contaminant sensor observations and adjust the wind to provide a better match between the hazard prediction and the observations.
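The flavor of a "first guess" source estimate can be conveyed with a drastically simplified forward model: a single 2-D Gaussian puff and a brute-force search over candidate locations, with the release mass recovered in closed form as a least-squares scale factor. Everything here is illustrative; VIRSA's actual components (back-trajectory inversion, the Gaussian puff model, and the adjoint-based variational refinement) are far more involved:

```python
import numpy as np

def puff(xs, ys, x0, y0, q, u, t, sig):
    """Concentration from a single Gaussian puff of mass q released at
    (x0, y0), advected downwind (+x) at speed u and spread to width sig."""
    return q / (2 * np.pi * sig**2) * np.exp(
        -((xs - x0 - u * t)**2 + (ys - y0)**2) / (2 * sig**2))

def first_guess(xs, ys, obs, candidates, u, t, sig):
    """Grid-search source estimate: for each candidate location the optimal
    release mass is a closed-form least-squares scale; keep the best fit."""
    best = (np.inf, None)
    for x0, y0 in candidates:
        g = puff(xs, ys, x0, y0, 1.0, u, t, sig)  # unit-mass prediction
        q = float(g @ obs / (g @ g))              # best-fit mass for this site
        err = np.linalg.norm(q * g - obs)
        if err < best[0]:
            best = (err, (x0, y0, q))
    return best[1]

# synthetic sensor network and a known release to invert for
rng = np.random.default_rng(2)
xs, ys = rng.uniform(0, 10, 25), rng.uniform(-3, 3, 25)
obs = puff(xs, ys, x0=2.0, y0=1.0, q=5.0, u=1.0, t=3.0, sig=1.5)
cand = [(x, y) for x in np.arange(0, 5, 0.5) for y in np.arange(-2, 2.5, 0.5)]
est = first_guess(xs, ys, obs, cand, u=1.0, t=3.0, sig=1.5)
```

With noiseless synthetic data the search recovers the true location and mass exactly; the point of the variational refinement stage is to do better than this when the data, winds, and forward model are all imperfect.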
Hu, Jianlin; Goldberg, Debbie; Reynolds, Peggy; Hertz, Andrew; Bernstein, Leslie; Kleeman, Michael J.
2015-01-01
Background Although several cohort studies report associations between chronic exposure to fine particles (PM2.5) and mortality, few have studied the effects of chronic exposure to ultrafine (UF) particles. In addition, few studies have estimated the effects of the constituents of either PM2.5 or UF particles. Methods We used a statewide cohort of > 100,000 women from the California Teachers Study who were followed from 2001 through 2007. Exposure data at the residential level were provided by a chemical transport model that computed pollutant concentrations from > 900 sources in California. Besides particle mass, monthly concentrations of 11 species and 8 sources or primary particles were generated at 4-km grids. We used a Cox proportional hazards model to estimate the association between the pollutants and all-cause, cardiovascular, ischemic heart disease (IHD), and respiratory mortality. Results We observed statistically significant (p < 0.05) associations of IHD with PM2.5 mass, nitrate, elemental carbon (EC), copper (Cu), and secondary organics and the sources gas- and diesel-fueled vehicles, meat cooking, and high-sulfur fuel combustion. The hazard ratio estimate of 1.19 (95% CI: 1.08, 1.31) for IHD in association with a 10-μg/m3 increase in PM2.5 is consistent with findings from the American Cancer Society cohort. We also observed significant positive associations between IHD and several UF components including EC, Cu, metals, and mobile sources. Conclusions Using an emissions-based model with a 4-km spatial scale, we observed significant positive associations between IHD mortality and both fine and ultrafine particle species and sources. Our results suggest that the exposure model effectively measured local exposures and facilitated the examination of the relative toxicity of particle species. Citation Ostro B, Hu J, Goldberg D, Reynolds P, Hertz A, Bernstein L, Kleeman MJ. 2015. 
Associations of mortality with long-term exposures to fine and ultrafine particles, species and sources: results from the California Teachers Study cohort. Environ Health Perspect 123:549–556; http://dx.doi.org/10.1289/ehp.1408565 PMID:25633926
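The reported hazard ratio reflects simple Cox-model arithmetic: a log-linear coefficient β per unit exposure gives HR = exp(β·Δ) for a Δ-unit increase. In this sketch, β and its standard error are reverse-engineered from the published 1.19 (95% CI: 1.08, 1.31) estimate purely for illustration:

```python
import math

def hazard_ratio(beta, se, delta=10.0, z=1.96):
    """Hazard ratio and 95% CI implied by a Cox coefficient `beta`
    (per 1 unit of exposure) for a `delta`-unit increase."""
    return (math.exp(beta * delta),
            math.exp((beta - z * se) * delta),
            math.exp((beta + z * se) * delta))

# per-1-µg/m³ coefficient consistent with HR 1.19 per 10 µg/m³ PM2.5
beta = math.log(1.19) / 10.0
se = (math.log(1.31) - math.log(1.19)) / (1.96 * 10.0)
hr, lo, hi = hazard_ratio(beta, se)
```

Because the interval is symmetric on the log scale, the implied lower bound exp(2·ln 1.19 − ln 1.31) ≈ 1.08 matches the reported CI.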
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ford, S R; Dreger, D S; Phillips, W S
2008-07-16
Inversions for regional attenuation (1/Q) of Lg are performed in two different regions. The path attenuation component of the Lg spectrum is isolated using the coda-source normalization method, which corrects the Lg spectral amplitude for the source using the stable, coda-derived source spectra. Tomographic images of Northern California agree well with one-dimensional (1-D) Lg Q estimated from five different methods. We note there is some tendency for tomographic smoothing to increase Q relative to targeted 1-D methods. For example, in the San Francisco Bay Area, which has high attenuation relative to the rest of its region, Q is over-estimated by approximately 30. Coda-source normalized attenuation tomography is also carried out for the Yellow Sea/Korean Peninsula (YSKP), where output parameters (site, source, and path terms) are compared with those from the amplitude tomography method of Phillips et al. (2005) as well as a new method that ties the source term to the MDAC formulation (Walter and Taylor, 2001). The source terms show similar scatter between the coda-source corrected and MDAC source perturbation methods, whereas the amplitude method has the greatest correlation with estimated true source magnitude. The coda-source better represents the source spectra compared to the estimated magnitude, which could be the cause of the scatter. The similarity in the source terms between the coda-source and MDAC-linked methods shows that the latter may approximate the effect of the former, and therefore could be useful in regions without coda-derived sources. The site terms from the MDAC-linked method correlate slightly with global Vs30 measurements. While the coda-source and amplitude ratio methods do not correlate with Vs30 measurements, they do correlate with one another, which provides confidence that the two methods are consistent.
The path 1/Q values are very similar between the coda-source and amplitude ratio methods, except for small differences in the Da-xin-anling Mountains in the northern YSKP. However, there is one large difference between the MDAC-linked method and the others in the region near stations TJN and INCN, which points to site effect as the cause for the difference.
10 CFR 40.41 - Terms and conditions of licenses.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 10 Energy 1 2010-01-01 2010-01-01 false Terms and conditions of licenses. 40.41 Section 40.41 Energy NUCLEAR REGULATORY COMMISSION DOMESTIC LICENSING OF SOURCE MATERIAL Licenses § 40.41 Terms and... the regulations in this part shall confine his possession and use of source or byproduct material to...
Nonlinear synthesis of infrasound propagation through an inhomogeneous, absorbing atmosphere.
de Groot-Hedlin, C D
2012-08-01
An accurate and efficient method to predict infrasound amplitudes from large explosions in the atmosphere is required for diverse source types, including bolides, volcanic eruptions, and nuclear and chemical explosions. A finite-difference, time-domain approach is developed to solve a set of nonlinear fluid dynamic equations for total pressure, temperature, and density fields rather than acoustic perturbations. Three key features for the purpose of synthesizing nonlinear infrasound propagation in realistic media are that it includes gravitational terms, it allows for acoustic absorption, including molecular vibration losses at frequencies well below the molecular vibration frequencies, and the environmental models are constrained to have axial symmetry, allowing a three-dimensional simulation to be reduced to two dimensions. Numerical experiments are performed to assess the algorithm's accuracy and the effect of source amplitudes and atmospheric variability on infrasound waveforms and shock formation. Results show that infrasound waveforms steepen and their associated spectra are shifted to higher frequencies for nonlinear sources, leading to enhanced infrasound attenuation. Results also indicate that nonlinear infrasound amplitudes depend strongly on atmospheric temperature and pressure variations. The solution for total field variables and insertion of gravitational terms also allows for the computation of other disturbances generated by explosions, including gravity waves.
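The steepening and spectral shift described here are generic to nonlinear advection and can be reproduced with a toy surrogate: the inviscid Burgers equation with a conservative upwind scheme (not the paper's finite-difference solution of the full fluid dynamic equations):

```python
import numpy as np

def burgers_step(u, dx, dt):
    """One conservative upwind step of inviscid Burgers' equation
    u_t + (u^2/2)_x = 0; upwinding to the left is valid because u > 0
    everywhere, so information moves to the right."""
    f = 0.5 * u**2
    return u - dt / dx * (f - np.roll(f, 1))   # periodic domain

n = 200
dx = 1.0 / n
x = np.arange(n) * dx
u = 1.5 + 0.5 * np.sin(2 * np.pi * x)           # smooth, all-positive wave
steep0 = np.max(np.abs(np.diff(u))) / dx        # initial max gradient
for _ in range(100):
    u = burgers_step(u, dx, dt=0.4 * dx / 2.0)  # CFL-safe step (max u = 2)
steep = np.max(np.abs(np.diff(u))) / dx         # gradient after nonlinear run
```

The maximum gradient grows as the compression side of the wave steepens toward shock formation; in the frequency domain that steepening is exactly the shift of energy to higher frequencies that enhances atmospheric absorption.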
Simulating the Heliosphere with Kinetic Hydrogen and Dynamic MHD Source Terms
Heerikhuisen, Jacob; Pogorelov, Nikolai; Zank, Gary
2013-04-01
The interaction between the ionized plasma of the solar wind (SW) emanating from the sun and the partially ionized plasma of the local interstellar medium (LISM) creates the heliosphere. The heliospheric interface is characterized by the tangential discontinuity known as the heliopause that separates the SW and LISM plasmas, and a termination shock on the SW side along with a possible bow shock on the LISM side. Neutral Hydrogen of interstellar origin plays a critical role in shaping the heliospheric interface, since it freely traverses the heliopause. Charge-exchange between H-atoms and plasma protons couples the ions and neutrals, but the mean free paths are large, resulting in non-equilibrated energetic ion and neutral components. In our model, source terms for the MHD equations are generated using a kinetic approach for hydrogen, and the key computational challenge is to resolve these sources with sufficient statistics. For steady-state simulations, statistics can accumulate over arbitrarily long time intervals. In this paper we discuss an approach for improving the statistics in time-dependent calculations, and present results from simulations of the heliosphere where the SW conditions at the inner boundary of the computation vary according to an idealized solar cycle.
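One simple way to accumulate source-term statistics when the solution is time-dependent is to average the noisy per-step Monte Carlo estimates exponentially in time. The sketch below is illustrative only and is not claimed to be the authors' scheme:

```python
import numpy as np

def smooth_sources(samples, alpha=0.05):
    """Exponentially average noisy per-step Monte Carlo source-term
    estimates: recent steps dominate, so slow (solar-cycle-scale) variation
    is tracked while the statistical noise variance is suppressed by roughly
    a factor alpha / (2 - alpha)."""
    out = np.empty(len(samples))
    acc = samples[0]
    for i, s in enumerate(samples):
        acc = (1.0 - alpha) * acc + alpha * s
        out[i] = acc
    return out

rng = np.random.default_rng(3)
raw = 1.0 + 0.2 * rng.standard_normal(2000)   # noisy charge-exchange source
smooth = smooth_sources(raw)
```

The trade-off mirrors the one in the abstract: a smaller `alpha` gives better statistics but responds more slowly to genuine time variation of the boundary conditions.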
TE/TM decomposition of electromagnetic sources
NASA Technical Reports Server (NTRS)
Lindell, Ismo V.
1988-01-01
Three methods are given by which bounded EM sources can be decomposed into two parts radiating transverse electric (TE) and transverse magnetic (TM) fields with respect to a given constant direction in space. The theory applies source equivalence and nonradiating source concepts, which lead to decomposition methods based on a recursive formula or two differential equations for the determination of the TE and TM components of the original source. Decompositions for a dipole in terms of point, line, and plane sources are studied in detail. The planar decomposition is seen to match an earlier result given by Clemmow (1963). As an application of the point decomposition method, it is demonstrated that the general exact image expression for the Sommerfeld half-space problem, previously derived through heuristic reasoning, can be obtained more straightforwardly through the present decomposition method.
Part 1 of a Computational Study of a Drop-Laden Mixing Layer
NASA Technical Reports Server (NTRS)
Okong'o, Nora A.; Bellan, Josette
2004-01-01
This first of three reports on a computational study of a drop-laden temporal mixing layer presents the results of direct numerical simulations (DNS) of well-resolved flow fields and the derivation of the large-eddy simulation (LES) equations that would govern the larger scales of a turbulent flow field. The mixing layer consisted of two counterflowing gas streams, one of which was initially laden with evaporating liquid drops. The gas phase was composed of two perfect gas species, the carrier gas and the vapor emanating from the drops, and was computed in an Eulerian reference frame, whereas each drop was tracked individually in a Lagrangian manner. The flow perturbations that were initially imposed on the layer caused mixing and eventual transition to turbulence. The DNS database obtained included transitional states for layers with various liquid mass loadings. For the DNS, the gas-phase equations were the compressible Navier-Stokes equations for conservation of momentum and additional conservation equations for total energy and species mass. These equations included source terms representing the effect of the drops on the mass, momentum, and energy of the gas phase. From the DNS equations, the expression for the irreversible entropy production (dissipation) was derived and used to determine the dissipation due to the source terms. The LES equations were derived by spatially filtering the DNS set and the magnitudes of the terms were computed at transitional states, leading to a hierarchy of terms to guide simplification of the LES equations. It was concluded that effort should be devoted to the accurate modeling of both the subgrid-scale fluxes and the filtered source terms, which were the dominant unclosed terms appearing in the LES equations.
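The filtering operation that turns DNS equations into LES equations can be sketched generically: apply an explicit top-hat filter to a resolved field and form a subgrid-scale flux as the difference between the filtered product and the product of filtered fields (illustrative code, not the authors' solver):

```python
import numpy as np

def box_filter(f, w=5):
    """Explicit top-hat (running-mean) filter of width w along every axis,
    a standard explicit LES filter; 'same' mode zero-pads at the edges."""
    k = np.ones(w) / w
    out = f.astype(float)
    for ax in range(out.ndim):
        out = np.apply_along_axis(np.convolve, ax, out, k, mode="same")
    return out

rng = np.random.default_rng(4)
u = rng.standard_normal((64, 64))        # stand-in for a resolved DNS field
u_bar = box_filter(u)                    # the filtered ("LES") field
# subgrid-scale flux: filtered product minus product of filtered fields
tau = box_filter(u * u) - u_bar * u_bar
```

For a scalar field, `tau` is the pointwise window variance of `u` and therefore nonnegative; in the full derivation these are exactly the unclosed subgrid-scale terms (alongside the filtered drop source terms) that the report concludes must be modeled.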
Enhancing GADRAS Source Term Inputs for Creation of Synthetic Spectra.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Horne, Steven M.; Harding, Lee
The Gamma Detector Response and Analysis Software (GADRAS) team has enhanced the source term input for the creation of synthetic spectra. These enhancements include the following: allowing users to programmatically provide source information to GADRAS through memory, rather than through a string limited to 256 characters; allowing users to provide their own source decay database information; and updating the default GADRAS decay database to fix errors and include coincident gamma information.
Localization of sound sources in a room with one microphone
NASA Astrophysics Data System (ADS)
Peić Tukuljac, Helena; Lissek, Hervé; Vandergheynst, Pierre
2017-08-01
Estimation of the location of sound sources is usually done using microphone arrays. Such settings provide an environment where we know the difference between the signals received at different microphones in terms of phase or attenuation, which enables localization of the sound sources. In our solution we exploit the properties of the room transfer function in order to localize a sound source inside a room with only one microphone. The shape of the room and the position of the microphone are assumed to be known. The design guidelines and limitations of the sensing matrix are given. The implementation is based on sparsity in terms of the voxels in the room that are occupied by a source. What is especially interesting about our solution is that it provides localization of the sound sources not only in the horizontal plane, but in terms of full 3D coordinates inside the room.
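The sensing idea reduces to a dictionary with one column per candidate voxel. Here the room transfer functions are replaced by random surrogate columns (computing real ones would require the room model the paper assumes), and a single source is recovered by one matching-pursuit step:

```python
import numpy as np

def localize_one_source(D, y):
    """Correlate the one-microphone recording y against each dictionary
    column (one surrogate room transfer function per candidate voxel) and
    return the best-matching voxel: one matching-pursuit step under a
    1-sparse occupancy model."""
    scores = np.abs(D.T @ y) / np.linalg.norm(D, axis=0)
    return int(np.argmax(scores))

rng = np.random.default_rng(5)
D = rng.standard_normal((256, 40))   # 40 candidate voxels, 256-sample responses
y = 2.5 * D[:, 17]                   # noiseless recording from voxel 17
```

Localization succeeds only to the extent that different voxels' transfer functions are mutually incoherent, which is exactly why the paper devotes attention to design guidelines and limitations of the sensing matrix.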
A theoretical prediction of the acoustic pressure generated by turbulence-flame front interactions
NASA Technical Reports Server (NTRS)
Huff, R. G.
1984-01-01
The equations of momentum and continuity are combined and linearized, yielding the one-dimensional nonhomogeneous acoustic wave equation. Three terms in the nonhomogeneous equation act as acoustic sources and are taken to be forcing functions acting on the homogeneous wave equation. The three source terms are: fluctuating entropy, turbulence gradients, and turbulence-flame interactions. Each source term is discussed. The turbulence-flame interaction source is used as the basis for computing the source acoustic pressure from the Fourier-transformed wave equation. Pressure fluctuations created in turbopump gas generators and turbines may act as a forcing function for turbine and propellant tube vibrations in Earth-to-orbit space propulsion systems and could reduce their life expectancy. A preliminary assessment of the acoustic pressure fluctuations in such systems is presented.
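The idea of source terms forcing the homogeneous wave equation is easy to demonstrate numerically. This leapfrog sketch solves the 1-D nonhomogeneous wave equation with a compact, finite-duration forcing standing in for the turbulence-flame interaction source (grid, pulse shape, and durations are illustrative):

```python
import numpy as np

def wave_with_source(n=201, steps=150, c=1.0):
    """Leapfrog integration of p_tt - c^2 p_xx = s(x, t) on [0, 1], with a
    compact one-cycle sine forcing at the midpoint acting as the acoustic
    source term; fixed (p = 0) ends, never reached by the outgoing pulse."""
    dx = 1.0 / (n - 1)
    dt = 0.5 * dx / c                      # CFL-stable time step
    p_old = np.zeros(n)
    p = np.zeros(n)
    mid = n // 2
    for it in range(steps):
        lap = np.zeros(n)
        lap[1:-1] = (p[2:] - 2.0 * p[1:-1] + p[:-2]) / dx**2
        s = np.zeros(n)
        if it < 40:                        # forcing acts for a finite time
            s[mid] = np.sin(2.0 * np.pi * it / 40.0) / dx
        p_old, p = p, 2.0 * p - p_old + dt**2 * (c**2 * lap + s)
    return p

p = wave_with_source()   # two symmetric pulses radiate from the source point
```

Once the forcing switches off, the radiated pulses persist and propagate, which is the time-domain picture behind computing the source acoustic pressure from the Fourier-transformed equation.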
Gault, Lora V.; Shultz, Mary; Davies, Kathy J.
2002-01-01
Objectives: This study compared the mapping of natural language patron terms to the Medical Subject Headings (MeSH) across six MeSH interfaces for the MEDLINE database. Methods: Test data were obtained from search requests submitted by patrons to the Library of the Health Sciences, University of Illinois at Chicago, over a nine-month period. Search request statements were parsed into separate terms or phrases. Using print sources from the National Library of Medicine, each parsed patron term was assigned corresponding MeSH terms. Each patron term was entered into each of the selected interfaces to determine how effectively it mapped to MeSH. Data were collected on mapping success, accessibility of the MeSH term within the mapped list, and the total number of MeSH choices within each list. Results: The selected MEDLINE interfaces do not map the same patron term in the same way, nor do they consistently lead to what is considered the appropriate MeSH term. Conclusions: If searchers utilize the MEDLINE database to its fullest potential by mapping to MeSH, the results of the mapping will vary between interfaces. This variance may ultimately impact the search results. These differences should be considered when choosing a MEDLINE interface and when instructing end users. PMID:11999175
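The mapping being compared can be pictured as a lookup through an entry-term (synonym) index after normalization; interfaces differ in both the index and the normalization they apply, which is why the same patron term maps differently. A toy sketch with a two-entry illustrative vocabulary (not actual MeSH data):

```python
def map_to_mesh(patron_term, entry_index):
    """Toy mapper: normalize the free-text patron term (case and whitespace),
    then look it up in an entry-term index that points to preferred MeSH
    headings. Real interfaces add stemming, permuted terms, and ranking."""
    key = " ".join(patron_term.lower().split())
    return entry_index.get(key, [])

# illustrative entry terms -> preferred headings (real MeSH is far larger)
ENTRY_INDEX = {
    "heart attack": ["Myocardial Infarction"],
    "high blood pressure": ["Hypertension"],
}
```

Two interfaces that share this vocabulary but normalize differently (say, one does not collapse whitespace or strip case) would already disagree on some patron terms, the effect the study measured at scale.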
Parazoo, Nicholas C.; Koven, Charles D.; Lawrence, David M.; ...
2018-01-12
Thaw and release of permafrost carbon (C) due to climate change is likely to offset increased vegetation C uptake in northern high-latitude (NHL) terrestrial ecosystems. Models project that this permafrost C feedback may act as a slow leak, in which case detection and attribution of the feedback may be difficult. The formation of talik, a subsurface layer of perennially thawed soil, can accelerate permafrost degradation and soil respiration, ultimately shifting the C balance of permafrost-affected ecosystems from long-term C sinks to long-term C sources. It is imperative to understand and characterize mechanistic links between talik, permafrost thaw, and respiration of deep soil C to detect and quantify the permafrost C feedback. Here, we use the Community Land Model (CLM) version 4.5, a permafrost and biogeochemistry model, in comparison to long-term deep borehole data along North American and Siberian transects, to investigate thaw-driven C sources in NHL (>55°N) from 2000 to 2300. Widespread talik at depth is projected across most of the NHL permafrost region (14 million km²) by 2300, 6.2 million km² of which is projected to become a long-term C source, emitting 10 Pg C by 2100, 50 Pg C by 2200, and 120 Pg C by 2300, with few signs of slowing. Roughly half of the projected C source region is in predominantly warm sub-Arctic permafrost following talik onset. This region emits only 20 Pg C by 2300, but the CLM4.5 estimate may be biased low by not accounting for deep C in yedoma. Accelerated decomposition of deep soil C following talik onset shifts the ecosystem C balance away from surface-dominant processes (photosynthesis and litter respiration), but sink-to-source transition dates are delayed by 20–200 years by high ecosystem productivity, such that talik peaks early (~2050s, although borehole data suggest sooner) and the C source transition peaks late (~2150–2200).
The remaining C source region in cold northern Arctic permafrost, which shifts to a net source early (late 21st century), emits 5 times more C (95 Pg C) by 2300, much of it prior to talik formation, due to the high decomposition rates of shallow, young C in organic-rich soils coupled with low productivity. Our results provide important clues signaling imminent talik onset and C source transition, including (1) late cold-season (January–February) soil warming at depth (~2 m), (2) increasing cold-season emissions (November–April), and (3) enhanced respiration of deep, old C in warm permafrost and of young, shallow C in organic-rich cold permafrost soils. Our results suggest a mosaic of processes that govern carbon source-to-sink transitions at high latitudes and emphasize the urgency of monitoring soil thermal profiles, organic C age and content, cold-season CO₂ emissions, and atmospheric ¹⁴CO₂ as key indicators of the permafrost C feedback.
NASA Technical Reports Server (NTRS)
Brinton, John (Technical Monitor); Silver, Eric
2005-01-01
We completed modifications to the new microcalorimeter system dedicated for use on the EBIT at NIST, which included: 1) a redesign of the x-ray calibration source from a direct electron impact source to one that irradiates the microcalorimeter with fluorescent x-rays, so that the resulting calibration lines are free of bremsstrahlung background; and 2) a significant improvement of the microcalorimeter electronic circuit to ensure long-term stability for lengthy experimental runs.
The Iterative Reweighted Mixed-Norm Estimate for Spatio-Temporal MEG/EEG Source Reconstruction.
Strohmeier, Daniel; Bekhti, Yousra; Haueisen, Jens; Gramfort, Alexandre
2016-10-01
Source imaging based on magnetoencephalography (MEG) and electroencephalography (EEG) allows for the non-invasive analysis of brain activity with high temporal and good spatial resolution. As the bioelectromagnetic inverse problem is ill-posed, constraints are required. For the analysis of evoked brain activity, spatial sparsity of the neuronal activation is a common assumption. It is often taken into account using convex constraints based on the ℓ1-norm. The resulting source estimates are, however, biased in amplitude and often suboptimal in terms of source selection due to high correlations in the forward model. In this work, we demonstrate that an inverse solver based on a block-separable penalty with a Frobenius norm per block and an ℓ0.5-quasinorm over blocks addresses both of these issues. For solving the resulting non-convex optimization problem, we propose the iterative reweighted Mixed Norm Estimate (irMxNE), an optimization scheme based on iterative reweighted convex surrogate optimization problems, which are solved efficiently using a block coordinate descent scheme and an active set strategy. We compare the proposed sparse imaging method to the dSPM and the RAP-MUSIC approach based on two MEG data sets. We provide empirical evidence based on simulations and analysis of MEG data that the proposed method improves on the standard Mixed Norm Estimate (MxNE) in terms of amplitude bias, support recovery, and stability.
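The reweighting idea in this abstract can be sketched on a toy grouped regression: each outer pass solves a weighted group-lasso surrogate of the ℓ0.5-quasinorm penalty, then updates the block weights from the current solution. The sketch below is a minimal, hypothetical analogue in NumPy (the function names and the plain ISTA inner solver are my own choices; irMxNE itself uses block coordinate descent with an active set strategy):

```python
import numpy as np

def group_soft_threshold(v, t):
    """Block soft-thresholding: proximal operator of t * ||.||_2."""
    nrm = np.linalg.norm(v)
    return np.zeros_like(v) if nrm <= t else (1.0 - t / nrm) * v

def irmxne_sketch(X, y, groups, lam=0.1, n_reweight=5, n_ista=200, eps=1e-8):
    """Toy iteratively reweighted group-sparse regression.

    Each reweighting pass solves a weighted group-lasso surrogate of the
    l0.5-quasinorm penalty with proximal gradient descent (ISTA)."""
    n, p = X.shape
    beta = np.zeros(p)
    w = np.ones(len(groups))           # surrogate weight per block
    L = np.linalg.norm(X, 2) ** 2      # Lipschitz constant of the gradient
    for _ in range(n_reweight):
        for _ in range(n_ista):
            grad = X.T @ (X @ beta - y)
            z = beta - grad / L
            for k, g in enumerate(groups):
                beta[g] = group_soft_threshold(z[g], lam * w[k] / L)
        # reweight: w_k = 1 / (2 sqrt(||beta_k||)), the l0.5 surrogate weight
        w = 1.0 / (2.0 * np.sqrt(np.array(
            [np.linalg.norm(beta[g]) for g in groups]) + eps))
    return beta
```

Blocks that shrink toward zero receive ever larger weights and are eliminated, while surviving blocks see their amplitude bias reduced, which is the mechanism behind the improved support recovery and amplitude bias claimed above.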
NASA Astrophysics Data System (ADS)
Fransiscus, Yunus; Purwanto, Edy
2017-05-01
Chlorella vulgaris was cultivated under different treatments to investigate the optimum condition for lipid production. First, autotrophic and heterotrophic conditions were applied to test the significance of carbon availability for the growth and lipid production of Chlorella vulgaris. For the same purpose, heterotrophic cultivation using glucose, fructose, and sucrose as carbon sources was implemented independently. The growth rate of Chlorella vulgaris under the autotrophic condition was much slower than under the heterotrophic conditions. The different carbon sources gave no significant difference in the growth pattern, but in terms of lipid production the differences were considerable. At lower concentrations of the carbon sources (3 and 6 g/L) there was only a slight difference in lipid production. At the higher concentration (12 g/L), glucose as a carbon source produced the highest lipid content, 60.18% (w/w), compared to fructose and sucrose, which produced 27.34% (w/w) and 18.19% (w/w), respectively.
NASA Astrophysics Data System (ADS)
Wilson, Jeffrey D.; Chaffee, Dalton W.; Wilson, Nathaniel C.; Lekki, John D.; Tokars, Roger P.; Pouch, John J.; Roberts, Tony D.; Battle, Philip R.; Floyd, Bertram; Lind, Alexander J.; Cavin, John D.; Helmick, Spencer R.
2016-09-01
A high generation rate photon-pair source using a dual element periodically-poled potassium titanyl phosphate (PP KTP) waveguide is described. The fully integrated photon-pair source consists of a 1064-nm pump diode laser, fiber-coupled to a dual element waveguide within which a pair of 1064-nm photons are up-converted to a single 532-nm photon in the first stage. In the second stage, the 532-nm photon is down-converted to an entangled photon-pair at 800 nm and 1600 nm which are fiber-coupled at the waveguide output. The photon-pair source features a high pair generation rate, a compact power-efficient package, and continuous wave (CW) or pulsed operation. This is a significant step towards the long term goal of developing sources for high-rate Quantum Key Distribution (QKD) to enable Earth-space secure communications. Characterization and test results are presented. Details and preliminary results of a laboratory free space QKD experiment with the B92 protocol are also presented.
NASA Technical Reports Server (NTRS)
Wilson, Jeffrey D.; Chaffee, Dalton W.; Wilson, Nathaniel C.; Lekki, John D.; Tokars, Roger P.; Pouch, John J.; Roberts, Tony D.; Battle, Philip; Floyd, Bertram M.; Lind, Alexander J.;
2016-01-01
A high generation rate photon-pair source using a dual element periodically-poled potassium titanyl phosphate (PP KTP) waveguide is described. The fully integrated photon-pair source consists of a 1064-nanometer pump diode laser, fiber-coupled to a dual element waveguide within which a pair of 1064-nanometer photons are up-converted to a single 532-nanometer photon in the first stage. In the second stage, the 532-nanometer photon is down-converted to an entangled photon-pair at 800 nanometer and 1600 nanometer which are fiber-coupled at the waveguide output. The photon-pair source features a high pair generation rate, a compact power-efficient package, and continuous wave (CW) or pulsed operation. This is a significant step towards the long term goal of developing sources for high-rate Quantum Key Distribution (QKD) to enable Earth-space secure communications. Characterization and test results are presented. Details and preliminary results of a laboratory free-space QKD experiment with the B92 protocol are also presented.
Aerosol Source Attributions and Source-Receptor Relationships Across the Northern Hemisphere
NASA Technical Reports Server (NTRS)
Bian, Huisheng; Chin, Mian; Kucsera, Tom; Pan, Xiaohua; Darmenov, Anton; Colarco, Peter; Torres, Omar; Shults, Michael
2014-01-01
Emissions and long-range transport of air pollution pose major concerns for air quality and climate change. To better assess the impact of intercontinental transport of air pollution on regional and global air quality, ecosystems, and near-term climate change, the UN Task Force on Hemispheric Transport of Air Pollution (HTAP) is organizing a phase II activity (HTAP2) that includes global and regional model experiments and data analysis, focusing on ozone and aerosols. This study presents the initial results of the HTAP2 global aerosol modeling experiments. We will (a) evaluate the model results with surface and aircraft measurements, (b) examine the relative contributions of regional emissions and extra-regional sources to surface PM concentrations and column aerosol optical depth (AOD) over several NH pollution and dust source regions and the Arctic, and (c) quantify the source-receptor relationships in the pollution regions that reflect the sensitivity of regional aerosol amount to regional and extra-regional emission reductions.
Short-Term Memory Stages in Sign vs. Speech: The Source of the Serial Span Discrepancy
Hall, Matthew L.
2011-01-01
Speakers generally outperform signers when asked to recall a list of unrelated verbal items. This phenomenon is well established, but its source has remained unclear. In this study, we evaluate the relative contribution of the three main processing stages of short-term memory – perception, encoding, and recall – in this effect. The present study factorially manipulates whether American Sign Language (ASL) or English was used for perception, memory encoding, and recall in hearing ASL-English bilinguals. Results indicate that using ASL during both perception and encoding contributes to the serial span discrepancy. Interestingly, performing recall in ASL slightly increased span, ruling out the view that signing is in general a poor choice for short-term memory. These results suggest that despite the general equivalence of sign and speech in other memory domains, speech-based representations are better suited for the specific task of perception and memory encoding of a series of unrelated verbal items in serial order through the phonological loop. This work suggests that interpretation of performance on serial recall tasks in English may not translate straightforwardly to serial tasks in sign language. PMID:21450284
NASA Astrophysics Data System (ADS)
Thrysøe, A. S.; Løiten, M.; Madsen, J.; Naulin, V.; Nielsen, A. H.; Rasmussen, J. Juul
2018-03-01
The conditions in the edge and scrape-off layer (SOL) of magnetically confined plasmas determine the overall performance of the device, and it is of great importance to study and understand the mechanics that drive transport in those regions. If a significant amount of neutral molecules and atoms is present in the edge and SOL regions, those will influence the plasma parameters and thus the plasma confinement. In this paper, it is displayed how neutrals, described by a fluid model, introduce source terms in a plasma drift-fluid model due to inelastic collisions. The resulting source terms are included in a four-field drift-fluid model, and it is shown how an increasing neutral particle density in the edge and SOL regions influences the plasma particle transport across the last-closed-flux-surface. It is found that an appropriate gas puffing rate allows for the edge density in the simulation to be self-consistently maintained due to ionization of neutrals in the confined region.
Dynamic power balance analysis in JET
NASA Astrophysics Data System (ADS)
Matthews, G. F.; Silburn, S. A.; Challis, C. D.; Eich, T.; Iglesias, D.; King, D.; Sieglin, B.; Contributors, JET
2017-12-01
The full-scale realisation of nuclear fusion as an energy source requires a detailed understanding of power and energy balance in current experimental devices. In this paper we explore whether a global power balance model, in which some of the calibration factors applied to the source or sink terms are fitted to the data, can provide insight into possible causes of any discrepancies in power and energy balance seen in the JET tokamak. We show that the dynamics of the power balance can only be properly reproduced by including the changes in the thermal stored energy, which therefore provides an additional opportunity to cross-calibrate other terms in the power balance equation. Although the results are inconclusive with respect to the original goal of identifying the source of the discrepancies in the energy balance, we do find that with optimised parameters an extremely good prediction of the total power measured at the outer divertor target can be obtained over a wide range of pulses with time resolution up to ∼25 ms.
NASA Astrophysics Data System (ADS)
Kamiyama, M.; O'Rourke, M. J.; Flores-Berrones, R.
1992-09-01
A new type of semi-empirical expression for scaling strong-motion peaks in terms of seismic source, propagation path, and local site conditions is derived. Peak acceleration, peak velocity, and peak displacement are analyzed in a similar fashion because they are interrelated. However, emphasis is placed on the peak velocity which is a key ground motion parameter for lifeline earthquake engineering studies. With the help of seismic source theories, the semi-empirical model is derived using strong motions obtained in Japan. In the derivation, statistical considerations are used in the selection of the model itself and the model parameters. Earthquake magnitude M and hypocentral distance r are selected as independent variables and the dummy variables are introduced to identify the amplification factor due to individual local site conditions. The resulting semi-empirical expressions for the peak acceleration, velocity, and displacement are then compared with strong-motion data observed during three earthquakes in the U.S. and Mexico.
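A semi-empirical scaling of this kind reduces to an ordinary least-squares fit once a functional form is chosen. The sketch below assumes a generic form, log10(peak) = c0 + c1·M + c2·log10(r) plus a dummy variable per local site class, purely for illustration; it is not the paper's actual regression model:

```python
import numpy as np

def fit_peak_scaling(M, r, site_id, log_peak, n_sites):
    """Least-squares fit of an illustrative attenuation relation:
    log10(peak) = c0 + c1*M + c2*log10(r) + d_s (site dummy),
    with magnitude M and hypocentral distance r as independent variables."""
    n = len(M)
    A = np.zeros((n, 3 + n_sites))
    A[:, 0] = 1.0                       # intercept c0
    A[:, 1] = M                         # magnitude term c1
    A[:, 2] = np.log10(r)               # distance term c2
    A[np.arange(n), 3 + site_id] = 1.0  # one dummy variable per site class
    # drop the site-0 dummy so the design matrix is not collinear
    # with the intercept (site 0 becomes the reference class)
    A = np.delete(A, 3, axis=1)
    coef, *_ = np.linalg.lstsq(A, log_peak, rcond=None)
    return coef  # [c0, c1, c2, d_1, ..., d_{n_sites-1}]
```

The returned site coefficients play the role of the amplification factors for individual local site conditions described above.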
NASA Astrophysics Data System (ADS)
Saunier, Olivier; Mathieu, Anne; Didier, Damien; Tombette, Marilyne; Quélo, Denis; Winiarek, Victor; Bocquet, Marc
2013-04-01
The Chernobyl nuclear accident and, more recently, the Fukushima accident highlighted that the largest source of error in consequence assessment is the source term estimation, including the time evolution of the release rate and its distribution between radioisotopes. Inverse modelling methods have proved to be efficient for assessing the source term in accidental situations (Gudiksen, 1989; Krysta and Bocquet, 2007; Stohl et al., 2011; Winiarek et al., 2012). These methods combine environmental measurements and atmospheric dispersion models. They have recently been applied to the Fukushima accident. Most existing approaches are designed to use air sampling measurements (Winiarek et al., 2012) and some of them also use deposition measurements (Stohl et al., 2012; Winiarek et al., 2013). During the Fukushima accident, such measurements were far less numerous, and less well distributed within Japan, than the dose rate measurements. Gamma dose rate measurements, in contrast, were numerous, well distributed within Japan, and offered a high temporal frequency, efficiently documenting the evolution of the contamination. However, dose rate data are not as easy to use as air sampling measurements, and until now they were not used in inverse modelling approaches. Indeed, dose rate data result from all the gamma emitters present in the ground and in the atmosphere in the vicinity of the receptor. They do not allow one to determine the isotopic composition or to distinguish the plume contribution from wet deposition. The presented approach proposes a way to use dose rate measurements in an inverse modelling approach without the need of a priori information on emissions. The method proved to be efficient and reliable when applied to the Fukushima accident. The emissions of the 8 main isotopes Xe-133, Cs-134, Cs-136, Cs-137, Ba-137m, I-131, I-132 and Te-132 have been assessed.
The Daiichi power plant events (such as ventings, explosions…) known to have caused atmospheric releases are well identified in the retrieved source term, except for the unit 3 explosion, where no measurement was available. Comparisons between simulations of atmospheric dispersion and deposition of the retrieved source term show good agreement with environmental observations. Moreover, an important outcome of this study is that the method proved to be perfectly suited to crisis management and should contribute to improving our response in case of a nuclear accident.
NASA Astrophysics Data System (ADS)
Tichý, Ondřej; Šmídl, Václav; Hofman, Radek; Šindelářová, Kateřina; Hýža, Miroslav; Stohl, Andreas
2017-10-01
In the fall of 2011, iodine-131 (131I) was detected at several radionuclide monitoring stations in central Europe. After investigation, the International Atomic Energy Agency (IAEA) was informed by Hungarian authorities that 131I was released from the Institute of Isotopes Ltd. in Budapest, Hungary. It was reported that a total activity of 342 GBq of 131I was emitted between 8 September and 16 November 2011. In this study, we use the ambient concentration measurements of 131I to determine the location of the release as well as its magnitude and temporal variation. As the location of the release and an estimate of the source strength eventually became known, this accident represents a realistic test case for inversion models. For our source reconstruction, we use no prior knowledge. Instead, we estimate the source location and emission variation using only the available 131I measurements. Subsequently, we use the partial information about the source term available from the Hungarian authorities for validation of our results. For the source determination, we first perform backward runs of atmospheric transport models and obtain source-receptor sensitivity (SRS) matrices for each grid cell of our study domain. We use two dispersion models, FLEXPART and HYSPLIT, driven with meteorological analysis data from the Global Forecast System (GFS) and from European Centre for Medium-Range Weather Forecasts (ECMWF) weather forecast models. Second, we use a recently developed inverse method, least-squares with adaptive prior covariance (LS-APC), to determine the 131I emissions and their temporal variation from the measurements and computed SRS matrices. For each grid cell of our simulation domain, we evaluate the probability that the release was generated in that cell using Bayesian model selection. The model selection procedure also provides information about the most suitable dispersion model for the source term reconstruction.
Third, we select the most probable location of the release with its associated source term and perform a forward model simulation to study the consequences of the iodine release. Results of these procedures are compared with the known release location and reported information about its time variation. We find that our algorithm could successfully locate the actual release site. The estimated release period is also in agreement with the values reported by IAEA and the reported total released activity of 342 GBq is within the 99 % confidence interval of the posterior distribution of our most likely model.
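The linear-algebra core shared by such inversions, once the SRS matrices have been precomputed, can be illustrated with a toy nonnegative, Tikhonov-regularized least-squares reconstruction. This is a deliberately simplified stand-in for LS-APC (whose Bayesian adaptive prior is not reproduced here), with hypothetical dimensions:

```python
import numpy as np

def reconstruct_source(H, y, alpha=1e-3, n_iter=5000):
    """Toy source-term inversion: given a source-receptor sensitivity matrix H
    (measurements x release intervals) and observations y, recover nonnegative
    release rates x by projected gradient descent on the regularized problem
    min 0.5*||H x - y||^2 + 0.5*alpha*||x||^2  subject to  x >= 0."""
    m, n = H.shape
    x = np.zeros(n)
    L = np.linalg.norm(H, 2) ** 2 + alpha     # Lipschitz bound for the step size
    for _ in range(n_iter):
        grad = H.T @ (H @ x - y) + alpha * x
        x = np.maximum(x - grad / L, 0.0)     # gradient step + nonnegativity
    return x
```

The nonnegativity projection plays the role of the physical constraint that release rates cannot be negative; the regularization weight stands in for the prior covariance that LS-APC adapts automatically.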
A shock capturing technique for hypersonic, chemically relaxing flows
NASA Technical Reports Server (NTRS)
Eberhardt, S.; Brown, K.
1986-01-01
A fully coupled, shock capturing technique is presented for chemically reacting flows at high Mach numbers. The technique makes use of a total variation diminishing (TVD) dissipation operator which results in sharp, crisp shocks. The eigenvalues and eigenvectors of the fully coupled system, which includes species conversion equations in addition to the gas dynamics equations, are analytically derived for a general reacting gas. Species production terms for a model dissociating gas are introduced and are included in the algorithm. The convective terms are solved using a first-order TVD scheme while the source terms are solved using a fourth-order Runge-Kutta scheme to enhance stability. Results from one-dimensional numerical experiments are shown for a two species and a three species gas.
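The split treatment of convective and source terms described above can be sketched for a 1-D scalar model problem u_t + a·u_x = s(u): a first-order upwind (hence TVD) convective update followed by a classical fourth-order Runge-Kutta integration of the source term. This is a toy analogue under stated assumptions (periodic boundaries, a > 0), not the paper's fully coupled reacting-gas algorithm:

```python
import numpy as np

def step_advection_reaction(u, a, dt, dx, source):
    """One split step for u_t + a*u_x = source(u):
    first-order upwind (TVD) for convection, then classical RK4 for the
    source term, which enhances stability for stiff production terms."""
    # convective update: first-order upwind, assuming a > 0, periodic BCs
    u = u - a * dt / dx * (u - np.roll(u, 1))
    # source update with classical fourth-order Runge-Kutta
    k1 = source(u)
    k2 = source(u + 0.5 * dt * k1)
    k3 = source(u + 0.5 * dt * k2)
    k4 = source(u + dt * k3)
    return u + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
```

Treating the (often stiff) production terms with a higher-order ODE integrator while the convective terms use a robust low-order TVD update mirrors the stability rationale given in the abstract.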
Predicting Near-Term Water Quality from Satellite Observations of Watershed Conditions
NASA Astrophysics Data System (ADS)
Weiss, W. J.; Wang, L.; Hoffman, K.; West, D.; Mehta, A. V.; Lee, C.
2017-12-01
Despite the strong influence of watershed conditions on source water quality, most water utilities and water resource agencies do not currently have the capability to monitor watershed sources of contamination with great temporal or spatial detail. Typically, knowledge of source water quality is limited to periodic grab sampling; automated monitoring of a limited number of parameters at a few select locations; and/or monitoring relevant constituents at a treatment plant intake. While important, such observations are not sufficient to inform proactive watershed or source water management at a monthly or seasonal scale. Satellite remote sensing data on the other hand can provide a snapshot of an entire watershed at regular, sub-monthly intervals, helping analysts characterize watershed conditions and identify trends that could signal changes in source water quality. Accordingly, the authors are investigating correlations between satellite remote sensing observations of watersheds and source water quality, at a variety of spatial and temporal scales and lags. While correlations between remote sensing observations and direct in situ measurements of water quality have been well described in the literature, there are few studies that link remote sensing observations across a watershed with near-term predictions of water quality. In this presentation, the authors will describe results of statistical analyses and discuss how these results are being used to inform development of a desktop decision support tool to support predictive application of remote sensing data. Predictor variables under evaluation include parameters that describe vegetative conditions; parameters that describe climate/weather conditions; and non-remote sensing, in situ measurements. Water quality parameters under investigation include nitrogen, phosphorus, organic carbon, chlorophyll-a, and turbidity.
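One simple screening statistic behind this kind of predictor evaluation is the correlation between a watershed time series and the downstream water-quality series over a range of positive lags. The sketch below is generic and hypothetical (the variable names are illustrative), not the authors' decision-support tool:

```python
import numpy as np

def lagged_correlation(predictor, response, max_lag):
    """Pearson correlation between a watershed predictor series (e.g. a
    vegetation index) and a water-quality series (e.g. turbidity) at each
    lag from 0 to max_lag; the lag with the strongest correlation suggests
    the lead time at which the predictor is informative."""
    n = len(predictor)
    out = {}
    for lag in range(max_lag + 1):
        out[lag] = np.corrcoef(predictor[:n - lag], response[lag:])[0, 1]
    return out
```

In practice the series would be aggregated to the monthly or seasonal scale mentioned above before screening.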
NASA Astrophysics Data System (ADS)
Kis, A.; Lemperger, I.; Wesztergom, V.; Menvielle, M.; Szalai, S.; Novák, A.; Hada, T.; Matsukiyo, S.; Lethy, A. M.
2016-12-01
The magnetotelluric method is widely applied for the investigation of subsurface structures by imaging the spatial distribution of electric conductivity. The method is based on the experimental determination of the surface electromagnetic impedance tensor (Z) from surface geomagnetic and telluric recordings in two perpendicular orientations. In practical exploration, the accurate estimation of Z necessitates the application of robust statistical methods for two reasons: 1) the geomagnetic and telluric time series are contaminated by man-made noise components, and 2) the non-homogeneous behavior of ionospheric current systems in the period range of interest (ELF-ULF and longer periods) results in systematic deviation of the impedance of individual time windows. Robust statistics address both problems in the estimation of Z for subsurface investigations. However, accurate analysis of the long-term temporal variation of the first and second statistical moments of Z may provide valuable information about the characteristics of the ionospheric source current systems. Temporal variation of the extent, spatial variability, and orientation of the ionospheric source currents has specific effects on the surface impedance tensor. The twenty-year-long geomagnetic and telluric recordings of the Nagycenk Geophysical Observatory provide a unique opportunity to reconstruct the so-called magnetotelluric source effect and obtain information about the spatial and temporal behavior of ionospheric source currents at mid-latitudes. A detailed investigation of the time series of the surface electromagnetic impedance tensor has been carried out in different frequency classes of the ULF range. The presentation aims to provide a brief review of our results related to long-term periodic modulations, up to the solar-cycle scale, and to possible deviations of the electromagnetic impedance and thus of the reconstructed equivalent ionospheric source effects.
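For a single frequency band, the standard (non-robust) impedance estimate underlying such analyses solves E = Z·B in the least-squares sense over many time-window spectra. A minimal sketch, with hypothetical channel names, is:

```python
import numpy as np

def impedance_tensor(Ex, Ey, Bx, By):
    """Least-squares estimate of the 2x2 surface impedance tensor Z from
    Fourier coefficients of the horizontal telluric (E) and geomagnetic (B)
    channels over many time windows, E = Z B. A minimal, non-robust sketch;
    robust variants downweight noisy or source-affected windows."""
    B = np.vstack([Bx, By])               # 2 x n_windows complex spectra
    E = np.vstack([Ex, Ey])
    # Z = E B^H (B B^H)^{-1}, the normal-equations solution of E = Z B
    return E @ B.conj().T @ np.linalg.inv(B @ B.conj().T)
```

The source effect discussed above shows up as systematic, window-to-window deviations of this estimate, which is why the higher statistical moments of Z carry information about the ionospheric source currents.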
The long-term problems of contaminated land: Sources, impacts and countermeasures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baes, C.F. III
1986-11-01
This report examines the various sources of radiological land contamination; its extent; its impacts on man, agriculture, and the environment; countermeasures for mitigating exposures; radiological standards; alternatives for achieving land decontamination and cleanup; and possible alternatives for utilizing the land. The major potential sources of extensive long-term land contamination with radionuclides, in order of decreasing extent, are nuclear war, detonation of a single nuclear weapon (e.g., a terrorist act), serious reactor accidents, and non-fission nuclear weapons accidents that disperse the nuclear fuels (termed "broken arrows").
Implementation of the Leaching Environmental Assessment Framework
New leaching tests are available in the U.S. for developing more accurate source terms for use in fate and transport models. For beneficial use or disposal, the use of the leaching environmental assessment framework (LEAF) will provide leaching results that reflect field condit...
A radio source occultation experiment with comet Austin 1982g, with unusual results
NASA Technical Reports Server (NTRS)
De Pater, I.; Ip, W.-H.
1984-01-01
A radio source occultation by comet Austin 1982g was observed on September 15-16, 1982. A change in the apparent position of 1242 + 41 by 1.3 arcsec occurred when the source was 220,000 km away from the cometary ion tail. If this change was due to refraction by the cometary plasma, it indicates an electron density of the plasma of about 10,000/cu cm. When the radio source was on the other side of the plasma tail, at a distance of 230,000 km, the position angle of the electric vector of the radio source changed gradually over about 140 deg within two hours. This observation cannot be explained in terms of ionospheric Faraday rotation, and results from either an intrinsic change in the radio source or Faraday rotation in the cometary plasma due to a change in the direction and/or strength of the magnetic field. In the latter case, the cometary coma must have an electron density and a magnetic field strength orders of magnitude larger than current theories predict.
ERIC Educational Resources Information Center
Horner, Jan; Michaud-Oystryk, Nicole
1995-01-01
An experiment investigated whether the format in which information is stored affects the outcomes of ready reference transactions in terms of efficiency and accuracy. Results indicate that bibliographic questions are more efficiently answered online, while factual questions are more efficiently answered with print sources. Results of the study are…
Sensitivity Analysis Tailored to Constrain 21st Century Terrestrial Carbon-Uptake
NASA Astrophysics Data System (ADS)
Muller, S. J.; Gerber, S.
2013-12-01
The long-term fate of terrestrial carbon (C) in response to climate change remains a dominant source of uncertainty in Earth-system model projections. Increasing atmospheric CO2 could be mitigated by long-term net uptake of C, through processes such as increased plant productivity due to "CO2-fertilization". Conversely, atmospheric conditions could be exacerbated by long-term net release of C, through processes such as increased decomposition due to higher temperatures. This balance is an important area of study, and a major source of uncertainty in long-term (>year 2050) projections of planetary response to climate change. We present results from an innovative application of sensitivity analysis to LM3V, a dynamic global vegetation model (DGVM), intended to identify observed/observable variables that are useful for constraining long-term projections of C-uptake. We analyzed the sensitivity of cumulative C-uptake by 2100, as modeled by LM3V in response to IPCC AR4 scenario climate data (1860-2100), to perturbations in over 50 model parameters. We concurrently analyzed the sensitivity of over 100 observable model variables, during the extant record period (1970-2010), to the same parameter changes. By correlating the sensitivities of observable variables with the sensitivity of long-term C-uptake we identified model calibration variables that would also constrain long-term C-uptake projections. LM3V employs a coupled carbon-nitrogen cycle to account for N-limitation, and we find that N-related variables have an important role to play in constraining long-term C-uptake. This work has implications for prioritizing field campaigns to collect global data that can help reduce uncertainties in the long-term land-atmosphere C-balance. 
Though the results of this study are specific to LM3V, the processes that characterize this model are not completely divorced from other DGVMs (or reality), and our approach provides valuable insights into how data can be leveraged to better constrain projections for the land carbon sink.
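The correlation-based screening described in this record can be sketched with a toy model (a hypothetical three-parameter stand-in, not LM3V): perturb each parameter, record the sensitivities of several "observables" and of the long-term target, then rank observables by how well their sensitivity vectors correlate with the target's:

```python
import statistics

def model(p):
    # hypothetical stand-in for a DGVM: maps parameters to a few
    # "observable" diagnostics and a long-term target (C-uptake proxy)
    a, b, c = p
    obs = {"npp": 2.0 * a + 0.1 * c, "soil_n": b + 0.5 * c, "lai": a - b}
    target = 2.0 * a + 0.2 * c
    return obs, target

base, eps = (1.0, 1.0, 1.0), 1e-3
obs0, t0 = model(base)
sens = {k: [] for k in obs0}   # per-observable sensitivity vectors
tsens = []                     # target sensitivity vector
for i in range(len(base)):
    p = list(base)
    p[i] += eps
    obs, t = model(tuple(p))
    for k in obs0:
        sens[k].append((obs[k] - obs0[k]) / eps)
    tsens.append((t - t0) / eps)

def corr(x, y):
    # Pearson correlation between two sensitivity vectors
    mx, my = statistics.mean(x), statistics.mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

# observables whose parameter sensitivities track the target's are the
# best candidates for constraining the long-term projection
ranked = sorted(obs0, key=lambda k: -abs(corr(sens[k], tsens)))
```

In this toy setup "npp" shares its dominant parameters with the target, so it ranks first; the study's actual ranking of LM3V variables is, of course, model-specific.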
High-Order Residual-Distribution Hyperbolic Advection-Diffusion Schemes: 3rd-, 4th-, and 6th-Order
NASA Technical Reports Server (NTRS)
Mazaheri, Alireza R.; Nishikawa, Hiroaki
2014-01-01
In this paper, spatially high-order Residual-Distribution (RD) schemes using the first-order hyperbolic system method are proposed for general time-dependent advection-diffusion problems. The corresponding second-order time-dependent hyperbolic advection-diffusion scheme was first introduced in [NASA/TM-2014-218175, 2014], where rapid convergence over each physical time step, typically in fewer than five Newton iterations, was shown. In that method, the time-dependent hyperbolic advection-diffusion system (linear and nonlinear) was discretized by the second-order upwind RD scheme in a unified manner, and the system of implicit residual equations was solved efficiently by Newton's method over every physical time step. In this paper, two techniques for the source term discretization are proposed: 1) reformulation of the source terms in their divergence forms, and 2) a correction to the trapezoidal rule for the source term discretization. Third-, fourth-, and sixth-order RD schemes are then proposed with the above techniques that, relative to the second-order RD scheme, only cost the evaluation of either the first derivative or both the first and second derivatives of the source terms. A special fourth-order RD scheme is also proposed that is even less computationally expensive than the third-order RD schemes. The second-order Jacobian formulation was used for all the proposed high-order schemes. Numerical results are then presented for both steady and time-dependent, linear and nonlinear advection-diffusion problems. It is shown that these newly developed high-order RD schemes are remarkably efficient and produce both the solutions and the gradients to the design order of accuracy, with rapid convergence over each physical time step, typically in fewer than ten Newton iterations.
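The second technique, a derivative-based correction to the trapezoidal rule, can be illustrated with the classical Euler-Maclaurin end-point correction, which buys two extra orders of accuracy using only first derivatives of the source. This is a minimal sketch of the idea, not necessarily the exact formula used in the paper:

```python
import math

def trap(s, a, b):
    # plain trapezoidal rule over one cell: O(h^3) local error
    return 0.5 * (b - a) * (s(a) + s(b))

def trap_corrected(s, ds, a, b):
    # Euler-Maclaurin end-point correction: only first derivatives of the
    # source are needed, yet the local error drops to O(h^5)
    h = b - a
    return 0.5 * h * (s(a) + s(b)) - (h ** 2 / 12.0) * (ds(b) - ds(a))

s, ds = math.sin, math.cos
exact = math.cos(0.0) - math.cos(0.5)   # integral of sin over [0, 0.5]
err_plain = abs(trap(s, 0.0, 0.5) - exact)
err_corr = abs(trap_corrected(s, ds, 0.0, 0.5) - exact)
```

On this single cell the corrected rule is already two to three orders of magnitude more accurate than the plain rule, which is the mechanism that lets a scheme reach higher order without second derivatives of the source.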
Who Meets the Contraceptive Needs of Young Women in Sub-Saharan Africa?
Radovich, Emma; Dennis, Mardieh L; Wong, Kerry L M; Ali, Moazzam; Lynch, Caroline A; Cleland, John; Owolabi, Onikepe; Lyons-Amos, Mark; Benova, Lenka
2018-03-01
Despite efforts to expand contraceptive access for young people, few studies have considered where young women (age 15-24) in low- and middle-income countries obtain modern contraceptives and how the capacity and content of care of sources used compares with older users. We examined the first source of respondents' current modern contraceptive method using the most recent Demographic and Health Survey since 2000 for 33 sub-Saharan African countries. We classified providers according to sector (public/private) and capacity to provide a range of short- and long-term methods (limited/comprehensive). We also compared the content of care obtained from different providers. Although the public and private sectors were both important sources of family planning (FP), young women (15-24) used more short-term methods obtained from limited-capacity, private providers, compared with older women. The use of long-term methods among young women was low, but among those users, more than 85% reported a public sector source. Older women (25+) were significantly more likely to utilize a comprehensive provider in either sector compared with younger women. Although FP users of all ages reported poor content of care across all providers, young women had even lower content of care. The results suggest that method and provider choice are strongly linked, and recent efforts to increase access to long-term methods among young women may be restricted by where they seek care. Interventions to increase adolescents' access to a range of FP methods and quality counseling should target providers frequently used by young people, including limited-capacity providers in the private sector. Copyright © 2017 The Society for Adolescent Health and Medicine. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Murru, Maura; Falcone, Giuseppe; Console, Rodolfo
2016-04-01
The present study is carried out in the framework of the Seismic Hazard Center (CPS) of INGV, under the agreement signed in 2015 with the Department of Civil Protection to develop a new seismic hazard model for the country, updating the current reference model (MPS04-S1; zonesismiche.mi.ingv.it and esse1.mi.ingv.it) released between 2004 and 2006. In this initiative, we contribute the Long-Term Stress Transfer (LTST) model, which provides the annual occurrence rate of seismic events over the entire Italian territory, above a minimum magnitude of Mw 4.5, in bins of 0.1 magnitude units on geographical cells of 0.1° x 0.1°. Our methodology fuses a statistical time-dependent renewal model (Brownian Passage Time, BPT; Matthews et al., 2002) with a physical model that accounts for the permanent stress change imposed on a seismogenic source by earthquakes occurring on surrounding sources. For each catalog considered (historical, instrumental, and individual seismogenic sources) we determined a distinct rate value for each 0.1° x 0.1° cell for the next 50 yrs. If a cell falls within one of the sources in question, we adopted the respective rate value, which refers only to the characteristic event magnitude; this rate is divided by the number of grid cells falling on the horizontal projection of the source. If instead a cell falls outside any seismic source, we used the average rate obtained from the historical and instrumental catalogs with the method of Frankel (1995). The annual occurrence rate was computed for each of the three considered distributions (Poisson, BPT, and BPT with inclusion of stress transfer).
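The BPT renewal forecast can be sketched as follows. The BPT distribution is the inverse Gaussian, whose CDF has a closed form in terms of the normal CDF; all source parameters below (mean recurrence, aperiodicity, elapsed time) are hypothetical illustrations, not values from the study:

```python
import math

def phi(x):
    # standard normal CDF
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bpt_cdf(t, mu, alpha):
    # CDF of the Brownian Passage Time (inverse Gaussian) distribution.
    # mu: mean recurrence interval; alpha: aperiodicity (coeff. of variation)
    lam = mu / alpha ** 2
    u = math.sqrt(lam / t)
    return phi(u * (t / mu - 1.0)) + math.exp(2.0 * lam / mu) * phi(-u * (t / mu + 1.0))

def cond_prob(elapsed, window, mu, alpha):
    # probability of an event within `window` years, given `elapsed` years
    # have already passed without one (time-dependent renewal forecast)
    f_e = bpt_cdf(elapsed, mu, alpha)
    return (bpt_cdf(elapsed + window, mu, alpha) - f_e) / (1.0 - f_e)

# hypothetical source: 400-yr mean recurrence, aperiodicity 0.5,
# 300 yr elapsed since the last characteristic event, 50-yr window
p50 = cond_prob(300.0, 50.0, 400.0, 0.5)
```

Unlike a Poisson model, this conditional probability grows as the elapsed time approaches the mean recurrence interval, which is what makes the forecast time-dependent.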
NASA Astrophysics Data System (ADS)
Phillips-Smith, Catherine; Jeong, Cheol-Heon; Healy, Robert M.; Dabek-Zlotorzynska, Ewa; Celo, Valbona; Brook, Jeffrey R.; Evans, Greg
2017-08-01
The province of Alberta, Canada, is home to three oil sands regions which, combined, contain the third largest deposit of oil in the world. Of these, the Athabasca oil sands region is the largest. As part of Environment and Climate Change Canada's program in support of the Joint Canada-Alberta Implementation Plan for Oil Sands Monitoring, concentrations of trace elements in PM2.5 (particulate matter smaller than 2.5 µm in diameter) were measured through two campaigns that involved different methodologies: a long-term filter campaign and a short-term intensive campaign. In the long-term campaign, 24 h filter samples were collected once every 6 days over a 2-year period (December 2010-November 2012) at three air monitoring stations in the regional municipality of Wood Buffalo. For the intensive campaign (August 2013), hourly measurements were made with an online instrument at one air monitoring station; daily filter samples were also collected. The hourly and 24 h filter data were analyzed individually using positive matrix factorization. Seven emission sources of PM2.5 trace elements were thereby identified: two types of upgrader emissions, soil, haul road dust, biomass burning, and two sources of mixed origin. The upgrader emissions, soil, and haul road dust sources were identified by both methodologies; each methodology also identified a mixed source, but these exhibited more differences than similarities. The second upgrader emissions source and the biomass burning source were resolved only by the hourly and filter methodologies, respectively. The similarity of the receptor modeling results from the two methodologies provided reassurance as to the identity of the sources. Overall, much of the PM2.5-related trace element mass was found to be anthropogenic, or at least to be aerosolized through anthropogenic activities.
These emissions may in part explain the previously reported higher levels of trace elements in snow, water, and biota samples collected near the oil sands operations.
Ma, Xiao-xue; Wang, La-chun; Liao, Ling-ling
2015-01-01
Identifying the spatio-temporal distribution and sources of water pollutants is of great significance for efficient water quality management and pollution control in the Wenruitang River watershed, China. A total of twelve water quality parameters, including temperature, pH, dissolved oxygen (DO), total nitrogen (TN), ammonia nitrogen (NH4+-N), electrical conductivity (EC), turbidity (Turb), nitrite-N (NO2-), nitrate-N (NO3-), phosphate-P (PO4(3-)), total organic carbon (TOC), and silicate (SiO3(2-)), were analyzed from September 2008 to October 2009. A geographic information system (GIS) and principal component analysis (PCA) were used to determine the spatial distribution and to apportion the sources of pollutants. The results demonstrated that TN, NH4+-N, and PO4(3-) were the main pollutants during the normal-flow, wet, and dry periods, respectively, caused mainly by urban point sources and agricultural and rural non-point sources. In spatial terms, the order of pollution was tertiary river > secondary river > primary river, and water quality was worse in city zones than in the suburb and wetland zones regardless of river classification. In temporal terms, the order of pollution was dry period > wet period > normal-flow period. Population density, land use type, and water transfer affected the water quality of the Wenruitang River.
Mass discharge assessment at a brominated DNAPL site: Effects of known DNAPL source mass removal
NASA Astrophysics Data System (ADS)
Johnston, C. D.; Davis, G. B.; Bastow, T. P.; Woodbury, R. J.; Rao, P. S. C.; Annable, M. D.; Rhodes, S.
2014-08-01
Management and closure of contaminated sites is increasingly being proposed on the basis of mass flux of dissolved contaminants in groundwater. Better understanding of the links between source mass removal and contaminant mass fluxes in groundwater would allow greater acceptance of this metric in dealing with contaminated sites. Our objectives here were to show how measurements of the distribution of contaminant mass flux and the overall mass discharge emanating from the source under undisturbed groundwater conditions could be related to the processes and extent of source mass depletion. In addition, these estimates of mass discharge were sought in the application of agreed remediation targets set in terms of pumped groundwater quality from offsite wells. Results are reported from field studies conducted over a 5-year period at a brominated DNAPL (tetrabromoethane, TBA; and tribromoethene, TriBE) site located in suburban Perth, Western Australia. Groundwater fluxes (qw; L3/L2/T) and mass fluxes (Jc; M/L2/T) of dissolved brominated compounds were simultaneously estimated by deploying Passive Flux Meters (PFMs) in wells in a heterogeneous layered aquifer. PFMs were deployed in control plane (CP) wells immediately down-gradient of the source zone, before (2006) and after (2011) 69-85% of the source mass was removed, mainly by groundwater pumping from the source zone. The high-resolution (26-cm depth interval) measures of qw and Jc along the source CP allowed investigation of the DNAPL source-zone architecture and impacts of source mass removal. Comparable estimates of total mass discharge (MD; M/T) across the source zone CP reduced from 104 g day-1 to 24-31 g day-1 (70-77% reductions). Importantly, this mass discharge reduction was consistent with the estimated proportion of source mass remaining at the site (15-31%). That is, a linear relationship between mass discharge and source mass is suggested.
The spatial detail of groundwater and mass flux distributions also provided further evidence of the source zone architecture and DNAPL mass depletion processes. This was especially apparent in different mass-depletion rates from distinct parts of the CP. High mass fluxes and groundwater fluxes located near the base of the aquifer dominated in terms of the dissolved mass flux in the profile, although not in terms of concentrations. Reductions observed in Jc and MD were used to better target future remedial efforts. Integration of the observations from the PFM deployments and the source mass depletion provided a basis for establishing flux-based management criteria for the site.
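The linearity argument can be checked directly from the numbers quoted in the abstract: under a linear mass-discharge-versus-source-mass relationship, the fractional discharge remaining after removal should be comparable to the fractional source mass remaining.

```python
# figures taken from the abstract; the linearity check is illustrative
md_before = 104.0                 # total mass discharge before removal, g/day
md_after = (24.0, 31.0)           # range after removal, g/day
mass_remaining = (0.15, 0.31)     # estimated fraction of source mass left

md_reduction = tuple(1.0 - m / md_before for m in md_after)   # ~0.70-0.77
md_remaining = tuple(m / md_before for m in md_after)         # ~0.23-0.30

# the fractional discharge remaining overlaps the fractional mass remaining,
# consistent with a linear MD-vs-source-mass relationship
```

The 23-30% discharge remaining falls inside the 15-31% mass-remaining range, which is the consistency the authors point to.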
Biotic Nitrogen Enrichment Regulates Calcium Sources to Forests
NASA Astrophysics Data System (ADS)
Pett-Ridge, J. C.; Perakis, S. S.; Hynicka, J. D.
2015-12-01
Calcium is an essential nutrient in forest ecosystems that is susceptible to leaching loss and depletion. Calcium depletion can affect plant and animal productivity, soil acid buffering capacity, and fluxes of carbon and water. Excess nitrogen supply and associated soil acidification are often implicated in short-term calcium loss from soils, but the long-term role of nitrogen enrichment on calcium sources and resupply is unknown. Here we use strontium isotopes (87Sr/86Sr) as a proxy for calcium to investigate how soil nitrogen enrichment from biological nitrogen fixation interacts with bedrock calcium to regulate both short-term available supplies and the long-term sources of calcium in montane conifer forests. Our study examines 22 sites in western Oregon, spanning a 20-fold range of bedrock calcium on sedimentary and basaltic lithologies. In contrast to previous studies emphasizing abiotic control of weathering as a determinant of long-term ecosystem calcium dynamics and sources (via bedrock fertility, climate, or topographic/tectonic controls), we find instead that biotic nitrogen enrichment of soil can strongly regulate calcium sources and supplies in forest ecosystems. For forests on calcium-rich basaltic bedrock, increasing nitrogen enrichment causes calcium sources to shift from rock-weathering to atmospheric dominance, with minimal influence from other major soil forming factors, despite regionally high rates of tectonic uplift and erosion that can rejuvenate weathering supply of soil minerals. For forests on calcium-poor sedimentary bedrock, we find that atmospheric inputs dominate regardless of degree of nitrogen enrichment. Short-term measures of soil and ecosystem calcium fertility are decoupled from calcium source sustainability, with fundamental implications for understanding nitrogen impacts, both in natural ecosystems and in the context of global change.
Our finding that long-term nitrogen enrichment increases forest reliance on atmospheric calcium helps explain reports of greater ecological calcium limitation in an increasingly nitrogen-rich world.
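The source apportionment underlying this result can be sketched as a two-endmember 87Sr/86Sr mixing calculation. The ratios below are illustrative placeholders, not the study's data, and a full treatment would weight each endmember by its Sr concentration:

```python
def atmospheric_fraction(r_sample, r_rock, r_atm):
    # two-endmember mixing of 87Sr/86Sr ratios (used here as a Ca proxy):
    # fraction of Sr attributable to atmospheric inputs vs rock weathering.
    # Simplification: ignores concentration weighting between endmembers.
    return (r_sample - r_rock) / (r_atm - r_rock)

# hypothetical ratios: basaltic bedrock ~0.7035, atmospheric
# (seawater-like) ~0.7092, measured ecosystem pool 0.7080
f_atm = atmospheric_fraction(0.7080, 0.7035, 0.7092)
```

A sample ratio close to the atmospheric endmember yields a high atmospheric fraction, which is how a shift from weathering to atmospheric dominance shows up in the isotope data.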
Probing the nature of AX J0043-737: Not an 87 ms pulsar in the Small Magellanic Cloud
NASA Astrophysics Data System (ADS)
Maitra, C.; Ballet, J.; Esposito, P.; Haberl, F.; Tiengo, A.; Filipović, M. D.; Acero, F.
2018-05-01
Aims: AX J0043-737 is a source in the ASCA catalogue whose nature is uncertain. It is most commonly classified as a Crab-like pulsar in the Small Magellanic Cloud (SMC) following the apparent detection of pulsations at 87 ms in a single ASCA observation. A follow-up ASCA observation was not able to confirm this, and no X-ray detection of the source has been reported since. Methods: We studied the nature of the source with a dedicated XMM-Newton observation. We ascertained the source position, searched for the most probable counterpart, and studied the X-ray spectrum. We also analysed other archival observations with the source in the field of view to study its long-term variability. Results: With the accurate localisation capability of XMM-Newton, we identify the counterpart of the source as MQS J004241.66-734041.3, an active galactic nucleus (AGN) behind the SMC at a redshift of 0.95. The X-ray spectrum can be fitted with an absorbed power law with a photon index of Γ = 1.7, which is consistent with that expected from AGNs. By comparing the current XMM-Newton observation with an archival XMM-Newton and two other ASCA observations of the source, we find signatures of long-term variability, another common phenomenon in AGNs. All of the above are consistent with AX J0043-737 being an AGN behind the SMC.
Danylov, A A; Light, A R; Waldman, J; Erickson, N
2015-12-10
Measurements of the frequency stability of a far-infrared molecular laser have been made by mixing the harmonic of an ultrastable microwave source with a portion of the laser output signal in a terahertz (THz) Schottky diode balanced mixer. A 3 GHz difference-frequency signal was used in a frequency discriminator circuit to lock the laser to the microwave source. Comparisons of the short- and long-term laser frequency stability under free-running and locked conditions show a significant improvement with locking. Short-term frequency jitter was reduced by an order of magnitude, from approximately 40 to 4 kHz, and long-term drift was reduced by more than three orders of magnitude, from approximately 250 kHz to 80 Hz. The results, enabled by the efficient Schottky diode balanced mixer downconverter, demonstrate that ultrastable microwave-based frequency stabilization of THz optically pumped lasers (OPLs) will now be possible at frequencies extending well above 4.0 THz.
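The quoted stability improvements can be verified arithmetically from the abstract's figures:

```python
import math

# stability figures quoted in the abstract
jitter_free, jitter_locked = 40e3, 4e3   # Hz, short-term frequency jitter
drift_free, drift_locked = 250e3, 80.0   # Hz, long-term drift

jitter_orders = math.log10(jitter_free / jitter_locked)  # exactly one order
drift_orders = math.log10(drift_free / drift_locked)     # ~3.5 orders
```

The locked long-term drift of 80 Hz against a free-running 250 kHz is a factor of about 3100, matching the claim of "more than three orders of magnitude".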
NASA Astrophysics Data System (ADS)
Moser, Gerald; Brenzinger, Kristof; Gorenflo, Andre; Clough, Tim; Braker, Gesche; Müller, Christoph
2017-04-01
To reduce emissions of greenhouse gases (CO2, CH4 and N2O) it is important to quantify the main sources and identify the respective ecosystem processes. While the main sources of N2O emissions in agro-ecosystems under current conditions are well known, the influence of a projected higher CO2 level on the main ecosystem processes responsible for N2O emissions has not been investigated in detail. A major result of the Giessen FACE in a managed temperate grassland was that a +20% CO2 level caused a positive feedback, increasing N2O emissions to 221% of those under control conditions. To trace the sources of the additional N2O emissions, a 15N tracing study was conducted. We measured the N2O emission and its 15N signature, together with the 15N signatures of soil and plant samples. The results were analyzed using a 15N tracing model, which quantified the main changes in N transformation rates under elevated CO2. Directly after 15N fertilizer application, N transformations were much more dynamic than in the long run. Absolute mineralisation and DNRA rates were lower under elevated CO2 in the short term but higher in the long term. During the one-year study period beginning with the 15N labelling, a 1.8-fold increase in N2O emissions occurred under elevated CO2. The source of the increased N2O was associated with NO3- in the first weeks after 15N application. Elevated CO2 affected denitrification rates, which resulted in increased N2O emissions due to a change in gene transcription ratios (nosZ/(nirK+nirS)) and the resulting enzyme activity (see: Brenzinger et al.). Here we show that the enhanced N2O emissions reported for the first 8 FACE years persist even in the long term (>15 years). The effect of elevated CO2 on N2O production/emission can be explained by altered activity ratios within a stable microbial community.
Advanced Reactor PSA Methodologies for System Reliability Analysis and Source Term Assessment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grabaskas, D.; Brunett, A.; Passerini, S.
Beginning in 2015, a project was initiated to update and modernize the probabilistic safety assessment (PSA) of the GE-Hitachi PRISM sodium fast reactor. This project is a collaboration between GE-Hitachi and Argonne National Laboratory (Argonne), funded in part by the U.S. Department of Energy. Specifically, the role of Argonne is to assess the reliability of passive safety systems, complete a mechanistic source term calculation, and provide component reliability estimates. The assessment of passive system reliability focused on the performance of the Reactor Vessel Auxiliary Cooling System (RVACS) and the inherent reactivity feedback mechanisms of the metal fuel core. The mechanistic source term assessment attempted to provide a sequence-specific source term evaluation to quantify offsite consequences. Lastly, the reliability assessment focused on components specific to the sodium fast reactor, including electromagnetic pumps, intermediate heat exchangers, the steam generator, and sodium valves and piping.
Spectra of cosmic X-ray sources
NASA Technical Reports Server (NTRS)
Holt, S. S.; Mccray, R.
1982-01-01
X-ray measurements provide the most direct probes of astrophysical environments with temperatures exceeding one million K. Progress in experimental research utilizing dispersive techniques (e.g., Bragg and grating spectroscopy) is considerably slower than that in areas utilizing photometric techniques, because of the relative inefficiency of the former for the weak X-ray signals from celestial sources. As a result, the term "spectroscopy" as applied to X-ray astronomy has traditionally satisfied a much less restrictive definition (in terms of resolving power) than it has in other wavebands. Until quite recently, resolving powers of order unity were perfectly respectable, and still provide (in most cases) the most useful spectroscopic data. In the broadest sense, X-ray photometric measurements are spectroscopic, insofar as they represent samples of the overall electromagnetic continua of celestial objects.
Numerical modeling of materials processing applications of a pulsed cold cathode electron gun
NASA Astrophysics Data System (ADS)
Etcheverry, J. I.; Martínez, O. E.; Mingolo, N.
1998-04-01
A numerical study of the application of a pulsed cold cathode electron gun to materials processing is performed. A simple semiempirical model of the discharge is used, together with backscattering and energy deposition profiles obtained by a Monte Carlo technique, in order to evaluate the energy source term inside the material. The numerical computation of the heat equation with the calculated source term is performed in order to obtain useful information on melting and vaporization thresholds, melted radius and depth, and on the dependence of these variables on processing parameters such as operating pressure, initial voltage of the discharge and cathode-sample distance. Numerical results for stainless steel are presented, which demonstrate the need for several modifications of the experimental design in order to achieve a better efficiency.
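The coupled source-term/heat-equation step can be sketched with a minimal explicit finite-difference solver. The diffusivity, heat capacity, and exponential deposition profile below are illustrative placeholders standing in for the Monte Carlo-derived energy source term, not values from the paper:

```python
import math

# minimal explicit (FTCS) solve of 1D heat conduction with a volumetric
# source term, T_t = alpha * T_xx + S(x) / (rho * c)
nx, L = 51, 1e-3                 # grid points, domain depth (m)
dx = L / (nx - 1)
alpha = 4e-6                     # thermal diffusivity (m^2/s), assumed
rho_c = 4e6                      # volumetric heat capacity (J/m^3/K), assumed
dt = 0.4 * dx * dx / alpha       # explicit stability requires dt <= 0.5 dx^2/alpha
T = [300.0] * nx                 # initial temperature (K)

def source(x):
    # assumed exponential energy-deposition profile (W/m^3), a stand-in
    # for the Monte Carlo backscattering/deposition result
    return 1e12 * math.exp(-x / 2e-4)

for _ in range(200):
    Tn = T[:]
    for i in range(1, nx - 1):
        diff = alpha * (T[i + 1] - 2.0 * T[i] + T[i - 1]) / (dx * dx)
        Tn[i] = T[i] + dt * (diff + source(i * dx) / rho_c)
    Tn[0], Tn[-1] = Tn[1], Tn[-2]   # insulated boundaries
    T = Tn
```

From a temperature field like this one can read off melting and vaporization thresholds and melted depth as functions of the processing parameters, which is the use the paper makes of its (more detailed) model.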
Mesas-Carrascosa, Francisco Javier; Verdú Santano, Daniel; Meroño de Larriva, Jose Emilio; Ortíz Cordero, Rafael; Hidalgo Fernández, Rafael Enrique; García-Ferrer, Alfonso
2016-01-01
A number of physical factors can adversely affect cultural heritage. Therefore, monitoring parameters involved in the deterioration process, principally temperature and relative humidity, is useful for preventive conservation. In this study, a total of 15 microclimate stations using open source hardware were developed and stationed at the Mosque-Cathedral of Córdoba, which is registered with UNESCO for its outstanding universal value, to assess the behavior of interior temperature and relative humidity in relation to exterior weather conditions, public hours and interior design. Long-term monitoring of these parameters is of interest in terms of preservation and reducing the costs of future conservation strategies. Results from monitoring are presented to demonstrate the usefulness of this system. PMID:27690056
Vezzaro, L; Sharma, A K; Ledin, A; Mikkelsen, P S
2015-03-15
The estimation of micropollutant (MP) fluxes in stormwater systems is a fundamental prerequisite when preparing strategies to reduce stormwater MP discharges to natural waters. Dynamic integrated models can be important tools in this step, as they can be used to integrate the limited data provided by monitoring campaigns and to evaluate the performance of different strategies based on model simulation results. This study presents an example where six different control strategies, including both source-control and end-of-pipe treatment, were compared. The comparison focused on fluxes of heavy metals (copper, zinc) and organic compounds (fluoranthene). MP fluxes were estimated by using an integrated dynamic model, in combination with stormwater quality measurements. MP sources were identified by using GIS land usage data, runoff quality was simulated by using a conceptual accumulation/washoff model, and a stormwater retention pond was simulated by using a dynamic treatment model based on MP inherent properties. Uncertainty in the results was estimated with a pseudo-Bayesian method. Despite the great uncertainty in the MP fluxes estimated by the runoff quality model, it was possible to compare the six scenarios in terms of discharged MP fluxes, compliance with water quality criteria, and sediment accumulation. Source-control strategies obtained better results in terms of reduction of MP emissions, but all the simulated strategies failed to fulfil the criteria based on emission limit values. The results presented in this study show how the efficiency of MP pollution control strategies can be quantified by combining advanced modeling tools (an integrated stormwater quality model with uncertainty calibration). Copyright © 2014 Elsevier Ltd. All rights reserved.
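The conceptual accumulation/washoff component can be sketched with assumed exponential forms (SWMM-like; the parameter values are hypothetical, not the calibrated model used in the study):

```python
import math

def buildup(load, dry_days, k_acc=0.5, load_max=10.0):
    # linear accumulation toward a maximum surface load (assumed form)
    return min(load + k_acc * dry_days, load_max)

def washoff(load, runoff_mm, k_w=0.08):
    # exponential wash-off: the mobilised fraction of the surface load
    # grows with runoff depth (assumed SWMM-like form)
    washed = load * (1.0 - math.exp(-k_w * runoff_mm))
    return washed, load - washed

# seven dry days of build-up, then a 12 mm runoff event
load = buildup(0.0, dry_days=7)
washed, load = washoff(load, runoff_mm=12.0)
```

Chaining these two steps over a rainfall series gives the event-scale MP loads that the integrated model then routes through the retention pond treatment model.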
The mass-zero spin-two field and gravitational theory.
NASA Technical Reports Server (NTRS)
Coulter, C. A.
1972-01-01
Demonstration that the conventional theory of the mass-zero spin-two field with sources introduces extraneous nonspin-two field components in source regions and fails to be covariant under the full or restricted conformal group. A modified theory is given, expressed in terms of the physical components of mass-zero spin-two field rather than in terms of 'potentials,' which has no extraneous components inside or outside sources, and which is covariant under the full conformal group. For a proper choice of source term, this modified theory has the correct Newtonian limit and automatically implies that a symmetric second-rank source tensor has zero divergence. It is shown that possibly a generally covariant form of the spin-two theory derived here can be constructed to agree with general relativity in all currently accessible experimental situations.
The Tea-Carbon Dioxide Laser as a Means of Generating Ultrasound in Solids
NASA Astrophysics Data System (ADS)
Taylor, Gregory Stuart
1990-01-01
Available from UMI in association with The British Library. Requires signed TDF. The aim of this thesis is to characterise the interaction between pulsed, high power, 10.6 μm radiation and solids. The work is considered both in the general context of laser generation of ultrasound and specifically to gain a deeper understanding of the interaction between a laser supported plasma and a solid. The predominant experimental tools used are the homodyne Michelson interferometer and a range of electromagnetic acoustic transducers. To complement the ultrasonic data, various plasma inspection techniques, such as high speed, streak camera photography and reflection photometry, have been used to correlate the plasma properties with those of the ultrasonic transients. The work involving the characterisation of a laser supported plasma with a solid, which is based on previous experimental and theoretical analysis, gives an increased understanding of the plasma's ultrasonic generation mechanism. The ability to record the entire plasma-sample interaction time history yields information on the internal dynamics of the plasma growth and shock wave generation. The interaction of the radiation with a solid is characterised in both the plasma breakdown and non-breakdown regimes by a wide ultrasonic source. The variation in source diameter enables the transition from a point to a near planar ultrasonic source to be studied. The resultant ultrasonic modifications are examined in terms of the wave structure and the directivity pattern. The wave structure is analysed in terms of existing wide source, bulk wave theories and extended to consider the effects on surface and Lamb waves. The directivity patterns of the longitudinal and shear waves are analysed in terms of top-hat and non-uniform source profiles, giving additional insight into the radiation-solid interaction.
The wide, one-dimensional source analysis is extended to a two-dimensional ultrasonic source, generated on non-metals by the optical penetration of radiation within the target. The generation of ultrasound in both metals and non-metals using the CO2 laser is shown to be an efficient process that may be employed almost totally non-destructively. Such a laser may therefore be used effectively on a far wider range of materials than those tested to date via laser generation, increasing the suitability of the laser technique within the field of Non Destructive Testing.
Tu, Ngu; King, Janet C; Dirren, Henri; Thu, Hoang Nga; Ngoc, Quyen Phi; Diep, Anh Nguyen Thi
2014-12-01
Maternal nutritional status is an important predictor of infant birthweight. Most previous attempts to improve birthweight through multiple micronutrient supplementation have been initiated after women are pregnant. Interventions to improve maternal nutritional status prior to conception may be more effective in preventing low birthweight and improving other infant health outcomes. The objective was to compare the effects of maternal supplementation with animal-source food from preconception to term or from mid-gestation to term with routine prenatal care on birthweight, the prevalence of preterm births, intrauterine growth restriction, and infant growth during the first 12 months of life, and on maternal nutrient status and the incidence of maternal and infant infections. Young women from 29 rural communes in northwestern Vietnam were recruited when they registered to marry and were randomized to one of three interventions: animal-source food supplement 5 days per week from marriage to term (approximately 13 months), animal-source food supplement 5 days per week from 16 weeks of gestation to term (approximately 5 months), or routine prenatal care without supplemental feeding. Data on infant birthweight and gestational age, maternal and infant anthropometry, micronutrient status, and infections in the infant and mother were collected at various time points. In a preliminary study of women of reproductive age in this area of Vietnam, 40% of the women were underweight (body mass index < 18.5) and anemic. About 50% had infections. Rice was the dietary staple, and nutrient-rich, animal-source foods were rarely consumed by women. Iron, zinc, vitamin A, folate, and vitamin B12 intakes were inadequate in about 40% of the women. The study is still ongoing, and further data are not yet available.
The results of this study will provide important data regarding whether improved intake of micronutrient-rich animal-source foods that are locally available and affordable before and during pregnancy improves maternal and infant health and development. This food-based approach may have global implications regarding how and when to initiate sustainable nutritional interventions to improve maternal and infant health.
NASA Astrophysics Data System (ADS)
Lin, Wei-Chih; Lin, Yu-Pin; Anthony, Johnathen
2015-04-01
Heavy metal pollution has adverse effects not only on the focal invertebrate species of this study, such as reduced pupa weight and increased larval mortality, but also on the higher trophic level organisms which feed on them, directly or indirectly, through the process of biomagnification. Despite this, few studies regarding remediation prioritization take species distributions or biological conservation priorities into consideration. This study develops a novel approach for delineating sites which are both contaminated by any of 5 readily bioaccumulated heavy metal soil contaminants and are of high ecological importance for the highly mobile, low trophic level focal species. The conservation priority of each site was based on the projected distributions of 6 moth species simulated via the presence-only maximum entropy species distribution model, followed by the subsequent application of a systematic conservation tool. In order to increase the number of available samples, we also integrated crowd-sourced data with professionally collected data via a novel optimization procedure based on a simulated annealing algorithm. This integration step is important because, while crowd-sourced data can drastically increase the number of samples available to ecologists, their quality and reliability can be called into question, adding yet another source of uncertainty to projected species distributions. The optimization method screens crowd-sourced data in terms of the environmental variables which correspond to professionally collected data. The sample distribution data were derived from two sources: the EnjoyMoths project in Taiwan (crowd-sourced data) and the Global Biodiversity Information Facility (GBIF) field data (professional data). The distributions of heavy metal concentrations were generated via 1000 iterations of a geostatistical co-simulation approach.
The uncertainties in the distributions of the heavy metals were then quantified based on the overall consistency between realizations. Finally, Information-Gap Decision Theory (IGDT) was applied to rank the remediation priorities of contaminated sites in terms of both the spatial consensus of multiple heavy metal realizations and the priority of specific conservation areas. Our results show that the crowd-source optimization algorithm developed in this study is effective at selecting suitable records from crowd-sourced data. By using this technique the available sample data increased to totals of 96, 162, 72, 62, 69 and 62, that is, 2.6, 1.6, 2.5, 1.6, 1.2 and 1.8 times the numbers originally available through the professionally assembled GBIF database. Additionally, for all species considered, the performance of models based on the combination of both data sources, in terms of test-AUC values, exceeded that of models based on a single data source. Furthermore, the additional optimization-selected data lowered the overall variability, and therefore uncertainty, of model outputs. Based on the projected species distributions, our results revealed that around 30% of high species hotspot areas were also identified as contaminated. The decision-making tool, IGDT, successfully yielded remediation plans in terms of specific ecological value requirements, false positive tolerance rates for contaminated areas, and expected decision robustness. The proposed approach can be applied both to identify high conservation priority sites contaminated by heavy metals, based on the combination of screened crowd-sourced and professionally collected data, and to make robust remediation decisions.
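The screening step described in this record can be sketched as a small simulated-annealing subset selection. The covariate names, the squared-distance-of-means cost, and the cooling schedule below are illustrative assumptions, not the study's actual procedure:

```python
import math
import random

def screen_crowd_data(crowd, professional, k, steps=2000, seed=0):
    """Select k crowd-sourced records whose environmental covariates best
    match the professionally collected data, via simulated annealing.
    Records are dicts of (hypothetical) environmental variables."""
    rng = random.Random(seed)
    keys = sorted(professional[0])

    def mean(records):
        return {v: sum(r[v] for r in records) / len(records) for v in keys}

    target = mean(professional)  # covariate profile to match

    def cost(subset):
        m = mean([crowd[i] for i in subset])
        return sum((m[v] - target[v]) ** 2 for v in keys)

    current = rng.sample(range(len(crowd)), k)
    best, best_cost = list(current), cost(current)
    for step in range(steps):
        temp = 1.0 * (1 - step / steps) + 1e-9  # linear cooling
        cand = list(current)
        out = rng.randrange(k)
        pool = [i for i in range(len(crowd)) if i not in cand]
        cand[out] = rng.choice(pool)  # swap one record
        d = cost(cand) - cost(current)
        if d < 0 or rng.random() < math.exp(-d / temp):
            current = cand
            if cost(current) < best_cost:
                best, best_cost = list(current), cost(current)
    return best
```

In this toy setting, crowd records whose elevation and temperature resemble the professional samples are retained, while outliers are screened out.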
Reversal Frequency, Core-Mantle Conditions, and the SCOR-field Hypothesis
NASA Astrophysics Data System (ADS)
Hoffman, K. A.
2009-12-01
One of the most intriguing results from paleomagnetic data spanning the past 10^8 yr comes from the work of McFadden et al. (1991), who found that the variation in the rate of polarity reversal is apparently tied to the temporal variation in the harmonic content of the full-polarity field. Their finding indicates that it is the relative importance of the two dynamo families, i.e. the Primary Family (PF), the field antisymmetric about the equator, and the Secondary Family (SF), the field symmetric about the equator, that largely determines reversal frequency. More specifically, McFadden et al. found that as the relative significance of the SF increases, as is observed during the Cenozoic, so too does reversal rate. Such a finding is reminiscent of the seminal work of Allan Cox who some forty years ago proposed that interactions with the non-dipole field may provide the trigger for reversal of the axial dipole (AD) field. Hence, new questions arise: Do the two dynamo family fields interact in this manner, and, if so, how can such an interaction physically occur in the fluid core? Gaussian coefficient terms comprising the PF and SF have degree and order (n + m) that sum to an odd and even number, respectively. The most significant field term in the PF is by far that of the axial dipole (g_1^0). The entire SF, starting with the equatorial dipole terms (g_1^1 and h_1^1) and the axial quadrupole (g_2^0), are constituents of the non-axial dipole (NAD) field. By way of both paleomagnetic transition and geomagnetic data Hoffman and Singer (2008) recently proposed (1) that field sources exist within the shallow core (SCOR-field) associated with fluid motions affected by long-lived core-mantle boundary conditions; (2) that these SCOR-field sources are largely separated from, i.e.
in “poor communication” with, deep convection-roll-generated field sources; and (3) that the deep sources are largely responsible for the AD field, leaving the SCOR-field as the primary source of the NAD field. This SCOR-field would almost exclusively contain the observed SF field, while the AD-field sources deeper within the core would be most responsible for the observed PF field. If so, the McFadden et al. result may be explained as follows: the observed increasing significance of the SF field during the Cenozoic is the result of intensifying interactions between shallow-core SCOR-field sources and deep-core AD-field sources. This in turn suggests a progressive enhancement in the variability of physical conditions along the CMB, which may indicate an accelerating influx of descended lithospheric plates and/or an increasing number of plume roots during the Cenozoic.
Chiavassa, S; Lemosquet, A; Aubineau-Lanièce, I; de Carlan, L; Clairand, I; Ferrer, L; Bardiès, M; Franck, D; Zankl, M
2005-01-01
This paper aims at comparing dosimetric assessments performed with three Monte Carlo codes: EGS4, MCNP4c2 and MCNPX2.5e, using a realistic voxel phantom, namely the Zubal phantom, in two configurations of exposure. The first one deals with an external irradiation corresponding to the example of a radiological accident. The results are obtained using the EGS4 and the MCNP4c2 codes and expressed in terms of the mean absorbed dose (in Gy per source particle) for brain, lungs, liver and spleen. The second one deals with an internal exposure corresponding to the treatment of a medullary thyroid cancer by a ¹³¹I-labelled radiopharmaceutical. The results are obtained by EGS4 and MCNPX2.5e and compared in terms of S-values (expressed in mGy per kBq and per hour) for liver, kidney, whole body and thyroid. The results of these two studies are presented and differences between the codes are analysed and discussed.
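The S-values compared in this record follow from a simple bookkeeping identity: energy deposited per decay in a target region (the quantity a Monte Carlo code tallies), divided by the target mass, scaled to the cumulated activity. A minimal sketch with illustrative numbers, not the Zubal-phantom tallies:

```python
MEV_TO_J = 1.602176634e-13       # joules per MeV
DECAYS_PER_KBQ_HOUR = 1.0e3 * 3600.0  # decays in 1 h at 1 kBq

def s_value_mGy_per_kBq_h(energy_per_decay_MeV, target_mass_kg):
    """S-value: mean absorbed dose to a target region per unit cumulated
    activity, from the mean energy deposited per decay (e.g. a Monte
    Carlo tally) and the target mass.  Inputs are illustrative only."""
    dose_Gy_per_decay = energy_per_decay_MeV * MEV_TO_J / target_mass_kg
    return dose_Gy_per_decay * DECAYS_PER_KBQ_HOUR * 1.0e3  # Gy -> mGy
```

For example, a hypothetical 0.192 MeV deposited per decay in a 20 g organ yields an S-value of about 5.5e-3 mGy per kBq per hour.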
The pyroelectric properties of TGS for application in infrared detection
NASA Technical Reports Server (NTRS)
Kroes, R. L.; Reiss, D.
1981-01-01
The pyroelectric property of triglycine sulfate and its application in the detection of infrared radiation are described. The detectivities of pyroelectric detectors and other types of infrared detectors are compared. The thermal response of a pyroelectric detector element and the resulting electrical response are derived in terms of the material parameters. The noise sources which limit the sensitivity of pyroelectric detectors are described, and the noise equivalent power for each noise source is given as a function of frequency and detector area.
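The derivation summarized in this record rests on the standard single-time-constant thermal model of a pyroelectric element. A hedged sketch of the current responsivity and the per-source noise equivalent power, with placeholder parameter values rather than measured TGS data:

```python
import math

def current_responsivity(freq_hz, pyro_coeff, area, thermal_cond,
                         thermal_tau, emissivity=1.0):
    """Current responsivity (A/W) in the single-time-constant thermal
    model: Ri = eta * p * A * w / (G * sqrt(1 + (w*tau)^2)), where p is
    the pyroelectric coefficient (C/m^2/K), A the element area (m^2),
    G the thermal conductance (W/K), and tau the thermal time constant."""
    w = 2.0 * math.pi * freq_hz
    return (emissivity * pyro_coeff * area * w /
            (thermal_cond * math.sqrt(1.0 + (w * thermal_tau) ** 2)))

def nep(noise_current, responsivity):
    """Noise equivalent power for one noise source: the input power
    giving unity signal-to-noise in a 1 Hz bandwidth."""
    return noise_current / responsivity
```

At frequencies well below 1/tau the responsivity rises linearly with frequency and then flattens, which is why each noise source's NEP carries its own frequency dependence.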
Numerical models analysis of energy conversion process in air-breathing laser propulsion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hong Yanji; Song Junling; Cui Cunyan
The energy source term is treated as the key element in describing the energy conversion process in air-breathing laser propulsion. Neglecting secondary factors, three independent modules (a ray transmission module, an energy source term module, and a fluid dynamics module) were established by coupling the laser radiation transport equation with the fluid mechanics equations. The incident laser beam was simulated using a ray tracing method. The calculated results were in good agreement with those of theoretical analysis and experiments.
Noise-enhanced CVQKD with untrusted source
NASA Astrophysics Data System (ADS)
Wang, Xiaoqun; Huang, Chunhui
2017-06-01
The performance of one-way and two-way continuous variable quantum key distribution (CVQKD) protocols can be increased by adding some noise on the reconciliation side. In this paper, we propose to add noise at the reconciliation end to improve the performance of CVQKD with untrusted source. We derive the key rate of this case and analyze the impact of the additive noise. The simulation results show that the optimal additive noise can improve the performance of the system in terms of maximum transmission distance and tolerable excess noise.
Scarton, Lou Ann; Del Fiol, Guilherme; Oakley-Girvan, Ingrid; Gibson, Bryan; Logan, Robert; Workman, T Elizabeth
2018-01-01
The research examined complementary and alternative medicine (CAM) information-seeking behaviors and preferences from short- to long-term cancer survival, including goals, motivations, and information sources. A mixed-methods approach was used with cancer survivors from the "Assessment of Patients' Experience with Cancer Care" 2004 cohort. Data collection included a mail survey and phone interviews using the critical incident technique (CIT). Seventy survivors from the 2004 study responded to the survey, and eight participated in the CIT interviews. Quantitative results showed that CAM usage did not change significantly between 2004 and 2015. The following themes emerged from the CIT: families' and friends' provision of the initial introduction to a CAM, use of CAM to manage the emotional and psychological impact of cancer, utilization of trained CAM practitioners, and online resources as a prominent source for CAM information. The majority of participants expressed an interest in an online information-sharing portal for CAM. Patients continue to use CAM well into long-term cancer survivorship. Finding trustworthy sources for information on CAM presents many challenges such as reliability of source, conflicting information on efficacy, and unknown interactions with conventional medications. Study participants expressed interest in an online portal to meet these needs through patient testimonials and linkage of claims to the scientific literature. Such a portal could also aid medical librarians and clinicians in locating and evaluating CAM information on behalf of patients.
Evaluation of Long-term Performance of Enhanced Anaerobic Source Zone Bioremediation using mass flux
NASA Astrophysics Data System (ADS)
Haluska, A.; Cho, J.; Hatzinger, P.; Annable, M. D.
2017-12-01
Chlorinated ethene DNAPL source zones in groundwater act as potential long-term sources of contamination: as they dissolve they yield concentrations well above MCLs, posing an ongoing public health risk. Enhanced bioremediation has been applied to treat many source zones with significant promise, but the long-term sustainability of this technology has not been thoroughly assessed. This study evaluated the long-term effectiveness of enhanced anaerobic source zone bioremediation at chloroethene-contaminated sites to determine if the treatment prevented contaminant rebound and removed NAPL from the source zone. Long-term performance was evaluated based on achieving MCL-based contaminant mass fluxes in parent compound concentrations during different monitoring periods. Groundwater concentration versus time data was compiled for six sites, and post-remedial contaminant mass flux data was then measured using passive flux meters at wells both within and down-gradient of the source zone. Post-remedial mass flux data was then combined with pre-remedial water quality data to estimate pre-remedial mass flux. This information was used to characterize a DNAPL dissolution source strength function, such as the Power Law model and the equilibrium streamtube model. The six sites characterized for this study were (1) Former Charleston Air Force Base, Charleston, SC; (2) Dover Air Force Base, Dover, DE; (3) Treasure Island Naval Station, San Francisco, CA; (4) Former Raritan Arsenal, Edison, NJ; (5) Naval Air Station, Jacksonville, FL; and (6) Former Naval Air Station, Alameda, CA. Contaminant mass fluxes decreased for all the sites by the end of the post-treatment monitoring period and rebound was limited within the source zone. Post-remedial source strength function estimates suggest that decreases in contaminant mass flux will continue to occur at these sites, but a mass flux based on MCL levels may never be achieved.
Thus, site clean-up goals should be evaluated as order-of-magnitude reductions. Additionally, sites may require monitoring for a minimum of five years in order to sufficiently evaluate remedial performance. The study shows that enhanced anaerobic source zone bioremediation contributed to a modest reduction of source zone contaminant mass discharge and appears to have mitigated rebound of chlorinated ethenes.
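The Power Law source strength function named in this record relates flux-averaged source concentration to the remaining source mass. A small sketch, with an illustrative exponent and concentrations (not values fitted at the six sites), shows why an MCL-based flux target may effectively never be reached:

```python
def power_law_concentration(m_fraction, gamma, c0):
    """Power-law DNAPL source strength function: flux-averaged source
    concentration as the remaining mass fraction M/M0 declines,
    C = C0 * (M/M0)**Gamma.  Gamma near 1 is often cited as typical."""
    return c0 * m_fraction ** gamma

def mass_removal_for_target(c_target, c0, gamma):
    """Remaining mass fraction needed to bring the source concentration
    down to a target (e.g. an MCL).  Illustrative inputs only."""
    return (c_target / c0) ** (1.0 / gamma)
```

With Gamma = 1, an initial source concentration of 100 mg/L, and a 0.005 mg/L target, the source must be depleted to a remaining mass fraction of 5e-5, i.e. 99.995% removal, which motivates evaluating goals as order-of-magnitude reductions instead.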
Source apportionment of wet-deposited atmospheric mercury in Tampa, Florida
NASA Astrophysics Data System (ADS)
Michael, Ryan; Stuart, Amy L.; Trotz, Maya A.; Akiwumi, Fenda
2016-03-01
In this paper, sources of mercury deposition to the Tampa area (Florida, USA) are investigated by analysis of one year (March 2000-March 2001) of daily wet deposition data. HYSPLIT back-trajectory modeling was performed to assess potential source locations for high versus low concentration events in data stratified by precipitation level. Positive matrix factorization (PMF) was also applied to apportion the elemental compositions from each event and to identify sources. Increased total mercury deposition was observed during summer months, corresponding to increased precipitation. However, mercury concentration in deposited samples was not strongly correlated with precipitation amount. Back-trajectories show air masses passing over Florida land in the short (12 h) and medium (24 h) term prior to deposition for high mercury concentration events. PMF results indicate that eleven factors contribute to the deposited elements in the event data. Diagnosed elemental profiles suggest the sources that contribute to mercury wet deposition at the study site are coal combustion (52% of the deposited mercury mass), municipal waste incineration (23%), medical waste incineration (19%), and crustal dust (6%). Overall, results suggest that sources local to the county and in Florida likely contributed substantially to mercury deposition at the study site, but distant sources may also contribute.
NASA Astrophysics Data System (ADS)
Pandey, Arun; Bandyopadhyay, M.; Sudhir, Dass; Chakraborty, A.
2017-10-01
Helicon wave heated plasmas are very efficient in terms of ionization per unit power consumed. A permanent-magnet-based compact helicon wave heated plasma source has been developed at the Institute for Plasma Research, after careful optimization of the geometry, the frequency of the RF power, and the magnetic field conditions. The HELicon Experiment for Negative ion-I source is a single-driver helicon plasma source being studied for the development of a large, multi-driver negative hydrogen ion source. In this paper, the details of the single-driver machine and the results of the characterization of the device are presented. A parametric study at different pressures and magnetic field values using a 13.56 MHz RF source has been carried out in argon plasma as an initial step towards source characterization. A theoretical model is also presented for the particle and power balance in the plasma. The ambipolar diffusion process taking place in a magnetized helicon plasma is also discussed.
Children's Use of Lexical and Non-Lexical Information in Responding to Commands.
ERIC Educational Resources Information Center
Wilcox, Stephen; Palermo, David S.
1982-01-01
Research results indicated that children were able to use information from a number of sources in interpreting commands in which the relational terms were replaced by nonsense. Linguistic and nonlinguistic context and prior repetition presented constraints to children's responses. (Author/JB)
Clean the Air and Breathe Easier.
ERIC Educational Resources Information Center
Guevin, John
1997-01-01
Failure to prevent indoor air quality problems or act promptly can result in increased chances for long- or short-term health problems for staff and students, reduced productivity, faster plant deterioration, and strained school-community relations. Basic pollution control measures include source management, local exhausts, ventilation, exposure…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barhen, Jacob; Imam, Neena
2007-01-01
Revolutionary computing technologies are defined in terms of technological breakthroughs, which leapfrog over near-term projected advances in conventional hardware and software to produce paradigm shifts in computational science. For underwater threat source localization using information provided by a dynamical sensor network, one of the most promising computational advances builds upon the emergence of digital optical-core devices. In this article, we present initial results of sensor network calculations that focus on the concept of signal wavefront time-difference-of-arrival (TDOA). The corresponding algorithms are implemented on the EnLight processing platform recently introduced by Lenslet Laboratories. This tera-scale digital optical core processor is optimized for array operations, which it performs in a fixed-point-arithmetic architecture. Our results (i) illustrate the ability to reach the required accuracy in the TDOA computation, and (ii) demonstrate that a considerable speed-up can be achieved when using the EnLight 64a prototype processor as compared to a dual Intel Xeon processor.
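The TDOA concept in this record can be illustrated with a brute-force sketch: given hydrophone positions and measured arrival-time differences relative to a reference sensor, a grid of candidate source positions is scored by least-squares residual. The sensor layout, sound speed, and grid extent are invented for illustration; the dense, regular grid evaluation is exactly the kind of array operation an optical-core processor would accelerate:

```python
import itertools
import math

def tdoa_residual(src, sensors, measured_tdoa, c=1500.0):
    """Sum of squared residuals between measured and predicted TDOAs
    (relative to sensor 0) for a candidate source position.
    c is an assumed sound speed in water (m/s)."""
    d = [math.dist(src, s) for s in sensors]
    res = 0.0
    for i in range(1, len(sensors)):
        res += ((d[i] - d[0]) / c - measured_tdoa[i - 1]) ** 2
    return res

def locate(sensors, measured_tdoa, grid_step=5.0, extent=1000.0):
    """Brute-force grid search for the best-fitting source position."""
    best, best_res = None, float("inf")
    steps = int(extent / grid_step) + 1
    for ix, iy in itertools.product(range(steps), repeat=2):
        p = (ix * grid_step, iy * grid_step)
        r = tdoa_residual(p, sensors, measured_tdoa)
        if r < best_res:
            best, best_res = p, r
    return best
```

With four sensors at the corners of a square, the three independent TDOAs pin down a unique grid point in this noiseless toy case.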
IMPROVEMENTS IN THE THERMAL NEUTRON CALIBRATION UNIT, TNF2, AT LNMRI/IRD.
Astuto, A; Fernandes, S S; Patrão, K C S; Fonseca, E S; Pereira, W W; Lopes, R T
2018-02-21
The standard thermal neutron flux unit, TNF2, at the Brazilian National Ionizing Radiation Metrology Laboratory was rebuilt. Fluence is still achieved by moderating four ²⁴¹Am-Be sources of 0.6 TBq each. The facility was again simulated and redesigned with a graphite core surrounded by graphite blocks with added paraffin. Simulations using the MCNPX code on different geometric arrangements of moderator materials and neutron sources were performed. The resulting neutron fluence quality in terms of intensity, spectrum and cadmium ratio was evaluated. After this step, the system was assembled based on the results obtained from the simulations, and measurements were performed with equipment existing at LNMRI/IRD and by simulated equipment. This work focuses on the characterization of a central chamber point and external points around the TNF2 in terms of neutron spectrum, fluence and ambient dose equivalent, H*(10). The system was validated with spectrum, fluence and H*(10) measurements to ensure traceability.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Freeze, R.A.
Many emerging remediation technologies are designed to remove contaminant mass from source zones at DNAPL sites in response to regulatory requirements. There is often concern in the regulated community as to whether mass removal actually reduces risk, or whether the small risk reductions achieved warrant the large costs incurred. This paper sets out a framework for quantifying the degree to which risk is reduced as mass is removed from shallow, saturated, low-permeability, dual-porosity DNAPL source zones. Risk is defined in terms of meeting an alternate concentration level (ACL) at a compliance well in an aquifer underlying the source zone. The ACL is back-calculated from a carcinogenic health-risk characterization at a downstream water-supply well. Source-zone mass-removal efficiencies are heavily dependent on the distribution of mass between media (fractures, matrix) and phases (dissolved, sorbed, free product). Due to the uncertainties in currently available technology performance data, the scope of the paper is limited to developing a framework for generic technologies rather than making risk-reduction calculations for specific technologies. Despite the qualitative nature of the exercise, the results imply that very high mass-removal efficiencies are required to achieve significant long-term risk reduction with technology applications of finite duration. 17 refs., 7 figs., 6 tabs.
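The back-calculation of an ACL from a downstream health-risk characterization, as described in this record, can be sketched with the standard carcinogenic intake equation. The exposure defaults and the dilution/attenuation factor below are illustrative assumptions, not values from the paper:

```python
def risk_based_concentration(target_risk, slope_factor, intake_rate=2.0,
                             exposure_freq=350, exposure_dur=30,
                             body_weight=70.0, averaging_time=70 * 365):
    """Back-calculate the drinking-water concentration (mg/L) yielding a
    target lifetime cancer risk from the standard intake equation
    Risk = C * IR * EF * ED * SF / (BW * AT).  The defaults (2 L/day,
    350 day/yr, 30 yr, 70 kg, 70 yr lifetime) are typical textbook
    exposure assumptions, used here only for illustration."""
    return (target_risk * body_weight * averaging_time /
            (slope_factor * intake_rate * exposure_freq * exposure_dur))

def acl_at_compliance_well(c_supply_well, dilution_attenuation_factor):
    """ACL at the compliance well: the allowable concentration there is
    larger by the dilution/attenuation expected along the flow path to
    the water-supply well (hypothetical factor)."""
    return c_supply_well * dilution_attenuation_factor
```

For a 1e-6 target risk and a unit slope factor, the allowable supply-well concentration is about 8.5e-5 mg/L; a tenfold dilution/attenuation factor raises the compliance-well ACL accordingly.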
ISS Ambient Air Quality: Updated Inventory of Known Aerosol Sources
NASA Technical Reports Server (NTRS)
Meyer, Marit
2014-01-01
Spacecraft cabin air quality is of fundamental importance to crew health, with concerns encompassing both gaseous contaminants and particulate matter. Little opportunity exists for direct measurement of aerosol concentrations on the International Space Station (ISS); however, an aerosol source model was developed for the purpose of filtration and ventilation system design. This model has been applied successfully; however, since the initial effort, an increase in the number of crewmembers from 3 to 6 and new processes on board the ISS necessitate an updated aerosol inventory to accurately reflect the current ambient aerosol conditions. Results from recent analyses of dust samples from the ISS, combined with a literature review, provide new predicted aerosol emission rates in terms of size-segregated mass and number concentration. Some new aerosol sources have been considered and added to the existing array of materials. The goal of this work is to provide updated filtration model inputs which can verify that the current ISS filtration system is adequate and that filter lifetime targets are met. This inventory of aerosol sources is applicable to other spacecraft, and becomes more important as NASA considers future long-term exploration missions, which will preclude the opportunity for resupply of filtration products.
NASA thesaurus. Volume 3: Definitions
NASA Technical Reports Server (NTRS)
1988-01-01
Publication of NASA Thesaurus definitions began with Supplement 1 to the 1985 NASA Thesaurus. The definitions given here represent the complete file of over 3,200 definitions, complemented by nearly 1,000 use references. Definitions of more common or general scientific terms are given a NASA slant if one exists. Certain terms are not defined as a matter of policy: common names, chemical elements, specific models of computers, and nontechnical terms. The NASA Thesaurus predates by a number of years the systematic effort to define terms; therefore, not all Thesaurus terms have been defined. Nevertheless, definitions of older terms are continually being added. The following data are provided for each entry: term in uppercase/lowercase form, definition, source, and year the term (not the definition) was added to the NASA Thesaurus. The NASA History Office is the authority for capitalization in satellite and spacecraft names. Definitions with no source given were constructed by lexicographers at the NASA Scientific and Technical Information (STI) Facility, who rely on the following sources for their information: experts in the field, literature searches from the NASA STI database, and specialized references.
NASA Astrophysics Data System (ADS)
Piscitelli, F.; Mauri, G.; Messi, F.; Anastasopoulos, M.; Arnold, T.; Glavic, A.; Höglund, C.; Ilves, T.; Lopez Higuera, I.; Pazmandi, P.; Raspino, D.; Robinson, L.; Schmidt, S.; Svensson, P.; Varga, D.; Hall-Wilton, R.
2018-05-01
The Multi-Blade is a Boron-10-based gaseous thermal neutron detector developed to face the challenges arising in neutron reflectometry at neutron sources. Neutron reflectometers are challenging instruments in terms of instantaneous counting rate and spatial resolution. This detector has been designed according to the requirements given by the reflectometers at the European Spallation Source (ESS) in Sweden. The Multi-Blade has been installed and tested on the CRISP reflectometer at the ISIS neutron and muon source in the U.K. The results of the detailed detector characterization are discussed in this manuscript.
Surfzone alongshore advective accelerations: observations and modeling
NASA Astrophysics Data System (ADS)
Hansen, J.; Raubenheimer, B.; Elgar, S.
2014-12-01
The sources, magnitudes, and impacts of non-linear advective accelerations on alongshore surfzone currents are investigated with observations and a numerical model. Previous numerical modeling results have indicated that advective accelerations are an important contribution to the alongshore force balance, and are required to understand spatial variations in alongshore currents (which may result in spatially variable morphological change). However, most prior observational studies have neglected advective accelerations in the alongshore force balance. Using a numerical model (Delft3D) to predict optimal sensor locations, a dense array of 26 colocated current meters and pressure sensors was deployed between the shoreline and 3-m water depth over a 200 by 115 m region near Duck, NC in fall 2013. The array included 7 cross- and 3 alongshore transects. Here, observational and numerical estimates of the dominant forcing terms in the alongshore balance (pressure and radiation-stress gradients) and the advective acceleration terms will be compared with each other. In addition, the numerical model will be used to examine the force balance, including sources of velocity gradients, at a higher spatial resolution than possible with the instrument array. Preliminary numerical results indicate that at O(10-100 m) alongshore scales, bathymetric variations and the ensuing alongshore variations in the wave field and subsequent forcing are the dominant sources of the modeled velocity gradients and advective accelerations. Additional simulations and analysis of the observations will be presented. Funded by NSF and ASDR&E.
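The advective acceleration terms estimated from the colocated array in this record can be sketched as a central-difference product of velocities along a transect; the sampling geometry and values here are hypothetical:

```python
def advective_acceleration(u, v, dx):
    """Advective acceleration term u * dv/dx along a transect of
    colocated velocity measurements, via central differences.
    u: cross-shore velocity (m/s), v: alongshore velocity (m/s),
    dx: sensor spacing (m).  Returns values at interior points only."""
    acc = []
    for i in range(1, len(v) - 1):
        dvdx = (v[i + 1] - v[i - 1]) / (2.0 * dx)
        acc.append(u[i] * dvdx)
    return acc
```

A uniform cross-shore flow over a linearly varying alongshore current yields a constant advective acceleration, while alongshore bathymetric variability (as the modeling suggests) would make the term spatially variable.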
Auroral Proper Motion in the Era of AMISR and EMCCD
NASA Astrophysics Data System (ADS)
Semeter, J. L.
2016-12-01
The term "aurora" is a catch-all for luminosity produced by the deposition of magnetospheric energy in the outer atmosphere. The use of this single phenomenological term occludes the rich variety of sources and mechanisms responsible for the excitation. Among these are electron thermal conduction (SAR arcs), electrostatic potential fields ("inverted-V" aurora), wave-particle resonance (Alfvenic aurora, pulsating aurora), pitch-angle scattering (diffuse aurora), and direct injection of plasma sheet particles (PBIs, substorms). Much information about auroral energization has been derived from the energy spectrum of primary particles, which may be measured directly with an in situ detector or indirectly via analysis of the atmospheric response (e.g., auroral spectroscopy, tomography, ionization). Somewhat less emphasized has been the information in the B_perp dimension. Specifically, the scale-dependent motions of auroral forms in the rest frame of the ambient plasma provide a means of partitioning both the source region and the source mechanism. These results, in turn, affect ionospheric state parameters that control the M-I coupling process, most notably the degree of structure imparted to the conductance field. This paper describes recent results enabled by the advent of two technologies: high frame-rate, high-resolution imaging detectors, and electronically steerable incoherent scatter radar (the AMISR systems). In addition to contributing to our understanding of the aurora, these results may be used in predictive models of multi-scale energy transfer within the disturbed geospace system.
NASA Astrophysics Data System (ADS)
Soulsby, Chris; Birkel, Christian; Geris, Josie; Tetzlaff, Doerthe
2016-04-01
Advances in the use of hydrological tracers and their integration into rainfall-runoff models are facilitating improved quantification of stream water age distributions. This is of fundamental importance to understanding water quality dynamics over both short and long time scales, particularly as water quality parameters are often associated with water sources of markedly different ages. For example, legacy nitrate pollution may reflect deeper waters that have resided in catchments for decades, whilst more dynamic parameters from anthropogenic sources (e.g. P, pathogens etc.) are mobilised by very young (<1 day) near-surface water sources. It is increasingly recognised that the age distribution of stream water is non-stationary over both the short term (i.e. event dynamics) and the longer term (i.e. in relation to hydroclimatic variability). This provides a crucial context for interpreting water quality time series. Here, we will use longer-term (>5 year), high-resolution (daily) isotope time series in modelling studies for different catchments to show how variable stream water age distributions can result from hydroclimatic variability, and the implications for understanding water quality. We will also use examples from catchments undergoing rapid urbanisation to show how the resulting age distributions of stream water change in a predictable way as a result of modified flow paths. The implications for the management of water quality in urban catchments will be discussed.
NASA Astrophysics Data System (ADS)
Griessbach, Sabine; Hoffmann, Lars; Höpfner, Michael; Riese, Martin; Spang, Reinhold
2013-09-01
The viability of a spectrally averaging model to perform radiative transfer calculations in the infrared including scattering by atmospheric particles is examined for the application of infrared limb remote sensing measurements. Here we focus on the Michelson Interferometer for Passive Atmospheric Sounding (MIPAS) aboard the European Space Agency's Envisat. Various spectra for clear air and cloudy conditions were simulated with a spectrally averaging radiative transfer model and a line-by-line radiative transfer model for three atmospheric window regions (825-830, 946-951, 1224-1228 cm⁻¹) and compared to each other. The results are rated in terms of the MIPAS noise equivalent spectral radiance (NESR). The clear air simulations generally agree within one NESR. The cloud simulations neglecting the scattering source term agree within two NESR. The differences between the cloud simulations including the scattering source term are generally below three and always below four NESR. We conclude that the spectrally averaging approach is well suited for fast and accurate infrared radiative transfer simulations including scattering by clouds. We found that the main source for the differences between the cloud simulations of both models is the cloud edge sampling. Furthermore, we reasoned that this model comparison for clouds is also valid for atmospheric aerosol in general.
Hütter, Markus; Brader, Joseph M
2009-06-07
We examine the origins of nonlocality in a nonisothermal hydrodynamic formulation of a one-component fluid of particles that exhibit long-range correlations, e.g., due to a spherically symmetric, long-range interaction potential. In order to furnish the continuum modeling with physical understanding of the microscopic interactions and dynamics, we make use of systematic coarse graining from the microscopic to the continuum level. We thus arrive at a thermodynamically admissible and closed set of evolution equations for the densities of momentum, mass, and internal energy. From the consideration of an illustrative special case, the following main conclusions emerge. There are two different source terms in the momentum balance. The first is a body force, which in special circumstances can be related to the functional derivative of a nonlocal Helmholtz free energy density with respect to the mass density. The second source term is proportional to the temperature gradient, multiplied by the nonlocal entropy density. These two source terms combine into a pressure gradient only in the absence of long-range effects. In the irreversible contributions to the time evolution, the nonlocal contributions arise since the self-correlations of the stress tensor and heat flux, respectively, are nonlocal as a result of the microscopic nonlocal correlations. Finally, we point out specific points that warrant further discussions.
On the inclusion of mass source terms in a single-relaxation-time lattice Boltzmann method
NASA Astrophysics Data System (ADS)
Aursjø, Olav; Jettestuen, Espen; Vinningland, Jan Ludvig; Hiorth, Aksel
2018-05-01
We present a lattice Boltzmann algorithm for incorporating a mass source in a fluid flow system. The proposed mass source/sink term, included in the lattice Boltzmann equation, maintains the Galilean invariance and the accuracy of the overall method while introducing a mass source/sink term in the fluid dynamical equations. The method can, for instance, be used to inject or withdraw fluid at any preferred lattice node in a system, so that injection and withdrawal of fluid do not have to be introduced through cumbersome, and sometimes less accurate, boundary conditions. Furthermore, through a chosen equation of state relating mass density to pressure, the proposed mass source term makes it possible to set a preferred pressure at any lattice node in a system. We demonstrate how the model handles injection and withdrawal of a fluid, and we show how it can be used to incorporate pressure boundaries. The accuracy of the algorithm is identified through a Chapman-Enskog expansion of the model and supported by numerical simulations.
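The core idea of adding a source to the distribution functions at a chosen node can be sketched in a few lines. The following is a minimal, illustrative D2Q9 BGK model with a naive mass source distributed over the lattice weights; it is not the paper's scheme (which includes the corrections needed to preserve Galilean invariance and second-order accuracy), and the grid size, relaxation time, and source strength are invented for the demo.

```python
import numpy as np

# Minimal D2Q9 BGK lattice Boltzmann sketch with a per-node mass source.
# Distributing the source over directions with the lattice weights W adds
# mass S per time step at the chosen node without injecting net momentum.

W = np.array([4/9] + [1/9]*4 + [1/36]*4)              # D2Q9 weights
C = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])    # lattice velocities

def equilibrium(rho, ux, uy):
    cu = C[:, 0, None, None]*ux + C[:, 1, None, None]*uy
    usq = ux**2 + uy**2
    return W[:, None, None]*rho*(1 + 3*cu + 4.5*cu**2 - 1.5*usq)

def step(f, tau, src_node, S):
    rho = f.sum(axis=0)
    ux = (f * C[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * C[:, 1, None, None]).sum(axis=0) / rho
    f = f + (equilibrium(rho, ux, uy) - f) / tau      # BGK collision
    f[:, src_node[0], src_node[1]] += W * S           # mass source at one node
    for i, (cx, cy) in enumerate(C):                  # streaming
        f[i] = np.roll(np.roll(f[i], cx, axis=0), cy, axis=1)
    return f

nx = ny = 16
f = equilibrium(np.ones((nx, ny)), np.zeros((nx, ny)), np.zeros((nx, ny)))
m0 = f.sum()
for _ in range(10):
    f = step(f, tau=1.0, src_node=(8, 8), S=1e-3)
print(round(float(f.sum() - m0), 9))  # total injected mass: 10 * 1e-3
```

Collision and streaming each conserve mass exactly, so the only change in total mass is the injected S per step, which is the basic bookkeeping property a source term of this kind must satisfy.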
Directional Unfolded Source Term (DUST) for Compton Cameras.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mitchell, Dean J.; Horne, Steven M.; O'Brien, Sean
2018-03-01
A Directional Unfolded Source Term (DUST) algorithm was developed to enable improved spectral analysis capabilities using data collected by Compton cameras. Achieving this objective required modification of the detector response function in the Gamma Detector Response and Analysis Software (GADRAS). Experimental data that were collected in support of this work include measurements of calibration sources at a range of separation distances and cylindrical depleted uranium castings.
Gene and protein nomenclature in public databases
Fundel, Katrin; Zimmer, Ralf
2006-01-01
Background Frequently, several alternative names are in use for biological objects such as genes and proteins. Applications like manual literature search, automated text-mining, named entity identification, gene/protein annotation, and linking of knowledge from different information sources require knowledge of all names used to refer to a given gene or protein. Various organism-specific or general public databases aim at organizing knowledge about genes and proteins. These databases can be used for deriving gene and protein name dictionaries. So far, little is known about the differences between databases in terms of size, ambiguities and overlap. Results We compiled five gene and protein name dictionaries for each of the five model organisms (yeast, fly, mouse, rat, and human) from different organism-specific and general public databases. We analyzed the degree of ambiguity of gene and protein names within and between dictionaries and with respect to a lexicon of common English words and domain-related non-gene terms, and we compared the different data sources in terms of the size of the extracted dictionaries and the overlap of synonyms between them. The study shows that the number of genes/proteins and synonyms covered in individual databases varies significantly for a given organism, and that the degree of ambiguity of synonyms varies significantly between different organisms. Furthermore, it shows that, despite considerable efforts of co-curation, the overlap of synonyms in different data sources is rather moderate, and that the degree of ambiguity of gene names with common English words and domain-related non-gene terms varies depending on the considered organism. Conclusion In conclusion, these results indicate that combining the data contained in different databases allows the generation of gene and protein name dictionaries that contain significantly more used names than dictionaries obtained from individual data sources. 
Furthermore, curation of combined dictionaries considerably increases size and decreases ambiguity. The entries of the curated synonym dictionary are available for manual querying, editing, and PubMed- or Google-search via the ProThesaurus-wiki. For automated querying via custom software, we offer a web service and an exemplary client application. PMID:16899134
Flowsheets and source terms for radioactive waste projections
DOE Office of Scientific and Technical Information (OSTI.GOV)
Forsberg, C.W.
1985-03-01
Flowsheets and source terms used to generate radioactive waste projections in the Integrated Data Base (IDB) Program are given. Volumes of each waste type generated per unit product throughput have been determined for the following facilities: uranium mining, UF6 conversion, uranium enrichment, fuel fabrication, boiling-water reactors (BWRs), pressurized-water reactors (PWRs), and fuel reprocessing. Source terms for DOE/defense wastes have been developed. Expected wastes from typical decommissioning operations for each facility type have been determined. All wastes are also characterized by isotopic composition at time of generation and by general chemical composition. 70 references, 21 figures, 53 tables.
NASA Astrophysics Data System (ADS)
Winiarek, Victor; Bocquet, Marc; Saunier, Olivier; Mathieu, Anne
2012-03-01
A major difficulty when inverting the source term of an atmospheric tracer dispersion problem is the estimation of the prior errors: those of the atmospheric transport model, those ascribed to the representativity of the measurements, those that are instrumental, and those attached to the prior knowledge on the variables one seeks to retrieve. In the case of an accidental release of pollutant, the reconstructed source is sensitive to these assumptions. This sensitivity makes the quality of the retrieval dependent on the methods used to model and estimate the prior errors of the inverse modeling scheme. We propose to use an estimation method for the errors' amplitude based on the maximum likelihood principle. Under semi-Gaussian assumptions, it takes into account, without approximation, the positivity assumption on the source. We apply the method to the estimation of the Fukushima Daiichi source term using activity concentrations in the air. The results are compared to an L-curve estimation technique and to Desroziers's scheme. The total reconstructed activities significantly depend on the chosen method. Because of the poor observability of the Fukushima Daiichi emissions, these methods provide lower bounds for cesium-137 and iodine-131 reconstructed activities. These lower bound estimates, 1.2 × 10¹⁶ Bq for cesium-137, with an estimated standard deviation range of 15%-20%, and 1.9-3.8 × 10¹⁷ Bq for iodine-131, with an estimated standard deviation range of 5%-10%, are of the same order of magnitude as those provided by the Japanese Nuclear and Industrial Safety Agency and about 5 to 10 times less than the Chernobyl atmospheric releases.
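The general shape of this inverse problem, recovering a nonnegative source vector from linear concentration measurements, can be illustrated with a toy example. The sketch below uses simple projected-gradient least squares with a positivity constraint; it is not the paper's semi-Gaussian maximum-likelihood scheme, and the source-receptor matrix, source vector, and sizes are all invented for the demonstration.

```python
import numpy as np

# Toy linear source inversion with a positivity constraint:
# minimize |H x - y|^2 subject to x >= 0, via projected gradient descent.
# H maps source components to observed concentrations (a stand-in for a
# dispersion-model source-receptor matrix).

rng = np.random.default_rng(42)
m, n = 50, 5                                   # measurements, source components
H = rng.random((m, n))                         # invented source-receptor matrix
x_true = np.array([2.0, 0.0, 1.0, 0.0, 3.0])   # true nonnegative source
y = H @ x_true                                 # noiseless synthetic observations

lr = 1.0 / np.linalg.norm(H, 2) ** 2           # step size from largest singular value
x = np.zeros(n)
for _ in range(20000):
    # gradient step on the data misfit, then project onto x >= 0
    x = np.maximum(0.0, x - lr * (H.T @ (H @ x - y)))

print(np.max(np.abs(x - x_true)) < 1e-2)  # True: the positive sources are recovered
```

With noise-free data and a full-rank H, the constrained minimizer coincides with the true source; in the real problem the prior-error weighting discussed above determines how the misfit and the prior are balanced.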
Starns, Jeffrey J.; Pazzaglia, Angela M.; Rotello, Caren M.; Hautus, Michael J.; Macmillan, Neil A.
2014-01-01
Source memory zROC slopes change from below 1 to above 1 depending on which source gets the strongest learning. This effect has been attributed to memory processes, either in terms of a threshold source recollection process or changes in the variability of continuous source evidence. We propose two decision mechanisms that can produce the slope effect, and we test them in three experiments. The evidence mixing account assumes that people change how they weight item versus source evidence based on which source is stronger, and the converging criteria account assumes that participants become more willing to make high confidence source responses for test probes that have higher item strength. Results failed to support the evidence mixing account, in that the slope effect emerged even when item evidence was not informative for the source judgment (that is, in tests that included strong and weak items from both sources). In contrast, results showed strong support for the converging criteria account. This account not only accommodated the unequal-strength slope effect, but also made a prediction for unstudied (new) items that was empirically confirmed: participants made more high confidence source responses for new items when they were more confident that the item was studied. The converging criteria account has an advantage over accounts based on source recollection or evidence variability, as the latter accounts do not predict the relationship between recognition and source confidence for new items. PMID:23565789
Results from the Ariel-5 all-sky X-ray monitor
NASA Technical Reports Server (NTRS)
Holt, S. S.
1975-01-01
A summary of results obtained from the first year of Ariel-5 all-sky monitor operation is presented. Transient source observations, as well as the results of long term studies of Sco X-1, Cyg X-3, and Cyg X-1 are described. By example, the included results are indicative of the temporal effects to which the all-sky monitor remains sensitive as it begins its second year of observation.
Schedl, Markus
2012-01-01
Different term weighting techniques such as tf-idf or BM25 have been used intensively for manifold text-based information retrieval tasks. Their use for modeling term profiles for named entities and the subsequent calculation of similarities between these named entities have been studied to a much smaller extent. The recent trend of microblogging has made available massive amounts of information about almost every topic around the world. Therefore, microblogs represent a valuable source for text-based named entity modeling. In this paper, we present a systematic and comprehensive evaluation of different term weighting measures, normalization techniques, query schemes, index term sets, and similarity functions for the task of inferring similarities between named entities, based on data extracted from microblog posts. We analyze several thousand combinations of choices for the above-mentioned dimensions, which influence the similarity calculation process, and we investigate how they impact the quality of the similarity estimates. Evaluation is performed using three real-world data sets: two collections of microblogs related to music artists and one related to movies. For the music collections, we present results of genre classification experiments using genre information from allmusic.com as a benchmark. For the movie collection, we present results of multi-class classification experiments using categories from IMDb as a benchmark. We show that microblogs can indeed be exploited to model named entity similarity with remarkable accuracy, provided the correct settings for the analyzed aspects are used. We further compare the results to those obtained when using Web pages as the data source.
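The pipeline of weighting entity term profiles and comparing them can be sketched compactly. The following is a minimal BM25-plus-cosine example, not the paper's evaluated configuration; the toy "microblog" documents and the k1/b values are illustrative stand-ins.

```python
import math
from collections import Counter

# Minimal sketch: BM25 term weighting of per-entity term profiles, then
# cosine similarity between profiles. k1 and b are common default values.

def bm25_profiles(docs, k1=1.2, b=0.75):
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N
    df = Counter(t for d in docs for t in set(d))   # document frequencies
    profiles = []
    for d in docs:
        tf = Counter(d)
        w = {}
        for t, f in tf.items():
            idf = math.log((N - df[t] + 0.5) / (df[t] + 0.5) + 1)
            w[t] = idf * f * (k1 + 1) / (f + k1 * (1 - b + b * len(d) / avgdl))
        profiles.append(w)
    return profiles

def cosine(a, b):
    num = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return num / (na * nb) if na and nb else 0.0

# Toy term sets for three "entities": two music artists and one movie
docs = [["guitar", "rock", "album", "tour"],
        ["rock", "album", "band", "guitar"],
        ["film", "director", "oscar", "film"]]
p = bm25_profiles(docs)
print(cosine(p[0], p[1]) > cosine(p[0], p[2]))  # True: the two artists are closer
```

The evaluated dimensions in the paper (weighting measure, normalization, query scheme, index term set, similarity function) correspond to swapping out pieces of exactly this pipeline.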
Power-output regularization in global sound equalization.
Stefanakis, Nick; Sarris, John; Cambourakis, George; Jacobsen, Finn
2008-01-01
The purpose of equalization in room acoustics is to compensate for the undesired modification that an enclosure introduces to signals such as audio or speech. In this work, equalization in a large part of the volume of a room is addressed. The multiple point method is employed with an acoustic power-output penalty term instead of the traditional quadratic source effort penalty term. Simulation results demonstrate that this technique gives a smoother decline of the reproduction performance away from the control points.
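The multiple point method with a quadratic penalty has a closed-form solution, which gives a concrete baseline for the comparison above. The sketch below shows the conventional source-effort penalty (the term the paper replaces with an acoustic power-output penalty); the transfer matrix and target pressures are random stand-ins, not room data.

```python
import numpy as np

# Multiple point equalization with quadratic source-effort regularization:
# minimize |G q - d|^2 + beta |q|^2 over complex source strengths q, where
# G holds transfer functions from sources to control points and d the
# desired pressures. Closed form: q = (G^H G + beta I)^(-1) G^H d.

rng = np.random.default_rng(0)
M, L = 40, 8                                                 # control points, sources
G = rng.normal(size=(M, L)) + 1j * rng.normal(size=(M, L))   # stand-in transfer matrix
d = rng.normal(size=M) + 1j * rng.normal(size=M)             # stand-in target field

def effort_regularized(G, d, beta):
    n = G.shape[1]
    return np.linalg.solve(G.conj().T @ G + beta * np.eye(n), G.conj().T @ d)

q0 = effort_regularized(G, d, beta=0.0)   # plain least squares
q1 = effort_regularized(G, d, beta=1.0)   # penalized solution

# More regularization -> smaller source effort, larger reproduction error
print(np.linalg.norm(q1) < np.linalg.norm(q0))                  # True
print(np.linalg.norm(G @ q1 - d) > np.linalg.norm(G @ q0 - d))  # True
```

Replacing the identity in the penalty with a matrix tied to radiated power changes which solutions are discouraged, which is what yields the smoother off-control-point decline reported above.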
2017-01-01
Background The Internet is considered to be an effective source of health information and consultation for immigrants. Nutritional interventions for immigrants have become increasingly common over the past few decades. However, each population of immigrants has specific needs. Understanding the factors influencing the success of nutrition programs among immigrants requires an examination of their attitudes and perceptions, as well as their cultural values. Objective The purpose of this study was to examine perceptions of the Internet as a tool for long-term and “real-time” professional, psychological, and nutritional treatment for immigrants from the former Soviet Union who immigrated to Israel (IIFSU) from 1990 to 2012. Methods A sample of nutrition forum users (n=18) was interviewed and comments of 80 users were analyzed qualitatively in accordance with the grounded theory principles. Results The results show that IIFSU perceive the Internet as a platform for long-term and “real-time” dietary treatment and not just as an informative tool. IIFSU report benefits of online psychological support with professional dietary treatment. They attribute importance to cultural customization, which helps reduce barriers to intervention. Conclusions In light of the results, when formulating nutritional programs, it is essential to have a specific understanding of immigrants’ cultural characteristics and their patterns of Internet use concerning dietary care. PMID:28159729
A Benchmark Study of Large Contract Supplier Monitoring Within DOD and Private Industry
1994-03-01
Table-of-contents and documentation-page fragments: 2. Long-Term Supplier Relationships; 3. Global Sourcing; 4. Refocusing on Customer Quality. The initiatives examined (supplier monitoring and recognition, a reduced number of suppliers, global sourcing, and long-term contractor relationships) were then compared to DCMC practices. Subject terms: benchmark study of large contract supplier monitoring.
CO2 Flux From Antarctic Dry Valley Soils: Determining the Source and Environmental Controls
NASA Astrophysics Data System (ADS)
Risk, D. A.; Macintyre, C. M.; Shanhun, F.; Almond, P. C.; Lee, C.; Cary, C.
2014-12-01
Soils within the McMurdo Dry Valleys are known to respire carbon dioxide (CO2), but considerable debate surrounds the contributing sources and mechanisms that drive temporal variability. While some of the CO2 is of biological origin, other known contributors to variability include geochemical sources within, or beneath, the soil column. The relative contribution from each of these sources will depend on seasonal and environmental drivers, such as temperature and wind, that exert influence on temporal dynamics. To supplement a long-term CO2 surface flux monitoring station that has now recorded fluxes over three full annual cycles, an automated flux and depth concentration monitoring system was installed in January 2014 in the Spaulding Pond area of Taylor Valley, along with standard meteorological sensors, to assist in defining source contributions through time. During two weeks of data we observed marked diel variability in CO2 concentrations within the profile (~100 ppm CO2 above or below atmospheric), and of CO2 moving across the soil surface. The pattern at many depths suggested an alternating diel-scale transition from source to sink that seemed clearly correlated with temperature-driven changes in the solubility of CO2 in water films. This CO2 solution storage flux was very highly coupled to soil temperature. A small depth source of unknown origin also appeared to be present. A controlled laboratory soil experiment was conducted to confirm the magnitude of fluxes into and out of soil water films, and it confirmed the field results and the temperature dependence. Ultimately, this solution storage flux needs to be well understood if the small biological fluxes from these soils are to be properly quantified and monitored for change. Here, we present results from the 2013/2014 field season and these supplementary experiments, placed in the context of three years of continuous long-term measurements of soil CO2 flux within the Dry Valleys.
THE DNAPL REMEDIATION CHALLENGE: IS THERE A CASE FOR SOURCE DEPLETION?
Releases of Dense Non-Aqueous Phase Liquids (DNAPLs) at a large number of public and private sector sites in the United States pose significant challenges in site remediation and long-term site management. Extensive contamination of groundwater occurs as a result of significant ...
40 CFR 417.61 - Specialized definitions.
Code of Federal Regulations, 2013 CFR
2013-07-01
... STANDARDS SOAP AND DETERGENT MANUFACTURING POINT SOURCE CATEGORY Manufacture of Soap Flakes and Powders... result if all water were removed from the actual product. (c) The term neat soap shall mean the solution of completely saponified and purified soap containing about 20-30 percent water which is ready for...
40 CFR 417.31 - Specialized definitions.
Code of Federal Regulations, 2013 CFR
2013-07-01
... STANDARDS SOAP AND DETERGENT MANUFACTURING POINT SOURCE CATEGORY Soap Manufacturing by Fatty Acid... would result if all water were removed from the actual product. (c) The term neat soap shall mean the solution of completely saponified and purified soap containing about 20-30 percent water which is ready for...
40 CFR 417.11 - Specialized definitions.
Code of Federal Regulations, 2014 CFR
2014-07-01
... STANDARDS SOAP AND DETERGENT MANUFACTURING POINT SOURCE CATEGORY Soap Manufacturing by Batch Kettle... result if all water were removed from the actual product. (c) The term neat soap shall mean the solution of completely saponified and purified soap containing about 20-30 percent water which is ready for...
40 CFR 417.61 - Specialized definitions.
Code of Federal Regulations, 2011 CFR
2011-07-01
... STANDARDS SOAP AND DETERGENT MANUFACTURING POINT SOURCE CATEGORY Manufacture of Soap Flakes and Powders... result if all water were removed from the actual product. (c) The term neat soap shall mean the solution of completely saponified and purified soap containing about 20-30 percent water which is ready for...
40 CFR 417.71 - Specialized definitions.
Code of Federal Regulations, 2012 CFR
2012-07-01
... AND STANDARDS SOAP AND DETERGENT MANUFACTURING POINT SOURCE CATEGORY Manufacture of Bar Soaps... result if all water were removed from the actual product. (c) The term neat soap shall mean the solution of completely saponified and purified soap containing about 20-30 percent water which is ready for...
40 CFR 417.61 - Specialized definitions.
Code of Federal Regulations, 2014 CFR
2014-07-01
... STANDARDS SOAP AND DETERGENT MANUFACTURING POINT SOURCE CATEGORY Manufacture of Soap Flakes and Powders... result if all water were removed from the actual product. (c) The term neat soap shall mean the solution of completely saponified and purified soap containing about 20-30 percent water which is ready for...
40 CFR 417.61 - Specialized definitions.
Code of Federal Regulations, 2010 CFR
2010-07-01
... STANDARDS SOAP AND DETERGENT MANUFACTURING POINT SOURCE CATEGORY Manufacture of Soap Flakes and Powders... result if all water were removed from the actual product. (c) The term neat soap shall mean the solution of completely saponified and purified soap containing about 20-30 percent water which is ready for...
40 CFR 417.11 - Specialized definitions.
Code of Federal Regulations, 2010 CFR
2010-07-01
... STANDARDS SOAP AND DETERGENT MANUFACTURING POINT SOURCE CATEGORY Soap Manufacturing by Batch Kettle... result if all water were removed from the actual product. (c) The term neat soap shall mean the solution of completely saponified and purified soap containing about 20-30 percent water which is ready for...
40 CFR 417.11 - Specialized definitions.
Code of Federal Regulations, 2011 CFR
2011-07-01
... STANDARDS SOAP AND DETERGENT MANUFACTURING POINT SOURCE CATEGORY Soap Manufacturing by Batch Kettle... result if all water were removed from the actual product. (c) The term neat soap shall mean the solution of completely saponified and purified soap containing about 20-30 percent water which is ready for...
40 CFR 417.11 - Specialized definitions.
Code of Federal Regulations, 2012 CFR
2012-07-01
... AND STANDARDS SOAP AND DETERGENT MANUFACTURING POINT SOURCE CATEGORY Soap Manufacturing by Batch... would result if all water were removed from the actual product. (c) The term neat soap shall mean the solution of completely saponified and purified soap containing about 20-30 percent water which is ready for...
40 CFR 417.31 - Specialized definitions.
Code of Federal Regulations, 2011 CFR
2011-07-01
... STANDARDS SOAP AND DETERGENT MANUFACTURING POINT SOURCE CATEGORY Soap Manufacturing by Fatty Acid... would result if all water were removed from the actual product. (c) The term neat soap shall mean the solution of completely saponified and purified soap containing about 20-30 percent water which is ready for...
40 CFR 417.11 - Specialized definitions.
Code of Federal Regulations, 2013 CFR
2013-07-01
... STANDARDS SOAP AND DETERGENT MANUFACTURING POINT SOURCE CATEGORY Soap Manufacturing by Batch Kettle... result if all water were removed from the actual product. (c) The term neat soap shall mean the solution of completely saponified and purified soap containing about 20-30 percent water which is ready for...
40 CFR 417.31 - Specialized definitions.
Code of Federal Regulations, 2014 CFR
2014-07-01
... STANDARDS SOAP AND DETERGENT MANUFACTURING POINT SOURCE CATEGORY Soap Manufacturing by Fatty Acid... would result if all water were removed from the actual product. (c) The term neat soap shall mean the solution of completely saponified and purified soap containing about 20-30 percent water which is ready for...
A structured approach to recording AIDS-defining illnesses in Kenya: A SNOMED CT based solution
Oluoch, Tom; de Keizer, Nicolette; Langat, Patrick; Alaska, Irene; Ochieng, Kenneth; Okeyo, Nicky; Kwaro, Daniel; Cornet, Ronald
2016-01-01
Introduction Several studies conducted in sub-Saharan Africa (SSA) have shown that routine clinical data in HIV clinics often contain errors. Lack of structured and coded documentation of diagnoses of AIDS-defining illnesses (ADIs) can compromise data quality and the decisions made on clinical care. Methods We used a structured framework to derive a reference set of concepts and terms used to describe ADIs. The four sources used were: (i) the CDC/Accenture list of opportunistic infections, (ii) SNOMED Clinical Terms (SNOMED CT), (iii) a focus group discussion (FGD) among clinicians and nurses attending to patients at a referral provincial hospital in western Kenya, and (iv) chart abstraction from the Maternal Child Health (MCH) and HIV clinics at the same hospital. Using the January 2014 release of SNOMED CT, concepts were retrieved that matched terms abstracted from approaches (iii) and (iv), and the content coverage was assessed. Post-coordination matching was applied when needed. Results The final reference set had 1054 unique ADI concepts, described by 1860 unique terms. Content coverage of SNOMED CT was high (99.9% with pre-coordinated concepts; 100% with post-coordination). The resulting reference set for ADIs was implemented as the interface terminology on OpenMRS data entry forms. Conclusion Different sources demonstrate complementarity in the collection of concepts and terms for an interface terminology. SNOMED CT provides high coverage in the domain of ADIs. Further work is needed to evaluate the effect of the interface terminology on data quality and quality of care. PMID:26184057
Electron Energy Deposition in Atomic Nitrogen
1987-10-06
known theoretical results, and their relative accuracy in comparison to existing measurements and calculations is given elsewhere.[20] 2.1 The Source Term...with the proper choice of parameters, reduces to well-known theoretical results.[20] Table 2 gives the parameters for collisional excitation of the...calculations of McGuire[36] and experimental measurements of Brook et al.[37] Additional theoretical and experimental results are discussed in detail elsewhere
Discriminating Simulated Vocal Tremor Source Using Amplitude Modulation Spectra
Carbonell, Kathy M.; Lester, Rosemary A.; Story, Brad H.; Lotto, Andrew J.
2014-01-01
Objectives/Hypothesis Sources of vocal tremor are difficult to categorize perceptually and acoustically. This paper describes a preliminary attempt to discriminate vocal tremor sources through the use of spectral measures of the amplitude envelope. The hypothesis is that different vocal tremor sources are associated with distinct patterns of acoustic amplitude modulations. Study Design Statistical categorization methods (discriminant function analysis) were used to discriminate signals from simulated vocal tremor with different sources using only acoustic measures derived from the amplitude envelopes. Methods Simulations of vocal tremor were created by modulating parameters of a vocal fold model corresponding to oscillations of respiratory driving pressure (respiratory tremor), degree of vocal fold adduction (adductory tremor) and fundamental frequency of vocal fold vibration (F0 tremor). The acoustic measures were based on spectral analyses of the amplitude envelope computed across the entire signal and within select frequency bands. Results The signals could be categorized (with accuracy well above chance) in terms of the simulated tremor source using only measures of the amplitude envelope spectrum even when multiple sources of tremor were included. Conclusions These results supply initial support for an amplitude-envelope based approach to identify the source of vocal tremor and provide further evidence for the rich information about talker characteristics present in the temporal structure of the amplitude envelope. PMID:25532813
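The key acoustic measure above, a spectrum of the amplitude envelope, is easy to illustrate on a synthetic signal. The sketch below recovers a known modulation rate from an amplitude-modulated tone; the carrier, modulation rate, and sampling rate are invented for the demo and are not the tremor-model settings.

```python
import numpy as np

# Toy amplitude-modulation spectrum: rectify the signal to get a crude
# amplitude envelope, then locate the dominant modulation frequency in
# the envelope's spectrum. Tremor typically modulates voice at a few Hz.

fs = 1000                       # sampling rate (Hz)
t = np.arange(0, 2, 1 / fs)     # 2 s of signal
mod_rate = 5                    # tremor-like 5 Hz amplitude modulation
x = (1 + 0.5 * np.sin(2 * np.pi * mod_rate * t)) * np.sin(2 * np.pi * 100 * t)

env = np.abs(x)                                 # rectification as a crude envelope
spec = np.abs(np.fft.rfft(env - env.mean()))    # envelope spectrum (DC removed)
freqs = np.fft.rfftfreq(env.size, 1 / fs)

band = (freqs > 1) & (freqs < 50)               # search a plausible modulation band
peak_hz = freqs[band][np.argmax(spec[band])]
print(peak_hz)  # 5.0
```

In the study, analogous envelope-spectrum features, computed over the whole signal and within frequency bands, are the inputs to the discriminant function analysis that separates tremor sources.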
Beeman, John W.; Hansen, Amy C.; Sprando, Jamie M.
2015-01-01
Infection with Salmincola californiensis is common in juvenile Chinook salmon in western USA reservoirs and may affect the viability of fish used in studies of telemetered animals. Our limited assessment suggests that infection by Salmincola californiensis affects the short-term mortality of tagged fish and may affect the long-term viability of tagged fish after release; however, the intensity of infection in the sample population did not represent the source population due to the observational nature of the data. We suggest these results warrant further study into the effects of infection by Salmincola californiensis on the results obtained through active telemetry and perhaps other methods requiring handling of infected fish.
An Evaluation Methodology for Longitudinal Studies of Short Term Cancer Research Training Programs
Padilla, Luz A.; Venkatesh, Raam; Daniel, Casey L.; Desmond, Renee A.; Brooks, C. Michael; Waterbor, John W.
2014-01-01
The need to familiarize medical students and graduate health professional students with research training opportunities that cultivate the appeal of research careers is vital to the future of research. Comprehensive evaluation of a cancer research training program can be achieved through longitudinal tracking of program alumni to assess the program’s impact on each participant’s career path and professional achievements. With advances in technology and smarter means of communication, effective ways to track alumni have changed. In order to collect data on the career outcomes and achievements of nearly 500 short-term cancer research training program alumni from 1999–2013, we sought to contact each alumnus to request completion of a survey instrument online, or by means of a telephone interview. The effectiveness of each contact method that we used was quantified according to ease of use and time required. The most reliable source of contact information for tracking alumni from the early years of the program was previous tracking results; and for alumni from the later years, the most important source of contact information was university alumni records that provided email addresses and telephone numbers. Personal contacts with former preceptors were sometimes helpful, as were generic search engines and people search engines. Social networking was of little value for most searches. Using information from two or more sources in combination was most effective in tracking alumni. These results provide insights and tools for other research training programs that wish to track their alumni for long-term program evaluation. PMID:25412722
ERIC Educational Resources Information Center
Littlejohn, Emily
2018-01-01
"Adaptation" originally began as a scientific term, but from 1860 to today it most often refers to an altered version of a text, film, or other literary source. When this term was first analyzed, humanities scholars often measured adaptations against their source texts, frequently privileging "original" texts. However, this…
40 CFR 401.11 - General definitions.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Environmental Protection Agency. (d) The term point source means any discernible, confined and discrete conveyance, including but not limited to any pipe, ditch, channel, tunnel, conduit, well, discrete fissure... which pollutants are or may be discharged. (e) The term new source means any building, structure...
NASA Astrophysics Data System (ADS)
Zaccheo, T. S.; Pernini, T.; Dobler, J. T.; Blume, N.; Braun, M.
2017-12-01
This work highlights the use of the greenhouse-gas laser imaging tomography experiment (GreenLITE™) data in conjunction with a sparse tomography approach to identify and quantify both urban and industrial sources of CO2 and CH4. The GreenLITE™ system provides a user-defined set of time-sequenced intersecting chords or integrated column measurements at a fixed height through a quasi-horizontal plane of interest. This plane, with unobstructed views along the lines of sight, may range from complex industrial facilities to a small city scale or urban sector. The continuous time phased absorption measurements are converted to column concentrations and combined with a plume based model to estimate the 2-D distribution of gas concentration over extended areas ranging from 0.04-25 km2. Finally, these 2-D maps of concentration are combined with ancillary meteorological and atmospheric data to identify potential emission sources and provide first order estimates of their associated fluxes. In this presentation, we will provide a brief overview of the systems and results from both controlled release experiments and a long-term system deployment in Paris, FR. These results provide a quantitative assessment of the system's ability to detect and estimate CO2 and CH4 sources, and demonstrate its ability to perform long-term autonomous monitoring and quantification of either persistent or sporadic emissions that may have both health and safety as well as environmental impacts.
Coarse Grid Modeling of Turbine Film Cooling Flows Using Volumetric Source Terms
NASA Technical Reports Server (NTRS)
Heidmann, James D.; Hunter, Scott D.
2001-01-01
The recent trend in numerical modeling of turbine film cooling flows has been toward higher fidelity grids and more complex geometries. This trend has been enabled by the rapid increase in computing power available to researchers. However, the turbine design community requires fast turnaround time in its design computations, rendering these comprehensive simulations ineffective in the design cycle. The present study describes a methodology for implementing a volumetric source term distribution in a coarse grid calculation that can model the small-scale and three-dimensional effects present in turbine film cooling flows. This model could be implemented in turbine design codes or in multistage turbomachinery codes such as APNASA, where the computational grid size may be larger than the film hole size. Detailed computations of a single row of 35 deg round holes on a flat plate have been obtained for blowing ratios of 0.5, 0.8, and 1.0, and density ratios of 1.0 and 2.0 using a multiblock grid system to resolve the flows on both sides of the plate as well as inside the hole itself. These detailed flow fields were spatially averaged to generate a field of volumetric source terms for each conservative flow variable. Solutions were also obtained using three coarse grids having streamwise and spanwise grid spacings of 3d, 1d, and d/3. These coarse grid solutions used the integrated hole exit mass, momentum, energy, and turbulence quantities from the detailed solutions as volumetric source terms. It is shown that a uniform source term addition over a distance from the wall on the order of the hole diameter is able to predict adiabatic film effectiveness better than a near-wall source term model, while strictly enforcing correct values of integrated boundary layer quantities.
Malyarenko, Dariya; Fedorov, Andriy; Bell, Laura; Prah, Melissa; Hectors, Stefanie; Arlinghaus, Lori; Muzi, Mark; Solaiyappan, Meiyappan; Jacobs, Michael; Fung, Maggie; Shukla-Dave, Amita; McManus, Kevin; Boss, Michael; Taouli, Bachir; Yankeelov, Thomas E; Quarles, Christopher Chad; Schmainda, Kathleen; Chenevert, Thomas L; Newitt, David C
2018-01-01
This paper reports on results of a multisite collaborative project launched by the MRI subgroup of Quantitative Imaging Network to assess current capability and provide future guidelines for generating a standard parametric diffusion map Digital Imaging and Communication in Medicine (DICOM) in clinical trials that utilize quantitative diffusion-weighted imaging (DWI). Participating sites used a multivendor DWI DICOM dataset of a single phantom to generate parametric maps (PMs) of the apparent diffusion coefficient (ADC) based on two models. The results were evaluated for numerical consistency among models and true phantom ADC values, as well as for consistency of metadata with attributes required by the DICOM standards. This analysis identified missing metadata descriptive of the sources for detected numerical discrepancies among ADC models. Instead of the DICOM PM object, all sites stored ADC maps as DICOM MR objects, generally lacking designated attributes and coded terms for quantitative DWI modeling. Source-image reference, model parameters, ADC units and scale, deemed important for numerical consistency, were either missing or stored using nonstandard conventions. Guided by the identified limitations, the DICOM PM standard has been amended to include coded terms for the relevant diffusion models. Open-source software has been developed to support conversion of site-specific formats into the standard representation.
Possible Dual Earthquake-Landslide Source of the 13 November 2016 Kaikoura, New Zealand Tsunami
NASA Astrophysics Data System (ADS)
Heidarzadeh, Mohammad; Satake, Kenji
2017-10-01
A complicated earthquake (Mw 7.8) in terms of rupture mechanism occurred off the NE coast of South Island, New Zealand, on 13 November 2016 (UTC) in a complex tectonic setting comprising a transition strike-slip zone between two subduction zones. The earthquake generated a moderate tsunami with zero-to-crest amplitude of 257 cm at the near-field tide gauge station of Kaikoura. Spectral analysis of the tsunami observations showed dual peaks at 3.6-5.7 and 5.7-56 min, which we attribute to the potential landslide and earthquake sources of the tsunami, respectively. Tsunami simulations showed that a source model with slip on an offshore plate-interface fault reproduces the near-field tsunami observation in terms of amplitude, but fails in terms of tsunami period. On the other hand, a source model without offshore slip fails to reproduce the first peak, but the later phases are reproduced well in terms of both amplitude and period. It can be inferred that an offshore source must be involved, but it needs to be smaller in size than the plate-interface slip, which most likely points to a confined submarine landslide source, consistent with the dual-peak tsunami spectrum. We estimated the dimension of the potential submarine landslide at 8-10 km.
The use of the virtual source technique in computing scattering from periodic ocean surfaces.
Abawi, Ahmad T
2011-08-01
In this paper the virtual source technique is used to compute scattering of a plane wave from a periodic ocean surface. The virtual source technique is a method of imposing boundary conditions using virtual sources with initially unknown complex amplitudes. These amplitudes are then determined by applying the boundary conditions. The fields due to these virtual sources are given by the environment Green's function. In principle, satisfying boundary conditions on an infinite surface requires an infinite number of sources. In this paper, the periodic nature of the surface is employed to populate a single period of the surface with virtual sources, and m surface periods are added to obtain scattering from the entire surface. The use of an accelerated sum formula makes it possible to obtain a convergent sum with a relatively small number of terms (∼40). The accuracy of the technique is verified by comparing its results with those obtained using the integral equation technique.
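The core of the technique described in this abstract can be sketched in a few lines: populate one period of the surface with point sources of unknown complex amplitude, then solve for the amplitudes that cancel the incident field at collocation points on a pressure-release boundary. The sketch below is a minimal illustration under stated simplifications, not the paper's method: a sinusoidal surface, a single period with no periodic replicas or accelerated sum, and a simple free-space Green's function exp(ikr)/(4πr) in place of the environment Green's function. All names and parameter values are illustrative.

```python
import numpy as np

def virtual_source_demo(k=2*np.pi, period=1.0, h=0.05, n=40, standoff=0.05):
    """Solve for virtual-source amplitudes enforcing p = 0 (pressure release)
    on one period of the surface z = h*cos(2*pi*x/period); returns the largest
    boundary-condition residual checked between collocation points."""
    x = np.linspace(0.0, period, n, endpoint=False)
    zb = h * np.cos(2*np.pi * x / period)   # collocation points on the surface
    zs = zb + standoff                      # virtual sources outside the domain
    p_inc = np.exp(1j * k * zb)             # plane wave incident on the surface
    # Green's function matrix: field at boundary point i due to source j
    r = np.hypot(x[:, None] - x[None, :], zb[:, None] - zs[None, :])
    G = np.exp(1j * k * r) / (4*np.pi * r)
    # Amplitudes chosen so that incident + scattered field vanishes on the boundary
    q = np.linalg.solve(G, -p_inc)
    # Verify the boundary condition midway between collocation points
    xm = x[:-1] + 0.5 * (x[1] - x[0])
    zm = h * np.cos(2*np.pi * xm / period)
    rm = np.hypot(xm[:, None] - x[None, :], zm[:, None] - zs[None, :])
    p_total = np.exp(1j * k * zm) + (np.exp(1j * k * rm) / (4*np.pi * rm)) @ q
    return np.abs(p_total).max()
```

In this toy configuration the residual between collocation points stays well below the unit incident amplitude; the paper obtains the full periodic solution by adding replicated periods through an accelerated sum with ∼40 terms.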
Medrano-Félix, Andrés; Estrada-Acosta, Mitzi; Peraza-Garay, Felipe; Castro-Del Campo, Nohelia; Martínez-Urtaza, Jaime; Chaidez, Cristóbal
2017-08-01
Long-term exposure to river water by non-indigenous micro-organisms such as Salmonella may affect metabolic adaptation to carbon sources. This study was conducted to determine differences in carbon source utilization of Salmonella Oranienburg and Salmonella Saintpaul (isolated from tropical river water) as well as the control strain Salmonella Typhimurium exposed to laboratory, river water, and host cell (Hep-2 cell line) growth conditions. Results showed that Salmonella Oranienburg and Salmonella Saintpaul had a better ability for carbon source utilization under the three growth conditions evaluated; S. Oranienburg exhibited the fastest and highest utilization of different carbon sources, including D-Glucosaminic acid, N-acetyl-D-Glucosamine, Glucose-1-phosphate, and D-Galactonic acid, while Salmonella Saintpaul and S. Typhimurium showed limited utilization of carbon sources. In conclusion, this study suggests that environmental Salmonella strains show better survival and preconditioning abilities to external environments than the control strain based on their plasticity in the use of diverse carbon sources.
A source study of atmospheric polycyclic aromatic hydrocarbons in Shenzhen, South China.
Liu, Guoqing; Tong, Yongpeng; Luong, John H T; Zhang, Hong; Sun, Huibin
2010-04-01
Air pollution has become a serious problem in the Pearl River Delta, South China, particularly in winter due to the local micrometeorology. In this study, atmospheric polycyclic aromatic hydrocarbons (PAHs) were monitored weekly in Shenzhen during the winter of 2006. Results indicated that the detected PAHs were mainly vapor phase compounds, with phenanthrene dominant. The average vapor phase and particle phase PAHs concentrations in Shenzhen were 101.3 and 26.7 ng m-3, respectively. Meteorological conditions had a great effect on PAH concentrations. The higher PAHs concentrations observed during haze episodes might result from the accumulation of pollutants under a decreased boundary layer, slower wind speeds, and long-term dryness. The sources of PAHs in the air were estimated by principal component analysis in combination with diagnostic ratios. Vehicle exhaust was the major PAHs source in Shenzhen, accounting for 50.0% of the total PAHs emissions, whereas coal combustion and solid waste incineration contributed 29.4% and 20.6% of the total PAHs concentration, respectively. The results clearly indicated that the increasing number of solid waste incinerators has become a new important PAHs source in this region.
Numerical solutions of the complete Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Hassan, H. A.
1993-01-01
The objective of this study is to compare the use of assumed pdf (probability density function) approaches for modeling supersonic turbulent reacting flowfields with the more elaborate approach where the pdf evolution equation is solved. Assumed pdf approaches for averaging the chemical source terms require modest increases in CPU time, typically of the order of 20 percent above treating the source terms as 'laminar.' However, it is difficult to assume a form for these pdfs a priori that correctly mimics the behavior of the actual pdf governing the flow. Solving the evolution equation for the pdf is a theoretically sound approach, but because of the large dimensionality of this function, its solution requires a Monte Carlo method, which is computationally expensive and slow to converge. Preliminary results show both pdf approaches to yield similar solutions for the mean flow variables.
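The distinction between the 'laminar' treatment and an assumed-pdf average of a chemical source term can be illustrated with a toy Arrhenius rate integrated against an assumed beta pdf for temperature (a common choice for assumed-pdf closures). This is only a sketch of the idea; the rate constants, temperature bounds, and fluctuation level below are arbitrary illustration values, not taken from the study.

```python
import numpy as np

def _trapz(y, x):
    """Simple trapezoidal quadrature."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def arrhenius(T, A=1.0e8, Ta=15000.0):
    """Toy Arrhenius reaction rate with activation temperature Ta (K)."""
    return A * np.exp(-Ta / T)

def mean_rate_assumed_pdf(Tmean, Tvar, Tmin=300.0, Tmax=2400.0, n=4000):
    """'Turbulent' mean source term: integrate the rate against an assumed
    beta pdf for temperature with the prescribed mean and variance."""
    m = (Tmean - Tmin) / (Tmax - Tmin)       # normalized mean
    v = Tvar / (Tmax - Tmin) ** 2            # normalized variance
    g = m * (1.0 - m) / v - 1.0              # beta shape factor
    a, b = m * g, (1.0 - m) * g              # beta pdf exponents
    t = np.linspace(1e-6, 1.0 - 1e-6, n)
    pdf = t ** (a - 1.0) * (1.0 - t) ** (b - 1.0)
    pdf /= _trapz(pdf, t)                    # normalize numerically
    return _trapz(arrhenius(Tmin + (Tmax - Tmin) * t) * pdf, t)
```

Because the exponential rate is strongly convex in T at these conditions, the pdf-averaged rate for a mean of 1200 K with 150 K rms fluctuations comes out appreciably larger than the 'laminar' rate evaluated at 1200 K, which is exactly the effect the assumed-pdf machinery captures for its ~20 percent CPU overhead.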
LES-Modeling of a Partially Premixed Flame using a Deconvolution Turbulence Closure
NASA Astrophysics Data System (ADS)
Wang, Qing; Wu, Hao; Ihme, Matthias
2015-11-01
The modeling of the turbulence/chemistry interaction in partially premixed and multi-stream combustion remains an outstanding issue. By extending a recently developed constrained minimum mean-square error deconvolution (CMMSED) method, the objective of this work is to develop a source-term closure for turbulent multi-stream combustion. In this method, the chemical source term is obtained from a three-stream flamelet model, and CMMSED is used as the closure model, thereby eliminating the need for presumed PDF-modeling. The model is applied to LES of a piloted turbulent jet flame with inhomogeneous inlets, and simulation results are compared with experiments. Comparisons with presumed PDF-methods are performed, and issues regarding resolution and conservation of the CMMSED method are examined. The author would like to acknowledge the support of funding from the Stanford Graduate Fellowship.
Low-grade geothermal energy conversion by organic Rankine cycle turbine generator
NASA Astrophysics Data System (ADS)
Zarling, J. P.; Aspnes, J. D.
Results of a demonstration project which helped determine the feasibility of converting low-grade thermal energy in 49 C water into electrical energy via an organic Rankine cycle 2500 watt (electrical) turbine-generator are presented. The geothermal source which supplied the water is located in a rural Alaskan village. The reasons an organic Rankine cycle turbine-generator was investigated as a possible source of electric power in rural Alaska are: (1) high cost of operating diesel-electric units and their poor long-term reliability when high-quality maintenance is unavailable and (2) the extremely high level of long-term reliability reportedly attained by commercially available organic Rankine cycle turbines. Data is provided on the thermal and electrical operating characteristics of an experimental organic Rankine cycle turbine-generator operating at a uniquely low vaporizer temperature.
NASA Technical Reports Server (NTRS)
Shyy, W.; Thakur, S.; Udaykumar, H. S.
1993-01-01
A high accuracy convection scheme using a sequential solution technique has been developed and applied to simulate the longitudinal combustion instability and its active control. The scheme has been devised in the spirit of the Total Variation Diminishing (TVD) concept with special source term treatment. Due to the substantial heat release effect, a clear delineation of the key elements employed by the scheme, i.e., the adjustable damping factor and the source term treatment has been made. By comparing with the first-order upwind scheme previously utilized, the present results exhibit less damping and are free from spurious oscillations, offering improved quantitative accuracy while confirming the spectral analysis reported earlier. A simple feedback type of active control has been found to be capable of enhancing or attenuating the magnitude of the combustion instability.
Long-Term Spectral and Timing Behavior of Black Hole Candidate XTE J1908+094
NASA Technical Reports Server (NTRS)
Gogus, E.; Finger, M. H.; Kouveliotou, C.; Woods, P. M.; Patel, S. K.; Rupen, M.; Swank, J. H.; Markwardt, C. B.; Van Der Klis, M.
2003-01-01
The X-ray transient XTE J1908+094 was serendipitously discovered during RXTE ToO observations of SGR 1900+14 in February 2002. Following the discovery, RXTE routinely monitored the region. At the onset, the source was found in a spectrally low/hard state lasting for approximately 40 days, followed by a quick transition to the high/soft state. At the highest X-ray intensity level (seen on 2002 April 6), the source flux (2-10 keV) reached approximately 105 mCrab, then decayed rapidly. Overall outburst characteristics resemble the transient behavior of galactic black hole candidates. Here, we present the long-term light curves, and detailed spectral and timing investigations of XTE J1908+094 using the RXTE/PCA data. We also report the results of Chandra ACIS observations which were performed during the decay phase.
McDonald, Brian C; Goldstein, Allen H; Harley, Robert A
2015-04-21
A fuel-based approach is used to assess long-term trends (1970-2010) in mobile source emissions of black carbon (BC) and organic aerosol (OA, including both primary emissions and secondary formation). The main focus of this analysis is the Los Angeles Basin, where a long record of measurements is available to infer trends in ambient concentrations of BC and organic carbon (OC), with OC used here as a proxy for OA. Mobile source emissions and ambient concentrations have decreased similarly, reflecting the importance of on- and off-road engines as sources of BC and OA in urban areas. In 1970, the on-road sector accounted for ∼90% of total mobile source emissions of BC and OA (primary + secondary). Over time, as on-road engine emissions have been controlled, the relative importance of off-road sources has grown. By 2010, off-road engines were estimated to account for 37 ± 20% and 45 ± 16% of total mobile source contributions to BC and OA, respectively, in the Los Angeles area. This study highlights both the success of efforts to control on-road emission sources, and the importance of considering off-road engine and other VOC source contributions when assessing long-term emission and ambient air quality trends.
Mitchell, Karen J; Mather, Mara; Johnson, Marcia K; Raye, Carol L; Greene, Erich J
2006-10-02
We investigated the hypothesis that arousal recruits attention to item information, thereby disrupting working memory processes that help bind items to context. Using functional magnetic resonance imaging, we compared brain activity when participants remembered negative or neutral picture-location conjunctions (source memory) versus pictures only. Behaviorally, negative trials showed disruption of short-term source, but not picture, memory; long-term picture recognition memory was better for negative than for neutral pictures. Activity in areas involved in working memory and feature integration (precentral gyrus and its intersect with superior temporal gyrus) was attenuated on negative compared with neutral source trials relative to picture-only trials. Visual processing areas (middle occipital and lingual gyri) showed greater activity for negative than for neutral trials, especially on picture-only trials.
QCD sum rules study of meson-baryon sigma terms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Erkol, Gueray; Oka, Makoto; Turan, Guersevil
2008-11-01
The pion-baryon sigma terms and the strange-quark condensates of the octet and the decuplet baryons are calculated by employing the method of QCD sum rules. We evaluate the vacuum-to-vacuum transition matrix elements of two baryon interpolating fields in an external isoscalar-scalar field and use a Monte Carlo-based approach to systematically analyze the sum rules and the uncertainties in the results. We extract the ratios of the sigma terms, which have rather high accuracy and minimal dependence on QCD parameters. We discuss the sources of uncertainties and comment on possible strangeness content of the nucleon and the Delta.
Circular current loops, magnetic dipoles and spherical harmonic analysis.
Alldredge, L.R.
1980-01-01
Spherical harmonic analysis (SHA) is the most used method of describing the Earth's magnetic field, even though spherical harmonic coefficients (SHC) almost completely defy interpretation in terms of real sources. Some moderately successful efforts have been made to represent the field in terms of dipoles placed in the core in an effort to have the model come closer to representing real sources. Dipole sources are only a first approximation to the real sources which are thought to be a very complicated network of electrical currents in the core of the Earth. -Author
Madjidi, Faramarz; Behroozy, Ali
2014-01-01
Exposure to visible light and near infrared (NIR) radiation in the wavelength region of 380 to 1400 nm may cause thermal retinal injury. In this analysis, the effective spectral radiance of a hot source is replaced by its temperature in the exposure limit values in the region of 380-1400 nm. This article describes the development and implementation of a computer code to predict those temperatures, corresponding to the exposure limits proposed by the American Conference of Governmental Industrial Hygienists (ACGIH). Viewing duration and apparent diameter of the source were inputs for the computer code. At the first stage, an infinite series was created for calculation of spectral radiance by integration with Planck's law. At the second stage for calculation of effective spectral radiance, the initial terms of this infinite series were selected and integration was performed by multiplying these terms by a weighting factor R(λ) in the wavelength region 380-1400 nm. At the third stage, using a computer code, the source temperature that can emit the same effective spectral radiance was found. As a result, based only on measuring the source temperature and accounting for the exposure time and the apparent diameter of the source, it is possible to decide whether the exposure to visible and NIR in any 8-hr workday is permissible. The substitution of source temperature for effective spectral radiance provides a convenient way to evaluate exposure to visible light and NIR.
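The three-stage procedure in this abstract can be sketched numerically: evaluate Planck's law, weight and integrate it over 380-1400 nm to get an effective radiance, then search for the source temperature that reproduces a target radiance. The sketch below is an illustration, not the authors' code: it uses direct trapezoidal integration in place of the infinite-series evaluation, and a flat placeholder in place of the ACGIH R(λ) weighting table, which is not reproduced here.

```python
import numpy as np

H, C, KB = 6.62607015e-34, 2.99792458e8, 1.380649e-23  # SI physical constants

def planck(lam, T):
    """Planck's law: spectral radiance (W m^-2 sr^-1 m^-1) at wavelength lam (m)."""
    return (2.0 * H * C**2 / lam**5) / np.expm1(H * C / (lam * KB * T))

def effective_radiance(T, R=lambda lam: np.ones_like(lam), n=2000):
    """Weighted radiance integrated over 380-1400 nm (trapezoidal rule).
    R is a flat placeholder for the ACGIH burn-hazard weighting R(lambda)."""
    lam = np.linspace(380e-9, 1400e-9, n)
    y = R(lam) * planck(lam, T)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(lam)))

def temperature_for(L_target, Tlo=500.0, Thi=6500.0, tol=1e-3):
    """Bisection on the monotonic radiance-temperature relation: find the
    source temperature whose effective radiance equals L_target."""
    while Thi - Tlo > tol:
        Tmid = 0.5 * (Tlo + Thi)
        if effective_radiance(Tmid) < L_target:
            Tlo = Tmid
        else:
            Thi = Tmid
    return 0.5 * (Tlo + Thi)
```

Round-tripping a temperature through the two functions recovers it to the bisection tolerance, which is the property the described computer code relies on: a measured source temperature stands in one-to-one relation to the effective spectral radiance in this band.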
Impact of saline water sources on hypertension and cardiovascular disease risk in coastal Bangladesh
NASA Astrophysics Data System (ADS)
Butler, Adrian; Hoque, Mohammad; Mathewson, Eleanor; Ahmed, Kazi; Rahman, Moshuir; Vineis, Paolo; Scheelbeek, Pauline
2016-04-01
Southern Bangladesh is periodically affected by tropical cyclone induced storm surges. Such events can result in the inundation of large areas of the coastal plain by sea water. Over time these episodic influxes of saline water have led to the build-up of high salinities (e.g. > 1,000 mg/l) in the shallow (up to ca. 150 m depth) groundwater. Owing to the highly saline groundwater, local communities have developed alternative surface water sources by constructing artificial drinking water ponds, which collect monsoonal rainwater. These have far greater storage than traditional rainwater harvesting systems, which typically use 40 litre storage containers that are quickly depleted during the dry season. Unfortunately, the ponds can also become salinised during storm surge events, the impacts of which can last for a number of years. A combined hydrological and epidemiological research programme over the past two years has been undertaken to understand the potential health risks associated with these saline water sources, as excessive intake of sodium can lead to hypertension and an increased risk of cardiovascular disease (such as stroke and heart attack). An important aspect of the selected research sites was the variety of drinking water sources available. These included the presence of managed aquifer recharge sites where monsoonal rainwater is stored in near-surface (semi-)confined aquifers for abstraction during the dry season. This provided an opportunity for the effects of interventions with lower salinity sources to be assessed. Adjusting for confounding factors such as age, gender and diet, the results show a significant association between salinity and blood pressure. Furthermore, the results also showed such impacts are reversible.
In order to evaluate the costs and benefits of such interventions, a water salinity - dose impact model is being developed to assess the effectiveness of alternative drinking water sources, such as enhanced rainwater harvesting, localised solar distillation, as well as the long-term risks from traditional water sources due to climate change. Preliminary results from the model will be presented showing the relative impacts from these interventions. These highlight the need for an integrated approach to salinity management in such coastal deltas in order to improve the long-term health of local communities living in these areas.
A study of numerical methods for hyperbolic conservation laws with stiff source terms
NASA Technical Reports Server (NTRS)
Leveque, R. J.; Yee, H. C.
1988-01-01
The proper modeling of nonequilibrium gas dynamics is required in certain regimes of hypersonic flow. For inviscid flow this gives a system of conservation laws coupled with source terms representing the chemistry. Often a wide range of time scales is present in the problem, leading to numerical difficulties as in stiff systems of ordinary differential equations. Stability can be achieved by using implicit methods, but other numerical difficulties are observed. The behavior of typical numerical methods on a simple advection equation with a parameter-dependent source term was studied. Two approaches to incorporate the source term were utilized: MacCormack type predictor-corrector methods with flux limiters, and splitting methods in which the fluid dynamics and chemistry are handled in separate steps. Various comparisons over a wide range of parameter values were made. In the stiff case where the solution contains discontinuities, incorrect numerical propagation speeds are observed with all of the methods considered. This phenomenon is studied and explained.
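The incorrect-propagation-speed phenomenon this abstract describes can be reproduced in a few lines with the classic model problem u_t + a u_x = -μ u(u-1)(u-1/2), treated by splitting: an upwind advection step followed by a source step. The sketch below is an illustration, not the paper's code, and takes the infinitely stiff limit in which the source step simply projects each cell value onto the nearest stable equilibrium (0 or 1); all numerical parameters are illustrative.

```python
import numpy as np

def stiff_splitting_demo(nx=100, a=1.0, cfl=0.9, tfinal=0.3):
    """Godunov splitting for u_t + a u_x = stiff source, in the stiff limit.
    Returns the grid and solution; the step initially sits at x = 0.3 and
    should travel at speed a, reaching x = 0.6 at t = 0.3."""
    x = np.linspace(0.0, 1.0, nx, endpoint=False)
    dx = x[1] - x[0]
    dt = cfl * dx / a
    u = np.where(x < 0.3, 1.0, 0.0)        # step initial data
    t = 0.0
    while t < tfinal:
        # Step 1: first-order upwind advection (a > 0, periodic domain)
        u = u - (a * dt / dx) * (u - np.roll(u, 1))
        # Step 2: stiff source in the mu -> infinity limit: project onto the
        # nearest stable root of u(u-1)(u-1/2), i.e. 0 or 1
        u = np.where(u >= 0.5, 1.0, 0.0)
        t += dt
    return x, u
```

Any smeared cell that reaches u ≥ 1/2 after the advection step is snapped to 1, so the discontinuity advances exactly one cell per time step: its numerical speed is dx/dt = a/cfl rather than the correct speed a, and by t = 0.3 the front sits near x ≈ 0.64 instead of 0.6. This is the spurious, grid-dependent propagation speed the abstract reports for all the methods considered.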
Wennberg, Richard; Cheyne, Douglas
2014-05-01
To assess the reliability of MEG source imaging (MSI) of anterior temporal spikes through detailed analysis of the localization and orientation of source solutions obtained for a large number of spikes that were separately confirmed by intracranial EEG to be focally generated within a single, well-characterized spike focus. MSI was performed on 64 identical right anterior temporal spikes from an anterolateral temporal neocortical spike focus. The effects of different volume conductors (sphere and realistic head model), removal of noise with low frequency filters (LFFs) and averaging multiple spikes were assessed in terms of the reliability of the source solutions. MSI of single spikes resulted in scattered dipole source solutions that showed reasonable reliability for localization at the lobar level, but only for solutions with a goodness-of-fit exceeding 80% using a LFF of 3 Hz. Reliability at a finer level of intralobar localization was limited. Spike averaging significantly improved the reliability of source solutions and averaging 8 or more spikes reduced dependency on goodness-of-fit and data filtering. MSI performed on topographically identical individual spikes from an intracranially defined classical anterior temporal lobe spike focus was limited by low reliability (i.e., scattered source solutions) in terms of fine, sublobar localization within the ipsilateral temporal lobe. Spike averaging significantly improved reliability. MSI performed on individual anterior temporal spikes is limited by low reliability. Reduction of background noise through spike averaging significantly improves the reliability of MSI solutions. Copyright © 2013 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
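The benefit of spike averaging reported above is the familiar √N noise reduction: averaging N topographically identical spikes leaves the signal unchanged while shrinking uncorrelated background noise by a factor of √N, which is what stabilizes the dipole fits. A minimal simulation with a synthetic spike and Gaussian sensor noise (all waveform and noise parameters are illustrative, not MEG data):

```python
import numpy as np

def averaging_gain(n_spikes=8, n_samples=200, sigma=1.0, seed=0):
    """Compare residual noise on a single noisy synthetic spike against an
    average of n_spikes identical spikes with independent Gaussian noise."""
    rng = np.random.default_rng(seed)
    t = np.linspace(-0.05, 0.05, n_samples)   # 100 ms window around the spike
    spike = np.exp(-((t / 0.01) ** 2))        # synthetic spike waveform
    trials = spike + rng.normal(0.0, sigma, size=(n_spikes, n_samples))
    avg = trials.mean(axis=0)
    noise_single = np.std(trials[0] - spike)  # residual noise, single trial
    noise_avg = np.std(avg - spike)           # residual noise after averaging
    return noise_single / noise_avg           # expect roughly sqrt(n_spikes)
```

With 8 spikes the residual noise drops by a factor close to √8 ≈ 2.8, consistent with the finding that averaging 8 or more spikes reduced the dependency on goodness-of-fit and data filtering.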
Effects of volcano topography on seismic broad-band waveforms
NASA Astrophysics Data System (ADS)
Neuberg, Jürgen; Pointer, Tim
2000-10-01
Volcano seismology often deals with rather shallow seismic sources and seismic stations deployed in their near field. The complex stratigraphy on volcanoes and near-field source effects have a strong impact on the seismic wavefield, complicating the interpretation techniques that are usually employed in earthquake seismology. In addition, as most volcanoes have a pronounced topography, the interference of the seismic wavefield with the stress-free surface results in severe waveform perturbations that affect seismic interpretation methods. In this study we deal predominantly with the surface effects, but take into account the impact of a typical volcano stratigraphy as well as near-field source effects. We derive a correction term for plane seismic waves and a plane-free surface such that for smooth topographies the effect of the free surface can be totally removed. Seismo-volcanic sources radiate energy in a broad frequency range with a correspondingly wide range of different Fresnel zones. A 2-D boundary element method is employed to study how the size of the Fresnel zone is dependent on source depth, dominant wavelength and topography in order to estimate the limits of the plane wave approximation. This approximation remains valid if the dominant wavelength does not exceed twice the source depth. Further aspects of this study concern particle motion analysis to locate point sources and the influence of the stratigraphy on particle motions. Furthermore, the deployment strategy of seismic instruments on volcanoes, as well as the direct interpretation of the broad-band waveforms in terms of pressure fluctuations in the volcanic plumbing system, are discussed.
Engström, Emma; Balfors, Berit; Mörtberg, Ulla; Thunvik, Roger; Gaily, Tarig; Mangold, Mikael
2015-05-15
In low-income regions, drinking water is often derived from groundwater sources, which might spread diarrheal disease if they are microbiologically polluted. This study aimed to investigate the occurrence of fecal contamination in 147 improved groundwater sources in Juba, South Sudan and to assess potential contributing risk factors, based on bivariate statistical analysis. Thermotolerant coliforms (TTCs) were detected in 66% of the investigated sources, including 95 boreholes, breaching the health-based recommendations for drinking water. A significant association (p<0.05) was determined between the presence of TTCs and the depth of cumulative, long-term prior precipitation (both within the previous five days and within the past month). No such link was found to short-term rainfall, the presence of latrines or damages in the borehole apron. However, the risk factor analysis further suggested, to a lesser degree, that the local topography and on-site hygiene were additionally significant. In summary, the analysis indicated that an important contamination mechanism was fecal pollution of the contributing groundwater, which was unlikely due to the presence of latrines; instead, infiltration from contaminated surface water was more probable. The reduction in fecal sources in the environment in Juba is thus recommended, for example, through constructing latrines or designating protection areas near water sources. The study results contribute to the understanding of microbiological contamination of groundwater sources in areas with low incomes and high population densities, tropical climates and weathered basement complex environments, which are common in urban sub-Saharan Africa. Copyright © 2015 Elsevier B.V. All rights reserved.
Hiring in a Hobbesian World. Social Infrastructure and Employers' Use of Information.
ERIC Educational Resources Information Center
Miller, Shazia Rafiullah; Rosenbaum, James E.
1997-01-01
Interviews of 51 employers showed they do not use transcripts or teacher recommendations in hiring. They mistrust applicant information from most sources, emphasizing interviews and "gut instinct," which often gives invalid results. They tend to use information from other employees or long-term social networks. (SK)
Educating the Ablest: Twenty Years Later
ERIC Educational Resources Information Center
Culross, Rita R.
2015-01-01
This study examines the current lives of thirty-five individuals who participated in high school gifted programs twenty years ago. The research specifically looked at educational attainment and career goals in terms of expressed aspirations in high school, using social media and other Internet sources. Results indicated continued support for the…
40 CFR 417.61 - Specialized definitions.
Code of Federal Regulations, 2012 CFR
2012-07-01
... AND STANDARDS SOAP AND DETERGENT MANUFACTURING POINT SOURCE CATEGORY Manufacture of Soap Flakes and... would result if all water were removed from the actual product. (c) The term neat soap shall mean the solution of completely saponified and purified soap containing about 20-30 percent water which is ready for...
40 CFR 417.31 - Specialized definitions.
Code of Federal Regulations, 2012 CFR
2012-07-01
... AND STANDARDS SOAP AND DETERGENT MANUFACTURING POINT SOURCE CATEGORY Soap Manufacturing by Fatty Acid... would result if all water were removed from the actual product. (c) The term neat soap shall mean the solution of completely saponified and purified soap containing about 20-30 percent water which is ready for...
Section summary: Remote sensing
Belinda Arunarwati Margono
2013-01-01
Remote sensing is an important data source for monitoring the change of forest cover, in terms of both total removal of forest cover (deforestation), and change of canopy cover, structure and forest ecosystem services that result in forest degradation. In the context of Intergovernmental Panel on Climate Change (IPCC), forest degradation monitoring requires information...
Trends in Mortality of Tuberculosis Patients in the United States: The Long-term Perspective
Barnes, Richard F.W.; Moore, Maria Luisa; Garfein, Richard S.; Brodine, Stephanie; Strathdee, Steffanie A.; Rodwell, Timothy C.
2011-01-01
PURPOSE To describe long-term trends in TB mortality and to compare trends estimated from two different sources of public health surveillance data. METHODS Trends and changes in trend were estimated by joinpoint regression. Comparisons between datasets were made by fitting a Poisson regression model. RESULTS Since 1900, TB mortality rates estimated from death certificates have declined steeply, except for a period of no change in the 1980s. This decade had long-term consequences resulting in more TB deaths in later years than would have occurred had there been no flattening of the trend. Recent trends in TB mortality estimated from National Tuberculosis Surveillance System (NTSS) data, which record all-cause mortality, differed from trends based on death certificates. In particular, NTSS data showed TB mortality rates flattening since 2002. CONCLUSIONS Estimates of trends in TB mortality vary by data source, and therefore interpretation of the success of control efforts will depend upon the surveillance dataset used. The datasets may be subject to different biases that vary with time. One dataset showed a sustained improvement in the control of TB since the early 1990s while the other indicated that the rate of TB mortality was no longer declining. PMID:21820320
Sediment delivery to the Gulf of Alaska: source mechanisms along a glaciated transform margin
Dobson, M.R.; O'Leary, D.; Veart, M.
1998-01-01
Sediment delivery to the Gulf of Alaska occurs via four areally extensive deep-water fans, sourced from grounded tidewater glaciers. During periods of climatic cooling, glaciers cross a narrow shelf and discharge sediment down the continental slope. Because the coastal terrain is dominated by fjords and a narrow, high-relief Pacific watershed, deposition is dominated by channellized point-source fan accumulations, the volumes of which are primarily a function of climate. The sediment distribution is modified by a long-term tectonic translation of the Pacific plate to the north along the transform margin. As a result, the deep-water fans are gradually moved away from the climatically controlled point sources. Sets of abandoned channels record the effect of translation during the Plio-Pleistocene.
Enhancements to the MCNP6 background source
McMath, Garrett E.; McKinney, Gregg W.
2015-10-19
The particle transport code MCNP has been used to produce a background radiation data file on a worldwide grid that can easily be sampled as a source in the code. Location-dependent cosmic showers were modeled by Monte Carlo methods to produce the resulting neutron and photon background flux at 2054 locations around Earth. An improved galactic-cosmic-ray feature was used to model the source term, as well as data from multiple sources to model the transport environment through atmosphere, soil, and seawater. A new elevation scaling feature was also added to the code to increase the accuracy of the cosmic neutron background for user locations with off-grid elevations. Furthermore, benchmarking has shown the neutron integral flux values to be within experimental error.
NASA Astrophysics Data System (ADS)
Zou, Zhen-Zhen; Yu, Xu-Tao; Zhang, Zai-Chen
2018-04-01
First, the entanglement source deployment problem is studied in a quantum multi-hop network, where deployment has a significant influence on quantum connectivity. Two optimization algorithms for a limited number of entanglement sources are introduced in this paper. A deployment algorithm based on node position (DNP) improves connectivity by guaranteeing that all overlapping areas of the distribution ranges of the entanglement sources contain nodes. In addition, a deployment algorithm based on an improved genetic algorithm (DIGA) is implemented by dividing the region into grids. From the simulation results, DNP and DIGA improve quantum connectivity by 213.73% and 248.83%, respectively, compared to random deployment, and the latter performs better in terms of connectivity. However, DNP is more flexible and adaptive to change, as it stops running when all nodes are covered.
Alpine Warming induced Nitrogen Export from Green Lakes Valley, Colorado Front Range, USA
NASA Astrophysics Data System (ADS)
Barnes, R. T.; Williams, M. W.; Parman, J.
2012-12-01
Alpine ecosystems are particularly susceptible to disturbance due to their short growing seasons, sparse vegetation and thin soils. Atmospheric nitrogen deposition and warming temperatures currently affect Green Lakes Valley (GLV) within the Colorado Front Range. Research conducted within the alpine links chronic nitrogen inputs to a suite of ecological impacts, resulting in increased nitrate export. According to NADP records at the site, the atmospheric flux of nitrogen has decreased by 0.56 kg ha⁻¹ yr⁻¹ since 2000, due to a decrease in precipitation. Concurrent with this decrease, alpine nitrate yields have continued to increase, by 32% relative to the previous decade (1990-1999). To determine the source(s) of the sustained nitrate increases, we utilized long-term datasets to construct a mass balance model for four stream segments (glacier to subalpine) for nitrogen and weathering-product constituents. We also compared geochemical fingerprints of various solute sources (glacial meltwater, thawing permafrost, snow, and stream water) to alpine stream water to determine if sources had changed over time. Long-term trends indicate that, in addition to nitrate, sulfate, calcium, and silica have also increased over the same period. The geochemical composition of thawing permafrost (as indicated by rock glacier meltwater) suggests it is the source of these weathering products. Mass balance results indicate that the high ammonium loads within glacial meltwater are rapidly nitrified, contributing approximately 0.45 kg yr⁻¹ to the NO3⁻ flux within the upper reaches of the watershed. The sustained export of these solutes during dry summer months is likely facilitated by the thawing cryosphere providing hydraulic connectivity late into the growing season.
In a neighboring catchment lacking permafrost and glacial features, there were no long-term weathering or nitrogen solute trends, providing further evidence that the changes in alpine chemistry in GLV are likely due to cryospheric thaw exposing soils to biological and geochemical processes. These findings suggest that efforts to reduce nitrogen deposition loads may not improve water quality, as the thawing cryosphere associated with climate change may affect alpine nitrate concentrations as much as, or more than, atmospheric deposition trends.
Surgical Ablation of Atrial Fibrillation Using Energy Sources.
Brick, Alexandre Visconti; Braile, Domingo Marcolino
2015-01-01
Surgical ablation, concomitant with other operations, is an option for treatment in patients with chronic atrial fibrillation. The aim of this study is to present a literature review on surgical ablation of atrial fibrillation in patients undergoing cardiac surgery, considering energy sources and return to sinus rhythm. A comprehensive survey of the literature on surgical ablation of atrial fibrillation was performed, considering energy sources, sample size, study type, outcome (early and late), and return to sinus rhythm. In studies with immediate results (n=5), the percentage of return to sinus rhythm ranged from 73% to 96%, while in those with long-term results (n=20; 12 months or more) it ranged from 62% to 97.7%. In both groups, there was subsequent clinical improvement of patients who underwent ablation, regardless of the energy source used. Surgical ablation of atrial fibrillation is essential for the treatment of this arrhythmia. With current technology it may be minimally invasive, making it mandatory to attempt a procedure to restore sinus rhythm in patients requiring heart surgery.
Leveraging Terminological Resources for Mapping between Rare Disease Information Sources
Rance, Bastien; Snyder, Michelle; Lewis, Janine; Bodenreider, Olivier
2015-01-01
Background Rare disease information sources are incompletely and inconsistently cross-referenced to one another, making it difficult for information seekers to navigate across them. The development of such cross-references established manually by experts is generally labor intensive and costly. Objectives To develop an automatic mapping between two of the major rare diseases information sources, GARD and Orphanet, by leveraging terminological resources, especially the UMLS. Methods We map the rare disease terms from Orphanet and ORDR to the UMLS. We use the UMLS as a pivot to bridge between the rare disease terminologies. We compare our results to a mapping obtained through manually established cross-references to OMIM. Results Our mapping has a precision of 94%, a recall of 63% and an F1-score of 76%. Our automatic mapping should help facilitate the development of more complete and consistent cross-references between GARD and Orphanet, and is applicable to other rare disease information sources as well. PMID:23920611
Computation of nonlinear ultrasound fields using a linearized contrast source method.
Verweij, Martin D; Demi, Libertario; van Dongen, Koen W A
2013-08-01
Nonlinear ultrasound is important in medical diagnostics because imaging of the higher harmonics improves resolution and reduces scattering artifacts. Second harmonic imaging is currently standard, and higher harmonic imaging is under investigation. The efficient development of novel imaging modalities and equipment requires accurate simulations of nonlinear wave fields in large volumes of realistic (lossy, inhomogeneous) media. The Iterative Nonlinear Contrast Source (INCS) method has been developed to deal with spatiotemporal domains measuring hundreds of wavelengths and periods. This full wave method considers the nonlinear term of the Westervelt equation as a nonlinear contrast source, and solves the equivalent integral equation via the Neumann iterative solution. Recently, the method has been extended with a contrast source that accounts for spatially varying attenuation. The current paper addresses the problem that the Neumann iterative solution converges badly for strong contrast sources. The remedy is linearization of the nonlinear contrast source, combined with application of more advanced methods for solving the resulting integral equation. Numerical results show that linearization in combination with a Bi-Conjugate Gradient Stabilized method allows the INCS method to deal with fairly strong, inhomogeneous attenuation, while the error due to the linearization can be eliminated by restarting the iterative scheme.
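The convergence issue described in this abstract can be illustrated with a toy fixed-point problem of the same shape, u = f + Ku: the Neumann iteration diverges whenever the contrast operator has spectral radius above one, while the equivalent linear system (I − K)u = f remains solvable. A minimal sketch (not the INCS method itself; the operator and sizes are illustrative, and a plain direct solve stands in for the more advanced iterative solvers, such as Bi-CGSTAB, that the abstract discusses):

```python
# Toy illustration: a fixed-point problem u = f + K u of the same shape
# as a contrast-source integral equation with a "strong" contrast source.
import numpy as np

rng = np.random.default_rng(1)
n = 200
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))  # random orthogonal matrix
K = 1.5 * Q              # ||K||_2 = 1.5 > 1: Neumann series cannot converge
f = rng.standard_normal(n)

# Neumann iteration u_{k+1} = f + K u_k diverges for this K
u = f.copy()
for _ in range(50):
    u = f + K @ u
neumann_resid = np.linalg.norm(u - (f + K @ u))   # grows like 1.5**k

# The equivalent linear system (I - K) u = f is still well-conditioned
u_lin = np.linalg.solve(np.eye(n) - K, f)
linear_resid = np.linalg.norm(u_lin - (f + K @ u_lin))  # near machine precision
```

This is the same remedy in miniature: recast the fixed-point (Neumann) iteration as a linear system and hand it to a solver that does not require the contrast to be weak.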
NASA Astrophysics Data System (ADS)
Taha, M. P. M.; Drew, G. H.; Tamer, A.; Hewings, G.; Jordinson, G. M.; Longhurst, P. J.; Pollard, S. J. T.
We present bioaerosol source term concentrations from passive and active composting sources and compare emissions from green waste compost aged 1, 2, 4, 6, 8, 12 and 16 weeks. Results reveal that the age of the compost has little effect on the bioaerosol concentrations emitted from passive windrow sources. However, emissions from turning compost during the early stages may be higher than during the later stages of the composting process. The bioaerosol emissions from passive sources were in the range of 10³-10⁴ cfu m⁻³, with releases from active sources typically 1-log higher. We propose improvements to current risk assessment methodologies by examining emission rates and the differences between two air dispersion models for the prediction of downwind bioaerosol concentrations at off-site points of exposure. The SCREEN3 model provides a more precautionary estimate of the source depletion curves of bioaerosol emissions than ADMS 3.3. Both models predict that bioaerosol concentrations decrease to below typical background concentrations within 250 m, the distance at which the regulator in England and Wales may require a risk assessment to be completed.
Source-term characterisation and solid speciation of plutonium at the Semipalatinsk NTS, Kazakhstan.
Nápoles, H Jiménez; León Vintró, L; Mitchell, P I; Omarova, A; Burkitbayev, M; Priest, N D; Artemyev, O; Lukashenko, S
2004-01-01
New data on the concentrations of key fission/activation products and transuranium nuclides in samples of soil and water from the Semipalatinsk Nuclear Test Site are presented and interpreted. Sampling was carried out at Ground Zero, Lake Balapan, the Tel'kem craters and reference locations within the test site boundary well removed from localised sources. Radionuclide ratios have been used to characterise the source term(s) at each of these sites. The geochemical partitioning of plutonium has also been examined and it is shown that the bulk of the plutonium contamination at most of the sites examined is in a highly refractory, non-labile form.
An Empirical Temperature Variance Source Model in Heated Jets
NASA Technical Reports Server (NTRS)
Khavaran, Abbas; Bridges, James
2012-01-01
An acoustic analogy approach is implemented that models the sources of jet noise in heated jets. The equivalent sources of turbulent mixing noise are recognized as the differences between the fluctuating and Favre-averaged Reynolds stresses and enthalpy fluxes. While in a conventional acoustic analogy only Reynolds stress components are scrutinized for their noise generation properties, it is now accepted that a comprehensive source model should include the additional entropy source term. Following Goldstein's generalized acoustic analogy, the set of Euler equations is divided into two sets of equations that govern a non-radiating base flow plus its residual components. When the base flow is considered as a locally parallel mean flow, the residual equations may be rearranged to form an inhomogeneous third-order wave equation. A general solution is then written using a Green's function method, while all nonlinear terms are treated as the equivalent sources of aerodynamic sound and are modeled accordingly. In a previous study, a specialized Reynolds-averaged Navier-Stokes (RANS) solver was implemented to compute the variance of thermal fluctuations that determines the enthalpy flux source strength. The main objective here is to present an empirical model capable of providing a reasonable estimate of the stagnation temperature variance in a jet. Such a model is parameterized as a function of the mean stagnation temperature gradient in the jet, and is evaluated using commonly available RANS solvers. The ensuing thermal source distribution is compared with measurements as well as computational results from a dedicated RANS solver that employs an enthalpy variance and dissipation rate model. Turbulent mixing noise predictions are presented for a wide range of jet temperature ratios, from 1.0 to 3.20.
NASA Astrophysics Data System (ADS)
Tarvainen, O.; Rouleau, G.; Keller, R.; Geros, E.; Stelzer, J.; Ferris, J.
2008-02-01
The converter-type negative ion source currently employed at the Los Alamos Neutron Science Center (LANSCE) is based on cesium-enhanced surface production of H- ion beams in a filament-driven discharge. In this kind of ion source the extracted H- beam current is limited by the achievable plasma density, which depends primarily on the electron emission current from the filaments. The emission current can be increased by raising the filament temperature but, unfortunately, this leads not only to shorter filament lifetime but also to increased metal evaporation from the filament, which deposits on the H- converter surface and degrades its performance. Therefore, we have started an ion source development project focused on replacing the thermionic cathodes (filaments) of the converter source with a helicon plasma generator capable of producing high-density hydrogen plasmas with low electron energy. Our studies have so far shown that the plasma density of the surface conversion source can be increased significantly by exciting a helicon wave in the plasma, and we expect to improve the performance of the surface converter H- ion source in terms of beam brightness and time between services. The design of this new source and preliminary results are presented, along with a discussion of physical processes relevant for H- ion beam production with this novel design. Ultimately, we perceive this approach as an interim step towards our long-term goal of combining a helicon plasma generator with an SNS-type main discharge chamber, which will allow us to individually optimize the plasma properties of the plasma cathode (helicon) and H- production (main discharge) in order to further improve the brightness of extracted H- ion beams.
Microseismic source locations with deconvolution migration
NASA Astrophysics Data System (ADS)
Wu, Shaojiang; Wang, Yibo; Zheng, Yikang; Chang, Xu
2018-03-01
Identifying and locating microseismic events are critical problems in hydraulic fracturing monitoring for unconventional resource exploration. In contrast to active seismic data, microseismic data are usually recorded with unknown source excitation time and source location. In this study, we introduce deconvolution migration by combining deconvolution interferometry with interferometric cross-correlation migration (CCM). This method avoids the need for the source excitation time and enhances both spatial resolution and robustness by eliminating the squared source-wavelet term from CCM. The proposed algorithm consists of three steps: (1) generate virtual gathers by deconvolving the master trace with all other traces in the microseismic gather to remove the unknown excitation time; (2) migrate each virtual gather to obtain a single image of the source location; and (3) stack all of these images together to obtain the final estimated image of the source location. We test the proposed method on a complex synthetic data set and a field data set from surface hydraulic fracturing monitoring, and compare the results with those obtained by interferometric CCM. The results demonstrate that the proposed method obtains a 50 per cent higher spatial-resolution image of the source location and a more robust location estimate with smaller errors, especially in the presence of velocity model errors. This method is also beneficial for source mechanism inversion and global seismology applications.
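Step (1) of the algorithm above can be sketched in a few lines: frequency-domain deconvolution of a master trace from the other traces cancels any common (unknown) excitation time, leaving only the inter-receiver moveout. A minimal synthetic check, with all parameters (Ricker wavelet, sampling, delays, regularization) chosen for illustration rather than taken from the paper:

```python
# Minimal sketch of deconvolution interferometry: the unknown excitation
# time t0 is common to all traces and cancels in the deconvolved trace.
import numpy as np

n, dt = 2048, 0.001
t = np.arange(n) * dt

def ricker(t0, f0=25.0):
    """Zero-phase Ricker wavelet centered at time t0."""
    a = (np.pi * f0 * (t - t0)) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

def deconvolve(trace, master, eps=1e-3):
    """Water-level regularized frequency-domain deconvolution."""
    T, M = np.fft.rfft(trace), np.fft.rfft(master)
    D = T * np.conj(M) / (np.abs(M) ** 2 + eps * np.max(np.abs(M)) ** 2)
    return np.fft.irfft(D, n)

# Two receivers with a fixed 50-sample moveout; try two excitation times.
lags = []
for t0 in (0.10, 0.30):
    master = ricker(t0 + 0.20)        # arrival at the master receiver
    other = ricker(t0 + 0.25)         # arrives 0.05 s (50 samples) later
    lags.append(int(np.argmax(deconvolve(other, master))))
# lags is identical for both t0 values: the excitation time has been removed
```

The peak of each deconvolved trace sits at the inter-receiver delay regardless of t0, which is exactly why the virtual gathers can be migrated without knowing when the event occurred.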
Dissociation between memory accuracy and memory confidence following bilateral parietal lesions.
Simons, Jon S; Peers, Polly V; Mazuz, Yonatan S; Berryhill, Marian E; Olson, Ingrid R
2010-02-01
Numerous functional neuroimaging studies have observed lateral parietal lobe activation during memory tasks: a surprise to clinicians who have traditionally associated the parietal lobe with spatial attention rather than memory. Recent neuropsychological studies examining episodic recollection after parietal lobe lesions have reported differing results. Performance was preserved in unilateral lesion patients on source memory tasks involving recollecting the context in which stimuli were encountered, and impaired in patients with bilateral parietal lesions on tasks assessing free recall of autobiographical memories. Here, we investigated a number of possible accounts for these differing results. In 3 experiments, patients with bilateral parietal lesions performed as well as controls at source recollection, confirming the previous unilateral lesion results and arguing against an explanation for those results in terms of contralesional compensation. Reducing the behavioral relevance of mnemonic information critical to the source recollection task did not affect performance of the bilateral lesion patients, indicating that the previously observed reduced autobiographical free recall might not be due to impaired bottom-up attention. The bilateral patients did, however, exhibit reduced confidence in their source recollection abilities across the 3 experiments, consistent with a suggestion that parietal lobe lesions might lead to impaired subjective experience of rich episodic recollection.
NASA Astrophysics Data System (ADS)
Crittenden, P. E.; Balachandar, S.
2018-07-01
The radial one-dimensional Euler equations are often rewritten in what is known as the geometric source form. The differential operator is identical to the Cartesian case, but source terms result. Since the theory and numerical methods for the Cartesian case are well-developed, they are often applied without modification to cylindrical and spherical geometries. However, numerical conservation is lost. In this article, AUSM⁺-up is applied to a numerically conservative (discrete) form of the Euler equations labeled the geometric form, a nearly conservative variation termed the geometric flux form, and the geometric source form. The resulting numerical methods are compared analytically and numerically through three types of test problems: subsonic, smooth, steady-state solutions, Sedov's similarity solution for point or line-source explosions, and shock tube problems. Numerical conservation is analyzed for all three forms in both spherical and cylindrical coordinates. All three forms result in constant enthalpy for steady flows. The spatial truncation errors have essentially the same order of convergence, but the rate constants are superior for the geometric and geometric flux forms for the steady-state solutions. Only the geometric form produces the correct shock location for Sedov's solution, and a direct connection between the errors in the shock locations and energy conservation is found. The shock tube problems are evaluated with respect to feature location using an approximation with a very fine discretization as the benchmark. Extensions to second order appropriate for cylindrical and spherical coordinates are also presented and analyzed numerically. Conclusions are drawn, and recommendations are made. A derivation of the steady-state solution is given in the Appendix.
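For context, the geometric source form named in this abstract is the standard textbook rewriting of the radial one-dimensional Euler equations, in which the Cartesian differential operator is kept and the geometry appears as a right-hand-side source term. A sketch (not reproduced from the paper), with α = 1 for cylindrical and α = 2 for spherical symmetry:

```latex
\frac{\partial}{\partial t}
\begin{pmatrix} \rho \\ \rho u \\ E \end{pmatrix}
+ \frac{\partial}{\partial r}
\begin{pmatrix} \rho u \\ \rho u^{2} + p \\ u\,(E + p) \end{pmatrix}
= -\frac{\alpha}{r}
\begin{pmatrix} \rho u \\ \rho u^{2} \\ u\,(E + p) \end{pmatrix}
```

A flux-conservative discretization of the left-hand side does not by itself conserve mass or energy in the r^α-weighted measure appropriate to the geometry, which is the loss of numerical conservation the abstract refers to.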
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yidong Xia; Mitch Plummer; Robert Podgorney
2016-02-01
Performance of heat production process over a 30-year period is assessed in a conceptual EGS model with a geothermal gradient of 65K per km depth in the reservoir. Water is circulated through a pair of parallel wells connected by a set of single large wing fractures. The results indicate that the desirable output electric power rate and lifespan could be obtained under suitable material properties and system parameters. A sensitivity analysis on some design constraints and operation parameters indicates that 1) the fracture horizontal spacing has profound effect on the long-term performance of heat production, 2) the downward deviation anglemore » for the parallel doublet wells may help overcome the difficulty of vertical drilling to reach a favorable production temperature, and 3) the thermal energy production rate and lifespan has close dependence on water mass flow rate. The results also indicate that the heat production can be improved when the horizontal fracture spacing, well deviation angle, and production flow rate are under reasonable conditions. To conduct the reservoir modeling and simulations, an open-source, finite element based, fully implicit, fully coupled hydrothermal code, namely FALCON, has been developed and used in this work. Compared with most other existing codes that are either closed-source or commercially available in this area, this new open-source code has demonstrated a code development strategy that aims to provide an unparalleled easiness for user-customization and multi-physics coupling. Test results have shown that the FALCON code is able to complete the long-term tests efficiently and accurately, thanks to the state-of-the-art nonlinear and linear solver algorithms implemented in the code.« less
NASA Astrophysics Data System (ADS)
Chai, Xintao; Tang, Genyang; Peng, Ronghua; Liu, Shaoyong
2018-03-01
Full-waveform inversion (FWI) reconstructs subsurface properties from acquired seismic data via minimization of the misfit between observed and simulated data. However, FWI incurs considerable computational cost because the wave equation must be solved numerically for each source at each iteration. To reduce this burden, constructing supershots by combining several sources (source encoding) reduces the number of simulations per iteration, but it gives rise to crosstalk artifacts caused by interference between the individual sources of a supershot. A modified Gauss-Newton FWI (MGNFWI) approach showed that, as long as the difference between the initial and true models permits a sparse representation, ℓ₁-norm-constrained model updates suppress subsampling-related artifacts. However, the spectral projected gradient ℓ₁ (SPGℓ₁) algorithm employed by MGNFWI is rather complicated, which makes its implementation difficult. To facilitate realistic applications, we adapt a linearized Bregman (LB) method to sparsity-promoting FWI (SPFWI), owing to the efficiency and simplicity of LB in the framework of ℓ₁-norm-constrained optimization and compressive sensing. Numerical experiments performed on the BP Salt model, the Marmousi model and the BG Compass model verify the following points. The FWI result obtained with LB solving the ℓ₁-norm sparsity-promoting problem for the model update outperforms that generated by solving the ℓ₂-norm problem in terms of crosstalk elimination and high-fidelity results. The simpler LB method performs comparably, and in some respects superiorly, to the more complicated SPGℓ₁ method in terms of computational efficiency and model quality, making LB a viable alternative for realistic implementations of SPFWI.
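The simplicity the abstract claims for linearized Bregman is easy to see: each iteration is one gradient step on the data residual plus one soft-thresholding. A toy sparse-recovery sketch (the operator, sizes, and parameters are illustrative; this is not the paper's FWI implementation):

```python
# Sketch of a linearized Bregman (LB) iteration for l1-norm sparsity
# promotion, applied to a toy compressive-sensing instance A x = b.
import numpy as np

def soft_threshold(v, mu):
    """Componentwise shrinkage: sign(v) * max(|v| - mu, 0)."""
    return np.sign(v) * np.maximum(np.abs(v) - mu, 0.0)

def linearized_bregman(A, b, mu=1.0, iters=3000):
    tau = 1.0 / np.linalg.norm(A, 2) ** 2      # step size from the spectral norm
    v = np.zeros(A.shape[1])                   # accumulated (unshrunk) variable
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        v += tau * (A.T @ (b - A @ x))         # gradient step on the residual
        x = soft_threshold(v, mu)              # soft-thresholding promotes sparsity
    return x

rng = np.random.default_rng(0)
m, n = 60, 128
A = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing operator
x_true = np.zeros(n)
x_true[[5, 40, 90]] = [2.0, -1.5, 3.0]         # sparse ground truth
b = A @ x_true
x_rec = linearized_bregman(A, b)               # sparse, data-fitting estimate
```

The two-line loop body is the whole algorithm, which is the contrast the abstract draws against the more involved SPGℓ₁ machinery.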
Isotopic composition and neutronics of the Okelobondo natural reactor
NASA Astrophysics Data System (ADS)
Palenik, Christopher Samuel
The Oklo-Okelobondo and Bangombe uranium deposits in Gabon, Africa, host Earth's only known natural nuclear fission reactors. These 2-billion-year-old reactors represent a unique opportunity to study used nuclear fuel over geologic periods of time. The reactors in these deposits have been studied as a means to constrain the source term of fission product concentrations produced during reactor operation. The source term depends on the neutronic parameters, which include reactor operation duration, neutron flux and the neutron energy spectrum. Reactor operation has been modeled using a point-source computer simulation (the Oak Ridge Isotope Generation and Depletion, ORIGEN, code) for a light water reactor. Model results have been constrained using secondary ion mass spectrometry (SIMS) isotopic measurements of the fission products Nd and Te, as well as U, in uraninite from samples collected in the Okelobondo reactor zone. Based upon the constraints on the operating conditions, the pre-reactor concentrations of Nd (150 ppm +/- 75 ppm) and Te (<1 ppm) in uraninite were estimated. From the burnup measured in Okelobondo samples (0.7 to 13.8 GWd/MTU), the final fission product inventories of Nd (90 to 1200 ppm) and Te (10 to 110 ppm) were calculated. By the same means, the ranges of all other fission products and actinides produced during reactor operation were calculated as a function of burnup. These results provide a source term against which the present elemental and decay abundances at the fission reactor can be compared. Furthermore, they provide new insights into the extent to which a "fossil" nuclear reactor can be characterized on the basis of its isotopic signatures. In addition, results from the study of two other natural systems related to radionuclide and fission product transport are included.
A detailed characterization of the uranyl mineralogy at the Bangombe uranium deposit in Gabon, Africa, was completed to improve geochemical models of the solubility-limiting phase. A study of the competing effects of radiation damage and annealing in a U-bearing zircon crystal shows that low-temperature annealing in actinide-bearing phases contributes significantly to the annealing of radiation damage.
A Study of Regional Waveform Calibration in the Eastern Mediterranean Region.
NASA Astrophysics Data System (ADS)
di Luccio, F.; Pino, A.; Thio, H.
2002-12-01
We modeled Pnl phases from several moderate-magnitude events in the eastern Mediterranean to test methods and to develop path calibrations for source determination. The study region, spanning from the eastern part of the Hellenic arc to the eastern Anatolian fault, is mostly affected by moderate earthquakes, which can nonetheless produce significant damage. The selected area comprises several tectonic environments, which increases the difficulty of waveform modeling. The results of this study are useful for the analysis of regional seismicity and for seismic hazard assessment as well, in particular because very few broadband seismic stations are available in the selected area. The obtained velocity model gives a 30 km crustal thickness and low upper-mantle velocities. The inversion procedure applied to determine the source mechanism was successful, also in terms of depth discrimination, for the entire range of selected paths. We conclude that, with a true calibration of the seismic structure and high-quality broadband data, it is possible to determine the seismic source mechanism even with a single station.
ERIC Educational Resources Information Center
Hashim, Ismail Hussein
2003-01-01
Tests the universal nature of stress and coping behavior among overseas college students in China and provides basic information toward understanding the problems that result from stress and coping, which can best be defined in cultural terms. Results indicated that academic and interpersonal sources of stress were the most common stressors…
NASA Astrophysics Data System (ADS)
Jiang, Daijun; Li, Zhiyuan; Liu, Yikan; Yamamoto, Masahiro
2017-05-01
In this paper, we first establish a weak unique continuation property for time-fractional diffusion-advection equations. The proof is mainly based on the Laplace transform and the unique continuation properties for elliptic and parabolic equations. The result is weaker than its parabolic counterpart in the sense that we additionally impose the homogeneous boundary condition. As a direct application, we prove the uniqueness for an inverse problem on determining the spatial component in the source term by interior measurements. Numerically, we reformulate our inverse source problem as an optimization problem, and propose an iterative thresholding algorithm. Finally, several numerical experiments are presented to show the accuracy and efficiency of the algorithm.
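The iterative thresholding algorithm mentioned above is, in generic form, a proximal-gradient (ISTA-type) loop for an ℓ1-penalized least-squares functional. The sketch below is an illustrative version with an invented linear operator, not the authors' implementation for the fractional diffusion-advection problem; lam, the step size and the toy data are assumptions.

```python
import numpy as np

def ista(A, b, lam=0.05, n_iter=500):
    """Iterative soft-thresholding for min 0.5*||A f - b||^2 + lam*||f||_1."""
    step = 1.0 / np.linalg.norm(A.T @ A, 2)   # 1/L, L = Lipschitz constant of the gradient
    f = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = f - step * (A.T @ (A @ f - b))                        # gradient step
        f = np.sign(g) * np.maximum(np.abs(g) - lam * step, 0.0)  # soft-threshold (prox)
    return f

# Toy problem: recover a 3-sparse source from 40 noiseless measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 80)) / np.sqrt(40)
f_true = np.zeros(80)
f_true[[3, 17, 42]] = [1.0, -1.0, 0.5]
b = A @ f_true
f_rec = ista(A, b)
```

Each pass alternates a gradient step on the data misfit with a shrinkage step that promotes sparsity, which is the structure of the optimization reformulation described in the abstract.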
Orejas, Jaime; Pfeuffer, Kevin P; Ray, Steven J; Pisonero, Jorge; Sanz-Medel, Alfredo; Hieftje, Gary M
2014-11-01
Ambient desorption/ionization (ADI) sources coupled to mass spectrometry (MS) offer outstanding analytical features: direct analysis of real samples without sample pretreatment, combined with the selectivity and sensitivity of MS. Since ADI sources typically work in the open atmosphere, ambient conditions can affect the desorption and ionization processes. Here, the effects of internal source parameters and ambient humidity on the ionization processes of the flowing atmospheric pressure afterglow (FAPA) source are investigated. The interaction of reagent ions with a range of analytes is studied in terms of sensitivity and based upon the processes that occur in the ionization reactions. The results show that internal parameters which lead to higher gas temperatures afforded higher sensitivities, although fragmentation is also affected. In the case of humidity, only extremely dry conditions led to higher sensitivities, while fragmentation remained unaffected.
The future of meat: a qualitative analysis of cultured meat media coverage.
Goodwin, J N; Shoulders, C W
2013-11-01
This study sought to explore the informational themes and information sources cited by the media in covering stories of cultured meat in both the United States and the European Union. The results indicated that cultured meat news articles in both the United States and the European Union commonly discuss cultured meat in terms of benefits, history, process, time, livestock production problems, and skepticism. Additionally, the information sources commonly cited in the articles included cultured meat researchers, sources from academia, People for the Ethical Treatment of Animals (PETA), New Harvest, Winston Churchill, restaurant owners/chefs, and sources from the opposing region (e.g. US articles cited some EU sources and vice versa). The implications of this study will allow meat scientists to understand how the media are influencing consumers' perceptions of the topic, and to strategize how to shape future communication about cultured meat. Published by Elsevier Ltd.
Gault, Lora V; Shultz, Mary; Davies, Kathy J
2002-04-01
This study compared the mapping of natural language patron terms to the Medical Subject Headings (MeSH) across six MeSH interfaces for the MEDLINE database. Test data were obtained from search requests submitted by patrons to the Library of the Health Sciences, University of Illinois at Chicago, over a nine-month period. Search request statements were parsed into separate terms or phrases. Using print sources from the National Library of Medicine, each parsed patron term was assigned corresponding MeSH terms. Each patron term was entered into each of the selected interfaces to determine how effectively they mapped to MeSH. Data were collected for mapping success, accessibility of the MeSH term within the mapped list, and total number of MeSH choices within each list. The selected MEDLINE interfaces do not map the same patron term in the same way, nor do they consistently lead to what is considered the appropriate MeSH term. If searchers utilize the MEDLINE database to its fullest potential by mapping to MeSH, the results of the mapping will vary between interfaces. This variance may ultimately impact the search results. These differences should be considered when choosing a MEDLINE interface and when instructing end users.
Repeat immigration: A previously unobserved source of heterogeneity?
Aradhya, Siddartha; Scott, Kirk; Smith, Christopher D
2017-07-01
Register data allow for nuanced analyses of heterogeneities between sub-groups that are not observable in other data sources. One setting in which register data are particularly useful is in identifying unique migration histories of immigrant populations, a group of interest across disciplines. Years since migration is a commonly used measure of integration in studies seeking to understand the outcomes of immigrants. This study constructs detailed migration histories to test whether misclassified migrations may mask important heterogeneities. In doing so, we identify a previously understudied group of migrants called repeat immigrants, and show that they differ systematically from permanent immigrants. In addition, we quantify the degree to which migration information is misreported in the registers. The analysis is carried out in two steps. First, we estimate income trajectories for repeat immigrants and permanent immigrants to understand the degree to which they differ. Second, we test data validity by cross-referencing migration information with changes in income to determine whether there are inconsistencies indicating misreporting. From the first part of the analysis, the results indicate that repeat immigrants systematically differ from permanent immigrants in terms of income trajectories. Furthermore, income trajectories differ based on the way in which years since migration is calculated. The second part of the analysis suggests that misreported migration events, while present, are negligible. Repeat immigrants differ in terms of income trajectories, and may differ in terms of other outcomes as well. Furthermore, this study underlines that Swedish registers provide a reliable data source for analyzing groups that are unidentifiable in other data sources.
Turbulent transport in premixed flames
NASA Technical Reports Server (NTRS)
Rutland, C. J.; Cant, R. S.
1994-01-01
Simulations of planar, premixed turbulent flames with heat release were used to study turbulent transport. Reynolds stress and Reynolds flux budgets were obtained and used to guide the investigation of important physical effects. Essentially all pressure terms in the transport equations were found to be significant. In the Reynolds flux equations, these terms are the major source of counter-gradient transport. Viscous and molecular terms were also found to be significant, with both dilatational and solenoidal terms contributing to the Reynolds stress dissipation. The BML theory of premixed turbulent combustion was critically examined in detail. The BML bimodal pdf was found to agree well with the DNS data. All BML decompositions, through the third moments, show very good agreement with the DNS results. Several BML models for conditional terms were checked using the DNS data and were found to require more extensive development.
NASA Astrophysics Data System (ADS)
Sharifian, Mohammad Kazem; Kesserwani, Georges; Hassanzadeh, Yousef
2018-05-01
This work extends a robust second-order Runge-Kutta Discontinuous Galerkin (RKDG2) method to solve fully nonlinear and weakly dispersive flows, with the aim of simultaneously addressing accuracy, conservativeness, cost-efficiency and practical needs. The mathematical model governing such flows is based on a variant form of the Green-Naghdi (GN) equations, decomposed as a hyperbolic shallow water system with an elliptic source term. Practical features of relevance (i.e. conservative modeling over irregular terrain with wetting and drying, and local slope limiting) have been carried over from an RKDG2 solver for the Nonlinear Shallow Water (NSW) equations, alongside new considerations to integrate elliptic source terms (via a fourth-order local discretization of the topography) and to enable local capturing of breaking waves (via a detector for switching off the dispersive terms). Numerical results are presented, demonstrating the overall capability of the proposed approach in achieving realistic prediction of nearshore wave processes involving both nonlinearity and dispersion effects within a single model.
Antimatter Requirements and Energy Costs for Near-Term Propulsion Applications
NASA Technical Reports Server (NTRS)
Schmidt, G. R.; Gerrish, H. P.; Martin, J. J.; Smith, G. A.; Meyer, K. J.
1999-01-01
The superior energy density of antimatter annihilation has often been pointed to as the ultimate source of energy for propulsion. However, the limited capacity and very low efficiency of present-day antiproton production methods suggest that antimatter may be too costly to consider for near-term propulsion applications. We address this issue by assessing the antimatter requirements for six different types of propulsion concepts, including two in which antiprotons are used to drive energy release from combined fission/fusion. These requirements are compared against the capacity of both the current antimatter production infrastructure and the improved capabilities that could exist within the early part of next century. Results show that although it may be impractical to consider systems that rely on antimatter as the sole source of propulsive energy, the requirements for propulsion based on antimatter-assisted fission/fusion do fall within projected near-term production capabilities. In fact, a new facility designed solely for antiproton production but based on existing technology could feasibly support interstellar precursor missions and omniplanetary spaceflight with antimatter costs ranging up to $6.4 million per mission.
An investigation on nuclear energy policy in Turkey and public perception
NASA Astrophysics Data System (ADS)
Coskun, Mehmet Burhanettin; Tanriover, Banu
2016-11-01
Turkey, which meets nearly 70 per cent of its energy demand through imports, faces problems of energy security and a current account deficit as a result of its dependence on foreign sources for energy inputs. It is also known that Turkey is experiencing environmental problems due to increases in CO2 emissions. Considering these problems in the Turkish economy, where energy inputs are commonly used, it is necessary to use energy sources efficiently and to develop alternative energy sources. Because renewable sources depend on meteorological conditions (the absence of sufficient sun, wind, and water resources), energy generation from these sources cannot be provided efficiently and permanently. At this point, nuclear energy, as an alternative energy source, maintains its importance as a sustainable source that provides energy 24 hours a day, 7 days a week. The main purpose of this study is to evaluate nuclear energy within the context of the negative public perceptions that emerged after the Chernobyl (1986) and Fukushima (2011) disasters and to investigate it in an economic framework.
Prioritized packet video transmission over time-varying wireless channel using proactive FEC
NASA Astrophysics Data System (ADS)
Kumwilaisak, Wuttipong; Kim, JongWon; Kuo, C.-C. Jay
2000-12-01
The quality of video transmitted over time-varying wireless channels relies heavily on a coordinated effort to cope with both channel and source variations dynamically. Given the priority of each source packet and the estimated channel condition, an adaptive protection scheme based on joint source-channel criteria is investigated via proactive forward error correction (FEC). With proactive FEC in Reed-Solomon (RS)/rate-compatible punctured convolutional (RCPC) codes, we study a practical algorithm to match the relative priority of source packets with instantaneous channel conditions. The channel condition is estimated so as to capture the long-term fading effect in terms of the SNR averaged over a preset window. Proactive protection is performed for each packet based on the joint source-channel criteria, with special attention to the accuracy, time-scale match, and feedback delay of the channel status estimation. The overall gain of the proposed protection mechanism is demonstrated in terms of end-to-end wireless video performance.
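The two ingredients described, a windowed SNR average capturing long-term fading and a priority-matched protection level, can be sketched as follows. The window length, the SNR threshold and the priority-to-parity mapping are illustrative assumptions, not the authors' design, and the RS code is only parameterized, not implemented.

```python
from collections import deque

class ChannelEstimator:
    """Windowed average of per-packet SNR reports (dB) over a preset window."""
    def __init__(self, window=32):
        self.snr = deque(maxlen=window)   # old reports fall off automatically

    def update(self, snr_db):
        self.snr.append(snr_db)

    def average(self):
        return sum(self.snr) / len(self.snr)

def rs_parity(priority, avg_snr_db, n=255):
    """Pick the number of RS(n, k) parity bytes: more protection for
    higher-priority packets and for poor channels (illustrative mapping)."""
    base = {0: 8, 1: 16, 2: 32}[priority]   # 0 = lowest priority
    if avg_snr_db < 10.0:
        base *= 2                           # channel is bad: double the parity
    return min(base, n - 1)

# Usage: a high-priority packet sent while the channel degrades.
est = ChannelEstimator()
for s in [12.0, 9.5, 8.0, 7.5]:
    est.update(s)
parity = rs_parity(priority=2, avg_snr_db=est.average())
```

The point of the sketch is the joint criterion: the protection level is a function of both the packet's source priority and the smoothed channel estimate, rather than of either alone.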
Comparison of different wavelength pump sources for Tm subnanosecond amplifier
NASA Astrophysics Data System (ADS)
Cserteg, Andras; Guillemet, Sébastien; Hernandez, Yves; Giannone, Domenico
2012-06-01
We report here a comparison of different pumping wavelengths for short-pulse Thulium fibre amplifiers. We compare the results in terms of efficiency and required fibre length. As we operate the laser in the sub-nanosecond regime, the fibre length is a critical parameter regarding nonlinear effects. With 793 nm clad-pumping, a 4 m long active fibre was necessary, leading to strong spectral deformation through Self-Phase Modulation (SPM). The core-pumping scheme was then investigated in more depth, with several wavelengths tested. Good results were obtained with Erbium and Raman-shifted pumping sources, with very short fibre lengths, aiming to reach a few microjoules per pulse without (or with limited) SPM.
Finite-element solutions for geothermal systems
NASA Technical Reports Server (NTRS)
Chen, J. C.; Conel, J. E.
1977-01-01
Vector and scalar potentials are used to formulate the governing equations for a single-component, single-phase geothermal system. By assuming an initial temperature field, the fluid velocity can be determined, which, in turn, is used to calculate the convective heat transfer. The energy equation is then solved by treating the convected heat as a distributed source. Using the resulting temperature to compute new source terms, the final results are obtained by iterating this procedure. Finite-element methods are proposed for modeling realistic geothermal systems; the advantages of such methods are discussed. The developed methodology is then applied to a sample problem. Favorable agreement is obtained in comparisons with a previous study.
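The iterate-to-convergence procedure described (solve the energy equation with the convected heat as a distributed source computed from the previous temperature field, then repeat) can be illustrated with a minimal 1-D finite-difference analogue. The grid, conductivity and linear source law q = 1 + 0.1 T are invented for illustration; this is not the paper's finite-element formulation.

```python
import numpy as np

# Steady 1-D analogue: solve -k T'' = q(T) with T(0) = T(1) = 0 by
# fixed-point iteration, evaluating the distributed source q at the
# previous temperature iterate.
n, k = 51, 1.0
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]

# Second-difference matrix for the n-2 interior nodes (Dirichlet BCs).
L = (np.diag(-2.0 * np.ones(n - 2)) +
     np.diag(np.ones(n - 3), 1) +
     np.diag(np.ones(n - 3), -1)) / h**2

T = np.zeros(n)
for it in range(100):
    q = 1.0 + 0.1 * T                       # source from previous iterate
    T_new = np.zeros(n)
    T_new[1:-1] = np.linalg.solve(-k * L, q[1:-1])
    diff = np.max(np.abs(T_new - T))
    T = T_new
    if diff < 1e-10:                        # fixed point reached
        break
```

Because the source depends only weakly on T here, the iteration contracts rapidly; the same solve-update-repeat structure carries over when the "source" is the convective term of the full energy equation.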
Modeling of Turbulence Generated Noise in Jets
NASA Technical Reports Server (NTRS)
Khavaran, Abbas; Bridges, James
2004-01-01
A numerically calculated Green's function is used to predict the jet noise spectrum and its far-field directivity. A linearized form of Lilley's equation governs the non-causal Green's function of interest, with the nonlinear terms on the right-hand side identified as the source. In this paper, contributions from the so-called self- and shear-noise source terms are discussed. A Reynolds-averaged Navier-Stokes solution yields the required mean flow as well as the time and length scales of a noise-generating turbulent eddy. A non-compact source, with exponential temporal and spatial functions, is used to describe the turbulence velocity correlation tensors. It is shown that while an exact non-causal Green's function accurately predicts the observed shift in the location of the spectrum peak with angle, as well as the angularity of sound at moderate Mach numbers, at high subsonic and supersonic acoustic Mach numbers the polar directivity of radiated sound is not entirely captured by this Green's function. Results presented for Mach 0.5 and 0.9 isothermal jets, as well as a Mach 0.8 hot jet, conclude that near the peak radiation angle a different source/Green's function convolution integral may be required in order to capture the peak observed directivity of jet noise.
Plutonium isotopes and 241Am in the atmosphere of Lithuania: A comparison of different source terms
NASA Astrophysics Data System (ADS)
Lujanienė, G.; Valiulis, D.; Byčenkienė, S.; Šakalys, J.; Povinec, P. P.
2012-12-01
137Cs, 241Am and Pu isotopes collected in aerosol samples during 1994-2011 were analyzed, with special emphasis on better understanding Pu and Am behavior in the atmosphere. The results of long-term measurements of 240Pu/239Pu atom ratios showed a bimodal frequency distribution with median values of 0.195 and 0.253, indicating two main sources contributing to the Pu activities at the Vilnius sampling station. The low Pu atom ratio of 0.141 could be attributed to weapon-grade plutonium derived from the nuclear weapon test sites. The frequency of air masses arriving from the North-West and North-East correlated with the Pu atom ratio, indicating input from sources located in these regions (the Novaya Zemlya test site, Siberian nuclear plants), while no correlation with the Chernobyl region was observed. Measurements carried out during the Fukushima accident showed a negligible impact of this source, with Pu activities four orders of magnitude lower than those from the Chernobyl accident. The activity concentrations of actinides measured in the integrated sample collected in March-April 2011 showed a small contribution of Pu with unusual activity and atom ratios, indicating the presence of spent fuel of a different origin than that of the Chernobyl accident.
NASA Astrophysics Data System (ADS)
Diapouli, E.; Manousakas, M.; Vratolis, S.; Vasilatou, V.; Maggos, Th; Saraga, D.; Grigoratos, Th; Argyropoulos, G.; Voutsa, D.; Samara, C.; Eleftheriadis, K.
2017-09-01
Metropolitan urban areas in Greece are known to suffer from poor air quality, due to a variety of emission sources, topography and climatic conditions favouring the accumulation of pollution. While a number of control measures have been implemented since the 1990s, resulting in reductions of atmospheric pollution and changes in emission source contributions, the financial crisis which started in 2009 has significantly altered this picture. The present study is the first effort to assess the contribution of emission sources to PM10 and PM2.5 concentration levels and their long-term variability (over 5-10 years) in the two largest metropolitan urban areas in Greece (Athens and Thessaloniki). Intensive measurement campaigns were conducted during 2011-2012 at suburban, urban background and urban traffic sites in these two cities. In addition, available datasets from previous measurements in Athens and Thessaloniki were used in order to assess the long-term variability of concentrations and sources. Chemical composition analysis of the 2011-2012 samples showed that carbonaceous matter was the most abundant component for both PM size fractions. A significant increase of carbonaceous particle concentrations and of the OC/EC ratio during the cold period, especially at the residential urban background sites, pointed towards domestic heating, and more particularly wood (biomass) burning, as a significant source. PMF analysis further supported this finding. Biomass burning was the largest contributing source at the two urban background sites (with mean contributions for the two size fractions in the range of 24-46%). Secondary aerosol formation (sulphate, nitrate and organics) was also a major contributing source for both size fractions at the suburban and urban background sites. At the urban traffic site, vehicular traffic (exhaust and non-exhaust emissions) was the source with the highest contributions, accounting for 44% of PM10 and 37% of PM2.5.
The long-term variability of emission sources in the two cities (over 5-10 years), assessed through a harmonized application of the PMF technique on recent and past year data, clearly demonstrates the effective reduction in emissions during the last decade due to control measures and technological development; however, it also reflects the effects of the financial crisis in Greece during these years, which has led to decreased economic activities and the adoption of more polluting practices by the local population in an effort to reduce living costs.
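The PMF analysis used in the study above factors the samples × species concentration matrix X into non-negative source contributions G and source profiles F, with X ≈ GF. As a rough illustration of the idea only, here is a plain multiplicative-update NMF on synthetic data; EPA-style PMF additionally weights each entry by its measurement uncertainty, and the two "source" profiles below are invented.

```python
import numpy as np

def nmf(X, n_factors, n_iter=500, eps=1e-9):
    """Multiplicative-update NMF: X (samples x species) ~= G @ F, G, F >= 0."""
    rng = np.random.default_rng(1)
    n, m = X.shape
    G = rng.random((n, n_factors))      # source contributions per sample
    F = rng.random((n_factors, m))      # chemical profile of each source
    for _ in range(n_iter):
        F *= (G.T @ X) / (G.T @ G @ F + eps)   # updates preserve non-negativity
        G *= (X @ F.T) / (G @ F @ F.T + eps)
    return G, F

# Synthetic data: two "sources" mixed across 40 samples of 6 species.
rng = np.random.default_rng(0)
profiles = np.array([[5.0, 1.0, 0.2, 0.0, 0.3, 2.0],    # e.g. traffic-like
                     [0.1, 0.0, 3.0, 4.0, 1.0, 0.2]])   # e.g. biomass-like
contrib = rng.random((40, 2))
X = contrib @ profiles
G, F = nmf(X, 2)
```

With real data, the recovered rows of F are inspected for marker species (e.g. levoglucosan or K for biomass burning) to label each factor, which is how source contributions such as the 24-46% biomass-burning share above are attributed.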
Manríquez, Juan J
2008-04-01
Systematic reviews should include as many relevant articles as possible. However, many systematic reviews use only databases with high English-language content as sources of trials. Literatura Latino Americana e do Caribe em Ciências da Saúde (LILACS) is an underused source of trials, and there is no validated strategy for searching clinical trials in this database. The objective of this study was to develop a sensitive search strategy for clinical trials in LILACS. An analytical survey was performed. Several single- and multiple-term search strategies were tested for their ability to retrieve clinical trials in LILACS. The sensitivity, specificity, and accuracy of each single- and multiple-term strategy were calculated using the results of a hand-search of 44 Chilean journals as the gold standard. After combining the most sensitive, specific, and accurate single- and multiple-term search strategies, a strategy with a sensitivity of 97.75% (95% confidence interval [CI] = 95.98-99.53) and a specificity of 61.85% (95% CI = 61.19-62.51) was obtained. LILACS is a source of trials that could improve systematic reviews. A new highly sensitive search strategy for clinical trials in LILACS has been developed. It is hoped this search strategy will improve and increase the utilization of LILACS in future systematic reviews.
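The sensitivity and specificity quoted above are computed against the hand-search gold standard in the standard way. A small helper makes the arithmetic explicit; the counts below are hypothetical, chosen only to illustrate the calculation, not taken from the study's data.

```python
def strategy_metrics(tp, fp, fn, tn):
    """Retrieval metrics for a search strategy vs. a hand-search gold standard.

    tp: true trials retrieved        fn: true trials missed
    fp: non-trials retrieved         tn: non-trials correctly excluded
    """
    sensitivity = tp / (tp + fn)              # share of real trials found
    specificity = tn / (tn + fp)              # share of non-trials excluded
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return sensitivity, specificity, accuracy

# Hypothetical counts: 435 of 445 hand-searched trials retrieved,
# 1200 of 1940 non-trial records correctly excluded.
sens, spec, acc = strategy_metrics(tp=435, fp=740, fn=10, tn=1200)
```

A highly sensitive strategy deliberately tolerates low specificity, as here: missing trials harms a systematic review more than screening extra false positives does.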
NASA Astrophysics Data System (ADS)
Vitillaro, Enzo
2017-03-01
The aim of this paper is to study the problem

  u_tt − Δu + P(x, u_t) = f(x, u)   in (0, ∞) × Ω,
  u = 0   on (0, ∞) × Γ_0,
  u_tt + ∂_ν u − Δ_Γ u + Q(x, u_t) = g(x, u)   on (0, ∞) × Γ_1,
  u(0, x) = u_0(x),  u_t(0, x) = u_1(x)   in Ω̄,

where Ω is an open bounded subset of ℝ^N with C^1 boundary (N ≥ 2), Γ = ∂Ω, (Γ_0, Γ_1) is a measurable partition of Γ, Δ_Γ denotes the Laplace-Beltrami operator on Γ, ν is the outward normal to Ω, the terms P and Q represent nonlinear damping, and f and g are nonlinear subcritical perturbations. In the paper a local Hadamard well-posedness result is given for initial data in the natural energy space associated with the problem. Moreover, when Ω is C^2 and Γ̄_0 ∩ Γ̄_1 = ∅, the regularity of solutions is studied. Next, a blow-up theorem is given when P and Q are linear and f and g are superlinear sources. Finally, a dynamical system is generated when the source parts of f and g are at most linear at infinity, or are dominated by the damping terms.
Influence of heat conducting substrates on explosive crystallization in thin layers
NASA Astrophysics Data System (ADS)
Schneider, Wilhelm
2017-09-01
Crystallization in a thin, initially amorphous layer is considered. The layer is in thermal contact with a substrate of very large dimensions. The energy equation of the layer contains source and sink terms. The source term is due to liberation of latent heat in the crystallization process, while the sink term is due to conduction of heat into the substrate. To determine the latter, the heat diffusion equation for the substrate is solved by applying Duhamel's integral. Thus, the energy equation of the layer becomes a heat diffusion equation with a time integral as an additional term. The latter term indicates that the heat loss due to the substrate depends on the history of the process. To complete the set of equations, the crystallization process is described by a rate equation for the degree of crystallization. The governing equations are then transformed to a moving co-ordinate system in order to analyze crystallization waves that propagate with invariant properties. Dual solutions are found by an asymptotic expansion for large activation energies of molecular diffusion. By introducing suitable variables, the results can be presented in a universal form that comprises the influence of all non-dimensional parameters that govern the process. Of particular interest for applications is the prediction of a critical heat loss parameter for the existence of crystallization waves with invariant properties.
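The substrate heat loss obtained from Duhamel's integral takes the form of a convolution of the surface-temperature history with the half-space response kernel, q(t) ∝ ∫₀ᵗ (t − τ)^(−1/2) Ṫ(τ) dτ, which is the history-dependent term mentioned above. A minimal midpoint-rule evaluation follows; the unit kernel prefactor and the temperature history are illustrative assumptions, not the paper's non-dimensionalization.

```python
import numpy as np

def substrate_heat_loss(T_dot, t, n=4000):
    """Midpoint-rule Duhamel convolution q(t) = int_0^t Tdot(tau)/sqrt(t - tau) dtau.

    Midpoints keep every evaluation strictly inside (0, t), so the
    integrable 1/sqrt singularity at tau = t causes no division by zero.
    """
    tau = (np.arange(n) + 0.5) * t / n
    dtau = t / n
    return np.sum(T_dot(tau) / np.sqrt(t - tau)) * dtau

# Check against a case with a known answer: T(t) = t, so Tdot = 1 and
# q(t) = int_0^t (t - tau)^(-1/2) dtau = 2*sqrt(t), i.e. q(1) = 2.
q = substrate_heat_loss(lambda tau: np.ones_like(tau), t=1.0)
```

The sum over the full history is what makes the layer's energy equation non-local in time: at each instant the loss term depends on the entire preceding temperature record, not just the current state.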
Zhou, Peiyu; Chen, Changshu; Ye, Jianjun; Shen, Wenjie; Xiong, Xiaofei; Hu, Ping; Fang, Hongda; Huang, Chuguang; Sun, Yongge
2015-04-15
Oil fingerprinting has been a powerful tool widely used for determining the source of spilled oil. In most cases, this tool works well. However, it is usually difficult to identify the source if an oil spill accident occurs during offshore petroleum exploration, due to the highly similar physiochemical characteristics of suspected oils from the same drilling platform. In this report, a case study from the waters of the South China Sea is presented, and multidimensional scaling (MDS) analysis is introduced to demonstrate how oil fingerprints can be combined with mathematical methods to identify the source of spilled oil among highly similar suspected sources. The results suggest that the MDS calculation based on oil fingerprints, subsequently integrated with specific biomarkers in spilled oils, is the most effective method, with great potential for source determination among highly similar suspected oils. Copyright © 2015 Elsevier Ltd. All rights reserved.
Cramer-Rao bound analysis of wideband source localization and DOA estimation
NASA Astrophysics Data System (ADS)
Yip, Lean; Chen, Joe C.; Hudson, Ralph E.; Yao, Kung
2002-12-01
In this paper, we derive the Cramér-Rao Bound (CRB) for wideband source localization and DOA estimation. The resulting CRB formula can be decomposed into two terms: one that depends on the signal characteristics and one that depends on the array geometry. For a uniformly spaced circular array (UCA), a concise analytical form of the CRB can be given by using an algebraic approximation. We further define a DOA beamwidth based on the resulting CRB formula. The DOA beamwidth can be used to design the angular sampling spacing for the maximum-likelihood (ML) algorithm. For a randomly distributed array, we use an elliptical model to determine the largest and smallest effective beamwidths. The effective beamwidth and the CRB analysis of source localization allow us to design an efficient algorithm for the ML estimator. Finally, our simulation results for the approximated maximum likelihood (AML) algorithm are shown to match the CRB analysis well at high SNR.
Fahnline, John B
2016-12-01
An equivalent source method is developed for solving transient acoustic boundary value problems. The method assumes the boundary surface is discretized in terms of triangular or quadrilateral elements and that the solution is represented using the acoustic fields of discrete sources placed at the element centers. Also, the boundary condition is assumed to be specified for the normal component of the surface velocity as a function of time, and the source amplitudes are determined to match the known elemental volume velocity vector at a series of discrete time steps. Equations are given for marching-on-in-time schemes to solve for the source amplitudes at each time step for simple, dipole, and tripole source formulations. Several example problems are solved to illustrate the results and to validate the formulations, including problems with closed boundary surfaces where long-time numerical instabilities typically occur. A simple relationship between the simple and dipole source amplitudes in the tripole source formulation is derived so that the source radiates primarily in the direction of the outward surface normal. The tripole source formulation is shown to eliminate interior acoustic resonances and long-time numerical instabilities.
Investigation of organic light emitting diodes for interferometric purposes
NASA Astrophysics Data System (ADS)
Pakula, Anna; Zimak, Marzena; Sałbut, Leszek
2011-05-01
Recently a new type of light source has been introduced to the market. The organic light emitting diode (OLED) is interesting not only because of its low driving voltage, large light-emitting area and emission efficiency. It also offers the possibility of creating a light source of various shapes and colors, and in the near future quite possibly one whose shape and spectrum can be changed in time in a controlled way. Such opportunities have not been within reach until now. In this paper the authors try to answer the question of whether this new light source, the OLED, is suitable for interferometric purposes. Tests cover short- and long-term spectrum stability and spectrum changes due to emission-area selection. The results for two OLEDs (red and white) are presented, together with the result of an attempt to use them in an interferometric setup.
NASA Technical Reports Server (NTRS)
Green, Del L.; Walker, Eric L.; Everhart, Joel L.
2006-01-01
Minimization of uncertainty is essential to extend the usable range of the 15-psid Electronically Scanned Pressure (ESP) transducer measurements to the low free-stream static pressures found in hypersonic wind tunnels. Statistical characterization of environmental error sources inducing much of this uncertainty requires a well defined and controlled calibration method. Employing such a controlled calibration system, several studies were conducted that provide quantitative information detailing the required controls needed to minimize environmental and human induced error sources. Results of temperature, environmental pressure, over-pressurization, and set point randomization studies for the 15-psid transducers are presented along with a comparison of two regression methods using data acquired with both 0.36-psid and 15-psid transducers. Together these results provide insight into procedural and environmental controls required for long term high-accuracy pressure measurements near 0.01 psia in the hypersonic testing environment using 15-psid ESP transducers.
Kang, Seok; Ha, Jae-Sik; Velasco, Teresa
2017-05-01
This study investigated videos about Attention Deficit Hyperactivity Disorder (ADHD) on YouTube in terms of issues, sources, and episodic-thematic aspects. A total of 685 videos uploaded to YouTube between 2006 and 2014 were content analyzed. Results demonstrated that the top three key issues about ADHD were symptoms, children, and treatment. Doctors, patients, and supporters were the three most frequently interviewed sources. Videos from the public sector, including the government, company representatives, and public organizations, were relatively rare compared to other sources, suggesting the potential for a greater role for government and public-sector contributions to YouTube to provide credible information relevant to public awareness, campaigns, and policy announcements. Meanwhile, many personal videos in the episodic frame advocated social solutions. This result implies that YouTube videos about health information from the private sector have the potential to effect change at the social level.
Exploring the Earth's crust: history and results of controlled-source seismology
Prodehl, Claus; Mooney, Walter D.
2012-01-01
This volume contains a comprehensive, worldwide history of seismological studies of the Earth’s crust using controlled sources from 1850 to 2005. Essentially all major seismic projects on land and the most important oceanic projects are covered. The time period 1850 to 1939 is presented as a general synthesis, and from 1940 onward the history and results are presented in separate chapters for each decade, with the material organized by geographical region. Each chapter highlights the major advances achieved during that decade in terms of data acquisition, processing technology, and interpretation methods. For all major seismic projects, the authors provide specific details on field observations, interpreted crustal cross sections, and key references. They conclude with global and continental-scale maps of all field measurements and interpreted Moho contours. An accompanying DVD contains important out-of-print publications and an extensive collection of controlled-source data, location maps, and crustal cross sections.
NASA Astrophysics Data System (ADS)
Ji, Cuiying; Zhang, Xuewei; Yan, Xiaogang; Mostafizar Rahman, M.; Prates, Luciana L.; Yu, Peiqiang
2017-08-01
The objectives of this study were to: 1) investigate forage carbohydrate molecular structure profiles; 2) examine bio-functions in terms of carbohydrate (CHO) rumen degradation characteristics and the hourly effective degradation ratio of N to OM (HEDN/OM); and 3) quantify the interactive association between molecular structures, bio-functions, and nutrient availability. Vibrational molecular spectroscopy was applied to investigate structural features on a molecular basis. Alfalfa forages from two source origins were used as model forages. The results showed that the carbohydrate molecular structure profiles were highly linked to bio-functions in terms of rumen degradation characteristics and the hourly effective degradation ratio. The molecular spectroscopic technique can be used to detect forage carbohydrate structural features on a molecular basis and to study the interactive association between forage molecular structure and bio-functions.
Renewable energies in electricity generation for reduction of greenhouse gases in Mexico 2025.
Islas, Jorge; Manzini, Fabio; Martínez, Manuel
2002-02-01
This study presents 4 scenarios relating to the environmental futures of electricity generation in Mexico up to the year 2025. The first scenario emphasizes the use of oil products, particularly fuel oil, and represents the historic path of Mexico's energy policy. The second scenario prioritizes the use of natural gas, reflecting the energy consumption pattern that arose in the mid-1990s as a result of reforms in the energy sector. In the third scenario, the high participation of renewable sources of energy is considered feasible from a technical and economic point of view. The fourth scenario takes into account the present- and medium-term use of natural-gas technologies that the energy reform has produced, but after 2007 a high and feasible participation of renewable sources of energy is considered. The 4 scenarios are evaluated up to the year 2025 in terms of greenhouse gases (GHG) and acid rain precursor gases (ARPG).
Neon reduction program on Cymer ArF light sources
NASA Astrophysics Data System (ADS)
Kanawade, Dinesh; Roman, Yzzer; Cacouris, Ted; Thornes, Josh; O'Brien, Kevin
2016-03-01
Faced with significant neon supply constraints, Cymer has responded with a multi-part plan to support its customers. Cymer's primary objective is to ensure that reliable system performance is maintained while minimizing gas consumption. Gas algorithms were optimized to ensure stable performance across all operating conditions. The Cymer neon support plan contains four elements: (1) a gas reduction program to reduce neon consumption by more than 50% while maintaining existing performance levels and availability; (2) short-term containment solutions for immediate relief; (3) qualification of additional gas suppliers; and (4) a long-term recycling/reclaim opportunity. The neon reduction program has shown excellent results, as demonstrated by comparing standard gas use with the new >50%-reduced-neon performance for ArF immersion light sources. Testing included stressful conditions such as repetition-rate, duty-cycle, and energy-target changes. No performance degradation has been observed over typical gas lives.
On the role of mean flows in Doppler shifted frequencies
NASA Astrophysics Data System (ADS)
Gerkema, Theo; Maas, Leo R. M.; van Haren, Hans
2013-04-01
In the oceanographic literature, the term 'Doppler shift' often features in the context of mean flows and (internal) waves. Closer inspection reveals that the term is in fact used for two different things, which should be carefully distinguished, for their conflation results in incorrect interpretations. One refers to the difference in frequencies measured by two observers, one at a fixed position and one moving with the mean flow. The other definition is the one used in physics, where the frequency measured by an observer is compared to that of the source. In the latter sense, Doppler shifts occur only if the source and observer move with respect to each other; a steady mean flow cannot create a Doppler shift. We rehash the classical theory to straighten out some misconceptions and discuss how wave dispersion affects the classical relations and their application, for example on near-inertial internal waves.
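In generic notation (the symbols below are ours, not taken from the paper), the two usages the abstract distinguishes can be stated side by side:

```latex
% Frame transformation (oceanographic usage): frequency \omega_0 seen by
% an observer drifting with the mean flow U versus frequency \omega at a
% fixed mooring, for a wave with wavenumber vector k:
\omega = \omega_0 + \mathbf{k} \cdot \mathbf{U}
% Doppler shift (physics usage): emitted versus observed frequency for a
% source approaching a fixed observer at speed v_s, with wave speed c:
\omega_{\mathrm{obs}} = \frac{\omega_{\mathrm{src}}}{1 - v_s/c}
% A steady mean flow moves neither source nor observer relative to each
% other, so it produces only the frame transformation above, not a
% Doppler shift in the second sense.
```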
Unprecedented long-term frequency stability with a microwave resonator oscillator.
Grop, Serge; Schafer, Wolfgang; Bourgeois, Pierre-Yves; Kersale, Yann; Oxborrow, Mark; Rubiola, Enrico; Giordano, Vincent
2011-08-01
This article reports on the long-term frequency stability characterization of a new type of cryogenic sapphire oscillator using an autonomous pulse-tube cryocooler as its cold source. This new design enables a relative frequency stability of better than 4.5 × 10^-15 over one day of integration. To the best of our knowledge, this represents the best long-term frequency stability ever obtained with a signal source based on a macroscopic resonator.
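Long-term stability figures such as 4.5 × 10^-15 at one day of integration are conventionally quoted as Allan deviations. A minimal sketch of a non-overlapping Allan deviation computation, run on synthetic white-frequency noise rather than real oscillator data:

```python
import math
import random

def allan_deviation(freq, m):
    """Non-overlapping Allan deviation of a fractional-frequency
    series `freq` at averaging factor m (in samples)."""
    n = len(freq) // m
    # Average the series into n contiguous bins of m samples each.
    means = [sum(freq[i * m:(i + 1) * m]) / m for i in range(n)]
    # Allan variance: half the mean squared difference of adjacent bins.
    diffs = [(means[i + 1] - means[i]) ** 2 for i in range(n - 1)]
    return math.sqrt(sum(diffs) / (2 * (n - 1)))

# Synthetic white frequency noise at the 1e-14 level (illustrative).
random.seed(0)
y = [random.gauss(0.0, 1e-14) for _ in range(10000)]

# For white frequency noise the Allan deviation falls as m**-0.5,
# so averaging 100x should reduce it by roughly 10x.
print(allan_deviation(y, 1), allan_deviation(y, 100))
```

Real characterizations use overlapping estimators and longer records, but the binning-and-differencing structure is the same.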
Nuclear Powerplant Safety: Source Terms. Nuclear Energy.
ERIC Educational Resources Information Center
Department of Energy, Washington, DC. Nuclear Energy Office.
There has been increased public interest in the potential effects of nuclear powerplant accidents since the Soviet reactor accident at Chernobyl. People have begun to look for more information about the amount of radioactivity that might be released into the environment as a result of such an accident. When this issue is discussed by people…
We compared patterns of historical watershed nutrient inputs with in-river nutrient loads for the Neuse River, NC. Basin-wide sources of both nitrogen and phosphorus have increased substantially during the past century, marked by a sharp increase in the last 10 years resulting...
Motivational Aspects of Learning Genetics with Interactive Multimedia
ERIC Educational Resources Information Center
Tsui, Chi-Yan; Treagust, David F.
2004-01-01
A BioLogica trial conducted by the Concord Consortium in six U.S. schools used an interpretive approach to examine student motivation in learning genetics. Multiple data sources, such as online tests, computer data log files, and classroom observations, were used; the results are described in terms of interviewees' perceptions, class-wide online…
There is often a difference between the timing of an intervention in a natural system and the resultant impact. Delays in the transport of pollutants from sources, inertia in the effects of policy on economic actors, and long-term memory in natural systems, turn environmental man...
Benzotriazoles (BZTs) are used in a broad range of commercial and industrial products, particularly as metal corrosion inhibitors and as ultraviolet (UV) light stabilizer additives in plastics and polymers. Their long-term usage and high production volumes have resulted in the r...
Educational Policy Reform As a Result of a Failed Political Initiative.
ERIC Educational Resources Information Center
Greene, Ravelle Lyn; Dutton, Jo Sargent
Most political scientists argue that the term "public policy" encompasses both the actions of government and the intentions that determine those actions. This paper presents an overview of theories about the sources of impetus for policy changes, reviews public policy formation about school choice at the national level and in California,…
Data, Data Everywhere--Not a Report in Sight!
ERIC Educational Resources Information Center
Norman, Wendy
2003-01-01
Presents six steps of data warehouse development that result in valuable, long-term reporting solutions, discussing how to choose the right reporting vehicle. The six steps are: defining one's needs; mapping the source for each element; extracting the data; cleaning and verifying the data; moving the data into a relational database; and developing…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grabaskas, Dave; Brunett, Acacia J.; Bucknor, Matthew
GE Hitachi Nuclear Energy (GEH) and Argonne National Laboratory are currently engaged in a joint effort to modernize and develop probabilistic risk assessment (PRA) techniques for advanced non-light water reactors. At a high level, the primary outcome of this project will be the development of next-generation PRA methodologies that will enable risk-informed prioritization of safety- and reliability-focused research and development, while also identifying gaps that may be resolved through additional research. A subset of this effort is the development of PRA methodologies to conduct a mechanistic source term (MST) analysis for event sequences that could result in the release of radionuclides. The MST analysis seeks to realistically model and assess the transport, retention, and release of radionuclides from the reactor to the environment. The MST methods developed during this project seek to satisfy the requirements of the Mechanistic Source Term element of the ASME/ANS Non-LWR PRA standard. The MST methodology consists of separate analysis approaches for risk-significant and non-risk-significant event sequences that may result in the release of radionuclides from the reactor. For risk-significant event sequences, the methodology focuses on a detailed assessment, using mechanistic models, of radionuclide release from the fuel, transport through and release from the primary system, transport in the containment, and finally release to the environment. The analysis approach for non-risk-significant event sequences examines the possibility of large radionuclide releases due to events such as re-criticality or the complete loss of radionuclide barriers. This paper provides details on the MST methodology, including the interface between the MST analysis and other elements of the PRA, and provides a simplified example MST calculation for a sodium fast reactor.
NASA Astrophysics Data System (ADS)
Smith, N.; Blewitt, D.; Hebert, L. B.
2015-12-01
In coordination with oil and gas operators, we developed a high-resolution (< 1 min) simulation of temporal variability in well-pad oil and gas emissions over a year. We include routine emissions from condensate tanks, dehydrators, pneumatic devices, fugitive leaks, and liquids unloading. We explore the variability in natural gas emissions from these individual well-pad sources, and find that routine short-term episodic emissions such as tank flashing and liquids unloading result in the appearance of a skewed, or 'fat-tail', distribution of emissions from an individual well-pad over time. Additionally, we explore the expected variability in emissions from multiple wells with different raw gas composition, gas/liquids production volumes, and control equipment. Differences in well-level composition, production volume, and control equipment translate into differences in well-level emissions, leading to a fat-tail distribution of emissions in the absence of operational upsets. Our results have several implications for recent studies focusing on emissions from oil and gas sources. The time scale of emission estimates matters and has important policy implications. Fat-tail distributions may not be entirely driven by avoidable mechanical failures, and are expected to occur under routine operational conditions from short-duration emissions (e.g., tank flashing, liquids unloading). An understanding of the expected distribution of emissions for a particular population of wells is necessary to evaluate whether an observed distribution is more skewed than expected. Temporal variability in well-pad emissions makes comparisons to annual average emission inventories difficult and may complicate the interpretation of long-term ambient fenceline monitoring data. Sophisticated change-detection algorithms will be necessary to distinguish true operational upsets from routine short-term emissions.
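The "fat tail" that routine episodic emissions produce can be illustrated with a toy simulation. All rates and probabilities below are invented for illustration; they are not drawn from the study:

```python
import random

random.seed(42)

def hourly_emission():
    """One hour of well-pad natural gas emissions (kg/hr).
    All numbers are invented: a continuous fugitive/pneumatic
    baseline plus a rare, high-rate episodic event standing in
    for tank flashing or liquids unloading."""
    baseline = max(random.gauss(0.5, 0.1), 0.0)
    episodic = 100.0 if random.random() < 0.01 else 0.0
    return baseline + episodic

sample = sorted(hourly_emission() for _ in range(8760))  # one year
median = sample[len(sample) // 2]
mean = sum(sample) / len(sample)
# Rare episodic events drag the mean far above the median: the
# skewed "fat tail" appears under purely routine operation.
print(round(mean, 2), round(median, 2))
```

The gap between mean and median is exactly why a short-term measurement campaign can disagree with an annual-average inventory even when nothing is malfunctioning.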
The Study of High-Speed Surface Dynamics Using a Pulsed Proton Beam
NASA Astrophysics Data System (ADS)
Buttler, William; Stone, Benjamin; Oro, David; Dimonte, Guy; Preston, Dean; Cherne, Frank; Germann, Timothy; Terrones, Guillermo; Tupa, Dale
2011-06-01
Los Alamos National Laboratory is presently engaged in the development and implementation of ejecta source term and transport models for integration into LANL hydrodynamic computer codes. Experimental support for the effort spans a broad array of activities, including ejecta source term measurements from machine-roughened Sn surfaces shocked by HE or flyer plates. Because the underlying postulate for ejecta formation is that ejecta are characterized by Richtmyer-Meshkov instability (RMI) phenomena, a key element of the theory and modeling effort centers on validation and verification RMI experiments at the LANSCE Proton Radiography Facility (pRad) to compare with modeled ejecta measurements. Here we present experimental results used to define and validate a physics-based ejecta model, together with remarkable, unexpected results on Sn instability growth in vacuum and gases, and Sn and Cu RM growth that reveals the sensitivity of the RM instability to the yield strength of the material, Cu. The motivation for this last subject, RM growth linked to material strength, is to probe the shock pressure regions over which ejecta begin to form.
NASA Astrophysics Data System (ADS)
Liu, Yong; Shu, Chi-Wang; Zhang, Mengping
2018-02-01
We present a discontinuous Galerkin (DG) scheme with suitable quadrature rules [15] for ideal compressible magnetohydrodynamic (MHD) equations on structured meshes. The semi-discrete scheme is shown to be entropy stable by combining the symmetrizable version of the equations introduced by Godunov [32], the entropy stable DG framework with suitable quadrature rules [15], the entropy conservative flux of [14] inside each cell, and an entropy dissipative approximate Godunov-type numerical flux at cell interfaces. The main difficulty in the generalization of the results in [15] is the appearance of the non-conservative "source terms" added in the modified MHD model introduced by Godunov [32], which do not exist in the general hyperbolic system studied in [15]. Special care must be taken to discretize these "source terms" adequately so that the resulting DG scheme satisfies entropy stability. Total variation diminishing / bounded (TVD/TVB) limiters and bound-preserving limiters are applied to control spurious oscillations. We demonstrate the accuracy and robustness of this new scheme on standard MHD examples.
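For context, the non-conservative "source terms" of the Godunov-modified MHD system are often written in the Godunov-Powell form below. This is a standard statement from the literature, not a quotation from the paper, and the ordering of conserved variables is our choice:

```latex
% Godunov-Powell non-conservative terms for ideal MHD, with conserved
% variables (density, momentum, total energy, magnetic field):
\partial_t
\begin{pmatrix} \rho \\ \rho\mathbf{u} \\ E \\ \mathbf{B} \end{pmatrix}
+ \nabla\cdot\mathbf{F}(\mathbf{U})
= -(\nabla\cdot\mathbf{B})
\begin{pmatrix} 0 \\ \mathbf{B} \\ \mathbf{u}\cdot\mathbf{B} \\ \mathbf{u} \end{pmatrix}
% These terms vanish for exactly divergence-free B, but their discrete
% treatment must be compatible with the entropy analysis.
```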
Synthesis of Exotic Soaps in the Chemistry Laboratory
NASA Astrophysics Data System (ADS)
Phanstiel, Otto, IV; Dueno, Eric; Xianghong Wang, Queenie
1998-05-01
A variety of different triglyceride sources ranging from Vietnamese garlic oil to a local restaurant's grill sludge were saponified to generate a series of exotic soaps. Students did not quantify their results, but described their products in terms of color, texture and odor. Their results were compared with existing data on the triglyceride content for each source used (when possible). Soap texture seemed to be related to the degree of unsaturation present in the starting triglyceride. However, texture alterations due to occluded impurities could not be ruled out. In general, fats and oils high in saturated fats (butter) gave hard, chunky, and waxlike soaps, while those high in unsaturated fats gave flaky and easily crumbled soaps (olive, corn, peanut and sunflower oils). Soap color was not consistent with triglyceride unsaturation levels during the time frame studied. Odor changes were dramatic and were explained in terms of a change in chemical structure (i.e. conversion from an ester to a carboxylate salt). In general, the experiment was well received by students and stressed the importance of making precise qualitative observations during the experiment.
NASA Technical Reports Server (NTRS)
Karchmer, A. M.
1977-01-01
Fluctuating pressure measurements within the combustor and tailpipe of a turbofan engine are made simultaneously with far field acoustic measurements. The pressure measurements within the engine are accomplished with cooled semi-infinite waveguide probes utilizing conventional condenser microphones as the transducers. The measurements are taken over a broad range of engine operating conditions and for 16 far field microphone positions between 10 deg and 160 deg relative to the engine inlet axis. Correlation and coherence techniques are used to determine the relative phase and amplitude relationships between the internal pressures and far field acoustic pressures. The results indicate that the combustor is a low frequency source region for acoustic propagation through the tailpipe and out to the far field. Specifically, it is found that the relation between source pressure and the resulting sound pressure involves a 180 deg phase shift. The latter result is obtained by Fourier transforming the cross correlation function between the source pressure and acoustic pressure after removing the propagation delay time. Further, it is found that the transfer function between the source pressure and acoustic pressure has a magnitude approximately proportional to frequency squared. These results are shown to be consistent with a model using a modified source term in Lighthill's turbulence stress tensor, wherein the fluctuating Reynolds stresses are replaced with the pressure fluctuations due to fluctuating entropy.
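The delay-removal step described above, finding the propagation delay from the peak of the cross-correlation before Fourier transforming, can be sketched as follows. A synthetic delayed sinusoid stands in for the combustor and far-field pressure signals:

```python
import math

def cross_correlation_lag(x, y):
    """Estimate propagation delay as the lag (in samples) that
    maximizes the cross-correlation between source signal x and
    far-field signal y."""
    n = len(x)
    best_lag, best_val = 0, float("-inf")
    for lag in range(n // 2):
        val = sum(x[i] * y[i + lag] for i in range(n - lag))
        if val > best_val:
            best_lag, best_val = lag, val
    return best_lag

# Synthetic stand-in: a low-frequency "combustor" tone and a copy
# delayed by 25 samples, mimicking the acoustic propagation time.
x = [math.sin(2 * math.pi * 0.01 * i) for i in range(1000)]
delay = 25
y = [0.0] * delay + x[:-delay]
print(cross_correlation_lag(x, y))  # → 25
```

Once the lag is known, the cross-correlation can be re-aligned before transforming, which is how phase relationships such as the 180-degree shift reported above become visible.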
22 CFR 228.12 - Long-term leases.
Code of Federal Regulations, 2010 CFR
2010-04-01
22 Foreign Relations 1 (revised as of 2010-04-01): Long-term leases, § 228.12. Foreign Relations, AGENCY FOR INTERNATIONAL DEVELOPMENT, RULES ON SOURCE, ORIGIN AND NATIONALITY FOR COMMODITIES AND... agreement is subject to the source and origin requirements of this subpart B. For purposes of this subpart B...
Related Studies in Long Term Lithium Battery Stability
NASA Technical Reports Server (NTRS)
Horning, R. J.; Chua, D. L.
1984-01-01
The continuing growth of the use of lithium electrochemical systems in a wide variety of military and industrial applications is primarily a result of the significant benefits associated with the technology, such as high energy density, wide temperature operation, and long-term stability. The stability, or long-term storage capability, of a battery is a function of several factors, each important to the overall storage life and, therefore, each potentially a problem area if not addressed during the design, development, and evaluation phases of the product cycle. Design (e.g., reserve vs. active), inherent material thermal stability, material compatibility, and self-discharge characteristics are examples of factors key to the storability of a power source.
NASA Astrophysics Data System (ADS)
Shukla, S.; Shukla, A.
2017-12-01
Water and phosphorus (P) dynamics and loss pathways at two stormwater impoundments (SIs) were analyzed using fluxes measured between 2008 and 2011. These SIs are a decade old. Analyses of water and P budgets, along with the discernment of various P pools and characterization of the intermediary processes, revealed that soil adsorption and plant uptake are secondary to volume reduction with respect to P treatment. At one site, extreme wet conditions in a year, combined with soil P saturation, resulted in it being a P source rather than a sink. The impoundment (SI-1) discharged 12% more P than came in due to soil P desorption, a consequence of dilution of incoming stormwater by large water input from an extreme tropical rain event. The second impoundment (SI-2) was a consistent sink of P; 55% and 95% of the incoming total P was retained in the two years, mainly as a result of 49% and 84% volume retention, respectively. Analysis of plant-available aluminum, iron, and phosphorus showed the surface soil to be P saturated and at risk of releasing P to a level of environmental concern. These results, when seen in light of more frequent extreme precipitation events under a changed climate, call for alternatives to revive the role of biogeochemical processes in P treatment, because volume reduction may not always be a viable option, especially under wet conditions. Aboveground biomass harvesting and removal was evaluated as a way to transform the SIs from a frequent P source to a sink and to maintain the long-term sink functions of the SIs. Use of harvested biomass as a source of nutrients (N and P) and carbon for agricultural soil can result in beneficial use of the biomass and offset the cost of harvesting. Other avenues, such as altering the hydrology of the SIs by compartmentalizing the system and increasing the storage, were also explored for short-term benefits.
Results provided a combination of hydraulic and biochemical options for achieving long-term water and nutrient retention in agricultural and urban landscapes that use the SIs to meet downstream flow and water-quality goals for watersheds.
2016-11-17
…out dynamics of a designer fluid were investigated experimentally in a flat grooved heat pipe. Generated coatings were observed during heat pipe… experimental temperature distributions matched well. Uncertainties in the closure properties were the major source of error.
Li, Tianhong; Bai, Fengjiao; Han, Peng; Zhang, Yuanyan
2016-11-01
Urban sprawl is a major driving force that alters local and regional hydrology and increases non-point source pollution. Using the Bao'an District in Shenzhen, China, a typical rapid urbanization area, as the study area and land-use change maps from 1988 to 2014 that were obtained by remote sensing, the contributions of different land-use types to NPS pollutant production were assessed with a localized long-term hydrologic impact assessment (L-THIA) model. The results show that the non-point source pollution load changed significantly both in terms of magnitude and spatial distribution. The loads of chemical oxygen demand, total suspended substances, total nitrogen and total phosphorus were affected by the interactions between event mean concentration and the magnitude of changes in land-use acreages and the spatial distribution. From 1988 to 2014, the loads of chemical oxygen demand, suspended substances and total phosphorus showed clearly increasing trends with rates of 132.48 %, 32.52 % and 38.76 %, respectively, while the load of total nitrogen decreased by 71.52 %. The immigrant population ratio was selected as an indicator to represent the level of rapid urbanization and industrialization in the study area, and a comparison analysis of the indicator with the four non-point source loads demonstrated that the chemical oxygen demand, total phosphorus and total nitrogen loads are linearly related to the immigrant population ratio. The results provide useful information for environmental improvement and city management in the study area.
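The core accounting in L-THIA-style estimates multiplies an event mean concentration (EMC) by a land-use-specific runoff volume. A minimal sketch, with entirely hypothetical EMCs, areas, and runoff depths that are not the Bao'an values:

```python
# Hypothetical event mean concentrations (EMCs) and runoff depths;
# none of these values come from the Bao'an study.
land_use = {
    # name: (area in ha, annual runoff in mm, COD EMC in mg/L)
    "urban":       (5000.0, 600.0, 60.0),
    "agriculture": (3000.0, 300.0, 40.0),
    "forest":      (2000.0, 100.0, 10.0),
}

def cod_load_tonnes(area_ha, runoff_mm, emc_mg_per_l):
    """Annual COD load: runoff volume times concentration."""
    runoff_m3 = area_ha * 1e4 * runoff_mm / 1000.0   # ha * mm -> m^3
    return runoff_m3 * 1000.0 * emc_mg_per_l / 1e9   # mg -> tonnes

total = sum(cod_load_tonnes(*v) for v in land_use.values())
print(round(total, 1))  # → 2180.0
```

This structure is why both the EMC values and the magnitude and spatial distribution of land-use change matter: shifting area from forest to urban raises both the runoff depth and the concentration applied to it.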
Numerical simulations of LNG vapor dispersion in Brayton Fire Training Field tests with ANSYS CFX.
Qi, Ruifeng; Ng, Dedy; Cormier, Benjamin R; Mannan, M Sam
2010-11-15
Federal safety regulations require the use of validated consequence models to determine the vapor cloud dispersion exclusion zones for accidental liquefied natural gas (LNG) releases. One tool that is being developed in industry for exclusion zone determination and LNG vapor dispersion modeling is computational fluid dynamics (CFD). This paper uses the ANSYS CFX CFD code to model LNG vapor dispersion in the atmosphere. Discussed are important parameters that are essential inputs to the ANSYS CFX simulations, including the atmospheric conditions, LNG evaporation rate and pool area, turbulence in the source term, ground surface temperature and roughness height, and effects of obstacles. A sensitivity analysis was conducted to illustrate uncertainties in the simulation results arising from the mesh size and source term turbulence intensity. In addition, a set of medium-scale LNG spill tests were performed at the Brayton Fire Training Field to collect data for validating the ANSYS CFX prediction results. A comparison of test data with simulation results demonstrated that CFX was able to describe the dense gas behavior of the LNG vapor cloud, and its predictions of downwind gas concentrations close to ground level were in approximate agreement with the test data.
NuSTAR view of the central region of M31
NASA Astrophysics Data System (ADS)
Stiele, H.; Kong, A. K. H.
2018-04-01
Our neighbouring large spiral galaxy, the Andromeda galaxy (M31 or NGC 224), is an ideal target for studying the X-ray source population of a nearby galaxy. NuSTAR observed the central region of M31 in 2015, allowing the population of X-ray point sources to be studied at energies above 10 keV. Based on the source catalogue of the large XMM-Newton survey of M31, we identified counterparts to the XMM-Newton sources in the NuSTAR data. The NuSTAR data contain only sources comparable in brightness to (or brighter than) the selected sources detected in the XMM-Newton data. We investigate hardness ratios, spectra, and long-term light curves of individual sources obtained from the NuSTAR data. Based on our spectral studies, we suggest four sources as possible X-ray binary candidates. The long-term light curves of the seven sources that have been observed more than once show low (but significant) variability.
Source-term development for a contaminant plume for use by multimedia risk assessment models
NASA Astrophysics Data System (ADS)
Whelan, Gene; McDonald, John P.; Taira, Randal Y.; Gnanapragasam, Emmanuel K.; Yu, Charley; Lew, Christine S.; Mills, William B.
2000-02-01
Multimedia modelers from the US Environmental Protection Agency (EPA) and US Department of Energy (DOE) are collaborating to conduct a comprehensive and quantitative benchmarking analysis of four intermedia models: MEPAS, MMSOILS, PRESTO, and RESRAD. These models represent typical analytically based tools that are used in human-risk and endangerment assessments at installations containing radioactive and hazardous contaminants. The objective is to demonstrate an approach for developing an adequate source term by simplifying an existing, real-world, 90Sr plume at DOE's Hanford installation in Richland, WA, for use in a multimedia benchmarking exercise between MEPAS, MMSOILS, PRESTO, and RESRAD. Source characteristics and a release mechanism are developed and described; also described is a typical process and procedure that an analyst would follow in developing a source term for using this class of analytical tool in a preliminary assessment.
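One common simple release mechanism fed to this class of analytical multimedia tools is first-order leaching from a decaying inventory. A hedged sketch with invented parameters (these are not the Hanford 90Sr source characteristics, only the 90Sr half-life is a physical constant):

```python
import math

# Invented parameters: NOT the Hanford 90Sr source characteristics.
INVENTORY_CI = 100.0        # initial source inventory (Ci)
LEACH_RATE = 0.05           # first-order release constant (1/yr)
HALF_LIFE_YR = 28.8         # 90Sr radioactive half-life (years)
DECAY = math.log(2.0) / HALF_LIFE_YR

def release_rate(t_years):
    """Release flux Q(t) = k * I0 * exp(-(k + lambda) * t): first-order
    leaching from an inventory simultaneously depleted by leaching
    and by radioactive decay."""
    return LEACH_RATE * INVENTORY_CI * math.exp(-(LEACH_RATE + DECAY) * t_years)

# Integrating Q(t) from 0 to infinity gives the activity ever
# released: I0 * k / (k + lambda); the remainder decays in place.
total = LEACH_RATE * INVENTORY_CI / (LEACH_RATE + DECAY)
print(round(total, 1))  # → 67.5
```

A benchmarking exercise like the one described would hand each model the same inventory, rate constant, and geometry so that differences in predicted doses reflect the transport modules rather than the source term.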
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gauntt, Randall O.; Goldmann, Andrew; Kalinich, Donald A.
2016-12-01
In this study, risk-significant pressurized-water reactor severe accident sequences are examined using MELCOR 1.8.5 to explore the range of fission product releases to the reactor containment building. Advances in the understanding of fission product release and transport behavior and severe accident progression are used to render best-estimate analyses of selected accident sequences. Particular emphasis is placed on estimating the effects of high fuel burnup, in contrast with low burnup, on fission product releases to the containment. Supporting this emphasis, recent data available on fission product release from high-burnup (HBU) fuel from the French VERCOR project are used in this study. The results of these analyses are treated as samples from a population of accident sequences in order to employ approximate order statistics characterization of the results. These trends and tendencies are then compared to the NUREG-1465 alternative source term prescription used today for regulatory applications. In general, greater differences are observed between the state-of-the-art calculations for either HBU or low-burnup (LBU) fuel and the NUREG-1465 containment release fractions than exist between HBU and LBU release fractions. Current analyses suggest that retention of fission products within the vessel and the reactor coolant system (RCS) is greater than contemplated in the NUREG-1465 prescription, and that, overall, release fractions to the containment are therefore lower across the board in the present analyses than suggested in NUREG-1465. The decreased volatility of Cs2MoO4 compared to CsI or CsOH increases the predicted RCS retention of cesium, and as a result, cesium and iodine do not follow identical behaviors with respect to distribution among vessel, RCS, and containment.
With respect to the regulatory alternative source term, greater differences are observed between the NUREG-1465 prescription and both HBU and LBU predictions than exist between HBU and LBU analyses. Additionally, current analyses suggest that the NUREG-1465 release fractions are conservative by about a factor of 2, and that release durations for the in-vessel and late in-vessel release periods are in fact longer than the NUREG-1465 durations. A subsequent report is planned that will further characterize these results using more refined statistical methods, permitting a more precise reformulation of the NUREG-1465 alternative source term for both LBU and HBU fuels; the most important finding is that the NUREG-1465 formula appears to embody significant conservatism compared to current best-estimate analyses. Acknowledgements: This work was supported by the United States Nuclear Regulatory Commission, Office of Nuclear Regulatory Research. The authors would like to thank Dr. Ian Gauld and Dr. Germina Ilas of Oak Ridge National Laboratory for their contributions to this work. In addition to developing core fission product inventory and decay heat information for use in MELCOR models, their insights into fuel management practices and the resulting effects on the spatial distribution of fission products in the core were instrumental in the completion of this work.
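The "approximate order statistics characterization" mentioned above can be illustrated with the classical first-order Wilks formula, the standard nonparametric tool for bounding a population quantile from a finite set of code runs. The sketch below is our illustration, not taken from the report, and the function names are invented.

```python
import math

def wilks_sample_size(coverage=0.95, confidence=0.95):
    """Smallest n such that the sample maximum bounds the `coverage`
    quantile of the population with probability >= `confidence`
    (first-order, one-sided Wilks criterion: 1 - coverage**n >= confidence)."""
    n = 1
    while 1.0 - coverage ** n < confidence:
        n += 1
    return n

def empirical_quantile(samples, q):
    """Rank-based estimate of the q-quantile from a finite sample set."""
    s = sorted(samples)
    k = min(len(s) - 1, math.ceil(q * len(s)) - 1)
    return s[max(k, 0)]

if __name__ == "__main__":
    print(wilks_sample_size())  # classical 95/95 answer: 59 runs
```

The 95/95 answer of 59 runs is why sets of several dozen severe-accident calculations suffice for a distribution-free statement about release fractions.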
A Cross-Lingual Similarity Measure for Detecting Biomedical Term Translations
Bollegala, Danushka; Kontonatsios, Georgios; Ananiadou, Sophia
2015-01-01
Bilingual dictionaries for technical terms such as biomedical terms are an important resource for machine translation systems, as well as for humans who would like to understand a concept described in a foreign language. Often a biomedical term is first proposed in English and later manually translated into other languages. Despite the fact that there are large monolingual lexicons of biomedical terms, only a fraction of those term lexicons are translated into other languages. Manually compiling large-scale bilingual dictionaries for technical domains is a challenging task because it is difficult to find a sufficiently large number of bilingual experts. We propose a cross-lingual similarity measure for detecting the most similar translation candidates for a biomedical term specified in one language (source) from another language (target). Specifically, a biomedical term in a language is represented using two types of features: (a) intrinsic features that consist of character n-grams extracted from the term under consideration, and (b) extrinsic features that consist of unigrams and bigrams extracted from the contextual windows surrounding the term under consideration. We propose a cross-lingual similarity measure using each of those feature types. First, to reduce the dimensionality of the feature space in each language, we propose prototype vector projection (PVP), a non-negative lower-dimensional vector projection method. Second, we propose a method to learn a mapping between the feature spaces in the source and target languages using partial least squares regression (PLSR). The proposed method requires only a small number of training instances to learn a cross-lingual similarity measure. The proposed PVP method outperforms popular dimensionality reduction methods such as singular value decomposition (SVD) and non-negative matrix factorization (NMF) in a nearest neighbor prediction task.
Moreover, our experimental results covering several language pairs such as English–French, English–Spanish, English–Greek, and English–Japanese show that the proposed method outperforms several other feature projection methods in biomedical term translation prediction tasks. PMID:26030738
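As a minimal illustration of the intrinsic (character n-gram) features described above, the sketch below ranks translation candidates by cosine similarity between n-gram count vectors. It deliberately omits the PVP projection and PLSR mapping that the paper actually proposes, and the terms shown are hypothetical examples.

```python
from collections import Counter
from math import sqrt

def char_ngrams(term, n=3):
    """Intrinsic features: character n-grams of a boundary-padded term."""
    padded = f"^{term.lower()}$"
    return Counter(padded[i:i + n] for i in range(len(padded) - n + 1))

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a if k in b)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def rank_candidates(source_term, candidates):
    """Rank target-language candidates by n-gram cosine similarity."""
    src = char_ngrams(source_term)
    return sorted(candidates, key=lambda c: cosine(src, char_ngrams(c)), reverse=True)

print(rank_candidates("hepatitis", ["hepatite", "grippe"])[0])  # → hepatite
```

Character n-grams work across related languages because technical terms often share Latin or Greek roots; the extrinsic contextual features handle the cases where surface forms diverge.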
Ghannam, K; El-Fadel, M
2013-02-01
This paper examines the relative source contributions to ground-level concentrations of carbon monoxide (CO), nitrogen dioxide (NO2), and PM10 (particulate matter with an aerodynamic diameter < 10 μm) in a coastal urban area due to emissions from an industrial complex with multiple stacks, quarrying activities, and a nearby highway. For this purpose, an inventory of CO, oxides of nitrogen (NOx), and PM10 emissions was coupled with the non-steady-state Mesoscale Model 5/California Puff (CALPUFF) dispersion modeling system to simulate individual source contributions at several spatial and temporal scales. Because the contribution of a particular source to ground-level concentrations can be evaluated either by simulating that source's emissions alone or by simulating total emissions except that source, a set of emission sensitivity simulations was designed to examine whether CALPUFF maintains a linear relationship between emission rates and predicted concentrations in cases where emitted plumes overlap and chemical transformations are simulated. Source apportionment revealed that ground-level releases (i.e., the highway and quarries) extending over large areas dominated the contribution to exposure levels over elevated point sources, despite the fact that cumulative emissions from the point sources are higher. Sensitivity analysis indicated that chemical transformations of NOx are insignificant, possibly due to short-range plume transport, with CALPUFF exhibiting a linear response to changes in emission rate. The paper points to the significance of ground-level emissions in contributing to urban air pollution exposure and questions the viability of the prevailing paradigm of point-source emission reduction, especially as the incremental improvement in air quality associated with this common abatement strategy may not deliver the desired benefit in terms of lower exposure, despite costly emissions capping.
The application of atmospheric dispersion models for source apportionment helps identify major contributors to regional air pollution. In industrial urban areas where multiple sources with different geometries contribute to emissions, ground-level releases extending over large areas, such as roads and quarries, often dominate the contribution to ground-level air pollution. Industrial emissions released at elevated stack heights may experience significant dilution, resulting in a minor contribution to exposure at ground level. In such contexts, emission reduction, which is invariably the abatement strategy targeting industries at a significant investment in control equipment or process change, may yield minimal return on investment in terms of improved air quality at sensitive receptors.
NASA Astrophysics Data System (ADS)
Katata, G.; Chino, M.; Kobayashi, T.; Terada, H.; Ota, M.; Nagai, H.; Kajino, M.; Draxler, R.; Hort, M. C.; Malo, A.; Torii, T.; Sanada, Y.
2014-06-01
Temporal variations in the amount of radionuclides released into the atmosphere during the Fukushima Dai-ichi Nuclear Power Station (FNPS1) accident and their atmospheric and marine dispersion are essential for evaluating the environmental impacts and the resultant radiological doses to the public. In this paper, we estimate a detailed time trend of atmospheric releases during the accident by combining environmental monitoring data with atmospheric model simulations from WSPEEDI-II (Worldwide version of System for Prediction of Environmental Emergency Dose Information) and simulations from the oceanic dispersion model SEA-GEARN-FDM, both developed by the authors. A sophisticated deposition scheme, which deals with dry and fogwater deposition, cloud condensation nuclei (CCN) activation, and subsequent wet scavenging due to mixed-phase cloud microphysics (in-cloud scavenging) for radioactive iodine gas (I2 and CH3I) and other particles (CsI, Cs, and Te), was incorporated into WSPEEDI-II to improve the surface deposition calculations. The fallout to the ocean surface calculated by WSPEEDI-II was used as input data for the SEA-GEARN-FDM calculations. Reverse and inverse source-term estimation methods based on coupling the simulations from both models were adopted, using air dose rates and concentrations as well as sea surface concentrations. The results revealed that the major releases of radionuclides due to the FNPS1 accident occurred in the following periods during March 2011: the afternoon of 12 March, due to the wet venting and hydrogen explosion at Unit 1; the morning of 13 March, after the venting event at Unit 3; midnight of 14 March, when the SRV (Safety Relief Valve) at Unit 2 was opened three times; the morning and night of 15 March; and the morning of 16 March.
According to the simulation results, the most highly contaminated areas around FNPS1 were created from 15 to 16 March by complicated interactions among rainfall, plume movements, and the temporal variation of release rates associated with reactor pressure changes in Units 2 and 3. The modified WSPEEDI-II simulation using the new source term reproduced local and regional patterns of cumulative surface deposition of total 131I and 137Cs and air dose rates obtained by airborne surveys. The new source term was also tested using three atmospheric dispersion models (MLDP0, HYSPLIT, and NAME) for regional and global calculations, showing good agreement between calculated and observed air concentrations and surface deposition of 137Cs in East Japan. Moreover, the HYSPLIT model using the new source term also reproduced the plume arrivals in several countries abroad, showing a good correlation with measured air concentration data. A large part of the deposition pattern of total 131I and 137Cs in East Japan was explained by in-cloud particulate scavenging. However, for the regional-scale contaminated areas, there were large uncertainties due to the overestimation of rainfall amounts and the underestimation of fogwater and drizzle deposition. The computations showed that approximately 27% of the 137Cs discharged from FNPS1 was deposited on land in East Japan, mostly in forest areas.
Hirsch, Robert M.; Moyer, Douglas; Archfield, Stacey A.
2010-01-01
A new approach to the analysis of long-term surface water-quality data is proposed and implemented. The goal of this approach is to increase the amount of information that is extracted from the types of rich water-quality datasets that now exist. The method is formulated to allow for maximum flexibility in representations of the long-term trend, seasonal components, and discharge-related components of the behavior of the water-quality variable of interest. It is designed to provide internally consistent estimates of the actual history of concentrations and fluxes as well as histories that eliminate the influence of year-to-year variations in streamflow. The method employs the use of weighted regressions of concentrations on time, discharge, and season. Finally, the method is designed to be useful as a diagnostic tool regarding the kinds of changes that are taking place in the watershed related to point sources, groundwater sources, and surface-water nonpoint sources. The method is applied to datasets for the nine large tributaries of Chesapeake Bay from 1978 to 2008. The results show a wide range of patterns of change in total phosphorus and in dissolved nitrate plus nitrite. These results should prove useful in further examination of the causes of changes, or lack of changes, and may help inform decisions about future actions to reduce nutrient enrichment in the Chesapeake Bay and its watershed.
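The weighted-regression idea above, concentration regressed on time, discharge, and season, with weights emphasizing observations near the estimation point, can be sketched as follows. This is a simplified stand-in for the authors' method: Gaussian weights replace the usual tricube kernel, the half-widths are arbitrary, and flow-normalization and cross-validation are omitted.

```python
import numpy as np

def wrtds_estimate(t, q, c, t0, q0, ht=7.0, hq=2.0):
    """Weighted regression of ln(concentration) on time (decimal years),
    ln(discharge), and annual harmonics, evaluated at (t0, q0). Gaussian
    weights in time (half-width ht years) and ln-discharge (half-width hq)
    stand in for the tricube weights of the full method."""
    lq = np.log(q)
    X = np.column_stack([np.ones_like(t), t, lq,
                         np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
    w = np.exp(-((t - t0) / ht) ** 2 - ((lq - np.log(q0)) / hq) ** 2)
    sw = np.sqrt(w)
    beta, *_ = np.linalg.lstsq(X * sw[:, None], np.log(c) * sw, rcond=None)
    x0 = np.array([1.0, t0, np.log(q0),
                   np.sin(2 * np.pi * t0), np.cos(2 * np.pi * t0)])
    return float(np.exp(x0 @ beta))

# synthetic record: concentration falls with discharge, no long-term trend
rng = np.random.default_rng(0)
t = rng.uniform(1978, 2008, 400)
q = np.exp(rng.normal(0.0, 1.0, 400))
c = 2.0 * q ** -0.5
print(wrtds_estimate(t, q, c, 1995.0, 1.0))  # the exact synthetic model is recovered
```

Because the regression is refit at every (time, discharge, season) point of interest, the method is fully flexible in how trend, seasonal, and discharge effects interact, which is the core of the approach described above.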
Comparison of the landslide susceptibility models in Taipei Water Source Domain, Taiwan
NASA Astrophysics Data System (ADS)
WU, C. Y.; Yeh, Y. C.; Chou, T. H.
2017-12-01
Taipei Water Source Domain, located southeast of the Taipei metropolis, is the main source of water in this region. Recently, downstream turbidity has often soared significantly during typhoon periods because of upstream landslides. Landslide susceptibilities should be analysed to assess the zones of influence of different rainfall events, and to ensure the domain's ability to supply sufficient, high-quality water. Generally, landslide susceptibility models can be established based on either a long-term landslide inventory or a specified landslide event. Because some areas lack a long-term landslide inventory, event-based landslide susceptibility models are widely used. However, inventory-based and event-based landslide susceptibility models may produce dissimilar susceptibility maps for the same area. The purposes of this study were therefore to compare the landslide susceptibility maps derived from inventory-based and event-based models, and to determine how to select a representative event for inclusion in the susceptibility model. Landslide inventories from Typhoon Tim (July 1994) through Typhoon Soudelor (August 2015) were collected and used to establish the inventory-based landslide susceptibility model. The landslides caused by Typhoon Nari and the associated rainfall data were used to establish the event-based model. The results indicated that the high-susceptibility slope units were located in the middle and upstream Nan-Shih Stream basin.
Liu, Qi; Liu, Yadong; Yang, Fan; He, Hao; Xiao, Xianghui; Ren, Yang; Lu, Wenquan; Stach, Eric; Xie, Jian
2018-02-07
In situ high-energy synchrotron XRD studies were carried out on commercial 18650 LiFePO4 cells at different cycle counts to track the dynamic, chemical, and structural changes over the course of long-term cycling and to elucidate the capacity fading mechanism. The results indicate that crystalline structural deterioration of the LiFePO4 cathode and the graphite anode is unlikely to occur before capacity fades below 80% of the initial capacity. Rather, loss of the active lithium source is the primary cause of the capacity fade, which leads to the appearance of inactive FePO4 in proportion to the lithium lost. Our in situ HESXRD studies further show that the lithium-ion insertion and deinsertion behavior of LiFePO4 changed continuously with cycling. In a fresh cell, the LiFePO4 exhibited a dual-phase, solid-solution behavior, whereas with increasing cycle number a dynamic change, characterized by the continuous decay of the solid-solution behavior, became evident. This unexpected dynamic change may result from the morphology evolution of the LiFePO4 particles and the loss of the lithium source, and may be the cause of the decreased rate capability of LiFePO4 cells after long-term cycling.
Modeling long-term trends of chlorinated ethene contamination at a public supply well
Chapelle, Francis H.; Kauffman, Leon J.; Widdowson, Mark A.
2015-01-01
A mass-balance solute-transport modeling approach was used to investigate the effects of dense nonaqueous phase liquid (DNAPL) volume, composition, and generation of daughter products on simulated and measured long-term trends of chlorinated ethene (CE) concentrations at a public supply well. The model was built by telescoping a calibrated regional three-dimensional MODFLOW model to the capture zone of a public supply well that has a history of CE contamination. The local model was then used to simulate the interactions between naturally occurring organic carbon, which acts as an electron donor, and dissolved oxygen (DO), CEs, ferric iron, and sulfate, which act as electron acceptors, using the Sequential Electron Acceptor Model in three dimensions (SEAM3D) code. The modeling results indicate that asymmetry between rapidly rising and more gradually falling concentration trends over time suggests a DNAPL rather than a dissolved source of CEs. Peak concentrations of CEs are proportional to the volume and composition of the DNAPL source. The persistence of contamination, which can vary from a few years to centuries, is proportional to DNAPL volume but is unaffected by DNAPL composition. These results show that monitoring CE concentrations in raw water produced by impacted public supply wells over time can provide useful information concerning the nature of contaminant sources and the likely future persistence of contamination.
GPS Block 2R Time Standard Assembly (TSA) architecture
NASA Technical Reports Server (NTRS)
Baker, Anthony P.
1990-01-01
The underlying philosophy of the Global Positioning System (GPS) Block 2R Time Standard Assembly (TSA) architecture is to utilize two frequency sources, one fixed reference frequency source and one system frequency source, and to couple the system frequency source to the reference frequency source via a sampled-data loop. The system source provides the basic clock frequency and timing for the space vehicle (SV); it uses a voltage-controlled crystal oscillator (VCXO) with high short-term stability. The reference source is an atomic frequency standard (AFS) with high long-term stability. The architecture can support any type of frequency standard; the system design accommodates rubidium, cesium, and hydrogen-maser standards outputting a canonical frequency. The architecture is software intensive: all VCXO adjustments are digital, are calculated by a processor, and are applied to the VCXO via a digital-to-analog converter.
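A toy version of such a sampled-data loop, in which a processor periodically measures the VCXO frequency error against the atomic reference and applies a DAC-quantized digital correction, might look like the following. The gain, step count, and DAC resolution are invented for illustration and do not come from the GPS 2R design.

```python
def discipline_vcxo(freq_offset0=5e-8, gain=0.4, steps=30, dac_lsb=1e-12):
    """Discrete sampled-data loop: at each sample interval the VCXO
    fractional frequency error against the atomic reference is measured
    and a DAC-quantized correction is applied. Returns the error history."""
    err = freq_offset0
    history = [err]
    for _ in range(steps):
        correction = gain * err
        # quantize the digital correction to the DAC resolution
        correction = round(correction / dac_lsb) * dac_lsb
        err -= correction
        history.append(err)
    return history

hist = discipline_vcxo()
print(abs(hist[-1]) < 1e-10)  # the loop settles near the reference
```

The quantization step models the DAC floor: the loop converges geometrically until the commanded correction falls below half an LSB, after which the residual error parks at the quantization limit, which is why DAC resolution sets the ultimate steering accuracy.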
The psychological contract: enhancing productivity and its implications for long-term care.
Flannery, Raymond B
2002-01-01
When hired, a new employee is usually given a job description and an explanation of benefits. In addition, the employee will also have a psychological contract with the organization. This contract, often unstated, reflects the main source of the employee's motivation to work hard. This is true of all groups of employees, including long-term care staff. Common examples of psychological contracts for long-term care administrative staff include autonomy, social acceptance, and being at the forefront of cutting-edge research. An awareness of these psychological contracts can result in better "fits" between employee aspirations and relevant long-term care organization tasks, so that productivity is enhanced. This article outlines the steps necessary to create these good fits in ways that benefit both the organization and its employees. These recommendations are of particular relevance to administrators and supervisors in long-term care facilities.
Advances in audio source separation and multisource audio content retrieval
NASA Astrophysics Data System (ADS)
Vincent, Emmanuel
2012-06-01
Audio source separation aims to extract the signals of individual sound sources from a given recording. In this paper, we review three recent advances which improve the robustness of source separation in real-world challenging scenarios and enable its use for multisource content retrieval tasks, such as automatic speech recognition (ASR) or acoustic event detection (AED) in noisy environments. We present a Flexible Audio Source Separation Toolkit (FASST) and discuss its advantages compared to earlier approaches such as independent component analysis (ICA) and sparse component analysis (SCA). We explain how cues as diverse as harmonicity, spectral envelope, temporal fine structure or spatial location can be jointly exploited by this toolkit. We subsequently present the uncertainty decoding (UD) framework for the integration of audio source separation and audio content retrieval. We show how the uncertainty about the separated source signals can be accurately estimated and propagated to the features. Finally, we explain how this uncertainty can be efficiently exploited by a classifier, both at the training and the decoding stage. We illustrate the resulting performance improvements in terms of speech separation quality and speaker recognition accuracy.
Operation of large RF sources for H-: Lessons learned at ELISE
NASA Astrophysics Data System (ADS)
Fantz, U.; Wünderlich, D.; Heinemann, B.; Kraus, W.; Riedl, R.
2017-08-01
The goal of the ELISE test facility is to demonstrate that large RF-driven negative ion sources (1 × 1 m2 source area with 360 kW installed RF power) can achieve the parameters required for the ITER beam sources in terms of current densities and beam homogeneity at a filling pressure of 0.3 Pa for pulse lengths of up to one hour. From the experience gained in operating the test facility, from beam source inspection and maintenance, and from the source performance achieved so far, conclusions are drawn for the commissioning and operation of the ITER beam sources. Addressed are critical technical RF issues, extrapolations to the required RF power, Cs consumption and Cs ovens, the need to adjust the magnetic filter field strength, and the temporal dynamics and spatial asymmetry of the co-extracted electron current. It is proposed to relax the low-pressure limit to 0.4 Pa and to replace the fixed electron-to-ion ratio with a power density limit for the extraction grid, which would be highly beneficial for controlling the co-extracted electrons.
The relationship between CDOM and salinity in estuaries: An analytical and graphical solution
NASA Astrophysics Data System (ADS)
Bowers, D. G.; Brett, H. L.
2008-09-01
The relationship between coloured dissolved organic matter (CDOM) and salinity in an estuary is explored using a simple box model in which the river discharge and the concentration of CDOM in the river are allowed to vary with time. The results are presented as analytical and graphical solutions. The behaviour of the estuary depends upon the ratio, β, of the flushing time of the estuary to the timescale of the source variation. For small values of β, the variation in CDOM concentration in the estuary tracks that in the source, producing a linear relationship on a CDOM-salinity plot. As β increases, the estuary struggles to keep up with changes in the source, and a curved CDOM-salinity plot results. For very large values of β, corresponding to estuaries with a long flushing time, the CDOM concentration in the estuary settles down to a mean value which again lies on a straight line on a CDOM-salinity plot (and extrapolates to the time-mean concentration in the source). The results are discussed in terms of the mapping of surface salinity in estuaries through visible-band remote sensing of CDOM.
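The β-dependence described above is easy to reproduce numerically. The sketch below integrates the box-model relaxation equation dC/dt = (C_river(t) − C)/T_f with a sinusoidal river source; it is our illustration of the regime behaviour, not the authors' analytical solution, and the parameter values are arbitrary.

```python
import math

def estuary_cdom(beta, steps=20000):
    """Forward-Euler box model: dC/dt = (C_river(t) - C)/T_f, with river
    CDOM varying sinusoidally on timescale T_s and beta = T_f / T_s.
    Returns the largest deviation of estuarine CDOM from the source over
    the final source cycle (after spin-up)."""
    T_s = 1.0
    T_f = beta * T_s
    dt = T_s / 2000.0
    C = 1.0
    worst = 0.0
    for i in range(steps):
        t = i * dt
        C_r = 1.0 + 0.5 * math.sin(2 * math.pi * t / T_s)
        C += dt * (C_r - C) / T_f
        if i > steps - 2000:  # evaluate only after spin-up
            worst = max(worst, abs(C - C_r))
    return worst

print(estuary_cdom(0.01) < estuary_cdom(1.0))  # small beta tracks the source
```

For β ≪ 1 the estuary follows the river almost instantly (small deviation, hence a linear CDOM-salinity plot), while for β near 1 the lag and attenuation produce the curvature the paper describes.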
Shi, Guo-Liang; Tian, Ying-Ze; Ma, Tong; Song, Dan-Lin; Zhou, Lai-Dong; Han, Bo; Feng, Yin-Chang; Russell, Armistead G
2017-06-01
Long-term and synchronous monitoring of PM10 and PM2.5 was conducted in Chengdu, China, from 2007 to 2013. The levels, variations, compositions, and size distributions were investigated, and the sources were quantified by two-way and three-way receptor models (PMF2, ME2-2way, and ME2-3way). Consistent results were found: primary source categories contributed 63.4% (PMF2), 64.8% (ME2-2way), and 66.8% (ME2-3way) to PM10, and 60.9% (PMF2), 65.5% (ME2-2way), and 61.0% (ME2-3way) to PM2.5. Secondary sources contributed 31.8% (PMF2), 32.9% (ME2-2way), and 31.7% (ME2-3way) to PM10, and 35.0% (PMF2), 33.8% (ME2-2way), and 36.0% (ME2-3way) to PM2.5. The size distribution of source categories was estimated better by the ME2-3way method: the three-way model can simultaneously consider chemical species, temporal variability, and PM size, whereas a two-way model computes datasets of different sizes independently. A method called source directional apportionment (SDA) was employed to quantify the contribution from each direction for each source category. Crustal dust from the east-north-east (ENE) contributed the most to both PM10 (12.7%) and PM2.5 (9.7%) in Chengdu, followed by crustal dust from the south-east (SE) for PM10 (9.8%) and secondary nitrate and secondary organic carbon from the ENE for PM2.5 (9.6%). Source contributions from different directions are associated with the meteorological conditions, source locations, and emission patterns during the sampling period. These findings and methods provide useful tools to better understand PM pollution status and to develop effective pollution control strategies. Copyright © 2016. Published by Elsevier B.V.
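The source directional apportionment (SDA) idea, attributing each source category's contribution to the wind sector from which it arrives, can be sketched as a simple binning step. The record format and numbers below are hypothetical; the paper's actual SDA works on receptor-model output combined with meteorology.

```python
def directional_apportionment(records, n_sectors=8):
    """Sum one source category's contributions into wind-direction sectors.
    records: list of (wind_direction_degrees, contribution) pairs, one per
    sampling interval. Returns the percentage of the category's total
    contribution arriving from each sector (sector 0 starts at north)."""
    width = 360.0 / n_sectors
    sums = [0.0] * n_sectors
    for wd, contrib in records:
        sums[int((wd % 360.0) // width)] += contrib
    total = sum(sums) or 1.0
    return [100.0 * s / total for s in sums]

# hypothetical hourly records: (wind direction, source contribution in µg/m³)
data = [(70.0, 3.0), (65.0, 1.0), (250.0, 2.0), (10.0, 2.0)]
print(directional_apportionment(data))  # → [25.0, 50.0, 0.0, 0.0, 0.0, 25.0, 0.0, 0.0]
```

Aggregating by sector is what lets the study single out, for example, crustal dust arriving from the ENE as the largest directional contributor.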
Fermi Large Area Telescope Second Source Catalog
NASA Astrophysics Data System (ADS)
Nolan, P. L.; Abdo, A. A.; Ackermann, M.; Ajello, M.; Allafort, A.; Antolini, E.; Atwood, W. B.; Axelsson, M.; Baldini, L.; Ballet, J.; Barbiellini, G.; Bastieri, D.; Bechtol, K.; Belfiore, A.; Bellazzini, R.; Berenji, B.; Bignami, G. F.; Blandford, R. D.; Bloom, E. D.; Bonamente, E.; Bonnell, J.; Borgland, A. W.; Bottacini, E.; Bouvier, A.; Brandt, T. J.; Bregeon, J.; Brigida, M.; Bruel, P.; Buehler, R.; Burnett, T. H.; Buson, S.; Caliandro, G. A.; Cameron, R. A.; Campana, R.; Cañadas, B.; Cannon, A.; Caraveo, P. A.; Casandjian, J. M.; Cavazzuti, E.; Ceccanti, M.; Cecchi, C.; Çelik, Ö.; Charles, E.; Chekhtman, A.; Cheung, C. C.; Chiang, J.; Chipaux, R.; Ciprini, S.; Claus, R.; Cohen-Tanugi, J.; Cominsky, L. R.; Conrad, J.; Corbet, R.; Cutini, S.; D'Ammando, F.; Davis, D. S.; de Angelis, A.; DeCesar, M. E.; DeKlotz, M.; De Luca, A.; den Hartog, P. R.; de Palma, F.; Dermer, C. D.; Digel, S. W.; Silva, E. do Couto e.; Drell, P. S.; Drlica-Wagner, A.; Dubois, R.; Dumora, D.; Enoto, T.; Escande, L.; Fabiani, D.; Falletti, L.; Favuzzi, C.; Fegan, S. J.; Ferrara, E. C.; Focke, W. B.; Fortin, P.; Frailis, M.; Fukazawa, Y.; Funk, S.; Fusco, P.; Gargano, F.; Gasparrini, D.; Gehrels, N.; Germani, S.; Giebels, B.; Giglietto, N.; Giommi, P.; Giordano, F.; Giroletti, M.; Glanzman, T.; Godfrey, G.; Grenier, I. A.; Grondin, M.-H.; Grove, J. E.; Guillemot, L.; Guiriec, S.; Gustafsson, M.; Hadasch, D.; Hanabata, Y.; Harding, A. K.; Hayashida, M.; Hays, E.; Hill, A. B.; Horan, D.; Hou, X.; Hughes, R. E.; Iafrate, G.; Itoh, R.; Jóhannesson, G.; Johnson, R. P.; Johnson, T. E.; Johnson, A. S.; Johnson, T. J.; Kamae, T.; Katagiri, H.; Kataoka, J.; Katsuta, J.; Kawai, N.; Kerr, M.; Knödlseder, J.; Kocevski, D.; Kuss, M.; Lande, J.; Landriu, D.; Latronico, L.; Lemoine-Goumard, M.; Lionetto, A. M.; Llena Garde, M.; Longo, F.; Loparco, F.; Lott, B.; Lovellette, M. N.; Lubrano, P.; Madejski, G. M.; Marelli, M.; Massaro, E.; Mazziotta, M. N.; McConville, W.; McEnery, J. 
E.; Mehault, J.; Michelson, P. F.; Minuti, M.; Mitthumsiri, W.; Mizuno, T.; Moiseev, A. A.; Mongelli, M.; Monte, C.; Monzani, M. E.; Morselli, A.; Moskalenko, I. V.; Murgia, S.; Nakamori, T.; Naumann-Godo, M.; Norris, J. P.; Nuss, E.; Nymark, T.; Ohno, M.; Ohsugi, T.; Okumura, A.; Omodei, N.; Orlando, E.; Ormes, J. F.; Ozaki, M.; Paneque, D.; Panetta, J. H.; Parent, D.; Perkins, J. S.; Pesce-Rollins, M.; Pierbattista, M.; Pinchera, M.; Piron, F.; Pivato, G.; Porter, T. A.; Racusin, J. L.; Rainò, S.; Rando, R.; Razzano, M.; Razzaque, S.; Reimer, A.; Reimer, O.; Reposeur, T.; Ritz, S.; Rochester, L. S.; Romani, R. W.; Roth, M.; Rousseau, R.; Ryde, F.; Sadrozinski, H. F.-W.; Salvetti, D.; Sanchez, D. A.; Saz Parkinson, P. M.; Sbarra, C.; Scargle, J. D.; Schalk, T. L.; Sgrò, C.; Shaw, M. S.; Shrader, C.; Siskind, E. J.; Smith, D. A.; Spandre, G.; Spinelli, P.; Stephens, T. E.; Strickman, M. S.; Suson, D. J.; Tajima, H.; Takahashi, H.; Takahashi, T.; Tanaka, T.; Thayer, J. G.; Thayer, J. B.; Thompson, D. J.; Tibaldo, L.; Tibolla, O.; Tinebra, F.; Tinivella, M.; Torres, D. F.; Tosti, G.; Troja, E.; Uchiyama, Y.; Vandenbroucke, J.; Van Etten, A.; Van Klaveren, B.; Vasileiou, V.; Vianello, G.; Vitale, V.; Waite, A. P.; Wallace, E.; Wang, P.; Werner, M.; Winer, B. L.; Wood, D. L.; Wood, K. S.; Wood, M.; Yang, Z.; Zimmer, S.
2012-04-01
We present the second catalog of high-energy γ-ray sources detected by the Large Area Telescope (LAT), the primary science instrument on the Fermi Gamma-ray Space Telescope (Fermi), derived from data taken during the first 24 months of the science phase of the mission, which began on 2008 August 4. Source detection is based on the average flux over the 24 month period. The second Fermi-LAT catalog (2FGL) includes source location regions, defined in terms of elliptical fits to the 95% confidence regions and spectral fits in terms of power-law, exponentially cutoff power-law, or log-normal forms. Also included are flux measurements in five energy bands and light curves on monthly intervals for each source. Twelve sources in the catalog are modeled as spatially extended. We provide a detailed comparison of the results from this catalog with those from the first Fermi-LAT catalog (1FGL). Although the diffuse Galactic and isotropic models used in the 2FGL analysis are improved compared to the 1FGL catalog, we attach caution flags to 162 of the sources to indicate possible confusion with residual imperfections in the diffuse model. The 2FGL catalog contains 1873 sources detected and characterized in the 100 MeV to 100 GeV range of which we consider 127 as being firmly identified and 1171 as being reliably associated with counterparts of known or likely γ-ray-producing source classes. We dedicate this paper to the memory of our colleague Patrick Nolan, who died on 2011 November 6. His career spanned much of the history of high-energy astronomy from space and his work on the Large Area Telescope (LAT) began nearly 20 years ago when it was just a concept. Pat was a central member in the operation of the LAT collaboration and he is greatly missed.
Cesca, S.; Battaglia, J.; Dahm, T.; Tessmer, E.; Heimann, S.; Okubo, P.
2008-01-01
The main goal of this study is to improve the modelling of the source mechanism associated with the generation of long-period (LP) signals in volcanic areas. Our intent is to evaluate the effects that detailed structural features of volcanic models play in the generation of the LP signal and the consequent retrieval of LP source characteristics. In particular, effects associated with the presence of topography and crustal heterogeneities are studied in detail. We focus our study on an LP event observed at Kilauea volcano, Hawaii, in May 2001. A detailed analysis of this event and its source modelling is accompanied by a set of synthetic tests, which aim to evaluate the effects of topography and of the presence of low-velocity shallow layers in the source region. The forward problem of Green's function generation is solved numerically following a pseudo-spectral approach, assuming different 3-D models. The inversion is done in the frequency domain, and the resulting source mechanism is represented by the sum of two time-dependent terms: a full moment tensor and a single force. Synthetic tests show how characteristic velocity structures, associated with shallow sources, may be partially responsible for the generation of the observed long-lasting ringing waveforms. When applying the inversion technique to the Kilauea LP data set, inversions carried out for different crustal models led to very similar source geometries, indicating a subhorizontal crack. On the other hand, the source time function and its duration differ significantly between models. These results support the indication of a strong influence of crustal layering on the generation of the LP signal, while the assumption of a homogeneous velocity model may lead to misleading results. © 2008 The Authors; journal compilation © 2008 RAS.
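The frequency-domain inversion described above reduces, at each frequency, to a linear least-squares problem d(ω) = G(ω) m(ω), where m collects the six moment-tensor components and three single-force components. A minimal numpy sketch with synthetic complex Green's functions (not the pseudo-spectral Green's functions of the paper) is:

```python
import numpy as np

def invert_source(G, d):
    """Frequency-domain linear inversion d(omega) = G(omega) m(omega):
    solve for the source coefficients (6 moment-tensor + 3 single-force
    terms) independently at each frequency by least squares."""
    return [np.linalg.lstsq(G[k], d[k], rcond=None)[0] for k in range(len(G))]

# synthetic test: 4 frequencies, 12 receiver components, 9 source parameters
rng = np.random.default_rng(1)
n_freq, n_rec, n_par = 4, 12, 9
G = rng.normal(size=(n_freq, n_rec, n_par)) + 1j * rng.normal(size=(n_freq, n_rec, n_par))
m_true = rng.normal(size=(n_freq, n_par)) + 1j * rng.normal(size=(n_freq, n_par))
d = np.einsum('krp,kp->kr', G, m_true)  # noise-free synthetic data
m_est = invert_source(G, d)
print(np.allclose(m_est, m_true))  # noise-free data are recovered exactly
```

Because the inversion is done per frequency, an inverse transform of the recovered coefficients yields time-dependent moment-tensor and force histories, which is the representation the study uses; the sensitivity of those time functions to the assumed velocity model is exactly the effect the synthetic tests probe.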
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, M.A.
1991-12-31
In conducting a performance assessment for a low-level waste (LLW) disposal facility, one of the important considerations for determining the source term, which is defined as the amount of radioactivity being released from the facility, is the quantity of radioactive material present. This quantity, which will be referred to as the source inventory, is generally estimated through a review of historical records and waste tracking systems at the LLW facility. In theory, estimating the total source inventory for Department of Energy (DOE) LLW disposal facilities should be possible by reviewing the national database maintained for LLW operations, the Solid Waste Information Management System (SWIMS), or the annual report that summarizes the SWIMS data, the Integrated Data Base (IDB) report. However, in practice, there are some difficulties in making this estimate. This is not unexpected, since SWIMS and the IDB were not developed with the goal of developing a performance assessment source term in mind. The practical shortcomings of using the existing data to develop a source term for DOE facilities are discussed in this paper.
Theoretical considerations for mapping activation in human cardiac fibrillation
NASA Astrophysics Data System (ADS)
Rappel, Wouter-Jan; Narayan, Sanjiv M.
2013-06-01
Defining mechanisms for cardiac fibrillation is challenging because, in contrast to other arrhythmias, fibrillation exhibits complex non-repeatability in spatiotemporal activation but paradoxically exhibits conserved spatial gradients in rate, dominant frequency, and electrical propagation. Unlike animal models, in which fibrillation can be mapped at high spatial and temporal resolution using optical dyes or arrays of contact electrodes, mapping of cardiac fibrillation in patients is constrained practically to lower resolutions or smaller fields-of-view. In many animal models, atrial fibrillation is maintained by localized electrical rotors and focal sources. However, until recently, few studies had revealed localized sources in human fibrillation, so that the impact of mapping constraints on the ability to identify rotors or focal sources in humans was not described. Here, we determine the minimum spatial and temporal resolutions theoretically required to detect rigidly rotating spiral waves and focal sources, then extend these requirements for spiral waves in computer simulations. Finally, we apply our results to clinical data acquired during human atrial fibrillation using a novel technique termed focal impulse and rotor mapping (FIRM). Our results provide theoretical justification and clinical demonstration that FIRM meets the spatio-temporal resolution requirements to reliably identify rotors and focal sources for human atrial fibrillation.
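As a rough illustration of the resolution requirements discussed above, a Nyquist-style bound can be sketched for a rigidly rotating spiral wave: the grid spacing must resolve half the spiral wavelength (conduction velocity times rotation period) and the sampling interval half the period. The function and the conduction-velocity and cycle-length numbers below are illustrative assumptions, not values derived in the paper.

```python
# Nyquist-style resolution bound for detecting a rigidly rotating spiral wave.
# Inputs are illustrative assumptions, not values from the paper.
def min_resolution(conduction_velocity_mm_per_ms, period_ms):
    """Return (max grid spacing in mm, max sampling interval in ms)."""
    wavelength_mm = conduction_velocity_mm_per_ms * period_ms
    max_dx = wavelength_mm / 2.0  # >= 2 spatial samples per wavelength
    max_dt = period_ms / 2.0      # >= 2 temporal samples per rotation
    return max_dx, max_dt

# e.g. assumed AF-like numbers: 0.5 mm/ms conduction, 180 ms cycle length
dx, dt = min_resolution(0.5, 180.0)
```

With these assumed numbers the bound requires a grid no coarser than 45 mm and a sampling interval no longer than 90 ms; finer constraints follow once meander and noise are included, as the paper's simulations explore.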
NASA Astrophysics Data System (ADS)
Kwon, Hyeokjun; Kang, Yoojin; Jang, Junwoo
2017-09-01
Color fidelity has been used as one of the indices to evaluate the performance of light sources. Since the Color Rendering Index (CRI) was proposed by the CIE, many color fidelity metrics have been proposed to increase the accuracy of the metric. This paper focuses on comparing color fidelity metrics in terms of their agreement with human visual assessments. To visually evaluate the color fidelity of light sources, we made a simulator that reproduces color samples under given lighting conditions. In this paper, eighteen color samples of the Macbeth color checker under the test light sources and the reference illuminant for each of them are simulated and displayed on a well-characterized monitor. With only the spectra of the test light source and reference illuminant, color samples under any lighting condition can be reproduced. The spectra of two LED and two OLED light sources that have similar CRI values are used for the visual assessment. In addition, the results of the visual assessment are compared with two color fidelity metrics: CRI and IES TM-30-15 (Rf), proposed by the Illuminating Engineering Society (IES) in 2015. Experimental results indicate that Rf outperforms CRI in terms of correlation with the visual assessment.
Siponen, Taina; Yli-Tuomi, Tarja; Aurela, Minna; Dufva, Hilkka; Hillamo, Risto; Hirvonen, Maija-Riitta; Huttunen, Kati; Pekkanen, Juha; Pennanen, Arto; Salonen, Iiris; Tiittanen, Pekka; Salonen, Raimo O; Lanki, Timo
2015-01-01
Objective To compare short-term effects of fine particles (PM2.5; aerodynamic diameter <2.5 µm) from different sources on the blood levels of markers of systemic inflammation. Methods We followed a panel of 52 ischaemic heart disease patients from 15 November 2005 to 21 April 2006 with clinic visits in every second week in the city of Kotka, Finland, and determined nine inflammatory markers from blood samples. In addition, we monitored outdoor air pollution at a fixed site during the study period and conducted a source apportionment of PM2.5 using the Environmental Protection Agency's model EPA PMF 3.0. We then analysed associations between levels of source-specific PM2.5 and markers of systemic inflammation using linear mixed models. Results We identified five source categories: regional and long-range transport (LRT), traffic, biomass combustion, sea salt, and pulp industry. We found most evidence for the relation of air pollution and inflammation in LRT, traffic and biomass combustion; the most relevant inflammation markers were C-reactive protein, interleukin-12 and myeloperoxidase. Sea salt was not positively associated with any of the inflammatory markers. Conclusions Results suggest that PM2.5 from several sources, such as biomass combustion and traffic, are promoters of systemic inflammation, a risk factor for cardiovascular diseases. PMID:25479755
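The source apportionment step above rests on a non-negative factorization of the sample-by-species concentration matrix into source contributions and source profiles. The sketch below uses plain NMF multiplicative updates on synthetic data to convey the idea; EPA PMF 3.0 additionally weights each matrix entry by its measurement uncertainty, which is omitted here.

```python
import numpy as np

# PMF-style idea: X (samples x species) ~= G (contributions) @ F (profiles),
# all non-negative. Plain Lee-Seung NMF updates on synthetic data; this is
# NOT EPA PMF's uncertainty-weighted algorithm.
rng = np.random.default_rng(3)
n_samples, n_species, n_sources = 100, 12, 3
G_true = rng.random((n_samples, n_sources))
F_true = rng.random((n_sources, n_species))
X = G_true @ F_true                      # synthetic "measured" concentrations

G = rng.random((n_samples, n_sources)) + 0.1
F = rng.random((n_sources, n_species)) + 0.1
for _ in range(500):
    G *= (X @ F.T) / (G @ F @ F.T + 1e-12)   # multiplicative updates keep
    F *= (G.T @ X) / (G.T @ G @ F + 1e-12)   # G, F non-negative throughout
```

Rows of F would then be inspected (e.g. sea-salt tracers Na, Cl) to label each factor as a physical source, which is where domain knowledge enters.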
Classic flea-borne transmission does not drive plague epizootics in prairie dogs.
Webb, Colleen T; Brooks, Christopher P; Gage, Kenneth L; Antolin, Michael F
2006-04-18
We lack a clear understanding of the enzootic maintenance of the bacterium (Yersinia pestis) that causes plague and the sporadic epizootics that occur in its natural rodent hosts. A key to elucidating these epidemiological dynamics is determining the dominant transmission routes of plague. Plague can be acquired from the bites of infectious fleas (which is generally considered to occur via a blocked flea vector), inhalation of infectious respiratory droplets, or contact with a short-term infectious reservoir. We present results from a plague modeling approach that includes transmission from all three sources of infection simultaneously and uses sensitivity analysis to determine their relative importance. Our model is completely parameterized by using data from the literature and our own field studies of plague in the black-tailed prairie dog (Cynomys ludovicianus). Results of the model are qualitatively and quantitatively consistent with independent data from our field sites. Although infectious fleas might be an important source of infection and transmission via blocked fleas is a dominant paradigm in the literature, our model clearly predicts that this form of transmission cannot drive epizootics in prairie dogs. Rather, a short-term reservoir is required for epizootic dynamics. Several short-term reservoirs have the potential to affect the prairie dog system. Our model predictions of the residence time of the short-term reservoir suggest that other small mammals, infectious prairie dog carcasses, fleas that transmit plague without blockage of the digestive tract, or some combination of these three are the most likely of the candidate infectious reservoirs.
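A minimal compartment-model sketch of the setup above, with the infection routes acting simultaneously: direct transmission from infectious animals (flea bites plus respiratory droplets) and transmission from a decaying short-term reservoir fed by removals. All rates and initial values are invented for illustration and are not the parameterization fitted in the study.

```python
# SIR-style sketch with simultaneous infection sources: flea-borne/airborne
# transmission from infecteds plus a short-term infectious reservoir (e.g.
# carcasses). All rates below are illustrative assumptions.
def simulate(beta_flea=0.02, beta_air=0.01, beta_res=0.3,
             gamma=0.25, decay=0.1, S0=99.0, I0=1.0,
             dt=0.01, steps=10000):
    S, I, Res = S0, I0, 0.0
    N = S0 + I0
    for _ in range(steps):                       # simple Euler integration
        force = (beta_flea + beta_air) * I / N + beta_res * Res / N
        new_inf = force * S                      # new infections per unit time
        S += dt * (-new_inf)
        I += dt * (new_inf - gamma * I)          # removal at rate gamma
        Res += dt * (gamma * I - decay * Res)    # removals feed the decaying reservoir
    return S, I, Res

S, I, Res = simulate()
```

In this toy setting the epizootic is driven almost entirely by the reservoir term (its effective reproduction number dominates the direct routes), mirroring the paper's sensitivity-analysis conclusion.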
Inverse modelling of radionuclide release rates using gamma dose rate observations
NASA Astrophysics Data System (ADS)
Hamburger, Thomas; Evangeliou, Nikolaos; Stohl, Andreas; von Haustein, Christoph; Thummerer, Severin; Wallner, Christian
2015-04-01
Severe accidents in nuclear power plants, such as the historical accident in Chernobyl in 1986 or the more recent disaster in the Fukushima Dai-ichi nuclear power plant in 2011, have drastic impacts on the population and environment. Observations and dispersion modelling of the released radionuclides help to assess the regional impact of such nuclear accidents. Modelling the increase of regional radionuclide activity concentrations resulting from nuclear accidents is subject to a multiplicity of uncertainties. One of the most significant is the estimation of the source term, that is, the time-dependent quantification of the spectrum of radionuclides released during the course of the accident. The quantification of the source term may either remain uncertain (e.g. Chernobyl, Devell et al., 1995) or rely on estimates given by the operators of the nuclear power plant. Precise measurements are mostly missing due to practical limitations during the accident. The release rates of radionuclides at the accident site can be estimated using inverse modelling (Davoine and Bocquet, 2007). The accuracy of the method depends, amongst others, on the availability, reliability and the resolution in time and space of the observations used. Radionuclide activity concentrations are observed on a relatively sparse grid, and the temporal resolution of the available data may be low, on the order of hours or a day. Gamma dose rates, on the other hand, are observed routinely on a much denser grid and at higher temporal resolution, and therefore provide a broader basis for inverse modelling (Saunier et al., 2013). We present a new inversion approach, which combines an atmospheric dispersion model and observations of radionuclide activity concentrations and gamma dose rates to obtain the source term of radionuclides. We use the Lagrangian particle dispersion model FLEXPART (Stohl et al., 1998; Stohl et al., 2005) to model the atmospheric transport of the released radionuclides.
The inversion method uses a Bayesian formulation considering uncertainties for the a priori source term and the observations (Eckhardt et al., 2008, Stohl et al., 2012). The a priori information on the source term is a first guess. The gamma dose rate observations are used to improve the first guess and to retrieve a reliable source term. The details of this method will be presented at the conference. This work is funded by the Bundesamt für Strahlenschutz BfS, Forschungsvorhaben 3612S60026. References Davoine, X. and Bocquet, M., Atmos. Chem. Phys., 7, 1549-1564, 2007. Devell, L., et al., OCDE/GD(96)12, 1995. Eckhardt, S., et al., Atmos. Chem. Phys., 8, 3881-3897, 2008. Saunier, O., et al., Atmos. Chem. Phys., 13, 11403-11421, 2013. Stohl, A., et al., Atmos. Environ., 32, 4245-4264, 1998. Stohl, A., et al., Atmos. Chem. Phys., 5, 2461-2474, 2005. Stohl, A., et al., Atmos. Chem. Phys., 12, 2313-2343, 2012.
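The Bayesian formulation described above can be sketched as a regularized least-squares problem: the retrieved source term minimizes a misfit to the observations plus a misfit to the a priori first guess, each weighted by its uncertainty. Everything below (sensitivity matrix, uncertainties, true source) is synthetic; a real application would use FLEXPART-derived source-receptor sensitivities and gamma dose rate data.

```python
import numpy as np

# Hedged sketch of a Bayesian least-squares source-term retrieval in the
# spirit of Eckhardt et al. (2008): y = M x with a first-guess x_a.
rng = np.random.default_rng(0)
n_obs, n_src = 40, 6
M = rng.random((n_obs, n_src))              # synthetic source-receptor sensitivities
x_true = np.array([1.0, 4.0, 2.0, 0.0, 3.0, 1.5])
y = M @ x_true + 0.01 * rng.standard_normal(n_obs)  # synthetic "observations"

x_a = np.full(n_src, 2.0)                   # a priori first guess
sig_obs, sig_apr = 0.01, 5.0                # observation / prior std deviations

# Minimize ||(M x - y)/sig_obs||^2 + ||(x - x_a)/sig_apr||^2 via the
# normal equations; the prior term keeps the problem well-posed.
A = M.T @ M / sig_obs**2 + np.eye(n_src) / sig_apr**2
b = M.T @ y / sig_obs**2 + x_a / sig_apr**2
x_hat = np.linalg.solve(A, b)
```

With informative observations the retrieval is pulled toward the data; where observations constrain the release poorly, the solution relaxes toward the first guess, which is exactly the intended role of the a priori term.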
ERIC Educational Resources Information Center
Ohio State Univ., Columbus. National Center for Research in Vocational Education.
This Program for Acquiring Competence in Entrepreneurship (PACE) resource guide contains an "Annotated Glossary of Business Terms" and listings of sources of information. The glossary includes approximately 100 terms, of which the instructor should have a working knowledge. It may also be used as a handout for students. Sources of…
Singular Behaviour of the Electrodynamic Fields of an Oscillating Dipole
ERIC Educational Resources Information Center
Leung, P. T.
2008-01-01
The singularity of the exact electromagnetic fields is derived to include the "source terms" for harmonically oscillating electric (and magnetic) dipoles, so that the fields will be consistent with the full Maxwell equations with a source. It is shown explicitly, as somewhat expected, that the same [delta]-function terms for the case of static…
Solution of Grad-Shafranov equation by the method of fundamental solutions
NASA Astrophysics Data System (ADS)
Nath, D.; Kalra, M. S.
2014-06-01
In this paper we have used the Method of Fundamental Solutions (MFS) to solve the Grad-Shafranov (GS) equation for the axisymmetric equilibria of tokamak plasmas with monomial sources. These monomials are the individual terms appearing on the right-hand side of the GS equation if one expands the nonlinear terms into polynomials. Unlike the Boundary Element Method (BEM), the MFS does not involve any singular integrals and is a meshless boundary-alone method. Its basic idea is to create a fictitious boundary around the actual physical boundary of the computational domain. This automatically removes the involvement of singular integrals. The results obtained by the MFS match well with the earlier results obtained using the BEM. The method is also applied to Solov'ev profiles and it is found that the results are in good agreement with analytical results.
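The basic MFS idea described above, sources of the fundamental solution placed on a fictitious boundary outside the physical one so no singular integrals arise, can be sketched on a simpler model problem. The snippet solves the 2-D Laplace equation on the unit disk (not the Grad-Shafranov equation) with boundary data u = x, whose exact harmonic extension is u(x, y) = x; all discretization choices are assumptions for the demo.

```python
import numpy as np

# Minimal MFS sketch: 2-D Laplace equation on the unit disk with u = x on
# the boundary. Sources sit on a fictitious circle of radius 2, so the
# fundamental solution G(r) = -ln(r)/(2*pi) is never evaluated at r = 0.
n_col, n_src = 64, 64
t = np.linspace(0.0, 2.0 * np.pi, n_col, endpoint=False)
bx, by = np.cos(t), np.sin(t)                 # collocation points (physical boundary)
s = np.linspace(0.0, 2.0 * np.pi, n_src, endpoint=False)
sx, sy = 2.0 * np.cos(s), 2.0 * np.sin(s)     # sources on the fictitious boundary

r = np.hypot(bx[:, None] - sx[None, :], by[:, None] - sy[None, :])
A = -np.log(r) / (2.0 * np.pi)
coef, *_ = np.linalg.lstsq(A, bx, rcond=None)  # match u = x at collocation points

# Evaluate the MFS solution at an interior point and compare with exact u = x
px, py = 0.3, 0.4
rp = np.hypot(px - sx, py - sy)
u = (-np.log(rp) / (2.0 * np.pi)) @ coef
```

The least-squares solve absorbs the notorious ill-conditioning of MFS matrices; for the GS equation the same collocation idea applies once a particular solution of the monomial source term is added.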
NASA Technical Reports Server (NTRS)
Kelecy, Tom; Payne, Tim; Thurston, Robin; Stansbery, Gene
2007-01-01
A population of deep space objects is thought to be high area-to-mass ratio (AMR) debris originating from sources in the geosynchronous orbit (GEO) belt. The observed AMR values range anywhere from 1's to 10's of m²/kg, and hence higher-than-average solar radiation pressure effects result in long-term migration of eccentricity (0.1-0.6) and inclination over time. However, the orientation-dependent dynamics of the debris also result in time-varying solar radiation forces about the average, which complicate the short-term orbit determination processing. Orbit determination results are presented for several of these debris objects, highlighting their unique and varied dynamic attributes. Estimation of the solar pressure dynamics over time scales suitable for resolving the shorter-term dynamics improves the orbit estimation, and hence the orbit predictions needed to conduct follow-up observations.
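The link between AMR and solar radiation pressure noted above is linear, as a back-of-envelope sketch shows; the radiation-pressure constant below is the standard 1 AU value and the reflectivity coefficient C_r is an assumed number, not one estimated in the study.

```python
# Solar radiation pressure (SRP) acceleration scales linearly with the
# area-to-mass ratio; constants are standard values, C_r is assumed.
P_SUN = 4.56e-6  # N/m^2, solar radiation pressure at 1 AU

def srp_acceleration(amr_m2_per_kg, c_r=1.3):
    """Characteristic SRP acceleration in m/s^2 for a given AMR."""
    return P_SUN * c_r * amr_m2_per_kg

a_low, a_high = srp_acceleration(1.0), srp_acceleration(20.0)
```

An AMR of tens of m²/kg thus yields accelerations orders of magnitude above those of intact GEO spacecraft (AMR ~ 0.01-0.1 m²/kg), which is what drives the large eccentricity and inclination migration described above.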
NASA Astrophysics Data System (ADS)
Civerolo, Kevin; Hogrefe, Christian; Zalewsky, Eric; Hao, Winston; Sistla, Gopal; Lynn, Barry; Rosenzweig, Cynthia; Kinney, Patrick L.
2010-10-01
This paper compares spatial and seasonal variations and temporal trends in modeled and measured concentrations of sulfur and nitrogen compounds in wet and dry deposition over an 18-year period (1988-2005) over a portion of the northeastern United States. Substantial emissions reduction programs occurred over this time period, including Title IV of the Clean Air Act Amendments of 1990 which primarily resulted in large decreases in sulfur dioxide (SO 2) emissions by 1995, and nitrogen oxide (NO x) trading programs which resulted in large decreases in warm season NO x emissions by 2004. Additionally, NO x emissions from mobile sources declined more gradually over this period. The results presented here illustrate the use of both operational and dynamic model evaluation and suggest that the modeling system largely captures the seasonal and long-term changes in sulfur compounds. The modeling system generally captures the long-term trends in nitrogen compounds, but does not reproduce the average seasonal variation or spatial patterns in nitrate.
Park, Jin Yong; Lee, Byoung-Seob; Choi, Seyong; Kim, Seong Jun; Ok, Jung-Woo; Yoon, Jang-Hee; Kim, Hyun Gyu; Shin, Chang Seouk; Hong, Jonggi; Bahng, Jungbae; Won, Mi-Sook
2016-02-01
The 28 GHz superconducting electron cyclotron resonance (ECR) ion source has been developed to produce a high current heavy ion for the linear accelerator at KBSI (Korea Basic Science Institute). The objective of this study is to generate fast neutrons with a proton target via a p(Li,n)Be reaction. The design and fabrication of the essential components of the ECR ion source, which include a superconducting magnet with a liquid helium re-condensed cryostat and a 10 kW high-power microwave, were completed. The waveguide components were connected with a plasma chamber including a gas supply system. The plasma chamber was inserted into the warm bore of the superconducting magnet. A high voltage system was also installed for the ion beam extraction. After the installation of the ECR ion source, we reported the results for ECR plasma ignition at ECRIS 2014 in Russia. Following plasma ignition, we successfully extracted multi-charged ions and obtained the first results in terms of ion beam spectra from various species. This was verified by a beam diagnostic system for a low energy beam transport system. In this article, we present the first results and report on the current status of the KBSI accelerator project.
First results of 28 GHz superconducting electron cyclotron resonance ion source for KBSI accelerator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, Jin Yong; Lee, Byoung-Seob; Choi, Seyong
Analog performance of vertical nanowire TFETs as a function of temperature and transport mechanism
NASA Astrophysics Data System (ADS)
Martino, Marcio Dalla Valle; Neves, Felipe; Ghedini Der Agopian, Paula; Martino, João Antonio; Vandooren, Anne; Rooyackers, Rita; Simoen, Eddy; Thean, Aaron; Claeys, Cor
2015-10-01
The goal of this work is to study the analog performance of tunnel field effect transistors (TFETs) and its susceptibility to temperature variation and to different dominant transport mechanisms. The experimental input characteristics of nanowire TFETs with different source compositions (100% Si and Si1-xGex) are presented, leading to the extraction of the activation energy for each bias condition. These first results are connected to the prevailing transport mechanism for each configuration, namely band-to-band tunneling (BTBT) or trap-assisted tunneling (TAT). Afterward, this work analyzes the analog behavior, with the intrinsic voltage gain calculated in terms of Early voltage, transistor efficiency, transconductance and output conductance. Comparing the results for devices with different source compositions, it is interesting to note how the analog trends vary depending on the source characteristics and the prevailing transport mechanisms, so that the suitability assessment changes with the working temperature. In other words, devices with a full-silicon source and a non-abrupt junction profile present the worst intrinsic voltage gain at room temperature but the best results at high temperatures, since, among the four studied devices, this configuration was the only one whose intrinsic voltage gain increases with temperature.
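The activation-energy extraction mentioned above is typically an Arrhenius fit: plotting ln(I) against 1/(k_B·T), the slope gives -E_a. The sketch below recovers an assumed activation energy from synthetic, exactly Arrhenius current data; real TFET measurements would of course be noisier and bias-dependent.

```python
import numpy as np

# Arrhenius-style activation energy extraction on synthetic data: the
# current values are generated from an assumed E_a, not measured TFET data.
K_B = 8.617e-5                     # Boltzmann constant, eV/K
E_A_TRUE = 0.35                    # assumed activation energy, eV
T = np.array([300.0, 325.0, 350.0, 375.0, 400.0])   # temperatures, K
I = 1e-6 * np.exp(-E_A_TRUE / (K_B * T))            # Arrhenius-like current

# Linear fit of ln(I) vs 1/(k_B T); slope = -E_a
slope, _ = np.polyfit(1.0 / (K_B * T), np.log(I), 1)
e_a_fit = -slope                   # recovered activation energy, eV
```

In the paper's interpretation, a bias region with low extracted E_a points to BTBT-dominated conduction, while a larger E_a signals thermally activated TAT.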
Emulsion chamber observations of primary cosmic-ray electrons in the energy range 30-1000 GeV
NASA Technical Reports Server (NTRS)
Nishimura, J.; Fujii, M.; Taira, T.; Aizu, E.; Hiraiwa, H.; Kobayashi, T.; Niu, K.; Ohta, I.; Golden, R. L.; Koss, T. A.
1980-01-01
The results of a series of emulsion exposures, beginning in Japan in 1968 and continued in the U.S. since 1975, which have yielded a total balloon-altitude exposure of 98,700 sq m sr s, are presented. The data are discussed in terms of several models of cosmic-ray propagation. Interpreted in terms of the energy-dependent leaky-box model, the spectrum results suggest a galactic electron residence time of 1.0(+2.0, -0.5) x 10 to the 7th yr, which is consistent with results from Be-10 observations. Finally, the possibility that departures from smooth power law behavior in the spectrum due to individual nearby sources will be observable in the energy range above 1 TeV is discussed.
Survey on the Performance of Source Localization Algorithms.
Fresno, José Manuel; Robles, Guillermo; Martínez-Tarifa, Juan Manuel; Stewart, Brian G
2017-11-18
The localization of emitters using an array of sensors or antennas is a prevalent issue approached in several applications. There exist different techniques for source localization, which can be classified into multilateration, received signal strength (RSS) and proximity methods. The performance of multilateration techniques relies on measured time variables: the time of flight (ToF) of the emission from the emitter to the sensor, the time differences of arrival (TDoA) of the emission between sensors and the pseudo-time of flight (pToF) of the emission to the sensors. The multilateration algorithms presented and compared in this paper can be classified as iterative and non-iterative methods. Both standard least squares (SLS) and hyperbolic least squares (HLS) are iterative and based on the Newton-Raphson technique to solve the non-linear equation system. The metaheuristic technique particle swarm optimization (PSO) used for source localisation is also studied. This optimization technique estimates the source position as the optimum of an objective function based on HLS and is also iterative in nature. Three non-iterative algorithms, namely the hyperbolic positioning algorithms (HPA), the maximum likelihood estimator (MLE) and Bancroft algorithm, are also presented. A non-iterative combined algorithm, MLE-HLS, based on MLE and HLS, is further proposed in this paper. The performance of all algorithms is analysed and compared in terms of accuracy in the localization of the position of the emitter and in terms of computational time. The analysis is also undertaken with three different sensor layouts since the positions of the sensors affect the localization; several source positions are also evaluated to make the comparison more robust. The analysis is carried out using theoretical time differences, as well as including errors due to the effect of digital sampling of the time variables. 
It is shown that the most balanced algorithm, yielding better results than the other algorithms in terms of accuracy and short computational time, is the combined MLE-HLS algorithm.
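The hyperbolic least squares iteration named above can be sketched as a Gauss-Newton (Newton-Raphson-type) loop on the range-difference residuals. The sensor layout, source position and noise-free TDoAs below are made up for the demonstration and are not the layouts evaluated in the paper.

```python
import numpy as np

# Gauss-Newton iteration on hyperbolic (range-difference) residuals,
# in the spirit of the HLS method. Layout and source are invented.
sensors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
source_true = np.array([3.0, 7.0])
c = 1.0                                  # propagation speed (normalized units)

d = np.linalg.norm(sensors - source_true, axis=1)
tdoa = (d[1:] - d[0]) / c                # noise-free TDoAs relative to sensor 0

x = np.array([4.0, 6.0])                 # initial guess
for _ in range(50):
    dist = np.linalg.norm(sensors - x, axis=1)
    res = (dist[1:] - dist[0]) - c * tdoa
    # Jacobian of the range differences with respect to the source position
    J = (x - sensors[1:]) / dist[1:, None] - (x - sensors[0]) / dist[0]
    x = x - np.linalg.lstsq(J, res, rcond=None)[0]
```

With sampled (quantized) TDoAs the residuals no longer vanish at the true position, which is precisely the error source whose effect the paper quantifies across algorithms and layouts.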
Survey on the Performance of Source Localization Algorithms
2017-01-01
PMID:29156565
Donnelly, Aoife; Naughton, Owen; Misstear, Bruce; Broderick, Brian
2016-10-14
This article describes a new methodology for increasing the spatial representativeness of individual monitoring sites. Air pollution levels at a given point are influenced by emission sources in the immediate vicinity. Since emission sources are rarely uniformly distributed around a site, concentration levels will inevitably be most affected by the sources in the prevailing upwind direction. The methodology provides a means of capturing this effect and providing additional information regarding source/pollution relationships. The methodology allows for the division of the air quality data from a given monitoring site into a number of sectors or wedges based on wind direction and estimation of annual mean values for each sector, thus optimising the information that can be obtained from a single monitoring station. The method corrects for short-term data, diurnal and seasonal variations in concentrations (which can produce uneven weighting of data within each sector) and uneven frequency of wind directions. Significant improvements in correlations between the air quality data and the spatial air quality indicators were obtained after application of the correction factors. This suggests the application of these techniques would be of significant benefit in land-use regression modelling studies. Furthermore, the method was found to be very useful for estimating long-term mean values and wind direction sector values using only short-term monitoring data. The methods presented in this article can result in cost savings through minimising the number of monitoring sites required for air quality studies while also capturing a greater degree of variability in spatial characteristics. In this way, more reliable, but also more expensive monitoring techniques can be used in preference to a higher number of low-cost but less reliable techniques. 
The methods described in this article have applications in local air quality management, source receptor analysis, land-use regression mapping and modelling and population exposure studies.
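The sector-division idea can be sketched as a simple group-by on wind direction; the snippet below uses synthetic data with one hypothetical upwind source sector and omits the article's diurnal/seasonal and wind-frequency correction factors.

```python
import numpy as np

# Sector-wise annual means from a single monitoring site: split the
# concentration record by wind-direction sector and average per sector.
# Data are synthetic; the article's correction factors are omitted.
rng = np.random.default_rng(1)
n = 5000
wind_dir = rng.uniform(0.0, 360.0, n)        # wind direction, degrees
# Hypothetical source in the 90-180 degree sector raises levels there
conc = (10.0 + 5.0 * ((wind_dir >= 90.0) & (wind_dir < 180.0))
        + rng.standard_normal(n))

n_sectors = 8
sector = (wind_dir // (360.0 / n_sectors)).astype(int)
sector_mean = np.array([conc[sector == k].mean() for k in range(n_sectors)])
```

The elevated means in the source-facing sectors (and background-level means elsewhere) are the extra source/pollution information a single site-wide average would blur together.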
NASA Astrophysics Data System (ADS)
Fomina, E. V.; Lesovik, V. S.; Fomin, A. E.; Kozhukhova, N. I.; Lebedev, M. S.
2018-03-01
Argillite is a carbonaceous industrial by-product and a potential raw material for the environmentally friendly, resource-saving construction industry. In this research, the chemical and mineral composition as well as the particle size distribution of argillite were studied and used to develop an autoclaved aerated concrete in which argillite partially substitutes for quartz sand. The effect of argillite as a mineral admixture in autoclaved aerated concrete was investigated in terms of compressive and tensile strength, density, heat conductivity, etc. The results demonstrate the efficiency of argillite as an energy-saving material in autoclave construction composites.
The exact calculation of quadrupole sources for some incompressible flows
NASA Technical Reports Server (NTRS)
Brentner, Kenneth S.
1988-01-01
This paper is concerned with the application of Lighthill's acoustic analogy to the acoustic and aerodynamic problems associated with moving bodies. The Ffowcs Williams-Hawkings (FW-H) equation, which is an interpretation of the acoustic analogy for sound generation by moving bodies, manipulates the source terms into surface and volume sources. Quite often in practice the volume sources, or quadrupoles, are neglected for various reasons. Recently, Farassat, Long and others have attempted to use the FW-H equation, with the quadrupole source neglected, to solve for the surface pressure on the body. The purpose of this paper is to examine the contribution of the quadrupole source to the acoustic pressure and body surface pressure for some problems for which the exact solution is known. The inviscid, incompressible, 2-D flow, calculated using the velocity potential, is used to calculate the individual contributions of the various surface and volume source terms in the FW-H equation. The relative importance of each of the sources is then assessed.
Fernández-Soto, Pedro; Velasco Tirado, Virginia; Carranza Rodríguez, Cristina; Pérez-Arellano, José Luis; Muro, Antonio
2013-01-01
Background Human schistosomiasis remains a serious worldwide public health problem. At present, a sensitive and specific assay for routine diagnosis of schistosome infection is not yet available. The potential for detecting schistosome-derived DNA by PCR-based methods in human clinical samples is currently being investigated as a diagnostic tool with potential application in routine schistosomiasis diagnosis. Collection of diagnostic samples such as stool or blood is usually difficult in some populations. However, urine is a biological sample that can be collected non-invasively, is easy to obtain from people of all ages and easy to handle, but is still not widely used as a sample for PCR diagnosis. This could be due to the high variability in the reported efficiency of detection as a result of the high variation in urine sample storage and handling conditions and in DNA preservation and extraction methods. Methodology/Principal Findings We evaluated different commercial DNA extraction methods on a series of long-term frozen human urine samples from patients with parasitologically confirmed schistosomiasis in order to assess the PCR effectiveness for Schistosoma spp. detection. Patients' urine samples were frozen for 18 months up to 7 years until use. Results were compared with those obtained in PCR assays using fresh healthy human urine artificially contaminated with Schistosoma mansoni DNA and urine samples from mice experimentally infected with S. mansoni cercariae stored frozen for at least 12 months before use. PCR results in fresh artificially contaminated human urine samples using different DNA extraction methods were much more effective than those obtained when long-term frozen human urine samples were used as the source of DNA template.
Conclusions/Significance Long-term frozen human urine samples are probably not a good source for DNA extraction for use as a template in PCR detection of Schistosoma spp., regardless of the DNA method of extraction used. PMID:23613907
Geochemistry of amphibolites from the Kolar Schist Belt
NASA Technical Reports Server (NTRS)
Balakrishnan, S.; Hanson, G. N.; Rajamani, V.
1988-01-01
Nd isotope data suggest that the amphibolites from the schist belt were derived from long-term depleted mantle sources at about 2.7 Ga. Trace element and Pb isotope data further suggest that the amphibolites on the western and eastern sides of the narrow schist belt were derived from different sources. The Pb data from one outcrop of the central tholeiitic amphibolites lie on a 2.7 Ga isochron with a low model μ value. The other amphibolites (W komatiitic, E komatiitic, and E tholeiitic) do not define isochrons, but suggest that they were derived from sources with distinct U/Pb histories. There is some suggestion that the E komatiitic amphibolites may have been contaminated by fluids carrying Pb from a long-term, high-U/Pb source, such as the old granitic crust on the west side of the schist belt. This is consistent with published galena Pb isotope data from the ore lodes within the belt, which also show a history of long-term U/Pb enrichment.
NASA Astrophysics Data System (ADS)
Castelo, A.; Mendioroz, A.; Celorrio, R.; Salazar, A.; López de Uralde, P.; Gorosmendi, I.; Gorostegui-Colinas, E.
2017-05-01
Lock-in vibrothermography is used to characterize vertical kissing and open cracks in metals. In this technique the crack heats up during ultrasound excitation, due mainly to friction between the defect's faces. We have solved the inverse problem of determining the heat source distribution produced at cracks under amplitude-modulated ultrasound excitation. This problem is ill-posed, so minimization of the residual is unstable. We stabilized the algorithm by introducing a penalty term based on the total variation functional. In the inversion, we combine amplitude and phase surface temperature data obtained at several modulation frequencies. Inversions of synthetic data with added noise indicate that compact heat sources are characterized accurately and that the upper contours can be retrieved for shallow heat sources. The overall shape of open and homogeneous semicircular strip-shaped heat sources representing open half-penny cracks can also be retrieved, but the reconstruction of the deeper end of the heat source loses contrast. Angle-, radius- and depth-dependent inhomogeneous heat flux distributions within these semicircular strips can also be qualitatively characterized. Reconstructions of experimental data taken on samples containing calibrated heat sources confirm the predictions from reconstructions of synthetic data. We also present inversions of experimental data obtained from a real welded Inconel 718 specimen. The results are in good qualitative agreement with the results of liquid penetrant testing.
NASA Astrophysics Data System (ADS)
Haworth, Daniel
2013-11-01
The importance of explicitly accounting for the effects of unresolved turbulent fluctuations in Reynolds-averaged and large-eddy simulations of chemically reacting turbulent flows is increasingly recognized. Transported probability density function (PDF) methods have emerged as one of the most promising modeling approaches for this purpose. In particular, PDF methods provide an elegant and effective resolution to the closure problems that arise from averaging or filtering terms that correspond to nonlinear point processes, including chemical reaction source terms and radiative emission. PDF methods traditionally have been associated with studies of turbulence-chemistry interactions in laboratory-scale, atmospheric-pressure, nonluminous, statistically stationary nonpremixed turbulent flames; and Lagrangian particle-based Monte Carlo numerical algorithms have been the predominant method for solving modeled PDF transport equations. Recent advances and trends in PDF methods are reviewed and discussed. These include advances in particle-based algorithms, alternatives to particle-based algorithms (e.g., Eulerian field methods), treatment of combustion regimes beyond low-to-moderate-Damköhler-number nonpremixed systems (e.g., premixed flamelets), extensions to include radiation heat transfer and multiphase systems (e.g., soot and fuel sprays), and the use of PDF methods as the basis for subfilter-scale modeling in large-eddy simulation. Examples are provided that illustrate the utility and effectiveness of PDF methods for physics discovery and for applications to practical combustion systems. These include comparisons of results obtained using the PDF method with those from models that neglect unresolved turbulent fluctuations in composition and temperature in the averaged or filtered chemical source terms and/or the radiation heat transfer source terms. In this way, the effects of turbulence-chemistry-radiation interactions can be isolated and quantified.
What do popular Spanish women's magazines say about caesarean section? A 21-year survey
Torloni, MR; Campos Mansilla, B; Merialdi, M; Betrán, AP
2014-01-01
Objectives Caesarean section (CS) rates are increasing worldwide and maternal request is cited as one of the main reasons for this trend. Women's preferences for route of delivery are influenced by popular media, including magazines. We assessed the information on CS presented in Spanish women's magazines. Design Systematic review. Setting Women's magazines printed from 1989 to 2009 with the largest national distribution. Sample Articles with any information on CS. Methods Articles were selected, read and abstracted in duplicate. Sources of information, scientific accuracy, comprehensiveness and women's testimonials were objectively extracted using a content analysis form designed for this study. Main outcome measures Accuracy, comprehensiveness and sources of information. Results Most (67%) of the 1223 selected articles presented exclusively personal opinions/birth stories; 12% reported the potential benefits of CS, 26% mentioned the short-term and 10% the long-term maternal risks, and 6% highlighted the perinatal risks of CS. The most frequent short-term risks were the increased time for maternal recovery (n = 86), frustration/feelings of failure (n = 83) and increased post-surgical pain (n = 71). The most frequently cited long-term risks were uterine rupture (n = 57) and the need for another CS in any subsequent pregnancy (n = 42). Less than 5% of the selected articles reported that CS could increase the risks of infection (n = 53), haemorrhage (n = 31) or placenta praevia/accreta in future pregnancies (n = 6). The sources of information were not reported by 68% of the articles. Conclusions The portrayal of CS in Spanish women's magazines is not sufficiently comprehensive and does not provide the important information readers need to understand the real benefits and risks of this route of delivery. PMID:24467797
LOOKING FOR GRANULATION AND PERIODICITY IMPRINTS IN THE SUNSPOT TIME SERIES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lopes, Ilídio; Silva, Hugo G., E-mail: ilidio.lopes@tecnico.ulisboa.pt, E-mail: hgsilva@uevora.pt
2015-05-10
The sunspot activity is the end result of the cyclic destruction and regeneration of magnetic fields by the dynamo action. We propose a new method to analyze the daily sunspot areas data recorded since 1874. By computing the power spectral density of daily data series using the Mexican hat wavelet, we found a power spectrum with a well-defined shape, characterized by three features. The first term is the 22 yr solar magnetic cycle, estimated in our work to be 18.43 yr. The second term is related to the daily volatility of sunspots. This term is most likely produced by the turbulent motions linked to the solar granulation. The last term corresponds to a periodic source associated with the solar magnetic activity, for which the maximum power spectral density occurs at 22.67 days. This value is part of the 22–27 day periodicity region that shows an above-average intensity in the power spectra. The origin of this 22.67 day periodic process is not clearly identified, and there is a possibility that it can be produced by convective flows inside the star. The study clearly shows a north–south asymmetry. The 18.43 yr periodic source is correlated between the two hemispheres, but the 22.67 day one is not. Toward the large timescales an excess occurs in the northern hemisphere, especially near the previous two periodic sources. To further investigate the 22.67 day periodicity, we performed a Lomb–Scargle spectral analysis. The study suggests that this periodicity is distinct from others found nearby.
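The Lomb–Scargle periodogram used above is the standard tool for period searches in unevenly sampled series such as daily sunspot records with missing days. The sketch below uses synthetic data (an assumed 22.67-day sinusoid with noise and random gaps, not the actual sunspot-area record) and recovers the period with SciPy's `lombscargle`:

```python
import numpy as np
from scipy.signal import lombscargle

# Hypothetical gappy daily series: a 22.67-day periodic signal plus noise,
# with ~20% of days missing, mimicking an observational record.
rng = np.random.default_rng(0)
t = np.arange(0.0, 2000.0)              # observation times in days
t = t[rng.random(t.size) > 0.2]         # drop random days
period_true = 22.67
y = np.sin(2 * np.pi * t / period_true) + 0.5 * rng.standard_normal(t.size)

# Lomb-Scargle handles the uneven sampling directly (no interpolation);
# it expects angular frequencies.
periods = np.linspace(15.0, 35.0, 2000)
ang_freqs = 2 * np.pi / periods
power = lombscargle(t, y - y.mean(), ang_freqs)

period_peak = periods[np.argmax(power)]
```

Unlike an FFT, no resampling onto a uniform grid is needed, which avoids spurious power from interpolated gaps.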
Theorem: A Static Magnetic N-pole Becomes an Oscillating Electric N-pole in a Cosmic Axion Field
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hill, Christopher T.
We show for the classical Maxwell equations, including the axion electromagnetic anomaly source term, that a cosmic axion field induces an oscillating electric N-moment for any static magnetic N-moment. This is a straightforward result, accessible to anyone who has taken a first-year graduate course in electrodynamics.
The report gives results of an evaluation of the condition and air emissions from old, phase-2-certified wood heaters installed in homes and used regularly for home heating since the 1992/1993 heating season or earlier. (NOTE: Wood stoves have been identified as a major source of ...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-07
... Environmental Assessment (NCEA) within EPA's Office of Research and Development. In November 2006, EPA released... classified as preliminary and not included in the quantitative inventory. The updated inventory lists the top... 2000. The quantitative results are expressed in terms of the toxicity equivalent (TEQ) of the mixture...
George E. Myers
1983-01-01
A number of commercial panel products, primarily particleboard and hardwood plywood, were tested for their formaldehyde emission behavior using desiccator, perforator, and dynamic chamber methods. The results were analyzed in terms of the source of formaldehyde observed in the tests (free vs. hydrolytically produced) and the potential utility of the tests as product...
Globalization, women's migration, and the long-term-care workforce.
Browne, Colette V; Braun, Kathryn L
2008-02-01
With the aging of the world's population comes the rising need for qualified direct long-term-care (DLTC) workers (i.e., those who provide personal care to frail and disabled older adults). Developed nations are increasingly turning to immigrant women to fill these needs. In this article, we examine the impact of three global trends (population aging, globalization, and women's migration) on the supply and demand for DLTC workers in the United States. Following an overview of these trends, we identify three areas with embedded social justice issues that are shaping the DLTC workforce in the United States, with a specific focus on immigrant workers in these settings. These include world poverty and economic inequalities, the feminization and colorization of labor (especially in long-term care), and empowerment and women's rights. We conclude with a discussion of the contradictory effects that both population aging and globalization have on immigrant women, source countries, and the long-term-care workforce in the United States. We raise a number of policy, practice, and research implications and questions. For policy makers and long-term-care administrators in receiver nations such as the United States, meeting DLTC worker needs with immigrants may result in greater access to needed employees but also in the continued devaluation of eldercare as a profession. Source (supply) nations must balance the real and potential economic benefits of remittances from women who migrate for labor against the negative consequences of disrupting family care traditions and draining the long-term-care workforce of those countries.
Fermi Large Area Telescope First Source Catalog
Abdo, A. A.; Ackermann, M.; Ajello, M.; ...
2010-05-25
Here, we present a catalog of high-energy gamma-ray sources detected by the Large Area Telescope (LAT), the primary science instrument on the Fermi Gamma-ray Space Telescope (Fermi), during the first 11 months of the science phase of the mission, which began on 2008 August 4. The First Fermi-LAT catalog (1FGL) contains 1451 sources detected and characterized in the 100 MeV to 100 GeV range. Source detection was based on the average flux over the 11 month period, and the threshold likelihood Test Statistic is 25, corresponding to a significance of just over 4σ. The 1FGL catalog includes source location regions, defined in terms of elliptical fits to the 95% confidence regions, and power-law spectral fits as well as flux measurements in five energy bands for each source. In addition, monthly light curves are provided. Using a protocol defined before launch we have tested for several populations of gamma-ray sources among the sources in the catalog. For individual LAT-detected sources we provide firm identifications or plausible associations with sources in other astronomical catalogs. Identifications are based on correlated variability with counterparts at other wavelengths, or on spin or orbital periodicity. For the catalogs and association criteria that we have selected, 630 of the sources are unassociated. Care was taken to characterize the sensitivity of the results to the model of interstellar diffuse gamma-ray emission used to model the bright foreground, with the result that 161 sources at low Galactic latitudes and toward bright local interstellar clouds are flagged as having properties that are strongly dependent on the model or as potentially being due to incorrectly modeled structure in the Galactic diffuse emission.
What is What in the Nanoworld: A Handbook on Nanoscience and Nanotechnology
NASA Astrophysics Data System (ADS)
Borisenko, Victor E.; Ossicini, Stefano
2004-10-01
This introductory reference handbook summarizes the terms and definitions, most important phenomena, and regularities discovered in the physics, chemistry, technology, and application of nanostructures. These nanostructures are typically inorganic and organic structures at the atomic scale. Fast-progressing nanoelectronics and optoelectronics, molecular electronics and spintronics, nanotechnology and quantum processing of information are of strategic importance for the information society of the 21st century. The condensed information, taken from textbooks, special encyclopedias, and recent original books and papers, provides fast support in understanding "old" and new terms of nanoscience and technology widely used in the scientific literature on recent developments. Such support is especially important when one reads a scientific paper presenting new results in nanoscience. A representative collection of fundamental terms and definitions from quantum physics and quantum chemistry, special mathematics, organic and inorganic chemistry, solid state physics, and materials science and technology is accompanied by recommended secondary sources (books, reviews, websites) for an extended study of a subject. Each entry interprets the term or definition under consideration and briefly presents the main features of the phenomena behind it. Additional information in the form of notes ("First described in: ...", "Recognition: ...", "More details in: ...") supplements the entries and gives a historical retrospective of the subject with reference to further sources. The handbook is ideal for answering the questions of undergraduate and Ph.D. students studying the physics of low-dimensional structures, nanoelectronics, and nanotechnology about unfamiliar terms and definitions. It provides fast support when one wants to know or recall the essence of a scientific term, especially when it contains a personal name in its title, as in "Anderson localization", "Aharonov-Bohm effect", "Bose-Einstein condensate", etc.
More than 1000 entries, from a few sentences to a page in length.
NASA Astrophysics Data System (ADS)
Nan, Tongchao; Li, Kaixuan; Wu, Jichun; Yin, Lihe
2018-04-01
Sustainability has been one of the key criteria of effective water exploitation. Groundwater exploitation and water-table decline at the Haolebaoji water source site in the Ordos basin in NW China has drawn public attention due to concerns about potential threats to ecosystems and grazing land in the area. To better investigate the impact of production wells at Haolebaoji on the water table, an adapted algorithm called the random walk on grid method (WOG) is applied to simulate the hydraulic head in the unconfined and confined aquifers. This is the first attempt to apply WOG to a real groundwater problem. The method evaluates not only the head values but also the contribution made by each source/sink term, allowing one to analyze the impact of source/sink terms just as if an analytical solution were available. The head values evaluated by WOG match the values derived from the software Groundwater Modeling System (GMS). This suggests that WOG is effective and applicable in a heterogeneous aquifer for practical problems, and that the resultant information is useful for groundwater management.
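The random-walk idea behind WOG can be illustrated on its simplest special case. For Laplace's equation on a grid with Dirichlet boundaries, the steady head at a node equals the expected boundary head at the point where a symmetric random walk first exits the domain. The sketch below is that textbook case (uniform medium, no source/sink bookkeeping, illustrative grid and boundary values), not the adapted WOG of the paper:

```python
import random

# Estimate the steady head at one node of a homogeneous Laplace problem:
# average the boundary head over many random walks started at the node.
def walk_on_grid_head(n, boundary_head, node, walks=20000, seed=1):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(walks):
        i, j = node
        while 0 < i < n and 0 < j < n:          # walk until a boundary node
            di, dj = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
            i, j = i + di, j + dj
        total += boundary_head(i, j)            # record head where we exited
    return total / walks

n = 10
# Boundary heads linear in x: the exact interior solution is h = 10*(1 - i/n),
# so the estimate at the grid centre should be close to 5.
head = walk_on_grid_head(n, lambda i, j: 10.0 * (1 - i / n), (n // 2, n // 2))
```

Because each walk's exit point is recorded separately, the same machinery can tally how much of the head at a node is attributable to each boundary segment, which is the spirit of the per-term decomposition described above.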
Mosayebi, Z; Movahedian, A H; Soori, T
2011-07-01
Outbreaks of sepsis due to water or contaminated equipment can cause significant mortality and morbidity in neonatal intensive care units. We studied an outbreak among neonates caused by Flavobacterium and investigated the characteristics of the infected neonates, antimicrobial susceptibilities, and the source of the outbreak. Forty-five neonates with documented Flavobacterium sepsis were evaluated in this descriptive study. Data including sex, vaginal or caesarean delivery, preterm or term birth, birth weight, and results of blood cultures and antibiograms were recorded, and cases were followed up until death or recovery. Environmental sampling to detect the source of contamination was performed. Among the 45 patients, 28 (62.2%) were male and 17 (37.8%) female (P<0.001). The commonest clinical manifestation was respiratory distress (60%). Eighteen neonates (40%) were low birth weight. Thirty-seven neonates (82.2%) were born via caesarean section. Twenty (44.4%) were preterm whereas 25 (55.6%) were term (P<0.001). Mortality was 17.7%. All strains were resistant to ampicillin and susceptible to amikacin. The source of the outbreak was contaminated distilled water. Copyright © 2010 The Healthcare Infection Society. Published by Elsevier Ltd. All rights reserved.
Short-term variability of mineral dust, metals and carbon emission from road dust resuspension
NASA Astrophysics Data System (ADS)
Amato, Fulvio; Schaap, Martijn; Denier van der Gon, Hugo A. C.; Pandolfi, Marco; Alastuey, Andrés; Keuken, Menno; Querol, Xavier
2013-08-01
Particulate matter (PM) pollution in cities has a severe impact on the morbidity and mortality of their populations. In these cities, road dust resuspension contributes largely to PM and airborne heavy metal concentrations. However, the short-term variation of emission through resuspension is not well described in air quality models, hampering a reliable description of air pollution and related health effects. In this study we experimentally show that the emission strength of resuspension varies widely among road dust components/sources. Our results offer the first experimental evidence of different emission rates for mineral dust, heavy metals and carbon fractions due to traffic-induced resuspension. Moreover, the same component (or source) recovers differently on a road in Barcelona (Spain) and a road in Utrecht (The Netherlands). This finding has important implications for atmospheric pollution modelling, mostly for mineral dust, heavy metals and carbon species. After rain events, recoveries were generally faster in Barcelona than in Utrecht. The largest difference was found for the mineral dust (Al, Si, Ca). Tyre wear particles (organic carbon and zinc) recovered faster than other road dust particles in both cities. The source apportionment of road dust mass provides useful information for air quality management.
Kumarathilaka, Prasanna; Seneweera, Saman; Meharg, Andrew; Bundschuh, Jochen
2018-04-21
Rice is the main staple carbohydrate source for billions of people worldwide. Natural geogenic and anthropogenic sources have led to high arsenic (As) concentrations in rice grains. This is because As is highly bioavailable to rice roots under the conditions in which rice is cultivated. A multifaceted and interdisciplinary understanding, of both short-term and long-term effects, is required to identify spatial and temporal changes in As contamination levels in paddy soil-water systems. During flooding, soil pore waters are elevated in inorganic As compared to dryland cultivation systems, as anaerobism results in poorly mobile As(V) being reduced to highly mobile As(III). The formation of iron (Fe) plaque on roots, the availability of metal (hydr)oxides (Fe and Mn), organic matter, clay mineralogy, and competing ions and compounds (PO₄³⁻ and Si(OH)₄) are all known to influence As(V) and As(III) mobility in paddy soil-water environments. Microorganisms play a key role in As transformation through oxidation/reduction and methylation/volatilization reactions, but the transformation kinetics are poorly understood. Science-based optimization of all biogeochemical parameters may help to significantly reduce the bioavailability of inorganic As. Copyright © 2018 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Fu, Shihang; Zhang, Li; Hu, Yao; Ding, Xiang
2018-01-01
Confocal Raman Microscopy (CRM) has matured to become one of the most powerful instruments in analytical science because of its molecular sensitivity and high spatial resolution. Compared with conventional Raman microscopy, CRM can perform three-dimensional mapping of tiny samples and achieves high spatial resolution thanks to its confocal pinhole. With the wide application of the instrument, there is a growing requirement for evaluating the imaging performance of the system. The point-spread function (PSF) is an important means of evaluating the imaging capability of an optical instrument. Among the various methods for measuring the PSF, the point source method has been widely used because it is easy to operate and its results approximate the true PSF. In the point source method, the point source size has a significant impact on the final measurement accuracy. In this paper, the influence of point source size on the measurement accuracy of the PSF is analyzed and verified experimentally. A theoretical model of the lateral PSF for CRM is established, and the effect of point source size on the full-width at half-maximum of the lateral PSF is simulated. For long-term preservation and measurement convenience, a PSF measurement phantom was designed using polydimethylsiloxane resin doped with polystyrene microspheres of different sizes. The PSFs of the CRM measured with the different microsphere sizes are compared with the simulation results. The results provide a guide for measuring the PSF of a CRM.
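The size effect analyzed above can be reproduced with a toy one-dimensional model: the measured profile is the true PSF convolved with the source profile, so a finite "point" source inflates the apparent FWHM. Everything below (a Gaussian PSF, a top-hat source, and the grid spacing) is an illustrative assumption, not the paper's actual model or phantom:

```python
import math

# Full width at half maximum of a sampled, unimodal profile.
def fwhm(xs, ys):
    half = max(ys) / 2
    above = [x for x, y in zip(xs, ys) if y >= half]
    return max(above) - min(above)

# Measured profile: 1-D convolution of a Gaussian PSF (width sigma)
# with a top-hat "point" source of the given width.
def measured_profile(sigma, source_width, xs):
    step = xs[1] - xs[0]
    src = [1.0 if abs(x) <= source_width / 2 else 0.0 for x in xs]
    psf = [math.exp(-x * x / (2 * sigma * sigma)) for x in xs]
    n = len(xs)
    out = []
    for i in range(n):
        s = 0.0
        for k in range(n):
            j = i - k + n // 2          # shift so the kernel is centred
            if 0 <= j < n:
                s += psf[k] * src[j]
        out.append(s * step)
    return out

xs = [-5 + 0.025 * i for i in range(401)]
fwhm_true = 2 * math.sqrt(2 * math.log(2))   # ~2.355 for sigma = 1
f_small = fwhm(xs, measured_profile(1.0, 0.2, xs))  # near-ideal point source
f_large = fwhm(xs, measured_profile(1.0, 2.0, xs))  # oversized source
```

A source much smaller than the PSF barely perturbs the measured FWHM, while a source of comparable width broadens it substantially, which is why the microsphere size must be chosen carefully.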
Preston, Stephen D.; Alexander, Richard B.; Schwarz, Gregory E.; Crawford, Charles G.
2011-01-01
We compared the results of 12 recently calibrated regional SPARROW (SPAtially Referenced Regressions On Watershed attributes) models covering most of the continental United States to evaluate the consistency and regional differences in factors affecting stream nutrient loads. The models (6 for total nitrogen and 6 for total phosphorus) all provide similar levels of prediction accuracy, but those for major river basins in the eastern half of the country were somewhat more accurate. The models simulate long-term mean annual stream nutrient loads as a function of a wide range of known sources and climatic (precipitation, temperature), landscape (e.g., soils, geology), and aquatic factors affecting nutrient fate and transport. The results confirm the dominant effects of urban and agricultural sources on stream nutrient loads nationally and regionally, but reveal considerable spatial variability in the specific types of sources that control water quality. These include regional differences in the relative importance of different types of urban (municipal and industrial point vs. diffuse urban runoff) and agricultural (crop cultivation vs. animal waste) sources, as well as the effects of atmospheric deposition, mining, and background (e.g., soil phosphorus) sources on stream nutrients. Overall, we found that the SPARROW model results provide a consistent set of information for identifying the major sources and environmental factors affecting nutrient fate and transport in United States watersheds at regional and subregional scales. © 2011 American Water Resources Association. This article is a U.S. Government work and is in the public domain in the USA.
Leveraging terminological resources for mapping between rare disease information sources.
Rance, Bastien; Snyder, Michelle; Lewis, Janine; Bodenreider, Olivier
2013-01-01
Rare disease information sources are incompletely and inconsistently cross-referenced to one another, making it difficult for information seekers to navigate across them. The development of such cross-references established manually by experts is generally labor intensive and costly. Our objective was to develop an automatic mapping between two of the major rare disease information sources, GARD and Orphanet, by leveraging terminological resources, especially the UMLS. We map the rare disease terms from Orphanet and ORDR to the UMLS and use the UMLS as a pivot to bridge between the rare disease terminologies. We compare our results to a mapping obtained through manually established cross-references to OMIM. Our mapping has a precision of 94%, a recall of 63% and an F1-score of 76%. Our automatic mapping should help facilitate the development of more complete and consistent cross-references between GARD and Orphanet, and is applicable to other rare disease information sources as well.
Flow diagram analysis of electrical fatalities in construction industry.
Chi, Chia-Fen; Lin, Yuan-Yuan; Ikhwan, Mohamad
2012-01-01
The current study reanalyzed 250 electrical fatalities in the construction industry from 1996 to 2002 into seven patterns based on the source of electricity (power line, energized equipment, improperly installed or damaged equipment) and direct contact or indirect contact through some source of injury (boom vehicle, metal bar or pipe, and other conductive material). Each fatality was coded in terms of age, company size, experience, task performed, source of injury, accident cause and hazard pattern. The Chi-square Automatic Interaction Detector (CHAID) was applied to the coded data on the fatal electrocutions to find a subset of predictors that might yield meaningful classifications or accident scenarios. A series of flow diagrams was constructed from the CHAID results to illustrate the flow of electricity travelling from the electrical source to the human body. Each flow diagram can be directly linked to feasible prevention strategies that cut the flow of electricity.
Solutions of Boltzmann's Equation for Mono-energetic Neutrons in an Infinite Homogeneous Medium
DOE R&D Accomplishments Database
Wigner, E. P.
1943-11-30
Boltzmann's equation is solved for the case of monoenergetic neutrons created by a plane or point source in an infinite medium which has spherically symmetric scattering. The customary solution of the diffusion equation appears multiplied by a constant factor which is smaller than 1. In addition to this term, the total neutron density contains another term which is important in the neighborhood of the source. It varies as 1/r² in the neighborhood of a point source. (auth)
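The decomposition described above can be written compactly; the symbols below (S = source strength, D = diffusion coefficient, κ = inverse asymptotic relaxation length, Σ_t = total cross section) are the standard one-speed transport notation, assumed for illustration rather than taken from the report itself:

```latex
% Asymptotic (diffusion-like) mode, scaled by a constant A < 1, plus a
% transient term dominated near the source by the uncollided flux:
\phi(r) = A\,\frac{S\,e^{-\kappa r}}{4\pi D r} + \phi_{\mathrm{tr}}(r),
\qquad A < 1,
\qquad \phi_{\mathrm{tr}}(r) \sim \frac{S\,e^{-\Sigma_t r}}{4\pi r^{2}}
\quad (r \to 0).
```

The transient term carries the 1/r² behavior near a point source noted in the abstract, while far from the source only the diffusion-like mode survives.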
Solute source depletion control of forward and back diffusion through low-permeability zones
NASA Astrophysics Data System (ADS)
Yang, Minjune; Annable, Michael D.; Jawitz, James W.
2016-10-01
Solute diffusive exchange between low-permeability aquitards and high-permeability aquifers acts as a significant mediator of long-term contaminant fate. Aquifer contaminants diffuse into aquitards, but as contaminant sources are depleted, aquifer concentrations decline, triggering back diffusion from aquitards. The dynamics of the contaminant source depletion, or the source strength function, controls the timing of the transition of aquitards from sinks to sources. Here, we experimentally evaluate three archetypical transient source depletion models (step-change, linear, and exponential), and we use novel analytical solutions to accurately account for dynamic aquitard-aquifer diffusive transfer. Laboratory diffusion experiments were conducted using a well-controlled flow chamber to assess solute exchange between sand aquifer and kaolinite aquitard layers. Solute concentration profiles in the aquitard were measured in situ using electrical conductivity. Back diffusion was shown to begin earlier and produce larger mass flux for rapidly depleting sources. The analytical models showed very good correspondence with measured aquifer breakthrough curves and aquitard concentration profiles. The modeling approach links source dissolution and back diffusion, enabling assessment of human exposure risk and calculation of the back diffusion initiation time, as well as the resulting plume persistence.
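For the step-change source archetype, the aquitard response has a closed form: with a constant aquifer concentration C0 switched off at time ts, superposition of two semi-infinite-medium solutions gives the interface flux, which reverses sign (back diffusion) as soon as the source is gone. The sketch below is that textbook 1-D calculation under assumed parameter values, not the paper's full analytical solution:

```python
import math

def interface_flux(t, C0, D, ts):
    """Diffusive flux into the aquitard (positive = loading) for an aquifer
    concentration held at C0 until time ts, then stepped to zero."""
    load = C0 * math.sqrt(D / (math.pi * t))
    if t <= ts:
        return load
    # superpose a solution of strength -C0 started at ts (step-change off)
    return load - C0 * math.sqrt(D / (math.pi * (t - ts)))

C0, D, ts = 1.0, 1e-9, 100.0                     # illustrative units
flux_before = interface_flux(50.0, C0, D, ts)    # loading phase
flux_after = interface_flux(150.0, C0, D, ts)    # back-diffusion phase
```

Because 1/sqrt(t - ts) exceeds 1/sqrt(t) for all t > ts, the flux turns negative immediately after the step-change, consistent with back diffusion beginning earlier for rapidly depleting sources.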
NASA Astrophysics Data System (ADS)
Murillo, J.; García-Navarro, P.
2012-02-01
In this work, the source term discretization in hyperbolic conservation laws with source terms is considered using an approximate augmented Riemann solver. The technique is applied to the shallow water equations with bed slope and friction terms, with the focus on the friction discretization. The augmented Roe approximate Riemann solver provides a family of weak solutions for the shallow water equations that are the basis of the upwind treatment of the source term. This has proved successful in explaining and avoiding the appearance of instabilities and negative values of the thickness of the water layer in cases of variable bottom topography. Here, this strategy is extended to capture the peculiarities that may arise when defining more ambitious scenarios, which may include relevant stresses in cases of mud/debris flow. The conclusions of this analysis lead to the definition of an accurate and robust first-order finite volume scheme, able to handle transient problems correctly with frictional stresses in both clean water and debris flow, including, in this last case, correct modelling of stopping conditions.
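A minimal 1-D illustration of why the source-term discretization matters: when the bed-slope source is discretized consistently with the numerical flux, a lake at rest over an uneven bed is preserved exactly. The sketch below uses the classical hydrostatic reconstruction with a simple Rusanov flux, which is a different, simpler scheme than the augmented Roe solver of the paper, and all parameter values are illustrative:

```python
import math

g = 9.81

def rusanov(hL, qL, hR, qR):
    # Rusanov (local Lax-Friedrichs) flux for the 1-D shallow water equations.
    def f(h, q):
        u = q / h if h > 0 else 0.0
        return (q, q * u + 0.5 * g * h * h)
    fL, fR = f(hL, qL), f(hR, qR)
    a = max(abs(qL / hL) + math.sqrt(g * hL) if hL > 0 else 0.0,
            abs(qR / hR) + math.sqrt(g * hR) if hR > 0 else 0.0)
    return (0.5 * (fL[0] + fR[0]) - 0.5 * a * (hR - hL),
            0.5 * (fL[1] + fR[1]) - 0.5 * a * (qR - qL))

def step(h, q, z, dx, dt):
    n = len(h)
    hn, qn = h[:], q[:]
    for i in range(n - 1):
        zmax = max(z[i], z[i + 1])
        # hydrostatic reconstruction of the interface water depths
        hLs = max(0.0, h[i] + z[i] - zmax)
        hRs = max(0.0, h[i + 1] + z[i + 1] - zmax)
        uL = q[i] / h[i] if h[i] > 0 else 0.0
        uR = q[i + 1] / h[i + 1] if h[i + 1] > 0 else 0.0
        Fh, Fq = rusanov(hLs, hLs * uL, hRs, hRs * uR)
        hn[i] -= dt / dx * Fh
        hn[i + 1] += dt / dx * Fh
        # bed-slope source corrections that balance the pressure flux
        qn[i] -= dt / dx * (Fq + 0.5 * g * (h[i] ** 2 - hLs ** 2))
        qn[i + 1] += dt / dx * (Fq + 0.5 * g * (h[i + 1] ** 2 - hRs ** 2))
    # reflective walls: wall pressure flux balances the edge cells
    qn[0] += dt / dx * 0.5 * g * h[0] ** 2
    qn[-1] -= dt / dx * 0.5 * g * h[-1] ** 2
    return hn, qn

# Lake at rest over a bump: flat surface (h + z = 1), zero discharge.
dx, dt = 0.1, 0.01
z = [0.3 * math.exp(-((i * dx - 2.0) ** 2)) for i in range(40)]
h = [1.0 - zi for zi in z]
q = [0.0] * len(z)
```

With a naive centered source term instead, spurious currents appear over the bump; the balanced discretization keeps the still-water state to machine precision, which is the well-balancing property the abstract's upwind source treatment generalizes to friction and debris-flow stresses.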
Toward a common language for biobanking.
Fransson, Martin N; Rial-Sebbag, Emmanuelle; Brochhausen, Mathias; Litton, Jan-Eric
2015-01-01
To encourage the process of harmonization, the biobank community should support and use a common terminology. Relevant terms may be found in general thesauri for medicine, legal instruments or specific glossaries for biobanking. A comparison of the use of these sources has so far not been conducted and would be a useful instrument to further promote harmonization and data sharing. Thus, the purpose of the present study was to investigate the preferred definitions of terms important for sharing biological samples and data. Definitions for 10 terms ([human] biobank, sample/specimen, sample collection, study, aliquot, coded, identifying information, anonymised, personal data and informed consent) were collected from several sources. A web-based questionnaire was sent to 560 European individuals working with biobanks, asking them to select their preferred definition for each term. A total of 123 people participated in the survey, giving a response rate of 23%. The results were evaluated from four aspects: scope of definitions, potential regional differences, differences in semantics, and definitions in the context of ontologies, guided by comments from responders. Indicative from the survey is the risk of focusing only on the research aspect of biobanking in definitions. Hence, it is recommended that important terms be formulated in such a way that all areas of biobanking are covered, to improve the bridges between research and clinical application. Since several of the terms investigated herein can also be found in a legal context, which may differ between countries, establishing how a proper definition adheres to the law is also crucial.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lawrie, Scott R., E-mail: scott.lawrie@stfc.ac.uk; John Adams Institute for Accelerator Science, Department of Physics, University of Oxford; Faircloth, Daniel C.
2015-04-08
In order to facilitate the testing of advanced H⁻ ion sources for the ISIS and Front End Test Stand (FETS) facilities at the Rutherford Appleton Laboratory (RAL), a Vessel for Extraction and Source Plasma Analyses (VESPA) has been constructed. This will perform the first detailed plasma measurements on the ISIS Penning-type H⁻ ion source using emission spectroscopic techniques. In addition, the 30-year-old extraction optics are re-designed from the ground up in order to fully transport the beam. Using multiple beam and plasma diagnostics devices, the ultimate aim is to improve H⁻ production efficiency and subsequent transport for either long-term ISIS user operations or high-power FETS requirements. The VESPA will also accommodate and test a new scaled-up Penning H⁻ source design. This paper details the VESPA design, construction and commissioning, as well as initial beam and spectroscopy results.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gearhart, Jared Lee; Adair, Kristin Lynn; Durfee, Justin David.
When developing linear programming models, issues such as budget limitations, customer requirements, or licensing may preclude the use of commercial linear programming solvers. In such cases, one option is to use an open-source linear programming solver. A survey of linear programming tools was conducted to identify potential open-source solvers. From this survey, four open-source solvers were tested using a collection of linear programming test problems and the results were compared to IBM ILOG CPLEX Optimizer (CPLEX) [1], an industry standard. The solvers considered were: COIN-OR Linear Programming (CLP) [2], [3], GNU Linear Programming Kit (GLPK) [4], lp_solve [5] and Modular In-core Nonlinear Optimization System (MINOS) [6]. As no open-source solver outperformed CPLEX, this study demonstrates the power of commercial linear programming software. CLP was found to be the top-performing open-source solver considered in terms of capability and speed. GLPK also performed well but cannot match the speed of CLP or CPLEX. lp_solve and MINOS were considerably slower and encountered issues when solving several test problems.
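The solve-and-time workflow behind such a comparison can be sketched with SciPy's `linprog` front end (its default HiGHS backend is itself an open-source solver, though not one of the four surveyed above). This is an illustrative harness, not the study's benchmark code; the test problem is a toy LP chosen here.

```python
import time
from scipy.optimize import linprog

def solve_and_time(c, A_ub, b_ub):
    """Solve min c.x subject to A_ub x <= b_ub, x >= 0, and report
    the wall-clock solve time, as one would when benchmarking solvers."""
    t0 = time.perf_counter()
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None), method="highs")
    return res, time.perf_counter() - t0

# toy test problem: max x + 2y  s.t.  x + y <= 4,  x + 3y <= 6,  x, y >= 0
# (linprog minimizes, so negate the objective)
res, elapsed = solve_and_time([-1.0, -2.0], [[1, 1], [1, 3]], [4, 6])
print(res.x, -res.fun, f"{elapsed * 1e3:.2f} ms")
```

The optimum sits at the vertex (3, 1) with objective value 5; a real benchmark would loop this over a suite of test problems and tabulate times and failures per solver.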
Room temperature single photon source using fiber-integrated hexagonal boron nitride
NASA Astrophysics Data System (ADS)
Vogl, Tobias; Lu, Yuerui; Lam, Ping Koy
2017-07-01
Single photons are a key resource for quantum optics and optical quantum information processing. The integration of scalable room-temperature quantum emitters into photonic circuits remains a technical challenge. Here we utilize a defect center in hexagonal boron nitride (hBN), attached by van der Waals forces onto a multimode fiber, as a single photon source. We perform an optical characterization of the source in terms of spectrum, state lifetime, power saturation and photostability. A special feature of our source is that it allows for easy switching between fiber-coupled and free-space single photon generation modes. In order to prove the quantum nature of the emission we measure the second-order correlation function g^(2)(τ). For both fiber-coupled and free-space emission, g^(2)(τ) dips below 0.5, indicating operation in the single photon regime. The results so far demonstrate the feasibility of 2D material single photon sources for scalable photonic quantum information processing.
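The g^(2)(τ) < 0.5 criterion above is typically evaluated from two-detector (Hanbury Brown-Twiss) coincidence histograms. A minimal estimator is sketched below under simplifying assumptions (low count rates, Poissonian normalization); it is not the authors' analysis code, and the function names and parameters are my own. For uncorrelated classical light it recovers g^(2) ≈ 1, whereas a single emitter would show a dip below 0.5 at τ = 0.

```python
import numpy as np

def g2_from_clicks(t1, t2, bin_width, max_tau):
    """Estimate g2(tau) from pairwise delays between click times on two
    detectors of an HBT setup, normalized by the coincidence rate
    expected for uncorrelated (Poissonian) light."""
    t2 = np.sort(t2)
    delays = []
    for t in np.sort(t1):
        lo = np.searchsorted(t2, t - max_tau)
        hi = np.searchsorted(t2, t + max_tau)
        delays.extend(t2[lo:hi] - t)
    bins = np.arange(-max_tau, max_tau + bin_width, bin_width)
    hist, edges = np.histogram(delays, bins=bins)
    # expected uncorrelated coincidences per bin: N1 * N2 * bin_width / T
    T = max(t1.max(), t2.max()) - min(t1.min(), t2.min())
    norm = len(t1) * len(t2) * bin_width / T
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, hist / norm
```

Feeding it two independent uniform (Poisson-like) click streams yields a flat histogram near g^(2) = 1, which is the sanity check before looking for antibunching in real emitter data.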
Source calibrations and SDC calorimeter requirements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Green, D.
Several studies of the problem of calibration of the SDC calorimeter exist. In this note the attempt is made to give a connected account of the requirements on the source calibration from the point of view of the desired, and acceptable, constant term induced in the EM resolution. It is assumed that a "local" calibration resulting from exposing each tower to a beam of electrons is not feasible. It is further assumed that an "in situ" calibration is either not yet performed, or is unavailable due to tracking alignment problems or high luminosity operation rendering tracking inoperative. Therefore, the assumptions used are rather conservative. In this scenario, each scintillator plate of each tower is exposed to a moving radioactive source. That reading is used to "mask" an optical "cookie" in a grey code chosen so as to make the response uniform. The source is assumed to be the sole calibration of the tower. Therefore, the phrase "global" calibration of towers by movable radioactive sources is adopted.
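The link between calibration quality and the constant term can be made concrete with a small Monte Carlo: a Gaussian tower-to-tower gain spread of s shows up directly as a constant term of about s in the fractional EM resolution, since it does not shrink with energy. This sketch is illustrative only (the standard a/sqrt(E) (+) c parametrization and all numbers are assumptions, not taken from the note).

```python
import math
import random
import statistics

def constant_term_from_miscalibration(spread, n_towers=5000, seed=1):
    """Monte Carlo: fire a fixed energy into towers whose gains have a
    Gaussian relative spread; the fractional width of the measured
    energies is the induced constant term."""
    rng = random.Random(seed)
    gains = [rng.gauss(1.0, spread) for _ in range(n_towers)]
    measured = [g * 100.0 for g in gains]  # 100 GeV incident, say
    return statistics.stdev(measured) / statistics.mean(measured)

def em_resolution(E, a=0.15, c=0.01):
    """Fractional resolution with stochastic term a and constant term c,
    added in quadrature: sigma/E = sqrt((a/sqrt(E))**2 + c**2)."""
    return math.sqrt((a / math.sqrt(E)) ** 2 + c ** 2)

c_term = constant_term_from_miscalibration(0.01)
```

With a 1% gain spread the recovered constant term is about 0.01, and `em_resolution` shows why this matters most at high energy, where the stochastic term has died away.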