Starns, Jeffrey J.; Pazzaglia, Angela M.; Rotello, Caren M.; Hautus, Michael J.; Macmillan, Neil A.
2014-01-01
Source memory zROC slopes change from below 1 to above 1 depending on which source gets the strongest learning. This effect has been attributed to memory processes, either in terms of a threshold source recollection process or changes in the variability of continuous source evidence. We propose two decision mechanisms that can produce the slope effect, and we test them in three experiments. The evidence mixing account assumes that people change how they weight item versus source evidence based on which source is stronger, and the converging criteria account assumes that participants become more willing to make high confidence source responses for test probes that have higher item strength. Results failed to support the evidence mixing account, in that the slope effect emerged even when item evidence was not informative for the source judgment (that is, in tests that included strong and weak items from both sources). In contrast, results showed strong support for the converging criteria account. This account not only accommodated the unequal-strength slope effect, but also made a prediction for unstudied (new) items that was empirically confirmed: participants made more high confidence source responses for new items when they were more confident that the item was studied. The converging criteria account has an advantage over accounts based on source recollection or evidence variability, as the latter accounts do not predict the relationship between recognition and source confidence for new items. PMID:23565789
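The unequal-variance signal detection logic behind zROC slopes can be illustrated with a short simulation (an illustrative sketch, not the authors' model code; the d', sigma, and criterion choices are my own): when the studied-item evidence distribution is more variable than the comparison distribution, the z-transformed ROC slope falls below 1, and it rises above 1 when the variability relation reverses.

```python
import numpy as np
from statistics import NormalDist

def zroc_slope(d_prime, sigma_old, n=200_000, seed=0):
    """Simulate the zROC slope for an unequal-variance signal detection model.
    New items ~ N(0, 1); old items ~ N(d_prime, sigma_old).
    Theory predicts a slope of 1 / sigma_old."""
    rng = np.random.default_rng(seed)
    new = rng.normal(0.0, 1.0, n)
    old = rng.normal(d_prime, sigma_old, n)
    z = NormalDist().inv_cdf
    # Sweep confidence criteria and z-transform hit / false-alarm rates.
    crits = np.linspace(-1.0, d_prime + 1.0, 9)
    zH = np.array([z((old > c).mean()) for c in crits])
    zF = np.array([z((new > c).mean()) for c in crits])
    # Least-squares slope of zH against zF.
    return np.polyfit(zF, zH, 1)[0]
```

With `sigma_old > 1` (stronger source more variable) the slope comes out below 1; with `sigma_old < 1` it comes out above 1, mirroring the slope effect the abstract describes.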
McDonald, Brian C; Goldstein, Allen H; Harley, Robert A
2015-04-21
A fuel-based approach is used to assess long-term trends (1970-2010) in mobile source emissions of black carbon (BC) and organic aerosol (OA, including both primary emissions and secondary formation). The main focus of this analysis is the Los Angeles Basin, where a long record of measurements is available to infer trends in ambient concentrations of BC and organic carbon (OC), with OC used here as a proxy for OA. Mobile source emissions and ambient concentrations have decreased similarly, reflecting the importance of on- and off-road engines as sources of BC and OA in urban areas. In 1970, the on-road sector accounted for ∼90% of total mobile source emissions of BC and OA (primary + secondary). Over time, as on-road engine emissions have been controlled, the relative importance of off-road sources has grown. By 2010, off-road engines were estimated to account for 37 ± 20% and 45 ± 16% of total mobile source contributions to BC and OA, respectively, in the Los Angeles area. This study highlights both the success of efforts to control on-road emission sources, and the importance of considering off-road engine and other VOC source contributions when assessing long-term emission and ambient air quality trends.
Nonlinearly driven harmonics of Alfvén modes
NASA Astrophysics Data System (ADS)
Zhang, B.; Breizman, B. N.; Zheng, L. J.; Berk, H. L.
2014-01-01
In order to study the leading order nonlinear magneto-hydrodynamic (MHD) harmonic response of a plasma in realistic geometry, the AEGIS code has been generalized to account for inhomogeneous source terms. These source terms are expressed in terms of the quadratic corrections that depend on the functional form of a linear MHD eigenmode, such as the Toroidal Alfvén Eigenmode. The solution of the resultant equation gives the second order harmonic response. Preliminary results are presented here.
31 CFR 203.19 - Sources of balances.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 31 Money and Finance:Treasury 2 2011-07-01 2011-07-01 false Sources of balances. 203.19 Section... § 203.19 Sources of balances. A financial institution must be a collector depositary that accepts term... TIP main account balance pursuant to subpart C of this part; (b) EFTPS ACH credit and debit...
31 CFR 203.19 - Sources of balances.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 31 Money and Finance: Treasury 2 2010-07-01 2010-07-01 false Sources of balances. 203.19 Section... § 203.19 Sources of balances. A financial institution must be a collector depositary that accepts term... TIP main account balance pursuant to subpart C of this part; (b) EFTPS ACH credit and debit...
High-order scheme for the source-sink term in a one-dimensional water temperature model
Jing, Zheng; Kang, Ling
2017-01-01
The source-sink term in water temperature models represents the net heat absorbed or released by a water system. This term is very important because it accounts for solar radiation that can significantly affect water temperature, especially in lakes. However, existing numerical methods for discretizing the source-sink term are very simplistic, causing significant deviations between simulation results and measured data. To address this problem, we present a numerical method specific to the source-sink term. A vertical one-dimensional heat conduction equation was chosen to describe water temperature changes. A two-step operator-splitting method was adopted as the numerical solution. In the first step, using the undetermined coefficient method, a high-order scheme was adopted for discretizing the source-sink term. In the second step, the diffusion term was discretized using the Crank-Nicolson scheme. The effectiveness and capability of the numerical method were assessed by performing numerical tests. Then, the proposed numerical method was applied to a simulation of Guozheng Lake (located in central China). The modeling results were in excellent agreement with measured data. PMID:28264005
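The two-step operator-splitting scheme described above can be sketched as follows (a minimal illustration assuming a uniform vertical grid and insulated boundaries; the paper's high-order source-sink discretization is replaced here by a simple first-order update, and all names are my own):

```python
import numpy as np

def step_temperature(T, S, dz, dt, alpha):
    """One time step of a 1-D water-temperature model via operator splitting:
    (1) apply the source-sink term, (2) Crank-Nicolson diffusion.
    T: temperature profile per layer, S: source-sink term [deg/s] per layer."""
    n = T.size
    # Step 1: source-sink update (first-order placeholder for the paper's
    # high-order scheme).
    T = T + dt * S
    # Step 2: Crank-Nicolson for T_t = alpha * T_zz.
    r = alpha * dt / (2 * dz**2)
    A = np.eye(n) * (1 + 2 * r)
    B = np.eye(n) * (1 - 2 * r)
    for i in range(n - 1):
        A[i, i + 1] = A[i + 1, i] = -r
        B[i, i + 1] = B[i + 1, i] = r
    # Finite-volume zero-flux (insulated) boundaries: one neighbour flux only,
    # which keeps the column sums equal to 1 and so conserves total heat.
    A[0, 0] = A[-1, -1] = 1 + r
    B[0, 0] = B[-1, -1] = 1 - r
    return np.linalg.solve(A, B @ T)
```

With zero source-sink term the scheme conserves the mean temperature exactly while smoothing the profile, which is a quick sanity check on any implementation of this splitting.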
Observation-based source terms in the third-generation wave model WAVEWATCH
NASA Astrophysics Data System (ADS)
Zieger, Stefan; Babanin, Alexander V.; Erick Rogers, W.; Young, Ian R.
2015-12-01
Measurements collected during the AUSWEX field campaign at Lake George (Australia) resulted in new insights into the processes of wind-wave interaction and whitecapping dissipation, and consequently in new parameterizations of the input and dissipation source terms. The newly developed nonlinear wind input term accounts for the dependence of growth on wave steepness, for airflow separation, and for negative growth rates under adverse winds. The new dissipation terms feature an inherent breaking term, a cumulative dissipation term, and a term due to the production of turbulence by waves, which is particularly relevant for decaying seas and for swell. The latter is consistent with the observed decay rate of ocean swell. This paper describes these source terms as implemented in WAVEWATCH III® and evaluates their performance against existing source terms in academic duration-limited tests, against buoy measurements for windsea-dominated conditions, under conditions of extreme wind forcing (Hurricane Katrina), and against altimeter data in global hindcasts. Results show agreement in growth curves as well as in integral and spectral parameters across the simulations and hindcasts.
Modeling individual differences in working memory performance: a source activation account
Daily, Larry Z.; Lovett, Marsha C.; Reder, Lynne M.
2008-01-01
Working memory resources are needed for processing and maintenance of information during cognitive tasks. Many models have been developed to capture the effects of limited working memory resources on performance. However, most of these models do not account for the finding that different individuals show different sensitivities to working memory demands, and none of the models predicts individual subjects' patterns of performance. We propose a computational model that accounts for differences in working memory capacity in terms of a quantity called source activation, which is used to maintain goal-relevant information in an available state. We apply this model to capture the working memory effects of individual subjects at a fine level of detail across two experiments. This, we argue, strengthens the interpretation of source activation as working memory capacity. PMID:19079561
ERIC Educational Resources Information Center
Challies, Danna M.; Hunt, Maree; Garry, Maryanne; Harper, David N.
2011-01-01
The misinformation effect is a term used in the cognitive psychological literature to describe both experimental and real-world instances in which misleading information is incorporated into an account of an historical event. In many real-world situations, it is not possible to identify a distinct source of misinformation, and it appears that the…
The missing link in Aboriginal care: resource accounting.
Ashton, C W; Duffie-Ashton, Denise
2008-01-01
Resource accounting principles provide more effective planning for Aboriginal healthcare delivery through driving best management practices, efficacious techniques for long-term resource allocation, transparency of information and performance measurement. Major improvements to Aboriginal health in New Zealand and Australia were facilitated in the context of this public finance paradigm, rather than cash accounting systems that remain the current method for public departments in Canada. Multiple funding sources and fragmented delivery of Aboriginal healthcare can be remedied through similar adoption of such principles.
Hayes, Scott M; Buchler, Norbou; Stokes, Jared; Kragel, James; Cabeza, Roberto
2011-12-01
Although the medial-temporal lobes (MTL), PFC, and parietal cortex are considered primary nodes in the episodic memory network, there is much debate regarding the contributions of MTL, PFC, and parietal subregions to recollection versus familiarity (dual-process theory) and the feasibility of accounts on the basis of a single memory strength process (strength theory). To investigate these issues, the current fMRI study measured activity during retrieval of memories that differed quantitatively in terms of strength (high vs. low-confidence trials) and qualitatively in terms of recollection versus familiarity (source vs. item memory tasks). Support for each theory varied depending on which node of the episodic memory network was considered. Results from MTL best fit a dual-process account, as a dissociation was found between a right hippocampal region showing high-confidence activity during the source memory task and bilateral rhinal regions showing high-confidence activity during the item memory task. Within PFC, several left-lateralized regions showed greater activity for source than item memory, consistent with recollective orienting, whereas a right-lateralized ventrolateral area showed low-confidence activity in both tasks, consistent with monitoring processes. Parietal findings were generally consistent with strength theory, with dorsal areas showing low-confidence activity and ventral areas showing high-confidence activity in both tasks. This dissociation fits with an attentional account of parietal functions during episodic retrieval. The results suggest that both dual-process and strength theories are partly correct, highlighting the need for an integrated model that links to more general cognitive theories to account for observed neural activity during episodic memory retrieval.
NASA Astrophysics Data System (ADS)
Barrett, Steven R. H.; Britter, Rex E.
Predicting long-term mean pollutant concentrations in the vicinity of airports, roads and other industrial sources is frequently of concern in regulatory and public health contexts. Many emissions are represented geometrically as ground-level line or area sources. Well-developed modelling tools such as AERMOD and ADMS are able to model dispersion from finite (i.e. non-point) sources with considerable accuracy, drawing upon an up-to-date understanding of boundary layer behaviour. Due to mathematical difficulties associated with line and area sources, computationally expensive numerical integration schemes have been developed. For example, some models decompose area sources into a large number of line sources orthogonal to the mean wind direction, for which an analytical (Gaussian) solution exists. Models also employ a time-series approach, which involves computing mean pollutant concentrations for every hour over one or more years of meteorological data. This can give rise to computer runtimes of several days for assessment of a site. While this may be acceptable for assessment of a single industrial complex, airport, etc., this level of computational cost precludes national or international policy assessments at the level of detail available with dispersion modelling. In this paper, we extend previous work [S.R.H. Barrett, R.E. Britter, 2008. Development of algorithms and approximations for rapid operational air quality modelling. Atmospheric Environment 42 (2008) 8105-8111] to line and area sources. We introduce approximations which allow for the development of new analytical solutions for long-term mean dispersion from line and area sources, based on hypergeometric functions. We describe how these solutions can be parameterized from a single point-source run of an existing advanced dispersion model, thereby accounting for all processes modelled in the more costly algorithms.
The parameterization method combined with the analytical solutions for long-term mean dispersion are shown to produce results several orders of magnitude more efficiently with a loss of accuracy small compared to the absolute accuracy of advanced dispersion models near sources. The method can be readily incorporated into existing dispersion models, and may allow for additional computation time to be expended on modelling dispersion processes more accurately in future, rather than on accounting for source geometry.
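For context, the simplest member of this family of long-term mean solutions is the classical sector-averaged (climatological) Gaussian plume for a ground-level point source, in which the crosswind distribution is spread uniformly over each wind-rose sector (an illustrative textbook sketch, not the paper's hypergeometric line/area solutions; the function names and the linear sigma_z are my own assumptions):

```python
import math

def long_term_mean_conc(Q, u, H, x, wind_rose, sigma_z):
    """Climatological (long-term mean) ground-level concentration at
    downwind distance x from a continuous point source, via the classical
    sector-averaged Gaussian plume: crosswind-uniform within each sector.
    Q: emission rate [g/s], u: mean wind speed [m/s], H: effective release
    height [m], wind_rose: per-sector wind-direction frequencies (sum to 1),
    sigma_z: callable giving vertical spread sigma_z(x) [m].
    Returns the mean concentration [g/m^3] for a receptor in each sector."""
    n = len(wind_rose)
    dtheta = 2 * math.pi / n                 # sector width [rad]
    sz = sigma_z(x)
    vert = math.exp(-H**2 / (2 * sz**2))     # vertical (elevated-source) term
    return [f * Q * math.sqrt(2 / math.pi) * vert / (u * sz * x * dtheta)
            for f in wind_rose]
```

The long-term mean is obtained directly from the wind-direction frequencies rather than by summing hourly plume runs, which is the kind of runtime saving the paper generalizes to line and area sources.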
NASA Technical Reports Server (NTRS)
Lebedeff, S. A.; Hameed, S.
1975-01-01
The problem investigated can be solved exactly in a simple manner if the equations are written in terms of a similarity variable. The exact solution is used to explore two questions of interest in the modelling of urban air pollution, taking into account the distribution of surface concentration downwind of an area source and the distribution of concentration with height.
Extended lattice Boltzmann scheme for droplet combustion.
Ashna, Mostafa; Rahimian, Mohammad Hassan; Fakhari, Abbas
2017-05-01
The available lattice Boltzmann (LB) models for combustion or phase change are focused on either single-phase flow combustion or two-phase flow with evaporation assuming a constant density for both liquid and gas phases. To pave the way towards simulation of spray combustion, we propose a two-phase LB method for modeling combustion of liquid fuel droplets. We develop an LB scheme to model phase change and combustion by taking into account the density variation in the gas phase and accounting for the chemical reaction based on the Cahn-Hilliard free-energy approach. Evaporation of liquid fuel is modeled by adding a source term to the continuity equation, which arises because the divergence of the velocity field is nonzero. The low-Mach-number approximation in the governing Navier-Stokes and energy equations is used to incorporate source terms due to heat release from chemical reactions, density variation, and nonluminous radiative heat loss. Additionally, the conservation equation for chemical species is formulated by including a source term due to chemical reaction. To validate the model, we consider the combustion of n-heptane and n-butanol droplets in stagnant air using overall single-step reactions. The diameter history and flame standoff ratio obtained from the proposed LB method are found to be in good agreement with available numerical and experimental data. The present LB scheme is believed to be a promising approach for modeling spray combustion.
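Droplet diameter histories of the kind used for validation here are classically benchmarked against the quasi-steady d-squared law (a standard textbook relation, not the LB scheme itself; the burning-rate constant K would need to be fitted or computed from fuel properties):

```python
def droplet_diameter_history(d0, K, t):
    """Classical d^2-law for quasi-steady droplet evaporation/combustion:
    d(t)^2 = d0^2 - K*t, valid until burnout at t = d0^2 / K.
    d0: initial diameter [m], K: burning-rate constant [m^2/s]."""
    d2 = d0**2 - K * t
    return max(d2, 0.0) ** 0.5

def burnout_time(d0, K):
    """Time at which the droplet is fully consumed under the d^2-law."""
    return d0**2 / K
```

Plotting simulated d(t)^2 against time and checking for a straight line of slope -K is the usual way such models are compared with this law and with experiment.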
Second-harmonic generation in shear wave beams with different polarizations
NASA Astrophysics Data System (ADS)
Spratt, Kyle S.; Ilinskii, Yurii A.; Zabolotskaya, Evgenia A.; Hamilton, Mark F.
2015-10-01
A coupled pair of nonlinear parabolic equations was derived by Zabolotskaya [1] that model the transverse components of the particle motion in a collimated shear wave beam propagating in an isotropic elastic solid. Like the KZK equation, the parabolic equation for shear wave beams accounts consistently for the leading order effects of diffraction, viscosity and nonlinearity. The nonlinearity includes a cubic nonlinear term that is equivalent to that present in plane shear waves, as well as a quadratic nonlinear term that is unique to diffracting beams. The work by Wochner et al. [2] considered shear wave beams with translational polarizations (linear, circular and elliptical), wherein second-order nonlinear effects vanish and the leading order nonlinear effect is third-harmonic generation by the cubic nonlinearity. The purpose of the current work is to investigate the quadratic nonlinear term present in the parabolic equation for shear wave beams by considering second-harmonic generation in Gaussian beams as a second-order nonlinear effect using standard perturbation theory. In order for second-order nonlinear effects to be present, a broader class of source polarizations must be considered that includes not only the familiar translational polarizations, but also polarizations accounting for stretching, shearing and rotation of the source plane. It is found that the polarization of the second harmonic generated by the quadratic nonlinearity is not necessarily the same as the polarization of the source-frequency beam, and we are able to derive a general analytic solution for second-harmonic generation from a Gaussian source condition that gives explicitly the relationship between the polarization of the source-frequency beam and the polarization of the second harmonic.
Effect of source location and listener location on ILD cues in a reverberant room
NASA Astrophysics Data System (ADS)
Ihlefeld, Antje; Shinn-Cunningham, Barbara G.
2004-05-01
Short-term interaural level differences (ILDs) were analyzed for simulations of the signals that would reach a listener in a reverberant room. White noise was convolved with manikin head-related impulse responses measured in a classroom to simulate different locations of the source relative to the manikin and different manikin positions in the room. The ILDs of the signals were computed within each third-octave band over a relatively short time window to investigate how reliably ILD cues encode source laterality. Overall, the mean of the ILD magnitude increases with lateral angle and decreases with distance, as expected. Increasing reverberation decreases the mean ILD magnitude and increases the variance of the short-term ILD, so that the spatial information carried by ILD cues is degraded by reverberation. These results suggest that the mean ILD is not a reliable cue for determining source laterality in a reverberant room. However, by taking into account both the mean and variance, the distribution of high-frequency short-term ILDs provides some spatial information. This analysis suggests that, in order to use ILDs to judge source direction in reverberant space, listeners must accumulate information about how the short-term ILD varies over time. [Work supported by NIDCD and AFOSR.]
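The short-term ILD statistic described above can be sketched as follows (a broadband simplification of the per-third-octave-band analysis; the window length and function names are my own assumptions):

```python
import numpy as np

def short_term_ild(left, right, fs, win_s=0.02):
    """Short-term interaural level difference (ILD) in dB over
    non-overlapping windows: ILD = 10*log10(P_left / P_right), where P is
    the mean signal power in the window. A filterbank would be applied
    first to obtain the per-band version described in the abstract."""
    n = int(win_s * fs)
    ilds = []
    for i in range(0, min(len(left), len(right)) - n + 1, n):
        pl = np.mean(left[i:i + n] ** 2)
        pr = np.mean(right[i:i + n] ** 2)
        if pl > 0 and pr > 0:
            ilds.append(10 * np.log10(pl / pr))
    return np.array(ilds)
```

The mean and variance of the returned array are exactly the two summary statistics whose combination, per the abstract, carries the usable spatial information in reverberation.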
The Feminization of Poverty: Women, Work, and Welfare
ERIC Educational Resources Information Center
Pearce, Diane
1978-01-01
Statistics are presented which show that women are accounting for an increasingly large proportion of the economically disadvantaged. Different sources of income (earned, public, and private transfer income) and the welfare system are discussed in terms of their roles in the perpetuation of female poverty. (Author/GC)
An extension of the Lighthill theory of jet noise to encompass refraction and shielding
NASA Technical Reports Server (NTRS)
Ribner, Herbert S.
1995-01-01
A formalism for jet noise prediction is derived that includes the refractive 'cone of silence' and other effects; outside the cone it approximates the simple Lighthill format. A key step is deferral of the simplifying assumption of uniform density in the dominant 'source' term. The result is conversion to a convected wave equation retaining the basic Lighthill source term. The main effect is to amend the Lighthill solution to allow for refraction by mean flow gradients, achieved via a frequency-dependent directional factor. A general formula for power spectral density emitted from unit volume is developed as the Lighthill-based value multiplied by a squared 'normalized' Green's function (the directional factor), referred to a stationary point source. The convective motion of the sources, with its powerful amplifying effect, also directional, is already accounted for in the Lighthill format: wave convection and source convection are decoupled. The normalized Green's function appears to be near unity outside the refraction-dominated 'cone of silence'; this validates our long-term practice of using Lighthill-based approaches outside the cone, with extension inside via the Green's function. The function is obtained either experimentally (injected 'point' source) or numerically (computational aeroacoustics). Approximation by unity seems adequate except near the cone and except when there are shrouding jets: in that case the difference from unity quantifies the shielding effect. Further extension yields dipole and monopole source terms (cf. Morfey, Mani, and others) when the mean flow possesses density gradients (e.g., hot jets).
Hockenberry, Jason M; Lien, Hsien-Ming; Chou, Shin-Yi
2010-01-01
Objective To investigate whether provider volume has an impact on the hazard of mortality for coronary artery bypass grafting (CABG) patients in Taiwan. Data Sources/Study Setting Multiple sources of linked data from the National Health Insurance Program in Taiwan. Study Design The linked data were used to identify 27,463 patients who underwent CABG without concomitant angioplasty or valve procedures and the surgeon and hospital volumes. Generalized estimating equations and hazard models were estimated to assess the impact of volume on mortality. The hazard modeling technique used accounts for bias stemming from unobserved heterogeneity. Principal Findings Both surgeon and hospital volume quartiles are inversely related to the hazard of mortality after CABG. Patients whose surgeon is in the three higher volume quartiles have lower 1-, 3-, 6-, and 12-month mortality after CABG, while only those having their procedure performed at the highest quartile of volume hospitals have lower mortality outcomes. Conclusions Mortality outcomes are related to provider CABG volume in Taiwan. Unobserved heterogeneity is a concern in the volume–outcome relationship; after accounting for it, surgeon volume effects on short-term mortality are large. Using models controlling for unobserved heterogeneity and examining longer term mortality may still differentiate provider quality by volume. PMID:20662948
Theory of step on leading edge of negative corona current pulse
NASA Astrophysics Data System (ADS)
Gupta, Deepak K.; Mahajan, Sangeeta; John, P. I.
2000-03-01
Theoretical models taking into account different feedback source terms (e.g., ion-impact electron emission, photo-electron emission, field emission, etc.) have been proposed for the existence and explanation of the shape of the negative corona current pulse, including the step on the leading edge. In the present work, a negative corona current pulse with a step on the leading edge is obtained in the presence of the ion-impact electron emission feedback source only. The step on the leading edge is explained in terms of the plasma formation process and enhancement of the feedback source. Ionization wave-like movement toward the cathode is observed after the step. The conditions for the existence of a current pulse, with and without the step on the leading edge, are also described. A qualitative comparison with earlier theoretical and experimental work is also included.
A multi-source data assimilation framework for flood forecasting: Accounting for runoff routing lags
NASA Astrophysics Data System (ADS)
Meng, S.; Xie, X.
2015-12-01
In flood forecasting practice, model performance is usually degraded by various sources of uncertainty, including uncertainties in input data, model parameters, model structures and output observations. Data assimilation is a useful methodology for reducing these uncertainties. For short-term flood forecasting, an accurate estimate of the initial soil moisture condition improves forecasting performance, and the time delay of runoff routing is another important factor affecting it. Moreover, observations of hydrological variables (including ground observations and satellite observations) are becoming readily available, so the reliability of short-term flood forecasting could be improved by assimilating multi-source data. The objective of this study is to develop a multi-source data assimilation framework for real-time flood forecasting. In this framework, the first step assimilates up-layer soil moisture observations to update the model state and generated runoff based on the ensemble Kalman filter (EnKF) method, and the second step assimilates discharge observations to update the model state and runoff within a fixed time window based on the ensemble Kalman smoother (EnKS) method. The smoothing technique is adopted to account for the runoff routing lag. Assimilating soil moisture and discharge observations in this way is expected to improve flood forecasting. To isolate the effectiveness of this dual-step assimilation framework, we designed a dual-EnKF algorithm in which the observed soil moisture and discharge are assimilated separately without accounting for the runoff routing lag. The results show that the multi-source data assimilation framework can effectively improve flood forecasting, especially when the runoff routing has a distinct time lag.
Thus, this new data assimilation framework holds great potential for operational flood forecasting by merging observations from ground measurements and remote sensing retrievals.
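The EnKF analysis step underlying the first stage of such a framework can be sketched as follows (a generic stochastic EnKF with perturbed observations, not the authors' implementation; the state vector, observation operator, and error statistics are placeholders):

```python
import numpy as np

def enkf_update(ensemble, obs, obs_operator, obs_err_std, rng):
    """Stochastic EnKF analysis step with perturbed observations.
    ensemble: (n_members, n_state) forecast ensemble,
    obs: (n_obs,) observation vector,
    obs_operator: maps one state vector to observation space."""
    n_mem = ensemble.shape[0]
    Hx = np.array([obs_operator(m) for m in ensemble])   # (n_mem, n_obs)
    X = ensemble - ensemble.mean(axis=0)                 # state anomalies
    Y = Hx - Hx.mean(axis=0)                             # obs-space anomalies
    R = np.eye(obs.size) * obs_err_std**2                # obs error covariance
    Pxy = X.T @ Y / (n_mem - 1)                          # state-obs covariance
    Pyy = Y.T @ Y / (n_mem - 1) + R                      # innovation covariance
    K = Pxy @ np.linalg.inv(Pyy)                         # Kalman gain
    perturbed = obs + rng.normal(0, obs_err_std, (n_mem, obs.size))
    return ensemble + (perturbed - Hx) @ K.T
```

An EnKS extension of the same update, applied over a fixed lag window of past states, is what lets the discharge observations correct runoff generated earlier, as the abstract describes.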
NASA Astrophysics Data System (ADS)
Marques, G.; Fraga, C. C. S.; Medellin-Azuara, J.
2016-12-01
The expansion and operation of urban water supply systems under growing demands, hydrologic uncertainty and water scarcity require a strategic combination of supply sources for reliability, reduced costs and improved operational flexibility. The design and operation of such a portfolio of water supply sources involves integration of long- and short-term planning to determine what and when to expand, and how much to use of each supply source, accounting for interest rates, economies of scale and hydrologic variability. This research presents an integrated methodology coupling dynamic programming optimization with quadratic programming to optimize the expansion (long term) and operations (short term) of multiple water supply alternatives. Lagrange multipliers produced by the short-term model provide a signal about the marginal opportunity cost of expansion to the long-term model, in an iterative procedure. A simulation model hosts the water supply infrastructure and hydrologic conditions. Results allow (a) identification of trade-offs between cost and reliability of different expansion paths and water use decisions; (b) evaluation of water transfers between urban supply systems; and (c) evaluation of potential gains from reducing water system losses as a portfolio component. The latter is critical in several developing countries where water supply system losses are high and often neglected in favor of further system expansion.
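The coupling described above hinges on the short-term model's Lagrange multiplier acting as a shadow price. A toy two-source allocation illustrates the idea (an illustrative sketch with quadratic costs of my own choosing, not the paper's model):

```python
def allocate_two_sources(D, a1, b1, a2, b2):
    """Least-cost split of demand D across two supply sources with quadratic
    costs C_i(q) = a_i*q + 0.5*b_i*q**2, by equalizing marginal costs.
    Returns (q1, q2, lam), where lam is the marginal cost of serving one more
    unit of demand -- the Lagrange multiplier ('shadow price') that a
    long-term expansion model could read as the marginal value of capacity."""
    # Interior solution of a1 + b1*q1 = a2 + b2*q2 with q1 + q2 = D:
    q1 = (a2 - a1 + b2 * D) / (b1 + b2)
    q1 = min(max(q1, 0.0), D)        # clip to the feasible corner if needed
    q2 = D - q1
    # Marginal cost of the next unit (the cheaper margin binds at a corner).
    lam = min(a1 + b1 * q1, a2 + b2 * q2)
    return q1, q2, lam
```

A rising `lam` across hydrologic scenarios would signal the long-term model that capacity expansion (or loss reduction) has become worthwhile, which is the iterative hand-off the abstract describes.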
Source term evaluation for combustion modeling
NASA Technical Reports Server (NTRS)
Sussman, Myles A.
1993-01-01
A modification is developed for application to the source terms used in combustion modeling. The modification accounts for the error of the finite difference scheme in regions where chain-branching chemical reactions produce exponential growth of species densities. The modification is first applied to a one-dimensional scalar model problem. It is then generalized to multiple chemical species, and used in quasi-one-dimensional computations of shock-induced combustion in a channel. Grid refinement studies demonstrate the improved accuracy of the method using this modification. The algorithm is applied in two spatial dimensions and used in simulations of steady and unsteady shock-induced combustion. Comparisons with ballistic range experiments give confidence in the numerical technique and the 9-species hydrogen-air chemistry model.
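The kind of finite-difference error addressed here arises because a plain explicit update under-resolves exponential species growth; for the linearized source term, an exact exponential propagator removes it entirely (an illustrative sketch of the issue, not the paper's specific modification):

```python
import math

def integrate_growth(y0, k, h, steps, scheme="euler"):
    """Integrate dy/dt = k*y, a linearized chain-branching source term.
    'euler' uses the plain finite-difference update y += h*k*y, which
    underestimates exponential growth for finite k*h; 'exponential' uses
    the exact propagator y *= exp(k*h)."""
    y = y0
    for _ in range(steps):
        if scheme == "euler":
            y += h * k * y
        else:
            y *= math.exp(k * h)
    return y
```

Comparing the two schemes at a fixed step size shows the growing discrepancy that, per the abstract, motivates modifying the source term in regions of chain-branching growth rather than simply refining the grid.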
Geocoronal hydrogen studies using Fabry Perot interferometers, part 2: Long-term observations
NASA Astrophysics Data System (ADS)
Nossal, S. M.; Mierkiewicz, E. J.; Roesler, F. L.; Reynolds, R. J.; Haffner, L. M.
2006-09-01
Long-term data sets are required to investigate sources of natural variability in the upper atmosphere. Understanding the influence of sources of natural variability such as the solar cycle is needed to characterize the thermosphere + exosphere, to understand coupling processes between atmospheric regions, and to isolate signatures of natural variability from those due to human-caused change. Multi-year comparisons of thermospheric + exospheric Balmer α emissions require cross-calibrated and well-understood instrumentation, a stable calibration source, reproducible observing conditions, separation of the terrestrial from the Galactic emission line, and consistent data analysis accounting for differences in viewing geometry. We discuss how we address these criteria in the acquisition and analysis of a mid-latitude geocoronal Balmer α column emission data set now spanning two solar cycles and taken mainly from Wisconsin and Kitt Peak, Arizona. We also discuss results and outstanding challenges for increasing the accuracy and use of these observations.
Vlachogianni, Thomais; Fortibuoni, Tomaso; Ronchi, Francesca; Zeri, Christina; Mazziotti, Cristina; Tutman, Pero; Varezić, Dubravka Bojanić; Palatinus, Andreja; Trdan, Štefan; Peterlin, Monika; Mandić, Milica; Markovic, Olivera; Prvan, Mosor; Kaberi, Helen; Prevenios, Michael; Kolitari, Jerina; Kroqi, Gulielm; Fusco, Marina; Kalampokis, Evangelos; Scoullos, Michael
2018-06-01
The abundance, composition and sources of marine litter were determined on beaches located in the seven countries of the Adriatic-Ionian macroregion, namely Albania, Bosnia and Herzegovina, Croatia, Greece, Italy, Montenegro and Slovenia. A total of 70,581 marine litter items were classified and recorded through year-long surveys carried out at 31 sites. The average litter density of 0.67 items/m² found within this study is considered to be relatively high. The beaches investigated differed in terms of human-induced pressures; most are classified as either semi-urban or semi-rural, while very few could be characterized as urban or remote/natural. The majority of litter items were made of artificial/anthropogenic polymer materials, accounting for 91.1% of all litter. Litter from shoreline sources accounted for 33.4% of all litter collected. The amount of litter from sea-based sources ranged across countries from 1.54% to 14.84%, with an average of 6.30% at the regional level. Copyright © 2018 Elsevier Ltd. All rights reserved.
The role of long-term familiarity and attentional maintenance in short-term memory for timbre.
Siedenburg, Kai; McAdams, Stephen
2017-04-01
We study short-term recognition of timbre using familiar recorded tones from acoustic instruments and unfamiliar transformed tones that do not readily evoke sound-source categories. Participants indicated whether the timbre of a probe sound matched with one of three previously presented sounds (item recognition). In Exp. 1, musicians better recognised familiar acoustic compared to unfamiliar synthetic sounds, and this advantage was particularly large in the medial serial position. There was a strong correlation between correct rejection rate and the mean perceptual dissimilarity of the probe to the tones from the sequence. Exp. 2 compared musicians' and non-musicians' performance with concurrent articulatory suppression, visual interference, and with a silent control condition. Both suppression tasks disrupted performance by a similar margin, regardless of musical training of participants or type of sounds. Our results suggest that familiarity with sound source categories and attention play important roles in short-term memory for timbre, which rules out accounts solely based on sensory persistence.
Yi, Qitao; Chen, Qiuwen; Hu, Liuming; Shi, Wenqing
2017-05-16
This research developed an innovative approach to reveal nitrogen sources, transformation, and transport in large and complex river networks in the Taihu Lake basin using measurement of dual stable isotopes of nitrate. The spatial patterns of δ¹⁵N corresponded to the urbanization level, and the nitrogen cycle was associated with the hydrological regime at the basin level. During the high flow season of summer, nonpoint sources from fertilizer/soils and atmospheric deposition constituted the highest proportion of the total nitrogen load. The point sources from sewage/manure, with high ammonium concentrations and high δ¹⁵N and δ¹⁸O contents in the form of nitrate, accounted for the largest inputs among all sources during the low flow season of winter. Hot spot areas with heavy point source pollution were identified, and the pollutant transport routes were revealed. Nitrification occurred widely during the warm seasons, with decreased δ¹⁸O values, whereas great potential for denitrification existed during the low flow seasons of autumn and spring. The study showed that point source reduction could have effects over the short term; however, long-term efforts to substantially control agricultural nonpoint sources are essential to alleviating eutrophication in the receiving lake, which clarifies the relationship between point and nonpoint source control.
NASA Astrophysics Data System (ADS)
Guinot, Vincent
2017-11-01
The validity of flux and source term formulae used in shallow water models with porosity for urban flood simulations is assessed by solving the two-dimensional shallow water equations over computational domains representing periodic building layouts. The models under assessment are the Single Porosity (SP), the Integral Porosity (IP) and the Dual Integral Porosity (DIP) models. Nine different geometries are considered, and 18 two-dimensional initial value problems and 6 two-dimensional boundary value problems are defined, resulting in a set of 96 fine grid simulations. Analysing the simulation results leads to the following conclusions: (i) the DIP flux and source term models outperform those of the SP and IP models when the Riemann problem is aligned with the main street directions; (ii) all models give erroneous flux closures when the Riemann problem is not aligned with one of the main street directions or when the main street directions are not orthogonal; (iii) the solution of the Riemann problem is self-similar in space-time when the street directions are orthogonal and the Riemann problem is aligned with one of them; (iv) a momentum balance confirms the existence of the transient momentum dissipation model presented in the DIP model; (v) none of the source term models presented so far in the literature accounts for all flow configurations; and (vi) future laboratory experiments aiming at the validation of flux and source term closures should focus on high-resolution, two-dimensional monitoring of both water depth and flow velocity fields.
10 CFR 35.406 - Brachytherapy sources accountability.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 10 Energy 1 2010-01-01 2010-01-01 false Brachytherapy sources accountability. 35.406 Section 35....406 Brachytherapy sources accountability. (a) A licensee shall maintain accountability at all times... area. (c) A licensee shall maintain a record of the brachytherapy source accountability in accordance...
NASA Astrophysics Data System (ADS)
Collier, J. D.; Tingay, S. J.; Callingham, J. R.; Norris, R. P.; Filipović, M. D.; Galvin, T. J.; Huynh, M. T.; Intema, H. T.; Marvil, J.; O'Brien, A. N.; Roper, Q.; Sirothia, S.; Tothill, N. F. H.; Bell, M. E.; For, B.-Q.; Gaensler, B. M.; Hancock, P. J.; Hindson, L.; Hurley-Walker, N.; Johnston-Hollitt, M.; Kapińska, A. D.; Lenc, E.; Morgan, J.; Procopio, P.; Staveley-Smith, L.; Wayth, R. B.; Wu, C.; Zheng, Q.; Heywood, I.; Popping, A.
2018-06-01
We present very long baseline interferometry observations of a faint and low-luminosity (L1.4 GHz < 10²⁷ W Hz⁻¹) gigahertz-peaked spectrum (GPS) and compact steep-spectrum (CSS) sample. We select eight sources from deep radio observations that have radio spectra characteristic of a GPS or CSS source and an angular size of θ ≲ 2 arcsec, and detect six of them with the Australian Long Baseline Array. We determine their linear sizes, and model their radio spectra using synchrotron self-absorption (SSA) and free-free absorption (FFA) models. We derive statistical model ages, based on a fitted scaling relation, and spectral ages, based on the radio spectrum, which are generally consistent with the hypothesis that GPS and CSS sources are young and evolving. We resolve the morphology of one CSS source with a radio luminosity of 10²⁵ W Hz⁻¹, and find what appear to be two hotspots spanning 1.7 kpc. We find that our sources follow the turnover-linear size relation, and that both homogeneous SSA and an inhomogeneous FFA model can account for the spectra with observable turnovers. All but one of the FFA models do not require a spectral break to account for the radio spectrum, while all but one of the alternative SSA and power-law models do. We conclude that our low-luminosity sample is similar to brighter samples in terms of spectral shape, turnover frequencies, linear sizes, and ages, but cannot test for a difference in morphology.
10 CFR 835.1202 - Accountable sealed radioactive sources.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 10 Energy 4 2010-01-01 2010-01-01 false Accountable sealed radioactive sources. 835.1202 Section 835.1202 Energy DEPARTMENT OF ENERGY OCCUPATIONAL RADIATION PROTECTION Sealed Radioactive Source Control § 835.1202 Accountable sealed radioactive sources. (a) Each accountable sealed radioactive source...
10 CFR 835.1202 - Accountable sealed radioactive sources.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 10 Energy 4 2013-01-01 2013-01-01 false Accountable sealed radioactive sources. 835.1202 Section 835.1202 Energy DEPARTMENT OF ENERGY OCCUPATIONAL RADIATION PROTECTION Sealed Radioactive Source Control § 835.1202 Accountable sealed radioactive sources. (a) Each accountable sealed radioactive source...
10 CFR 835.1202 - Accountable sealed radioactive sources.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 10 Energy 4 2012-01-01 2012-01-01 false Accountable sealed radioactive sources. 835.1202 Section 835.1202 Energy DEPARTMENT OF ENERGY OCCUPATIONAL RADIATION PROTECTION Sealed Radioactive Source Control § 835.1202 Accountable sealed radioactive sources. (a) Each accountable sealed radioactive source...
10 CFR 835.1202 - Accountable sealed radioactive sources.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 10 Energy 4 2011-01-01 2011-01-01 false Accountable sealed radioactive sources. 835.1202 Section 835.1202 Energy DEPARTMENT OF ENERGY OCCUPATIONAL RADIATION PROTECTION Sealed Radioactive Source Control § 835.1202 Accountable sealed radioactive sources. (a) Each accountable sealed radioactive source...
10 CFR 835.1202 - Accountable sealed radioactive sources.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 10 Energy 4 2014-01-01 2014-01-01 false Accountable sealed radioactive sources. 835.1202 Section 835.1202 Energy DEPARTMENT OF ENERGY OCCUPATIONAL RADIATION PROTECTION Sealed Radioactive Source Control § 835.1202 Accountable sealed radioactive sources. (a) Each accountable sealed radioactive source...
On Seizing the Source: Toward a Phenomenology of Religious Violence
Staudigl, Michael
2016-01-01
Abstract In this paper I argue that we need to analyze ‘religious violence’ in the ‘post-secular context’ in a twofold way: rather than simply viewing it in terms of mere irrationality, senselessness, atavism, or monstrosity – terms which, as we witness today on an immense scale, are strongly endorsed by the contemporary theater of cruelty committed in the name of religion – we also need to understand it in terms of an ‘originary supplement’ of ‘disengaged reason’. In order to confront its specificity beyond traditional explanations of violence, I propose an integrated phenomenological account of religion that traces the phenomenality of religion in terms of a correlation between the originary givenness of transcendence and capable man’s creative capacities to respond to it. Following Ricœur, I discuss ‘religious violence’ in terms of a monopolizing appropriation of the originary source of givenness that conflates man’s freedom to poetically respond to the appeal of the foundational with the surreptitiously claimed sovereignty to make it happen in a practical transfiguration of the everyday. PMID:28690372
Neutron crosstalk between liquid scintillators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Verbeke, J. M.; Prasad, M. K.; Snyderman, N. J.
2015-05-01
We propose a method to quantify the fractions of neutrons scattering between liquid scintillators. Using a spontaneous fission source, this method can be utilized to quickly characterize an array of liquid scintillators in terms of crosstalk. The point model theory due to Feynman is corrected to account for these multiple scatterings. Using spectral information measured by the liquid scintillators, fractions of multiple scattering can be estimated, and mass reconstruction of fissile materials under investigation can be improved. Monte Carlo simulations of mono-energetic neutron sources were performed to estimate neutron crosstalk. A californium source in an array of liquid scintillators was modeled to illustrate the improvement of the mass reconstruction.
Lawrence, Corey R.; Reynolds, Richard L.; Ketterer, Michael E.; Neff, Jason C.
2013-01-01
When dust inputs are large or have persisted for long periods of time, the signature of dust additions is often apparent in soils. The influence of dust will be greatest where the geochemical composition of dust is distinct from local sources of soil parent material. In this study the influence of dust accretion on soil geochemistry is quantified for two different soils from the San Juan Mountains of southwestern Colorado, USA. At both study sites, dust is enriched in several trace elements relative to local rock, especially Cd, Cu, Pb, and Zn. Mass-balance calculations that do not explicitly account for dust inputs indicate the accumulation of some elements in soil beyond what can be explained by weathering of local rock. Most observed elemental enrichments are explained by accounting for the long-term accretion of dust, based on modern isotopic and geochemical estimates. One notable exception is Pb, which based on mass-balance calculations and isotopic measurements may have an additional source at one of the study sites. These results suggest that dust is a major factor influencing the development of soil in these settings and is also an important control on soil weathering fluxes. After accounting for dust inputs in mass-balance calculations, Si weathering fluxes from San Juan Mountain soils are within the range observed for other temperate systems. Comparing dust inputs with mass-balance-based flux estimates suggests dust could account for as much as 50–80% of total long-term chemical weathering fluxes. These results support the notion that dust inputs may sustain chemical weathering fluxes even in relatively young continental settings. Given the widespread input of far-traveled dust, the weathering of dust is likely an important and underappreciated aspect of the global weathering engine.
NASA's Quiet Aircraft Technology Project
NASA Technical Reports Server (NTRS)
Whitfield, Charlotte E.
2004-01-01
NASA's Quiet Aircraft Technology Project is developing physics-based understanding, models and concepts to discover and realize technology that will, when implemented, achieve the goals of a reduction of one-half in perceived community noise (relative to 1997) by 2007 and a further one-half in the far term. Noise sources generated by both the engine and the airframe are considered, and the effects of engine/airframe integration are accounted for through the propulsion airframe aeroacoustics element. Assessments of the contribution of individual source noise reductions to the reduction in community noise are developed to guide the work and the development of new tools for evaluation of unconventional aircraft is underway. Life in the real world is taken into account with the development of more accurate airport noise models and flight guidance methodology, and in addition, technology is being developed that will further reduce interior noise at current weight levels or enable the use of lighter-weight structures at current noise levels.
10 CFR 35.2406 - Records of brachytherapy source accountability.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 10 Energy 1 2010-01-01 2010-01-01 false Records of brachytherapy source accountability. 35.2406... Records of brachytherapy source accountability. (a) A licensee shall maintain a record of brachytherapy source accountability required by § 35.406 for 3 years. (b) For temporary implants, the record must...
Regularized Dual Averaging Image Reconstruction for Full-Wave Ultrasound Computed Tomography.
Matthews, Thomas P; Wang, Kun; Li, Cuiping; Duric, Neb; Anastasio, Mark A
2017-05-01
Ultrasound computed tomography (USCT) holds great promise for breast cancer screening. Waveform inversion-based image reconstruction methods account for higher order diffraction effects and can produce high-resolution USCT images, but are computationally demanding. Recently, a source encoding technique has been combined with stochastic gradient descent (SGD) to greatly reduce image reconstruction times. However, this method bundles the stochastic data fidelity term with the deterministic regularization term. This limitation can be overcome by replacing SGD with a structured optimization method, such as the regularized dual averaging method, that exploits knowledge of the composition of the cost function. In this paper, the dual averaging method is combined with source encoding techniques to improve the effectiveness of regularization while maintaining the reduced reconstruction times afforded by source encoding. It is demonstrated that each iteration can be decomposed into a gradient descent step based on the data fidelity term and a proximal update step corresponding to the regularization term. Furthermore, the regularization term is never explicitly differentiated, allowing nonsmooth regularization penalties to be naturally incorporated. The wave equation is solved by the use of a time-domain method. The effectiveness of this approach is demonstrated through computer simulation and experimental studies. The results suggest that the dual averaging method can produce images with less noise and comparable resolution to those obtained by the use of SGD.
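The decomposition described above — a gradient step on the data fidelity term followed by a proximal step that handles the nonsmooth regularizer without ever differentiating it — can be sketched on a toy ℓ1-regularized least-squares problem. Note this sketch uses plain proximal gradient descent rather than the paper's regularized dual averaging or source encoding, and the matrix, data, and parameters below are hypothetical.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||x||_1 (never requires a derivative)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_grad(A, b, lam, step, iters=200):
    """Each iteration splits into:
       1) a gradient descent step on the smooth data fidelity ||Ax - b||^2 / 2
       2) a proximal update for the nonsmooth regularizer lam * ||x||_1
    """
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)                     # data-fidelity gradient
        x = soft_threshold(x - step * grad, step * lam)  # proximal step
    return x

# Toy problem: identity forward operator, one strong and one weak coefficient.
x = prox_grad(np.eye(2), np.array([1.0, 0.05]), lam=0.3, step=1.0)
```

The small coefficient is driven exactly to zero by the proximal step, illustrating how nonsmooth penalties are incorporated naturally, in contrast to bundling the regularizer into a stochastic gradient.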
Cost of care of haemophilia with inhibitors.
Di Minno, M N D; Di Minno, G; Di Capua, M; Cerbone, A M; Coppola, A
2010-01-01
In Western countries, the treatment of patients with inhibitors is presently the most challenging and serious issue in haemophilia management, with direct costs of clotting factor concentrates accounting for >98% of the economic burden of healthcare for patients in this setting. Being designed to address questions of resource allocation and effectiveness, decision models are the gold standard for reliably assessing the overall economic implications of haemophilia with inhibitors in terms of mortality, bleeding-related morbidity, and severity of arthropathy. However, most current data analyses stem from retrospective short-term evaluations that only allow for the analysis of direct health costs. In the setting of chronic diseases, cost-utility analysis, which takes into account the beneficial effects of a given treatment or healthcare intervention in terms of health-related quality of life, is likely to be the most appropriate approach. To calculate net benefits, the quality-adjusted life year, which significantly reflects such health gain, has to be compared with specific economic impacts. Differences in data sources, in medical practice, and/or in healthcare systems and costs imply that most current pharmacoeconomic analyses are confined to a narrow healthcare-payer perspective. Long-term/lifetime prospective or observational studies, devoted to a careful definition of when to start a treatment, of regimens (dose and type of product) to employ, and of the inhibitor population (children/adults, low-responding/high-responding inhibitors) to study, are thus urgently needed to allow for newer insights, based on reliable data sources, into resource allocation, effectiveness and cost-utility analysis in the treatment of haemophiliacs with inhibitors.
Logic-based assessment of the compatibility of UMLS ontology sources
2011-01-01
Background The UMLS Metathesaurus (UMLS-Meta) is currently the most comprehensive effort for integrating independently-developed medical thesauri and ontologies. UMLS-Meta is being used in many applications, including PubMed and ClinicalTrials.gov. The integration of new sources combines automatic techniques, expert assessment, and auditing protocols. The automatic techniques currently in use, however, are mostly based on lexical algorithms and often disregard the semantics of the sources being integrated. Results In this paper, we argue that UMLS-Meta’s current design and auditing methodologies could be significantly enhanced by taking into account the logic-based semantics of the ontology sources. We provide empirical evidence suggesting that UMLS-Meta in its 2009AA version contains a significant number of errors; these errors become immediately apparent if the rich semantics of the ontology sources is taken into account, manifesting themselves as unintended logical consequences that follow from the ontology sources together with the information in UMLS-Meta. We then propose general principles and specific logic-based techniques to effectively detect and repair such errors. Conclusions Our results suggest that the methodologies employed in the design of UMLS-Meta are not only very costly in terms of human effort, but also error-prone. The techniques presented here can be useful for both reducing human effort in the design and maintenance of UMLS-Meta and improving the quality of its contents. PMID:21388571
Phytoplankton fuels Delta food web
Jassby, Alan D.; Cloern, James E.; Muller-Solger, A. B.
2003-01-01
Populations of certain fishes and invertebrates in the Sacramento-San Joaquin Delta have declined in abundance in recent decades and there is evidence that food supply is partly responsible. While many sources of organic matter in the Delta could be supporting fish populations indirectly through the food web (including aquatic vegetation and decaying organic matter from agricultural drainage), a careful accounting shows that phytoplankton is the dominant food source. Phytoplankton, communities of microscopic free-floating algae, are the most important food source on a Delta-wide scale when both food quantity and quality are taken into account. These microscopic algae have declined since the late 1960s. Fertilizer and pesticide runoff do not appear to be playing a direct role in long-term phytoplankton changes; rather, species invasions, increasing water transparency and fluctuations in water transport are responsible. Although the potential toxicity of herbicides and pesticides to plankton in the Delta is well documented, the ecological significance remains speculative. Nutrient inputs from agricultural runoff at current levels, in combination with increasing transparency, could result in harmful algal blooms.
NASA Astrophysics Data System (ADS)
García-Mayordomo, J.; Gaspar-Escribano, J. M.; Benito, B.
2007-10-01
A probabilistic seismic hazard assessment of the Province of Murcia in terms of peak ground acceleration (PGA) and spectral accelerations [SA(T)] is presented in this paper. In contrast to most previous studies in the region, which were performed for PGA making use of intensity-to-PGA relationships, hazard is here calculated in terms of magnitude and using European spectral ground-motion models. Moreover, we have considered the most important faults in the region as specific seismic sources, and also comprehensively reviewed the earthquake catalogue. Hazard calculations are performed following the Probabilistic Seismic Hazard Assessment (PSHA) methodology using a logic tree, which accounts for three different seismic source zonings and three different ground-motion models. Hazard maps in terms of PGA and SA(0.1, 0.2, 0.5, 1.0 and 2.0 s) and coefficient of variation (COV) for the 475-year return period are shown. Subsequent analysis is focused on three sites of the province, namely, the cities of Murcia, Lorca and Cartagena, which are important industrial and tourism centres. Results at these sites have been analysed to evaluate the influence of the different input options. The most important factor affecting the results is the choice of the attenuation relationship, whereas the influence of the selected seismic source zonings appears strongly site-dependent. Finally, we have performed an analysis of source contribution to hazard at each of these cities to provide preliminary guidance in devising specific risk scenarios. We have found that local source zones control the hazard for PGA and SA(T ≤ 1.0 s), although the contribution from specific fault sources and long-distance north Algerian sources becomes significant from SA(0.5 s) onwards.
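The logic-tree aggregation step can be sketched as follows: each branch (a combination of source zoning and ground-motion model) yields an annual exceedance rate for a given PGA level, and the weighted mean over branches gives the hazard estimate while the spread gives the COV. The rates and weights below are hypothetical, not values from the study.

```python
import numpy as np

# Hypothetical per-branch annual exceedance rates for one PGA level,
# e.g. three branches of a (zoning x ground-motion model) logic tree.
branch_rates = np.array([2.1e-3, 1.5e-3, 2.6e-3])
weights      = np.array([0.5,    0.3,    0.2])   # logic-tree weights, sum to 1

mean_rate = np.sum(weights * branch_rates)                    # weighted mean hazard
std_rate  = np.sqrt(np.sum(weights * (branch_rates - mean_rate) ** 2))
cov = std_rate / mean_rate                                     # coefficient of variation

# Probability of at least one exceedance in a 475-year exposure (Poisson model):
prob_475 = 1.0 - np.exp(-mean_rate * 475.0)
```

Repeating this over a grid of PGA levels produces the mean hazard curve and its COV map; the 475-year return period corresponds to a 10% probability of exceedance in 50 years.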
DOE Office of Scientific and Technical Information (OSTI.GOV)
A.A. Bingham; R.M. Ferrer; A.M. Ougouag
2009-09-01
An accurate and computationally efficient two- or three-dimensional neutron diffusion model will be necessary for the development, safety parameter computation, and fuel cycle analysis of a prismatic Very High Temperature Reactor (VHTR) design under the Next Generation Nuclear Plant Project (NGNP). For this purpose, an analytical nodal Green’s function solution for the transverse integrated neutron diffusion equation is developed in two- and three-dimensional hexagonal geometry. This scheme is incorporated into HEXPEDITE, a code first developed by Fitzpatrick and Ougouag. HEXPEDITE neglects non-physical discontinuity terms that arise in the transverse leakage due to the application of the transverse integration procedure to hexagonal geometry, and cannot account for the effects of burnable poisons across nodal boundaries. The test code being developed for this document accounts for these terms by maintaining an inventory of neutrons, using the nodal balance equation as a constraint on the neutron flux equation. The method developed in this report is intended to restore neutron conservation and increase the accuracy of the code by adding these terms to the transverse integrated flux solution and applying the nodal Green’s function solution to the resulting equation to derive a semi-analytical solution.
Association of Internet search trends with suicide death in Taipei City, Taiwan, 2004-2009.
Yang, Albert C; Tsai, Shi-Jen; Huang, Norden E; Peng, Chung-Kang
2011-07-01
Although the Internet has become an important source for affected people seeking suicide information, the connection between Internet searches for suicide information and suicide death remains largely unknown. This study aims to evaluate the association between suicide and Internet search trends for 37 suicide-related terms representing major known risks of suicide. The study analyzes suicide death data in Taipei City, Taiwan and corresponding local Internet search trend data provided by Google Insights for Search during the period from January 2004 to December 2009. The investigation uses cross-correlation analysis to estimate the temporal relationship between suicide and Internet search trends, and multiple linear regression analysis to identify significant factors associated with suicide from a pool of search trend data that either coincides with or precedes suicide deaths. Results show that a set of suicide-related search terms, the trends of which either temporally coincided with or preceded trends in the suicide data, was associated with suicide death. These search factors varied among different suicide samples. Searches for "major depression" and "divorce" accounted for, at most, 30.2% of the variance in suicide data. When considering only leading suicide trends, searches for "divorce" and the pro-suicide term "complete guide of suicide" accounted for 22.7% of the variance in suicide data. Appropriate filtering and detection of potentially harmful sources in keyword-driven search results by search engine providers may be a reasonable strategy to reduce suicide deaths. Copyright © 2011 Elsevier B.V. All rights reserved.
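The lagged cross-correlation step can be sketched on synthetic monthly series (the real study used Google Insights data; the series, period, and lag below are invented for illustration):

```python
import numpy as np

def lagged_corr(x, y, lag):
    """Pearson r between x shifted `lag` steps earlier and y.

    A peak at a positive lag means the search-trend series x leads
    the outcome series y.
    """
    if lag > 0:
        x, y = x[:-lag], y[lag:]
    elif lag < 0:
        x, y = x[-lag:], y[:lag]
    return np.corrcoef(x, y)[0, 1]

# Synthetic example: a seasonal search trend that the outcome trails by 2 months.
t = np.arange(48)                          # 48 hypothetical months
searches = np.sin(2 * np.pi * t / 12)      # 12-month seasonal search trend
deaths = np.roll(searches, 2)              # outcome lags searches by 2 months

# Scan lags 0..5 months and pick the lag with the strongest correlation.
best_lag = max(range(0, 6), key=lambda k: lagged_corr(searches, deaths, k))
```

Terms whose best correlation occurs at a positive lag are candidate leading indicators, which is the basis for the regression on "leading suicide trends" described above.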
Studying the Puzzle of the Pion Nucleon Sigma Term
NASA Astrophysics Data System (ADS)
Kane, Christopher; Lin, Huey-Wen
2017-09-01
The pion nucleon sigma term (σπN) is a fundamental parameter of QCD and is integral to the experimental search for dark matter particles, as it is used to calculate the cross section of interactions between potential dark matter candidates and nucleons. Recent calculations of this term from lattice-QCD (LQCD) data disagree with calculations done using phenomenological data. This disparity is large enough to cause concern in the dark matter community, as it would change the constraints on their experiments. We investigate one potential source of this disparity by studying the flavor dependence of the LQCD data used to calculate σπN. To calculate σπN, we study the dependence of the nucleon mass on the pion mass and apply the Hellmann-Feynman theorem. Previous calculations only consider LQCD data that accounted for 2 and 3 of the lightest quarks in the quark sea. We extend this study by using new high-statistics data that considers 2, 3, and 4 quarks in the quark sea to see if the exclusion of the heavier quarks can account for this disparity. National Science Foundation.
Factors influencing atmospheric composition over subarctic North America during summer
NASA Technical Reports Server (NTRS)
Wofsy, Steven C.; Fan, S. -M.; Blake, D. R.; Bradshaw, J. D.; Sandholm, S. T.; Singh, H. B.; Sachse, G. W.; Harriss, R. C.
1994-01-01
Elevated concentrations of hydrocarbons, CO, and nitrogen oxides were observed in extensive haze layers over northeastern Canada in the summer of 1990, during ABLE 3B. Halocarbon concentrations remained near background in most layers, indicating a source from biomass wildfires. Elevated concentrations of C2Cl4 provided a sensitive indicator for pollution from urban/industrial sources. Detailed analysis of regional budgets for CO and hydrocarbons indicates that biomass fires accounted for approximately 70% of the input to the subarctic for most hydrocarbons and for acetone, and more than 50% for CO. Regional sources for many species (including CO) exceeded chemical sinks during summer, and the boreal region provided a net source to midlatitudes. Interannual variations and long-term trends in atmospheric composition are sensitive to climatic change; a shift to warmer, drier conditions could increase the areas burned and thus the sources of many trace gases.
Successful Municipal Separate Storm Sewer System Programs Implemented in the Navy - NESDI #494
2014-06-01
account. Lastly, upon speaking with numerous stormwater personnel who use a spreadsheet software for data tracking, they recommended that staying well...existing data sources, gathering and maintaining the data needed, and completing and reviewing this collection of information. Send comments regarding...an organized manner. In the long-term, a comprehensive electronic methodology is recommended to keep data organized, be more efficient and to keep
Widdowson, M.A.; Chapelle, F.H.; Brauner, J.S.; ,
2003-01-01
A method is developed for optimizing monitored natural attenuation (MNA) and the reduction in the aqueous source zone concentration (ΔC) required to meet a site-specific regulatory target concentration. The mathematical model consists of two one-dimensional equations of mass balance for the aqueous-phase contaminant, to coincide with up to two distinct zones of transformation, and appropriate boundary and intermediate conditions. The solution is written in terms of zone-dependent Peclet and Damköhler numbers. The model is illustrated at a chlorinated solvent site where MNA was implemented following source treatment using in-situ chemical oxidation. The results demonstrate that by not taking into account a variable natural attenuation capacity (NAC), a lower target ΔC is predicted, resulting in unnecessary source concentration reduction and cost with little benefit to achieving site-specific remediation goals.
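A minimal sketch of how a zone Damköhler number controls the allowable source-zone concentration, assuming steady one-dimensional advection with first-order attenuation, so C(L) = C0·exp(−Da) with Da = kL/v (the authors' two-zone model is richer; all parameter values below are hypothetical):

```python
# Illustrative only: under steady 1-D advection with first-order natural
# attenuation, the maximum source concentration meeting a target at distance L
# grows as exp(Da), Da = k*L/v. A zone with higher attenuation capacity
# (larger k) tolerates a higher source concentration, i.e. less costly source
# reduction -- the qualitative point of the abstract.
import math

def required_source_concentration(c_target, k, L, v):
    """Max aqueous source-zone concentration (same units as c_target) that
    still meets c_target at distance L, for decay rate k and velocity v."""
    da = k * L / v                      # zone Damkohler number
    return c_target * math.exp(da)

c_tgt = 5.0                                                            # ug/L target
weak = required_source_concentration(c_tgt, k=0.002, L=100.0, v=0.1)   # Da = 2
strong = required_source_concentration(c_tgt, k=0.004, L=100.0, v=0.1) # Da = 4
```

Ignoring the stronger attenuation zone (using the "weak" value for the whole path) understates the allowable source concentration, which is the over-treatment scenario the abstract warns about.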
DOE Office of Scientific and Technical Information (OSTI.GOV)
J.C. Ryman
This calculation is a revision of a previous calculation (Ref. 7.5) that bears the same title and has the document identifier BBAC00000-01717-0210-00006 REV 01. The purpose of this revision is to remove TBV-4110 (to-be-verified) associated with the output files of the previous version (Ref. 7.30). The purpose of this and the previous calculation is to generate source terms for a representative boiling water reactor (BWR) spent nuclear fuel (SNF) assembly for the first one million years after the SNF is discharged from the reactors. This calculation includes an examination of several ways to represent BWR assemblies and operating conditions in SAS2H in order to quantify the effects these representations may have on source terms. These source terms provide information characterizing the neutron and gamma spectra in particles per second, the decay heat in watts, and radionuclide inventories in curies. Source terms are generated for a range of burnups and enrichments (see Table 2) that are representative of the waste stream and stainless steel (SS) clad assemblies. During this revision, it was determined that the burnups used for the computer runs of the previous revision were actually about 1.7% less than the stated, or nominal, burnups. See Section 6.6 for a discussion of how to account for this effect before using any source terms from this calculation. The source term due to the activation of corrosion products deposited on the surfaces of the assembly from the coolant is also calculated. The results of this calculation support many areas of the Monitored Geologic Repository (MGR), including thermal evaluation, radiation dose determination, radiological safety analyses, surface and subsurface facility designs, and total system performance assessment. This includes MGR items classified as Quality Level 1, for example, the Uncanistered Spent Nuclear Fuel Disposal Container (Ref. 7.27, page 7).
Therefore, this calculation is subject to the requirements of the Quality Assurance Requirements and Description (Ref. 7.28). The performance of the calculation and development of this document are carried out in accordance with AP-3.124, "Design Calculation and Analyses" (Ref. 7.29).
Renewable energies in electricity generation for reduction of greenhouse gases in Mexico 2025.
Islas, Jorge; Manzini, Fabio; Martínez, Manuel
2002-02-01
This study presents 4 scenarios relating to the environmental futures of electricity generation in Mexico up to the year 2025. The first scenario emphasizes the use of oil products, particularly fuel oil, and represents the historic path of Mexico's energy policy. The second scenario prioritizes the use of natural gas, reflecting the energy consumption pattern that arose in the mid-1990s as a result of reforms in the energy sector. In the third scenario, the high participation of renewable sources of energy is considered feasible from a technical and economic point of view. The fourth scenario takes into account the present- and medium-term use of natural-gas technologies that the energy reform has produced, but after 2007 a high and feasible participation of renewable sources of energy is considered. The 4 scenarios are evaluated up to the year 2025 in terms of greenhouse gases (GHG) and acid rain precursor gases (ARPG).
Computational study of radiation doses at UNLV accelerator facility
NASA Astrophysics Data System (ADS)
Hodges, Matthew; Barzilov, Alexander; Chen, Yi-Tung; Lowe, Daniel
2017-09-01
A Varian K15 electron linear accelerator (linac) has been considered for installation at University of Nevada, Las Vegas (UNLV). Before experiments can be performed, it is necessary to evaluate the photon and neutron spectra as generated by the linac, as well as the resulting dose rates within the accelerator facility. A computational study using MCNPX was performed to characterize the source terms for the bremsstrahlung converter. The 15 MeV electron beam available in the linac is above the photoneutron threshold energy for several materials in the linac assembly, and as a result, neutrons must be accounted for. The angular and energy distributions for bremsstrahlung flux generated by the interaction of the 15 MeV electron beam with the linac target were determined. This source term was used in conjunction with the K15 collimators to determine the dose rates within the facility.
40 CFR 74.50 - Deducting opt-in source allowances from ATS accounts.
Code of Federal Regulations, 2011 CFR
2011-07-01
... (CONTINUED) AIR PROGRAMS (CONTINUED) SULFUR DIOXIDE OPT-INS Allowance Tracking and Transfer and End of Year... any Allowance Tracking System accounts in which they are held, the allowances in an amount specified... any Allowance Tracking System Account other than the account of the source that includes opt-in source...
A Turbulence model taking into account the longitudinal flow inhomogeneity in mixing layers and jets
NASA Astrophysics Data System (ADS)
Troshin, A. I.
2017-06-01
The problem of the overestimation of the potential core length of subsonic free jets by Reynolds-averaged Navier-Stokes (RANS) turbulence models is addressed. It is shown that the issue is due to incorrect velocity-profile modeling of the jet mixing layers. An additional source term in the ω equation is proposed which takes into account the effect of longitudinal flow inhomogeneity on turbulence in mixing layers. Computations confirm that the modified Speziale-Sarkar-Gatski/Launder-Reece-Rodi-omega (SSG/LRR-ω) turbulence model correctly predicts the mean velocity profiles in both the initial and far-field regions of a subsonic free plane jet, as well as the centerline velocity decay rate.
2002-03-01
source term. Several publications provided a thorough accounting of the accident, including “Chernobyl Record” [Mould], and the NRC technical report...“Report on the Accident at the Chernobyl Nuclear Power Station” [NUREG-1250]. The most comprehensive study of transport models to predict the...from the Chernobyl Accident: The ATMES Report” [Klug, et al.]. The Atmospheric Transport Model Evaluation Study (ATMES) report used data
A bittersweet story: the true nature of the laurel of the Oracle of Delphi.
Harissis, Haralampos V
2014-01-01
It is known from ancient sources that "laurel," identified with sweet bay, was used at the ancient Greek oracle of Delphi. The Pythia, the priestess who spoke the prophecies, purportedly used laurel as a means to inspire her divine frenzy. However, the clinical symptoms of the Pythia, as described in ancient sources, cannot be attributed to the use of sweet bay, which is harmless. A review of contemporary toxicological literature indicates that it is oleander that causes symptoms similar to those of the Pythia, while a closer examination of ancient literary texts indicates that oleander was often included under the generic term laurel. It is therefore likely that it was oleander, not sweet bay, that the Pythia used before the oracular procedure. This explanation could also shed light on other ancient accounts regarding the alleged spirit and chasm of Delphi, accounts that have been the subject of intense debate and interdisciplinary research for the last hundred years.
Endophytic Fungi—Alternative Sources of Cytotoxic Compounds: A Review
Uzma, Fazilath; Mohan, Chakrabhavi D.; Hashem, Abeer; Konappa, Narasimha M.; Rangappa, Shobith; Kamath, Praveen V.; Singh, Bhim P.; Mudili, Venkataramana; Gupta, Vijai K.; Siddaiah, Chandra N.; Chowdappa, Srinivas; Alqarawi, Abdulaziz A.; Abd_Allah, Elsayed F.
2018-01-01
Cancer is a major cause of death worldwide, with an increasing number of cases being reported annually. The elevated rate of mortality necessitates a global challenge to explore newer sources of anticancer drugs. Recent advancements in cancer treatment involve the discovery and development of new and improved chemotherapeutics derived from natural or synthetic sources. Natural sources offer the potential of finding new structural classes with unique bioactivities for cancer therapy. Endophytic fungi represent a rich source of bioactive metabolites that can be manipulated to produce desirable novel analogs for chemotherapy. This review offers a current and integrative account of clinically used anticancer drugs such as taxol, podophyllotoxin, camptothecin, and vinca alkaloids in terms of their mechanism of action, isolation from endophytic fungi and their characterization, yield obtained, and fungal strain improvement strategies. It also covers recent literature on endophytic fungal metabolites from terrestrial, mangrove, and marine sources as potential anticancer agents and emphasizes the findings for cytotoxic bioactive compounds tested against specific cancer cell lines. PMID:29755344
Humanitarian accountability, bureaucracy, and self-regulation: the view from the archive.
Roddy, Sarah; Strange, Julie-Marie; Taithe, Bertrand
2015-10-01
This paper contains a systematic exploration of local and national archives and sources relevant to charities and humanitarian fund appeals of the late Victorian and Edwardian eras (1870-1912) in Great Britain. It shows that the charitable world and humanitarian work share the same matrix and originate from the same roots, with considerable overlap between fundraising for domestic charity and overseas relief. These campaigns engaged in crucial self-regulatory processes very early on that involved concepts such as formal accountability and the close monitoring of delivery. Far from lagging behind in terms of formal practices of auditing and accounts, charities and humanitarian funds often were in the pioneering group as compared with mainstream businesses of the period. The charitable sector, notably through the Charity Organisation Society in cooperation with the press, developed and delivered accountability and monitoring, while the state and the Charity Commission played a negligible role in this process. © 2015 The Author(s). Disasters © Overseas Development Institute, 2015.
Xu, Julia
2015-01-01
Objective Systematized Nomenclature of Medicine Clinical Terms (SNOMED CT) is the emergent international health terminology standard for encoding clinical information in electronic health records. The CORE Problem List Subset was created to facilitate the terminology’s implementation. This study evaluates the CORE Subset’s coverage and examines its growth pattern as source datasets are being incorporated. Methods Coverage of frequently used terms and the corresponding usage of the covered terms were assessed by “leave-one-out” analysis of the eight datasets constituting the current CORE Subset. The growth pattern was studied using a retrospective experiment, growing the Subset one dataset at a time and examining the relationship between the size of the starting subset and the coverage of frequently used terms in the incoming dataset. Linear regression was used to model that relationship. Results On average, the CORE Subset covered 80.3% of the frequently used terms of the left-out dataset, and the covered terms accounted for 83.7% of term usage. There was a significant positive correlation between the CORE Subset’s size and the coverage of the frequently used terms in an incoming dataset. This implies that the CORE Subset will grow at a progressively slower pace as it gets bigger. Conclusion The CORE Problem List Subset is a useful resource for the implementation of Systematized Nomenclature of Medicine Clinical Terms in electronic health records. It offers good coverage of frequently used terms, which account for a high proportion of term usage. If future datasets are incorporated into the CORE Subset, it is likely that its size will remain small and manageable. PMID:25725003
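The "leave-one-out" coverage analysis described above can be sketched as follows; the site names and term sets are toy data, not the CORE Subset's eight source datasets:

```python
# Leave-one-out coverage sketch: for each institution's dataset, measure what
# fraction of its frequently used terms appears in a subset built from the
# remaining datasets. Term lists here are illustrative placeholders.
def coverage(subset_terms, dataset_terms):
    """Fraction of a dataset's terms found in the subset."""
    covered = sum(1 for t in dataset_terms if t in subset_terms)
    return covered / len(dataset_terms)

datasets = {
    "site_a": {"asthma", "hypertension", "diabetes", "migraine"},
    "site_b": {"asthma", "hypertension", "fracture", "anemia"},
    "site_c": {"asthma", "diabetes", "anemia", "gout"},
}

loo = {}
for name, terms in datasets.items():
    # Build the subset from every dataset except the left-out one.
    others = set().union(*(t for n, t in datasets.items() if n != name))
    loo[name] = coverage(others, terms)

mean_coverage = sum(loo.values()) / len(loo)
```

A usage-weighted variant (weighting each covered term by its usage count) would give the second statistic reported in the abstract, the share of term *usage* covered.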
An investigation on nuclear energy policy in Turkey and public perception
NASA Astrophysics Data System (ADS)
Coskun, Mehmet Burhanettin; Tanriover, Banu
2016-11-01
Turkey, which meets nearly 70 per cent of its energy demand through imports, is facing the problems of energy security and a current account deficit as a result of its dependence on foreign sources for energy inputs. It is also known that Turkey is having environmental problems due to increases in CO2 emissions. Considering these problems in the Turkish economy, where energy input is commonly used, it is necessary to use energy sources efficiently and to provide alternative energy sources. Due to the dependency of renewable sources on meteorological conditions (the absence of sufficient sun, wind, and water resources), energy generation from these sources cannot be provided efficiently and permanently. At this point, nuclear energy as an alternative energy source maintains its importance as a sustainable source that can provide energy 24 hours a day, 7 days a week. The main purpose of this study is to evaluate nuclear energy within the context of the negative public perceptions that emerged after the Chernobyl (1986) and Fukushima (2011) disasters and to investigate it in an economic framework.
The evolution of methods for noise prediction of high speed rotors and propellers in the time domain
NASA Technical Reports Server (NTRS)
Farassat, F.
1986-01-01
Linear wave equation models which have been used over the years at NASA Langley for describing noise emissions from high speed rotating blades are summarized. The noise sources are assumed to lie on a moving surface, and analysis of the situation has been based on the Ffowcs Williams-Hawkings (FW-H) equation. Although the equation accounts for two surface and one volume source, the NASA analyses have considered only the surface terms. Several variations on the FW-H model are delineated for various types of applications, noting the computational benefits of removing the frequency dependence of the calculations. Formulations are also provided for compact and noncompact sources, and features of Long's subsonic integral equation and Farassat's high speed integral equation are discussed. The selection of subsonic or high speed models is dependent on the Mach number of the blade surface where the source is located.
Upper and lower bounds of ground-motion variabilities: implication for source properties
NASA Astrophysics Data System (ADS)
Cotton, Fabrice; Reddy-Kotha, Sreeram; Bora, Sanjay; Bindi, Dino
2017-04-01
One of the key challenges of seismology is to be able to analyse the physical factors that control earthquake and ground-motion variabilities. Such analysis is particularly important for calibrating physics-based simulations and seismic hazard estimations at high frequencies. Within the framework of ground-motion prediction equation (GMPE) development, ground-motion residuals (differences between recorded ground motions and the values predicted by a GMPE) are computed. The exponential growth of seismological near-source records and modern GMPE analysis techniques allow these residuals to be partitioned into between-event and within-event components. In particular, the between-event term quantifies all those repeatable source effects (e.g. related to stress-drop or kappa-source variability) which have not been accounted for by the magnitude-dependent term of the model. In this presentation, we first discuss the between-event variabilities computed both in the Fourier and response-spectra domains, using recent high-quality global accelerometric datasets (e.g. NGA-West2, RESORCE, KiK-net). These analyses lead to the assessment of upper bounds for the ground-motion variability. Then, we compare these upper bounds with lower bounds estimated by analysing seismic sequences which occurred on specific fault systems (e.g., located in central Italy or in Japan). We show that the lower bounds of between-event variabilities are surprisingly large, which indicates a large variability of earthquake dynamic properties even within the same fault system. Finally, these upper and lower bounds of ground-shaking variability are discussed in terms of the variability of earthquake physical properties (e.g., stress-drop and kappa-source).
NASA Astrophysics Data System (ADS)
Navas-Montilla, A.; Murillo, J.
2016-07-01
In this work, an arbitrary-order HLL-type numerical scheme is constructed using the flux-ADER methodology. The proposed scheme is based on an augmented Derivative Riemann solver that was used for the first time in Navas-Montilla and Murillo (2015) [1]. This solver, hereafter referred to as the Flux-Source (FS) solver, was conceived as a high-order extension of the augmented Roe solver and led to the generation of a novel numerical scheme called the AR-ADER scheme. Here, we provide a general definition of the FS solver independent of the Riemann solver used within it. Moreover, a simplified version of the solver, referred to as the Linearized-Flux-Source (LFS) solver, is presented. This novel version of the FS solver allows the solution to be computed without requiring reconstruction of the derivatives of the fluxes, although some drawbacks are evident. In contrast to other previously defined Derivative Riemann solvers, the proposed FS and LFS solvers take into account the presence of the source term in the resolution of the Derivative Riemann Problem (DRP), which is of particular interest when dealing with geometric source terms. When applied to the shallow water equations, the proposed HLLS-ADER and AR-ADER schemes can be constructed to fulfill the exactly well-balanced property, showing that an arbitrary quadrature of the integral of the source inside the cell does not ensure energy-balanced solutions. As a result of this work, energy-balanced flux-ADER schemes are constructed that provide the exact solution for steady cases and converge to the exact solution with arbitrary order for transient cases.
Reducing the cost of health care capital.
Silberman, R
1984-08-01
Although one may ask four financial experts their opinion on the future of the hospital capital market and receive five answers, the blatant need for financial strategic planning is evident. Clearly, the hospital or system with sound financial management will be better positioned to gain and/or maintain an edge in the competitive environment of the health care sector. The trends of the future include hospitals attempting to: Maximize the efficiency of invested capital. Use the expertise of Board members. Use alternative capital sources. Maximize rate of return on investments. Increase productivity. Adjust to changes in reimbursements. Restructure to use optimal financing for capital needs, i.e., using short-term to build up debt capacity if long-term financing is needed in the future. Take advantage of arbitrage (obtain capital and reinvest it until the funds are needed). Delay actual underwriting until funds are to be used. Better management of accounts receivable and accounts payable to avoid short-term financing for cash flow shortfalls. Use for-profit subsidiaries to obtain venture capital by issuing stock. Use product line management. Use leasing to obtain balance sheet advantages. These trends indicate a need for hospital executives to possess a thorough understanding of the capital formation process. In essence, the bottom line is that the short-term viability and long-term survival of a health care organization will greatly depend on the financial expertise of its decision-makers.
An Ultradeep Chandra Catalog of X-Ray Point Sources in the Galactic Center Star Cluster
NASA Astrophysics Data System (ADS)
Zhu, Zhenlin; Li, Zhiyuan; Morris, Mark R.
2018-04-01
We present an updated catalog of X-ray point sources in the inner 500″ (∼20 pc) of the Galactic center (GC), where the nuclear star cluster (NSC) stands, based on a total of ∼4.5 Ms of Chandra observations taken from 1999 September to 2013 April. This ultradeep data set offers unprecedented sensitivity for detecting X-ray sources in the GC, down to an intrinsic 2–10 keV luminosity of 1.0 × 10³¹ erg s⁻¹. A total of 3619 sources are detected in the 2–8 keV band, among which ∼3500 are probable GC sources and ∼1300 are new identifications. The GC sources collectively account for ∼20% of the total 2–8 keV flux from the inner 250″ region where detection sensitivity is the greatest. Taking advantage of this unprecedented sample of faint X-ray sources that primarily traces the old stellar populations in the NSC, we revisit global source properties, including long-term variability, cumulative spectra, luminosity function, and spatial distribution. Based on the equivalent width and relative strength of the iron lines, we suggest that in addition to the arguably predominant population of magnetic cataclysmic variables (CVs), nonmagnetic CVs contribute substantially to the detected sources, especially in the lower-luminosity group. On the other hand, the X-ray sources have a radial distribution closely following the stellar mass distribution in the NSC, but much flatter than that of the known X-ray transients, which are presumably low-mass X-ray binaries (LMXBs) caught in outburst. This, together with the very modest long-term variability of the detected sources, strongly suggests that quiescent LMXBs are a minor (less than a few percent) population.
Provincial health accounts in Kerman, Iran: an evidence of a "mixed" healthcare financing system.
Mehrolhassani, Mohammad Hossein; Jafari, Mohammad; Zeinali, Javad; Ansari, Mina
2014-02-01
Provincial Health Accounts (PHA), as a subset of National Health Accounts (NHA), present financial information for health sectors. They support logical decision making by policy-makers in pursuit of health system goals, especially Fair Financial Contribution (FFC). This study aimed to examine Health Accounts in Kerman Province. The present analytical study was carried out retrospectively between 2008 and 2011. The research population consisted of urban and rural households as well as providers and financial agents in the health sectors of Kerman Province. The purposeful sampling included 16 provincial organizations. To complete the data, the report on Kerman household expenditure was taken as a data source from the Governor-General's office. In order to classify the data, the International Classification for Health Accounts (ICHA) method was used, in which the data set was adjusted for the province. During the study period, the governmental and non-governmental fund shares of the health sector in Kerman were 27.22% and 72.78%, respectively. The main portion of financial sources (59.41%) was related to private household funds, of which the Out-of-Pocket (OOP) payment amounted to 92.35%. Overall, 54.86% of all financial sources were covered by OOP. The greatest portion of Total Healthcare Expenditures (THEs) (65.19%) was related to curative services. The major portion of healthcare expenditures was related to the OOP payment, which is compatible with the national average rate in Iran. However, health expenditure per capita was two and a half times higher than the national average. By emphasizing the Social Determinants of Health (SDH) approach in the Iranian health system, the portions of OOP payment and curative expenditure are expected to be controlled in the medium term. It is suggested that PHA should be examined annually in a more comprehensive manner to monitor initiatives and reforms in the healthcare sector.
Source calibrations and SDC calorimeter requirements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Green, D.
Several studies of the problem of calibration of the SDC calorimeter exist. In this note the attempt is made to give a connected account of the requirements on the source calibration from the point of view of the desired, and acceptable, constant term induced in the EM resolution. It is assumed that a "local" calibration resulting from exposing each tower to a beam of electrons is not feasible. It is further assumed that an "in situ" calibration is either not yet performed, or is unavailable due to tracking alignment problems or high-luminosity operation rendering tracking inoperative. Therefore, the assumptions used are rather conservative. In this scenario, each scintillator plate of each tower is exposed to a moving radioactive source. That reading is used to "mask" an optical "cookie" in a grey code chosen so as to make the response uniform. The source is assumed to be the sole calibration of the tower. Therefore, the phrase "global" calibration of towers by movable radioactive sources is adopted.
NASA Astrophysics Data System (ADS)
Saniga, Metod
1995-03-01
It is demonstrated that the kinematic 'peculiarity' of the early Sab galaxy NGC 4826 can easily be understood in terms of the Abelian Higgs (AH) model of spiral galaxies. A cylindrically symmetric AH vorto-source (-sink) with a disk-to-bulge ratio Ω > 1 is discussed, and the distributions of the diagonal components of the corresponding stress-energy tensor T_μν are presented. It is argued that the sign-changing component T_φφ could account for the existence of two counter-rotating gas disks, while negative values of T_rr imply inward gas motions, as observed in the outer and transition regions of the galaxy.
Observed ground-motion variabilities and implication for source properties
NASA Astrophysics Data System (ADS)
Cotton, F.; Bora, S. S.; Bindi, D.; Specht, S.; Drouet, S.; Derras, B.; Pina-Valdes, J.
2016-12-01
One of the key challenges of seismology is to be able to calibrate and analyse the physical factors that control earthquake and ground-motion variabilities. Within the framework of empirical ground-motion prediction equation (GMPE) development, ground-motion residuals (differences between recorded ground motions and the values predicted by a GMPE) are computed. The exponential growth of seismological near-field records and modern regression algorithms allow these residuals to be decomposed into between-event and within-event components. The between-event term quantifies all the residual effects of the source (e.g. stress-drops) which are not accounted for by the magnitude term, the only source parameter of the model. Between-event residuals provide a new and rather robust way to analyse the physical factors that control earthquake source properties and the associated variabilities. We will first show the correlation between classical stress-drops and between-event residuals. We will also explain why between-event residuals may be a more robust way (compared to classical stress-drop analysis) to analyse earthquake source properties. We will then calibrate between-event variabilities using recent high-quality global accelerometric datasets (NGA-West2, RESORCE) and datasets from recent earthquake sequences (L'Aquila, Iquique, Kumamoto). The obtained between-event variabilities will be used to evaluate the variability of earthquake stress-drops, as well as the variability of source properties which cannot be explained by classical Brune stress-drop variations. We will finally use the between-event residual analysis to discuss regional variations of source properties, differences between aftershocks and mainshocks, and potential magnitude dependencies of source characteristics.
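The between-event/within-event decomposition described above can be sketched with a simple event-mean split; this is a stand-in for the mixed-effects regression actually used in GMPE work, and the data below are synthetic:

```python
# Decompose synthetic ground-motion residuals into a between-event term dB
# (one value per earthquake, capturing repeatable source effects such as
# stress-drop) and a within-event term dW (record-to-record scatter).
import numpy as np

rng = np.random.default_rng(1)
n_events, n_records = 50, 20
true_between = rng.normal(0.0, 0.3, n_events)      # hidden event terms
residuals = true_between[:, None] + rng.normal(0.0, 0.5, (n_events, n_records))

between_event = residuals.mean(axis=1)             # dB: average per event
within_event = residuals - between_event[:, None]  # dW: scatter about dB

tau = between_event.std(ddof=1)   # between-event variability (tau)
phi = within_event.std(ddof=1)    # within-event variability (phi)
```

By construction the within-event residuals average to zero for each event, and tau/phi recover (up to sampling noise) the 0.3 and 0.5 standard deviations used to generate the data, which is the sense in which dB isolates source variability.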
Filtered Mass Density Function for Design Simulation of High Speed Airbreathing Propulsion Systems
NASA Technical Reports Server (NTRS)
Drozda, T. G.; Sheikhi, R. M.; Givi, Peyman
2001-01-01
The objective of this research is to develop and implement new methodology for large eddy simulation (LES) of high-speed reacting turbulent flows. We have just completed two (2) years of Phase I of this research. This annual report provides a brief and up-to-date summary of our activities during the period September 1, 2000 through August 31, 2001. In the past year's work, a methodology termed "velocity-scalar filtered density function" (VSFDF) was developed and implemented for large eddy simulation of turbulent flows. In this methodology the effects of the unresolved subgrid scales (SGS) are taken into account by considering the joint probability density function (PDF) of all of the components of the velocity and scalar vectors. An exact transport equation is derived for the VSFDF in which the effects of the unresolved SGS convection, SGS velocity-scalar source, and SGS scalar-scalar source terms appear in closed form. The remaining unclosed terms in this equation are modeled. A system of stochastic differential equations (SDEs) which yields statistically equivalent results to the modeled VSFDF transport equation is constructed. These SDEs are solved numerically by a Lagrangian Monte Carlo procedure. The consistency of the proposed SDEs and the convergence of the Monte Carlo solution are assessed by comparison with results obtained by an Eulerian LES procedure in which the corresponding transport equations for the first two SGS moments are solved. The unclosed SGS convection, SGS velocity-scalar source, and SGS scalar-scalar source terms in the Eulerian LES are replaced by corresponding terms from the VSFDF equation. The consistency of the results is then analyzed for a two-dimensional mixing layer.
Research Blogs and the Discussion of Scholarly Information
Shema, Hadas; Bar-Ilan, Judit; Thelwall, Mike
2012-01-01
The research blog has become a popular mechanism for the quick discussion of scholarly information. However, unlike peer-reviewed journals, the characteristics of this form of scientific discourse are not well understood, for example the distribution of bloggers' education levels, gender, and institutional affiliations. In this paper we fill this gap by analyzing a sample of blog posts discussing science via an aggregator called ResearchBlogging.org (RB). ResearchBlogging.org aggregates posts based on peer-reviewed research and allows bloggers to cite their sources in a scholarly manner. We studied the bloggers, blog posts and referenced journals of bloggers who posted at least 20 items. We found that RB bloggers show a preference for papers from high-impact journals and blog mostly about research in the life and behavioral sciences. The most frequently referenced journal sources in the sample were Science, Nature, PNAS and PLoS One. Most of the bloggers in our sample had active Twitter accounts connected with their blogs, and at least 90% of these accounts connect to at least one other RB-related Twitter account. The average RB blogger in our sample is male, is either a graduate student or a PhD holder, and blogs under his own name. PMID:22606239
Madjidi, Faramarz; Behroozy, Ali
2014-01-01
Exposure to visible light and near infrared (NIR) radiation in the wavelength region of 380 to 1400 nm may cause thermal retinal injury. In this analysis, the effective spectral radiance of a hot source is replaced by its temperature in the exposure limit values for the 380-1400 nm region. This article describes the development and implementation of a computer code to predict the temperatures corresponding to the exposure limits proposed by the American Conference of Governmental Industrial Hygienists (ACGIH). Viewing duration and apparent diameter of the source were inputs to the computer code. In the first stage, an infinite series was created for the calculation of spectral radiance by integrating Planck's law. In the second stage, for the calculation of effective spectral radiance, the initial terms of this infinite series were selected and integration was performed by multiplying these terms by a weighting factor R(λ) in the wavelength region 380-1400 nm. In the third stage, using the computer code, the source temperature that emits the same effective spectral radiance was found. As a result, based only on measuring the source temperature and accounting for the exposure time and the apparent diameter of the source, it is possible to decide whether exposure to visible light and NIR in any 8-hr workday is permissible. The substitution of source temperature for effective spectral radiance provides a convenient way to evaluate exposure to visible light and NIR.
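The staged computation described above can be sketched numerically: integrate Planck's law over 380-1400 nm with a spectral weighting. This is a hedged illustration only; the ACGIH retinal-hazard weighting R(λ) is replaced by a placeholder of 1.0, and a simple trapezoidal rule stands in for the paper's series expansion:

```python
import math

# Physical constants (SI units).
h, c, kB = 6.62607015e-34, 2.99792458e8, 1.380649e-23

def planck_radiance(lam_m, T):
    """Blackbody spectral radiance (W m^-2 sr^-1 m^-1) at wavelength lam_m, temperature T."""
    return (2.0 * h * c**2 / lam_m**5) / (math.exp(h * c / (lam_m * kB * T)) - 1.0)

def effective_radiance(T, R=lambda lam: 1.0, n=2000):
    """Trapezoidal integral of R(lam) * L(lam, T) over 380-1400 nm.

    R is a placeholder for the retinal-hazard weighting (not reproduced here).
    """
    lo, hi = 380e-9, 1400e-9
    step = (hi - lo) / n
    total = 0.0
    for i in range(n + 1):
        lam = lo + i * step
        w = 0.5 if i in (0, n) else 1.0
        total += w * R(lam) * planck_radiance(lam, T)
    return total * step
```

Because the weighted radiance is monotonically increasing in temperature, the third stage (finding the temperature that emits a given effective radiance) could be done by bisection over T.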
Guagliardi, Ilaria; Rovella, Natalia; Apollaro, Carmine; Bloise, Andrea; De Rosa, Rosanna; Scarciglia, Fabio; Buttafuoco, Gabriele
2016-05-01
The study, an innovative scientific strategy for approaching natural radioactivity in terms of spatial and temporal variability, aimed to characterize the background levels of natural radionuclides in rock and in urban and peri-urban soils of a southern Italy area, and to quantify their variations due to radionuclide-bearing minerals and soil properties, taking into account the nature and extent of seasonal influence. Its main novelty is accounting for the effect of climate in controlling natural gamma radioactivity, as well as analysing soil radioactivity in terms of soil properties and pedogenetic processes. In different bedrocks and soils, the activities of natural radionuclides ((238)U, (232)Th and (40)K) and total radioactivity were measured at 181 locations by means of scintillation γ-ray spectrometry. In addition, selected rock samples were collected and analysed, using a scanning electron microscope (SEM) equipped with an energy dispersive spectrometer (EDS) and X-ray powder diffraction (XRPD), to assess the main sources of radionuclides. The natural-gamma background is intimately related to the differing petrologic features of the crystalline source rocks and to peculiar pedogenetic features and processes. The radioactivity survey was conducted during two seasons with marked changes in the main climatic characteristics, namely dry summer and moist winter, to evaluate possible effects of seasonal climatic variations and soil properties on radioactivity measurements. Seasonal variations of radionuclide activities show peak values in summer. The activities of (238)U, (232)Th and (40)K exhibit a positive correlation with air temperature and are negatively correlated with precipitation. Copyright © 2016 Elsevier Ltd. All rights reserved.
Chen, Tianle; Zeng, Donglin
2015-01-01
Predicting disease risk and progression is one of the main goals in many clinical research studies. Cohort studies on the natural history and etiology of chronic diseases span years and data are collected at multiple visits. Although kernel-based statistical learning methods are proven to be powerful for a wide range of disease prediction problems, these methods are only well studied for independent data but not for longitudinal data. It is thus important to develop time-sensitive prediction rules that make use of the longitudinal nature of the data. In this paper, we develop a novel statistical learning method for longitudinal data by introducing subject-specific short-term and long-term latent effects through a designed kernel to account for within-subject correlation of longitudinal measurements. Since the presence of multiple sources of data is increasingly common, we embed our method in a multiple kernel learning framework and propose a regularized multiple kernel statistical learning with random effects to construct effective nonparametric prediction rules. Our method allows easy integration of various heterogeneous data sources and takes advantage of correlation among longitudinal measures to increase prediction power. We use different kernels for each data source taking advantage of the distinctive feature of each data modality, and then optimally combine data across modalities. We apply the developed methods to two large epidemiological studies, one on Huntington's disease and the other on Alzheimer's Disease (Alzheimer's Disease Neuroimaging Initiative, ADNI) where we explore a unique opportunity to combine imaging and genetic data to study prediction of mild cognitive impairment, and show a substantial gain in performance while accounting for the longitudinal aspect of the data. PMID:26177419
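The core step of combining kernels across data modalities can be illustrated minimally. The data and mixing weights below are random stand-ins, and the paper's subject-specific random-effects kernel for longitudinal measurements is not reproduced:

```python
import numpy as np

rng = np.random.default_rng(1)
X_img, X_gen = rng.random((10, 4)), rng.random((10, 6))  # two hypothetical modalities

def linear_gram(X):
    """Linear-kernel Gram matrix K[i, j] = <x_i, x_j>."""
    return X @ X.T

# In full multiple kernel learning the weights are optimized; here they are
# fixed for illustration. A convex combination of PSD kernels stays PSD.
w = np.array([0.6, 0.4])
K = w[0] * linear_gram(X_img) + w[1] * linear_gram(X_gen)
```

The combined Gram matrix K can then be fed to any kernel method (e.g. kernel ridge regression or an SVM) as if it came from a single data source.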
40 CFR 97.10 - Authorization and responsibilities of NOX authorized account representative.
Code of Federal Regulations, 2014 CFR
2014-07-01
... account certificate of representation under § 97.13, the NOX authorized account representative of the source shall represent and, by his or her representations, actions, inactions, or submissions, legally... at a source, until the Administrator has received a complete account certificate of representation...
40 CFR 97.10 - Authorization and responsibilities of NOX authorized account representative.
Code of Federal Regulations, 2013 CFR
2013-07-01
... account certificate of representation under § 97.13, the NOX authorized account representative of the source shall represent and, by his or her representations, actions, inactions, or submissions, legally... at a source, until the Administrator has received a complete account certificate of representation...
40 CFR 97.10 - Authorization and responsibilities of NOX authorized account representative.
Code of Federal Regulations, 2012 CFR
2012-07-01
... account certificate of representation under § 97.13, the NOX authorized account representative of the source shall represent and, by his or her representations, actions, inactions, or submissions, legally... at a source, until the Administrator has received a complete account certificate of representation...
40 CFR 97.10 - Authorization and responsibilities of NOX authorized account representative.
Code of Federal Regulations, 2010 CFR
2010-07-01
... account certificate of representation under § 97.13, the NOX authorized account representative of the source shall represent and, by his or her representations, actions, inactions, or submissions, legally... at a source, until the Administrator has received a complete account certificate of representation...
40 CFR 97.10 - Authorization and responsibilities of NOX authorized account representative.
Code of Federal Regulations, 2011 CFR
2011-07-01
... account certificate of representation under § 97.13, the NOX authorized account representative of the source shall represent and, by his or her representations, actions, inactions, or submissions, legally... at a source, until the Administrator has received a complete account certificate of representation...
ESREF 98 - 9th European Symposium on Reliability of Electron Devices, Failure Physics and Analysis
1998-10-19
Quantifying Transmission of Clostridium difficile within and outside Healthcare Settings
Olsen, Margaret A.; Dubberke, Erik R.; Galvani, Alison P.; Townsend, Jeffrey P.
2016-01-01
To quantify the effect of hospital and community-based transmission and control measures on Clostridium difficile infection (CDI), we constructed a transmission model within and between hospital, community, and long-term care-facility settings. By parameterizing the model from national databases and calibrating it to C. difficile prevalence and CDI incidence, we found that hospitalized patients with CDI transmit C. difficile at a rate 15 (95% CI 7.2–32) times that of asymptomatic patients. Long-term care facility residents transmit at a rate of 27% (95% CI 13%–51%) that of hospitalized patients, and persons in the community at a rate of 0.1% (95% CI 0.062%–0.2%) that of hospitalized patients. Despite lower transmission rates for asymptomatic carriers and community sources, these transmission routes have a substantial effect on hospital-onset CDI because of the larger reservoir of hospitalized carriers and persons in the community. Asymptomatic carriers and community sources should be accounted for when designing and evaluating control interventions. PMID:26982504
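The role of the relative transmission rates reported above can be sketched with hypothetical reservoir sizes. The baseline rate and headcounts below are illustrative stand-ins, not parameters from the model:

```python
# Hypothetical baseline rate; relative rates taken from the abstract.
beta = 1.0
rel = {
    "cdi_inpatient": 1.0,      # hospitalized CDI patient (reference)
    "asymptomatic": 1.0 / 15,  # CDI patients transmit ~15x asymptomatic ones
    "ltcf": 0.27,              # long-term care residents: 27% of reference
    "community": 0.001,        # community: 0.1% of reference
}
pool = {"cdi_inpatient": 50, "asymptomatic": 500,   # hypothetical headcounts
        "ltcf": 200, "community": 100000}

# Force of infection contributed by each reservoir, and its relative share.
force = {k: beta * rel[k] * pool[k] for k in rel}
total = sum(force.values())
share = {k: v / total for k, v in force.items()}
```

Even with these made-up numbers, the much larger community and asymptomatic reservoirs contribute substantially despite their far lower per-capita rates, which is the qualitative point of the abstract.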
Long-term carbon loss in fragmented Neotropical forests.
Pütz, Sandro; Groeneveld, Jürgen; Henle, Klaus; Knogge, Christoph; Martensen, Alexandre Camargo; Metz, Markus; Metzger, Jean Paul; Ribeiro, Milton Cezar; de Paula, Mateus Dantas; Huth, Andreas
2014-10-07
Tropical forests play an important role in the global carbon cycle, as they store a large amount of carbon (C). Tropical forest deforestation has been identified as a major source of CO2 emissions, though biomass loss due to fragmentation--the creation of additional forest edges--has been largely overlooked as an additional CO2 source. Here, through the combination of remote sensing and knowledge on ecological processes, we present long-term carbon loss estimates due to fragmentation of Neotropical forests: within 10 years the Brazilian Atlantic Forest has lost 69 (±14) Tg C, and the Amazon 599 (±120) Tg C due to fragmentation alone. For all tropical forests, we estimate emissions up to 0.2 Pg C y(-1) or 9 to 24% of the annual global C loss due to deforestation. In conclusion, tropical forest fragmentation increases carbon loss and should be accounted for when attempting to understand the role of vegetation in the global carbon balance.
Incorporation of an Energy Equation into a Pulsed Inductive Thruster Performance Model
NASA Technical Reports Server (NTRS)
Polzin, Kurt A.; Reneau, Jarred P.; Sankaran, Kameshwaran
2011-01-01
A model for pulsed inductive plasma acceleration containing an energy equation to account for the various sources and sinks in such devices is presented. The model consists of a set of circuit equations coupled to an equation of motion and energy equation for the plasma. The latter two equations are obtained for the plasma current sheet by treating it as a one-element finite volume, integrating the equations over that volume, and then matching known terms or quantities already calculated in the model to the resulting current sheet-averaged terms in the equations. Calculations showing the time-evolution of the various sources and sinks in the system are presented to demonstrate the efficacy of the model, with two separate resistivity models employed to show an example of how the plasma transport properties can affect the calculation. While neither resistivity model is fully accurate, the demonstration shows that it is possible within this modeling framework to time-accurately update various plasma parameters.
NASA Astrophysics Data System (ADS)
Di Pietro, Daniele A.; Marche, Fabien
2018-02-01
In this paper, we further investigate the use of a fully discontinuous finite element discrete formulation for the study of shallow water free surface flows in the fully nonlinear and weakly dispersive flow regime. We consider a decoupling strategy in which we approximate the solutions of the classical shallow water equations supplemented with a source term that globally accounts for the non-hydrostatic effects. This source term can be computed through the resolution of elliptic second-order linear sub-problems, which only involve second-order partial derivatives in space. We then introduce an associated Symmetric Weighted Internal Penalty discrete bilinear form, which allows us to deal with the discontinuous nature of the elliptic problem's coefficients in a stable and consistent way. Similar discrete formulations are also introduced for several recent optimized fully nonlinear and weakly dispersive models. These formulations are validated against several benchmarks involving h-convergence, p-convergence and comparisons with experimental data, showing optimal convergence properties.
Hockenberry, Jason M; Lien, Hsien-Ming; Chou, Shin-Yi
2010-10-01
To investigate whether provider volume has an impact on the hazard of mortality for coronary artery bypass grafting (CABG) patients in Taiwan. Multiple sources of linked data from the National Health Insurance Program in Taiwan. The linked data were used to identify 27,463 patients who underwent CABG without concomitant angioplasty or valve procedures and the surgeon and hospital volumes. Generalized estimating equations and hazard models were estimated to assess the impact of volume on mortality. The hazard modeling technique used accounts for bias stemming from unobserved heterogeneity. Both surgeon and hospital volume quartiles are inversely related to the hazard of mortality after CABG. Patients whose surgeon is in the three higher volume quartiles have lower 1-, 3-, 6-, and 12-month mortality after CABG, while only those having their procedure performed at the highest quartile of volume hospitals have lower mortality outcomes. Mortality outcomes are related to provider CABG volume in Taiwan. Unobserved heterogeneity is a concern in the volume-outcome relationship; after accounting for it, surgeon volume effects on short-term mortality are large. Using models controlling for unobserved heterogeneity and examining longer term mortality may still differentiate provider quality by volume. Copyright © Health Research and Educational Trust.
ERIC Educational Resources Information Center
Bayen, Ute J.; Kuhlmann, Beatrice G.
2011-01-01
The authors investigated conditions under which judgments in source-monitoring tasks are influenced by prior schematic knowledge. According to a probability-matching account of source guessing (Spaniol & Bayen, 2002), when people do not remember the source of information, they match source-guessing probabilities to the perceived contingency…
NASA Astrophysics Data System (ADS)
Winiarek, Victor; Bocquet, Marc; Saunier, Olivier; Mathieu, Anne
2012-03-01
A major difficulty when inverting the source term of an atmospheric tracer dispersion problem is the estimation of the prior errors: those of the atmospheric transport model, those ascribed to the representativity of the measurements, those that are instrumental, and those attached to the prior knowledge on the variables one seeks to retrieve. In the case of an accidental release of pollutant, the reconstructed source is sensitive to these assumptions. This sensitivity makes the quality of the retrieval dependent on the methods used to model and estimate the prior errors of the inverse modeling scheme. We propose to use an estimation method for the errors' amplitude based on the maximum likelihood principle. Under semi-Gaussian assumptions, it takes into account, without approximation, the positivity assumption on the source. We apply the method to the estimation of the Fukushima Daiichi source term using activity concentrations in the air. The results are compared to an L-curve estimation technique and to Desroziers's scheme. The total reconstructed activities significantly depend on the chosen method. Because of the poor observability of the Fukushima Daiichi emissions, these methods provide lower bounds for cesium-137 and iodine-131 reconstructed activities. These lower bound estimates, 1.2 × 10^16 Bq for cesium-137, with an estimated standard deviation range of 15%-20%, and 1.9-3.8 × 10^17 Bq for iodine-131, with an estimated standard deviation range of 5%-10%, are of the same order of magnitude as those provided by the Japanese Nuclear and Industrial Safety Agency and about 5 to 10 times less than the Chernobyl atmospheric releases.
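A toy version of source inversion under a positivity constraint can be sketched as a projected-gradient solve of a regularized least-squares problem. The source-receptor matrix, observations, and regularization weight below are hypothetical, and the paper's maximum-likelihood tuning of the error amplitudes is not shown:

```python
import numpy as np

rng = np.random.default_rng(0)
H = rng.random((20, 5))                  # hypothetical source-receptor matrix
x_true = np.array([0.0, 2.0, 0.0, 1.0, 3.0])
y = H @ x_true                           # synthetic noise-free observations

x = np.zeros(5)
lam = 1e-3                               # prior (regularization) weight
# Step size 1/L, where L is the Lipschitz constant of the gradient.
step = 1.0 / np.linalg.norm(H.T @ H + lam * np.eye(5), 2)
for _ in range(5000):
    grad = H.T @ (H @ x - y) + lam * x
    x = np.maximum(0.0, x - step * grad)  # projection enforces positivity
```

Clipping at zero after each gradient step is what makes the solution respect the nonnegativity of a physical release, the same constraint the paper's semi-Gaussian formulation handles without approximation.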
NASA Astrophysics Data System (ADS)
Salameh, T.; Sauvage, S.; Afif, C.; Borbon, A.; Locoge, N.
2015-10-01
We applied the Positive Matrix Factorization (PMF) model to two large datasets collected during two intensive measurement campaigns (summer 2011 and winter 2012) at a sub-urban site in Beirut, Lebanon, in order to identify NMHC sources and quantify their contribution to ambient levels. Six factors were identified in winter and five factors in summer. PMF-resolved source profiles were consistent with source profiles established by near-field measurements. The major sources were traffic-related emissions (combustion and gasoline evaporation) in both winter and summer, accounting for 51 and 74 wt % respectively, in agreement with the national emission inventory. The gasoline evaporation related to the traffic source had a significant contribution regardless of the season (22 wt % in winter and 30 wt % in summer). The NMHC emissions from road transport are estimated from observations and PMF results, and compared to local and global emission inventories. The national road transport inventory shows lower emissions than those derived from PMF, but with a reasonable difference of less than 50%. Global inventories show larger discrepancies, with emissions lower by up to a factor of 10 for the transportation sector. When combining the emission inventory with our results, there is strong evidence that control measures in Lebanon should target mitigating the NMHC emissions from traffic-related sources. From a global perspective, an assessment of VOC anthropogenic emission inventories for the Middle East region as a whole seems necessary, as these emissions could be much higher than expected, at least for the road transport sector. Highlights: - PMF model was applied to identify major NMHC sources and their seasonal variation. - Gasoline evaporation accounts for more than 40 % both in winter and in summer. - NMHC urban emissions are dominated by traffic-related sources in both seasons.
- The PMF results agree with the emission inventory on the relative contribution of the on-road mobile source but disagree on emission quantities, suggesting an underestimation in the inventories.
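The factorization step underlying PMF-style source apportionment can be sketched with plain (unweighted) nonnegative matrix factorization via Lee-Seung multiplicative updates. Real PMF additionally weights residuals by measurement uncertainties, which is omitted here, and the data matrix below is random:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.random((30, 12))        # samples x species matrix (random stand-in)
k = 5                           # number of factors ("sources")
G = rng.random((30, k)) + 0.1   # factor contributions per sample
F = rng.random((k, 12)) + 0.1   # factor (source) profiles

err0 = np.linalg.norm(X - G @ F)
eps = 1e-12                     # guards against division by zero
for _ in range(500):            # Lee-Seung multiplicative updates
    F *= (G.T @ X) / (G.T @ G @ F + eps)
    G *= (X @ F.T) / (G @ F @ F.T + eps)
err = np.linalg.norm(X - G @ F)
```

Because the updates are multiplicative, G and F stay nonnegative throughout, which is what lets the rows of F be read as chemical source profiles and the columns of G as their contributions.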
40 CFR 96.10 - Authorization and responsibilities of the NOX authorized account representative.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Administrator of a complete account certificate of representation under § 96.13, the NOX authorized account representative of the source shall represent and, by his or her representations, actions, inactions, or... at a source, until the Administrator has received a complete account certificate of representation...
40 CFR 96.10 - Authorization and responsibilities of the NOX authorized account representative.
Code of Federal Regulations, 2012 CFR
2012-07-01
... Administrator of a complete account certificate of representation under § 96.13, the NOX authorized account representative of the source shall represent and, by his or her representations, actions, inactions, or... at a source, until the Administrator has received a complete account certificate of representation...
40 CFR 96.10 - Authorization and responsibilities of the NOX authorized account representative.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Administrator of a complete account certificate of representation under § 96.13, the NOX authorized account representative of the source shall represent and, by his or her representations, actions, inactions, or... at a source, until the Administrator has received a complete account certificate of representation...
40 CFR 96.10 - Authorization and responsibilities of the NOX authorized account representative.
Code of Federal Regulations, 2014 CFR
2014-07-01
... Administrator of a complete account certificate of representation under § 96.13, the NOX authorized account representative of the source shall represent and, by his or her representations, actions, inactions, or... at a source, until the Administrator has received a complete account certificate of representation...
40 CFR 96.10 - Authorization and responsibilities of the NOX authorized account representative.
Code of Federal Regulations, 2013 CFR
2013-07-01
... Administrator of a complete account certificate of representation under § 96.13, the NOX authorized account representative of the source shall represent and, by his or her representations, actions, inactions, or... at a source, until the Administrator has received a complete account certificate of representation...
Effects of volcano topography on seismic broad-band waveforms
NASA Astrophysics Data System (ADS)
Neuberg, Jürgen; Pointer, Tim
2000-10-01
Volcano seismology often deals with rather shallow seismic sources and seismic stations deployed in their near field. The complex stratigraphy on volcanoes and near-field source effects have a strong impact on the seismic wavefield, complicating the interpretation techniques that are usually employed in earthquake seismology. In addition, as most volcanoes have a pronounced topography, the interference of the seismic wavefield with the stress-free surface results in severe waveform perturbations that affect seismic interpretation methods. In this study we deal predominantly with the surface effects, but take into account the impact of a typical volcano stratigraphy as well as near-field source effects. We derive a correction term for plane seismic waves and a plane-free surface such that for smooth topographies the effect of the free surface can be totally removed. Seismo-volcanic sources radiate energy in a broad frequency range with a correspondingly wide range of different Fresnel zones. A 2-D boundary element method is employed to study how the size of the Fresnel zone is dependent on source depth, dominant wavelength and topography in order to estimate the limits of the plane wave approximation. This approximation remains valid if the dominant wavelength does not exceed twice the source depth. Further aspects of this study concern particle motion analysis to locate point sources and the influence of the stratigraphy on particle motions. Furthermore, the deployment strategy of seismic instruments on volcanoes, as well as the direct interpretation of the broad-band waveforms in terms of pressure fluctuations in the volcanic plumbing system, are discussed.
46 CFR 403.400 - Uniform pilot's source form.
Code of Federal Regulations, 2010 CFR
2010-10-01
... ACCOUNTING SYSTEM Source Forms § 403.400 Uniform pilot's source form. (a) Each Association shall record... billing office for accounting record; (4) Third copy to pilot's own Association for pilot's personal...
Solute source depletion control of forward and back diffusion through low-permeability zones
NASA Astrophysics Data System (ADS)
Yang, Minjune; Annable, Michael D.; Jawitz, James W.
2016-10-01
Solute diffusive exchange between low-permeability aquitards and high-permeability aquifers acts as a significant mediator of long-term contaminant fate. Aquifer contaminants diffuse into aquitards, but as contaminant sources are depleted, aquifer concentrations decline, triggering back diffusion from aquitards. The dynamics of the contaminant source depletion, or the source strength function, controls the timing of the transition of aquitards from sinks to sources. Here, we experimentally evaluate three archetypical transient source depletion models (step-change, linear, and exponential), and we use novel analytical solutions to accurately account for dynamic aquitard-aquifer diffusive transfer. Laboratory diffusion experiments were conducted using a well-controlled flow chamber to assess solute exchange between sand aquifer and kaolinite aquitard layers. Solute concentration profiles in the aquitard were measured in situ using electrical conductivity. Back diffusion was shown to begin earlier and produce larger mass flux for rapidly depleting sources. The analytical models showed very good correspondence with measured aquifer breakthrough curves and aquitard concentration profiles. The modeling approach links source dissolution and back diffusion, enabling assessment of human exposure risk and calculation of the back diffusion initiation time, as well as the resulting plume persistence.
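The step-change source case above admits the classical erfc solution for diffusion into a semi-infinite aquitard, and superposing a negative step models source removal and the onset of back diffusion. A minimal sketch, with hypothetical D, C0, and times:

```python
import math

def c_profile(z, t, C0=1.0, D=1e-9):
    """Concentration at depth z (m) and time t (s) in a semi-infinite aquitard
    loaded by a constant aquifer concentration C0 from t = 0 (erfc solution)."""
    return C0 * math.erfc(z / (2.0 * math.sqrt(D * t)))

def c_after_removal(z, t, t_off, C0=1.0, D=1e-9):
    """Superpose the loading step (from t = 0) and an unloading step (from t_off),
    a standard linear-superposition trick for a step-change source history."""
    c = c_profile(z, t, C0, D)
    if t > t_off:
        c -= c_profile(z, t - t_off, C0, D)
    return c
```

After t_off the aquitard concentration near the interface drops below its loaded value, and the reversed gradient drives mass back into the aquifer, which is the back-diffusion behavior the experiments measure.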
Solute source depletion control of forward and back diffusion through low-permeability zones.
Yang, Minjune; Annable, Michael D; Jawitz, James W
2016-10-01
Solute diffusive exchange between low-permeability aquitards and high-permeability aquifers acts as a significant mediator of long-term contaminant fate. Aquifer contaminants diffuse into aquitards, but as contaminant sources are depleted, aquifer concentrations decline, triggering back diffusion from aquitards. The dynamics of the contaminant source depletion, or the source strength function, controls the timing of the transition of aquitards from sinks to sources. Here, we experimentally evaluate three archetypical transient source depletion models (step-change, linear, and exponential), and we use novel analytical solutions to accurately account for dynamic aquitard-aquifer diffusive transfer. Laboratory diffusion experiments were conducted using a well-controlled flow chamber to assess solute exchange between sand aquifer and kaolinite aquitard layers. Solute concentration profiles in the aquitard were measured in situ using electrical conductivity. Back diffusion was shown to begin earlier and produce larger mass flux for rapidly depleting sources. The analytical models showed very good correspondence with measured aquifer breakthrough curves and aquitard concentration profiles. The modeling approach links source dissolution and back diffusion, enabling assessment of human exposure risk and calculation of the back diffusion initiation time, as well as the resulting plume persistence. Copyright © 2016 Elsevier B.V. All rights reserved.
Modeling Tree Growth Taking into Account Carbon Source and Sink Limitations.
Hayat, Amaury; Hacket-Pain, Andrew J; Pretzsch, Hans; Rademacher, Tim T; Friend, Andrew D
2017-01-01
Increasing CO2 concentrations are strongly controlled by the behavior of established forests, which are believed to be a major current sink of atmospheric CO2. There are many models which predict forest responses to environmental changes, but they are almost exclusively carbon source (i.e., photosynthesis) driven. Here we present a model for an individual tree that takes into account the intrinsic limits of meristems and cellular growth rates, as well as control mechanisms within the tree that influence its diameter and height growth over time. This new framework is built on process-based understanding combined with differential equations solved by numerical methods. Our aim is to construct a model framework of tree growth to replace current formulations in Dynamic Global Vegetation Models, and so address the issue of the terrestrial carbon sink. Our approach was successfully tested for stands of beech trees at two different sites forming part of a long-term forest yield experiment in Germany. This model provides new insights into tree growth and limits to tree height, and addresses limitations of previous models with respect to sink-limited growth.
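A minimal sink-limited growth sketch: a logistic ODE integrated by forward Euler, with the cap Hmax standing in for the paper's meristem and cellular growth limits. The rates and the height cap are hypothetical:

```python
def grow(h0=1.0, r=0.1, hmax=35.0, dt=0.1, years=200):
    """Integrate dH/dt = r * H * (1 - H / hmax) by forward Euler.

    The (1 - H / hmax) factor is a stand-in for sink limitation: growth
    slows as the tree approaches its structural height limit, regardless
    of how much carbon photosynthesis could supply.
    """
    h = h0
    for _ in range(int(years / dt)):
        h += dt * r * h * (1.0 - h / hmax)
    return h
```

The point of the sketch is that height saturates at the sink-imposed cap rather than tracking carbon supply, the qualitative contrast with purely source-driven models.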
A comparison between active and passive sensing of soil moisture from vegetated terrains
NASA Technical Reports Server (NTRS)
Fung, A. K.; Eom, H. J.
1985-01-01
A comparison between active and passive sensing of soil moisture over vegetated areas is studied via scattering models. In active sensing three contributing terms to radar backscattering can be identified: (1) the ground surface scatter term; (2) the volume scatter term representing scattering from the vegetation layer; and (3) the surface volume scatter term accounting for scattering from both surface and volume. In emission three sources of contribution can also be identified: (1) surface emission; (2) upward volume emission from the vegetation layer; and (3) downward volume emission scattered upward by the ground surface. As ground moisture increases, terms (1) and (3) increase due to increase in permittivity in the active case. However, in passive sensing, term (1) decreases but term (3) increases for the same reason. This self compensating effect produces a loss in sensitivity to change in ground moisture. Furthermore, emission from vegetation may be larger than that from the ground. Hence, the presence of vegetation layer causes a much greater loss of sensitivity to passive than active sensing of soil moisture.
A comparison between active and passive sensing of soil moisture from vegetated terrains
NASA Technical Reports Server (NTRS)
Fung, A. K.; Eom, H. J.
1984-01-01
A comparison between active and passive sensing of soil moisture over vegetated areas is studied via scattering models. In active sensing three contributing terms to radar backscattering can be identified: (1) the ground surface scatter term; (2) the volume scatter term representing scattering from the vegetation layer; and (3) the surface volume scatter term accounting for scattering from both surface and volume. In emission three sources of contribution can also be identified: (1) surface emission; (2) upward volume emission from the vegetation layer; and (3) downward volume emission scattered upward by the ground surface. As ground moisture increases, terms (1) and (3) increase due to increase in permittivity in the active case. However, in passive sensing, term (1) decreases but term (3) increases for the same reason. This self compensating effect produces a loss in sensitivity to change in ground moisture. Furthermore, emission from vegetation may be larger than that from the ground. Hence, the presence of vegetation layer causes a much greater loss of sensitivity to passive than active sensing of soil moisture.
Buccioli, Matteo; Agnoletti, Vanni; Padovani, Emanuele; Perger, Peter
2014-01-01
The economic and financial crisis has also had an important impact on the healthcare sector. Available resources have decreased, while costs and the demand for healthcare services are rising. This squeeze on healthcare resources is exacerbated further by widespread ignorance of management accounting matters: limited knowledge of costs is itself a strong driver of cost growth. Although it is broadly recognized that cost accounting has a positive impact on healthcare organizations, it is not widely adopted. Hospitals are essential components in providing overall healthcare, and operating rooms (ORs) are critical hospital units not only in patient safety terms but also in expenditure terms. Understanding OR procedures in the hospital provides important information about how healthcare resources are used. There have been several scientific studies on management accounting in healthcare environments, and more than ever there is a need for innovation, particularly by connecting business administration research findings to modern IT tools. IT adoption constitutes one of the most important innovation fields within the healthcare sector, with beneficial effects on decision-making processes. The e-HCM (e-Healthcare Cost Management) project consists of a cost calculation model applicable to Business Intelligence. The cost calculation approach comprises elements from both traditional cost accounting and activity-based costing, and direct costs for all surgical procedures can be calculated through a seven-step implementation process.
ERIC Educational Resources Information Center
Barzilai, Sarit; Tzadok, Eynav; Eshet-Alkalai, Yoram
2015-01-01
Sourcing is vital for knowledge construction from online information sources, yet learners may find it difficult to engage in effective sourcing. Sourcing can be particularly challenging when lay readers encounter conflicting expert accounts of controversial topics, a situation which is increasingly common when learning online. The aim of this…
NASA Astrophysics Data System (ADS)
Peruzza, Laura; Azzaro, Raffaele; Gee, Robin; D'Amico, Salvatore; Langer, Horst; Lombardo, Giuseppe; Pace, Bruno; Pagani, Marco; Panzera, Francesco; Ordaz, Mario; Suarez, Miguel Leonardo; Tusa, Giuseppina
2017-11-01
This paper describes the model implementation and presents results of a probabilistic seismic hazard assessment (PSHA) for the Mt. Etna volcanic region in Sicily, Italy, considering local volcano-tectonic earthquakes. Working in a volcanic region presents new challenges not typically faced in standard PSHA, which are broadly due to the nature of the local volcano-tectonic earthquakes, the cone shape of the volcano and the attenuation properties of seismic waves in the volcanic region. These have been accounted for through the development of a seismic source model that integrates data from different disciplines (historical and instrumental earthquake datasets, tectonic data, etc.; presented in Part 1, by Azzaro et al., 2017) and through the development and software implementation of original tools for the computation, such as a new ground-motion prediction equation and magnitude-scaling relationship specifically derived for this volcanic area, and the capability to account for the surficial topography in the hazard calculation, which influences source-to-site distances. Hazard calculations have been carried out after updating the most recent releases of two widely used PSHA software packages (CRISIS, as in Ordaz et al., 2013; the OpenQuake engine, as in Pagani et al., 2014). Results are computed for short- to mid-term exposure times (10 % probability of exceedance in 5 and 30 years, Poisson and time dependent) and spectral amplitudes of engineering interest. A preliminary exploration of the impact of site-specific response is also presented for the densely inhabited Etna's eastern flank, and the change in expected ground motion is finally commented on. These results do not account for M > 6 regional seismogenic sources which control the hazard at long return periods. 
However, by focusing on the impact of M < 6 local volcano-tectonic earthquakes, which dominate the hazard at the short- to mid-term exposure times considered in this study, we present a different viewpoint that, in our opinion, is relevant for retrofitting the existing buildings and for driving impending interventions of risk reduction.
Energy decay of a viscoelastic wave equation with supercritical nonlinearities
NASA Astrophysics Data System (ADS)
Guo, Yanqiu; Rammaha, Mohammad A.; Sakuntasathien, Sawanya
2018-06-01
This paper presents a study of the asymptotic behavior of solutions to the history value problem for a viscoelastic wave equation that features a fading memory term as well as a supercritical source term and a frictional damping term: u_{tt} - k(0)\Delta u - \int_0^{\infty} k'(s)\,\Delta u(t-s)\,ds + |u_t|^{m-1}u_t = |u|^{p-1}u in \Omega \times (0,T), with u(x,t) = u_0(x,t) in \Omega \times (-\infty, 0], where \Omega is a bounded domain in R^3 with a Dirichlet boundary condition and u_0 represents the history value. A suitable notion of a potential well is introduced for the system, and global existence of solutions is justified, provided that the history value u_0 is taken from a subset of the potential well. A uniform energy decay rate is also obtained, which depends on the relaxation kernel -k'(s) as well as on the growth rate of the damping term. This manuscript complements our previous work (Guo et al. in J Differ Equ 257:3778-3812, 2014; J Differ Equ 262:1956-1979, 2017), where Hadamard well-posedness and singularity formation were studied for the system. It is worth stressing the special features of the model: the source term has a supercritical growth rate, and the memory term accounts for the full past history, going back to -\infty.
Pinto, U; Maheshwari, B L; Ollerton, R L
2013-06-01
The Hawkesbury-Nepean River (HNR) system in South-Eastern Australia is the main source of water supply for the Sydney metropolitan area and is one of the more complex river systems due to the influence of urbanisation and other activities in the peri-urban landscape through which it flows. Long-term monitoring of river water quality is likely to suffer from data gaps due to funding cuts, changes in priority and related reasons; nevertheless, river health must be assessed from the available information. In this study, we demonstrated how Factor Analysis (FA), Hierarchical Agglomerative Cluster Analysis (HACA) and Trend Analysis (TA) can be applied to evaluate long-term historical data sets. Six water quality parameters, viz. temperature, chlorophyll-a, dissolved oxygen, oxides of nitrogen, suspended solids and reactive silicates, measured at weekly intervals between 1985 and 2008 at 12 monitoring stations located along the 300 km length of the HNR system, were evaluated to understand the human and natural influences on the river system in a peri-urban landscape. The application of FA extracted three latent factors which explained more than 70 % of the total variance of the data and related to the 'bio-geographical', 'natural' and 'nutrient pollutant' dimensions of the HNR system. The bio-geographical and nutrient pollution factors were most likely related to the direct influence of peri-urban changes and activities, and accounted for approximately 50 % of the variability in water quality. The application of HACA indicated two major clusters representing clean and polluted zones of the river. On the spatial scale, one cluster was represented by the upper and lower sections of the river (clean zone), accounting for approximately 158 km of the river; the other was represented by the middle section (polluted zone), with a length of approximately 98 km.
Trend Analysis indicated how the point sources influence river water quality on spatio-temporal scales, taking into account the various effects of nutrient and other pollutant loads from sewerage effluents, agriculture and other point and non-point sources along the river and major tributaries of the HNR. Over the past 26 years, water temperature has significantly increased while suspended solids have significantly decreased (p < 0.05). The analysis of water quality data through FA, HACA and TA helped to characterise the key sections and cluster the key water quality variables of the HNR system. The insights gained from this study have the potential to improve the effectiveness of river health-monitoring programs in terms of cost, time and effort, particularly in a peri-urban context.
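The factor extraction step described above can be sketched numerically. The snippet below is a minimal illustration, not the study's analysis: it builds an invented data set with two planted latent factors (the real study used six measured parameters and extracted three factors) and recovers the share of variance those factors explain via an eigendecomposition of the correlation matrix, a PCA-style proxy for factor analysis.

```python
import numpy as np

rng = np.random.default_rng(2)

# Invented stand-in for the monitoring record: 200 weekly observations
# of 6 water quality parameters, driven by two built-in latent factors.
latent = rng.standard_normal((200, 2))
loadings = rng.standard_normal((2, 6))
X = latent @ loadings + 0.3 * rng.standard_normal((200, 6))

# Standardize, then extract factors via eigendecomposition of the
# correlation matrix.
Z = (X - X.mean(axis=0)) / X.std(axis=0)
corr = (Z.T @ Z) / len(Z)
eigvals = np.linalg.eigvalsh(corr)[::-1]      # descending order
explained = eigvals / eigvals.sum()
print(explained[:2].sum())  # the two planted factors dominate the variance
```

In the study the analogous quantity, the variance explained by the three retained factors, exceeded 70 %.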
Code of Federal Regulations, 2011 CFR
2011-07-01
... 28 Judicial Administration 2 2011-07-01 2011-07-01 false How may family, friends, or other sources... may family, friends, or other sources deposit funds into an inmate commissary account? (a) Family and friends must mail deposits to the centralized inmate commissary account at the address we provide. (1) The...
Code of Federal Regulations, 2010 CFR
2010-07-01
... 28 Judicial Administration 2 2010-07-01 2010-07-01 false How may family, friends, or other sources... may family, friends, or other sources deposit funds into an inmate commissary account? (a) Family and friends must mail deposits to the centralized inmate commissary account at the address we provide. (1) The...
18 CFR 367.2250 - Account 225, Unamortized premium on long-term debt.
Code of Federal Regulations, 2010 CFR
2010-04-01
... POWER ACT AND NATURAL GAS ACT Balance Sheet Chart of Accounts Long-Term Debt § 367.2250 Account 225... 18 Conservation of Power and Water Resources 1 2010-04-01 2010-04-01 false Account 225, Unamortized premium on long-term debt. 367.2250 Section 367.2250 Conservation of Power and Water Resources...
18 CFR 367.2260 - Account 226, Unamortized discount on long-term debt-Debit.
Code of Federal Regulations, 2010 CFR
2010-04-01
..., FEDERAL POWER ACT AND NATURAL GAS ACT Balance Sheet Chart of Accounts Long-Term Debt § 367.2260 Account... 18 Conservation of Power and Water Resources 1 2010-04-01 2010-04-01 false Account 226, Unamortized discount on long-term debt-Debit. 367.2260 Section 367.2260 Conservation of Power and Water...
40 CFR 97.620 - Establishment of compliance accounts, assurance accounts, and general accounts.
Code of Federal Regulations, 2013 CFR
2013-07-01
... representation under § 97.616, the Administrator will establish a compliance account for the TR SO2 Group 1 source for which the certificate of representation was submitted, unless the source already has a... such persons and that each such person shall be fully bound by my representations, actions, inactions...
40 CFR 97.720 - Establishment of compliance accounts, assurance accounts, and general accounts.
Code of Federal Regulations, 2013 CFR
2013-07-01
... representation under § 97.716, the Administrator will establish a compliance account for the TR SO2 Group 2 source for which the certificate of representation was submitted, unless the source already has a... such persons and that each such person shall be fully bound by my representations, actions, inactions...
40 CFR 97.420 - Establishment of compliance accounts, assurance accounts, and general accounts.
Code of Federal Regulations, 2013 CFR
2013-07-01
... representation under § 97.416, the Administrator will establish a compliance account for the TR NOX Annual source for which the certificate of representation was submitted, unless the source already has a compliance... such persons and that each such person shall be fully bound by my representations, actions, inactions...
Accounting for multiple sources of uncertainty in impact assessments: The example of the BRACE study
NASA Astrophysics Data System (ADS)
O'Neill, B. C.
2015-12-01
Assessing climate change impacts often requires the use of multiple scenarios, types of models, and data sources, leading to a large number of potential sources of uncertainty. For example, a single study might require a choice of a forcing scenario, climate model, bias correction and/or downscaling method, societal development scenario, model (typically several) for quantifying elements of societal development such as economic and population growth, biophysical model (such as for crop yields or hydrology), and societal impact model (e.g. economic or health model). Some sources of uncertainty are reduced or eliminated by the framing of the question. For example, it may be useful to ask what an impact outcome would be conditional on a given societal development pathway, forcing scenario, or policy. However, many sources of uncertainty remain, and it is rare for all or even most of these sources to be accounted for. I use the example of a recent integrated project on the Benefits of Reduced Anthropogenic Climate changE (BRACE) to explore useful approaches to uncertainty across multiple components of an impact assessment. BRACE comprises 23 papers that assess the differences in impacts between two alternative climate futures: those associated with Representative Concentration Pathways (RCPs) 4.5 and 8.5. It quantifies differences in impacts in terms of extreme events, health, agriculture, tropical cyclones, and sea level rise. Methodologically, it includes climate modeling, statistical analysis, integrated assessment modeling, and sector-specific impact modeling. It employs alternative scenarios of both radiative forcing and societal development, but generally uses a single climate model (CESM), partially accounting for climate uncertainty by drawing heavily on large initial-condition ensembles. Strengths and weaknesses of the approach to uncertainty in BRACE are assessed.
Options under consideration for improving the approach include the use of perturbed physics ensembles of CESM, employing results from multiple climate models, and combining the results from single impact models with statistical representations of uncertainty across multiple models. A key consideration is the relationship between the question being addressed and the uncertainty approach.
NASA Astrophysics Data System (ADS)
Larmat, C. S.; Rougier, E.; Knight, E.; Yang, X.; Patton, H. J.
2013-12-01
A goal of the Source Physics Experiments (SPE) is to develop explosion source models that expand monitoring capabilities beyond empirical methods. The SPE project combines field experimentation with numerical modelling. The models take into account non-linear processes occurring from the first moment of the explosion as well as complex linear propagation effects on signals reaching far-field recording stations. The hydrodynamic code CASH is used for modelling the high-strain-rate, non-linear response of the material near the source; our development efforts focused on incorporating in-situ stress and fracture processes. CASH simulates the material response from the near-source strong-shock zone out to the small-strain and ultimately elastic regime, where a linear code can take over. We developed an interface with the Spectral Element Method code SPECFEM3D, an efficient parallel implementation of a high-order finite element method, which allows accurate modelling of wave propagation to remote monitoring distances at low cost. We will present CASH-SPECFEM3D results for SPE1, a chemical detonation of about 85 kg of TNT at 55 m depth in a granitic geologic unit, for which spallation was observed. Keeping the yield fixed, we vary the depth of the source systematically and compute synthetic seismograms out to distances where the P and Rg waves are separated, so that the analysis can be performed without concern about interference effects due to overlapping energy. We study the time and frequency characteristics of P and Rg waves and analyse them with regard to the impact of free-surface interactions and the rock damage resulting from those interactions. We also perform traditional CMT inversions as well as advanced CMT inversions, developed at LANL, that take the damage into account. This will allow us to assess the effect of spallation on CMT solutions as well as to validate our inversion procedure. 
Further work will aim to validate the developed models with the data recorded on SPEs. This long-term goal requires taking into account the 3D structure and thus a comprehensive characterization of the site.
Optimization of a mirror-based neutron source using differential evolution algorithm
NASA Astrophysics Data System (ADS)
Yurov, D. V.; Prikhodko, V. V.
2016-12-01
This study assesses the capabilities of the gas-dynamic trap (GDT) and the gas-dynamic multiple-mirror trap (GDMT) as potential neutron sources for subcritical hybrids. In mathematical terms, the problem is formulated as finding the global maximum of the fusion gain (Q_pl), represented as a function of the trap parameters, and a differential evolution method is applied to perform the search. All calculations consider a neutron source configuration with a 20 m distance between the mirrors and 100 MW of heating power. Importantly, the numerical study also takes into account a number of constraints on plasma characteristics so as to ensure the physical credibility of the searched-for trap configurations. The results show fusion gains of up to 0.2, depending on the constraints applied, which would allow such traps to be used either as neutron sources within subcritical reactors for minor actinide incineration or as material-testing facilities.
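The constrained global search described above can be sketched with a toy objective. The following is a minimal, self-contained differential evolution loop in the spirit of the paper's method; the objective `q_pl`, the parameter bounds, and the stability-style constraint are invented stand-ins, not the paper's plasma model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy surrogate for the fusion gain Q_pl as a function of two trap
# parameters (hypothetical stand-ins; the paper's actual Q_pl comes
# from a detailed GDT/GDMT plasma model).
def q_pl(params):
    p1, p2 = params
    return np.exp(-(p1 - 1.5) ** 2 - (p2 - 3.0) ** 2)

def feasible(params):
    # Stand-in for the paper's plasma-credibility constraints.
    return params[0] * params[1] <= 6.0

lo = np.array([0.5, 1.0])
hi = np.array([3.0, 5.0])
NP, F, CR = 20, 0.7, 0.9              # population size, mutation, crossover

pop = lo + rng.random((NP, 2)) * (hi - lo)
fit = np.array([q_pl(x) if feasible(x) else -np.inf for x in pop])

for _ in range(200):                  # generations
    for i in range(NP):
        # DE/rand/1/bin: mutate three distinct other members, then cross over.
        a, b, c = pop[rng.choice([j for j in range(NP) if j != i], 3, replace=False)]
        mutant = np.clip(a + F * (b - c), lo, hi)
        cross = rng.random(2) < CR
        cross[rng.integers(2)] = True  # guarantee at least one mutated gene
        trial = np.where(cross, mutant, pop[i])
        f_trial = q_pl(trial) if feasible(trial) else -np.inf
        if f_trial >= fit[i]:
            pop[i], fit[i] = trial, f_trial

best = pop[np.argmax(fit)]
print(best, fit.max())                # converges near (1.5, 3.0)
```

Infeasible trial vectors are simply assigned fitness -inf, one common (if crude) way of handling the kind of constraints the paper mentions.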
The scope and control of attention: Sources of variance in working memory capacity.
Chow, Michael; Conway, Andrew R A
2015-04-01
Working memory capacity is a strong positive predictor of many cognitive abilities, across various domains. The pattern of positive correlations across domains has been interpreted as evidence for a unitary source of inter-individual differences in behavior. However, recent work suggests that there are multiple sources of variance contributing to working memory capacity. The current study (N = 71) investigates individual differences in the scope and control of attention, in addition to the number and resolution of items maintained in working memory. Latent variable analyses indicate that the scope and control of attention reflect independent sources of variance and each account for unique variance in general intelligence. Also, estimates of the number of items maintained in working memory are consistent across tasks and related to general intelligence whereas estimates of resolution are task-dependent and not predictive of intelligence. These results provide insight into the structure of working memory, as well as intelligence, and raise new questions about the distinction between number and resolution in visual short-term memory.
McBride, Marissa F; Wilson, Kerrie A; Bode, Michael; Possingham, Hugh P
2007-12-01
Uncertainty in the implementation and outcomes of conservation actions that is not accounted for leaves conservation plans vulnerable to potential changes in future conditions. We used a decision-theoretic approach to investigate the effects of two types of investment uncertainty on the optimal allocation of global conservation resources for land acquisition in the Mediterranean Basin. We considered uncertainty about (1) whether investment will continue and (2) whether the acquired biodiversity assets are secure, which we termed transaction uncertainty and performance uncertainty, respectively. We also developed and tested the robustness of different rules of thumb for guiding the allocation of conservation resources when these sources of uncertainty exist. In the presence of uncertainty in future investment ability (transaction uncertainty), the optimal strategy was opportunistic, meaning the investment priority should be to act where uncertainty is highest while investment remains possible. When there was a probability that investments would fail (performance uncertainty), the optimal solution became a complex trade-off between the immediate biodiversity benefits of acting in a region and the perceived longevity of the investment. In general, regions were prioritized for investment when they had the greatest performance certainty, even if an alternative region was highly threatened or had higher biodiversity value. The improved performance of rules of thumb when accounting for uncertainty highlights the importance of explicitly incorporating sources of investment uncertainty and evaluating potential conservation investments in the context of their likely long-term success.
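The performance-uncertainty trade-off described above can be made concrete with a toy expected-value calculation. All region names, values, and probabilities below are invented for illustration; the study's actual decision-theoretic model is richer.

```python
# Toy version of the trade-off described above: under performance
# uncertainty, rank candidate regions by expected long-term benefit,
# i.e. biodiversity value times the probability the investment endures.
regions = {
    "A": {"value": 10.0, "p_success": 0.90},
    "B": {"value": 14.0, "p_success": 0.50},   # richest, but riskiest
    "C": {"value": 8.0,  "p_success": 0.95},
}

ranked = sorted(regions,
                key=lambda r: regions[r]["value"] * regions[r]["p_success"],
                reverse=True)
print(ranked)  # ['A', 'C', 'B']: high-certainty C outranks high-value B
```

This mirrors the paper's finding that regions with the greatest performance certainty can be prioritized over alternatives with higher raw biodiversity value.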
NASA Astrophysics Data System (ADS)
Volpe, M.; Selva, J.; Tonini, R.; Romano, F.; Lorito, S.; Brizuela, B.; Argyroudis, S.; Salzano, E.; Piatanesi, A.
2016-12-01
Seismic Probabilistic Tsunami Hazard Analysis (SPTHA) is a methodology to assess the exceedance probability for different thresholds of tsunami hazard intensity, at a specific site or region in a given time period, due to seismic sources. A large number of high-resolution inundation simulations is typically required to take into account the full variability of potential seismic sources and their slip distributions. Starting from regional SPTHA offshore results, the computational cost can be reduced by considering only a subset of `important' scenarios for inundation calculations. Here we use a method based on an event tree for the treatment of the aleatory variability of the seismic source; a cluster analysis on the offshore results to define the important sources; and epistemic uncertainty treatment through an ensemble modeling approach. We consider two target sites in the Mediterranean (Milazzo, Italy, and Thessaloniki, Greece) where coastal (non-nuclear) critical infrastructures (CIs) are located. After performing a regional SPTHA covering the whole Mediterranean, for each target site a few hundred representative scenarios are filtered out of all the potential seismic sources and the tsunami inundation is explicitly modeled, yielding a site-specific SPTHA with a complete characterization of the tsunami hazard in terms of flow depth and velocity time histories. Moreover, we also explore the variability of SPTHA at the target site accounting for coseismic deformation (i.e. uplift or subsidence) due to near-field sources located in very shallow water. The results are suitable for, and will be applied to, subsequent multi-hazard risk analysis for the CIs. These applications have been developed in the framework of the Italian Flagship Project RITMARE, the EC FP7 ASTARTE (grant agreement 603839) and STREST (grant agreement 603389) projects, and the INGV-DPC Agreement.
The acoustic field of a point source in a uniform boundary layer over an impedance plane
NASA Technical Reports Server (NTRS)
Zorumski, W. E.; Willshire, W. L., Jr.
1986-01-01
The acoustic field of a point source in a boundary layer above an impedance plane is investigated analytically using Obukhov quasi-potential functions, extending the normal-mode theory of Chunchuzov (1984) to account for the effects of finite ground-plane impedance and source height. The solution is found to be asymptotic to the surface-wave term studied by Wenzel (1974) in the limit of vanishing wind speed, suggesting that normal-mode theory can be used to model the effects of an atmospheric boundary layer on infrasonic sound radiation. Model predictions are derived for noise-generation data obtained by Willshire (1985) at the Medicine Bow wind-turbine facility. Long-range downwind propagation is found to behave as a cylindrical wave, with attenuation proportional to the wind speed, the boundary-layer displacement thickness, the real part of the ground admittance, and the square of the frequency.
History and theory in "applied ethics".
Beauchamp, Tom L
2007-03-01
Robert Baker and Laurence McCullough argue that the "applied ethics model" is deficient and in need of a replacement model. However, they supply no clear meaning to "applied ethics" and miss most of what is important in the literature on methodology that treats this question. The Baker-McCullough account of medical and applied ethics is a straw man that has had no influence in these fields or in philosophical ethics. The authors are also on shaky historical grounds in dealing with two problems: (1) the historical source of the notion of "practical ethics" and (2) the historical source of and the assimilation of the term "autonomy" into applied philosophy and professional ethics. They mistakenly hold (1) that the expression "practical ethics" was first used in a publication by Thomas Percival and (2) that Kant is the primary historical source of the notion of autonomy as that notion is used in contemporary applied ethics.
Fortibuoni, Tomaso; Libralato, Simone; Raicevich, Saša; Giovanardi, Otello; Solidoro, Cosimo
2010-01-01
The understanding of changes in fish communities over past centuries has important implications for conservation policy and marine resource management. However, reconstructing these changes is difficult because information on marine communities before the second half of the 20th century is, in most cases, anecdotal and merely qualitative. Therefore, historical qualitative records and modern quantitative data are not directly comparable, and their integration for long-term analyses is not straightforward. We developed a methodology that allows the coding of qualitative information provided by early naturalists into semi-quantitative information through an intercalibration with landing proportions. This approach allowed us to reconstruct and quantitatively analyze a 200-year-long time series of fish community structure indicators in the Northern Adriatic Sea (Mediterranean Sea). Our analysis provides evidence of long-term changes in fish community structure, including the decline of Chondrichthyes and of large-sized and late-maturing species. This work highlights the importance of broadening the time-frame through which we look at marine ecosystem changes and provides a methodology to exploit historical qualitative sources in a quantitative framework. For this purpose, naturalists' eyewitness accounts proved useful for extending the analysis of the fish community back in time, well before the onset of field-based monitoring programs. PMID:21103349
40 CFR 97.520 - Establishment of compliance accounts, assurance accounts, and general accounts.
Code of Federal Regulations, 2013 CFR
2013-07-01
... representation under § 97.516, the Administrator will establish a compliance account for the TR NOX Ozone Season source for which the certificate of representation was submitted, unless the source already has a... Program on behalf of such persons and that each such person shall be fully bound by my representations...
Focal colors across languages are representative members of color categories.
Abbott, Joshua T; Griffiths, Thomas L; Regier, Terry
2016-10-04
Focal colors, or best examples of color terms, have traditionally been viewed as either the underlying source of cross-language color-naming universals or derived from category boundaries that vary widely across languages. Existing data partially support and partially challenge each of these views. Here, we advance a position that synthesizes aspects of these two traditionally opposed positions and accounts for existing data. We do so by linking this debate to more general principles. We show that best examples of named color categories across 112 languages are well-predicted from category extensions by a statistical model of how representative a sample is of a distribution, independently shown to account for patterns of human inference. This model accounts for both universal tendencies and variation in focal colors across languages. We conclude that categorization in the contested semantic domain of color may be governed by principles that apply more broadly in cognition and that these principles clarify the interplay of universal and language-specific forces in color naming.
Focal colors across languages are representative members of color categories
Abbott, Joshua T.; Griffiths, Thomas L.; Regier, Terry
2016-01-01
Focal colors, or best examples of color terms, have traditionally been viewed as either the underlying source of cross-language color-naming universals or derived from category boundaries that vary widely across languages. Existing data partially support and partially challenge each of these views. Here, we advance a position that synthesizes aspects of these two traditionally opposed positions and accounts for existing data. We do so by linking this debate to more general principles. We show that best examples of named color categories across 112 languages are well-predicted from category extensions by a statistical model of how representative a sample is of a distribution, independently shown to account for patterns of human inference. This model accounts for both universal tendencies and variation in focal colors across languages. We conclude that categorization in the contested semantic domain of color may be governed by principles that apply more broadly in cognition and that these principles clarify the interplay of universal and language-specific forces in color naming. PMID:27647896
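The synthesis described above can be sketched with a toy likelihood-ratio notion of representativeness (in the spirit of the Tenenbaum-and-Griffiths-style model the paper builds on). The two Gaussian "categories" on a 1-D hue line below are invented; the point is that the most representative example is pulled away from the competing category rather than sitting at the category mean.

```python
import numpy as np

xs = np.linspace(0.0, 10.0, 1001)   # hypothetical 1-D hue line

def gauss(x, mu, sd):
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2.0 * np.pi))

p_a = gauss(xs, 3.0, 1.0)           # category A (narrow)
p_b = gauss(xs, 7.0, 2.0)           # competing category B (broad)

# Representativeness of x for A: likelihood ratio against the alternative.
representativeness = p_a / p_b
best_example = xs[np.argmax(representativeness)]
print(best_example)  # < 3.0: the best example shifts away from category B
```

For these two Gaussians the maximizer works out to x = 5/3, well below A's mean of 3.0, illustrating how "best examples" can be predicted from category extensions rather than coinciding with category centers.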
Development of axisymmetric lattice Boltzmann flux solver for complex multiphase flows
NASA Astrophysics Data System (ADS)
Wang, Yan; Shu, Chang; Yang, Li-Ming; Yuan, Hai-Zhuan
2018-05-01
This paper presents an axisymmetric lattice Boltzmann flux solver (LBFS) for simulating axisymmetric multiphase flows. In the solver, the two-dimensional (2D) multiphase LBFS is applied to reconstruct macroscopic fluxes excluding axisymmetric effects. Source terms accounting for axisymmetric effects are introduced directly into the governing equations. As compared to conventional axisymmetric multiphase lattice Boltzmann (LB) method, the present solver has the kinetic feature for flux evaluation and avoids complex derivations of external forcing terms. In addition, the present solver also saves considerable computational efforts in comparison with three-dimensional (3D) computations. The capability of the proposed solver in simulating complex multiphase flows is demonstrated by studying single bubble rising in a circular tube. The obtained results compare well with the published data.
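As a generic illustration of how such geometric source terms arise (not taken from the paper), consider the 3D axisymmetric continuity equation written in pseudo-2D (x, r) coordinates:

```latex
% Expanding (1/r) \partial_r (r \rho u_r) leaves a geometric source term
% S = -\rho u_r / r on the right-hand side of the otherwise 2D equation:
\partial_t \rho + \partial_x (\rho u_x) + \partial_r (\rho u_r)
  = -\frac{\rho u_r}{r}
```

The momentum and interface-capturing equations acquire analogous 1/r terms, which is what "source terms accounting for axisymmetric effects" refers to in general.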
A general circulation model study of atmospheric carbon monoxide
NASA Technical Reports Server (NTRS)
Pinto, J. P.; Rind, D.; Russell, G. L.; Lerner, J. A.; Hansen, J. E.; Yung, Y. L.; Hameed, S.
1983-01-01
The carbon monoxide cycle is studied by incorporating the known and hypothetical sources and sinks in a tracer model that uses the winds generated by a general circulation model. Photochemical production and loss terms, which depend on OH radical concentrations, are calculated in an interactive fashion. The computed global distribution and seasonal variations of CO are compared with observations to obtain constraints on the distribution and magnitude of the sources and sinks of CO, and on the tropospheric abundance of OH. The simplest model that accounts for available observations requires a low-latitude plant source of about 1.3 × 10^15 g/yr, in addition to sources from incomplete combustion of fossil fuels and oxidation of methane. The globally averaged OH concentration calculated in the model is 7.5 × 10^5 molecules/cu cm. Models that calculate globally averaged OH concentrations much lower than this nominal value are not consistent with the observed variability of CO. Such models are also inconsistent with measurements of CO isotopic abundances, which imply the existence of plant sources.
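The OH constraint above implies a simple back-of-envelope CO lifetime. The rate constant below is an assumed literature-style value for the CO + OH reaction (about 2.4 × 10^-13 cm^3 molecule^-1 s^-1), not a number from the paper.

```python
# Back-of-envelope CO lifetime implied by the model's mean OH level.
k_co_oh = 2.4e-13        # cm^3 molecule^-1 s^-1 (assumed rate constant)
oh = 7.5e5               # molecules cm^-3 (the model's global mean OH)
lifetime_s = 1.0 / (k_co_oh * oh)
lifetime_days = lifetime_s / 86400.0
print(round(lifetime_days))  # roughly two months
```

A lifetime of this order is short enough that weak OH (and hence a long lifetime) would smooth out the observed CO variability, which is the consistency argument the abstract makes.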
The Iterative Reweighted Mixed-Norm Estimate for Spatio-Temporal MEG/EEG Source Reconstruction.
Strohmeier, Daniel; Bekhti, Yousra; Haueisen, Jens; Gramfort, Alexandre
2016-10-01
Source imaging based on magnetoencephalography (MEG) and electroencephalography (EEG) allows for the non-invasive analysis of brain activity with high temporal and good spatial resolution. As the bioelectromagnetic inverse problem is ill-posed, constraints are required. For the analysis of evoked brain activity, spatial sparsity of the neuronal activation is a common assumption, often taken into account using convex constraints based on the l1-norm. The resulting source estimates are, however, biased in amplitude and often suboptimal in terms of source selection due to high correlations in the forward model. In this work, we demonstrate that an inverse solver based on a block-separable penalty, with a Frobenius norm per block and an l0.5-quasinorm over blocks, addresses both of these issues. For solving the resulting non-convex optimization problem, we propose the iterative reweighted Mixed Norm Estimate (irMxNE), an optimization scheme based on iterative reweighted convex surrogate optimization problems, which are solved efficiently using a block coordinate descent scheme and an active set strategy. We compare the proposed sparse imaging method to the dSPM and RAP-MUSIC approaches on two MEG data sets. We provide empirical evidence, based on simulations and analysis of MEG data, that the proposed method improves on the standard Mixed Norm Estimate (MxNE) in terms of amplitude bias, support recovery, and stability.
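The reweighting idea can be illustrated on a toy sparse regression problem. The sketch below is not the paper's block-structured solver: it uses a plain (non-block) weighted lasso solved by coordinate descent, with weights updated between convex problems to approximate an l0.5 penalty; the forward matrix, source amplitudes, and regularization value are all invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented toy problem: 20 sensors, 30 candidate sources, 3 truly active.
G = rng.standard_normal((20, 30))          # forward ("gain") matrix
x_true = np.zeros(30)
x_true[[4, 12, 21]] = [2.0, -1.5, 1.0]
y = G @ x_true + 0.01 * rng.standard_normal(20)

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def weighted_lasso(G, y, lam, w, n_sweeps=200):
    """Coordinate descent for 0.5*||y - Gx||^2 + lam * sum_j w_j |x_j|."""
    x = np.zeros(G.shape[1])
    col_norm2 = (G ** 2).sum(axis=0)
    r = y.copy()
    for _ in range(n_sweeps):
        for j in range(G.shape[1]):
            r = r + G[:, j] * x[j]         # put coordinate j back into residual
            x[j] = soft_threshold(G[:, j] @ r, lam * w[j]) / col_norm2[j]
            r = r - G[:, j] * x[j]
    return x

# Iterative reweighting: each convex weighted-l1 problem is a surrogate
# for the non-convex l0.5 penalty, shrinking the amplitude bias of plain l1.
lam, eps = 0.5, 1e-3
w = np.ones(30)
for _ in range(5):
    x = weighted_lasso(G, y, lam, w)
    w = 0.5 / np.sqrt(np.abs(x) + eps)     # large weight on near-zero coefs

support = np.flatnonzero(np.abs(x) > 0.1)
print(support, x[support])  # recovers sources 4, 12, 21 with little bias
```

After reweighting, coefficients near zero receive very large penalties while active coefficients are penalized lightly, which is how the scheme counteracts the amplitude bias of a single convex l1 solve.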
Assessment of Information on Concussion Available to Adolescents on Social Media.
Kollia, Betty; Basch, Corey H; Mouser, Christina; Deleon, Aurea J
2018-01-01
Considering how many people obtain information about their health online, the aim of this study was to describe the content of the currently most widely viewed YouTube videos related to concussions and to test the hypothesis that consumer videos would be anecdotal, while other sources would be more informational. The term "concussion" was used to search for videos with 100,000 or more views on YouTube that were posted in English or Spanish. Descriptive information about each video was recorded, as was information on whether certain content was conveyed during the video. The main outcome measures are sources of upload and content of videos. Consumer videos accounted for 48% of the videos, television-based videos accounted for 50%, and internet-based videos accounted for only 2%. None of the videos viewed fell into the professional category. Television-based videos were viewed significantly more than consumer or internet-based videos. Consumer and television-based videos were equally anecdotal. Many of the videos focused on adolescents and were related to sports injuries. The majority of the videos (70.4%) addressed concussion causes, with 48% citing sports. Few videos discussed concussion symptoms or prevention. The potential for widespread misinformation necessitates caution when obtaining information on concussion from a freely accessible and editable medium, such as YouTube.
25 CFR 115.702 - What specific sources of money will be accepted for deposit into a trust account?
Code of Federal Regulations, 2010 CFR
2010-04-01
... 25 Indians 1 2010-04-01 2010-04-01 false What specific sources of money will be accepted for deposit into a trust account? 115.702 Section 115.702 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR FINANCIAL ACTIVITIES TRUST FUNDS FOR TRIBES AND INDIVIDUAL INDIANS Trust Fund Accounts: General Information § 115.702 What specific sources...
Seismic Hazard and Ground Motion Characterization at the Itoiz Dam (Northern Spain)
NASA Astrophysics Data System (ADS)
Rivas-Medina, A.; Santoyo, M. A.; Luzón, F.; Benito, B.; Gaspar-Escribano, J. M.; García-Jerez, A.
2012-08-01
This paper presents a new hazard-consistent ground motion characterization of the Itoiz dam site, located in Northern Spain. Firstly, we propose a methodology with different approximation levels to the expected ground motion at the dam site. Secondly, we apply this methodology taking into account the particular characteristics of the site and of the dam. Hazard calculations were performed following the Probabilistic Seismic Hazard Assessment method using a logic tree, which accounts for different seismic source zonings and different ground-motion attenuation relationships. The study was done in terms of peak ground acceleration and several spectral accelerations of periods coinciding with the fundamental vibration periods of the dam. In order to estimate these ground motions we consider two different dam conditions: when the dam is empty ( T = 0.1 s) and when it is filled with water to its maximum capacity ( T = 0.22 s). Additionally, seismic hazard analysis is done for two return periods: 975 years, related to the project earthquake, and 4,975 years, identified with an extreme event. Soil conditions were also taken into account at the site of the dam. Through the proposed methodology we deal with different forms of characterizing ground motion at the study site. In a first step, we obtain the uniform hazard response spectra for the two return periods. In a second step, a disaggregation analysis is done in order to obtain the controlling earthquakes that can affect the dam. Subsequently, we characterize the ground motion at the dam site in terms of specific response spectra for target motions defined by the expected values SA ( T) of T = 0.1 and 0.22 s for the return periods of 975 and 4,975 years, respectively. Finally, synthetic acceleration time histories for earthquake events matching the controlling parameters are generated using the discrete wave-number method and subsequently analyzed. 
Because of the short relative distances between the controlling earthquakes and the dam site, we considered finite sources in these computations. We conclude that directivity effects should be taken into account as an important variable in this kind of study of ground motion characteristics.
Wu, Rengmao; Hua, Hong
2016-01-01
Illumination design, used to redistribute the spatial energy distribution of a light source, is a key technique in lighting applications. However, there is still no effective illumination design method for extended sources, especially for extended non-Lambertian sources. What we present here is, to our knowledge, the first direct method for extended non-Lambertian sources in three-dimensional (3D) rotational geometry. In this method, both meridional rays and skew rays of the extended source are taken into account to tailor the lens profile in the meridional plane. A set of edge rays and interior rays emitted from the extended source which will take a given direction after refraction by the aspherical lens are found by Snell's law, and the output intensity in this direction is then calculated as the integral of the luminance function of the outgoing rays in this direction. This direct method is effective for both extended non-Lambertian sources and extended Lambertian sources in 3D rotational symmetry, and can directly find a solution to the prescribed design problem without cumbersome iterative illuminance compensation. Two examples are presented to demonstrate the effectiveness of the proposed method in terms of performance and capacity for tackling complex designs. PMID:26832484
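The core geometric step, refracting edge and interior rays, is Snell's law in vector form. A minimal sketch (NumPy; the function name and the air-to-glass example are hypothetical illustrations, not the paper's lens design):

```python
import numpy as np

def refract(d, n, n1, n2):
    """Refract unit direction d at a surface with unit normal n (n opposes d).

    Vector form of Snell's law; returns None on total internal reflection.
    """
    d = d / np.linalg.norm(d)
    n = n / np.linalg.norm(n)
    eta = n1 / n2
    cos_i = -np.dot(d, n)                  # cosine of the incidence angle
    sin2_t = eta ** 2 * (1.0 - cos_i ** 2)  # Snell: sin(t) = eta * sin(i)
    if sin2_t > 1.0:
        return None                        # total internal reflection
    cos_t = np.sqrt(1.0 - sin2_t)
    return eta * d + (eta * cos_i - cos_t) * n

# 45-degree incidence from air (n1 = 1.0) into glass (n2 = 1.5)
d = np.array([np.sin(np.pi / 4), 0.0, -np.cos(np.pi / 4)])
t = refract(d, np.array([0.0, 0.0, 1.0]), 1.0, 1.5)
```

Applying this to each sampled edge and interior ray gives the outgoing directions over which the luminance function is integrated for the output intensity.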
Brown, J. F.; Hendy, Steve
2001-01-01
In spite of repeated efforts to explain itself to a wider audience, behavior analysis remains a largely misunderstood and isolated discipline. In this article we argue that this situation is in part due to the terms we use in our technical discussions. In particular, reinforcement and punishment, with their vernacular associations of reward and retribution, are a source of much misunderstanding. Although contemporary thinking within behavior analysis holds that reinforcement and punishment are Darwinian processes whereby behavioral variants are selected and deselected by their consequences, the continued use of the terms reinforcement and punishment to account for behavioral evolution obscures this fact. To clarify and simplify matters, we propose replacing the terms reinforcement and punishment with selection and deselection, respectively. These changes would provide a terminological meeting point with other selectionist sciences, thereby increasing the likelihood that behavior analysis will contribute to Darwinian science. PMID:22478361
[Sacer ignis, quam pustulam vocant pastores: anthrax--cultural historical traces of a zoonosis].
Eitel, J
2003-01-01
The knowledge of anthrax as a disease and its importance as a zoonosis in the Greco-Roman world is revealed through a selection of classical texts and mythological sources, taking into account evidence of reworking and reuse of these texts up until the nineteenth century. The numerous names given to the disease throughout history and their linguistic origins will also be examined in this paper. The narrative of the epizoonoses in Noricum in Virgil's Georgics, taken by several authors to represent a description of an anthrax epidemic, and which had a great influence on written works on veterinary medicine up until the discovery of bacteria, will be given particular attention. The crucial term is "Sacer Ignis", synonymous with several different human and animal diseases through time. This term will be analysed in terms of its linguistic origin and the changes in meaning it acquired throughout the centuries.
NASA Astrophysics Data System (ADS)
Charrier, J. G.; Richards-Henderson, N. K.; Bein, K. J.; McFall, A. S.; Wexler, A. S.; Anastasio, C.
2014-09-01
Recent epidemiological evidence supports the hypothesis that health effects from inhalation of ambient particulate matter (PM) are governed by more than just the mass of PM inhaled. Both specific chemical components and sources have been identified as important contributors to mortality and hospital admissions, even when these endpoints are unrelated to PM mass. Sources may cause adverse health effects via their ability to produce reactive oxygen species, possibly due to the transition metal content of the PM. Our goal is to quantify the oxidative potential of ambient particle sources collected during two seasons in Fresno, CA using the dithiothreitol (DTT) assay. We collected PM from different sources or source combinations into different ChemVol (CV) samplers in real time using a novel source-oriented sampling technique based on single particle mass spectrometry. We segregated the particles from each source-oriented mixture into two size fractions - ultrafine (Dp ≤ 0.17 μm) and submicron fine (0.17 μm ≤ Dp ≤ 1.0 μm) - and measured metals and the rate of DTT loss in each PM extract. We find that the mass-normalized oxidative potential of different sources varies by up to a factor of 8 and that submicron fine PM typically has a larger mass-normalized oxidative potential than ultrafine PM from the same source. Vehicular Emissions, Regional Source Mix, Commute Hours, Daytime Mixed Layer and Nighttime Inversion sources exhibit the highest mass-normalized oxidative potential. When we apportion the volume-normalized oxidative potential, which also accounts for the source's prevalence, cooking sources account for 18-29% of the total DTT loss while mobile (traffic) sources account for 16-28%.
When we apportion DTT activity for total PM sampled to specific chemical compounds, soluble copper accounts for roughly 50% of total air-volume-normalized oxidative potential, soluble manganese accounts for 20%, and other unknown species, likely including quinones and other organics, account for 30%. During nighttime, soluble copper and manganese largely explain the oxidative potential of PM, while daytime has a larger contribution from unknown (likely organic) species.
Source term evaluation for accident transients in the experimental fusion facility ITER
DOE Office of Scientific and Technical Information (OSTI.GOV)
Virot, F.; Barrachin, M.; Cousin, F.
2015-03-15
We have studied the transport and chemical speciation of radio-toxic and toxic species for an event of water ingress in the vacuum vessel of experimental fusion facility ITER with the ASTEC code. In particular our evaluation takes into account an assessed thermodynamic data for the beryllium gaseous species. This study shows that deposited beryllium dusts of atomic Be and Be(OH){sub 2} are formed. It also shows that Be(OT){sub 2} could exist in some conditions in the drain tank. (authors)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Obaid, Razib; Buth, Christian; Dakovski, Georgi L.
Here, we measured the fluorescence photon yield of neon upon soft x-ray ionization (~1200 eV) from the x-ray free-electron laser at the Linac Coherent Light Source, and demonstrated the use of a grazing incidence spectrometer with a variable line spacing grating to perform x-ray fluorescence spectroscopy on a gas phase system. Our measurements also allowed us to estimate the focal size of the beam from the theoretical description developed, in terms of a rate-equation approximation accounting for photoionization shake-off of neutral neon and double Auger decay of single core holes.
Method for measuring multiple scattering corrections between liquid scintillators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Verbeke, J. M.; Glenn, A. M.; Keefer, G. J.
2016-04-11
In this study, a time-of-flight method is proposed to experimentally quantify the fractions of neutrons scattering between scintillators. An array of scintillators is characterized in terms of crosstalk with this method by measuring a californium source, for different neutron energy thresholds. The spectral information recorded by the scintillators can be used to estimate the fractions of neutrons multiple scattering. With the help of a correction to Feynman's point model theory to account for multiple scattering, these fractions can in turn improve the mass reconstruction of fissile materials under investigation.
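Time-of-flight discrimination of crosstalk rests on how long a scattered neutron takes to travel between scintillators. A back-of-the-envelope, nonrelativistic estimate (hypothetical geometry; fission-neutron kinetic energies are far below the neutron rest energy, so the approximation is adequate here):

```python
import math

NEUTRON_MASS_MEV = 939.565   # neutron rest energy, MeV
C_M_PER_S = 2.998e8          # speed of light, m/s

def neutron_tof(distance_m, energy_mev):
    """Nonrelativistic time of flight for a neutron of given kinetic energy."""
    beta = math.sqrt(2.0 * energy_mev / NEUTRON_MASS_MEV)  # v/c
    return distance_m / (beta * C_M_PER_S)

# A 1 MeV fission neutron crossing 0.5 m between two scintillators:
t = neutron_tof(0.5, 1.0)    # ~36 ns
```

Crosstalk events then appear at delays consistent with this inter-detector travel time, which is what a time-of-flight gate selects for.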
What's the Rush? Tort Laws and Elective Early-term Induction of Labor.
Roth, Louise Marie
2016-12-01
Tort laws aim to deter risky medical practices and increase accountability for harm. This research examines their effects on deterrence of a high-risk obstetric practice in the United States: elective early-term (37-38 weeks gestation) induction of labor. Using birth certificate data from the Natality Detail Files and state-level data from publicly available sources, this study analyzes the effects of tort laws on labor induction with multilevel models (MLM) of 665,491 early-term births nested in states. Results reveal that caps on damages are associated with significantly higher odds of early-term induction and Proportionate Liability (PL) is associated with significantly lower odds compared to Joint and Several Liability (JSL). The findings suggest that clinicians are more likely to engage in practices that defy professional guidelines in tort environments with lower legal burdens. I discuss the implications of the findings for patient safety and the deterrence of high-risk practices. © American Sociological Association 2016.
A consistent modelling methodology for secondary settling tanks: a reliable numerical method.
Bürger, Raimund; Diehl, Stefan; Farås, Sebastian; Nopens, Ingmar; Torfs, Elena
2013-01-01
The consistent modelling methodology for secondary settling tanks (SSTs) leads to a partial differential equation (PDE) of nonlinear convection-diffusion type as a one-dimensional model for the solids concentration as a function of depth and time. This PDE includes a flux that depends discontinuously on spatial position modelling hindered settling and bulk flows, a singular source term describing the feed mechanism, a degenerating term accounting for sediment compressibility, and a dispersion term for turbulence. In addition, the solution itself is discontinuous. A consistent, reliable and robust numerical method that properly handles these difficulties is presented. Many constitutive relations for hindered settling, compression and dispersion can be used within the model, allowing the user to switch on and off effects of interest depending on the modelling goal as well as investigate the suitability of certain constitutive expressions. Simulations show the effect of the dispersion term on effluent suspended solids and total sludge mass in the SST. The focus is on correct implementation whereas calibration and validation are not pursued.
A continuous time random walk (CTRW) integro-differential equation with chemical interaction
NASA Astrophysics Data System (ADS)
Ben-Zvi, Rami; Nissan, Alon; Scher, Harvey; Berkowitz, Brian
2018-01-01
A nonlocal-in-time integro-differential equation is introduced that accounts for close coupling between transport and chemical reaction terms. The structure of the equation contains these terms in a single convolution with a memory function M ( t), which includes the source of non-Fickian (anomalous) behavior, within the framework of a continuous time random walk (CTRW). The interaction is non-linear and second-order, relevant for a bimolecular reaction A + B → C. The interaction term ΓP A ( s, t) P B ( s, t) is symmetric in the concentrations of A and B (i.e. P A and P B ); thus the source terms in the equations for A, B and C are similar, but with a change in sign for that of C. Here, the chemical rate coefficient, Γ, is constant. The fully coupled equations are solved numerically using a finite element method (FEM) with a judicious representation of M ( t) that eschews the need for the entire time history, instead using only values at the former time step. To begin to validate the equations, the FEM solution is compared, in lieu of experimental data, to a particle tracking method (CTRW-PT); the results from the two approaches, particularly for the C profiles, are in agreement. The FEM solution, for a range of initial and boundary conditions, can provide a good model for reactive transport in disordered media.
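The claim that only values at the former time step are needed holds exactly when the memory function admits a recursive representation; an exponential kernel is the textbook case. A sketch (NumPy; the kernel, time step, and source history are hypothetical, not the paper's M(t)):

```python
import numpy as np

tau, dt, n = 0.5, 0.01, 200
t = dt * np.arange(1, n + 1)
f = np.sin(3.0 * t)                      # arbitrary source history

# Direct evaluation of the convolution: I_n = sum_{k<=n} exp(-(t_n - t_k)/tau) * f_k * dt
direct = np.array([np.sum(np.exp(-(t[i] - t[:i + 1]) / tau) * f[:i + 1]) * dt
                   for i in range(n)])

# One-step recursion: I_n = exp(-dt/tau) * I_{n-1} + f_n * dt  (no full history stored)
decay = np.exp(-dt / tau)
recursive = np.zeros(n)
recursive[0] = f[0] * dt
for i in range(1, n):
    recursive[i] = decay * recursive[i - 1] + f[i] * dt
```

Both evaluations agree to machine precision, while the recursion needs O(1) storage per step instead of the entire history.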
ATLAS Distributed Computing Monitoring tools during the LHC Run I
NASA Astrophysics Data System (ADS)
Schovancová, J.; Campana, S.; Di Girolamo, A.; Jézéquel, S.; Ueda, I.; Wenaus, T.; Atlas Collaboration
2014-06-01
This contribution summarizes the evolution of the ATLAS Distributed Computing (ADC) Monitoring project during the LHC Run I. The ADC Monitoring targets three groups of customers: the ADC Operations team, to identify malfunctions early and escalate issues to an activity or service expert; ATLAS national contacts and sites, for real-time monitoring and long-term measurement of the performance of the provided computing resources; and the ATLAS Management, for long-term trends and accounting information about the ATLAS Distributed Computing resources. During the LHC Run I a significant development effort was invested in standardization of the monitoring and accounting applications in order to provide an extensive monitoring and accounting suite. ADC Monitoring applications separate the data layer and the visualization layer. The data layer exposes data in a predefined format. The visualization layer is designed bearing in mind the visual identity of the provided graphical elements, and the re-usability of the visualization bits across the different tools. A rich family of filtering and searching options enhancing the available user interfaces comes naturally with the separation of the data and visualization layers. With a variety of reliable monitoring data accessible through standardized interfaces, automating actions under well-defined conditions correlating multiple data sources has become feasible. We also discuss the automated exclusion of degraded resources and their automated recovery in various activities.
Bayen, Ute J.; Kuhlmann, Beatrice G.
2010-01-01
The authors investigated conditions under which judgments in source-monitoring tasks are influenced by prior schematic knowledge. According to a probability-matching account of source guessing (Spaniol & Bayen, 2002), when people do not remember the source of information, they match source guessing probabilities to the perceived contingency between sources and item types. When they do not have a representation of a contingency, they base their guesses on prior schematic knowledge. The authors provide support for this account in two experiments with sources presenting information that was expected for one source and somewhat unexpected for another. Schema-relevant information about the sources was provided at the time of encoding. When contingency perception was impeded by dividing attention, participants showed schema-based guessing (Experiment 1). Manipulating source-item contingency also affected guessing (Experiment 2). When this contingency was schema-inconsistent, it superseded schema-based expectations and led to schema-inconsistent guessing. PMID:21603251
The application of foraging theory to the information searching behaviour of general practitioners.
Dwairy, Mai; Dowell, Anthony C; Stahl, Jean-Claude
2011-08-23
General Practitioners (GPs) employ strategies to identify and retrieve medical evidence for clinical decision making which take workload and time constraints into account. Optimal Foraging Theory (OFT), initially developed to study animal foraging for food, is used here to explore the information searching behaviour of General Practitioners. This study is the first to apply foraging theory within this context. Study objectives were: 1. To identify the sequence and steps deployed in identifying and retrieving evidence for clinical decision making. 2. To utilise Optimal Foraging Theory to assess the effectiveness and efficiency of General Practitioner information searching. GPs from the Wellington region of New Zealand were asked to document in a pre-formatted logbook the steps and outcomes of an information search linked to their clinical decision making, and to fill in a questionnaire about their personal, practice and information-searching backgrounds. A total of 115/155 eligible GPs returned a background questionnaire, and 71 completed their information search logbook. GPs spent an average of 17.7 minutes addressing their search for clinical information. Their preferred information sources were discussions with colleagues (38% of sources) and books (22%). These were the two most profitable information foraging sources (15.9 min and 9.5 min search time per answer, compared to 34.3 minutes in databases). GPs nearly always accessed another source when unsuccessful (95% after 1st source), and frequently when successful (43% after 2nd source). Use of multiple sources accounted for 41% of searches, and increased search success from 70% to 89%. By consulting, in foraging terms, the most 'profitable' sources of information (colleagues, books), rapidly switching sources when unsuccessful, and frequently double checking, GPs achieve an efficient trade-off between maximizing search success and information reliability, and minimizing searching time.
As predicted by foraging theory, GPs trade time-consuming evidence-based (electronic) information sources for sources with a higher information reward per unit time searched. Evidence-based practice must accommodate these 'real world' foraging pressures, and Internet resources should evolve to deliver information as effectively as traditional methods of information gathering.
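In foraging terms, the figures above reduce to simple profitability arithmetic. A sketch using the abstract's numbers (the implied success rate of the fallback source is an inference from the reported 70% to 89% rise, not a reported value):

```python
# Foraging profitability: answers per minute of search time, from the abstract
time_per_answer = {"colleagues": 15.9, "books": 9.5, "databases": 34.3}  # min/answer
profitability = {src: 1.0 / t for src, t in time_per_answer.items()}

# Overall success rises from 70% to 89% when a fallback source is consulted:
# p_total = p1 + (1 - p1) * p2, so the implied success rate of the second source is
p1, p_total = 0.70, 0.89
p2 = (p_total - p1) / (1.0 - p1)   # ~0.63
```

By this metric books and colleagues are the most profitable patches, matching the trade-off described in the abstract.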
Computation of nonlinear ultrasound fields using a linearized contrast source method.
Verweij, Martin D; Demi, Libertario; van Dongen, Koen W A
2013-08-01
Nonlinear ultrasound is important in medical diagnostics because imaging of the higher harmonics improves resolution and reduces scattering artifacts. Second harmonic imaging is currently standard, and higher harmonic imaging is under investigation. The efficient development of novel imaging modalities and equipment requires accurate simulations of nonlinear wave fields in large volumes of realistic (lossy, inhomogeneous) media. The Iterative Nonlinear Contrast Source (INCS) method has been developed to deal with spatiotemporal domains measuring hundreds of wavelengths and periods. This full wave method considers the nonlinear term of the Westervelt equation as a nonlinear contrast source, and solves the equivalent integral equation via the Neumann iterative solution. Recently, the method has been extended with a contrast source that accounts for spatially varying attenuation. The current paper addresses the problem that the Neumann iterative solution converges badly for strong contrast sources. The remedy is linearization of the nonlinear contrast source, combined with application of more advanced methods for solving the resulting integral equation. Numerical results show that linearization in combination with a Bi-Conjugate Gradient Stabilized method allows the INCS method to deal with fairly strong, inhomogeneous attenuation, while the error due to the linearization can be eliminated by restarting the iterative scheme.
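Why the Neumann series fails for strong contrast sources can be seen on a toy linear system (I − K)u = f: the fixed-point iteration u ← f + Ku diverges as soon as the spectral radius of K exceeds 1, even though the system itself stays well posed. A sketch with a hypothetical diagonal contrast operator (the paper instead solves the linearized integral equation with a Bi-Conjugate Gradient Stabilized method; a direct solve stands in for it here):

```python
import numpy as np

# Toy "contrast" operator with spectral radius 1.5 > 1
K = np.diag([1.5, -1.2, 0.3])
I = np.eye(3)
f = np.array([1.0, 2.0, -1.0])

# Neumann iteration u <- f + K u diverges for this K
u = f.copy()
norms = []
for _ in range(15):
    u = f + K @ u
    norms.append(np.linalg.norm(u))

# Yet the linear system itself is perfectly well posed
u_exact = np.linalg.solve(I - K, f)
```

This is the situation the linearization addresses: keep the system linear and hand it to a Krylov solver whose convergence does not hinge on the contrast being weak.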
Federal Register 2010, 2011, 2012, 2013, 2014
2011-10-03
... demonstrated expertise in forestry, agriculture, measurement and carbon accounting methodologies, land use... draft Accounting Framework for Biogenic CO 2 Emissions from Stationary Sources (September 2011). DATES... review EPA's draft Accounting Framework for Biogenic CO 2 Emissions from Stationary Sources (September...
Numerical Modeling of Electroacoustic Logging Including Joule Heating
NASA Astrophysics Data System (ADS)
Plyushchenkov, Boris D.; Nikitin, Anatoly A.; Turchaninov, Victor I.
It is well known that an electromagnetic field excites acoustic waves in a porous elastic medium saturated with fluid electrolyte, due to the electrokinetic conversion effect. Pride's equations describing this process are written in the isothermal approximation. An update of these equations is proposed here that takes the influence of Joule heating on acoustic wave propagation into account. This update includes terms describing the initiation of additional acoustic waves excited by thermoelastic stresses, and the heat conduction equation with a right-hand side defined by Joule heating. Results of numerical modeling of several problems of propagation of acoustic waves excited by an electric field source, with and without consideration of the Joule heating effect, are presented. From these results, it follows that the influence of Joule heating should be taken into account in the numerical simulation of electroacoustic logging and in the interpretation of its log data.
47 CFR 32.4300 - Other long-term liabilities and deferred credits.
Code of Federal Regulations, 2010 CFR
2010-10-01
... CARRIER SERVICES UNIFORM SYSTEM OF ACCOUNTS FOR TELECOMMUNICATIONS COMPANIES Instructions for Balance Sheet Accounts § 32.4300 Other long-term liabilities and deferred credits. (a) This account shall...
Analysis of the SPS Long Term Orbit Drifts
DOE Office of Scientific and Technical Information (OSTI.GOV)
Velotti, Francesco; Bracco, Chiara; Cornelis, Karel
2016-06-01
The Super Proton Synchrotron (SPS) is the last accelerator in the Large Hadron Collider (LHC) injector chain, and has to deliver the two high-intensity 450 GeV proton beams to the LHC. The transport from SPS to LHC is done through the two Transfer Lines (TL), TI2 and TI8, for Beam 1 (B1) and Beam 2 (B2) respectively. During the first LHC operation period, Run 1, a long term drift of the SPS orbit was observed, causing changes in the LHC injection due to the resulting changes in the TL trajectories. This translated into a longer LHC turnaround because of the necessity to periodically correct the TL trajectories in order to preserve the beam quality at injection into the LHC. Different sources for the SPS orbit drifts have been investigated: each of them can account only partially for the total orbit drift observed. In this paper, the possible sources of such drift are described, together with the simulated and measured effect they cause. Possible solutions and countermeasures are also discussed.
Verification and Improvement of Flamelet Approach for Non-Premixed Flames
NASA Technical Reports Server (NTRS)
Zaitsev, S.; Buriko, Yu.; Guskov, O.; Kopchenov, V.; Lubimov, D.; Tshepin, S.; Volkov, D.
1997-01-01
Mathematical modeling of high-speed turbulent combustion has received renewed attention in recent years. A review of the fundamentals and approaches, with an extensive bibliography, was presented by Bray, Libbi and Williams. In order to obtain accurate predictions for turbulent combustible flows, the effects of turbulent fluctuations on the chemical source terms should be taken into account. Averaging the chemical source terms requires a probability density function (PDF) model. Two main approaches currently dominate high-speed combustion modeling. In the first approach, the PDF form is assumed, based on the intuition of the modellers (see, for example, Spiegler et al.; Girimaji; Baurle et al.). The second approach is much more elaborate and is based on the solution of an evolution equation for the PDF. This approach was proposed by S. Pope for incompressible flames. Recently, it was modified for modeling of compressible flames in studies of Farschi; Hsu; Hsu, Raji, Norris; Eifer, Kollman. But its realization in CFD is extremely expensive computationally, due to the high dimensionality of the PDF evolution equation (Baurle, Hsu, Hassan).
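The need to average the nonlinear source term, rather than evaluate it at the mean state, is easy to demonstrate numerically: for an Arrhenius-type rate, the mean rate ⟨ω(T)⟩ can differ greatly from the rate at the mean temperature ω(⟨T⟩). A Monte Carlo sketch with hypothetical flame parameters:

```python
import numpy as np

def arrhenius(T, Ta=15000.0):
    """Arrhenius-type reaction rate with activation temperature Ta (K)."""
    return np.exp(-Ta / T)

rng = np.random.default_rng(1)
T_mean, T_rms = 1500.0, 150.0
T = rng.normal(T_mean, T_rms, 200_000)   # fluctuating temperature samples

rate_of_mean = arrhenius(T_mean)         # naive closure: ignore fluctuations
mean_of_rate = arrhenius(T).mean()       # PDF-averaged source term
ratio = mean_of_rate / rate_of_mean      # substantially above 1 here
```

Because the rate is strongly convex in T for large Ta/T, even 10% temperature fluctuations amplify the averaged rate well beyond the naive estimate, which is precisely the closure problem the PDF models address.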
NASA Astrophysics Data System (ADS)
Haworth, Daniel
2013-11-01
The importance of explicitly accounting for the effects of unresolved turbulent fluctuations in Reynolds-averaged and large-eddy simulations of chemically reacting turbulent flows is increasingly recognized. Transported probability density function (PDF) methods have emerged as one of the most promising modeling approaches for this purpose. In particular, PDF methods provide an elegant and effective resolution to the closure problems that arise from averaging or filtering terms that correspond to nonlinear point processes, including chemical reaction source terms and radiative emission. PDF methods traditionally have been associated with studies of turbulence-chemistry interactions in laboratory-scale, atmospheric-pressure, nonluminous, statistically stationary nonpremixed turbulent flames; and Lagrangian particle-based Monte Carlo numerical algorithms have been the predominant method for solving modeled PDF transport equations. Recent advances and trends in PDF methods are reviewed and discussed. These include advances in particle-based algorithms, alternatives to particle-based algorithms (e.g., Eulerian field methods), treatment of combustion regimes beyond low-to-moderate-Damköhler-number nonpremixed systems (e.g., premixed flamelets), extensions to include radiation heat transfer and multiphase systems (e.g., soot and fuel sprays), and the use of PDF methods as the basis for subfilter-scale modeling in large-eddy simulation. Examples are provided that illustrate the utility and effectiveness of PDF methods for physics discovery and for applications to practical combustion systems. These include comparisons of results obtained using the PDF method with those from models that neglect unresolved turbulent fluctuations in composition and temperature in the averaged or filtered chemical source terms and/or the radiation heat transfer source terms. In this way, the effects of turbulence-chemistry-radiation interactions can be isolated and quantified.
40 CFR 74.10 - Roles-EPA and permitting authority.
Code of Federal Regulations, 2010 CFR
2010-07-01
... allowance allocation, and allocating allowances for combustion or process sources that become affected units under this part; (2) Certifying or recertifying monitoring systems for combustion or process sources as... accounting for the replacement of thermal energy, closing accounts for opt-in sources that shut down, are...
Thorn, A S; Gathercole, S E
2001-06-01
Language differences in verbal short-term memory were investigated in two experiments. In Experiment 1, bilinguals with high competence in English and French and monolingual English adults with extremely limited knowledge of French were assessed on their serial recall of words and nonwords in both languages. In all cases recall accuracy was superior in the language with which individuals were most familiar, a first-language advantage that remained when variation due to differential rates of articulation in the two languages was taken into account. In Experiment 2, bilinguals recalled lists of English and French words with and without concurrent articulatory suppression. First-language superiority persisted under suppression, suggesting that the language differences in recall accuracy were not attributable to slower rates of subvocal rehearsal in the less familiar language. The findings indicate that language-specific differences in verbal short-term memory do not exclusively originate in the subvocal rehearsal process. It is suggested that one source of language-specific variation might relate to the use of long-term knowledge to support short-term memory performance.
Srivastava, D; Favez, O; Bonnaire, N; Lucarelli, F; Haeffelin, M; Perraudin, E; Gros, V; Villenave, E; Albinet, A
2018-09-01
The present study aimed at performing PM10 source apportionment, using positive matrix factorization (PMF), based on filter samples collected every 4 h at a sub-urban station in the Paris region (France) during a PM pollution event in March 2015 (PM10 > 50 μg m⁻³ for several consecutive days). The PMF model resolved 11 source factors. The use of specific primary and secondary organic molecular markers favoured the determination of common sources such as biomass burning and primary traffic emissions, as well as 2 specific biogenic SOA (marine + isoprene) and 3 anthropogenic SOA (nitro-PAHs + oxy-PAHs + phenolic compounds oxidation) factors. This study is probably the first to report the use of methylnitrocatechol isomers as well as 1-nitropyrene to apportion secondary OA linked to biomass burning emissions and primary traffic emissions, respectively. Secondary organic carbon (SOC) fractions were found to account for 47% of the total OC. The use of organic molecular markers allowed the identification of 41% of the total SOC, composed of anthropogenic SOA (namely, oxy-PAHs, nitro-PAHs and phenolic compounds oxidation, representing 15%, 9%, and 11% of the total OC, respectively) and biogenic SOA (marine + isoprene) (6% in total). Results obtained also showed that 35% of the total SOC originated from anthropogenic sources, and especially PAH SOA (oxy-PAHs + nitro-PAHs), accounting for 24% of the total SOC, highlighting its significant contribution in urban influenced environments. Anthropogenic SOA related to nitro-PAHs and phenolic compounds exhibited a clear diurnal pattern with high concentrations during the night, indicating the prominent role of night-time chemistry but with different chemical processes involved. Copyright © 2018 Elsevier B.V. All rights reserved.
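PMF is, at its core, a nonnegativity-constrained factorization of a samples × species matrix into factor contributions and factor profiles; the real tool additionally weights residuals by per-measurement uncertainty. An unweighted NMF sketch with multiplicative updates on synthetic data (illustrative only; not the study's PMF configuration):

```python
import numpy as np

def nmf(X, k, n_iter=500, eps=1e-9):
    """Unweighted NMF via Lee-Seung multiplicative updates: X ~ W @ H, W, H >= 0."""
    rng = np.random.default_rng(0)
    n, m = X.shape
    W = rng.random((n, k)) + eps          # factor contributions (samples x factors)
    H = rng.random((k, m)) + eps          # factor profiles (factors x species)
    for _ in range(n_iter):
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Synthetic "samples x species" matrix built from 2 nonnegative factors
rng = np.random.default_rng(2)
G_true = rng.random((30, 2))
F_true = rng.random((2, 8))
X = G_true @ F_true
W, H = nmf(X, 2)
err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
```

The multiplicative updates preserve nonnegativity by construction, which is what makes the recovered profiles interpretable as chemical source fingerprints.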
Quantification of the evolution of firm size distributions due to mergers and acquisitions.
Lera, Sandro Claudio; Sornette, Didier
2017-01-01
The distribution of firm sizes is known to be heavy-tailed. To account for this stylized fact, previous economic models have focused mainly on growth through investments in a company's own operations (internal growth). The impact of mergers and acquisitions (M&A) on firm size (external growth) is thereby often not taken into consideration, notwithstanding its potentially large impact. In this article, we take a first step towards accounting for M&A. Specifically, we describe the effect of mergers and acquisitions on the firm size distribution in terms of an integro-differential equation. This equation is then solved both analytically and numerically for various initial conditions, which allows us to account for different observations of previous empirical studies. In particular, it rationalises shortcomings of past work by showing quantitatively that mergers and acquisitions exert a significant influence on the firm size distribution only over time scales much longer than a few decades. This explains why M&A apparently has little impact on the firm size distributions in existing data sets. Our approach is very flexible and can be extended to account for other sources of external growth, thus contributing towards a holistic understanding of the distribution of firm sizes.
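The external-growth mechanism can be illustrated with a tiny Monte Carlo sketch. This is not the paper's integro-differential equation (which the abstract does not reproduce); it only shows the basic bookkeeping of merger dynamics: total firm size is conserved while the number of firms shrinks and the largest firm can only grow.

```python
import random

# Illustrative merger dynamics: firms merge pairwise at random.
rng = random.Random(42)
sizes = [rng.lognormvariate(0.0, 1.0) for _ in range(1000)]
total_before, max_before = sum(sizes), max(sizes)

def merge_step(sizes, rng):
    """Pick two distinct firms uniformly at random and merge them."""
    i, j = rng.sample(range(len(sizes)), 2)
    merged = sizes[i] + sizes[j]
    return [s for k, s in enumerate(sizes) if k not in (i, j)] + [merged]

for _ in range(300):  # 300 M&A events
    sizes = merge_step(sizes, rng)

print(len(sizes), sum(sizes), max(sizes))
```

After 300 mergers, 700 firms remain; total size is unchanged and the upper tail can only have thickened, consistent with M&A being a slow-acting driver of heavy tails.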
47 CFR 64.4005 - Unreasonable terms or conditions on the provision of customer account information.
Code of Federal Regulations, 2010 CFR
2010-10-01
... provision of customer account information. 64.4005 Section 64.4005 Telecommunication FEDERAL COMMUNICATIONS... Customer Account Record Exchange Requirements § 64.4005 Unreasonable terms or conditions on the provision of customer account information. To the extent that a carrier incurs costs associated with providing...
Basis for the power supply reliability study of the 1 MW neutron source
DOE Office of Scientific and Technical Information (OSTI.GOV)
McGhee, D.G.; Fathizadeh, M.
1993-07-01
The Intense Pulsed Neutron Source (IPNS) upgrade to 1 MW requires new power supply designs. This paper describes the tools and the methodology needed to assess the reliability of the power supplies. Both the design and operation of the power supplies in the synchrotron will be taken into account. To develop a reliability budget, the experiments to be conducted with this accelerator are reviewed, and data are collected on the number and duration of interruptions possible before an experiment must start over. Once the budget is established, several accelerators of this type will be examined. The budget is allocated to the different accelerator systems based on their operating experience. The accelerator data are usually expressed in terms of machine availability and system down time. They take into account mean time to failure (MTTF), time to diagnose, time to repair or replace the failed components, and time to get the machine back online. These estimated times are used as baselines for the design. Even though we are in the early stage of design, available data can be analyzed to estimate the MTTF for the power supplies.
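The standard relation used to turn MTTF and repair-time figures into an availability budget can be sketched as follows; the numbers are hypothetical, not from the IPNS study.

```python
# Steady-state availability from mean time to failure (MTTF) and
# mean time to repair (MTTR).
def availability(mttf_hours: float, mttr_hours: float) -> float:
    return mttf_hours / (mttf_hours + mttr_hours)

# A power supply failing on average every 2000 h and taking 4 h to restore:
a = availability(2000.0, 4.0)
downtime_per_year_h = (1.0 - a) * 8760.0
print(a, downtime_per_year_h)  # ~0.998 availability, ~17.5 h/year down
```

Inverting this relation for a required availability gives the minimum MTTF each subsystem must meet, which is how a machine-level budget is allocated to individual systems.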
26 CFR 1.446-1 - General rule for methods of accounting.
Code of Federal Regulations, 2013 CFR
2013-04-01
..., as a cost taken into account in computing cost of goods sold, as a cost allocable to a long-term...; section 460, relating to the long-term contract methods. In addition, special methods of accounting for... regulations under sections 471 and 472), a change from the cash or accrual method to a long-term contract...
37 CFR 384.4 - Terms for making payment of royalty fees and statements of account.
Code of Federal Regulations, 2010 CFR
2010-07-01
... royalty fees and statements of account. 384.4 Section 384.4 Patents, Trademarks, and Copyrights COPYRIGHT ROYALTY BOARD, LIBRARY OF CONGRESS RATES AND TERMS FOR STATUTORY LICENSES RATES AND TERMS FOR THE MAKING... royalty fees and statements of account. (a) Payment to Collective. A Licensee shall make the royalty...
Becker, H; Albera, L; Comon, P; Nunes, J-C; Gribonval, R; Fleureau, J; Guillotel, P; Merlet, I
2017-08-15
Over the past decades, a multitude of brain source imaging algorithms have been developed to identify the neural generators underlying surface electroencephalography measurements. While most of these techniques focus on determining the source positions, only a small number of recently developed algorithms provide an indication of the spatial extent of the distributed sources. In a recent comparison of brain source imaging approaches, the VB-SCCD algorithm was shown to be one of the most promising among these methods. However, this technique suffers from several problems: it leads to amplitude-biased source estimates, it has difficulty separating close sources, and it has a high computational complexity due to its implementation using second-order cone programming. To overcome these problems, we propose to include an additional regularization term that imposes sparsity in the original source domain and to solve the resulting optimization problem using the alternating direction method of multipliers. Furthermore, we show that the algorithm yields more robust solutions by taking into account the temporal structure of the data. We also propose a new method to automatically threshold the estimated source distribution, which permits delineation of the active brain regions. The new algorithm, called Source Imaging based on Structured Sparsity (SISSY), is analyzed by means of realistic computer simulations and is validated on the clinical data of four patients. Copyright © 2017 Elsevier Inc. All rights reserved.
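The combination of a sparsity-inducing penalty with the alternating direction method of multipliers (ADMM) can be illustrated on the generic l1-penalized least-squares problem. This is a simplified sketch under stated assumptions: it omits the structured/total-variation term and the EEG lead-field, and uses a small synthetic system, so it is not the SISSY implementation itself.

```python
import numpy as np

# ADMM for  min_x  0.5*||A x - b||^2 + lam*||x||_1  (generic lasso form).
rng = np.random.default_rng(1)
A = rng.normal(size=(20, 10))
x_true = np.zeros(10)
x_true[[2, 7]] = [3.0, -2.0]          # sparse ground truth
b = A @ x_true

lam, rho = 0.1, 1.0
x = np.zeros(10)
z = np.zeros(10)
u = np.zeros(10)
AtA, Atb = A.T @ A, A.T @ b
L = np.linalg.cholesky(AtA + rho * np.eye(10))  # factor once, reuse

def soft(v, t):
    """Soft-thresholding: proximal operator of t*||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

for _ in range(500):
    # x-update: ridge-regularised least squares via the cached Cholesky factor
    x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))
    z = soft(x + u, lam / rho)        # z-update enforces sparsity
    u = u + x - z                     # dual (running residual) update

print(np.round(z, 2))
```

On this noiseless toy problem the iterates recover the two active coefficients and drive the rest to (near) zero, which is the behaviour the sparsity term is added to obtain.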
Chandra Detection of Intracluster X-Ray sources in Virgo
NASA Astrophysics Data System (ADS)
Hou, Meicun; Li, Zhiyuan; Peng, Eric W.; Liu, Chengze
2017-09-01
We present a survey of X-ray point sources in the nearest and dynamically young galaxy cluster, Virgo, using archival Chandra observations that sample the vicinity of 80 early-type member galaxies. The X-ray source populations at the outskirts of these galaxies are of particular interest. We detect a total of 1046 point sources (excluding galactic nuclei) out to a projected galactocentric radius of ~40 kpc and down to a limiting 0.5-8 keV luminosity of ~2 x 10^38 erg s^-1. Based on the cumulative spatial and flux distributions of these sources, we statistically identify ~120 excess sources that are associated neither with the main stellar content of the individual galaxies nor with the cosmic X-ray background. This excess is significant at the 3.5σ level when Poisson error and cosmic variance are taken into account. On the other hand, no significant excess sources are found at the outskirts of a control sample of field galaxies, suggesting that at least some fraction of the excess sources around the Virgo galaxies are truly intracluster X-ray sources. Assisted by ground-based and HST optical imaging of Virgo, we discuss the origins of these intracluster X-ray sources in terms of supernova-kicked low-mass X-ray binaries (LMXBs), globular clusters, LMXBs associated with the diffuse intracluster light, stripped nucleated dwarf galaxies, and free-floating massive black holes.
18 CFR 367.4040 - Account 404, Amortization of limited-term property.
Code of Federal Regulations, 2010 CFR
2010-04-01
... COMPANY ACT OF 2005, FEDERAL POWER ACT AND NATURAL GAS ACT UNIFORM SYSTEM OF ACCOUNTS FOR CENTRALIZED... applicable to amounts included in the service company property accounts for limited-term franchises, licenses...
Maintenance of contamination sensitive surfaces on board long-term space vehicles
NASA Technical Reports Server (NTRS)
Phillips, A.; Maag, C.
1984-01-01
In the current age, highly sensitive instruments are being flown on spacecraft, and questions of contamination have become important. The present investigation is concerned with the available approaches which can provide long-term protection for contamination sensitive surfaces. Aspects and sources of spacecraft contamination are examined, taking into account materials outgassing, particulates, propulsion system interaction, overboard venting, man-made and cosmic debris, and atomic oxygen/ambient atmosphere interaction. Suitable protection approaches provided by current technology are discussed, giving attention to aperture covers, a possibility for a retractable cover design, gaseous purges, options for prolonging the lifetime of the thermal control system, and plume shields. Some new possibilities considered are related to an early warning system for excessive amounts of contamination, a molecular/wake shield, and the use of atomic oxygen.
18. Uniform cost accounting in long-term care.
Sorensen, J E
1976-05-01
Uniform cost data are essential for managing health services, establishing billing and reimbursement rates, and measuring effectiveness and impact. Although it is especially difficult in the case of long-term health care to develop standard cost accounting procedures because of the varied configurations of inpatient, intermediate, and ambulatory services, the overall approaches to cost accounting and its content can be made more uniform. With this purpose in mind, a general model of cost accounting is presented for a multilevel program of long-term services, together with a special method for ambulatory services using "hours accounted for" as the basic measure.
Nava, S; Lucarelli, F; Amato, F; Becagli, S; Calzolai, G; Chiari, M; Giannoni, M; Traversi, R; Udisti, R
2015-04-01
Biomass burning (BB) is a significant source of particulate matter (PM) in many parts of the world. Whereas numerous studies demonstrate the relevance of BB emissions in central and northern Europe, this source has been quantified in only a few cities in southern European countries. In this work, the application of Positive Matrix Factorisation (PMF) allowed a clear identification and quantification of an unexpectedly high biomass burning contribution in Tuscany (central Italy), at the most polluted site of the PATOS project. At this urban background site, BB accounted for 37% of the mass of PM10 (particulate matter with aerodynamic diameter < 10 μm) as an annual average, and for more than 50% during winter, making it the main cause of all the PM10 limit exceedances. Owing to the chemical complexity of BB emissions, an accurate assessment of this source contribution is not always achievable using just a single tracer. The present work takes advantage of the combination of a long-term daily data set, characterized by extended chemical speciation, with a short-term, high-time-resolution (1-hour), size-segregated data set obtained by PIXE analyses of streaker samples. The hourly time pattern of the BB source, characterised by periodic peaks starting at about 6 p.m. and lasting through the evening and night, together with its strong seasonality, with higher values in the winter period, clearly confirmed the hypothesis of a domestic heating source (and excluded important contributions from wildfires and agricultural waste burning). Copyright © 2014 Elsevier B.V. All rights reserved.
Berger, Robert
2008-10-21
The importance of the Breit interaction for an accurate prediction of parity violating energy differences between enantiomers is studied within electroweak quantum chemical frameworks. Besides two-electron orbit-orbit and spin-spin coupling contributions, the Breit interaction gives rise to the spin-other-orbit coupling term of the Breit-Pauli Hamiltonian. The present numerical study demonstrates that neglect of this latter term leads, in hydrogen peroxide (H(2)O(2)), to relative deviations of about 10% in the parity violating potential (V(pv)), whereas further relativistic corrections accounted for within a four-component Dirac-Hartree-Fock-Coulomb (DHFC) framework remain smaller, below 5%. Thus, the main source of discrepancy between previous one-component based (coupled perturbed) Hartree-Fock (HF) and four-component Dirac-Hartree-Fock results for parity violating potentials in H(2)O(2) is the neglect of the Breit contribution in DHFC. In heavier homologs of hydrogen peroxide, the relative contribution of the spin-other-orbit coupling term to V(pv) decreases with increasing nuclear charge, whereas other relativistic effects become increasingly important. As shown for the H(2)X(2) (X = O, S, Se, Te, Po) series of molecules and for CHBrClF, these other relativistic influences on V(pv) can, to a good approximation, be accounted for in one-component based HF calculations with the help of relativistic enhancement factors proposed earlier in the theory of atomic parity violation.
Hong Kong domestic health spending: financial years 1989/90 to 2005/06.
Tin, K Y K; Tsoi, P K O; Leung, E S K; Tsui, E L H; Lam, D W S; Tsang, C S H; Lo, S V
2010-02-01
This report presents the latest estimates of Hong Kong domestic health spending between fiscal years 1989/90 and 2005/06, cross-stratified and categorised by financing source, provider, and function on an annual basis. In fiscal year 2005/06, total health expenditure was HK$71 557 million. In real terms, it grew 6.5% per annum on average throughout the study period, whereas gross domestic product grew 4.1%, indicating a growing percentage of health spending relative to gross domestic product, from 3.5% in 1989/90 to 5.1% in 2005/06. This increase was largely funded by public spending, which rose 8.2% per annum on average in real terms, compared with 5.1% for private spending. This represents a growing share of public spending from 40.2% to 51.6% of total health expenditure during the period. Public spending was the dominant source of health financing in 2005/06, whereas private household out-of-pocket expenditure accounted for the second largest share (34.5%), followed by employer-provided group medical benefits (7.5%), privately purchased insurance (5.1%), and other private sources (1.3%). Of the HK$71 557 million total health expenditure in 2005/06, HK$68 810 million (96.2%) was on current expenditure and HK$2746 million (3.8%) on capital expenses (ie investment in medical facilities). Services of curative care accounted for the largest share (67.3%) and were made up of ambulatory services (35.7%), in-patient services (27.7%), day patient hospital services (3.4%), and home care (0.6%). The second largest share was spending on medical goods outside the patient care setting (10.8%). In terms of health care providers, hospitals (44.0%) accounted for the largest share of total health expenditure in 2005/06, followed by providers of ambulatory health care (31.4%). We observed a system-wide trend towards service consolidation at institutions (as opposed to free-standing ambulatory clinics, most of which are staffed by solo practitioners). 
Not taking capital expenses (ie investment in medical facilities) into account, public current expenditure on health amounted to HK$34 849 million (50.6% of total current expenditure) in 2005/06, most of which was incurred at hospitals (76.3%), whereas private current expenditure (HK$33 961 million) was mostly incurred at providers of ambulatory health care (55.8%). This reflects the mixed health care economy of Hong Kong, where public hospitals generally account for about 90% of total bed-days and private doctors (including western and Chinese medicine practitioners) provide about 70% of out-patient care. Although both public and private spending were mostly expended on personal health care services and goods (93.0%), the patterns of distribution among functional categories differed. Public expenditure was targeted at in-patient care (53.7%) and substantially less on out-patient care (24.6%), especially low-intensity first-contact care. In comparison, private spending was concentrated on out-patient care (49.9%), followed by medical goods outside the patient care setting (22.0%) and in-patient care (19.0%). Compared to countries of the Organisation for Economic Co-operation and Development, Hong Kong has devoted a relatively low percentage of gross domestic product on health services in the last decade. As a share of total spending, public funding (either general government revenue or social security funds) was also lower than in most comparably developed economies, although commensurate with its public revenue collection base.
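As a rough consistency check on the growth figures quoted above, compounding the stated annual real growth rates over the 16 years from 1989/90 to 2005/06 reproduces the reported shares to within rounding:

```python
# Back-of-envelope check: health spending grew 6.5% p.a. in real terms versus
# 4.1% p.a. for GDP, and public spending grew 8.2% p.a. versus 5.1% p.a.
# for private spending (all figures from the report above).
years = 16

# Share of GDP: 3.5% in 1989/90 scaled by the ratio of growth factors.
share_gdp = 3.5 * (1.065 / 1.041) ** years

# Public share of total health spending, starting from 40.2% vs 59.8%.
public = 40.2 * 1.082 ** years
private = 59.8 * 1.051 ** years
share_public = 100.0 * public / (public + private)

print(round(share_gdp, 1), round(share_public, 1))
```

The compounded values land at roughly 5% of GDP and roughly 52% public share, matching the reported 5.1% and 51.6% to within the rounding of the annualized rates.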
Green’s functions for a volume source in an elastic half-space
Zabolotskaya, Evgenia A.; Ilinskii, Yurii A.; Hay, Todd A.; Hamilton, Mark F.
2012-01-01
Green’s functions are derived for elastic waves generated by a volume source in a homogeneous isotropic half-space. The context is sources at shallow burial depths, for which surface (Rayleigh) and bulk waves, both longitudinal and transverse, can be generated with comparable magnitudes. Two approaches are followed. First, the Green’s function is expanded with respect to eigenmodes that correspond to Rayleigh waves. While bulk waves are thus ignored, this approximation is valid on the surface far from the source, where the Rayleigh wave modes dominate. The second approach employs an angular spectrum that accounts for the bulk waves and yields a solution that may be separated into two terms. One is associated with bulk waves, the other with Rayleigh waves. The latter is proved to be identical to the Green’s function obtained following the first approach. The Green’s function obtained via angular spectrum decomposition is analyzed numerically in the time domain for different burial depths and distances to the receiver, and for parameters relevant to seismo-acoustic detection of land mines and other buried objects. PMID:22423682
A source study of atmospheric polycyclic aromatic hydrocarbons in Shenzhen, South China.
Liu, Guoqing; Tong, Yongpeng; Luong, John H T; Zhang, Hong; Sun, Huibin
2010-04-01
Air pollution has become a serious problem in the Pearl River Delta, South China, particularly in winter owing to the local micrometeorology. In this study, atmospheric polycyclic aromatic hydrocarbons (PAHs) were monitored weekly in Shenzhen during the winter of 2006. Results indicated that the detected PAHs were mainly vapor-phase compounds, with phenanthrene dominant. The average vapor-phase and particle-phase PAH concentrations in Shenzhen were 101.3 and 26.7 ng m-3, respectively. Meteorological conditions had a strong effect on PAH concentrations. The higher PAH concentrations observed during haze episodes may result from the accumulation of pollutants under a lowered boundary layer, slower wind speeds, and long-term dry conditions. The sources of PAHs in the air were estimated by principal component analysis in combination with diagnostic ratios. Vehicle exhaust was the major PAH source in Shenzhen, accounting for 50.0% of total PAH emissions, whereas coal combustion and solid waste incineration contributed 29.4% and 20.6% of the total PAH concentration, respectively. The results clearly indicated that the growing number of solid waste incinerators has become an important new PAH source in this region.
Nimbus-7 Earth radiation budget calibration history. Part 2: The Earth flux channels
NASA Technical Reports Server (NTRS)
Kyle, H. Lee; Hucek, Douglas Richard R.; Ardanuy, Philip E.; Hickey, John R.; Maschhoff, Robert H.; Penn, Lanning M.; Groveman, Brian S.; Vallette, Brenda J.
1994-01-01
Nine years (November 1978 to October 1987) of Nimbus-7 Earth radiation budget (ERB) products have shown that the global annual mean emitted longwave, absorbed shortwave, and net radiation were constant to within about ±0.5 W/m^2. Further, most of the small annual variations in the emitted longwave have been shown to be real. To obtain this measurement accuracy, the wide-field-of-view (WFOV) Earth-viewing channels 12 (0.2 to over 50 micrometers), 13 (0.2 to 3.8 micrometers), and 14 (0.7 to 2.8 micrometers) were characterized in their satellite environment to account for signal variations not considered in the prelaunch calibration equations. Calibration adjustments were derived for (1) extraterrestrial radiation incident on the detectors, (2) long-term degradation of the sensors, and (3) thermal perturbations within the ERB instrument. The first item is important in all the channels; the second, mainly in channels 13 and 14; and the third, only in channels 13 and 14. The Sun is used as a stable calibration source to monitor the long-term degradation of the various channels. Channel 12, which is reasonably stable with respect to both thermal perturbations and sensor degradation, is used as a reference and calibration transfer agent for the drifting sensitivities of the filtered channels 13 and 14. Redundant calibration procedures were utilized, and laboratory studies complemented analyses of the satellite data. Two nearly independent models were derived to account for the thermal perturbations in channels 13 and 14. The global annual mean terrestrial shortwave and longwave signals proved stable enough to act as secondary calibration sources. Instantaneous measurements may still, at times, be in error by as much as a few W/m^2, but the long-term averages are stable to within a fraction of a W/m^2.
Refraction and Shielding of Noise in Non-Axisymmetric Jets
NASA Technical Reports Server (NTRS)
Khavaran, Abbas
1996-01-01
This paper examines the shielding effect of the mean flow and the refraction of sound in non-axisymmetric jets. A general three-dimensional ray-acoustic approach is applied. The methodology is independent of the exit geometry and may account for jet spreading and for transverse as well as streamwise flow gradients. We assume that noise is dominated by small-scale turbulence. The source correlation terms, as described by the acoustic analogy approach, are simplified, and a model is proposed that relates the source strength to the 7/2 power of the turbulence kinetic energy. Local characteristics of the source, such as its strength, time or length scale, convection velocity, and characteristic frequency, are inferred from mean flow considerations. The compressible Navier-Stokes equations are solved with a k-ε turbulence model. Numerical predictions are presented for a Mach 1.5, aspect ratio 2:1 elliptic jet. The predicted sound pressure level directivity shows favorable agreement with reported data, indicating a relative quiet zone on the side of the major axis of the elliptic jet.
Marsden, O; Bogey, C; Bailly, C
2014-03-01
The feasibility of using numerical simulation of the fluid dynamics equations for the detailed description of long-range infrasound propagation in the atmosphere is investigated. The two-dimensional (2D) Navier-Stokes equations are solved via high-fidelity spatial finite differences and Runge-Kutta time integration, coupled with a shock-capturing filter procedure that allows large amplitudes to be studied. The accuracy of acoustic prediction over long distances with this approach is first assessed in the linear regime using two test cases featuring an acoustic source placed above a reflective ground in a homogeneous and a weakly inhomogeneous medium, solved for a range of grid resolutions. An atmospheric model that can account for realistic features affecting acoustic propagation is then described. A 2D study of the effect of source amplitude on signals recorded at ground level at varying distances from the source is carried out. Modifications both in terms of waveforms and arrival times are described.
The effect of barriers on wave propagation phenomena: With application for aircraft noise shielding
NASA Technical Reports Server (NTRS)
Mgana, C. V. M.; Chang, I. D.
1982-01-01
The frequency spectrum was divided into high- and low-frequency regimes, and two separate methods were developed and applied to account for physical factors associated with flight conditions. For long-wave propagation, the acoustic field due to a point source near a solid obstacle was treated in terms of an inner region, where the fluid motion is essentially incompressible, and an outer region, which is a linear acoustic field generated by hydrodynamic disturbances in the inner region. This method was applied to the case of a finite slotted plate modelled to represent a wing with an extended flap, for both stationary and moving media. Ray acoustics, the Kirchhoff integral formulation, and the stationary phase approximation were combined to study short-wavelength propagation in many limiting cases, as well as in the case of a semi-infinite plate in a uniform flow with a point source above the plate embedded in a different flow velocity to simulate an engine exhaust jet stream surrounding the source.
NASA Astrophysics Data System (ADS)
Vasu, B.; Gorla, Rama Subba Reddy; Murthy, P. V. S. N.
2017-05-01
The Walters-B liquid model is employed to simulate medical creams and other rheological liquids encountered in biotechnology and chemical engineering. This rheological model introduces supplementary terms into the momentum conservation equation. The combined effects of thermal radiation and a heat sink/source on transient free-convective, laminar flow and mass transfer in a viscoelastic fluid past a vertical plate are presented, taking the thermophoresis effect into account. The transformed conservation equations are solved using a stable, robust finite difference method. A parametric study is conducted illustrating the influence of the viscoelasticity parameter (Γ), thermophoretic parameter (τ), thermal radiation parameter (F), heat sink/source (ϕ), Prandtl number (Pr), Schmidt number (Sc), thermal Grashof number (Gr), and solutal Grashof number (Gm) on the temperature and concentration profiles, as well as on the local skin friction, Nusselt number, and Sherwood number. The results of this parametric study are shown graphically and in tabular form. The study has applications in polymer materials processing.
Code of Federal Regulations, 2010 CFR
2010-10-01
... ACCOUNTING STANDARDS CONTRACT COVERAGE CAS Rules and Regulations 9903.302 Definitions, explanations, and... 48 Federal Acquisition Regulations System 7 2010-10-01 2010-10-01 false Definitions, explanations, and illustrations of the terms, "cost accounting practice" and "change to a cost accounting practice."...
26 CFR 1.451-5 - Advance payments for goods and long-term contracts.
Code of Federal Regulations, 2010 CFR
2010-04-01
... accounting for tax purposes if such method results in including advance payments in gross receipts no later... the case of a taxpayer accounting for advance payments for tax purposes pursuant to a long-term contract method of accounting under § 1.460-4, or of a taxpayer accounting for advance payments with...
The Utilization of Social Media in the Hearing Aid Community.
Choudhury, Moumita; Dinger, Zoë; Fichera, Elena
2017-03-01
This study investigated the utilization of social media by the hearing aid (HA) community. The purpose of this survey was to analyze the participation of the HA community in social media websites. A systematic survey of online HA-related social media sources was conducted; such sources were identified using appropriate search terms. Social media participation was quantified on the basis of posts and "likes." Five hundred fifty-seven social media sources were identified, including 174 Twitter accounts, 172 YouTube videos, 91 Facebook pages, 20 Facebook groups, 71 blogs, and 29 forums. Twitter and YouTube showed the highest levels of activity among social media users. The HA-related community used social media sources for advice and support, information sharing, and service-related information. HA users, other individuals, and organizations interested in HAs leave their digital footprint on a wide variety of social media sources. The community connects, offers support, and shares information on a variety of HA-related issues. The HA community is as active in social media utilization as other groups, such as the cochlear implant community, even though the patterns of their social media use differ because of their unique needs.
Fine particulate pollution in the Nanjing northern suburb during summer: composition and sources.
An, Junlin; Duan, Qing; Wang, Honglei; Miao, Qing; Shao, Ping; Wang, Jian; Zou, Jianan
2015-09-01
To understand the chemical composition of pollution in a northern suburb of Nanjing, particle samples were collected with two Andersen cascade impactors from May to July 2013. Positive matrix factorization version 3 (EPA-PMF 3.0) was applied to identify the source contributions to PM2.1 concentrations in the study area. Source categories were determined from the chemical component abundances in the source profiles. Overall, seven factors were identified: (I) secondary aerosol, characterized by high concentrations of NH4+, NO3-, and SO42-, accounting for 20.22%; (II) metallurgical aerosol, characterized by high concentrations of Pb, Cd, and Zn, accounting for 6.71%; (III) road dust, characterized by high concentrations of Mg, Ca, Na, Al, and Ba, accounting for 11.85%; (IV) biomass burning, characterized by high concentrations of K+, Na+, Cl-, and K, accounting for 10.17%; (V) residual oil, characterized by high concentrations of V and Cr, accounting for 16.63%; (VI) iron and steel industry, characterized by high concentrations of Mn and Fe, accounting for 9.48%; and (VII) vehicle exhaust, characterized by high concentrations of organic carbon (OC), Mo, elemental carbon (EC), and K, accounting for 24.94%.
Accounting Methodology for Source Energy of Non-Combustible Renewable Electricity Generation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Donohoo-Vallett, Paul
As non-combustible sources of renewable power (wind, solar, hydro, and geothermal) do not consume fuel, the "source" (or "primary") energy from these sources cannot be accounted for in the same manner as it is for fossil fuel sources. The methodology chosen for these technologies is important because it affects the perception of the relative size of renewable source energy to fossil energy, estimates of source-based building energy use, and overall source-energy-based metrics such as energy productivity. This memo reviews the methodological choices, outlines the implications of each choice, summarizes responses to a request for information on this topic, and presents guiding principles for the U.S. Department of Energy (DOE) Office of Energy Efficiency and Renewable Energy (EERE) to use in determining where modifying the current renewable source energy accounting method used in EERE products and analyses would be appropriate to address the issues raised above.
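The two accounting conventions at issue can be made concrete with a short sketch. The heat rate used here is an assumed, illustrative fleet-average value, not a figure from the memo.

```python
# Two conventions for the source (primary) energy of 1 kWh of
# non-combustible renewable electricity.
SITE_BTU_PER_KWH = 3412.0  # physical energy content of 1 kWh

def source_energy_btu(kwh: float, method: str,
                      heat_rate_btu_per_kwh: float = 9500.0) -> float:
    if method == "captured":            # count only the electricity itself
        return kwh * SITE_BTU_PER_KWH
    if method == "fossil_equivalence":  # count the thermal fuel a typical
        return kwh * heat_rate_btu_per_kwh  # fossil plant would have burned
    raise ValueError(f"unknown method: {method}")

ratio = (source_energy_btu(1.0, "fossil_equivalence")
         / source_energy_btu(1.0, "captured"))
print(ratio)
```

Under the assumed heat rate, the same renewable kWh is booked as roughly 2.8 times more source energy under fossil equivalence than under captured-energy accounting, which is precisely the perception issue the memo reviews.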
Coupling the Mixed Potential and Radiolysis Models for Used Fuel Degradation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Buck, Edgar C.; Jerden, James L.; Ebert, William L.
The primary purpose of this report is to describe the strategy for coupling three process level models to produce an integrated Used Fuel Degradation Model (FDM). The FDM, which is based on fundamental chemical and physical principals, provides direct calculation of radionuclide source terms for use in repository performance assessments. The G-value for H2O2 production (Gcond) to be used in the Mixed Potential Model (MPM) (H2O2 is the only radiolytic product presently included but others will be added as appropriate) needs to account for intermediate spur reactions. The effects of these intermediate reactions on [H2O2] are accounted for in themore » Radiolysis Model (RM). This report details methods for applying RM calculations that encompass the effects of these fast interactions on [H2O2] as the solution composition evolves during successive MPM iterations and then represent the steady-state [H2O2] in terms of an “effective instantaneous or conditional” generation value (Gcond). It is anticipated that the value of Gcond will change slowly as the reaction progresses through several iterations of the MPM as changes in the nature of fuel surface occur. The Gcond values will be calculated with the RM either after several iterations or when concentrations of key reactants reach threshold values determined from previous sensitivity runs. Sensitivity runs with RM indicate significant changes in G-value can occur over narrow composition ranges. The objective of the mixed potential model (MPM) is to calculate the used fuel degradation rates for a wide range of disposal environments to provide the source term radionuclide release rates for generic repository concepts. The fuel degradation rate is calculated for chemical and oxidative dissolution mechanisms using mixed potential theory to account for all relevant redox reactions at the fuel surface, including those involving oxidants produced by solution radiolysis and provided by the radiolysis model (RM). 
The RM calculates the concentration of species generated at any specific time and location from the surface of the fuel. Several options being considered for coupling the RM and MPM are described in the report. Different options have advantages and disadvantages based on the extent of coding that would be required and the ease of use of the final product.
Ensuring long-term stability of infrared camera absolute calibration.
Kattnig, Alain; Thetas, Sophie; Primot, Jérôme
2015-07-13
Absolute calibration of cryogenic 3-5 µm and 8-10 µm infrared cameras is notoriously unstable and thus has to be repeated before actual measurements. Moreover, the signal-to-noise ratio of the imagery is lowered, decreasing its quality. These performance degradations strongly limit the suitability of infrared imaging. These faults are often blamed on detectors reaching a different "response state" after each return to cryogenic conditions, even after accounting for the detrimental effects of imperfect stray-light management. We show here that detectors are not to blame and that the culprit can instead lie in the proximity electronics. We identify an unexpected source of instability in the initial voltage of the integrating capacitor of the detectors. We then show that this parameter can be easily measured and taken into account. In this way we demonstrate that a one-month-old calibration of a 3-5 µm camera has retained its validity.
Goal-oriented Site Characterization in Hydrogeological Applications: An Overview
NASA Astrophysics Data System (ADS)
Nowak, W.; de Barros, F.; Rubin, Y.
2011-12-01
In this study, we address the importance of goal-oriented site characterization. Given the multiple sources of uncertainty in hydrogeological applications, the information needs of modeling, prediction and decision support should be satisfied with efficient and rational field campaigns. In this work, we provide an overview of an optimal sampling design framework based on Bayesian decision theory, statistical parameter inference and Bayesian model averaging. It optimizes the field sampling campaign around decisions on environmental performance metrics (e.g., risk, arrival times, etc.) while accounting for parametric and model uncertainty in the geostatistical characterization, in forcing terms, and in measurement error. The appealing aspects of the framework lie in its goal-oriented character and its direct link to the confidence in a specified decision. We illustrate how these concepts could be applied in a human health risk problem where uncertainties from both hydrogeological and health parameters are accounted for.
Thermostability in rubredoxin and its relationship to mechanical rigidity
NASA Astrophysics Data System (ADS)
Rader, A. J.
2010-03-01
The source of increased stability in proteins from organisms that thrive in extreme thermal environments is not well understood. Previous experimental and theoretical studies have suggested many different features possibly responsible for such thermostability. Many of these thermostabilizing mechanisms can be accounted for in terms of structural rigidity. Thus a plausible hypothesis accounting for this remarkable stability in thermophilic enzymes states that these enzymes have enhanced conformational rigidity at temperatures below their native, functioning temperature. Experimental evidence exists to both support and contradict this supposition. We computationally investigate the relationship between thermostability and rigidity using rubredoxin as a case study. The mechanical rigidity is calculated using atomic models of homologous rubredoxin structures from the hyperthermophile Pyrococcus furiosus and mesophile Clostridium pasteurianum using the FIRST software. A global increase in structural rigidity (equivalently a decrease in flexibility) corresponds to an increase in thermostability. Locally, rigidity differences (between mesophilic and thermophilic structures) agree with differences in protection factors.
Examining health care spending trends over a decade: the Palestinian case.
Hamidi, S; Narcı, H Ö; Akinci, F; Nacakgedigi, O
2016-03-15
An analysis was made of recent health care spending patterns in the occupied Palestinian territory, in order to inform future health policy-making and planning. Data were obtained from the national health accounts for the period 2000-2011. The current level of resource allocation to the health care sector is higher than in many developed countries and is not sustainable. The private sector represents the largest source of health financing (61%) and the burden falls disproportionally on individual households, who account for 63% of private health care expenditure. Key recommendations include: building capacity in the government sector to reduce the outsourcing of health services; modifying inequitable financing mechanisms to reduce the burden on households; and allocating more resources for health promotion and disease prevention programmes. Reorientation of the health system is also needed in terms of reducing the share of spending on inpatient services in favour of more day surgery, outpatient and home-based services.
Addison, P F E; Flander, L B; Cook, C N
2015-02-01
Protected area management agencies are increasingly using management effectiveness evaluation (MEE) to better understand, learn from and improve conservation efforts around the globe. Outcome assessment is the final stage of MEE, where conservation outcomes are measured to determine whether management objectives are being achieved. When quantitative monitoring data are available, best-practice examples of outcome assessments demonstrate that data should be assessed against quantitative condition categories. Such assessments enable more transparent and repeatable integration of monitoring data into MEE, which can promote evidence-based management and improve public accountability and reporting. We interviewed key informants from marine protected area (MPA) management agencies to investigate how scientific data sources, especially long-term biological monitoring data, are currently informing conservation management. Our study revealed that even when long-term monitoring results are available, management agencies are not using them for quantitative condition assessment in MEE. Instead, many agencies conduct qualitative condition assessments, where monitoring results are interpreted using expert judgment only. Whilst we found substantial evidence for the use of long-term monitoring data in the evidence-based management of MPAs, MEE is rarely the sole mechanism that facilitates the knowledge transfer of scientific evidence to management action. This suggests that the first goal of MEE (to enable environmental accountability and reporting) is being achieved, but the second and arguably more important goal of facilitating evidence-based management is not. Given that many MEE approaches are in their infancy, recommendations are made to assist management agencies realize the full potential of long-term quantitative monitoring data for protected area evaluation and evidence-based management. Copyright © 2014 Elsevier Ltd. All rights reserved.
Active System for Electromagnetic Perturbation Monitoring in Vehicles
NASA Astrophysics Data System (ADS)
Matoi, Adrian Marian; Helerea, Elena
Nowadays the electromagnetic environment is rapidly expanding in the frequency domain, and wireless services are extending their coverage area. European electromagnetic compatibility regulations specify limit values for emissions, as well as procedures for determining the susceptibility of the vehicle. The approval procedure for a car series is based on determining the emission/immunity levels of a few vehicles picked at random from the series, on the assumption that the entire series is compliant. During immunity assessment, the vehicle is not subjected to real perturbation sources but exposed to electric/magnetic fields generated by laboratory equipment. Since the current approach only partially accounts for the real situation regarding perturbation sources, this paper proposes an active system for determining the electromagnetic parameters of a vehicle's environment that implements a logical measurement diagram satisfying the imposed requirements. This new and original solution is useful for the EMC assessment of hybrid and electric vehicles.
LCLS in—photon out: fluorescence measurement of neon using soft x-rays
Obaid, Razib; Buth, Christian; Dakovski, Georgi L.; ...
2018-01-09
Here, we measured the fluorescence photon yield of neon upon soft x-ray ionization (~1200 eV) from the x-ray free-electron laser at the Linac Coherent Light Source, and demonstrated the use of a grazing incidence spectrometer with a variable line spacing grating to perform x-ray fluorescence spectroscopy on a gas-phase system. Our measurements also allowed us to estimate the focal size of the beam from the theoretical description developed, in terms of the rate-equation approximation accounting for photoionization shake-off of neutral neon and double Auger decay of single core holes.
Extension of CE/SE method to non-equilibrium dissociating flows
NASA Astrophysics Data System (ADS)
Wen, C. Y.; Saldivar Massimi, H.; Shen, H.
2018-03-01
In this study, the hypersonic non-equilibrium flows over rounded nose geometries are numerically investigated by a robust conservation element and solution element (CE/SE) code, which is based on hybrid meshes consisting of triangular and quadrilateral elements. The dissociating and recombination chemical reactions as well as the vibrational energy relaxation are taken into account. The stiff source terms are solved by an implicit trapezoidal method of integration. Comparison with laboratory and numerical cases are provided to demonstrate the accuracy and reliability of the present CE/SE code in simulating hypersonic non-equilibrium flows.
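The implicit trapezoidal treatment of the stiff source terms mentioned above can be sketched on a scalar model problem. The following is a minimal illustration for a single stiff decay term dy/dt = λy; the rate function, its derivative, and the step size are illustrative stand-ins, not the actual chemical source terms of the CE/SE code:

```python
# Sketch: one implicit trapezoidal step for a stiff scalar ODE dy/dt = f(y).
# The nonlinear update y_{n+1} = y_n + dt/2 * (f(y_n) + f(y_{n+1})) is solved
# with Newton iteration. f and dfdy below are hypothetical stand-ins.

def trapezoidal_step(f, dfdy, y_n, dt, tol=1e-12, max_iter=50):
    """Advance y_n by one implicit trapezoidal step, solving with Newton."""
    y = y_n  # initial guess for y_{n+1}
    for _ in range(max_iter):
        g = y - y_n - 0.5 * dt * (f(y_n) + f(y))  # residual of the update
        dg = 1.0 - 0.5 * dt * dfdy(y)             # derivative of residual
        y_new = y - g / dg                        # Newton correction
        if abs(y_new - y) < tol:
            return y_new
        y = y_new
    return y

# Example: stiff linear decay dy/dt = -1000 y. The step size dt = 0.01 is far
# beyond the explicit-Euler stability limit 2/|lambda| = 0.002, yet the
# implicit trapezoidal solution remains bounded and decays in magnitude.
lam = -1000.0
f = lambda y: lam * y
dfdy = lambda y: lam
y = 1.0
dt = 0.01
for _ in range(10):
    y = trapezoidal_step(f, dfdy, y, dt)
print(y)
```

For this linear problem each step multiplies the solution by (1 + λΔt/2)/(1 − λΔt/2), which has magnitude below one for any Δt when λ < 0; this A-stability is what makes the method attractive for stiff source terms.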
LANDSAT 4 band 6 data evaluation
NASA Technical Reports Server (NTRS)
1983-01-01
Satellite data collected over Lake Ontario were processed into surface temperature values. This involved computing apparent radiance values, from averaged digital count values, for each point where surface temperatures were known. These radiance values were then converted using the LOWTRAN 5A atmospheric propagation model, which was modified by incorporating a spectral response function for the LANDSAT band 6 sensors. A downwelled radiance term derived from LOWTRAN was included to account for reflected sky radiance, and a blackbody equivalent source radiance was computed. Measured temperatures were plotted against the predicted temperatures. The RMS error between the data sets is 0.51 K.
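The final validation step above, comparing measured surface temperatures against temperatures predicted from the sensor radiances, reduces to a root-mean-square error computation. A minimal sketch with placeholder values (not the Lake Ontario data) follows:

```python
import numpy as np

# Sketch of the RMS-error comparison between measured and predicted surface
# temperatures. The five temperature pairs are illustrative placeholders.
measured_K = np.array([285.2, 286.1, 284.8, 285.9, 286.4])
predicted_K = np.array([285.7, 286.5, 284.2, 286.3, 287.1])

# Root-mean-square of the pointwise prediction errors, in kelvin
rms_error = np.sqrt(np.mean((predicted_K - measured_K) ** 2))
print(f"RMS error: {rms_error:.2f} K")  # prints: RMS error: 0.53 K
```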
LCLS in—photon out: fluorescence measurement of neon using soft x-rays
NASA Astrophysics Data System (ADS)
Obaid, Razib; Buth, Christian; Dakovski, Georgi L.; Beerwerth, Randolf; Holmes, Michael; Aldrich, Jeff; Lin, Ming-Fu; Minitti, Michael; Osipov, Timur; Schlotter, William; Cederbaum, Lorenz S.; Fritzsche, Stephan; Berrah, Nora
2018-02-01
We measured the fluorescence photon yield of neon upon soft x-ray ionization (∼1200 eV) from the x-ray free-electron laser at the Linac Coherent Light Source, and demonstrated the use of a grazing incidence spectrometer with a variable line spacing grating to perform x-ray fluorescence spectroscopy on a gas-phase system. Our measurements also allowed us to estimate the focal size of the beam from the theoretical description developed, in terms of the rate-equation approximation accounting for photoionization shake-off of neutral neon and double Auger decay of single core holes.
Disentangling the effects of CO2 and short-lived climate forcer mitigation.
Rogelj, Joeri; Schaeffer, Michiel; Meinshausen, Malte; Shindell, Drew T; Hare, William; Klimont, Zbigniew; Velders, Guus J M; Amann, Markus; Schellnhuber, Hans Joachim
2014-11-18
Anthropogenic global warming is driven by emissions of a wide variety of radiative forcers ranging from very short-lived climate forcers (SLCFs), like black carbon, to very long-lived, like CO2. These species are often released from common sources and are therefore intricately linked. However, for reasons of simplification, this CO2-SLCF linkage was often disregarded in long-term projections of earlier studies. Here we explicitly account for CO2-SLCF linkages and show that the short- and long-term climate effects of many SLCF measures consistently become smaller in scenarios that keep warming to below 2 °C relative to preindustrial levels. Although long-term mitigation of methane and hydrofluorocarbons are integral parts of 2 °C scenarios, early action on these species mainly influences near-term temperatures and brings small benefits for limiting maximum warming relative to comparable reductions taking place later. Furthermore, we find that maximum 21st-century warming in 2 °C-consistent scenarios is largely unaffected by additional black-carbon-related measures because key emission sources are already phased-out through CO2 mitigation. Our study demonstrates the importance of coherently considering CO2-SLCF coevolutions. Failing to do so leads to strongly and consistently overestimating the effect of SLCF measures in climate stabilization scenarios. Our results reinforce that SLCF measures are to be considered complementary rather than a substitute for early and stringent CO2 mitigation. Near-term SLCF measures do not allow for more time for CO2 mitigation. We disentangle and resolve the distinct benefits across different species and therewith facilitate an integrated strategy for mitigating both short and long-term climate change.
Disentangling the effects of CO2 and short-lived climate forcer mitigation
Rogelj, Joeri; Schaeffer, Michiel; Meinshausen, Malte; Shindell, Drew T.; Hare, William; Klimont, Zbigniew; Amann, Markus; Schellnhuber, Hans Joachim
2014-01-01
Anthropogenic global warming is driven by emissions of a wide variety of radiative forcers ranging from very short-lived climate forcers (SLCFs), like black carbon, to very long-lived, like CO2. These species are often released from common sources and are therefore intricately linked. However, for reasons of simplification, this CO2–SLCF linkage was often disregarded in long-term projections of earlier studies. Here we explicitly account for CO2–SLCF linkages and show that the short- and long-term climate effects of many SLCF measures consistently become smaller in scenarios that keep warming to below 2 °C relative to preindustrial levels. Although long-term mitigation of methane and hydrofluorocarbons are integral parts of 2 °C scenarios, early action on these species mainly influences near-term temperatures and brings small benefits for limiting maximum warming relative to comparable reductions taking place later. Furthermore, we find that maximum 21st-century warming in 2 °C-consistent scenarios is largely unaffected by additional black-carbon-related measures because key emission sources are already phased-out through CO2 mitigation. Our study demonstrates the importance of coherently considering CO2–SLCF coevolutions. Failing to do so leads to strongly and consistently overestimating the effect of SLCF measures in climate stabilization scenarios. Our results reinforce that SLCF measures are to be considered complementary rather than a substitute for early and stringent CO2 mitigation. Near-term SLCF measures do not allow for more time for CO2 mitigation. We disentangle and resolve the distinct benefits across different species and therewith facilitate an integrated strategy for mitigating both short and long-term climate change. PMID:25368182
Sensitivity Analysis Tailored to Constrain 21st Century Terrestrial Carbon-Uptake
NASA Astrophysics Data System (ADS)
Muller, S. J.; Gerber, S.
2013-12-01
The long-term fate of terrestrial carbon (C) in response to climate change remains a dominant source of uncertainty in Earth-system model projections. Increasing atmospheric CO2 could be mitigated by long-term net uptake of C, through processes such as increased plant productivity due to "CO2-fertilization". Conversely, atmospheric conditions could be exacerbated by long-term net release of C, through processes such as increased decomposition due to higher temperatures. This balance is an important area of study, and a major source of uncertainty in long-term (>year 2050) projections of planetary response to climate change. We present results from an innovative application of sensitivity analysis to LM3V, a dynamic global vegetation model (DGVM), intended to identify observed/observable variables that are useful for constraining long-term projections of C-uptake. We analyzed the sensitivity of cumulative C-uptake by 2100, as modeled by LM3V in response to IPCC AR4 scenario climate data (1860-2100), to perturbations in over 50 model parameters. We concurrently analyzed the sensitivity of over 100 observable model variables, during the extant record period (1970-2010), to the same parameter changes. By correlating the sensitivities of observable variables with the sensitivity of long-term C-uptake we identified model calibration variables that would also constrain long-term C-uptake projections. LM3V employs a coupled carbon-nitrogen cycle to account for N-limitation, and we find that N-related variables have an important role to play in constraining long-term C-uptake. This work has implications for prioritizing field campaigns to collect global data that can help reduce uncertainties in the long-term land-atmosphere C-balance. 
Though the results of this study are specific to LM3V, the processes that characterize this model are not completely divorced from other DGVMs (or reality), and our approach provides valuable insights into how data can be leveraged to better constrain projections for the land carbon sink.
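The correlation step described above, relating the sensitivity of an observable variable to the sensitivity of long-term C-uptake across parameter perturbations, can be sketched as follows. The sensitivity vectors here are synthetic stand-ins, not LM3V output; the "informative" observable is constructed to track the uptake sensitivity, the other is independent:

```python
import numpy as np

# Sketch: correlate per-parameter sensitivities of observable variables with
# the per-parameter sensitivity of long-term C-uptake. All vectors synthetic.
rng = np.random.default_rng(0)
n_params = 50

# Sensitivity of cumulative 2100 C-uptake to each parameter perturbation
s_uptake = rng.normal(size=n_params)

# Two hypothetical observables: one whose sensitivity tracks uptake (plus
# noise), one whose sensitivity is unrelated to uptake
s_obs_informative = s_uptake + 0.3 * rng.normal(size=n_params)
s_obs_uninformative = rng.normal(size=n_params)

# Observables whose sensitivities correlate with the uptake sensitivity are
# useful calibration targets for constraining the long-term projection
r_info = np.corrcoef(s_obs_informative, s_uptake)[0, 1]
r_unin = np.corrcoef(s_obs_uninformative, s_uptake)[0, 1]
print(r_info, r_unin)
```

High correlation flags an observable whose calibration would also pin down the long-term projection, which is the selection criterion the abstract describes.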
Social media utilization in the cochlear implant community.
Saxena, Rajeev C; Lehmann, Ashton E; Hight, A Ed; Darrow, Keith; Remenschneider, Aaron; Kozin, Elliott D; Lee, Daniel J
2015-02-01
More than 200,000 individuals worldwide have received a cochlear implant (CI). Social media Websites may provide a paramedical community for those who possess or are interested in a CI. The utilization patterns of social media by the CI community, however, have not been thoroughly investigated. The purpose of this study was to investigate participation of the CI community in social media Websites. We conducted a systematic survey of online CI-related social media sources. Using standard search engines, the search terms cochlear implant, auditory implant, forum, and blog identified relevant social media platforms and Websites. Social media participation was quantified by indices of membership and posts. Social media sources included Facebook, Twitter, YouTube, blogs, and online forums. Each source was assigned one of six functional categories based on its description. No intervention was performed. We conducted all online searches in February 2014. Total counts of each CI-related social media source were summed, and descriptive statistics were calculated. More than 350 sources were identified, including 60 Facebook groups, 36 Facebook pages, 48 Twitter accounts, 121 YouTube videos, 13 forums, and 95 blogs. The most active online communities were Twitter accounts, which totaled 35,577 members, and Facebook groups, which totaled 17,971 members. CI users participated in Facebook groups primarily for general information/support (68%). Online forums were the next most active online communities by membership. The largest forum contained approximately 9,500 topics with roughly 127,000 posts. CI users primarily shared personal stories through blogs (92%), Twitter (71%), and YouTube (62%). The CI community engages in the use of a wide range of online social media sources. The CI community uses social media for support, advocacy, rehabilitation information, research endeavors, and sharing of personal experiences. 
Future studies are needed to investigate how social media Websites may be harnessed to improve patient-provider relationships and potentially used to augment patient education. American Academy of Audiology.
Social Media Utilization in the Cochlear Implant Community
Saxena, Rajeev C.; Lehmann, Ashton E.; Hight, A. Ed; Darrow, Keith; Remenschneider, Aaron; Kozin, Elliott D.; Lee, Daniel J.
2015-01-01
Background More than 200,000 individuals worldwide have received a cochlear implant (CI). Social media Websites may provide a paramedical community for those who possess or are interested in a CI. The utilization patterns of social media by the CI community, however, have not been thoroughly investigated. Purpose The purpose of this study was to investigate participation of the CI community in social media Websites. Research Design We conducted a systematic survey of online CI-related social media sources. Using standard search engines, the search terms cochlear implant, auditory implant, forum, and blog identified relevant social media platforms and Websites. Social media participation was quantified by indices of membership and posts. Study Sample Social media sources included Facebook, Twitter, YouTube, blogs, and online forums. Each source was assigned one of six functional categories based on its description. Intervention No intervention was performed. Data Collection and Analysis We conducted all online searches in February 2014. Total counts of each CI-related social media source were summed, and descriptive statistics were calculated. Results More than 350 sources were identified, including 60 Facebook groups, 36 Facebook pages, 48 Twitter accounts, 121 YouTube videos, 13 forums, and 95 blogs. The most active online communities were Twitter accounts, which totaled 35,577 members, and Facebook groups, which totaled 17,971 members. CI users participated in Facebook groups primarily for general information/support (68%). Online forums were the next most active online communities by membership. The largest forum contained approximately 9,500 topics with roughly 127,000 posts. CI users primarily shared personal stories through blogs (92%), Twitter (71%), and YouTube (62%). Conclusions The CI community engages in the use of a wide range of online social media sources. 
The CI community uses social media for support, advocacy, rehabilitation information, research endeavors, and sharing of personal experiences. Future studies are needed to investigate how social media Websites may be harnessed to improve patient-provider relationships and potentially used to augment patient education. PMID:25690778
NASA Astrophysics Data System (ADS)
Diapouli, E.; Manousakas, M.; Vratolis, S.; Vasilatou, V.; Maggos, Th; Saraga, D.; Grigoratos, Th; Argyropoulos, G.; Voutsa, D.; Samara, C.; Eleftheriadis, K.
2017-09-01
Metropolitan urban areas in Greece are known to suffer from poor air quality, due to a variety of emission sources, topography and climatic conditions favouring the accumulation of pollution. While a number of control measures have been implemented since the 1990s, resulting in reductions of atmospheric pollution and changes in emission source contributions, the financial crisis that started in 2009 has significantly altered this picture. The present study is the first effort to assess the contribution of emission sources to PM10 and PM2.5 concentration levels and their long-term variability (over 5-10 years) in the two largest metropolitan areas in Greece (Athens and Thessaloniki). Intensive measurement campaigns were conducted during 2011-2012 at suburban, urban background and urban traffic sites in these two cities. In addition, available datasets from previous measurements in Athens and Thessaloniki were used in order to assess the long-term variability of concentrations and sources. Chemical composition analysis of the 2011-2012 samples showed that carbonaceous matter was the most abundant component for both PM size fractions. A significant increase of carbonaceous particle concentrations and of the OC/EC ratio during the cold period, especially at the residential urban background sites, pointed towards domestic heating, and more particularly wood (biomass) burning, as a significant source. PMF analysis further supported this finding. Biomass burning was the largest contributing source at the two urban background sites (with mean contributions for the two size fractions in the range of 24-46%). Secondary aerosol formation (sulphate, nitrate & organics) was also a major contributing source for both size fractions at the suburban and urban background sites. At the urban traffic site, vehicular traffic (exhaust and non-exhaust emissions) was the source with the highest contributions, accounting for 44% of PM10 and 37% of PM2.5.
The long-term variability of emission sources in the two cities (over 5-10 years), assessed through a harmonized application of the PMF technique on recent and past year data, clearly demonstrates the effective reduction in emissions during the last decade due to control measures and technological development; however, it also reflects the effects of the financial crisis in Greece during these years, which has led to decreased economic activities and the adoption of more polluting practices by the local population in an effort to reduce living costs.
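The PMF technique applied in the study is, at its core, a nonnegative factorization of the samples-by-species concentration matrix into source contributions and source profiles. A minimal unweighted sketch using Lee-Seung multiplicative updates (not the actual error-weighted PMF algorithm, and with synthetic data) illustrates the decomposition:

```python
import numpy as np

# Sketch: factor a (samples x species) concentration matrix X into
# nonnegative source contributions G and source profiles F, X ≈ G F.
# Data and dimensions are synthetic stand-ins for the PM datasets.
rng = np.random.default_rng(2)
n_samples, n_species, n_sources = 60, 12, 3

G_true = rng.uniform(size=(n_samples, n_sources))  # contributions per sample
F_true = rng.uniform(size=(n_sources, n_species))  # chemical profile per source
X = G_true @ F_true                                # noiseless synthetic data

# Lee-Seung multiplicative updates preserve nonnegativity by construction
G = rng.uniform(size=(n_samples, n_sources))
F = rng.uniform(size=(n_sources, n_species))
eps = 1e-9  # guards against division by zero
for _ in range(500):
    G *= (X @ F.T) / (G @ F @ F.T + eps)
    F *= (G.T @ X) / (G.T @ G @ F + eps)

rel_err = np.linalg.norm(X - G @ F) / np.linalg.norm(X)
print(rel_err)
```

Real PMF additionally weights the fit by per-measurement uncertainties and imposes further constraints, but the nonnegative low-rank structure recovered here is the same object interpreted as "sources" in the abstract.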
A Critique on the Concept of Social Accountability in Higher Education
ERIC Educational Resources Information Center
Pedroza Flores, René; Villalobos Monroy, Guadalupe; Reyes Fabela, Ana María
2015-01-01
This paper attempts to present a critique of the concept and meaning of the term social accountability at university level from a critical point of view. The main objective is to analyze and re-build the term accountability in order to contextualize it for public universities. First, we present the importance of the idea of accountability in…
Fu, Linda Y.; Zook, Kathleen; Spoehr-Labutta, Zachary; Hu, Pamela; Joseph, Jill G.
2015-01-01
Purpose Online information can influence attitudes toward vaccination. The aim of the present study is to provide a systematic evaluation of the search engine ranking, quality, and content of webpages that are critical versus noncritical of HPV vaccination. Methods We identified HPV vaccine-related webpages with the Google search engine by entering 20 terms. We then assessed each webpage for critical versus noncritical bias as well as for the following quality indicators: authorship disclosure, source disclosure, attribution of at least one reference, currency, exclusion of testimonial accounts, and readability level less than 9th grade. We also determined webpage comprehensiveness in terms of mention of 14 HPV vaccine relevant topics. Results Twenty searches yielded 116 unique webpages. HPV vaccine-critical webpages comprised roughly a third of the top, top 5 and top 10-ranking webpages. The prevalence of HPV vaccine-critical webpages was higher for queries that included term modifiers in addition to root terms. Compared with noncritical webpages, webpages critical of HPV vaccine had a lower overall quality score (p<.01) and covered fewer important HPV-related topics (p<.001). Critical webpages required viewers to have higher reading skills, were less likely to include an author byline, and were more likely to include testimonial accounts. They also were more likely to raise unsubstantiated concerns about vaccination. Conclusion Webpages critical of HPV vaccine may be frequently returned and highly ranked by search engine queries despite being of lower quality and less comprehensive than noncritical webpages. PMID:26559742
NASA Astrophysics Data System (ADS)
Winiarek, Victor; Bocquet, Marc; Duhanyan, Nora; Roustan, Yelva; Saunier, Olivier; Mathieu, Anne
2013-04-01
A major difficulty when inverting the source term of an atmospheric tracer dispersion problem is the estimation of the prior errors: those of the atmospheric transport model, those ascribed to the representativeness of the measurements, the instrumental errors, and those attached to the prior knowledge on the variables one seeks to retrieve. In the case of an accidental release of pollutant, and especially in a situation of sparse observability, the reconstructed source is sensitive to these assumptions. This sensitivity makes the quality of the retrieval dependent on the methods used to model and estimate the prior errors of the inverse modeling scheme. In Winiarek et al. (2012), we proposed to use an estimation method for the errors' amplitude based on the maximum likelihood principle. Under semi-Gaussian assumptions, it takes into account, without approximation, the positivity assumption on the source. We applied the method to the estimation of the Fukushima Daiichi cesium-137 and iodine-131 source terms using activity concentrations in the air. The results were compared to an L-curve estimation technique and to Desroziers's scheme. In addition to the estimates of released activities, we provided the related uncertainties (12 PBq with a std. of 15-20% for cesium-137 and 190-380 PBq with a std. of 5-10% for iodine-131). We also showed that, because of the low number of available observations (a few hundred), and even though orders of magnitude were consistent, the reconstructed activities significantly depended on the method used to estimate the prior errors. In order to use more data, we propose to extend the methods to the use of several data types, such as activity concentrations in the air and fallout measurements. The idea is to simultaneously estimate the prior errors related to each dataset, in order to fully exploit the information content of each one.
Using the activity concentration measurements, but also daily fallout data from prefectures and cumulated deposition data over a region lying approximately 150 km around the nuclear power plant, we can use a few thousand data points in our inverse modeling algorithm to reconstruct the cesium-137 source term. To improve the parameterization of removal processes, rainfall fields have also been corrected using outputs from the mesoscale meteorological model WRF and ground station rainfall data. As expected, the different methods yield closer results as the number of data increases. Reference: Winiarek, V., M. Bocquet, O. Saunier, A. Mathieu (2012), Estimation of errors in the inverse modeling of accidental release of atmospheric pollutant: Application to the reconstruction of the cesium-137 and iodine-131 source terms from the Fukushima Daiichi power plant, J. Geophys. Res., 117, D05122, doi:10.1029/2011JD016932.
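The maximum-likelihood estimation of prior error amplitudes described above can be illustrated on a linear-Gaussian toy problem: with observations y = Hx + ε, prior x ~ N(0, b²I) and noise ε ~ N(0, r²I), the marginal likelihood of y is Gaussian with covariance b²HHᵀ + r²I, and the error amplitude r is chosen to maximize it. The dimensions, the operator H, and the amplitudes below are arbitrary stand-ins, not the Fukushima configuration (which additionally enforces source positivity):

```python
import numpy as np

# Sketch: maximum-likelihood estimation of the observation-error amplitude r
# in a linear-Gaussian inverse problem y = H x + noise. Synthetic setup.
rng = np.random.default_rng(1)
n_obs, n_src = 200, 20
H = rng.normal(size=(n_obs, n_src))      # stand-in observation operator
b_true, r_true = 1.0, 0.5                # prior and noise amplitudes
x = b_true * rng.normal(size=n_src)      # true (unknown) source vector
y = H @ x + r_true * rng.normal(size=n_obs)

def neg_log_marginal(r, b=1.0):
    """-log p(y | r), Gaussian with covariance b^2 H H^T + r^2 I."""
    cov = b**2 * (H @ H.T) + r**2 * np.eye(n_obs)
    _, logdet = np.linalg.slogdet(cov)
    return 0.5 * (logdet + y @ np.linalg.solve(cov, y))

# Grid search over candidate error amplitudes; the minimizer is the ML estimate
r_grid = np.linspace(0.1, 2.0, 39)
r_hat = r_grid[np.argmin([neg_log_marginal(r) for r in r_grid])]
print(r_hat)  # close to the true amplitude 0.5
```

The same marginal-likelihood objective extends to several datasets by giving each its own amplitude and maximizing jointly, which is the multi-dataset extension the abstract proposes.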
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perrin, Tess E.; Davis, Robert G.; Wilkerson, Andrea M.
This GATEWAY project evaluated four field installations to better understand the long-term performance of a number of LED products, which can hopefully stimulate improvements in designing, manufacturing, specifying, procuring, and installing LED products. Field studies provide the opportunity to discover and investigate issues that cannot be simulated or uncovered in a laboratory, but the installed performance over time of commercially available LED products has not been well documented. Improving long-term performance can provide both direct energy savings by reducing the need to over-light to account for light loss and indirect energy savings through better market penetration due to SSL’s competitive advantages over less-efficient light source technologies. The projects evaluated for this report illustrate that SSL use is often motivated by advantages other than energy savings, including maintenance savings, easier integration with control systems, and improved lighting quality.
PHYSICS OF OUR DAYS: Dark energy and universal antigravitation
NASA Astrophysics Data System (ADS)
Chernin, A. D.
2008-03-01
Universal antigravitation, a new physical phenomenon discovered astronomically at distances of 5 to 8 billion light years, manifests itself as cosmic repulsion that acts between distant galaxies and overcomes their gravitational attraction, resulting in the accelerating expansion of the Universe. The source of the antigravitation is not galaxies or any other bodies of nature but a previously unknown form of mass/energy that has been termed dark energy. Dark energy accounts for 70 to 80% of the total mass and energy of the Universe and, in macroscopic terms, is a kind of continuous medium that fills the entire space of the Universe and is characterized by positive density and negative pressure. With its physical nature and microscopic structure unknown, dark energy is among the most critical challenges fundamental science faces in the twenty-first century.
Toward applied behavior analysis of life aloft
NASA Technical Reports Server (NTRS)
Brady, J. V.
1990-01-01
This article deals with systems at multiple levels, at least from cell to organization. It also deals with learning, decision making, and other behavior at multiple levels. Technological development of a human behavioral ecosystem appropriate to space environments requires an analytic and synthetic orientation that is explicitly experimental in nature, dictated by scientific and pragmatic considerations, and closely approximating procedures of established effectiveness in other areas of natural science. The conceptual basis of such an approach has its roots in environmentalism, which has two main features: (1) knowledge comes from experience rather than from innate ideas, divine revelation, or other obscure sources; and (2) action is governed by consequences rather than by instinct, reason, will, beliefs, attitudes or even the currently fashionable cognitions. Without an experimentally derived data base founded upon such a functional analysis of human behavior, the overgenerality of "ecological systems" approaches renders them incapable of ensuring the successful establishment of enduring space habitats. Without an experimentally derived functional account of individual behavioral variability, a natural science of behavior cannot exist. And without a natural science of behavior, the social sciences will necessarily remain in their current status as disciplines of less than optimal precision or utility. Such a functional analysis of human performance should provide an operational account of behavior change in a manner similar to the way in which Darwin's approach to natural selection accounted for the evolution of phylogenetic lines (i.e., in descriptive, nonteleological terms). Similarly, as Darwin's account has subsequently been shown to be consonant with information obtained at the cellular level, so too should behavior principles ultimately prove to be in accord with an account of ontogenetic adaptation at a biochemical level.
It would thus seem obvious that the most productive conceptual and methodological approaches to long-term research investments focused upon human behavior in space environments will require multidisciplinary inputs from such wide-ranging fields as molecular biology, environmental physiology, behavioral biology, architecture, sociology, and political science, among others.
Neubauer, Georg; Feychting, Maria; Hamnerius, Yngve; Kheifets, Leeka; Kuster, Niels; Ruiz, Ignacio; Schüz, Joachim; Uberbacher, Richard; Wiart, Joe; Röösli, Martin
2007-04-01
The increasing deployment of mobile communication base stations led to an increasing demand for epidemiological studies on possible health effects of radio frequency emissions. The methodological challenges of such studies have been critically evaluated by a panel of scientists in the fields of radiofrequency engineering/dosimetry and epidemiology. Strengths and weaknesses of previous studies have been identified. Dosimetric concepts and crucial aspects in exposure assessment were evaluated in terms of epidemiological studies on different types of outcomes. We conclude that in principle base station epidemiological studies are feasible. However, the exposure contributions from all relevant radio frequency sources have to be taken into account. The applied exposure assessment method should be piloted and validated. Short to medium term effects on physiology or health related quality of life are best investigated by cohort studies. For long term effects, groups with a potential for high exposure need to first be identified; for immediate effect, human laboratory studies are the preferred approach. (c) 2006 Wiley-Liss, Inc.
NASA Astrophysics Data System (ADS)
Davies, G. F.
2009-12-01
Dynamical and chemical interpretations of the mantle have hitherto remained incompatible, despite substantial progress over recent years. It is argued that both the refractory incompatible elements and the noble gases can be reconciled with the dynamical mantle when mantle heterogeneity is more fully accounted for. It is argued that the incompatible-element content of the MORB source is about double recent estimates (U ~10 ng/g) because enriched components have been systematically overlooked, for three main reasons. (1) In a heterogeneous MORB source, melts from enriched pods are not expected to equilibrate fully with the peridotite matrix, but recent estimates of MORB-source composition have been tied to residual (relatively infertile) peridotite composition. (2) About 25% of the MORB source comes from plumes, but plume-like components have tended to be excluded. (3) A focus on the most common “normal” MORBs, allegedly representing a “depleted” MORB source, has overlooked the less-common but significant enriched components of MORBs, of various possible origins. Geophysical constraints (seismological and topographic) exclude mantle layering except for the thin D” layer and the “superpiles” under Africa and the Pacific. Numerical models then indicate the MORB source comprises the rest of the mantle. Refractory-element mass balances can then be accommodated by a MORB source depleted by only a factor of 2 from chondritic abundances, rather than a factor of 4-7. A source for the hitherto-enigmatic unradiogenic helium in OIBs also emerges from this picture. Melt from subducted oceanic crust melting under MORs will react with surrounding peridotite to form intermediate compositions here termed hybrid pyroxenite. Only about half of the hybrid pyroxenite will be remelted, extracted and degassed at MORs, and the rest will recirculate within the mantle. Over successive generations starting early in Earth history, volatiles will come to reside mainly in the hybrid pyroxenite.
This will be denser than average mantle and will tend to accumulate in D”, like subducted oceanic crust. Because residence times in D” are longer, it will degas more slowly. Thus plumes will tap a mixture of older, less-degassed hybrid pyroxenite, containing less-radiogenic noble gases, and degassed former oceanic crust. Calculations of degassing history confirm that this picture can quantitatively account for He, Ne and Ar in MORBs and OIBs. Geophysically-based dynamical models have been shown over recent years to account quantitatively for the isotopes of refractory incompatible elements. This can now be extended to noble gas isotopes. The remaining significant issue is that thermal evolution calculations require more radiogenic heating than implied by cosmochemical estimates of radioactive heat sources. This may imply that tectonic and thermal evolution have been more episodic in the Phanerozoic than has been generally recognised.
NASA Astrophysics Data System (ADS)
Guo, Y.; Liu, J.; Mauzerall, D. L.; Emmons, L. K.; Horowitz, L. W.; Fan, S.; Li, X.; Tao, S.
2014-12-01
Long-range transport of ozone is of great concern, yet the source-receptor relationships derived previously depend strongly on the source attribution techniques used. Here we describe a new tagged ozone mechanism (full-tagged), the design of which seeks to take into account the combined effects of emissions of ozone precursors, CO, NOx and VOCs, from a particular source, while keeping the current state of chemical equilibrium unchanged. We label emissions from the target source (A) and background (B). When two species from A and B sources react with each other, half of the resulting products are labeled A, and half B. Thus the impact of a given source on downwind regions is recorded through tagged chemistry. We then incorporate this mechanism into the Model for Ozone and Related chemical Tracers (MOZART-4) to examine the impact of anthropogenic emissions within North America, Europe, East Asia and South Asia on ground-level ozone downwind of source regions during 1999-2000. We compare our results with two previously used methods -- the sensitivity and tagged-N approaches. The ozone attributed to a given source by the full-tagged method is more widely distributed spatially, but has weaker seasonal variability than that estimated by the other methods. On a seasonal basis, for most source/receptor pairs, the full-tagged method estimates the largest amount of tagged ozone, followed by the sensitivity and tagged-N methods. In terms of trans-Pacific influence of ozone pollution, the full-tagged method estimates the strongest impact of East Asian (EA) emissions on the western U.S. (WUS) in MAM and JJA (~3 ppbv), which is substantially different in magnitude and seasonality from tagged-N and sensitivity studies. 
This difference results from the full-tagged method accounting for the maintenance of peroxy radicals (e.g., CH3O2, CH3CO3, and HO2), in addition to NOy, as effective reservoirs of EA source impact across the Pacific, allowing for a significant contribution to ozone formation over WUS (particularly in summer). Thus, the full-tagged method, with its clear discrimination of source and background contributions on a per-reaction basis, provides unique insights into the critical role of VOCs (and additional reactive nitrogen species) in determining the nonlinear inter-continental influence of ozone pollution.
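The half-and-half labeling rule described above can be sketched in a few lines (a minimal illustration in our own notation; the function name and the representation of tags as mass fractions are ours, not MOZART-4's):

```python
def product_tag_fraction(frac_a_1, frac_a_2):
    """Fraction of reaction products attributed to the tagged source A.

    Each reactant carries a fraction of its mass labeled A (the rest is
    background B). When two species react, half the products inherit the
    label of the first reactant and half that of the second, i.e. the
    product tag is the arithmetic mean of the reactant tags.
    """
    return 0.5 * (frac_a_1 + frac_a_2)

# A fully tagged source molecule reacting with a pure-background molecule
# yields products that are half source, half background:
print(product_tag_fraction(1.0, 0.0))  # 0.5
```

Carried forward over many reaction cycles, this bookkeeping lets intermediate reservoirs such as peroxy radicals retain a source signature during transport, which is what distinguishes the full-tagged method from tagged-N.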
Code of Federal Regulations, 2010 CFR
2010-07-01
... 32 National Defense 2 2010-07-01 2010-07-01 false Reporting of accounts receivable and sales under... sales under 120 days delayed payment terms (short-term credit). (a) General. (1) Amounts payable to DoD Components for sales of Defense articles and services on terms which require payment of cash in advance of...
Code of Federal Regulations, 2011 CFR
2011-07-01
... 32 National Defense 2 2011-07-01 2011-07-01 false Reporting of accounts receivable and sales under... sales under 120 days delayed payment terms (short-term credit). (a) General. (1) Amounts payable to DoD Components for sales of Defense articles and services on terms which require payment of cash in advance of...
Application of Green Net Metropolitan Product to Measure ...
The U.S. Environmental Protection Agency (USEPA) has been increasingly incorporating the concept of sustainability in its research programs. One facet of this research is the quantitative assessment of the sustainability of urban systems in light of several multidisciplinary sustainability metrics. In this work, we explore the estimation of an economic measure of sustainability for the Chicago Metropolitan Area (CMA) based on Green Net Metropolitan Product (GNMP), by adapting macroeconomic models of sustainability to the regional scale. GNMP aims to amend the limitations of Net Domestic Product (NDP), a classical indicator of economic wellbeing, which fails to account for the degradation of environmental and natural resources caused by economic activities. We collect data for computing GNMP from publicly available secondary sources on variables such as gross metropolitan product, net income, emissions, solid waste, etc. In estimating GNMP for CMA, we have accounted for the damage costs associated with pollution emissions based on marginal damage values obtained from the literature using the benefit transfer method. In addition, we attempt to account for the marginal value of depletion of natural resources in the CMA in terms of water depletion and changes in urban ecosystems such as green spaces. We account for the marginal damage cost associated with solid waste generation. It is expected the preliminary results of this exploration se
Quantification of the evolution of firm size distributions due to mergers and acquisitions
Sornette, Didier
2017-01-01
The distribution of firm sizes is known to be heavy tailed. In order to account for this stylized fact, previous economic models have focused mainly on growth through investments in a company’s own operations (internal growth). Thereby, the impact of mergers and acquisitions (M&A) on firm size (external growth) is often not taken into consideration, notwithstanding its potentially large impact. In this article, we take a first step toward accounting for M&A. Specifically, we describe the effect of mergers and acquisitions on the firm size distribution in terms of an integro-differential equation. This equation is subsequently solved both analytically and numerically for various initial conditions, which allows us to account for different observations of previous empirical studies. In particular, it rationalises shortcomings of past work by quantifying that mergers and acquisitions exert a significant influence on the firm size distribution only over time scales much longer than a few decades. This explains why M&A has apparently little impact on the firm size distributions in existing data sets. Our approach is very flexible and can be extended to account for other sources of external growth, thus contributing towards a holistic understanding of the distribution of firm sizes. PMID:28841683
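A toy Monte Carlo makes the merger mechanism concrete (our illustration, not the article's integro-differential model): starting from identical firms and applying only pairwise mergers, the total size of the economy is conserved while the number of firms shrinks and a few firms grow large.

```python
import random

def simulate_mergers(sizes, n_mergers, rng):
    """Pairwise coagulation: each merger replaces two randomly chosen
    firms with a single firm of their combined size, so total firm size
    is conserved while the firm count decreases by one per merger."""
    sizes = list(sizes)
    for _ in range(n_mergers):
        if len(sizes) < 2:
            break
        i, j = rng.sample(range(len(sizes)), 2)
        merged = sizes[i] + sizes[j]
        for k in sorted((i, j), reverse=True):
            sizes.pop(k)
        sizes.append(merged)
    return sizes

rng = random.Random(0)
final = simulate_mergers([1.0] * 1000, 500, rng)
print(len(final), max(final))  # fewer firms; the largest grew by mergers alone
```

This pure-coagulation sketch omits internal growth and firm entry/exit, which the full model treats through its initial conditions and source terms; it also illustrates why the tail builds up only gradually under realistic merger rates.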
AMOEBA 2.0: A physics-first approach to biomolecular simulations
NASA Astrophysics Data System (ADS)
Rackers, Joshua; Ponder, Jay
The goal of the AMOEBA force field project is to use classical physics to understand and predict the nature of interactions between biological molecules. While making significant advances over the past decade, the ultimate goal of predicting binding energies with ``chemical accuracy'' remains elusive. The primary source of this inaccuracy comes from the physics of how molecules interact at short range. For example, despite AMOEBA's advanced treatment of electrostatics, the force field dramatically overpredicts the electrostatic energy of DNA stacking interactions. AMOEBA 2.0 works to correct these errors by including simple, first principles physics-based terms to account for the quantum mechanical nature of these short-range molecular interactions. We have added a charge penetration term that considerably improves the description of electrostatic interactions at short range. We are reformulating the polarization term of AMOEBA in terms of basic physics assertions. And we are reevaluating the van der Waals term to match ab initio energy decompositions. These additions and changes promise to make AMOEBA more predictive. By including more physical detail of the important short-range interactions of biological molecules, we hope to move closer to the ultimate goal of true predictive power.
NASA Astrophysics Data System (ADS)
Parazoo, Nicholas C.; Koven, Charles D.; Lawrence, David M.; Romanovsky, Vladimir; Miller, Charles E.
2018-01-01
Thaw and release of permafrost carbon (C) due to climate change is likely to offset increased vegetation C uptake in northern high-latitude (NHL) terrestrial ecosystems. Models project that this permafrost C feedback may act as a slow leak, in which case detection and attribution of the feedback may be difficult. The formation of talik, a subsurface layer of perennially thawed soil, can accelerate permafrost degradation and soil respiration, ultimately shifting the C balance of permafrost-affected ecosystems from long-term C sinks to long-term C sources. It is imperative to understand and characterize mechanistic links between talik, permafrost thaw, and respiration of deep soil C to detect and quantify the permafrost C feedback. Here, we use the Community Land Model (CLM) version 4.5, a permafrost and biogeochemistry model, in comparison to long-term deep borehole data along North American and Siberian transects, to investigate thaw-driven C sources in NHL (> 55° N) from 2000 to 2300. Widespread talik at depth is projected across most of the NHL permafrost region (14 million km2) by 2300, 6.2 million km2 of which is projected to become a long-term C source, emitting 10 Pg C by 2100, 50 Pg C by 2200, and 120 Pg C by 2300, with few signs of slowing. Roughly half of the projected C source region is in predominantly warm sub-Arctic permafrost following talik onset. This region emits only 20 Pg C by 2300, but the CLM4.5 estimate may be biased low by not accounting for deep C in yedoma. Accelerated decomposition of deep soil C following talik onset shifts the ecosystem C balance away from surface dominant processes (photosynthesis and litter respiration), but sink-to-source transition dates are delayed by 20-200 years by high ecosystem productivity, such that talik peaks early (~2050s, although borehole data suggest sooner) and C source transition peaks late (~2150-2200).
The remaining C source region in cold northern Arctic permafrost, which shifts to a net source early (late 21st century), emits 5 times more C (95 Pg C) by 2300, and prior to talik formation, due to the high decomposition rates of shallow, young C in organic-rich soils coupled with low productivity. Our results provide important clues signaling imminent talik onset and C source transition, including (1) late cold-season (January-February) soil warming at depth (~2 m), (2) increasing cold-season emissions (November-April), and (3) enhanced respiration of deep, old C in warm permafrost and young, shallow C in organic-rich cold permafrost soils. Our results suggest a mosaic of processes that govern carbon source-to-sink transitions at high latitudes and emphasize the urgency of monitoring soil thermal profiles, organic C age and content, cold-season CO2 emissions, and atmospheric 14CO2 as key indicators of the permafrost C feedback.
NASA Technical Reports Server (NTRS)
Ginoux, Paul; Prospero, Joseph M.; Gill, Thomas E.; Hsu, N. Christina; Zhao, Ming
2012-01-01
Our understanding of the global dust cycle is limited by a dearth of information about dust sources, especially small-scale features which could account for a large fraction of global emissions. Here we present a global-scale high-resolution (0.1 deg) mapping of sources based on Moderate Resolution Imaging Spectroradiometer (MODIS) Deep Blue estimates of dust optical depth in conjunction with other data sets including land use. We ascribe dust sources to natural and anthropogenic (primarily agricultural) origins, calculate their respective contributions to emissions, and extensively compare these products against literature. Natural dust sources globally account for 75% of emissions; anthropogenic sources account for 25%. North Africa accounts for 55% of global dust emissions with only 8% being anthropogenic, mostly from the Sahel. Elsewhere, anthropogenic dust emissions can be much higher (75% in Australia). Hydrologic dust sources (e.g., ephemeral water bodies) account for 31% worldwide; 15% of them are natural while 85% are anthropogenic. Globally, 20% of emissions are from vegetated surfaces, primarily desert shrublands and agricultural lands. Since anthropogenic dust sources are associated with land use and ephemeral water bodies, both in turn linked to the hydrological cycle, their emissions are affected by climate variability. Such changes in dust emissions can impact climate, air quality, and human health. Improved dust emission estimates will require a better mapping of threshold wind velocities, vegetation dynamics, and surface conditions (soil moisture and land use) especially in the sensitive regions identified here, as well as improved ability to address small-scale convective processes producing dust via cold pool (haboob) events frequent in monsoon regimes.
NASA Astrophysics Data System (ADS)
Chen, X.; Millet, D. B.; Singh, H. B.; Wisthaler, A.
2017-12-01
We present an integrated analysis of the atmospheric VOC budget over North America using a high-resolution GEOS-Chem simulation and observations from a large suite of recent aircraft campaigns. Here, the standard model simulation is expanded to include a more comprehensive VOC treatment encompassing the best current understanding of emissions and chemistry. Based on this updated framework, we find in the model that biogenic emissions dominate VOC carbon sources over North America (accounting for 71% of total primary emissions), and this is especially the case from a reactivity perspective (with biogenic VOCs accounting for 90% of reactivity-weighted emissions). Physical processes and chemical degradation make comparable contributions to the removal of VOC carbon over North America. We further apply this simulation to explore the impacts of different primary VOC sources on atmospheric chemistry in terms of OH reactivity and key atmospheric chemicals including NOx, HCHO, glyoxal, and ozone. The airborne observations show that the majority of detected VOC carbon is carried by oxygenated VOCs throughout the North American troposphere, and this tendency is well captured by the model. Model-measurement comparisons along the campaign flight tracks show that the total observed VOC abundance is generally well predicted by the model within the boundary layer (with some regionally-specific biases) but severely underestimated in the upper troposphere. The observations imply significant missing sources in the model for upper tropospheric methanol, acetone, peroxyacetic acid, and glyoxal, and for organic acids in the lower troposphere. Elemental ratios derived from airborne high-resolution mass spectrometry show only modest change in the ensemble VOC carbon oxidation state with aging (in NOx:NOy space), and the model successfully captures this behavior.
NASA Astrophysics Data System (ADS)
Maenhaut, Willy
2018-02-01
Five-year-long (1991-1996) aerosol trace element data sets for the fine (PM2) size fraction from the sites of Birkenes and Skreådalen in southern Norway were reanalysed by US EPA positive matrix factorization PMF5 in order to assess the sources and their contribution to the PM2 aerosol. The data sets contained the concentrations of the particulate mass (PM), black carbon (BC) and 21 elements in over 700 samples for each of the two sites. The PM was obtained from weighing with a microbalance and BC was determined with a light reflectance technique. The data for the elements were obtained by a combination of particle-induced X-ray emission and instrumental neutron activation analysis. Eight source factors were retained for each site, i.e., (i) secondary sulfate, which accounted for around 40% of the average measured PM2 mass, (ii) wood burning, with BC, K, Zn and As, which accounted for about 17%, (iii) an iodine factor (with also Br and Se), which is probably related to a marine biogenic source and was responsible for about 6.5%, (iv) aged sea salt with Na, Mg, Cl and Ca, but heavily depleted in Cl; (v) a crustal factor containing Al, Si, Ca, Ti and Fe; (vi) a heavy oil burning factor with V and Ni in a ratio of 3-4; (vii) a general pollution factor (with Cu, Zn, As, Se, Sb and Pb), and (viii) an almost pure manganese factor, which is attributed to Mn and FeMn industries in southern Norway. The results were substantially different from those of an earlier PMF analysis, in which use was made of PMF2.
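The core idea behind PMF can be illustrated with an unweighted nonnegative matrix factorization (a simplified sketch: EPA PMF additionally weights each residual by its measurement uncertainty and uses its own solver; the function below uses plain Lee-Seung multiplicative updates, and all names are ours):

```python
import numpy as np

def nmf(X, k, n_iter=500, seed=0):
    """Factor a samples-by-species matrix X ≈ G @ F with G, F >= 0:
    G holds source contributions per sample, F the source profiles."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    G = rng.random((n, k)) + 0.1
    F = rng.random((k, m)) + 0.1
    eps = 1e-12  # guard against division by zero
    for _ in range(n_iter):
        F *= (G.T @ X) / (G.T @ G @ F + eps)
        G *= (X @ F.T) / (G @ F @ F.T + eps)
    return G, F

# Two synthetic "sources" (e.g. a sulfate-like and a crustal-like profile)
# mixed into 50 samples of 6 species:
rng = np.random.default_rng(1)
profiles = np.array([[5.0, 1.0, 0.0, 0.0, 2.0, 1.0],
                     [0.0, 0.0, 4.0, 3.0, 1.0, 2.0]])
X = rng.random((50, 2)) @ profiles
G, F = nmf(X, k=2)
err = np.linalg.norm(X - G @ F) / np.linalg.norm(X)
print(round(err, 3))  # small relative reconstruction error
```

In receptor-modeling terms, the rows of F play the role of the eight retained source factors (secondary sulfate, wood burning, etc.), and each row of G gives their contributions to one sample's PM2 mass.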
Major sources of benzene exposure.
Wallace, L A
1989-01-01
Data from EPA's TEAM Study allow us to identify the major sources of exposure to benzene for much of the U.S. population. These sources turn out to be quite different from what had previously been considered the important sources. The most important source of exposure for 50 million smokers is the mainstream smoke from their cigarettes, which accounts for about half of the total population burden of exposure to benzene. Another 20% of nationwide exposure is contributed by various personal activities, such as driving and using attached garages. (Emissions from consumer products, building materials, paints, and adhesives may also be important, although data are largely lacking.) The traditional sources of atmospheric emissions (auto exhaust and industrial emissions) account for only about 20% of total exposure. Environmental tobacco smoke is an important source, accounting for about 5% of total nationwide exposure. A number of sources sometimes considered important, such as petroleum refining operations, petrochemical manufacturing, oil storage tanks, urban-industrial areas, service stations, certain foods, groundwater contamination, and underground gasoline leaks, appear to be unimportant on a nationwide basis. PMID:2477239
A Second Law Based Unstructured Finite Volume Procedure for Generalized Flow Simulation
NASA Technical Reports Server (NTRS)
Majumdar, Alok
1998-01-01
An unstructured finite volume procedure has been developed for steady and transient thermo-fluid dynamic analysis of fluid systems and components. The procedure is applicable for a flow network consisting of pipes and various fittings where flow is assumed to be one dimensional. It can also be used to simulate flow in a component by modeling a multi-dimensional flow using the same numerical scheme. The flow domain is discretized into a number of interconnected control volumes located arbitrarily in space. The conservation equations for each control volume account for the transport of mass, momentum and entropy from the neighboring control volumes. In addition, they also include the sources of each conserved variable and time dependent terms. The source term of entropy equation contains entropy generation due to heat transfer and fluid friction. Thermodynamic properties are computed from the equation of state of a real fluid. The system of equations is solved by a hybrid numerical method which is a combination of simultaneous Newton-Raphson and successive substitution schemes. The paper also describes the application and verification of the procedure by comparing its predictions with the analytical and numerical solution of several benchmark problems.
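The Newton-Raphson side of the hybrid strategy can be illustrated on the smallest possible network (a hypothetical two-pipe example of ours, not the paper's fluid system; a square-root branch relation stands in for the real friction model): the solver drives the mass-balance residual at one internal node to zero.

```python
import math

def branch_flow(dp, c):
    """One-dimensional branch relation Q = C * sign(dp) * sqrt(|dp|),
    a generic stand-in for a pipe friction law."""
    return math.copysign(c * math.sqrt(abs(dp)), dp)

def solve_node_pressure(p_in, p_out, c1, c2, tol=1e-10):
    """Newton-Raphson on the mass balance of one internal node: flow in
    through pipe 1 (from p_in) must equal flow out through pipe 2 (to p_out)."""
    p = 0.5 * (p_in + p_out)  # initial guess
    for _ in range(100):
        r = branch_flow(p_in - p, c1) - branch_flow(p - p_out, c2)  # residual
        h = 1e-6 * max(abs(p), 1.0)  # finite-difference Jacobian dr/dp
        rp = branch_flow(p_in - (p + h), c1) - branch_flow((p + h) - p_out, c2)
        step = r * h / (rp - r)  # Newton step r / (dr/dp)
        p -= step
        if abs(step) < tol:
            break
    return p

# Equal branch conductances: symmetry puts the node at the midpoint pressure.
print(solve_node_pressure(100.0, 0.0, c1=1.0, c2=1.0))  # 50.0
```

A full network solver assembles one such residual per control volume (mass, momentum, entropy) and solves them simultaneously; the successive-substitution part of the hybrid scheme updates property-dependent coefficients between Newton passes.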
Demand for Long-Term Care Insurance in China.
Wang, Qun; Zhou, Yi; Ding, Xinrui; Ying, Xiaohua
2017-12-22
The aim of this study was to estimate willingness to pay (WTP) for long-term care insurance (LTCI) and to explore the determinants of demand for LTCI in China. We collected data from a household survey conducted in Qinghai and Zhejiang on a sample of 1842 households. We relied on contingent valuation methods to elicit the demand for LTCI and random effects logistic regression to analyze the factors associated with the demand for LTCI. Complementarily, we used document analysis to compare the LTCI designed in this study and the current LTCI policies in the pilot cities. More than 90% of the respondents expressed their willingness to buy LTCI. The median WTP for LTCI was estimated at 370.14 RMB/year, accounting for 2.29% of average annual per capita disposable income. Price, age, education status, and income were significantly associated with demand for LTCI. Most pilot cities were found to mainly rely on Urban Employees Basic Medical Insurance funds as the financing source for LTCI. Considering that financing is one of the greatest challenges in the development of China's LTCI, we suggest that policy makers consider individual contribution as an important and possible option as a source of financing for LTCI.
Challies, Danna M; Hunt, Maree; Garry, Maryanne; Harper, David N
2011-01-01
The misinformation effect is a term used in the cognitive psychological literature to describe both experimental and real-world instances in which misleading information is incorporated into an account of an historical event. In many real-world situations, it is not possible to identify a distinct source of misinformation, and it appears that the witness may have inferred a false memory by integrating information from a variety of sources. In a stimulus equivalence task, a small number of trained relations between some members of a class of arbitrary stimuli result in a large number of untrained, or emergent relations, between all members of the class. Misleading information was introduced into a simple memory task between a learning phase and a recognition test by means of a match-to-sample stimulus equivalence task that included both stimuli from the original learning task and novel stimuli. At the recognition test, participants given equivalence training were more likely to misidentify patterns than those who were not given such training. The misinformation effect was distinct from the effects of prior stimulus exposure, or partial stimulus control. In summary, stimulus equivalence processes may underlie some real-world manifestations of the misinformation effect. PMID:22084495
Anthropogenic Chromium Emissions in China from 1990 to 2009
Cheng, Hongguang; Zhou, Tan; Li, Qian; Lu, Lu; Lin, Chunye
2014-01-01
An inventory of chromium emission into the atmosphere and water from anthropogenic activities in China was compiled for 1990 through to 2009. We estimate that the total emission of chromium to the atmosphere is about 1.92×10^5 t. Coal and oil combustion were the two leading sources of chromium emission to the atmosphere in China, while their contributions showed opposite annual growth trends. In total, nearly 1.34×10^4 t of chromium was discharged to water, mainly from six industrial categories over the 20 years. Among them, the metal fabrication industry and the leather tanning sector were the dominant sources of chromium emissions, accounting for approximately 68.0% and 20.0% of the total emissions and representing increases of 15.6% and 10.3% annually, respectively. The spatial trends of Cr emissions show significant variation based on emissions from 2005 to 2009. The emission to the atmosphere was heaviest in Hebei, Shandong, Guangdong, Zhejiang and Shanxi, whose annual emissions each exceeded 1000 t owing to high levels of coal and oil consumption. In terms of emission to water, the largest contributors were Guangdong, Jiangsu, Shandong and Zhejiang, where most of the leather production and metal manufacturing occur; these four regions accounted for nearly 47.4% of the total emission to water. PMID:24505309
2013-01-01
Background The objective was to examine feasibility of using hospital discharge register data for studying fire-related injuries. Methods The Finnish National Hospital Discharge Register (FHDR) was the database used to select relevant hospital discharge data to study usability and data quality issues. Patterns of E-coding were assessed, as well as prominent challenges in defining the incidence of injuries. Additionally, the issue of defining the relevant amount of hospital days accounted for in injury care was considered. Results Directly after the introduction of the ICD-10 classification system, in 1996, the completeness of E-coding was found to be poor, but to have improved dramatically around 2000 and thereafter. The scale of the challenges to defining the incidence of injuries was found to be manageable. In counting the relevant hospital days, psychiatric and long-term care were found to be the obvious and possible sources of overestimation. Conclusions The FHDR was found to be a feasible data source for studying fire-related injuries so long as potential challenges are acknowledged and taken into account. Hospital discharge data can be a unique and powerful means for injury research as issues of representativeness and coverage of traditional probability samples can frequently be completely avoided. PMID:23496937
40 CFR 96.51 - Establishment of accounts.
Code of Federal Regulations, 2013 CFR
2013-07-01
... accounts. Upon receipt of a complete account certificate of representation under § 96.13, the Administrator... representation was submitted; and (2) An overdraft account for each source for which the account certificate of representation was submitted and that has two or more NOX Budget units. (b) General accounts. (1) Any person may...
Iowa Community Colleges Accounting Manual.
ERIC Educational Resources Information Center
Iowa State Dept. of Education, Des Moines. Div. of Community Colleges and Workforce Preparation.
This document describes account classifications and definitions for the accounting system of the Iowa community colleges. In view of the objectives of the accounting system, it is necessary to segregate the assets of the community college according to their source and intended use. Additionally, the accounting system should provide for accounting by…
Numerical Prediction of Combustion-induced Noise using a hybrid LES/CAA approach
NASA Astrophysics Data System (ADS)
Ihme, Matthias; Pitsch, Heinz; Kaltenbacher, Manfred
2006-11-01
Noise generation in technical devices is an increasingly important problem. Jet engines in particular produce sound levels that not only are a nuisance but may also impair hearing. The noise emitted by such engines is generated by different sources, such as jet exhaust, fans or turbines, and combustion. Whereas the former acoustic mechanisms are reasonably well understood, combustion-generated noise is not. A methodology for the prediction of combustion-generated noise is developed. In this hybrid approach, unsteady acoustic source terms are obtained from an LES, and the propagation of pressure perturbations is computed using acoustic analogies. Lighthill's acoustic analogy and a non-linear wave equation, accounting for a variable speed of sound, have been employed. Both models are applied to an open diffusion flame. The effects on the far-field pressure and directivity due to the variation of the speed of sound are analyzed. Results for the sound pressure level are compared with experimental data.
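For reference, Lighthill's acoustic analogy invoked above recasts the flow equations as a wave equation for the density perturbation driven by a quadrupole source (standard textbook form; the paper's variable-speed-of-sound wave equation generalizes the left-hand side):

```latex
\frac{\partial^2 \rho'}{\partial t^2} - c_0^2\,\nabla^2 \rho'
= \frac{\partial^2 T_{ij}}{\partial x_i\,\partial x_j},
\qquad
T_{ij} = \rho u_i u_j + \left(p' - c_0^2 \rho'\right)\delta_{ij} - \tau_{ij}
```

In reacting flows the entropy term (p' - c_0^2 ρ') of the Lighthill stress tensor carries the combustion contribution, which is why accounting for the variable speed of sound matters for combustion-generated noise.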
A nonequilibrium model for a moderate pressure hydrogen microwave discharge plasma
NASA Technical Reports Server (NTRS)
Scott, Carl D.
1993-01-01
This document describes a simple nonequilibrium energy exchange and chemical reaction model to be used in a computational fluid dynamics calculation for a hydrogen plasma excited by microwaves. The model takes into account the exchange between the electrons and excited states of molecular and atomic hydrogen. Specifically, electron-translation, electron-vibration, and translation-vibration energy exchange, together with ionization and dissociation, are included. The model assumes three temperatures, translational/rotational, vibrational, and electron, each describing a Boltzmann distribution for its respective energy mode. The energy from the microwave source is coupled to the energy equation via a source term that depends on an effective electric field, which must be calculated outside the present model. This electric field must be found by coupling the results of the fluid dynamics and kinetics solution with a solution to Maxwell's equations that includes the effects of the plasma permittivity. The solution to Maxwell's equations is not within the scope of the present paper.
Bank Runs and the Accounting for Illiquid Assets in Financial Institutions
ERIC Educational Resources Information Center
Meder, Anthony; Schwartz, Steven T.; Wu, Mark; Young, Richard A.
2014-01-01
Financial services are an increasingly important sector in modern economies, yet many accounting and auditing texts focus on manufacturing and retailing. This teaching note describes the role of financial institutions in transforming long-term, difficult-to-sell assets into short-term bank accounts. This is referred to as liquidity transformation.…
18 CFR 367.2240 - Account 224, Other long-term debt.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 18 Conservation of Power and Water Resources 1 2010-04-01 2010-04-01 false Account 224, Other long-term debt. 367.2240 Section 367.2240 Conservation of Power and Water Resources FEDERAL ENERGY..., FEDERAL POWER ACT AND NATURAL GAS ACT UNIFORM SYSTEM OF ACCOUNTS FOR CENTRALIZED SERVICE COMPANIES SUBJECT...
37 CFR 261.4 - Terms for making payment of royalty fees and statements of account.
Code of Federal Regulations, 2010 CFR
2010-07-01
... royalty fees and statements of account. 261.4 Section 261.4 Patents, Trademarks, and Copyrights COPYRIGHT OFFICE, LIBRARY OF CONGRESS COPYRIGHT ARBITRATION ROYALTY PANEL RULES AND PROCEDURES RATES AND TERMS FOR... payment of royalty fees and statements of account. (a) A Licensee shall make the royalty payments due...
12 CFR Appendix B to Part 707 - Model Clauses and Sample Forms
Code of Federal Regulations, 2013 CFR
2013-01-01
... your deposit account is ___% with an annual percentage yield (APY) of ___%. [For purposes of this...-bearing Term Share Accounts The dividend rate on your term share account is ___% with an annual percentage... declaration date/ (date)], the dividend rate was ___% with an annual percentage yield (APY) of ___% on your...
Source Apportionment of VOCs in Edmonton, Alberta
NASA Astrophysics Data System (ADS)
McCarthy, M. C.; Brown, S. G.; Aklilu, Y.; Lyder, D. A.
2012-12-01
Regional emissions at Edmonton, Alberta, are complex, containing emissions from (1) transportation sources, such as cars, trucks, buses, and rail; (2) industrial sources, such as petroleum refining, light manufacturing, and fugitive emissions from holding tanks or petroleum terminals; and (3) miscellaneous sources, such as biogenic emissions and natural gas use and processing. From 2003 to 2009, whole air samples were collected at two sites in Edmonton and analyzed for over 77 volatile organic compounds (VOCs). VOCs were sampled in the downtown area (Central) and the industrial area on the eastern side of the city (East). Concentrations of most VOCs were highest at the East site. The positive matrix factorization (PMF) receptor model was used to apportion ambient concentration measurements of VOCs into eleven factors, which were associated with emissions source categories. Factors of VOCs identified in the final eleven-factor solution include transportation sources (both gasoline and diesel vehicles), industrial sources, a biogenic source, and a natural-gas-related source. Transportation sources accounted for more mass at the Central site than at the East site; this was expected because Central is in a core urban area where transportation emissions are concentrated. Transportation sources accounted for nearly half of the VOC mass at the Central site, but only 6% of the mass at the East site. Encouragingly, mass from transportation sources has declined by about 4% a year in this area; this trend is similar to the decline found throughout the United States, and is likely due to fleet turnover as older, more highly polluting cars are replaced with newer, cleaner cars. In contrast, industrial sources accounted for ten times more VOC mass at the East site than at the Central site and were responsible for most of the total VOC mass observed at the East site. 
Of the six industrial factors identified at the East site, four were linked to petrochemical industry production and storage. The two largest contributors to VOC mass at the East site were associated with fugitive emissions of volatile species (butanes, pentanes, hexane, and cyclohexane); together, these two factors accounted for more than 50% of the mass at the East site and less than 2% of the mass at the Central site. Natural-gas-related emissions accounted for 10% to 20% of the mass at both sites. Biogenic emissions and VOCs associated with well-mixed global background were less than 10% of the VOC mass at the Central site and less than 3% of the mass at the East site. Controllable emissions sources account for the bulk of the identified VOC mass. Efforts to reduce ozone or particulate matter precursors or exposure to toxic pollutants can now be directed to those sources most important to the Edmonton area.
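The PMF decomposition underlying these contribution estimates factors the sample-by-species concentration matrix into nonnegative factor contributions and profiles. A minimal unweighted sketch using Lee-Seung multiplicative updates (real PMF, e.g. EPA PMF, additionally down-weights each residual by its measurement uncertainty; all data here are synthetic and the variable names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_species, n_factors = 200, 20, 4

# Synthetic nonnegative data: contributions G_true times profiles F_true, plus noise.
G_true = rng.gamma(2.0, 1.0, (n_samples, n_factors))
F_true = rng.gamma(2.0, 1.0, (n_factors, n_species))
X = np.clip(G_true @ F_true + rng.normal(0, 0.1, (n_samples, n_species)), 0, None)

# Multiplicative-update nonnegative factorization (Lee & Seung):
# X ~= G @ F with G, F >= 0 elementwise.
eps = 1e-9
G = rng.random((n_samples, n_factors))
F = rng.random((n_factors, n_species))
for _ in range(300):
    F *= (G.T @ X) / (G.T @ G @ F + eps)
    G *= (X @ F.T) / (G @ (F @ F.T) + eps)

# Percent of total mass attributed to each factor (summed over samples).
mass_by_factor = (G * F.sum(axis=1)).sum(axis=0)
share = 100 * mass_by_factor / mass_by_factor.sum()
```

Assigning each resolved factor to an emission source category (traffic, fugitive industrial emissions, biogenics, etc.) is then done by inspecting its species profile, as in the study above.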
75 FR 24384 - Reserve Requirements of Depository Institutions Policy on Payment System Risk
Federal Register 2010, 2011, 2012, 2013, 2014
2010-05-05
... balances. Institutions eligible to receive earnings on their balances in accounts at Federal Reserve Banks... general level of short-term interest rates. Term deposits are separate and distinct from balances... maintained in an excess balance account. Term deposits do not satisfy an institution's required reserve...
Radial mixing in turbomachines
NASA Astrophysics Data System (ADS)
Segaert, P.; Hirsch, Ch.; Deruyck, J.
1991-03-01
A method for computing the effects of radial mixing in a turbomachinery blade row has been developed. The method fits in the framework of a quasi-3D flow computation and hence is applied in a corrective fashion to through-flow distributions. The method takes into account both secondary flows and turbulent diffusion as possible sources of mixing. Secondary flow velocities determine the magnitude of the convection terms in the energy redistribution equation, while a turbulent diffusion coefficient determines the magnitude of the diffusion terms. Secondary flows are computed by solving a Poisson equation for a secondary streamfunction on a transversal S3-plane, whereby the right-hand-side axial vorticity is composed of different contributions, each associated with a particular flow region: inviscid core flow, end-wall boundary layers, profile boundary layers, and wakes. The turbulent mixing coefficient is estimated by a semi-empirical correlation. Secondary flow theory is applied to the VUB cascade test case, and comparisons are made between the computational results and the extensive experimental data available for this test case. This comparison shows that the secondary flow computations yield reliable predictions of the secondary flow pattern, both qualitatively and quantitatively, taking into account the limitations of the model. However, the computations show that the use of a uniform mixing coefficient has to be replaced by a more sophisticated approach.
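The secondary-flow step described above amounts to solving, on each transversal S3-plane, a Poisson equation for a secondary streamfunction; in generic notation (symbols here are illustrative, not taken from the paper):

```latex
\nabla^2_{S3}\,\psi = -\,\omega_s , \qquad
v_{\mathrm{sec}} = \frac{\partial \psi}{\partial z}, \quad
w_{\mathrm{sec}} = -\frac{\partial \psi}{\partial y}
```

where the streamwise vorticity ω_s on the right-hand side is assembled from the separate core-flow, end-wall boundary layer, profile boundary layer, and wake contributions, and the resulting secondary velocities feed the convection terms of the energy redistribution equation.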
[Biotechnology's macroeconomic impact].
Dones Tacero, Milagros; Pérez García, Julián; San Román, Antonio Pulido
2008-12-01
This paper provides an economic valuation of biotechnological activities in terms of aggregate production and employment. The valuation goes beyond direct estimation and includes the indirect effects derived from sectoral linkages between biotechnological activities and the rest of the economic system. Several data sources were used, including official data from the National Statistical Office (INE), such as national accounts, input-output tables, and innovation surveys, as well as firm-level balance sheets and income statements, and specific information about research projects compiled by the Genoma Spain Foundation. The methodological approach is based on the estimation of a new input-output table that includes biotechnological activities as a specific branch. This table yields both the direct impact of these activities and the main parameters needed to obtain the induced effects on the rest of the economic system. According to the most recent available figures, biotechnological activities directly generated almost 1,600 million euros in 2005 and employed more than 9,000 workers. Taking into account the full linkages with the rest of the system, the macroeconomic impact of biotechnological activities reaches around 5,000 million euros in production terms (0.6% of total GDP) and is responsible, directly or indirectly, for more than 44,000 jobs.
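The induced-effects calculation described above follows standard input-output logic: total output is the Leontief inverse applied to final demand, x = (I - A)^(-1) d. A minimal numerical sketch with purely hypothetical coefficients (not the Spanish input-output table used in the paper):

```python
import numpy as np

# Hypothetical 3-sector economy; coefficients are illustrative only.
# A[i, j] = input from sector i required per unit of output of sector j.
A = np.array([[0.10, 0.05, 0.02],
              [0.20, 0.15, 0.10],
              [0.05, 0.10, 0.08]])
d = np.array([100.0, 50.0, 30.0])   # final demand by sector

# Total output needed to satisfy final demand, including all
# indirect requirements propagated through sectoral linkages.
L = np.linalg.inv(np.eye(3) - A)    # Leontief inverse
x = L @ d

direct = d.sum()
total = x.sum()                     # direct + indirect production
multiplier = total / direct         # aggregate output multiplier > 1
```

Augmenting the table with biotechnology as its own branch, as the paper does, lets the same inverse attribute the induced output and employment to that branch specifically.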
Iron formations as the source of the West African magnetic crustal anomaly
NASA Astrophysics Data System (ADS)
Launay, Nicolas; Quesnel, Yoann; Rochette, Pierre; Demory, François
2018-04-01
The geological sources of major magnetic field anomalies are still poorly constrained in terms of nature, geometry, and vertical position. A common feature of several anomalies is their spatial correlation with cratonic shields and, for the largest anomalies, with Banded Iron Formations (BIF). This study first unveils the magnetic properties of BIF samples from Mauritania, where the main part of the West African magnetic anomaly is observed, revealing the exceptionally strong magnetic susceptibility and natural remanent magnetization of such rocks. High Koenigsberger ratios imply that the remanent magnetization must be taken into account to explain the anomaly. A numerical model of the crust beneath this anomaly is constructed using these constraints together with both gravity and magnetic field data. A forward approach is used, investigating the depth, thickness, and magnetization intensity of all possible crustal lithologies. Our results show that BIF slices can be the only magnetized crustal sources needed to explain the anomaly, and that they could be buried several kilometers deep. The results of this study provide a new perspective for investigating the sources of magnetic field anomalies in other cratonic regions with BIF outcrops.
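The Koenigsberger ratio invoked above is the standard dimensionless measure of remanent versus induced magnetization:

```latex
Q = \frac{M_{r}}{\chi\, H}
```

where M_r is the natural remanent magnetization, χ the magnetic susceptibility, and H the ambient geomagnetic field intensity; Q ≫ 1 indicates that remanence, rather than induction, dominates a rock's contribution to the observed anomaly.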
Dissociation between memory accuracy and memory confidence following bilateral parietal lesions.
Simons, Jon S; Peers, Polly V; Mazuz, Yonatan S; Berryhill, Marian E; Olson, Ingrid R
2010-02-01
Numerous functional neuroimaging studies have observed lateral parietal lobe activation during memory tasks: a surprise to clinicians who have traditionally associated the parietal lobe with spatial attention rather than memory. Recent neuropsychological studies examining episodic recollection after parietal lobe lesions have reported differing results. Performance was preserved in unilateral lesion patients on source memory tasks involving recollecting the context in which stimuli were encountered, and impaired in patients with bilateral parietal lesions on tasks assessing free recall of autobiographical memories. Here, we investigated a number of possible accounts for these differing results. In 3 experiments, patients with bilateral parietal lesions performed as well as controls at source recollection, confirming the previous unilateral lesion results and arguing against an explanation for those results in terms of contralesional compensation. Reducing the behavioral relevance of mnemonic information critical to the source recollection task did not affect performance of the bilateral lesion patients, indicating that the previously observed reduced autobiographical free recall might not be due to impaired bottom-up attention. The bilateral patients did, however, exhibit reduced confidence in their source recollection abilities across the 3 experiments, consistent with a suggestion that parietal lobe lesions might lead to impaired subjective experience of rich episodic recollection.
Takahiro Sayama; Jeffrey J. McDonnell
2009-01-01
Hydrograph source components and stream water residence time are fundamental behavioral descriptors of watersheds but, as yet, are poorly represented in most rainfall-runoff models. We present a new time-space accounting scheme (T-SAS) to simulate the pre-event and event water fractions, mean residence time, and spatial source of streamflow at the watershed scale. We...
NASA Astrophysics Data System (ADS)
Yuan, Zibing; Yadav, Varun; Turner, Jay R.; Louie, Peter K. K.; Lau, Alexis Kai Hon
2013-09-01
Despite extensive emission control measures targeting motor vehicles and to a lesser extent other sources, annual-average PM10 mass concentrations in Hong Kong have remained relatively constant for the past several years and for some air quality metrics, such as the frequency of poor visibility days, conditions have degraded. The underlying drivers for these long-term trends were examined by performing source apportionment on eleven years (1998-2008) of data for seven monitoring sites in the Hong Kong PM10 chemical speciation network. Nine factors were resolved using Positive Matrix Factorization. These factors were assigned to emission source categories that were classified as local (operationally defined as within the Hong Kong Special Administrative Region) or non-local based on temporal and spatial patterns in the source contribution estimates. This data-driven analysis provides strong evidence that local controls on motor vehicle emissions have been effective in reducing motor vehicle-related ambient PM10 burdens with annual-average contributions at neighborhood- and larger-scale monitoring stations decreasing by ˜6 μg m-3 over the eleven year period. However, this improvement has been offset by an increase in annual-average contributions from non-local contributions, especially secondary sulfate and nitrate, of ˜8 μg m-3 over the same time period. As a result, non-local source contributions to urban-scale PM10 have increased from 58% in 1998 to 70% in 2008. Most of the motor vehicle-related decrease and non-local source driven increase occurred over the period 1998-2004 with more modest changes thereafter. Non-local contributions increased most dramatically for secondary sulfate and secondary nitrate factors and thus combustion-related control strategies, including but not limited to power plants, are needed for sources located in the Pearl River Delta and more distant regions to improve air quality conditions in Hong Kong. 
PMF-resolved source contribution estimates were also used to examine differential contributions of emission source categories during high PM episodes compared to study-average behavior. While contributions from all source categories increased to some extent on high PM days, the increases were disproportionately high for the non-local sources. Thus, controls on emission sources located outside the Hong Kong Special Administrative Region will be needed to effectively decrease the frequency and severity of high PM episodes.
Performance Impact of Deflagration to Detonation Transition Enhancing Obstacles
NASA Technical Reports Server (NTRS)
Paxson, Daniel E.; Schauer, Frederick; Hopper, David
2012-01-01
A sub-model is developed to account for the drag and heat transfer enhancement resulting from deflagration-to-detonation transition (DDT) inducing obstacles commonly used in pulse detonation engines (PDE). The sub-model is incorporated as a source term in a time-accurate, quasi-one-dimensional, CFD-based PDE simulation. The simulation and sub-model are then validated through comparison with a particular experiment in which limited DDT obstacle parameters were varied. The simulation is then used to examine the relative contributions of drag and heat transfer to the observed thrust reduction. It is found that heat transfer is far more significant than aerodynamic drag in this particular experiment.
Prediction of facial cooling while walking in cold wind.
Tikuisis, Peter; Ducharme, Michel B; Brajkovic, Dragan
2007-09-01
A dynamic model of cheek cooling has been modified to account for the increased skin blood circulation of individuals walking in cold wind. This was achieved by modelling cold-induced vasodilation as a varying blood perfusion term, which provided a source of convective heat to the skin tissues of the model. Physiologically valid blood perfusion was fitted to replicate the cheek skin temperature responses of 12 individuals experimentally exposed to air temperatures from -10 to 10 degrees C at wind speeds from 2 to 8 m s(-1). Resultant cheek skin temperatures met goodness-of-fit criteria, and implications for wind chill predictions are discussed.
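The varying-perfusion mechanism described above is conventionally expressed as a Pennes-type bioheat source term; the form below is the standard textbook expression, not necessarily the paper's exact formulation:

```latex
\rho c \,\frac{\partial T}{\partial t}
= \nabla\!\cdot\!\left(k\,\nabla T\right)
+ \rho_b c_b\, \omega_b(t)\,\bigl(T_a - T\bigr)
```

where ω_b(t) is the time-varying blood perfusion rate, which rises during cold-induced vasodilation and thereby delivers convective heat at arterial temperature T_a to the cooling tissue.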
On simulation of local fluxes in molecular junctions
NASA Astrophysics Data System (ADS)
Cabra, Gabriel; Jensen, Anders; Galperin, Michael
2018-05-01
We present a pedagogical review of the current density simulation in molecular junction models indicating its advantages and deficiencies in analysis of local junction transport characteristics. In particular, we argue that current density is a universal tool which provides more information than traditionally simulated bond currents, especially when discussing inelastic processes. However, current density simulations are sensitive to the choice of basis and electronic structure method. We note that while discussing the local current conservation in junctions, one has to account for the source term caused by the open character of the system and intra-molecular interactions. Our considerations are illustrated with numerical simulations of a benzenedithiol molecular junction.
Comparison of Fully-Compressible Equation Sets for Atmospheric Dynamics
NASA Technical Reports Server (NTRS)
Ahmad, Nashat N.
2016-01-01
Traditionally, the equation for the conservation of energy used in atmospheric models is based on potential temperature and is used in place of total energy conservation. This paper compares the application of the two equation sets for both the Euler and the Navier-Stokes solutions using several benchmark test cases. A high-resolution wave-propagation method, which accurately takes into account the source term due to gravity, is used for computing the non-hydrostatic atmospheric flows. It is demonstrated that there is little to no difference between the results obtained using the two different equation sets for the Euler as well as the Navier-Stokes solutions.
Stets, Edward G.; Kelly, Valerie J.; Crawford, Charles G.
2015-01-01
Riverine nitrate (NO3) is a well-documented driver of eutrophication and hypoxia in coastal areas. Elevated river NO3 concentrations are linked to anthropogenic inputs from municipal, agricultural, and atmospheric sources. The intensity of these sources has varied regionally, through time, and in response to multiple causes such as economic drivers and policy responses. This study uses long-term water quality, land use, and other ancillary data to further describe the evolution of river NO3 concentrations at 22 monitoring stations in the United States (U.S.). The stations were selected for long-term data availability and to represent a range of climate and land-use conditions. We examined NO3 at the monitoring stations using a flow-weighting scheme meant to account for interannual flow variability, allowing greater focus on river chemical conditions. River NO3 concentrations increased strongly during 1945-1980 at most of the stations and have remained elevated, although they stopped increasing during 1981-2008. NO3 increased to a greater extent at monitoring stations in the Midwest U.S. and less so at those in the Eastern and Western U.S. We discuss 20th Century agricultural development in the U.S. and demonstrate that regional differences in NO3 concentration patterns were strongly related to an agricultural index developed using principal components analysis. This unique century-scale dataset adds to our understanding of long-term NO3 patterns in the U.S.
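The flow-weighting scheme mentioned above can be illustrated with a minimal sketch: weighting each concentration by its concurrent discharge yields a load-per-unit-flow mean, which damps interannual hydrologic variability so that year-to-year changes reflect chemistry rather than wet versus dry years (all data synthetic; variable names are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical daily records for one water year.
flow = rng.gamma(2.0, 50.0, 365)        # discharge, m^3/s
conc = 2.0 + rng.normal(0, 0.3, 365)    # NO3 concentration, mg/L

# Flow-weighted mean concentration = total load / total flow.
fw_mean = np.sum(conc * flow) / np.sum(flow)

# Simple arithmetic mean for comparison: treats a low-flow day
# the same as a flood day, so it is more sensitive to hydrology.
arith_mean = conc.mean()
```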
An audit of the global carbon budget: identifying and reducing sources of uncertainty
NASA Astrophysics Data System (ADS)
Ballantyne, A. P.; Tans, P. P.; Marland, G.; Stocker, B. D.
2012-12-01
Uncertainties in our carbon accounting practices may limit our ability to objectively verify emission reductions on regional scales. Furthermore, uncertainties in the global C budget must be reduced to benchmark Earth System Models that incorporate carbon-climate interactions. Here we present an audit of the global C budget in which we try to identify sources of uncertainty for the major terms. The atmospheric growth rate of CO2 has increased significantly over the last 50 years, while the uncertainty in calculating the global atmospheric growth rate has been reduced from 0.4 ppm/yr to 0.2 ppm/yr (95% confidence). Although we have greatly reduced global CO2 growth rate uncertainties, there remain regions, such as the Southern Hemisphere, Tropics, and Arctic, where changes in regional sources/sinks will remain difficult to detect without additional observations. Increases in fossil fuel (FF) emissions are the primary factor driving the increase in global CO2 growth rate; however, our confidence in FF emission estimates has actually gone down. Based on a comparison of multiple estimates, FF emissions have increased from 2.45 ± 0.12 PgC/yr in 1959 to 9.40 ± 0.66 PgC/yr in 2010. Major sources of increasing FF emission uncertainty are increased emissions from emerging economies, such as China and India, as well as subtle differences in accounting practices. Lastly, we evaluate emission estimates from Land Use Change (LUC). Although relative errors in emission estimates from LUC are quite high (2 sigma ~ 50%), LUC emissions have remained fairly constant in recent decades. We evaluate the three commonly used approaches to estimating LUC emissions, Bookkeeping, Satellite Imagery, and Model Simulations, to identify their main sources of error and their ability to detect net emissions from LUC. [Figure: Uncertainties in fossil fuel emissions over the last 50 years.]
NASA Astrophysics Data System (ADS)
Yang, Junhua; Kang, Shichang; Ji, Zhenming; Chen, Deliang
2018-01-01
Black carbon (BC) in snow/ice induces enhanced snow and glacier melting. As over 60% of atmospheric BC is emitted from anthropogenic sources, which directly impacts the distribution and concentration of BC in snow/ice, it is essential to assess the origin of anthropogenic BC transported to the Tibetan Plateau (TP) where there are few direct emissions attributable to local human activities. In this study, we used a regional climate-atmospheric chemistry model and a set of BC scenarios for quantitative evaluation of the impact of anthropogenic BC from various sources and its climate effects over the TP in 2013. The results showed that the model performed well in terms of climatology, aerosol optical properties, and near-surface concentrations, which indicates that this modeling framework is appropriate to characterize anthropogenic BC source-receptor relationships over the TP. The simulated surface concentration associated with the anthropogenic sources showed seasonal differences. In the monsoon season, the contribution of anthropogenic BC was less than in the nonmonsoon season. In the nonmonsoon season, westerly winds prevailed and transported BC from central Asia and north India to the western TP. In the monsoon season, BC aerosol was transported to the middle-upper troposphere over the Indo-Gangetic Plain and crossed the Himalayas via southwesterly winds. The majority of anthropogenic BC over the TP was transported from South Asia, which contributed to 40%-80% (mean of 61.3%) of surface BC in the nonmonsoon season, and 10%-50% (mean of 19.4%) in the monsoon season. For the northeastern TP, anthropogenic BC from eastern China accounted for less than 10% of the total in the nonmonsoon season but can be up to 50% in the monsoon season. Averaged over the TP, the eastern China anthropogenic sources accounted for 6.2% and 8.4% of surface BC in the nonmonsoon and monsoon seasons, respectively. 
The anthropogenic BC induced negative radiative forcing and a cooling effect near the surface over the TP.
The role of fire in the boreal carbon budget
Harden, J.W.; Trumbore, S.E.; Stocks, B.J.; Hirsch, A.; Gower, S.T.; O'Neill, K. P.; Kasischke, E.S.
2000-01-01
To reconcile observations of decomposition rates, carbon inventories, and net primary production (NPP), we estimated long-term averages for C exchange in boreal forests near Thompson, Manitoba. Soil drainage, as defined by water table, moss cover, and permafrost dynamics, is the dominant control on direct fire emissions. In upland forests, an average of about 10-30% of annual NPP was likely consumed by fire over the past 6500 years since these landforms and ecosystems were established. This long-term average fire emission is much larger than has been accounted for in global C cycle models and may forecast an increase in fire activity for this region. While over decadal-to-century timescales these boreal forests may be acting as slight net sinks for atmospheric C, periods of drought and severe fire activity may turn these systems into net sources of C.
Long-term monitoring of the Sedlec Ossuary - Analysis of hygrothermal conditions
NASA Astrophysics Data System (ADS)
Pavlík, Zbyšek; Balík, Lukáš; Maděra, Jiří; Černý, Robert
2016-07-01
The Sedlec Ossuary is one of the twelve UNESCO World Heritage Sites in the Czech Republic. Although the ossuary is listed among the most visited Czech tourist attractions, its technical state is almost critical and a radical renovation is necessary. On this account, the hygrothermal performance of the ossuary is experimentally investigated in the present paper in order to obtain information on moisture sources and the data necessary for an optimized design of renovation treatments and reconstruction solutions that will allow the historical significance of this attractive heritage site to be preserved. Within the experimental analysis, the interior and exterior climatic conditions were monitored over an almost three-year period, together with relative humidity and temperature profiles measured in the most damaged parts of the ossuary chapel. On the basis of the measured data, the long-term hygrothermal state of the ossuary building is assessed and the periods of possible surface condensation are identified.
17 CFR 210.1-02 - Definitions of terms used in Regulation S-X (17 CFR part 210).
Code of Federal Regulations, 2010 CFR
2010-04-01
... in this section unless the context otherwise requires. (a)(1) Accountant's report. The term accountant's report, when used in regard to financial statements, means a document in which an independent public or certified public accountant indicates the scope of the audit (or examination) which he has made...
Coane, Jennifer H; McBride, Dawn M; Termonen, Miia-Liisa; Cutting, J Cooper
2016-01-01
The goal of the present study was to examine the contributions of associative strength and similarity in terms of shared features to the production of false memories in the Deese/Roediger-McDermott list-learning paradigm. Whereas the activation/monitoring account suggests that false memories are driven by automatic associative activation from list items to nonpresented lures, combined with errors in source monitoring, other accounts (e.g., fuzzy trace theory, global-matching models) emphasize the importance of semantic-level similarity, and thus predict that shared features between list and lure items will increase false memory. Participants studied lists of nine items related to a nonpresented lure. Half of the lists consisted of items that were associated but did not share features with the lure, and the other half included items that were equally associated but also shared features with the lure (in many cases, these were taxonomically related items). The two types of lists were carefully matched in terms of a variety of lexical and semantic factors, and the same lures were used across list types. In two experiments, false recognition of the critical lures was greater following the study of lists that shared features with the critical lure, suggesting that similarity at a categorical or taxonomic level contributes to false memory above and beyond associative strength. We refer to this phenomenon as a "feature boost" that reflects additive effects of shared meaning and association strength and is generally consistent with accounts of false memory that have emphasized thematic or feature-level similarity among studied and nonstudied representations.
Fu, Linda Y; Zook, Kathleen; Spoehr-Labutta, Zachary; Hu, Pamela; Joseph, Jill G
2016-01-01
Online information can influence attitudes toward vaccination. The aim of the present study was to provide a systematic evaluation of the search engine ranking, quality, and content of Web pages that are critical versus noncritical of human papillomavirus (HPV) vaccination. We identified HPV vaccine-related Web pages with the Google search engine by entering 20 terms. We then assessed each Web page for critical versus noncritical bias and for the following quality indicators: authorship disclosure, source disclosure, attribution of at least one reference, currency, exclusion of testimonial accounts, and readability level less than ninth grade. We also determined Web page comprehensiveness in terms of mention of 14 HPV vaccine-relevant topics. Twenty searches yielded 116 unique Web pages. HPV vaccine-critical Web pages comprised roughly a third of the top-, top 5-, and top 10-ranking Web pages. The prevalence of HPV vaccine-critical Web pages was higher for queries that included term modifiers in addition to root terms. Web pages critical of HPV vaccine had a lower overall quality score than noncritical Web pages (p < .01) and covered fewer important HPV-related topics (p < .001). Critical Web pages required viewers to have higher reading skills, were less likely to include an author byline, and were more likely to include testimonial accounts. They also were more likely to raise unsubstantiated concerns about vaccination. Web pages critical of HPV vaccine may be frequently returned and highly ranked by search engine queries despite being of lower quality and less comprehensive than noncritical Web pages. Copyright © 2016 Society for Adolescent Health and Medicine. Published by Elsevier Inc. All rights reserved.
Educational Accountability and the Need for Comprehensive Evaluation in TAFE.
ERIC Educational Resources Information Center
White, J. L.
This paper seeks to provide a rationale for evaluating Technical and Further Education (TAFE) programs by using a management system approach that is based on corporate planning. The first section reviews the sources of increased demands for accountability in TAFE (societal, economic, government, and legislative sources) and examines various…
40 CFR 74.40 - Establishment of opt-in source allowance accounts.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 16 2011-07-01 2011-07-01 false Establishment of opt-in source allowance accounts. 74.40 Section 74.40 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) SULFUR DIOXIDE OPT-INS Allowance Tracking and Transfer and End of Year...
77 FR 37576 - Treatment of Overall Foreign and Domestic Losses
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-22
... balance in the general category overall foreign loss account or $300 foreign source income in the general... income in the general category is recharacterized as U.S. source income. The balance in Y's general... recharacterizing the balance in any separate limitation loss account under the general recharacterization rule of...
40 CFR 74.40 - Establishment of opt-in source allowance accounts.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 16 2010-07-01 2010-07-01 false Establishment of opt-in source allowance accounts. 74.40 Section 74.40 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) SULFUR DIOXIDE OPT-INS Allowance Tracking and Transfer and End of Year...
40 CFR 74.50 - Deducting opt-in source allowances from ATS accounts.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 16 2010-07-01 2010-07-01 false Deducting opt-in source allowances from ATS accounts. 74.50 Section 74.50 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) SULFUR DIOXIDE OPT-INS Allowance Tracking and Transfer and End of Year...
46 CFR 232.5 - Income Statement Accounts.
Code of Federal Regulations, 2010 CFR
2010-10-01
... ACTIVITIES UNIFORM FINANCIAL REPORTING REQUIREMENTS Income Statement § 232.5 Income Statement Accounts. (a... Expense Accounts. The income and expense accounts shall show for each reporting period the amount of money... accounted for to facilitate reporting the source of revenue by trade route or service area. (iii) All other...
Shear flow of angular grains: acoustic effects and nonmonotonic rate dependence of volume.
Lieou, Charles K C; Elbanna, Ahmed E; Langer, J S; Carlson, J M
2014-09-01
Naturally occurring granular materials often consist of angular particles whose shape and frictional characteristics may have important implications for macroscopic flow rheology. In this paper, we provide a theoretical account of the peculiar phenomenon of autoacoustic compaction, a nonmonotonic variation of shear band volume with shear rate in angular particles, recently observed in experiments. Our approach is based on the notion that the volume of a granular material is determined by an effective-disorder temperature known as the compactivity. Noise sources in a driven granular material couple its various degrees of freedom and the environment, causing the flow of entropy between them. The grain-scale dynamics is described by the shear-transformation-zone theory of granular flow, which accounts for irreversible plastic deformation in terms of localized flow defects whose density is governed by the state of configurational disorder. To model the effects of grain shape and frictional characteristics, we propose an Ising-like internal variable to account for nearest-neighbor grain interlocking and geometric frustration, and we interpret the effect of friction as an acoustic noise strength. We show quantitative agreement between experimental measurements and theoretical predictions, and we propose additional experiments that provide stringent tests of the new theoretical elements.
NASA Astrophysics Data System (ADS)
Li, Zhiyuan; Yuan, Zibing; Li, Ying; Lau, Alexis K. H.; Louie, Peter K. K.
2015-12-01
Atmospheric particulate matter (PM) pollution is a major public health concern in Hong Kong. In this study, the spatiotemporal variations of health risks from ambient PM10 at seven air quality monitoring stations between 2000 and 2011 were analyzed. Positive matrix factorization (PMF) was adopted to identify major source categories of ambient PM10 and quantify their contributions. Afterwards, a point-estimate risk model was used to quantify the inhalation cancer and non-cancer risks of the PM10 sources. The long-term trends of the health risks from classified local and non-local sources were explored, and the reasons for the increase of health risks on high-PM10 days were discussed. Results show that the vehicle exhaust source was the dominant inhalation cancer risk (ICR) contributor (72%), whereas the trace metals and vehicle exhaust sources contributed approximately 27% and 21% of PM10 inhalation non-cancer risk (INCR), respectively. The identified local sources accounted for approximately 80% of the ICR in Hong Kong, while the contributions of the non-local and local sources to INCR are comparable. The clear increase of ICR on high-PM10 days was mainly attributed to increased contributions from coal combustion/biomass burning and secondary sulfate, while the increase of INCR on high-PM10 days was attributed to increased contributions from coal combustion/biomass burning, secondary nitrate, and trace metals. This study highlights the importance of health risk-based source apportionment in air quality management with protecting human health as the ultimate target.
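The point-estimate risk calculation described in this abstract multiplies each source's contributed concentration by a toxicity factor and sums over sources. A minimal sketch of that arithmetic; the source names, concentrations, and unit-risk values below are illustrative placeholders, not the study's data:

```python
# Point-estimate inhalation cancer risk (ICR) from apportioned PM10 sources.
# ICR = contributed concentration (ug/m3) x inhalation unit risk (per ug/m3),
# summed over sources. All numbers below are illustrative, not study data.

SOURCES = {
    # source name: (contributed concentration ug/m3, unit risk per ug/m3)
    "vehicle exhaust": (8.0, 3.0e-6),
    "trace metals": (2.0, 1.0e-6),
    "secondary sulfate": (12.0, 0.0),  # no carcinogenic toxicity factor here
}

def inhalation_cancer_risk(sources):
    """Sum concentration x unit-risk over sources; also return each share."""
    risks = {name: c * ur for name, (c, ur) in sources.items()}
    total = sum(risks.values())
    shares = {name: (r / total if total else 0.0) for name, r in risks.items()}
    return total, shares

total, shares = inhalation_cancer_risk(SOURCES)
print(f"total ICR = {total:.2e}")
for name, s in sorted(shares.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {100 * s:.0f}% of ICR")
```

A non-cancer hazard quotient would follow the same pattern, with reference concentrations in place of unit risks.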
MIXOPTIM: A tool for the evaluation and the optimization of the electricity mix in a territory
NASA Astrophysics Data System (ADS)
Bonin, Bernard; Safa, Henri; Laureau, Axel; Merle-Lucotte, Elsa; Miss, Joachim; Richet, Yann
2014-09-01
This article presents a method for calculating the generation cost of a mixture of electricity sources, by means of a Monte Carlo simulation of the production output that takes into account the fluctuations of the demand and the stochastic nature of the availability of the various power sources that compose the mix. This evaluation shows that, for a given electricity mix, the cost has a non-linear dependence on the demand level. In the second part of the paper, we develop some considerations on the management of intermittency. We develop a method based on spectral decomposition of the imposed power fluctuations to calculate the minimal amount of controlled power sources needed to follow these fluctuations. This can be converted into a viability criterion for the mix, included in the MIXOPTIM software. In the third part of the paper, the MIXOPTIM cost evaluation method is applied to the multi-criteria optimization of the mix according to three main criteria: the cost of the mix; its impact on climate in terms of CO2 production; and the security of supply.
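The Monte Carlo evaluation described above can be sketched in a few lines: draw a demand, draw the availability of each source, dispatch in merit order, and penalize unserved demand. Everything here (source list, capacities, availabilities, costs, penalty) is an illustrative assumption, not MIXOPTIM's data or dispatch logic:

```python
import random

# Minimal Monte Carlo sketch of the generation cost of an electricity mix.
# Each source has a capacity (GW), an availability probability per draw, and
# a marginal cost (EUR/MWh). Demand fluctuates around a mean. Sources are
# dispatched in merit order (cheapest first); unserved energy carries a
# penalty cost. All numbers are illustrative.

SOURCES = [
    # (name, capacity_GW, availability, cost_eur_per_MWh)
    ("nuclear", 40.0, 0.85, 30.0),
    ("wind", 20.0, 0.30, 10.0),
    ("gas", 25.0, 0.95, 80.0),
]
PENALTY = 3000.0  # EUR/MWh assigned to unserved demand

def sample_cost(mean_demand_GW, rng):
    """One Monte Carlo draw of the average cost (EUR/MWh) of serving demand."""
    demand = max(0.0, rng.gauss(mean_demand_GW, 0.1 * mean_demand_GW))
    cost, served = 0.0, 0.0
    for _, cap, avail, c in sorted(SOURCES, key=lambda s: s[3]):
        available = cap if rng.random() < avail else 0.0  # crude on/off outage
        used = min(available, demand - served)
        cost += used * c
        served += used
        if served >= demand:
            break
    cost += (demand - served) * PENALTY
    return cost / demand if demand else 0.0

def mean_mix_cost(mean_demand_GW, n=20000, seed=0):
    rng = random.Random(seed)
    return sum(sample_cost(mean_demand_GW, rng) for _ in range(n)) / n

for d in (50.0, 70.0, 85.0):
    print(f"demand {d} GW -> mean cost {mean_mix_cost(d):.1f} EUR/MWh")
```

Running the loop over several demand levels illustrates, qualitatively, the non-linear dependence of cost on demand: shortfall penalties come to dominate as demand approaches total available capacity.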
Long-term decline of global atmospheric ethane concentrations and implications for methane.
Simpson, Isobel J; Sulbaek Andersen, Mads P; Meinardi, Simone; Bruhwiler, Lori; Blake, Nicola J; Helmig, Detlev; Rowland, F Sherwood; Blake, Donald R
2012-08-23
After methane, ethane is the most abundant hydrocarbon in the remote atmosphere. It is a precursor to tropospheric ozone and it influences the atmosphere's oxidative capacity through its reaction with the hydroxyl radical, ethane's primary atmospheric sink. Here we present the longest continuous record of global atmospheric ethane levels. We show that global ethane emission rates decreased from 14.3 to 11.3 teragrams per year, or by 21 per cent, from 1984 to 2010. We attribute this to decreasing fugitive emissions from ethane's fossil fuel source--most probably decreased venting and flaring of natural gas in oil fields--rather than a decline in its other major sources, biofuel use and biomass burning. Ethane's major emission sources are shared with methane, and recent studies have disagreed on whether reduced fossil fuel or microbial emissions have caused methane's atmospheric growth rate to slow. Our findings suggest that reduced fugitive fossil fuel emissions account for at least 10-21 teragrams per year (30-70 per cent) of the decrease in methane's global emissions, significantly contributing to methane's slowing atmospheric growth rate since the mid-1980s.
Permeable Surface Corrections for Ffowcs Williams and Hawkings Integrals
NASA Technical Reports Server (NTRS)
Lockard, David P.; Casper, Jay H.
2005-01-01
The acoustic prediction methodology discussed herein applies an acoustic analogy to calculate the sound generated by sources in an aerodynamic simulation. Sound is propagated from the computed flow field by integrating the Ffowcs Williams and Hawkings equation on a suitable control surface. Previous research suggests that, for some applications, the integration surface must be placed away from the solid surface to incorporate source contributions from within the flow volume. As such, the fluid mechanisms in the input flow field that contribute to the far-field noise are accounted for by their mathematical projection as a distribution of source terms on a permeable surface. The passage of nonacoustic disturbances through such an integration surface can result in significant error in an acoustic calculation. A correction for the error is derived in the frequency domain using a frozen gust assumption. The correction is found to work reasonably well in several test cases where the error is a small fraction of the actual radiated noise. However, satisfactory agreement has not yet been obtained for noise predictions based on the solution from a three-dimensional, detached-eddy simulation of flow over a cylinder.
NASA Astrophysics Data System (ADS)
Zhang, Hongliang; Li, Jingyi; Ying, Qi; Guven, Birnur Buzcu; Olaguer, Eduardo P.
2013-02-01
In this study, a source-oriented version of the Community Multiscale Air Quality (CMAQ) model was developed and used to quantify the contributions of five major local emission source types in Southeast Texas (vehicles, industry, natural gas combustion, wildfires, biogenic sources), as well as upwind sources, to regional primary and secondary formaldehyde (HCHO) concentrations. Predicted HCHO concentrations agree well with observations at two urban sites (the Moody Tower [MT] site at the University of Houston and the Haden Road #3 [HRM-3] site operated by the Texas Commission on Environmental Quality). However, the model underestimates concentrations at an industrial site (Lynchburg Ferry). Throughout most of Southeast Texas, primary HCHO accounts for approximately 20-30% of total HCHO, while the remaining portion is due to secondary HCHO (30-50%) and upwind sources (20-50%). Biogenic sources, natural gas combustion, and vehicles are important sources of primary HCHO in the urban Houston area, accounting for 10-20%, 10-30%, and 20-60% of total primary HCHO, respectively. Biogenic sources, industry, and vehicles are the top three sources of secondary HCHO, accounting for 30-50%, 10-30%, and 5-15% of overall secondary HCHO, respectively. It was also found that over 70% of PAN in the Houston area is due to upwind sources, and only 30% is formed locally. The model-predicted source contributions to HCHO at the MT site generally agree with source apportionment results obtained from the Positive Matrix Factorization (PMF) technique.
Hawking, Michael
2016-06-01
In the aftermath of the Kermit Gosnell trial and Giubilini and Minerva's article 'After-birth abortion', abortion-rights advocates have been pressured to provide an account of the moral difference between abortion, particularly late-term abortion, and infanticide. In response, some scholars have defended a moral distinction by appealing to an argument developed by Judith Jarvis Thomson in A defense of abortion. However, once Thomson's analogy is refined to account for the morally relevant features of late-term pregnancy, rather than distinguishing between late-term abortion and infanticide, it reinforces their moral similarity. This is because late-term abortion requires more than detachment - it requires an act of feticide to ensure the death of the viable fetus. As such, a Thomsonian account cannot be deployed successfully as a response to Giubilini and Minerva. Those wishing to defend late-term abortion while rejecting the permissibility of infanticide will need to provide an alternative account of the difference, or else accept Giubilini and Minerva's conclusion. © 2015 John Wiley & Sons Ltd.
76 FR 53378 - Cost Accounting Standards: Accounting for Insurance Costs
Federal Register 2010, 2011, 2012, 2013, 2014
2011-08-26
... Accounting Standards: Accounting for Insurance Costs AGENCY: Cost Accounting Standards Board (Board), Office... Discontinuation of Rulemaking. SUMMARY: The Office of Federal Procurement Policy (OFPP), Cost Accounting Standards... development of an amendment to Cost Accounting Standard (CAS) 416 regarding the use of the term ``catastrophic...
A Model for the Sources of the Slow Solar Wind
NASA Technical Reports Server (NTRS)
Antiochos, Spiro K.; Mikic, Z.; Titov, V. S.; Lionello, R.; Linker, J. A.
2010-01-01
Models for the origin of the slow solar wind must account for two seemingly contradictory observations: The slow wind has the composition of the closed-field corona, implying that it originates from the continuous opening and closing of flux at the boundary between open and closed field. On the other hand, the slow wind has large angular width, up to approximately 60 degrees, suggesting that its source extends far from the open-closed boundary. We propose a model that can explain both observations. The key idea is that the source of the slow wind at the Sun is a network of narrow (possibly singular) open-field corridors that map to a web of separatrices and quasi-separatrix layers in the heliosphere. We compute analytically the topology of an open-field corridor and show that it produces a quasi-separatrix layer in the heliosphere that extends to angles far from the heliospheric current sheet. We then use an MHD code and MDI/SOHO observations of the photospheric magnetic field to calculate numerically, with high spatial resolution, the quasi-steady solar wind and magnetic field for a time period preceding the August 1, 2008 total solar eclipse. Our numerical results imply that, at least for this time period, a web of separatrices (which we term an S-web) forms with sufficient density and extent in the heliosphere to account for the observed properties of the slow wind. We discuss the implications of our S-web model for the structure and dynamics of the corona and heliosphere, and propose further tests of the model.
A Strong Shallow Heat Source in the Accreting Neutron Star MAXI J0556-332
NASA Astrophysics Data System (ADS)
Deibel, Alex; Cumming, Andrew; Brown, Edward F.; Page, Dany
2015-08-01
An accretion outburst in an X-ray transient deposits material onto the neutron star primary; this accumulation of matter induces reactions in the neutron star's crust. During the accretion outburst these reactions heat the crust out of thermal equilibrium with the core. When accretion halts, the crust cools to its long-term equilibrium temperature on observable timescales. Here we examine the accreting neutron star transient MAXI J0556-332, which is the hottest transient, at the start of quiescence, observed to date. Models of the quiescent light curve require a large deposition of heat in the shallow outer crust from an unknown source. The additional heat injected is ≈4-10 MeV per accreted nucleon; when the observed decline in accretion rate at the end of the outburst is accounted for, the required heating increases to ≈6-16 MeV. This shallow heating is still required to fit the light curve even after taking into account a second accretion episode, uncertainties in distance, and different surface gravities. The amount of shallow heating is larger than that inferred for other neutron star transients and is larger than can be supplied by nuclear reactions or compositionally driven convection; but it is consistent with stored mechanical energy in the accretion disk. The high crust temperature (T_b ≳ 10^9 K) makes its cooling behavior in quiescence largely independent of the crust composition and envelope properties, so that future observations will probe the gravity of the source. Fits to the light curve disfavor the presence of Urca cooling pairs in the crust.
A Model for the Sources of the Slow Solar Wind
NASA Astrophysics Data System (ADS)
Antiochos, S. K.; Mikić, Z.; Titov, V. S.; Lionello, R.; Linker, J. A.
2011-04-01
Models for the origin of the slow solar wind must account for two seemingly contradictory observations: the slow wind has the composition of the closed-field corona, implying that it originates from the continuous opening and closing of flux at the boundary between open and closed field. On the other hand, the slow wind also has large angular width, up to ~60°, suggesting that its source extends far from the open-closed boundary. We propose a model that can explain both observations. The key idea is that the source of the slow wind at the Sun is a network of narrow (possibly singular) open-field corridors that map to a web of separatrices and quasi-separatrix layers in the heliosphere. We compute analytically the topology of an open-field corridor and show that it produces a quasi-separatrix layer in the heliosphere that extends to angles far from the heliospheric current sheet. We then use an MHD code and MDI/SOHO observations of the photospheric magnetic field to calculate numerically, with high spatial resolution, the quasi-steady solar wind and magnetic field for a time period preceding the 2008 August 1 total solar eclipse. Our numerical results imply that, at least for this time period, a web of separatrices (which we term an S-web) forms with sufficient density and extent in the heliosphere to account for the observed properties of the slow wind. We discuss the implications of our S-web model for the structure and dynamics of the corona and heliosphere and propose further tests of the model.
Kandler, Christian; Riemann, Rainer; Angleitner, Alois; Spinath, Frank M; Borkenau, Peter; Penke, Lars
2016-08-01
This multitrait multimethod twin study examined the structure and sources of individual differences in creativity. According to different theoretical and metrological perspectives, as well as suggestions based on previous research, we expected 2 aspects of individual differences, which can be described as perceived creativity and creative test performance. We hypothesized that perceived creativity, reflecting typical creative thinking and behavior, should be linked to specific personality traits, whereas test creativity, reflecting maximum task-related creative performance, should show specific associations with cognitive abilities. Moreover, we tested whether genetic variance in intelligence and personality traits account for the genetic component of creativity. Multiple-rater and multimethod data (self- and peer reports, observer ratings, and test scores) from 2 German twin studies-the Bielefeld Longitudinal Study of Adult Twins and the German Observational Study of Adult Twins-were analyzed. Confirmatory factor analyses yielded the expected 2 correlated aspects of creativity. Perceived creativity showed links to openness to experience and extraversion, whereas tested figural creativity was associated with intelligence and also with openness. Multivariate behavioral genetic analyses indicated that the heritability of tested figural creativity could be accounted for by the genetic component of intelligence and openness, whereas a substantial genetic component in perceived creativity could not be explained. A primary source of individual differences in creativity was due to environmental influences, even after controlling for random error and method variance. The findings are discussed in terms of the multifaceted nature and construct validity of creativity as an individual characteristic. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Medium term hurricane catastrophe models: a validation experiment
NASA Astrophysics Data System (ADS)
Bonazzi, Alessandro; Turner, Jessica; Dobbin, Alison; Wilson, Paul; Mitas, Christos; Bellone, Enrica
2013-04-01
Climate variability is a major source of uncertainty for the insurance industry underwriting hurricane risk. Catastrophe models provide their users with a stochastic set of events that expands the scope of the historical catalogue by including synthetic events that are likely to happen in a defined time-frame. The use of these catastrophe models is widespread in the insurance industry but it is only in recent years that climate variability has been explicitly accounted for. In the insurance parlance "medium term catastrophe model" refers to products that provide an adjusted view of risk that is meant to represent hurricane activity on a 1 to 5 year horizon, as opposed to long term models that integrate across the climate variability of the longest available time series of observations. In this presentation we discuss how a simple reinsurance program can be used to assess the value of medium term catastrophe models. We elaborate on similar concepts as discussed in "Potential Economic Value of Seasonal Hurricane Forecasts" by Emanuel et al. (2012, WCAS) and provide an example based on 24 years of historical data of the Chicago Mercantile Hurricane Index (CHI), an insured loss proxy. Profit and loss volatility of a hypothetical primary insurer is used to score medium term models versus their long term counterpart. Results show that medium term catastrophe models could help a hypothetical primary insurer to improve their financial resiliency to varying climate conditions.
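The scoring idea, judging a model by the profit-and-loss volatility it induces for a hypothetical primary insurer, can be sketched as follows. The loss series and both model views are invented for illustration (they are not CHI data), and the premium rule (predicted loss times a fixed loading) is an assumption:

```python
import statistics

# Score two catastrophe-model views of risk by the profit-and-loss (P&L)
# volatility of a hypothetical primary insurer. The insurer charges a
# premium equal to the model's predicted annual loss times a loading;
# realized P&L each year is premium minus the actual loss. A model whose
# predictions track actual activity yields a steadier P&L, hence a lower
# volatility score. All loss figures are illustrative.

actual = [10, 80, 15, 120, 20, 90, 12, 60]        # realized annual losses
long_term = [50] * len(actual)                    # static long-term view
medium_term = [15, 70, 20, 110, 25, 80, 15, 55]   # year-ahead adjusted view

def pnl_volatility(predicted, actual_losses, loading=1.1):
    """Standard deviation of the insurer's annual P&L under a model view."""
    pnl = [loading * p - a for p, a in zip(predicted, actual_losses)]
    return statistics.pstdev(pnl)

print("long-term   P&L volatility:", round(pnl_volatility(long_term, actual), 1))
print("medium-term P&L volatility:", round(pnl_volatility(medium_term, actual), 1))
```

In this toy setup the medium-term view scores better because its premiums rise and fall with the activity it anticipates, smoothing the insurer's results.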
The Meta Language of Accounting: What's the Level of Students' Understanding?
ERIC Educational Resources Information Center
Elson, Raymond J.; O'Callaghan, Susanne; Walker, John P.; Williams, Robert
2013-01-01
Students rely on rote knowledge to learn accounting concepts. However, this approach does not allow them to understand the meta language of accounting. Meta language is simply the concepts and terms that are used in a profession and are easily understood by its users. Terms such as equity, assets, and balance sheet are part of the accounting…
Code of Federal Regulations, 2014 CFR
2014-07-01
... education, in scientific, professional, technical, mechanical, trade, clerical, fiscal, administrative, or... Data Elements for Federal Travel [Accounting & Certification] Group name Data elements Description Accounting Classification Accounting Code Agency accounting code. Non-Federal Source Indicator Per Diem...
Accuracy-preserving source term quadrature for third-order edge-based discretization
NASA Astrophysics Data System (ADS)
Nishikawa, Hiroaki; Liu, Yi
2017-09-01
In this paper, we derive a family of source term quadrature formulas for preserving third-order accuracy of the node-centered edge-based discretization for conservation laws with source terms on arbitrary simplex grids. A three-parameter family of source term quadrature formulas is derived, and as a subset, a one-parameter family of economical formulas is identified that does not require second derivatives of the source term. Among the economical formulas, a unique formula is then derived that does not require gradients of the source term at neighbor nodes, thus leading to a significantly smaller discretization stencil for source terms. All the formulas derived in this paper do not require a boundary closure, and therefore can be directly applied at boundary nodes. Numerical results are presented to demonstrate third-order accuracy at interior and boundary nodes for one-dimensional grids and linear triangular/tetrahedral grids over straight and curved geometries.
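Third-order accuracy claims of this kind are conventionally verified by a grid-refinement study: if the error behaves as e ≈ C·h^p, the observed order between two grids is p = log(e1/e2)/log(h1/h2). A generic sketch with synthetic, exactly third-order errors (not the paper's results):

```python
import math

# Observed order of accuracy from a grid-refinement study.
# If error ~ C * h**p, then p = log(e_coarse/e_fine) / log(h_coarse/h_fine).
# The error values below are synthetic, constructed to be exactly third order.

def observed_order(h_coarse, e_coarse, h_fine, e_fine):
    """Estimate the convergence order p from two (grid spacing, error) pairs."""
    return math.log(e_coarse / e_fine) / math.log(h_coarse / h_fine)

# Synthetic third-order errors: e = 2.0 * h**3 on successively halved grids
grids = [(0.1, 2.0 * 0.1**3), (0.05, 2.0 * 0.05**3), (0.025, 2.0 * 0.025**3)]
for (h1, e1), (h2, e2) in zip(grids, grids[1:]):
    print(f"h {h1} -> {h2}: observed order = {observed_order(h1, e1, h2, e2):.2f}")
```

In practice the error is measured against an exact or manufactured solution on each grid; an observed order near 3 at interior and boundary nodes is what supports the paper's accuracy claim.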
NASA Astrophysics Data System (ADS)
Medellin-Azuara, J.; Fraga, C. C. S.; Marques, G.; Mendes, C. A.
2015-12-01
The expansion and operation of urban water supply systems under rapidly growing demands, hydrologic uncertainty, and scarce water supplies require a strategic combination of supply sources for added reliability, reduced costs, and improved operational flexibility. The design and operation of such a portfolio of water supply sources involve decisions about what to expand, when to expand it, and how much to use of each available source, accounting for interest rates, economies of scale, and hydrologic variability. The present research provides a framework and an integrated methodology that optimizes the expansion of various water supply alternatives using dynamic programming, combining short-term and long-term optimization of water use with simulation of water allocation. A case study in Bahia Do Rio Dos Sinos in Southern Brazil is presented. The framework couples an optimization model with quadratic programming in GAMS with WEAP, a rainfall-runoff simulation model that hosts the water supply infrastructure features and hydrologic conditions. Results allow (a) identification of trade-offs between cost and reliability of different expansion paths and water use decisions and (b) evaluation of potential gains from reducing water system losses as a portfolio component. The latter is critical in several developing countries, where water supply system losses are high and often neglected in favor of further system expansion. Results also highlight the potential of various water supply alternatives, including conservation, groundwater, and infrastructural enhancements over time. The framework demonstrates its usefulness for planning and its transferability to similarly urbanized systems.
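The staged expansion decision, what to build and when, given discounting and economies of scale, is a natural fit for dynamic programming over (period, installed capacity) states. A toy sketch with invented demands, build options, and a concave cost curve (none of this is the case-study data):

```python
from functools import lru_cache

# Minimal dynamic-programming sketch of staged water-supply expansion:
# choose which capacity increment (if any) to build each period so that
# capacity covers demand, minimizing discounted build cost. A concave cost
# curve captures economies of scale: one large expansion is cheaper per
# unit than several small ones. All numbers are illustrative.

DEMAND = [80, 95, 115, 130]     # demand per period (ML/day)
OPTIONS = [0, 20, 50]           # capacity increments buildable per period
RATE = 0.05                     # discount rate per period

def build_cost(increment):
    # concave: fixed cost plus sub-linear variable cost (economies of scale)
    return 0.0 if increment == 0 else 10.0 + 1.5 * increment ** 0.9

@lru_cache(maxsize=None)
def best(period, capacity):
    """Minimum discounted cost to satisfy demand from `period` onward."""
    if period == len(DEMAND):
        return 0.0
    best_cost = float("inf")
    for inc in OPTIONS:
        new_cap = capacity + inc
        if new_cap < DEMAND[period]:
            continue  # infeasible: demand unmet this period
        cost = build_cost(inc) / (1 + RATE) ** period + best(period + 1, new_cap)
        best_cost = min(best_cost, cost)
    return best_cost

print(f"minimum discounted expansion cost: {best(0, 70):.1f}")
```

The same state space extends naturally to hydrologic scenarios (capacity becomes a portfolio vector and stage cost includes operation), which is where coupling to a simulation model such as WEAP comes in.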
The impact of nonuniform sampling on stratospheric ozone trends derived from occultation instruments
NASA Astrophysics Data System (ADS)
Damadeo, Robert P.; Zawodny, Joseph M.; Remsberg, Ellis E.; Walker, Kaley A.
2018-01-01
This paper applies a recently developed technique for deriving long-term trends in ozone from sparsely sampled data sets to multiple occultation instruments simultaneously without the need for homogenization. The technique can compensate for the nonuniform temporal, spatial, and diurnal sampling of the different instruments and can also be used to account for biases and drifts between instruments. These problems have been noted in recent international assessments as being a primary source of uncertainty that clouds the significance of derived trends. Results show potential recovery
trends of ~2-3 % per decade in the upper stratosphere at midlatitudes, which are similar to other studies, and also how sampling biases present in these data sets can create differences in derived recovery trends of up to ~1 % per decade if not properly accounted for. Limitations inherent to all techniques (e.g., relative instrument drifts) and their impacts (e.g., trend differences up to ~2 % per decade) are also described and a potential path forward towards resolution is presented.
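The core of the regression technique, fitting one shared trend while absorbing inter-instrument biases into separate offset terms, can be sketched with ordinary least squares on synthetic data (the trend, bias, and noise values below are invented):

```python
import numpy as np

# Fit a shared linear trend to two instruments' records while estimating a
# per-instrument offset, so that an inter-instrument bias does not alias
# into the derived trend. Synthetic data: true trend 2.0 units per decade,
# instrument B biased high by 1.5 units, records overlapping in time.

rng = np.random.default_rng(0)
t_a = np.linspace(0.0, 1.5, 60)   # time in decades, instrument A
t_b = np.linspace(1.0, 2.5, 60)   # instrument B, overlapping A's record
true_trend = 2.0
y_a = 100.0 + true_trend * t_a + rng.normal(0, 0.2, t_a.size)
y_b = 100.0 + true_trend * t_b + 1.5 + rng.normal(0, 0.2, t_b.size)

# Design matrix columns: [time, intercept for A, intercept for B]
t = np.concatenate([t_a, t_b])
is_b = np.concatenate([np.zeros(t_a.size), np.ones(t_b.size)])
X = np.column_stack([t, 1.0 - is_b, is_b])
y = np.concatenate([y_a, y_b])

trend, c_a, c_b = np.linalg.lstsq(X, y, rcond=None)[0]
print(f"fitted trend = {trend:.2f} per decade, instrument bias = {c_b - c_a:.2f}")
```

The offsets are only identifiable because the records overlap; without overlap, a relative drift or bias between instruments leaks directly into the trend, which is the limitation the paper discusses.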
Radiative interactions in chemically reacting supersonic internal flows
NASA Technical Reports Server (NTRS)
Tiwari, S. N.; Chandrasekhar, R.
1991-01-01
The two-dimensional, elliptic Navier-Stokes equations are used to investigate supersonic flows with finite-rate chemistry and radiation for hydrogen-air systems. The chemistry source terms in the species equations are treated implicitly to alleviate the stiffness associated with fast reactions. The explicit, unsplit MacCormack finite-difference scheme is used to advance the governing equations in time, until convergence is achieved. The specific problem considered is the premixed flow in a channel with a ten-degree compression ramp. Three different chemistry models are used, accounting for an increasing number of reactions and participating species. Two of the chemistry models assume nitrogen is inert, while the third model accounts for nitrogen reactions and NO(x) formation. The tangent slab approximation is used in the radiative flux formulation. A pseudo-gray model is used to represent the absorption-emission characteristics of the participating species. Results obtained for specific conditions indicate that the radiative interactions vary substantially, depending on reactions involving HO2 and NO species, and that this can have a significant influence on the flowfield.
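The motivation for treating stiff chemistry source terms implicitly can be illustrated with a scalar model problem: for dy/dt = -k·y with large k, forward Euler diverges once k·dt exceeds 2, while backward Euler remains stable at any step size. This is a generic illustration, not the paper's MacCormack/implicit-chemistry scheme:

```python
# Stability of explicit vs implicit time stepping for a stiff (fast-reaction)
# source term dy/dt = -k * y with large k. Forward Euler diverges when
# dt > 2/k; backward Euler is unconditionally stable. Generic illustration.

def forward_euler(y0, k, dt, steps):
    y = y0
    for _ in range(steps):
        y = y + dt * (-k * y)           # explicit update: y *= (1 - k*dt)
    return y

def backward_euler(y0, k, dt, steps):
    y = y0
    for _ in range(steps):
        # solve y_new = y + dt * (-k * y_new)  =>  y_new = y / (1 + k*dt)
        y = y / (1.0 + k * dt)
    return y

k, dt, steps = 1000.0, 0.01, 50          # k*dt = 10: far beyond explicit limit
print("forward Euler :", forward_euler(1.0, k, dt, steps))
print("backward Euler:", backward_euler(1.0, k, dt, steps))
```

With k·dt = 10, the explicit iterate grows by a factor of 9 in magnitude each step while the implicit one decays toward the true solution, which is why stiff source terms are segregated for implicit treatment.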
Going for the gold. Models of agency in Japanese and American contexts.
Markus, Hazel Rose; Uchida, Yukiko; Omoregie, Heather; Townsend, Sarah S M; Kitayama, Shinobu
2006-02-01
Two studies examined how Olympic performance is explained in American and Japanese contexts. Study 1, an analysis of media coverage of the 2000 and 2002 Olympics, shows that in both Japanese and American contexts, performance is construed mainly in terms of the actions of persons. However, Japanese and American accounts differ in their explanations of the nature and source of intentional agency, that is, in their models of agency. In Japanese contexts, agency is construed as conjoint and simultaneously implicates athletes' personal attributes (both positive and negative), background, and social and emotional experience. In American contexts, agency is construed as disjoint, separate from athletes' background or social and emotional experience; performance is explained primarily through positive personal characteristics and features of the competition. Study 2, in which participants chose information to be included in an athlete's description, confirms these findings. Differences in the construction of agency are reflected in and fostered by common cultural products (e.g., television accounts).
CONVECTIVE BABCOCK-LEIGHTON DYNAMO MODELS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miesch, Mark S.; Brown, Benjamin P., E-mail: miesch@ucar.edu
We present the first global, three-dimensional simulations of solar/stellar convection that take into account the influence of magnetic flux emergence by means of the Babcock-Leighton (BL) mechanism. We have shown that the inclusion of a BL poloidal source term in a convection simulation can promote cyclic activity in an otherwise steady dynamo. Some cycle properties are reminiscent of solar observations, such as the equatorward propagation of toroidal flux near the base of the convection zone. However, the cycle period in this young sun (rotating three times faster than the solar rate) is very short (~6 months) and it is unclear whether much longer cycles may be achieved within this modeling framework, given the high efficiency of field generation and transport by the convection. Even so, the incorporation of mean-field parameterizations in three-dimensional convection simulations to account for elusive processes such as flux emergence may well prove useful in the future modeling of solar and stellar activity cycles.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Paul L. Wichlacz
2003-09-01
This source-term summary document is intended to describe the current understanding of contaminant source terms and the conceptual model for potential source-term release to the environment at the Idaho National Engineering and Environmental Laboratory (INEEL), as presented in published INEEL reports. The document presents a generalized conceptual model of the sources of contamination and describes the general categories of source terms, primary waste forms, and factors that affect the release of contaminants from the waste form into the vadose zone and Snake River Plain Aquifer. Where the information has previously been published and is readily available, summaries of the inventory of contaminants are also included. Uncertainties that affect the estimation of source-term release are also discussed where they have been identified by the Source Term Technical Advisory Group. Areas in which additional information is needed (i.e., research needs) are also identified.
31 CFR 1010.620 - Due diligence programs for private banking accounts.
Code of Federal Regulations, 2013 CFR
2013-07-01
... section is a senior foreign political figure; (3) Ascertain the source(s) of funds deposited into a... requirements for senior foreign political figures. (1) In the case of a private banking account for which a senior foreign political figure is a nominal or beneficial owner, the due diligence program required by...
31 CFR 1010.620 - Due diligence programs for private banking accounts.
Code of Federal Regulations, 2012 CFR
2012-07-01
... section is a senior foreign political figure; (3) Ascertain the source(s) of funds deposited into a... requirements for senior foreign political figures. (1) In the case of a private banking account for which a senior foreign political figure is a nominal or beneficial owner, the due diligence program required by...
31 CFR 1010.620 - Due diligence programs for private banking accounts.
Code of Federal Regulations, 2014 CFR
2014-07-01
... section is a senior foreign political figure; (3) Ascertain the source(s) of funds deposited into a... requirements for senior foreign political figures. (1) In the case of a private banking account for which a senior foreign political figure is a nominal or beneficial owner, the due diligence program required by...
Parazoo, Nicholas C.; Koven, Charles D.; Lawrence, David M.; ...
2018-01-12
Thaw and release of permafrost carbon (C) due to climate change is likely to offset increased vegetation C uptake in northern high-latitude (NHL) terrestrial ecosystems. Models project that this permafrost C feedback may act as a slow leak, in which case detection and attribution of the feedback may be difficult. The formation of talik, a subsurface layer of perennially thawed soil, can accelerate permafrost degradation and soil respiration, ultimately shifting the C balance of permafrost-affected ecosystems from long-term C sinks to long-term C sources. It is imperative to understand and characterize mechanistic links between talik, permafrost thaw, and respiration of deep soil C to detect and quantify the permafrost C feedback. Here, we use the Community Land Model (CLM) version 4.5, a permafrost and biogeochemistry model, in comparison to long-term deep borehole data along North American and Siberian transects, to investigate thaw-driven C sources in NHL (>55°N) from 2000 to 2300. Widespread talik at depth is projected across most of the NHL permafrost region (14 million km²) by 2300, 6.2 million km² of which is projected to become a long-term C source, emitting 10 Pg C by 2100, 50 Pg C by 2200, and 120 Pg C by 2300, with few signs of slowing. Roughly half of the projected C source region is in predominantly warm sub-Arctic permafrost following talik onset. This region emits only 20 Pg C by 2300, but the CLM4.5 estimate may be biased low by not accounting for deep C in yedoma. Accelerated decomposition of deep soil C following talik onset shifts the ecosystem C balance away from surface-dominant processes (photosynthesis and litter respiration), but sink-to-source transition dates are delayed by 20–200 years by high ecosystem productivity, such that talik peaks early (~2050s, although borehole data suggest sooner) and C source transition peaks late (~2150–2200).
The remaining C source region in cold northern Arctic permafrost, which shifts to a net source early (late 21st century), emits 5 times more C (95 Pg C) by 2300, and prior to talik formation, due to the high decomposition rates of shallow, young C in organic-rich soils coupled with low productivity. Our results provide important clues signaling imminent talik onset and C source transition, including (1) late cold-season (January–February) soil warming at depth (~2 m), (2) increasing cold-season emissions (November–April), and (3) enhanced respiration of deep, old C in warm permafrost and young, shallow C in organic-rich cold permafrost soils. Our results suggest a mosaic of processes that govern carbon source-to-sink transitions at high latitudes and emphasize the urgency of monitoring soil thermal profiles, organic C age and content, cold-season CO2 emissions, and atmospheric 14CO2 as key indicators of the permafrost C feedback.
Development of episodic and autobiographical memory: The importance of remembering forgetting
Bauer, Patricia J.
2015-01-01
Some memories of the events of our lives have a long shelf-life: they remain accessible to recollection even after long delays. Yet many of our other experiences are forgotten, sometimes very soon after they take place. In spite of the prevalence of forgetting, theories of the development of episodic and autobiographical memory largely ignore it as a potential source of variance in explaining age-related variability in long-term recall. They focus instead on what may be viewed as positive developmental changes, that is, changes that result in improvements in the quality of the memory representations that are formed. The purpose of this review is to highlight the role of forgetting as an important variable in understanding the development of episodic and autobiographical memory. Forgetting processes are implicated as a source of variability in long-term recall due to the protracted course of development of the neural substrate responsible for transforming fleeting experiences into memory traces that can be integrated into long-term stores and retrieved at later points in time. It is logical to assume that while the substrate is developing, neural processing is relatively inefficient and ineffective, resulting in loss of information from memory (i.e., forgetting). For this reason, a focus on developmental increases in the quality of representations of past events and experiences will tell only part of the story of how memory develops. A more complete account is afforded when we also consider changes in forgetting. PMID:26644633
NASA Astrophysics Data System (ADS)
Pan, Zhiyuan; Liu, Li
2018-02-01
In this paper, we extend the GARCH-MIDAS model proposed by Engle et al. (2013) to account for the leverage effect in the short-term and long-term volatility components. Our in-sample evidence suggests that both short-term and long-term negative returns can cause higher future volatility than positive returns. Out-of-sample results show that the predictive ability of GARCH-MIDAS is significantly improved after taking the leverage effect into account. The leverage effect for the short-term volatility component plays a more important role than the leverage effect for the long-term volatility component in affecting out-of-sample forecasting performance.
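The leverage asymmetry described above can be sketched with a GJR-type variance recursion, a common way to let negative returns raise future volatility more than positive ones. This is a minimal stand-in for the short-term component only, with illustrative parameter values, not the authors' GARCH-MIDAS specification:

```python
import numpy as np

# Minimal sketch of a leverage (asymmetric) GARCH recursion: negative
# returns add an extra gamma term to next-period variance. Parameters
# (omega, alpha, gamma, beta) are illustrative, not fitted estimates.
def gjr_garch_variance(returns, omega=0.05, alpha=0.05, gamma=0.10, beta=0.85):
    h = np.empty(len(returns) + 1)
    h[0] = omega / (1.0 - alpha - 0.5 * gamma - beta)  # unconditional level
    for t, r in enumerate(returns):
        leverage = gamma * r * r if r < 0 else 0.0     # fires only on bad news
        h[t + 1] = omega + alpha * r * r + leverage + beta * h[t]
    return h

# Identical shock sizes, opposite signs: the negative shock implies
# higher next-period variance.
h_neg = gjr_garch_variance(np.array([-2.0]))
h_pos = gjr_garch_variance(np.array([+2.0]))
print(h_neg[-1] > h_pos[-1])  # True: the leverage effect
```

In the full GARCH-MIDAS setting, this short-term recursion would be multiplied by a slowly varying long-term component built from monthly realized volatility.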
Dancing Bees Improve Colony Foraging Success as Long-Term Benefits Outweigh Short-Term Costs
Schürch, Roger; Grüter, Christoph
2014-01-01
Waggle dancing bees provide nestmates with spatial information about high quality resources. Surprisingly, attempts to quantify the benefits of this encoded spatial information have failed to find positive effects on colony foraging success under many ecological circumstances. Experimental designs have often involved measuring the foraging success of colonies that were repeatedly switched between oriented dances versus disoriented dances (i.e. communicating vectors versus not communicating vectors). However, if recruited bees continue to visit profitable food sources for more than one day, this procedure would lead to confounded results because of the long-term effects of successful recruitment events. Using agent-based simulations, we found that spatial information was beneficial in almost all ecological situations. Contrary to common belief, the benefits of recruitment increased with environmental stability because benefits can accumulate over time to outweigh the short-term costs of recruitment. Furthermore, we found that in simulations mimicking previous experiments, the benefits of communication were considerably underestimated (at low food density) or not detected at all (at medium and high densities). Our results suggest that the benefits of waggle dance communication are currently underestimated and that different experimental designs, which account for potential long-term benefits, are needed to measure empirically how spatial information affects colony foraging success. PMID:25141306
Background PM2.5 source apportionment in the remote Northwestern United States
NASA Astrophysics Data System (ADS)
Hadley, Odelle L.
2017-10-01
This study used the Environmental Protection Agency's positive matrix factorization model (EPA PMF5.0) to identify five primary source factors contributing to the ambient PM2.5 concentrations at Cheeka Peak Atmospheric Observatory (CPO), Neah Bay WA between January 2011 and December 2014. CPO is home to both an IMPROVE (Interagency Monitoring for Protected Visual Environments) and an NCore multi-pollutant monitoring site. Chemically resolved particulate data from the IMPROVE site were the input to EPA PMF5.0, and the resulting source factors were derived solely from these data. Solutions from the model were analyzed in context with trace gas and meteorological data collected at the NCore site located roughly 10 m away. Seasonal and long-term trends were analyzed for all five factors and provide the first complete source apportionment analysis of PM2.5 at this remote location. The first factor, identified as marine-traffic residual fuel oil (RFO), was the highest contributor to PM2.5 during late summer. Over the 4-year analysis, the RFO percent contribution to total PM2.5 declined. This is consistent with previous studies and may be attributed to regulations restricting the sulfur content of ship fuel. Biomass combustion emissions (BMC) and sea salt were the largest PM2.5 sources observed at CPO in winter, accounting for over 80% of the fine particulate. BMC accounted for a large percent of the fine particulate pollution when winds were easterly, or continental. Sea salt was the dominant winter factor when winds blew from the west. Measured trace carbon monoxide (CO) and reactive nitrogen species (NOy) were most strongly correlated with the BMC factor and continental winds. The fourth factor was identified as aged crustal material, or dust. In all three years, dust peaked in the spring and was associated exclusively with north-easterly winds. 
The last factor was identified as aged sea salt mixed with nitrate, sulfate, and other components common to RFO and BMC source factors. It did not exhibit a strong seasonal cycle or dependence on wind direction.
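The factorization underlying receptor models such as PMF expresses the measured sample-by-species matrix X as a product of nonnegative factor contributions G and chemical profiles F. Real PMF additionally weights residuals by per-measurement uncertainties; the sketch below uses plain nonnegative matrix factorization (Lee-Seung multiplicative updates) on synthetic data as a simplified stand-in:

```python
import numpy as np

# Sketch of the nonnegative factorization behind receptor models:
#   X (samples x species) ~ G (samples x factors) @ F (factors x species).
# Plain NMF via multiplicative updates; real PMF also weights residuals
# by measurement uncertainty, which is omitted here for brevity.
def nmf(X, n_factors, n_iter=500, seed=0):
    rng = np.random.default_rng(seed)
    n, m = X.shape
    G = rng.random((n, n_factors))
    F = rng.random((n_factors, m))
    eps = 1e-9  # guards against division by zero
    for _ in range(n_iter):
        F *= (G.T @ X) / (G.T @ G @ F + eps)
        G *= (X @ F.T) / (G @ F @ F.T + eps)
    return G, F

rng = np.random.default_rng(1)
G_true = rng.random((60, 2))      # two "sources" over 60 samples
F_true = rng.random((2, 8))       # chemical profiles over 8 species
X = G_true @ F_true               # exactly two-factor synthetic data
G, F = nmf(X, n_factors=2)
rel_err = np.linalg.norm(X - G @ F) / np.linalg.norm(X)
print(rel_err < 0.05)             # True: the two-factor structure is recovered
```

Factor identification (e.g., labeling a profile "residual fuel oil" or "biomass combustion") then rests on inspecting the recovered rows of F against known source signatures.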
Paraskevopoulou, D; Liakakou, E; Gerasopoulos, E; Mihalopoulos, N
2015-09-15
To identify the sources of aerosols in the Greater Athens Area (GAA), a total of 1510 daily samples of fine (PM2.5) and coarse (PM10-2.5) aerosols were collected at a suburban site (Penteli) during a five-year period (May 2008-April 2013) corresponding to the period before and during the financial crisis. In addition, aerosol sampling was also conducted in parallel at an urban site (Thissio) during specific short-term campaigns in all seasons. Mass and chemical composition measurements were performed on all these samples, the latter only for the fine fraction. Particulate organic matter (POM) and ionic mass (IM) are the main contributors to aerosol mass, each accounting for about 24% of the fine aerosol mass. In the IM, nss-SO4(-2) is the prevailing species, followed by NO3(-) and NH4(+), and shows a decreasing trend during the 2008-2013 period similar to that observed for PM masses. The contribution of water to fine aerosol is equally significant (21 ± 2%), while during dust transport the contribution of dust increases from 7 ± 2% to 31 ± 9%. Source apportionment (PCA and PMF) and mass closure exercises identified six sources of fine aerosols: secondary photochemistry, primary combustion, soil, biomass burning, sea salt and traffic. Finally, from winter 2012 to winter 2013 the contribution of POM to the urban aerosol mass increased by almost 30%, reflecting the impact on air quality in Athens of wood combustion (the dominant fuel for domestic heating), which became widespread in winter 2013. Copyright © 2015 Elsevier B.V. All rights reserved.
Leib, Kenneth J.; Mast, M. Alisa; Wright, Winfield G.
2003-01-01
One of the important types of information needed to characterize water quality in streams affected by historical mining is the seasonal pattern of toxic trace-metal concentrations and loads. Seasonal patterns in water quality are estimated in this report using a technique called water-quality profiling. Water-quality profiling allows land managers and scientists to assess priority areas to be targeted for characterization and/or remediation by quantifying the timing and magnitude of contaminant occurrence. Streamflow and water-quality data collected at 15 sites in the upper Animas River Basin during water years 1991-99 were used to develop water-quality profiles. Data collected at each sampling site were used to develop ordinary least-squares regression models for streamflow and constituent concentrations. Streamflow was estimated by correlating instantaneous streamflow measured at ungaged sites with continuous streamflow records from streamflow-gaging stations in the subbasin. Water-quality regression models were developed to estimate hardness and dissolved cadmium, copper, and zinc concentrations based on streamflow and seasonal terms. Results from the regression models were used to calculate water-quality profiles for streamflow, constituent concentrations, and loads. Quantification of cadmium, copper, and zinc loads in a stream segment in Mineral Creek (sites M27 to M34) was presented as an example application of water-quality profiling. The application used a method of mass accounting to quantify the portion of metal loading in the segment derived from uncharacterized sources during different seasonal periods. During May, uncharacterized sources contributed nearly 95 percent of the cadmium load, 0 percent of the copper load (or uncharacterized sources also are attenuated), and about 85 percent of the zinc load at M34. 
During September, uncharacterized sources contributed about 86 percent of the cadmium load, 0 percent of the copper load (or uncharacterized sources also are attenuated), and about 52 percent of the zinc load at M34. Characterized sources accounted for more of the loading gains estimated in the example reach during September, possibly indicating the presence of diffuse inputs during snowmelt runoff. The results indicate that metal sources in the upper Animas River Basin may change substantially with season, regardless of the source.
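Regression models of the kind described, with streamflow and seasonal terms, commonly take a form like ln(C) = b0 + b1·ln(Q) + b2·sin(2πt) + b3·cos(2πt), where Q is streamflow and t is time in years. The sketch below fits such a model to synthetic data; the form is standard for seasonal load estimation, but the coefficients and sample sizes are illustrative, not the report's fitted models:

```python
import numpy as np

# Seasonal water-quality regression sketch:
#   ln(C) = b0 + b1*ln(Q) + b2*sin(2*pi*t) + b3*cos(2*pi*t)
# fit by ordinary least squares to synthetic observations.
rng = np.random.default_rng(0)
t = rng.uniform(0, 8, 200)                # sample dates over 8 water years
lnQ = rng.normal(3.0, 0.5, 200)           # log streamflow
b_true = np.array([1.0, -0.4, 0.3, 0.2])  # assumed (illustrative) coefficients
X = np.column_stack([np.ones_like(t), lnQ,
                     np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
lnC = X @ b_true + rng.normal(0, 0.01, 200)  # observations with small noise

b_hat, *_ = np.linalg.lstsq(X, lnC, rcond=None)
print(np.allclose(b_hat, b_true, atol=0.05))  # True: coefficients recovered
```

A load profile then follows from the fitted concentration and streamflow as load ∝ Q·C (times a unit-conversion constant), and differencing loads between stations gives the mass-accounting estimate of uncharacterized sources.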
Memory as embodiment: The case of modality and serial short-term memory.
Macken, Bill; Taylor, John C; Kozlov, Michail D; Hughes, Robert W; Jones, Dylan M
2016-10-01
Classical explanations for the modality effect (superior short-term serial recall of auditory compared to visual sequences) typically appeal to privileged processing of information derived from auditory sources. Here we critically appraise such accounts and re-evaluate the nature of the canonical empirical phenomena that have motivated them. Three experiments show that the standard account of modality in memory is untenable, since auditory superiority in recency is often accompanied by visual superiority in mid-list serial positions. We explain this simultaneous auditory and visual superiority by reference to the way in which perceptual objects are formed in the two modalities and how those objects are mapped to speech motor forms to support sequence maintenance and reproduction. Specifically, stronger obligatory object formation in the standard auditory form of sequence presentation, compared to that for visual sequences, leads both to enhanced addressability of information at the object boundaries and reduced addressability for that in the interior. Because standard visual presentation does not lead to such object formation, visual sequences do not show the boundary advantage observed for auditory presentation, but neither do they suffer the loss of addressability associated with object information, thereby affording more ready mapping of that information into a rehearsal cohort to support recall. We show that a range of factors that impede this perceptual-motor mapping eliminate visual superiority while leaving auditory superiority unaffected. We make a general case for viewing short-term memory as an embodied, perceptual-motor process. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
Kniskern, Megan A; Johnston, Carol S
2011-06-01
The health benefits of vegetarian diets are well-recognized; however, long-term adherence to these diets may be associated with nutrient inadequacies, particularly vitamins B12 and D, calcium, iron, zinc, and protein. The dietary reference intakes (DRIs) expert panels recommended adjustments to the iron, zinc, and calcium DRIs for vegetarians to account for decreased bioavailability, but no adjustments were considered necessary for the protein DRI under the assumption that vegetarians consume about 50% of protein from animal (dairy/egg) sources. This study examined dietary protein sources in a convenience sample of 21 young adult vegetarian women who completed food logs on 4 consecutive days (3 weekdays and 1 weekend day). The daily contribution percentages of protein consumed from cereals, legumes, nuts/seeds, fruits/vegetables, and dairy/egg were computed, and the protein digestibility corrected amino acid score of the daily diets was calculated. The calculated total dietary protein digestibility score for participants was 82 ± 1%, which differed significantly (P < 0.001) from the DRI reference score, 88%, and the 4-d average protein digestibility corrected amino acid score for the sample was 80 ± 2%, which also differed significantly (P < 0.001) from the DRI reference value, 100%. The analyses indicated that animal protein accounted for only 21% of dietary protein. This research suggests that the protein DRI for vegetarians consuming less than the expected amounts of animal protein (45% to 50% of total protein) may need to be adjusted from 0.8 to about 1.0 g/kg to account for decreased protein bioavailability. Copyright © 2011 Elsevier Inc. All rights reserved.
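The suggested adjustment from 0.8 to about 1.0 g/kg follows directly from scaling the DRI by the observed shortfall in protein quality relative to the reference value assumed when the DRI was set. A worked version of that arithmetic:

```python
# Worked arithmetic behind the suggested protein DRI adjustment: scale the
# baseline DRI by the ratio of the reference protein-quality score to the
# score observed in the sample (here, the PDCAAS values from the study).
def adjusted_protein_dri(base_dri_g_per_kg, reference_score, observed_score):
    return base_dri_g_per_kg * reference_score / observed_score

adjusted = adjusted_protein_dri(0.8, reference_score=100, observed_score=80)
print(round(adjusted, 2))  # 1.0, matching the ~1.0 g/kg suggested above
```

The same formula applied to the digestibility scores (88% reference vs. 82% observed) gives a smaller correction (~0.86 g/kg), so the choice of quality measure matters.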
Environmental impact assessment of solid waste management in Beijing City, China
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao Yan; Key Laboratory for Solid Waste Management and Environment Safety, Ministry of Education of China, Tsinghua University, 100084 Beijing; Christensen, Thomas H.
2011-04-15
The environmental impacts of municipal solid waste management in Beijing City were evaluated using a life-cycle-based model, EASEWASTE, to take into account waste generation, collection, transportation, treatment/disposal technologies, and savings obtained by energy and material recovery. The current system, mainly involving the use of landfills, has manifested significant adverse environmental impacts caused by methane emissions from landfills and many other emissions from transfer stations. A short-term future scenario, where some of the landfills (which soon will reach their capacity because of rising amount of waste in Beijing City) are substituted by incinerators with energy recovery, would not result in significant environmental improvement. This is primarily because of the low calorific value of mixed waste, and it is likely that the incinerators would require significant amounts of auxiliary fuels to support combustion of wet waste. As for the long-term future scenario, efficient source separation of food waste could result in significant environmental improvements, primarily because of increase in calorific value of remaining waste incinerated with energy recovery. Sensitivity analysis emphasized the importance of efficient source separation of food waste, as well as the electricity recovery in incinerators, in order to obtain an environmentally friendly waste management system in Beijing City.
National Health Expenditures: Short-Term Outlook and Long-Term Projections
Freeland, Mark S.; Schendler, Carol Ellen
1981-01-01
This paper presents projections of national health expenditures by type of expenditure and source of funds for 1981, 1985, and 1990. Rapid growth in national health expenditures is projected to continue through 1990. National health expenditures increased 400 percent between 1965 and 1979, reaching $212 billion in 1979. As a proportion of the Gross National Product (GNP), health expenditures rose from 6.1 percent to 9.0 percent between 1965 and 1979. They are expected to continue to rise, reaching 10.8 percent by 1990. This study projects that, under current legislation, national health expenditures will reach $279 billion in 1981, $462 billion in 1985, and $821 billion in 1990. Sources of payments for these expenditures are shifting. From 1965 to 1979, the percentage of total health expenditures financed by public funds increased 17 percentage points—from 26 to 43 percent. The Federal share of public funds during this same period grew rapidly, from 51 percent in 1965 to 67 percent in 1979. This study projects that in 1985 approximately 45 percent of total health spending will be financed from public funds, of which 68 percent will be paid for by the Federal government. Public funds will account for 46 percent of total national health expenditures by 1990. PMID:10309366
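The growth implied by the projections above can be checked with a compound annual growth rate: expenditures of $212 billion in 1979 reaching a projected $821 billion in 1990 imply roughly 13% average annual growth.

```python
# Compound annual growth rate implied by the expenditure figures above:
# $212 billion in 1979 growing to a projected $821 billion in 1990.
def cagr(start_value, end_value, years):
    return (end_value / start_value) ** (1.0 / years) - 1.0

growth = cagr(212, 821, 1990 - 1979)
print(round(100 * growth, 1))  # 13.1 (percent per year)
```

The intermediate 1985 figure ($462 billion) implies a similar rate, consistent with a single sustained growth trend rather than an acceleration.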
How Big Was It? Getting at Yield
NASA Astrophysics Data System (ADS)
Pasyanos, M.; Walter, W. R.; Ford, S. R.
2013-12-01
One of the most coveted pieces of information in the wake of a nuclear test is the explosive yield. Determining the yield from remote observations, however, is not necessarily trivial. For instance, recorded observations of seismic amplitudes, used to estimate the yield, are significantly modified by the intervening media, which vary widely and need to be properly accounted for. Even after correcting for propagation effects such as geometrical spreading, attenuation, and station site terms, getting from the resulting source term to a yield depends on the specifics of the explosion source model, including material properties and depth. Some formulas assume the explosion has a standard depth-of-burial, and observed amplitudes can vary if the actual test is significantly overburied or underburied. We will consider the complications and challenges of making these determinations using a number of standard, more traditional methods and a more recent method that we have developed using regional waveform envelopes. We will perform this comparison for recent declared nuclear tests from the DPRK. We will also compare the methods using older explosions at the Nevada Test Site with announced yields, materials, and depths, so that actual performance can be measured. In all cases, we also strive to quantify realistic uncertainties on the yield estimates.
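Once propagation effects have been stripped away, traditional yield estimation often reduces to inverting a magnitude-yield relation of the form mb = a + b·log10(Y). The sketch below uses one commonly cited calibration for well-coupled hard-rock tests (a = 4.45, b = 0.75); these coefficients are illustrative of the method, and real estimates must further correct for the material, depth-of-burial, and path effects discussed above:

```python
# Inverting a magnitude-yield relation mb = a + b * log10(Y) for yield Y
# in kilotons. Coefficients a=4.45, b=0.75 are one commonly cited
# hard-rock calibration, used here only to illustrate the inversion.
def yield_kt(mb, a=4.45, b=0.75):
    return 10.0 ** ((mb - a) / b)

print(round(yield_kt(4.45)))           # 1 kt at the reference magnitude
print(yield_kt(5.2) > yield_kt(4.45))  # True: larger magnitude, larger yield
```

Because the relation is logarithmic, a fixed uncertainty in mb maps to a multiplicative uncertainty in yield, which is why realistic error bars on yield estimates are typically quoted as factors rather than absolute values.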
RADTRAD: A simplified model for RADionuclide Transport and Removal And Dose estimation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Humphreys, S.L.; Miller, L.A.; Monroe, D.K.
1998-04-01
This report documents the RADTRAD computer code developed for the U.S. Nuclear Regulatory Commission (NRC) Office of Nuclear Reactor Regulation (NRR) to estimate transport and removal of radionuclides and dose at selected receptors. The document includes a users' guide to the code, a description of the technical basis for the code, the quality assurance and code acceptance testing documentation, and a programmers' guide. The RADTRAD code can be used to estimate the containment release using either the NRC TID-14844 or NUREG-1465 source terms and assumptions, or a user-specified table. In addition, the code can account for a reduction in the quantity of radioactive material due to containment sprays, natural deposition, filters, and other natural and engineered safety features. The RADTRAD code uses a combination of tables and/or numerical models of source term reduction phenomena to determine the time-dependent dose at user-specified locations for a given accident scenario. The code system also provides the inventory, decay chain, and dose conversion factor tables needed for the dose calculation. The RADTRAD code can be used to assess occupational radiation exposures, typically in the control room; to estimate site boundary doses; and to estimate dose attenuation due to modification of a facility or accident sequence.
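The source-term reduction bookkeeping described above can be illustrated for a single nuclide in a single volume: radioactive decay and an engineered removal mechanism (such as sprays) deplete the airborne activity exponentially, and dose scales with its time integral. The rates and values below are illustrative placeholders, not RADTRAD defaults:

```python
import math

# Single-nuclide, single-volume sketch of decay-plus-removal depletion:
# A(t) = A0 * exp(-(lam_decay + lam_removal) * t), with dose proportional
# to the time integral of A(t).
def integrated_activity(A0, lam_decay, lam_removal, t):
    """Integral of A0*exp(-(lam_decay+lam_removal)*s) ds from 0 to t."""
    lam = lam_decay + lam_removal
    return A0 * (1.0 - math.exp(-lam * t)) / lam

A0 = 1.0e6  # initial airborne activity (arbitrary units)
no_sprays = integrated_activity(A0, lam_decay=0.1, lam_removal=0.0, t=24.0)
with_sprays = integrated_activity(A0, lam_decay=0.1, lam_removal=1.0, t=24.0)
print(with_sprays < no_sprays)  # True: removal mechanisms cut the dose integral
```

The full code chains such balances across multiple volumes, nuclides, and decay daughters, then multiplies the integrated activities by dose conversion factors at each receptor.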
25 CFR 115.800 - When does OTFM open a tribal account?
Code of Federal Regulations, 2010 CFR
2010-04-01
... 25 Indians 1 2010-04-01 2010-04-01 false When does OTFM open a tribal account? 115.800 Section 115... TRIBES AND INDIVIDUAL INDIANS Tribal Accounts § 115.800 When does OTFM open a tribal account? A tribal account is opened when OTFM receives income from the sources described in § 115.702. ...
Environmental reporting and accounting in Australia: progress, prospects and research priorities.
van Dijk, Albert; Mount, Richard; Gibbons, Philip; Vardon, Michael; Canadell, Pep
2014-03-01
Despite strong demand for information to support the sustainable use of Australia's natural resources and conserve environmental values and despite considerable effort and investment, nation-wide environmental data collection and analysis remains a substantially unmet challenge. We review progress in producing national environmental reports and accounts, identify challenges and opportunities, and analyse the potential role of research in addressing these. Australia's low and concentrated population density and the short history since European settlement contribute to the lack of environmental data. There are additional factors: highly diverse data requirements and standards, disagreement on information priorities, poorly measurable management objectives, lack of coordination, over-reliance on researchers and businesses for data collection, lack of business engagement, and short-term, project-based activities. New opportunities have arisen to overcome some of these challenges: enhanced monitoring networks, standardisation, data management and modelling, greater commitment to share and integrate data, community monitoring, increasing acceptance of environmental and sustainability indicators, and progress in environmental accounting practices. Successes in generating climate, water and greenhouse gas information appear to be attributable to an unambiguous data requirement, considerable investment, and legislative instruments that enhance data sharing and create a clearly defined role for operational agencies. 
Based on the analysis presented, we suggest six priorities for research: (1) common definitions and standards for information that address management objectives, (2) ecological measures that are scalable from local to national level, (3) promotion of long-term data collection and reporting by researchers, (4) efficient satellite and sensor network technologies and data analysis methods, (5) environmental modelling approaches that can reconcile multiple data sources, and (6) experimental accounting to pursue consistent, credible and relevant information structures and to identify new data requirements. Opportunities exist to make progress in each of these areas and help secure a more sustainable future. Copyright © 2014. Published by Elsevier B.V.
47 CFR 32.9000 - Glossary of terms.
Code of Federal Regulations, 2010 CFR
2010-10-01
... of this system of accounts. Accounting system means the total set of interrelated principles, rules... entity from a financial perspective. An accounting system generally consists of a chart of accounts, various parallel subsystems and subsidiary records. An accounting system is utilized to provide the...
Biogenic emissions come from natural sources and need to be accounted for in photochemical grid models. They are computed using a model that utilizes spatial information on vegetation and land use.
Poggi, L A; Malizia, A; Ciparisse, J F; Gaudio, P
2016-10-01
An open issue still under investigation by several international entities working in the safety and security field for the foreseen nuclear fusion reactors is the estimation of source terms that are a hazard for the operators and public, and for the machine itself in terms of efficiency and integrity in case of severe accident scenarios. Source term estimation is a crucial safety issue to be addressed in future reactor safety assessments, and the estimates available to date are not sufficiently accurate. The lack of neutronic data, along with the insufficiently accurate methodologies used until now, calls for an integrated methodology for source term estimation that can provide predictions with adequate accuracy. This work proposes a complete methodology to estimate dust source terms, starting from a broad information gathering. The wide number of parameters that can influence dust source term production is reduced with statistical tools using a combination of screening, sensitivity analysis, and uncertainty analysis. Finally, a preliminary and simplified methodology for predicting dust source term production in future devices is presented.
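The screening step described above, reducing a wide parameter set with statistical tools, can be illustrated with a minimal one-at-a-time perturbation sketch. The model function, parameter values, and coefficients below are all hypothetical stand-ins, not the methodology of the paper:

```python
import numpy as np

# Minimal one-at-a-time screening sketch: rank the input parameters of a
# stand-in dust source term model by the output change each induces alone.
def dust_source(params):
    # Hypothetical model: depends strongly on params[0], weakly on params[2].
    return 5.0 * params[0] + 0.5 * params[1] + 0.01 * params[2]

base = np.array([1.0, 1.0, 1.0])
effects = []
for i in range(base.size):
    perturbed = base.copy()
    perturbed[i] *= 1.1                      # +10% one-at-a-time perturbation
    effects.append(abs(dust_source(perturbed) - dust_source(base)))

ranking = np.argsort(effects)[::-1]          # most influential parameter first
```

Parameters whose perturbation barely moves the output would be fixed at nominal values, shrinking the set carried into the sensitivity and uncertainty analyses.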
Hopkins, Jim
2016-01-01
The main concepts of the free energy (FE) neuroscience developed by Karl Friston and colleagues parallel those of Freud's Project for a Scientific Psychology. In Hobson et al. (2014) these include an innate virtual reality generator that produces the fictive prior beliefs that Freud described as the primary process. This enables Friston's account to encompass a unified treatment (a complexity theory) of the role of virtual reality in both dreaming and mental disorder. In both accounts the brain operates to minimize FE aroused by sensory impingements, including interoceptive impingements that report compliance with biological imperatives, and constructs a representation/model of the causes of impingement that enables this minimization. In Friston's account (variational) FE equals complexity minus accuracy, and is minimized by increasing accuracy and decreasing complexity. Roughly, the brain (or model) increases accuracy together with complexity in waking. This is mediated by consciousness-creating active inference, by which it explains sensory impingements in terms of perceptual experiences of their causes. In sleep it reduces complexity by processes that include both synaptic pruning and consciousness/virtual reality/dreaming in REM. The consciousness-creating active inference that effects complexity reduction in REM dreaming must operate on FE-arousing data distinct from sensory impingement. The most relevant source is remembered arousals of emotion, both recent and remote, as processed in SWS and REM on "active systems" accounts of memory consolidation/reconsolidation. Freud describes these remembered arousals as condensed in the dreamwork for use in the conscious contents of dreams, and similar condensation can be seen in symptoms. Complexity partly reflects emotional conflict and trauma. This indicates that dreams and symptoms are both produced to reduce complexity in the form of potentially adverse (traumatic or conflicting) arousals of amygdala-related emotions. 
Mental disorder is thus caused by computational complexity together with mechanisms like synaptic pruning that have evolved for complexity-reduction; and important features of disorder can be understood in these terms. Details of the consilience among Freudian, systems consolidation, and complexity-reduction accounts appear clearly in the analysis of a single fragment of a dream, indicating also how complexity reduction proceeds by a process resembling Bayesian model selection.
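The decomposition "FE equals complexity minus accuracy" can be made concrete numerically. The sketch below uses small discrete distributions with made-up numbers; it illustrates only the standard variational formula (complexity as the KL divergence from prior to approximate posterior, accuracy as the expected log-likelihood), not any model from the paper:

```python
import numpy as np

# Variational free energy = complexity - accuracy, on a toy 3-state problem.
q = np.array([0.7, 0.2, 0.1])        # approximate posterior over hidden causes
prior = np.array([1/3, 1/3, 1/3])    # prior beliefs
lik = np.array([0.8, 0.5, 0.1])      # likelihood of the observed datum per cause

complexity = np.sum(q * np.log(q / prior))   # KL(q || prior): cost of moving off the prior
accuracy = np.sum(q * np.log(lik))           # expected log-likelihood under q
free_energy = complexity - accuracy
```

Minimization can proceed by raising accuracy (better explanation of the data, as in waking) or by lowering complexity (simpler beliefs, as in the sleep processes discussed above).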
California School Accounting Manual, 1988 Edition.
ERIC Educational Resources Information Center
California State Dept. of Education, Sacramento.
This report presents the accounting procedures employed by California school districts for income and expenditures in instructional and support programs. The report has seven parts: (1) an introduction to accounting in local educational agencies; (2) general and subsidiary ledger accounting; (3) revenues and other financing sources;…
A self-consistent fluid model for radio-frequency discharges in SiH4-H2 compared to experiments
NASA Astrophysics Data System (ADS)
Nienhuis, G. J.; Goedheer, W. J.; Hamers, E. A. G.; van Sark, W. G. J. H. M.; Bezemer, J.
1997-09-01
A one-dimensional fluid model for radio-frequency glow discharges is presented which describes silane/hydrogen discharges that are used for the deposition of amorphous silicon (a-Si:H). The model is used to investigate the relation between the external settings (such as pressure, gas inlet, applied power, and frequency) and the resulting composition of the gas and the deposition rate. In the model, discharge quantities such as the electric field, densities, and fluxes of the particles are calculated self-consistently. Look-up tables of the rates of the electron impact collisions as a function of the average electron energy are obtained by solving the Boltzmann equation in a two term approximation for a sequence of values of the reduced electric field. These tables are updated as the composition of the background neutral gas evolves under the influence of chemical reactions and pumping. Pumping configuration and gas inlet are taken into account by adding source terms in the density balance equations. The effect of pumping is represented by an average residence time. The gas inlet is represented by uniformly distributed particle sources. Also the radial transport of neutrals from the discharge volume into the discharge-free volume is important. As the fluid model is one dimensional, this radial transport is taken into account by an additional source term in the density balance equations. Plasma-wall interaction of the radicals (i.e., the growth of a-Si:H) is included through the use of sticking coefficients. A sensitivity study has been used to find a minimum set of different particles and reactions needed to describe the discharge adequately and to reduce the computational effort. This study has also been used to identify the most important plasma-chemical processes and resulted in a minimum set of 24 species, 15 electron-neutral reactions, and 22 chemical reactions. 
In order to verify the model, including the chemistry used, the results are compared with data from experiments. The partial pressures of silane, hydrogen, disilane, and the growth rate of amorphous silicon are compared for various combinations of the operating pressure (10-50 Pa), the power (2.5-10 W), and the frequency (13.56-65 MHz). The model shows good agreement with the experimental data in the dust free α regime. Discharges in the γ' regime, where dust has a significant influence, could not be used to validate the model.
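The look-up-table approach described above, with electron-impact rates tabulated against average electron energy from offline Boltzmann solves and interpolated during the fluid simulation, can be sketched as follows. The grid values, rates, and densities are illustrative placeholders, not data from the model:

```python
import numpy as np

# Hypothetical tabulated rate coefficients (m^3/s) versus mean electron
# energy (eV), as would come from two-term Boltzmann solves over a sequence
# of reduced electric fields.
energy_grid = np.array([1.0, 2.0, 4.0, 8.0, 16.0])          # eV
rate_table = np.array([1e-18, 5e-17, 8e-16, 4e-15, 1e-14])  # m^3/s

def lookup_rate(mean_energy):
    """Linearly interpolate the tabulated rate at the local mean electron
    energy; np.interp clamps to the table ends outside the grid."""
    return np.interp(mean_energy, energy_grid, rate_table)

# Source term of a density balance equation: S = k(<e>) * n_e * n_gas
n_e, n_gas = 1e15, 1e21          # m^-3, illustrative values
S = lookup_rate(5.0) * n_e * n_gas
```

In the full model such tables would be re-evaluated as the background gas composition evolves, with additional source terms for gas inlet, pumping, and radial transport added to the same balance equations.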
Gaines, Tommi L; Urada, Lianne A; Martinez, Gustavo; Goldenberg, Shira M; Rangel, Gudelia; Reed, Elizabeth; Patterson, Thomas L; Strathdee, Steffanie A
2015-06-01
This study quantitatively examined the prevalence and correlates of short-term sex work cessation among female sex workers who inject drugs (FSW-IDUs) and determined whether injection drug use was independently associated with cessation. We used data from FSW-IDUs (n=467) enrolled into an intervention designed to increase condom use and decrease sharing of injection equipment but was not designed to promote sex work cessation. We applied a survival analysis that accounted for quit-re-entry patterns of sex work over 1-year stratified by city, Tijuana and Ciudad Juarez, Mexico. Overall, 55% of participants stopped sex work at least once during follow-up. Controlling for other characteristics and intervention assignment, injection drug use was inversely associated with short-term sex work cessation in both cities. In Ciudad Juarez, women receiving drug treatment during follow-up had a 2-fold increase in the hazard of stopping sex work. In both cities, income from sources other than sex work, police interactions and healthcare access were independently and significantly associated with shorter-term cessation. Short-term sex work cessation was significantly affected by injection drug use. Expanded drug treatment and counseling coupled with supportive services such as relapse prevention, job training, and provision of alternate employment opportunities may promote longer-term cessation among women motivated to leave the sex industry. Copyright © 2015 Elsevier Ltd. All rights reserved.
Neutrino tomography - Tevatron mapping versus the neutrino sky [for X-rays of earth interior]
NASA Technical Reports Server (NTRS)
Wilson, T. L.
1984-01-01
The feasibility of neutrino tomography of the earth's interior is discussed, taking the 80-GeV W-boson mass determined by Arnison (1983) and Banner (1983) into account. The opacity of earth zones is calculated on the basis of the preliminary reference earth model of Dziewonski and Anderson (1981), and the results are presented in tables and graphs. Proposed tomography schemes are evaluated in terms of the well-posedness of the inverse-Radon-transform problems involved, the neutrino generators and detectors required, and practical and economic factors. The ill-posed schemes are shown to be infeasible; the well-posed schemes (using Tevatrons or the neutrino sky as sources) are considered feasible but impractical.
Io's Magnetospheric Interaction: An MHD Model with Day-Night Asymmetry
NASA Technical Reports Server (NTRS)
Kabin, K.; Combi, M. R.; Gombosi, T. I.; DeZeeuw, D. L.; Hansen, K. C.; Powell, K. G.
2001-01-01
In this paper we present the results of an improved three-dimensional MHD model for Io's interaction with Jupiter's magnetosphere. We have included the day-night asymmetry in the spatial distribution of our mass-loading, which allowed us to reproduce several smaller features of the Galileo December 1995 data set. The calculation is performed using our newly modified description of the pick-up processes that accounts for the effects of the corotational electric field existing in the Jovian magnetosphere. This change in the formulation of the source terms for the MHD equations resulted in significant improvements in the comparison with the Galileo measurements. We briefly discuss the limitations of our model and possible future improvements.
A new component of cosmic rays of unknown origin at a few MeV per nucleon
NASA Technical Reports Server (NTRS)
Gloecker, G.
1974-01-01
Recently discovered anomalies in the abundances and energy spectra of quiet time, extraterrestrial hydrogen, helium, carbon, nitrogen, and oxygen require serious revisions of origin theories to account for this new component of cosmic radiation. Abnormally large O/C and N/C ratios, long term intensity variations with time, and radial gradient measurements indicate a non-solar origin for these 2 to 30 MeV/nucleon particles. Ideas suggested to explain these measurements range from acceleration of galactic source material having an unusual composition to local acceleration of particles within the solar cavity. Observations are at present insufficient to choose between these alternate origin models.
Visual Media Influence on Behavior
2013-12-17
...was only cultural and social change that needed to be explained, most often in terms of the impact of new technologies (e.g., Ogburn, 1964; Schaniel...1988). We understand now, though, that any meaningful theory of culture change needs to account for both change and stability (Barkow, 1989) and
Experimentally validated finite element model of electrocaloric multilayer ceramic structures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, N. A. S. (E-mail: nadia.smith@npl.co.uk); Correia, T. M. (E-mail: tatiana.correia@npl.co.uk); Rokosz, M. K. (E-mail: maciej.rokosz@npl.co.uk)
2014-07-28
A novel finite element model to simulate the electrocaloric response of a multilayer ceramic capacitor (MLCC) under real environment and operational conditions has been developed. The two-dimensional transient conductive heat transfer model presented includes the electrocaloric effect as a source term, as well as accounting for radiative and convective effects. The model has been validated with experimental data obtained from the direct imaging of MLCC transient temperature variation under application of an electric field. The good agreement between simulated and experimental data suggests that the novel experimental direct measurement methodology and the finite element model could be used to support the design of optimised electrocaloric units and operating conditions.
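Treating a caloric effect as a source term in a transient conduction model can be illustrated with a minimal explicit finite-difference sketch. This is a generic 1-D analogue with made-up material values, not the paper's two-dimensional finite element model or MLCC data:

```python
import numpy as np

# 1-D transient conduction with a volumetric source term, explicit scheme:
# dT/dt = alpha * d2T/dx2 + q, fixed-temperature boundaries.
L, nx = 1e-3, 51                 # domain length (m), grid points
dx = L / (nx - 1)
alpha = 1e-6                     # thermal diffusivity (m^2/s), illustrative
dt = 0.4 * dx * dx / alpha       # stable explicit step (<= 0.5 * dx^2/alpha)
q = 50.0                         # heat source term (K/s), illustrative

T = np.zeros(nx)                 # temperature rise above the reference
for _ in range(200):
    lap = (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2
    T[1:-1] += dt * (alpha * lap + q)
    T[0] = T[-1] = 0.0           # boundaries held at the reference temperature

# The interior warms under the source while the boundaries stay fixed.
```

A validated model of this kind can then be run under varied field and ambient conditions, which is the design-support role described in the abstract.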
Code of Federal Regulations, 2010 CFR
2010-01-01
... accounts, foreign branches and a de minimis number of accounts. 218.723 Section 218.723 Banks and Banking... accounts, transferred accounts, foreign branches and a de minimis number of accounts. (a) Short-term... dealer. (e) De minimis exclusion. A bank may, in determining its compliance with the chiefly compensated...
Code of Federal Regulations, 2011 CFR
2011-01-01
... accounts, foreign branches and a de minimis number of accounts. 218.723 Section 218.723 Banks and Banking... accounts, transferred accounts, foreign branches and a de minimis number of accounts. (a) Short-term... dealer. (e) De minimis exclusion. A bank may, in determining its compliance with the chiefly compensated...
Advanced relativistic VLBI model for geodesy
NASA Astrophysics Data System (ADS)
Soffel, Michael; Kopeikin, Sergei; Han, Wen-Biao
2017-07-01
Our present relativistic part of the geodetic VLBI model for Earthbound antennas is a consensus model which is considered a standard for processing high-precision VLBI observations. It was created as a compromise between a variety of relativistic VLBI models proposed by different authors, as documented in the IERS Conventions 2010. The accuracy of the consensus model is in the picosecond range for the group delay, but this is not sufficient for current geodetic purposes. This paper provides a fully documented derivation of a new relativistic model having an accuracy substantially higher than one picosecond and based upon a well-accepted formalism of relativistic celestial mechanics, astrometry and geodesy. Our new model fully confirms the consensus model at the picosecond level and in several respects goes substantially beyond it. More specifically, terms related to the acceleration of the geocenter are considered and kept in the model, the gravitational time delay due to a massive body (planet, Sun, etc.) with arbitrary mass and spin-multipole moments is derived taking into account the motion of the body, and a new formalism for the time-delay problem of radio sources located at a finite distance from VLBI stations is presented. Thus, the paper presents a substantially elaborated theoretical justification of the consensus model and a significant extension that allows researchers to make concrete estimates of the magnitude of residual terms of this model for any conceivable configuration of the source of light, massive bodies, and VLBI stations. The largest terms in the relativistic time delay which can affect current VLBI observations are from the quadrupole and the angular momentum of the gravitating bodies, as known from the literature. These terms should be included in the new geodetic VLBI model to improve its consistency.
NASA Astrophysics Data System (ADS)
Thiry, Olivier; Winckelmans, Grégoire
2016-02-01
In the large-eddy simulation (LES) of turbulent flows, models are used to account for the subgrid-scale (SGS) stress. We here consider LES with "truncation filtering only" (i.e., that due to the LES grid), thus without regular explicit filtering added. The SGS stress tensor is then composed of two terms: the cross term that accounts for interactions between resolved scales and unresolved scales, and the Reynolds term that accounts for interactions between unresolved scales. Both terms provide forward (dissipation) and backward (production, also called backscatter) energy transfer. Purely dissipative, eddy-viscosity type, SGS models are widely used: Smagorinsky-type models, or more advanced multiscale-type models. Dynamic versions have also been developed, where the model coefficient is determined using a dynamic procedure. Being dissipative by nature, those models do not provide backscatter. Even when using the dynamic version with local averaging, one typically uses clipping to forbid negative values of the model coefficient and hence ensure the stability of the simulation, thus removing the backscatter produced by the dynamic procedure. More advanced SGS models, which better conform to the physics of the true SGS stress while remaining stable, are thus desirable. We here investigate, in decaying homogeneous isotropic turbulence, and using a de-aliased pseudo-spectral method, the behavior of the cross term and of the Reynolds term, in terms of dissipation spectra and in terms of the probability density function (pdf) of dissipation in physical space, both positive and negative (backscatter). We then develop a new mixed model that better accounts for the physics of the SGS stress and for the backscatter. It has a cross term part which is built using a scale-similarity argument, further combined with a correction for Galilean invariance using a pseudo-Leonard term: this is the term that also produces backscatter. 
It also has an eddy-viscosity multiscale model part that accounts for all the remaining phenomena (also for the incompleteness of the cross term model), that is dynamic and that adjusts the overall dissipation. The model is tested, both a priori and a posteriori, and is compared to the direct numerical simulation and to the exact SGS terms, also in time. The model is seen to provide accurate energy spectra, also in comparison to the dynamic Smagorinsky model. It also provides significant backscatter (although four times less than the real SGS stress), while remaining stable.
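The pdf analysis of pointwise dissipation, separating forward transfer from backscatter, can be sketched with a simple diagnostic. The dissipation field below is synthetic random data standing in for an actual SGS dissipation field; only the bookkeeping is illustrated:

```python
import numpy as np

# Given a pointwise SGS dissipation field (positive = forward transfer,
# negative = backscatter), quantify the backscatter fraction.
rng = np.random.default_rng(0)
dissipation = rng.normal(loc=0.5, scale=1.0, size=100_000)  # synthetic field

forward = dissipation[dissipation > 0].sum()
backscatter = -dissipation[dissipation < 0].sum()   # magnitude of negative part

net = forward - backscatter                          # net forward transfer
backscatter_fraction = backscatter / forward
```

A purely dissipative eddy-viscosity model would give a backscatter fraction of zero; a mixed model of the kind described above aims to reproduce a nonzero fraction while keeping the net transfer dissipative.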
47 CFR 3.2 - Terms and definitions.
Code of Federal Regulations, 2010 CFR
2010-10-01
... is subject to IMF's definition. One of the monetary units used to effect accounting settlements in... 47 Telecommunication 1 2010-10-01 2010-10-01 false Terms and definitions. 3.2 Section 3.2 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL AUTHORIZATION AND ADMINISTRATION OF ACCOUNTING...
Seasonally-Dynamic SPARROW Modeling of Nitrogen Flux Using Earth Observation Data
NASA Astrophysics Data System (ADS)
Smith, R. A.; Schwarz, G. E.; Brakebill, J. W.; Hoos, A. B.; Moore, R. B.; Shih, J.; Nolin, A. W.; Macauley, M.; Alexander, R. B.
2013-12-01
SPARROW models are widely used to identify and quantify the sources of contaminants in watersheds and to predict their flux and concentration at specified locations downstream. Conventional SPARROW models describe the average relationship between sources and stream conditions based on long-term water quality monitoring data and spatially-referenced explanatory information. But many watershed management issues stem from intra- and inter-annual changes in contaminant sources, hydrologic forcing, or other environmental conditions which cause a temporary imbalance between inputs and stream water quality. Dynamic behavior of the system relating to changes in watershed storage and processing then becomes important. In this study, we describe dynamically calibrated SPARROW models of total nitrogen flux in three sub-regional watersheds: the Potomac River Basin, Long Island Sound drainage, and coastal South Carolina drainage. The models are based on seasonal water quality and watershed input data for a total of 170 monitoring stations for the period 2001 to 2008. Frequently-reported, spatially-detailed input data on the phenology of agricultural production, terrestrial vegetation growth, and snow melt are often challenging requirements of seasonal modeling of reactive nitrogen. In this NASA-funded research, we use Enhanced Vegetation Index (EVI), gross primary production and snow/ice cover data from MODIS to parameterize seasonal uptake and release of nitrogen from vegetation and snowpack. The spatial reference frames of the models are 1:100,000-scale stream networks, and the computational time steps are 0.25-year seasons. Precipitation and temperature data are from PRISM. The model formulation accounts for storage of nitrogen from nonpoint sources including fertilized cropland, pasture, urban land, and atmospheric deposition. Model calibration is by non-linear regression. 
Once calibrated, model source terms based on previous season export allow for recursive dynamic simulation of stream flux: gradual increases or decreases in export occur as source supply rates and hydrologic forcing change. Based on an assumption that removal of nitrogen from watershed storage to stream channels and to 'permanent' sinks (e.g. the atmosphere and deep groundwater) occur as parallel first-order processes, the models can be used to estimate the approximate residence times of nonpoint source nitrogen in the watersheds.
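The parallel first-order removal assumption, with its implied residence time, can be sketched with a toy recursion. The rate constants and inputs below are illustrative placeholders, not calibrated SPARROW parameters:

```python
# Nitrogen in watershed storage is either delivered to streams (rate k_stream)
# or lost to permanent sinks such as the atmosphere and deep groundwater
# (rate k_sink); both act as first-order processes in parallel.
k_stream, k_sink = 0.15, 0.05                # 1/season, illustrative
residence_time = 1.0 / (k_stream + k_sink)   # seasons in storage, on average

# Recursive seasonal simulation: storage carries over, new inputs add in.
storage, flux = 100.0, []
for season_input in [10.0, 10.0, 0.0, 0.0]:  # nonpoint-source inputs per season
    storage += season_input
    flux.append(k_stream * storage)          # export to the stream network
    storage *= 1.0 - (k_stream + k_sink)     # first-order removal this season

# Stream flux declines only gradually after inputs stop: storage has memory.
```

This is the mechanism by which the calibrated models produce gradual, rather than instantaneous, responses of stream flux to changing source supply and hydrologic forcing.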
Hong Kong's domestic health spending--financial years 1989/90 through 2004/05.
Leung, G M; Tin, K Y K; Yeung, G M K; Leung, E S K; Tsui, E L H; Lam, D W S; Tsang, C S H; Fung, A Y K; Lo, S V
2008-04-01
This report presents the latest estimates of Hong Kong's domestic health spending between fiscal years 1989/90 and 2004/05, cross-stratified and categorised by financing source, provider and function on an annual basis. Total expenditure on health was HK$67,807 million in fiscal year 2004/05. In real terms, total expenditure on health showed positive growth averaging 7% per annum throughout the period covered in this report while gross domestic product grew at 4% per annum on average, indicating a growing percentage of health spending relative to gross domestic product, from 3.5% in 1989/90 to 5.2% in 2004/05. This increase was largely driven by the rise in public spending, which rose 9% per annum on average in real terms over the period, compared with 5% for private spending. This represents a growing share of public spending from 40% to 55% of total expenditure on health during the period. While public spending was the dominant source of health financing in 2004/05, private household out-of-pocket expenditure accounted for the second largest share of total health spending (32%). The remaining sources of health finance were employer-provided group medical benefits (8%), privately purchased insurance (5%), and other private sources (1%). Of the $67,807 million total health expenditure in 2004/05, current expenditure comprised $65,429 million (96%) while $2,378 million (4%) were capital expenses (i.e., investment in medical facilities). Services of curative care accounted for the largest share of total health spending (67%) which were made up of ambulatory services (35%), in-patient curative care (28%), day patient hospital services (3%), and home care (1%). The next largest share of total health expenditure was spent on medical goods outside the patient care setting (10%). Analysed by health care provider, hospitals accounted for the largest share (46%) and providers of ambulatory health care the second largest share (30%) of total health spending in 2004/05. 
We observed a system-wide trend towards service consolidation at institutions (as opposed to free-standing ambulatory clinics, most of which are staffed by solo practitioners). In 2004/05, public expenditure on health amounted to $35,247 million (53.9% of total current expenditure), which was mostly incurred at hospitals (76.5%), whilst private expenditure ($30,182 million) was mostly incurred at providers of ambulatory health care (54.6%). This reflects the mixed health care economy of Hong Kong where public hospitals generally account for about 90% of total bed-days and private doctors (including Western and Chinese medicine practitioners) provide 75% to 80% of out-patient care. While both public and private spending were mostly expended on personal health care services and goods (92.9%), the distributional patterns among functional categories differed. Public expenditure was targeted at in-patient care (54.2%) and substantially less on out-patient care (24.5%), especially low-intensity first-contact care. In comparison, private spending was mostly concentrated on out-patient care (49.6%), whereas medical goods outside the patient care setting (22.6%) and in-patient care (18.8%) comprised the majority of the remaining share. Compared to OECD countries, Hong Kong has devoted a relatively low percentage of gross domestic product to health in the last decade. As a share of total spending, public funding (either general government revenue or social security funds) was also lower than in most comparably developed economies, although commensurate with its public revenue collection base.
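The reported rise in the health-spending share of GDP follows arithmetically from the two growth rates quoted above. A rough back-of-envelope check (not the report's estimation method, and using the rounded rates as given):

```python
# If real health spending grew ~7%/yr and real GDP ~4%/yr over the 15 years
# from 1989/90 to 2004/05, the spending share of GDP scales by the ratio of
# the two growth factors compounded over the period.
years = 15
share_1989 = 3.5                                    # % of GDP in 1989/90
implied_share_2004 = share_1989 * (1.07 / 1.04) ** years
```

The result lands close to the reported 5.2%, with the small gap attributable to rounding of the quoted average growth rates.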
Closing the loop: integrating human impacts on water resources to advanced land surface models
NASA Astrophysics Data System (ADS)
Zaitchik, B. F.; Nie, W.; Rodell, M.; Kumar, S.; Li, B.
2016-12-01
Advanced Land Surface Models (LSMs), including those used in the North American Land Data Assimilation System (NLDAS), offer a physically consistent and spatially and temporally complete analysis of the distributed water balance. These models are constrained both by physically-based process representation and by observations ingested as meteorological forcing or as data assimilation updates. As such, they have become important tools for hydrological monitoring and long-term climate analysis. The representation of water management, however, is extremely limited in these models. Recent advances have brought prognostic irrigation routines into models used in NLDAS, while assimilation of Gravity Recovery and Climate Experiment (GRACE) derived estimates of terrestrial water storage anomaly has made it possible to nudge models towards observed states in water storage below the root zone. But with few exceptions these LSMs do not account for the source of irrigation water, leading to a disconnect between the simulated water balance and the observed human impact on water resources. This inconsistency is unacceptable for long-term studies of climate change and human impact on water resources in North America. Here we define the modeling challenge, review instances of models that have begun to account for water withdrawals (e.g., CLM), and present ongoing efforts to improve representation of human impacts on water storage across models through integration of irrigation routines, water withdrawal information, and GRACE Data Assimilation in NLDAS LSMs.
Environmental degradation of composites for marine structures: new materials and new applications
2016-01-01
This paper describes the influence of seawater ageing on composites used in a range of marine structures, from boats to tidal turbines. Accounting for environmental degradation is an essential element in the multi-scale modelling of composite materials but it requires reliable test data input. The traditional approach to account for ageing effects, based on testing samples after immersion for different periods, is evolving towards coupled studies involving strong interactions between water diffusion and mechanical loading. These can provide a more realistic estimation of long-term behaviour but still require some form of acceleration if useful data, for 20 year lifetimes or more, are to be obtained in a reasonable time. In order to validate extrapolations from short to long times, it is essential to understand the degradation mechanisms, so both physico-chemical and mechanical test data are required. Examples of results from some current studies on more environmentally friendly materials including bio-sourced composites will be described first. Then a case study for renewable marine energy applications will be discussed. In both cases, studies were performed first on coupons at the material level, then during structural testing and analysis of large components, in order to evaluate their long-term behaviour. This article is part of the themed issue ‘Multiscale modelling of the structural integrity of composite materials’. PMID:27242304
Assessing the Gap Between Top-down and Bottom-up Measured Methane Emissions in Indianapolis, IN.
NASA Astrophysics Data System (ADS)
Prasad, K.; Lamb, B. K.; Cambaliza, M. O. L.; Shepson, P. B.; Stirm, B. H.; Salmon, O. E.; Lavoie, T. N.; Lauvaux, T.; Ferrara, T.; Howard, T.; Edburg, S. L.; Whetstone, J. R.
2014-12-01
Releases of methane (CH4) from the natural gas supply chain in the United States account for approximately 30% of the total US CH4 emissions. However, there continue to be large questions regarding the accuracy of current emission inventories for methane emissions from natural gas usage. In this paper, we describe results from top-down and bottom-up measurements of methane emissions from the large isolated city of Indianapolis. The top-down results are based on aircraft mass balance and tower-based inverse modeling methods, while the bottom-up results are based on direct component sampling at metering and regulating stations, surface enclosure measurements of surveyed pipeline leaks, and tracer/modeling methods for other urban sources. Mobile mapping of methane urban concentrations was also used to identify significant sources and to show an urban-wide low-level enhancement of methane levels. The residual difference between top-down and bottom-up measured emissions is large and cannot be fully explained in terms of the uncertainties in top-down and bottom-up emission measurements and estimates. Thus, the residual appears to be, at least partly, attributable to a significant widespread diffusive source. Analyses are included to estimate the size and nature of this diffusive source.
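The aircraft mass-balance idea, integrating the above-background enhancement across a downwind plume cross-section and multiplying by the mean wind, can be sketched with toy numbers. Every value below is illustrative; none are Indianapolis data:

```python
import numpy as np

# Toy mass-balance estimate: emission rate = mean perpendicular wind speed
# times the integral of the above-background enhancement over the downwind
# plume cross-section (here a coarse 2 x 4 grid of measurements).
wind = 5.0                                   # m/s, mean perpendicular wind
dy, dz = 500.0, 100.0                        # m, cross-section grid spacing
background = 1.9                             # upwind mixing ratio (arbitrary units)
plume = np.array([[1.9, 2.1, 2.3, 2.0],      # measured downwind cross-section
                  [1.9, 2.0, 2.1, 1.9]])

enhancement = np.clip(plume - background, 0.0, None)   # excess over background
emission_rate = wind * enhancement.sum() * dy * dz     # (a.u.) * m^3 / s
```

A conversion from mixing ratio to mass density would turn this into a mass flux; the sketch keeps arbitrary units to show only the bookkeeping of the method.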
NASA Astrophysics Data System (ADS)
Čufar, Aljaž; Batistoni, Paola; Conroy, Sean; Ghani, Zamir; Lengar, Igor; Milocco, Alberto; Packer, Lee; Pillon, Mario; Popovichev, Sergey; Snoj, Luka; JET Contributors
2017-03-01
At the Joint European Torus (JET) the ex-vessel fission chambers and in-vessel activation detectors are used as the neutron production rate and neutron yield monitors, respectively. In order to ensure that these detectors produce accurate measurements, they need to be experimentally calibrated. A new calibration of neutron detectors to 14 MeV neutrons, resulting from deuterium-tritium (DT) plasmas, is planned at JET using a compact accelerator-based neutron generator (NG), in which a D/T beam impinges on a solid target containing T/D, producing neutrons by DT fusion reactions. This paper presents the analysis performed to model the neutron source characteristics in terms of energy spectrum, angle-energy distribution and the effect of the neutron generator geometry. Different codes capable of simulating accelerator-based DT neutron sources are compared, and sensitivities to uncertainties in the generator's internal structure are analysed. The analysis was performed to support preparations for the experimental measurements to characterize the NG as a calibration source. Further extensive neutronics analyses with this model of the NG will be needed to support the neutron calibration experiments and to take into account various differences between the calibration experiment and experiments using the plasma as a neutron source.
General relativistic corrections in density-shear correlations
NASA Astrophysics Data System (ADS)
Ghosh, Basundhara; Durrer, Ruth; Sellentin, Elena
2018-06-01
We investigate the corrections which relativistic light-cone computations induce on the correlation of the tangential shear with galaxy number counts, also known as galaxy-galaxy lensing. The standard approach to galaxy-galaxy lensing treats the number density of sources in a foreground bin as observable, whereas it is in reality unobservable due to the presence of relativistic corrections. We find that already in the redshift range covered by the DES first year data, these currently neglected relativistic terms lead to a systematic correction of up to 50% in the density-shear correlation function for the highest redshift bins. This correction is dominated by the fact that a redshift bin of number counts does not only lens sources in a background bin, but is itself again lensed by all masses between the observer and the counted source population. Relativistic corrections are currently ignored in standard galaxy-galaxy analyses, and the additional lensing of the counted source population is only included in the error budget (via the covariance matrix). At increasingly higher redshifts and larger scales, however, these relativistic and lensing corrections become increasingly important, and we argue that it is then more efficient, and also cleaner, to account for these corrections in the density-shear correlations themselves.
The Choreography of Accountability
ERIC Educational Resources Information Center
Webb, P. Taylor
2006-01-01
The prevailing performance discourse in education claims school improvements can be achieved through transparent accountability procedures. The article identifies how teachers generate performances of their work in order to satisfy accountability demands. By identifying sources of teachers' knowledge that produce choreographed performances, I…
37 CFR 382.5 - Verification of statements of account.
Code of Federal Regulations, 2011 CFR
2011-07-01
... account. 382.5 Section 382.5 Patents, Trademarks, and Copyrights COPYRIGHT ROYALTY BOARD, LIBRARY OF CONGRESS RATES AND TERMS FOR STATUTORY LICENSES RATES AND TERMS FOR DIGITAL TRANSMISSIONS OF SOUND... SATELLITE DIGITAL AUDIO RADIO SERVICES Preexisting Subscription Services § 382.5 Verification of statements...
37 CFR 382.6 - Verification of statements of account.
Code of Federal Regulations, 2013 CFR
2013-07-01
... account. 382.6 Section 382.6 Patents, Trademarks, and Copyrights COPYRIGHT ROYALTY BOARD, LIBRARY OF CONGRESS RATES AND TERMS FOR STATUTORY LICENSES RATES AND TERMS FOR DIGITAL TRANSMISSIONS OF SOUND... SATELLITE DIGITAL AUDIO RADIO SERVICES Preexisting Subscription Services § 382.6 Verification of statements...
37 CFR 382.5 - Verification of statements of account.
Code of Federal Regulations, 2010 CFR
2010-07-01
... account. 382.5 Section 382.5 Patents, Trademarks, and Copyrights COPYRIGHT ROYALTY BOARD, LIBRARY OF CONGRESS RATES AND TERMS FOR STATUTORY LICENSES RATES AND TERMS FOR DIGITAL TRANSMISSIONS OF SOUND... SATELLITE DIGITAL AUDIO RADIO SERVICES Preexisting Subscription Services § 382.5 Verification of statements...
37 CFR 382.5 - Verification of statements of account.
Code of Federal Regulations, 2012 CFR
2012-07-01
... account. 382.5 Section 382.5 Patents, Trademarks, and Copyrights COPYRIGHT ROYALTY BOARD, LIBRARY OF CONGRESS RATES AND TERMS FOR STATUTORY LICENSES RATES AND TERMS FOR DIGITAL TRANSMISSIONS OF SOUND... SATELLITE DIGITAL AUDIO RADIO SERVICES Preexisting Subscription Services § 382.5 Verification of statements...
37 CFR 382.6 - Verification of statements of account.
Code of Federal Regulations, 2014 CFR
2014-07-01
... account. 382.6 Section 382.6 Patents, Trademarks, and Copyrights COPYRIGHT ROYALTY BOARD, LIBRARY OF CONGRESS RATES AND TERMS FOR STATUTORY LICENSES RATES AND TERMS FOR DIGITAL TRANSMISSIONS OF SOUND... SATELLITE DIGITAL AUDIO RADIO SERVICES Preexisting Subscription Services § 382.6 Verification of statements...
Greenberg, R.; Droege, S.
1999-01-01
Unlike most North American blackbirds, Rusty Blackbirds (Euphagus carolinus) have shown steep population declines. Declines of approximately 90% are indicated for three recent decades from the Breeding Bird Survey, Christmas Bird Counts, and the Quebec Checklist Program. Analyses of abundance classifications in bird distribution books and annotated checklists reveal an overlooked but long-term decline dating back to at least the early part of this century. Rusty Blackbirds were described as very common to abundant in 56% of the pre-1920 published accounts, 19% of the 1921-1950 accounts, and only 7% of the post-1950 accounts. Rusty Blackbirds were described as uncommon in none of the pre-1950 accounts, 18% of the 1951-1980 accounts, and 43% of the post-1980 accounts. A similar pattern was found for analyses based on local checklists. Destruction of wooded wetlands on wintering grounds, acid precipitation, and the conversion of boreal forest wetlands could have contributed to these declines. Systematic analysis of regional guides and checklists provides a valuable tool for examining large-scale and long-term population changes in birds.
ACG Clinical Guideline: Diagnosis and Management of Small Bowel Bleeding.
Gerson, Lauren B; Fidler, Jeff L; Cave, David R; Leighton, Jonathan A
2015-09-01
Bleeding from the small intestine remains a relatively uncommon event, accounting for ~5-10% of all patients presenting with gastrointestinal (GI) bleeding. Given advances in small bowel imaging with video capsule endoscopy (VCE), deep enteroscopy, and radiographic imaging, the cause of bleeding in the small bowel can now be identified in most patients. The term small bowel bleeding is therefore proposed as a replacement for the previous classification of obscure GI bleeding (OGIB). We recommend that the term OGIB should be reserved for patients in whom a source of bleeding cannot be identified anywhere in the GI tract. A source of small bowel bleeding should be considered in patients with GI bleeding after performance of a normal upper and lower endoscopic examination. Second-look examinations using upper endoscopy, push enteroscopy, and/or colonoscopy can be performed if indicated before small bowel evaluation. VCE should be considered a first-line procedure for small bowel investigation. Any method of deep enteroscopy can be used when endoscopic evaluation and therapy are required. VCE should be performed before deep enteroscopy if there is no contraindication. Computed tomographic enterography should be performed in patients with suspected obstruction before VCE or after negative VCE examinations. When there is acute overt hemorrhage in the unstable patient, angiography should be performed emergently. In patients with occult hemorrhage or stable patients with active overt bleeding, multiphasic computed tomography should be performed after VCE or CTE to identify the source of bleeding and to guide further management. If a source of bleeding is identified in the small bowel that is associated with significant ongoing anemia and/or active bleeding, the patient should be managed with endoscopic therapy. 
Conservative management is recommended for patients without a source found after small bowel investigation, whereas repeat diagnostic investigations are recommended for patients with initial negative small bowel evaluations and ongoing overt or occult bleeding.
NASA Astrophysics Data System (ADS)
Zhao, Zhuzi; Cao, Junji; Zhang, Ting; Shen, Zhenxing; Ni, Haiyan; Tian, Jie; Wang, Qiyuan; Liu, Suixin; Zhou, Jiamao; Gu, Jian; Shen, Ganzhou
2018-07-01
Stable carbon isotopes provide information on aerosol sources, but no extensive long-term studies of these isotopes have been conducted in China, and they have mainly been used for qualitative rather than quantitative purposes. Here, 24 h PM2.5 samples (n = 58) were collected from July 2008 to June 2009 at Xi'an, China. The concentrations of organic and elemental carbon (OC and EC), water-soluble OC, and the stable carbon isotope abundances of OC and EC were determined. In spring, summer, autumn and winter, the mean stable carbon isotope ratios in OC (δ13COC) were -26.4 ± 0.6, -25.8 ± 0.7, -25.0 ± 0.6 and -24.4 ± 0.8‰, respectively, and the corresponding δ13CEC values were -25.5 ± 0.4, -25.5 ± 0.8, -25.2 ± 0.7 and -23.7 ± 0.6‰. Large δ13CEC and δ13COC values in winter can be linked to the burning of coal for residential heating. Less biomass is burned during spring and summer than in autumn or winter (manifested in the levels of levoglucosan, i.e., 178, 85, 370 and 935 ng m-3 in spring, summer, autumn, and winter), and the more negative δ13COC in the warmer months can be explained by the formation of secondary organic aerosols. A levoglucosan tracer method combined with an isotope mass balance analysis indicated that biomass burning accounted for 1.6-29.0% of the EC, and the mean value in winter (14.9 ± 7.5%) was 7 times higher than that in summer (2.1 ± 0.4%), with intermediate values of 6.1 ± 5.6 and 4.5 ± 2.4% in autumn and spring. Coal combustion accounted for 45.9 ± 23.1% of the EC overall, and the percentages were 63.0, 37.2, 36.7, and 33.7% in winter, autumn, summer and spring, respectively. Motor vehicles accounted for 46.6 ± 26.5% of the annual EC, and they contributed over half (56.7-61.8%) of the EC in all seasons except winter. Correlations of motor vehicle EC and coal combustion EC with established source indicators (B(ghi)P and As) support the source apportionment results. 
This paper describes a simple and accurate method for apportioning the sources of EC, and the results may be useful for developing model simulations as well as control strategies in the future.
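The tracer-plus-isotope apportionment described above can be sketched as a three-source mass balance: the biomass-burning fraction is fixed by the levoglucosan tracer, and the remaining δ13C balance is solved for the coal fraction. The endmember δ13C values and example inputs below are illustrative assumptions, not the study's numbers.

```python
# Assumed source endmember delta13C values (permil); hypothetical placeholders.
D_BB, D_COAL, D_MV = -26.7, -22.0, -25.9


def apportion_ec(d13c_ec, f_bb):
    """Split EC among biomass burning, coal combustion, and motor vehicles.

    f_bb is fixed by the levoglucosan tracer; the remaining isotope
    mass balance
        d13c_ec = f_bb*D_BB + f_coal*D_COAL + f_mv*D_MV,
        f_coal + f_mv = 1 - f_bb
    is solved for f_coal, with f_mv as the remainder.
    """
    rest = 1.0 - f_bb
    f_coal = (d13c_ec - f_bb * D_BB - rest * D_MV) / (D_COAL - D_MV)
    return f_bb, f_coal, rest - f_coal


# Winter-like example: heavy delta13C_EC and a tracer-derived f_bb of ~15%.
f_bb, f_coal, f_mv = apportion_ec(d13c_ec=-23.7, f_bb=0.149)
```

With these placeholder endmembers the winter-like input yields roughly 60% coal and 26% motor vehicles, the same qualitative pattern the abstract reports.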
Issues in Accountability for Saskatchewan Schools. SIDRU Research Report No. 9.
ERIC Educational Resources Information Center
Cooper, Elizabeth
Issues of accountability in enterprises that provide educational services at the individual and institutional levels are explored in this paper. Two conceptions of accountability are addressed: accountability as measurement and as ethics. A major source of confusion is the disagreement between the two ways of thinking about the meaning and…
Tropospheric ozone using an emission tagging technique in the CAM-Chem and WRF-Chem models
NASA Astrophysics Data System (ADS)
Lupascu, A.; Coates, J.; Zhu, S.; Butler, T. M.
2017-12-01
Tropospheric ozone is a short-lived climate-forcing pollutant. High concentrations of ozone can affect human health (cardiorespiratory effects and increased mortality from long-term exposure) and damage crops. Attributing ozone concentrations to the contributions from different sources indicates the effects of locally emitted or transported precursors on ozone levels in specific regions. This information could be an important component of the design of emission reduction strategies, indicating which emission sources could be targeted for effective reductions and thus reducing the burden of ozone pollution. Using a "tagging" approach within the CAM-Chem (global) and WRF-Chem (regional) models, we can quantify the contribution of individual NOx and VOC precursor emissions to air quality. When precursor emissions of NOx are tagged, the largest contributors to ozone levels are anthropogenic sources, while for tagged VOC precursor emissions, biogenic sources and methane account for more than 50% of ozone levels. Further, we have extended the NOx tagging method to investigate continental source-region contributions to ozone concentrations over various receptor regions across the globe, with a zoom over Europe. In general, summertime maximum ozone in most receptor regions is largely attributable to local emissions of anthropogenic NOx and biogenic VOC. During the rest of the year, especially during springtime, ozone in most receptor regions shows stronger influences from anthropogenic emissions of NOx and VOC in remote source regions.
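The bookkeeping idea behind such tagging schemes can be illustrated with a toy model: each tag carries its own share of the tracer through identical linear production and loss, so the tagged contributions always sum to the total and their shares mirror the emission shares. The tag names and rates below are arbitrary illustrative choices, not the models' actual configuration.

```python
# Hypothetical tagged-source emission rates (arbitrary units per time step).
emissions = {"anthropogenic_NOx": 6.0, "biogenic_NOx": 2.0, "soil_NOx": 1.0}
loss_rate = 0.1                          # first-order loss per time step
tags = {k: 0.0 for k in emissions}       # tagged tracer burdens, start empty

# March the linear system to near steady state; every tag obeys the same
# production/loss dynamics, so attribution stays exact at every step.
for _ in range(200):
    for k in tags:
        tags[k] += emissions[k] - loss_rate * tags[k]

total = sum(tags.values())
shares = {k: tags[k] / total for k in tags}
# Steady state of each tag is emissions[k] / loss_rate, so the shares
# equal the emission shares (here 6/9, 2/9, 1/9) because loss is linear.
```

Real tagged chemistry is nonlinear in the untagged species, which is why tagging is applied to the attribution bookkeeping rather than to the chemistry itself; this toy keeps only the conservation property.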
Agricultural Liming, Irrigation, and Carbon Sequestration
NASA Astrophysics Data System (ADS)
McGill, B. M.; Hamilton, S. K.
2015-12-01
Row crop farmers routinely add inorganic carbon to soils in the form of crushed lime (e.g., calcite or dolomite minerals) and/or inadvertently as bicarbonate alkalinity naturally dissolved in groundwater used for irrigation. In the soil these carbonates can act as either a source or sink of carbon dioxide, depending in large part on nitrogen fertilization and nitrification. The potentially variable fate of lime carbon is not accounted for in the IPCC greenhouse gas inventory model for lime emissions, which assumes that all lime carbon becomes carbon dioxide (irrigation additions are not accounted for). In a corn-soybean-wheat crop rotation at the Kellogg Biological Station Long Term Ecological Research site in southwest Michigan, we are collecting soil porewater from several depths in the vadose zone across a nitrogen fertilizer gradient with and without groundwater irrigation. The soil profile in this region is dominated by carbonate rich glacial outwash that lies 1.5 m below a carbonate-leached zone. We analyze the porewater stoichiometry of calcium, magnesium, and carbonate alkalinity in a conceptual model to reveal the source/sink fate of inorganic carbon. High nitrate porewater concentrations are associated with net carbon dioxide production in the carbonate-leached zone, according to our model. This suggests that the acidity associated with nitrification of the nitrogen fertilizer, which is evident from soil pH measurements, is driving the ultimate fate of lime carbon in the vadose zone. Irrigation is a significant source of both alkalinity and nitrate in drier years, compared to normal rates of liming and fertilization. We will also explore the observed dramatic changes in porewater chemistry and the relationship between irrigation and inorganic carbon fate above and within the native carbonate layer.
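The conceptual model described above compares porewater alkalinity with Ca + Mg: carbonate dissolved by carbonic acid yields two equivalents of alkalinity per mole of Ca + Mg (a CO2 sink), while dissolution by nitric acid from nitrification yields only one (ultimately a CO2 source). The classification threshold and example concentrations in this sketch are illustrative assumptions, not the study's data.

```python
def carbonate_fate(ca_mmol, mg_mmol, alkalinity_meq):
    """Classify carbonate dissolution stoichiometry as CO2 sink- or source-like.

    Carbonic-acid dissolution: CaCO3 + CO2 + H2O -> Ca2+ + 2 HCO3-
        alkalinity ~= 2*(Ca+Mg), ratio ~= 1.0   (CO2 consumed: sink)
    Nitric-acid dissolution:   CaCO3 + HNO3 -> Ca2+ + NO3- + HCO3-
        alkalinity ~= 1*(Ca+Mg), ratio ~= 0.5   (no CO2 consumed; CO2
        released on downstream equilibration: source)
    """
    ratio = alkalinity_meq / (2.0 * (ca_mmol + mg_mmol))
    # 0.75 is an arbitrary midpoint cutoff between the two endmember ratios.
    return ratio, ("sink-like" if ratio > 0.75 else "source-like")


# Hypothetical porewater sample: high nitrate, alkalinity deficit.
ratio, fate = carbonate_fate(ca_mmol=1.5, mg_mmol=0.5, alkalinity_meq=2.2)
```

A ratio near 1 indicates carbonic-acid weathering (lime carbon retained as bicarbonate); a ratio near 0.5, as in this example, is the fertilizer-acidity signature the abstract associates with net CO2 production.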
Ground heat flux and power sources of low-enthalpy geothermal systems
NASA Astrophysics Data System (ADS)
Bayer, Peter; Blum, Philipp; Rivera, Jaime A.
2015-04-01
Geothermal heat pumps commonly extract energy from the shallow ground at depths down to approximately 400 m. Vertical borehole heat exchangers are often applied, which are seasonally operated for decades. During this lifetime, thermal anomalies are induced in the ground and near-surface aquifers, which often grow over the years and which degrade the overall performance of the geothermal system. As a basis for prediction and control of the evolving energy imbalance in the ground, focus is typically set on ground temperatures. This is reflected in regulative temperature thresholds and in temperature trends, which serve as indicators for renewability and sustainability. In our work, we examine the fundamental heat flux and power sources, as well as their temporal and spatial variability, during geothermal heat pump operation. The underlying rationale is that knowledge of the primary heat sources is fundamental for control of ground temperature evolution. This insight is also important to judge the validity of simplified modelling frameworks. For instance, we reveal that vertical heat flux from the surface dominates the basal heat flux towards a borehole. Both fluxes need to be accounted for as proper vertical boundary conditions in the model. Additionally, the role of horizontal groundwater advection is inspected. Moreover, by adopting the ground energy deficit and long-term replenishment as criteria for system sustainability, an uncommon perspective is taken that is based on the primary parameter rather than on induced local temperatures. In our synthetic study and dimensionless analysis, we demonstrate that the time of ground energy recovery after system shutdown may be longer than what is expected from local temperature trends. In contrast, unrealistically long recovery periods and extreme thermal anomalies are predicted when vertical ground heat fluxes are neglected and only the energy content of the geothermal reservoir is considered.
A Model for the Sources of the Slow Solar Wind
NASA Technical Reports Server (NTRS)
Antiochos, S. K.; Mikic, Z.; Titov, V. S.; Lionello, R.; Linker, J. A.
2011-01-01
Models for the origin of the slow solar wind must account for two seemingly contradictory observations: the slow wind has the composition of the closed-field corona, implying that it originates from the continuous opening and closing of flux at the boundary between open and closed field. On the other hand, the slow wind also has large angular width, up to approximately 60°, suggesting that its source extends far from the open-closed boundary. We propose a model that can explain both observations. The key idea is that the source of the slow wind at the Sun is a network of narrow (possibly singular) open-field corridors that map to a web of separatrices and quasi-separatrix layers in the heliosphere. We compute analytically the topology of an open-field corridor and show that it produces a quasi-separatrix layer in the heliosphere that extends to angles far from the heliospheric current sheet. We then use an MHD code and MDI/SOHO observations of the photospheric magnetic field to calculate numerically, with high spatial resolution, the quasi-steady solar wind and magnetic field for a time period preceding the 2008 August 1 total solar eclipse. Our numerical results imply that, at least for this time period, a web of separatrices (which we term an S-web) forms with sufficient density and extent in the heliosphere to account for the observed properties of the slow wind. We discuss the implications of our S-web model for the structure and dynamics of the corona and heliosphere and propose further tests of the model. Key words: solar wind - Sun: corona - Sun: magnetic topology
Rissling, Anthony J.; Miyakoshi, Makoto; Sugar, Catherine A.; Braff, David L.; Makeig, Scott; Light, Gregory A.
2014-01-01
Although sensory processing abnormalities contribute to widespread cognitive and psychosocial impairments in schizophrenia (SZ) patients, scalp-channel measures of averaged event-related potentials (ERPs) mix contributions from distinct cortical source-area generators, diluting the functional relevance of channel-based ERP measures. SZ patients (n = 42) and non-psychiatric comparison subjects (n = 47) participated in a passive auditory duration oddball paradigm, eliciting a triphasic (Deviant−Standard) tone ERP difference complex, here termed the auditory deviance response (ADR), comprised of a mid-frontal mismatch negativity (MMN), P3a positivity, and re-orienting negativity (RON) peak sequence. To identify its cortical sources and to assess possible relationships between their response contributions and clinical SZ measures, we applied independent component analysis to the continuous 68-channel EEG data and clustered the resulting independent components (ICs) across subjects on spectral, ERP, and topographic similarities. Six IC clusters centered in right superior temporal, right inferior frontal, ventral mid-cingulate, anterior cingulate, medial orbitofrontal, and dorsal mid-cingulate cortex each made triphasic response contributions. Although correlations between measures of SZ clinical, cognitive, and psychosocial functioning and standard (Fz) scalp-channel ADR peak measures were weak or absent, for at least four IC clusters one or more significant correlations emerged. In particular, differences in MMN peak amplitude in the right superior temporal IC cluster accounted for 48% of the variance in SZ-subject performance on tasks necessary for real-world functioning and medial orbitofrontal cluster P3a amplitude accounted for 40%/54% of SZ-subject variance in positive/negative symptoms. Thus, source-resolved auditory deviance response measures including MMN may be highly sensitive to SZ clinical, cognitive, and functional characteristics. PMID:25379456
Engineering light emission of two-dimensional materials in both the weak and strong coupling regimes
NASA Astrophysics Data System (ADS)
Brotons-Gisbert, Mauro; Martínez-Pastor, Juan P.; Ballesteros, Guillem C.; Gerardot, Brian D.; Sánchez-Royo, Juan F.
2018-01-01
Two-dimensional (2D) materials have promising applications in optoelectronics, photonics, and quantum technologies. However, their intrinsically low light absorption limits their performance, and potential devices must be accurately engineered for optimal operation. Here, we apply a transfer matrix-based source-term method to optimize light absorption and emission in 2D materials and related devices in weak and strong coupling regimes. The implemented analytical model accurately accounts for experimental results reported for representative 2D materials such as graphene and MoS2. The model has been extended to propose structures to optimize light emission by exciton recombination in MoS2 single layers, light extraction from arbitrarily oriented dipole monolayers, and single-photon emission in 2D materials. Also, it has been successfully applied to retrieve exciton-cavity interaction parameters from MoS2 microcavity experiments. The present model appears as a powerful and versatile tool for the design of new optoelectronic devices based on 2D semiconductors such as quantum light sources and polariton lasers.
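A normal-incidence transfer-matrix calculation of the kind the abstract builds on can be sketched in a few lines: multiply the characteristic matrices of the layers, then read off reflectance, transmittance, and absorptance. This is a generic textbook transfer-matrix sketch, not the paper's source-term implementation, and the monolayer-like refractive index and thickness below are illustrative assumptions.

```python
import numpy as np

def layer_matrix(n, d, wavelength):
    """Characteristic matrix of a homogeneous layer at normal incidence.

    Sign convention: time dependence exp(-i*w*t); absorbing media have
    n = n' + i*kappa with kappa > 0.
    """
    delta = 2 * np.pi * n * d / wavelength          # complex phase thickness
    return np.array([[np.cos(delta), -1j * np.sin(delta) / n],
                     [-1j * n * np.sin(delta), np.cos(delta)]])

def rta(n_layers, d_layers, n_in, n_out, wavelength):
    """Reflectance, transmittance and absorptance of a layer stack."""
    M = np.eye(2, dtype=complex)
    for n, d in zip(n_layers, d_layers):
        M = M @ layer_matrix(n, d, wavelength)
    (m11, m12), (m21, m22) = M
    denom = (m11 + m12 * n_out) * n_in + (m21 + m22 * n_out)
    r = ((m11 + m12 * n_out) * n_in - (m21 + m22 * n_out)) / denom
    t = 2 * n_in / denom
    R = abs(r) ** 2
    T = abs(t) ** 2 * n_out.real / n_in.real        # transparent outer media
    return R, T, 1.0 - R - T

# Example: a 0.65 nm absorbing layer (assumed n = 4.5 + 1.5j, roughly
# monolayer-like) on glass, illuminated from air at 550 nm.
R, T, A = rta([4.5 + 1.5j], [0.65e-9], n_in=1.0 + 0j, n_out=1.45 + 0j,
              wavelength=550e-9)
```

With these assumed optical constants the monolayer absorbs a few percent of the incident light, illustrating the intrinsically low absorption that motivates the cavity engineering discussed in the abstract.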
NASA Astrophysics Data System (ADS)
Mahanthesh, B.; Gireesha, B. J.; Shehzad, S. A.; Rauf, A.; Kumar, P. B. Sampath
2018-05-01
This work visualizes the nonlinear radiative flow of a hydromagnetic nanofluid induced by the rotation of a disk. The considered nanofluid is a mixture of water and Ti6Al4V or AA7072 nanoparticles. Various nanoparticle shapes (lamina, column, sphere, tetrahedron and hexahedron) are considered in the analysis. Irregular heat source and nonlinear radiative terms are accounted for in the energy equation. We used a heat flux condition instead of a constant surface temperature condition; the heat flux condition is more realistic and consistent with the physical nature of the problem. The problem is made dimensionless using suitable similarity transformations. The Runge-Kutta-Fehlberg scheme is adopted to obtain numerical solutions of the governing nonlinear ordinary differential systems. The solutions are plotted for various values of the emerging physical constraints, and the effects of the different nanoparticle shapes are presented and discussed.
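The numerical approach named above, an adaptive embedded Runge-Kutta scheme applied to a nonlinear similarity ODE, can be sketched with SciPy's RK45 (a close relative of Runge-Kutta-Fehlberg). The toy equation y'' + y y' = 0, y(0) = 0, y'(0) = 1 stands in for the paper's rotating-disk system; it has the closed form y(η) = √2 tanh(η/√2), which is handy for checking the solver.

```python
from scipy.integrate import solve_ivp

# Stand-in nonlinear similarity ODE (NOT the paper's system):
#   y'' + y*y' = 0,  y(0) = 0,  y'(0) = 1
# written as a first-order system in state = [y, y'].
def rhs(eta, state):
    y, yp = state
    return [yp, -y * yp]

# Adaptive embedded Runge-Kutta integration over the similarity variable.
sol = solve_ivp(rhs, (0.0, 5.0), [0.0, 1.0], method="RK45",
                rtol=1e-9, atol=1e-12)
y_end = sol.y[0][-1]   # should approach sqrt(2)*tanh(eta/sqrt(2)) at eta = 5
```

In the actual boundary-value setting, such an integrator is typically wrapped in a shooting iteration that adjusts unknown initial slopes until the far-field conditions are met; the toy above shows only the inner integration step.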
12 CFR 561.53 - United States Treasury General Account.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 12 Banks and Banking 6 2012-01-01 2012-01-01 false United States Treasury General Account. 561.53... REGULATIONS AFFECTING ALL SAVINGS ASSOCIATIONS § 561.53 United States Treasury General Account. The term United States Treasury General Account means an account maintained in the name of the United States...
12 CFR 161.53 - United States Treasury General Account.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 12 Banks and Banking 1 2014-01-01 2014-01-01 false United States Treasury General Account. 161.53... REGULATIONS AFFECTING ALL SAVINGS ASSOCIATIONS § 161.53 United States Treasury General Account. The term United States Treasury General Account means an account maintained in the name of the United States...
18 CFR 356.3 - Preservation of records for oil pipeline companies.
Code of Federal Regulations, 2010 CFR
2010-04-01
... Accounting 7. Ledgers. 8. Journals. 9. Vouchers. 10. Accounts receivable. 11. Records of accounting codes and... management, accounting, financial or legal service, and agreements with agents 3 years after expiration or... other long-term credit agreements 6 years after redemption. Financial Accounting 7. Ledgers: (a) General...
12 CFR 561.53 - United States Treasury General Account.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 12 Banks and Banking 5 2010-01-01 2010-01-01 false United States Treasury General Account. 561.53... REGULATIONS AFFECTING ALL SAVINGS ASSOCIATIONS § 561.53 United States Treasury General Account. The term United States Treasury General Account means an account maintained in the name of the United States...
12 CFR 193.3 - Qualification of public accountant.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 12 Banks and Banking 1 2013-01-01 2013-01-01 false Qualification of public accountant. 193.3... REQUIREMENTS Form and Content of Financial Statements § 193.3 Qualification of public accountant. The term “qualified public accountant” means a certified public accountant or licensed public accountant certified or...
12 CFR 193.3 - Qualification of public accountant.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 12 Banks and Banking 1 2014-01-01 2014-01-01 false Qualification of public accountant. 193.3... REQUIREMENTS Form and Content of Financial Statements § 193.3 Qualification of public accountant. The term “qualified public accountant” means a certified public accountant or licensed public accountant certified or...
12 CFR 390.382 - Qualification of public accountant.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 12 Banks and Banking 5 2012-01-01 2012-01-01 false Qualification of public accountant. 390.382....382 Qualification of public accountant. (See also 17 CFR 210.2-01.) The term “qualified public accountant” means a certified public accountant or licensed public accountant certified or licensed by a...
12 CFR 193.3 - Qualification of public accountant.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 12 Banks and Banking 1 2012-01-01 2012-01-01 false Qualification of public accountant. 193.3... REQUIREMENTS Form and Content of Financial Statements § 193.3 Qualification of public accountant. The term “qualified public accountant” means a certified public accountant or licensed public accountant certified or...
Strategic implementation and accountability: the case of the long-term care alliance.
Seaman, Al; Elias, Maria; O'Neill, Bill; Yatabe, Karen
2010-01-01
A group of chief executives of long-term care homes formed an alliance in order to tap the resources residing within their management teams. Adopting a strategic implementation project based on a framework of accountability, the executives were able to better understand the uncertainties of the environment and potentially structure their strategic implementation to best use scarce resources. The framework of accountability allowed the homes to recognize the need for a strong business approach to long-term care. Communication improved throughout the organizations while systems and resources showed improved utilization. Quality became the driving force for all actions taken to move the organizations toward achieving their visions.
77 FR 42175 - Securities Act Industry Guides
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-18
... June 30, 2009, the Financial Accounting Standards Board (``FASB'') issued FASB Statement of Financial... Accepted Accounting Principles--a replacement of FASB Statement No. 162 (``Statement No. 168''), to establish the FASB Codification as the source of authoritative non-Commission accounting principles...
12 CFR 390.382 - Qualification of public accountant. (See also 17 CFR 210.2-01.)
Code of Federal Regulations, 2013 CFR
2013-01-01
... 12 Banks and Banking 5 2013-01-01 2013-01-01 false Qualification of public accountant. (See also... Accounting Requirements § 390.382 Qualification of public accountant.(See also 17 CFR 210.2-01.) The term “qualified public accountant” means a certified public accountant or licensed public accountant certified or...
12 CFR 390.382 - Qualification of public accountant. (See also 17 CFR 210.2-01.)
Code of Federal Regulations, 2014 CFR
2014-01-01
... 12 Banks and Banking 5 2014-01-01 2014-01-01 false Qualification of public accountant. (See also... Accounting Requirements § 390.382 Qualification of public accountant. (See also 17 CFR 210.2-01.) The term “qualified public accountant” means a certified public accountant or licensed public accountant certified or...
Code of Federal Regulations, 2013 CFR
2013-01-01
... constitutes its agreement to be: (1) Bound by the terms and conditions of the program, including without limitation, assessments and the terms of the Master Agreement as set forth on the FDIC's Web site; (2... noninterest-bearing transaction accounts will no longer be guaranteed in full under the Transaction Account...
Code of Federal Regulations, 2011 CFR
2011-01-01
... constitutes its agreement to be: (1) Bound by the terms and conditions of the program, including without limitation, assessments and the terms of the Master Agreement as set forth on the FDIC's Web site; (2... noninterest-bearing transaction accounts will no longer be guaranteed in full under the Transaction Account...
Code of Federal Regulations, 2012 CFR
2012-01-01
... constitutes its agreement to be: (1) Bound by the terms and conditions of the program, including without limitation, assessments and the terms of the Master Agreement as set forth on the FDIC's Web site; (2... noninterest-bearing transaction accounts will no longer be guaranteed in full under the Transaction Account...
Code of Federal Regulations, 2014 CFR
2014-01-01
... constitutes its agreement to be: (1) Bound by the terms and conditions of the program, including without limitation, assessments and the terms of the Master Agreement as set forth on the FDIC's Web site; (2... noninterest-bearing transaction accounts will no longer be guaranteed in full under the Transaction Account...
Learning Essential Terms and Concepts in Statistics and Accounting
ERIC Educational Resources Information Center
Peters, Pam; Smith, Adam; Middledorp, Jenny; Karpin, Anne; Sin, Samantha; Kilgore, Alan
2014-01-01
This paper describes a terminological approach to the teaching and learning of fundamental concepts in foundation tertiary units in Statistics and Accounting, using an online dictionary-style resource (TermFinder) with customised "termbanks" for each discipline. Designed for independent learning, the termbanks support inquiring students…
Is There a Dark Matter Signal in the Galactic Positron Annihilation Radiation?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lingenfelter, R. E.; Rothschild, R. E.; Higdon, J. C.
2009-07-17
Assuming Galactic positrons do not travel far before annihilating, a difference between the observed 511 keV annihilation flux distribution and that of positron production expected from β+ decay in Galactic iron nucleosynthesis was invoked as evidence of a new source and signal of dark matter. We show, however, that the dark matter sources cannot account for the observed positronium fraction without extensive propagation. Yet with such propagation, standard nucleosynthetic sources can fully account for the spatial differences and positronium fraction, leaving no new signal for dark matter to explain.
Calculations of cosmogenic nuclide production rates in the Earth's atmosphere and their inventories
NASA Technical Reports Server (NTRS)
O'Brien, K.
1986-01-01
The production rates of cosmogenic isotopes in the Earth's atmosphere and their resulting terrestrial abundances have been calculated, taking into account both geomagnetic and solar-modulation effects. The local interstellar flux was assumed to be that of Garcia-Munoz et al. Solar modulation was accounted for using the heliocentric potential model and expressed in terms of the Deep River neutron monitor count rates. The geomagnetic field was represented by vertical cutoffs calculated by Shea and Smart and non-vertical cutoffs calculated using ANGRI. The local interstellar particle flux was first modulated using the heliocentric potential field. The modulated cosmic-ray fluxes reaching the Earth's orbit then interacted with the geomagnetic field as though it were a high-pass filter. The interaction of the cosmic radiation with the Earth's atmosphere was calculated using the Boltzmann transport equation. Spallation cross sections for isotope production were calculated using the formalism of Silberberg and Tsao, and other cross sections were taken from standard sources. Inventories were calculated by accounting for the variation in solar modulation and geomagnetic field strength with time. Results for many isotopes, including C-14, Be-7 and Be-10, are in generally good agreement with existing data. The C-14 inventory, for instance, amounts to 1.75/sq cm/s, in excellent agreement with direct estimates.
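The solar-modulation step in the abstract above can be sketched with the heliocentric (force-field) potential approximation. The power-law interstellar spectrum and the potential values below are illustrative assumptions, not the paper's actual inputs.

```python
import numpy as np

M = 938.0  # proton rest energy, MeV

def lis_flux(E):
    """Toy local interstellar proton spectrum (illustrative power law);
    E is kinetic energy in MeV, flux in arbitrary units."""
    return 1.0e4 * (E + M) ** -2.7

def modulated_flux(E, phi):
    """Force-field approximation: a particle observed near Earth at kinetic
    energy E had energy E + phi in interstellar space; the phase-space
    factor rescales the flux accordingly."""
    E_is = E + phi
    factor = E * (E + 2 * M) / (E_is * (E_is + 2 * M))
    return lis_flux(E_is) * factor

E = np.logspace(1, 4, 50)           # 10 MeV - 10 GeV
J_quiet = modulated_flux(E, 400.0)  # low solar activity, phi ~ 400 MV
J_max = modulated_flux(E, 1000.0)   # solar maximum, phi ~ 1000 MV
# Modulation suppresses the low-energy flux most strongly.
```

The modulated spectrum would then be fed through the geomagnetic cutoff "high-pass filter" before the atmospheric transport calculation.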
77 FR 19740 - Water Sources for Long-Term Recirculation Cooling Following a Loss-of-Coolant Accident
Federal Register 2010, 2011, 2012, 2013, 2014
2012-04-02
... NUCLEAR REGULATORY COMMISSION [NRC-2010-0249] Water Sources for Long-Term Recirculation Cooling... Regulatory Guide (RG) 1.82, ``Water Sources for Long-Term Recirculation Cooling Following a Loss-of-Coolant... regarding the sumps and suppression pools that provide water sources for emergency core cooling, containment...
[Anthropogenic ammonia emission inventory and characteristics in the Pearl River Delta Region].
Yin, Sha-sha; Zheng, Jun-yu; Zhang, Li-jun; Zhong, Liu-ju
2010-05-01
Based on collected activity data and emission factors for anthropogenic ammonia sources, a 2006-based anthropogenic ammonia emission inventory was developed for the Pearl River Delta (PRD) region by source category and city, using appropriate estimation methods. The results show: (1) total NH3 emission from anthropogenic sources in the PRD region was 194.8 kt; (2) agricultural sources were the major contributors, with livestock sources accounting for 62.1% of total NH3 emission and nitrogen fertilizer application for 21.7%; (3) broilers were the largest contributor among livestock sources, accounting for 43.4% of livestock emissions, followed by hogs with a contribution of 32.1%; (4) Guangzhou was the largest ammonia-emitting city in the PRD region, followed by Jiangmen, accounting for 23.4% and 19.1% of total NH3 emission in the region, respectively, with livestock and nitrogen fertilizer application as the major sources.
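The bottom-up estimation used in inventories like this one reduces to emission = activity × emission factor, summed over source categories. The activity data and emission factors below are invented for illustration, not the study's values.

```python
# Bottom-up NH3 inventory sketch: emission = activity * emission factor,
# summed over source categories. All numbers are illustrative assumptions.
activity = {               # activity data (head of livestock, t of fertilizer N)
    "broilers": 1.2e8,
    "hogs": 3.5e7,
    "N_fertilizer_t": 9.0e5,
}
emission_factor = {        # kg NH3 emitted per unit activity (assumed)
    "broilers": 0.15,
    "hogs": 1.6,
    "N_fertilizer_t": 45.0,
}
emissions_kt = {s: activity[s] * emission_factor[s] / 1e6   # kg -> kt
                for s in activity}
total = sum(emissions_kt.values())
shares = {s: 100 * e / total for s, e in emissions_kt.items()}
```

Summing category emissions and normalizing gives the percentage contributions reported in such inventories.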
Recent changes in the oxidized to reduced nitrogen ratio in atmospheric precipitation
NASA Astrophysics Data System (ADS)
Kurzyca, Iwona; Frankowski, Marcin
2017-10-01
In this study, the characteristics of precipitation in terms of various nitrogen forms (NO3-, NO2-, NH4+, Norganic, Ntotal) are presented. The samples were collected in areas of different anthropogenic pressure (an urban area vs. an ecologically protected woodland area, ∼30 km apart; Wielkopolska region, Poland). Based on the Nox and Nred emission profiles (Nox/Nred ratio), a temporal and spatial comparison was carried out. For both sites, during a decade of observation, more than 60% of samples had a higher contribution of N-NH4+ than N-NO3-, the amount of N-NO2- was negligible, and organic nitrogen amounted to 30% of total nitrogen content, which reached up to 16 mg/l. Precipitation events with high concentrations of nitrogen species were investigated in terms of possible local and remote sources of nitrogen (synoptic meteorology) to indicate areas that can act as potential sources of N-compounds. Based on chemometric analysis, it was found that Nred implies Nox and vice versa, due to interactions between them in the atmosphere. Taking into account the analysis of precipitation occurring simultaneously at both locations (about 50% of all rainfall episodes), it was observed that anthropogenic pressure differentiates but does not determine the chemical composition of precipitation in the investigated areas (urban vs. woodland; ∼30 km apart). The thermodynamics of the atmosphere had a significant impact on concentrations of N-NO3- and N-NH4+ in precipitation, as did the circulation of air masses and remote N sources responsible for transboundary inflow of pollutants.
Lv, Ying; Huang, Guohe; Sun, Wei
2013-01-01
A scenario-based interval two-phase fuzzy programming (SITF) method was developed for water resources planning in a wetland ecosystem. The SITF approach incorporates two-phase fuzzy programming, interval mathematical programming, and scenario analysis within a general framework. It can tackle fuzzy and interval uncertainties in cost coefficients, resource availabilities, water demands, hydrological conditions, and other parameters within a multi-source supply and multi-sector consumption context. The SITF method has the advantage of effectively improving the membership degrees of the system objective and all fuzzy constraints, so that both a higher satisfaction grade of the objective and more efficient utilization of system resources can be guaranteed. With systematic consideration of the water demands of the ecosystem, the SITF method was successfully applied to Baiyangdian Lake, the largest wetland in North China. Multi-source supplies (including the inter-basin water sources of Yuecheng Reservoir and the Yellow River) and multiple water users (including agricultural, industrial, and domestic sectors) were taken into account. The results indicated that the SITF approach generates useful solutions for identifying long-term water allocation and transfer schemes under multiple economic, environmental, ecological, and system-security targets. It supports comparative analysis of the satisfaction degrees of decisions under various policy scenarios. Moreover, it helps quantify the relationship between hydrological change and human activities, such that a scheme for ecologically sustainable water supply to Baiyangdian Lake can be achieved. Copyright © 2012 Elsevier B.V. All rights reserved.
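The core of fuzzy programming methods like SITF is maximizing a common membership degree λ of a fuzzy objective and fuzzy constraints (Zimmermann's formulation, which the two-phase method refines). The toy water-allocation numbers below are assumptions, not the Baiyangdian Lake data.

```python
import numpy as np
from scipy.optimize import linprog

# Phase one of a Zimmermann-style fuzzy LP: maximise the common membership
# degree lam of the fuzzy cost objective and fuzzy demand constraints.
# x1, x2 = water allocated to two sectors; all numbers are illustrative.
c = np.array([2.0, 3.0])    # unit supply cost per sector
z0, p0 = 40.0, 10.0         # aspiration cost and its tolerance
b = np.array([10.0, 8.0])   # fuzzy demand targets
p = np.array([4.0, 3.0])    # demand tolerances
S = 25.0                    # hard supply cap

# Decision vector: [x1, x2, lam]; linprog minimises, so minimise -lam.
A_ub = [
    list(c) + [p0],         # c.x + p0*lam <= z0 + p0   (cost membership)
    [-1, 0, p[0]],          # x1 >= b1 - (1-lam)*p1     (demand membership)
    [0, -1, p[1]],
    [1, 1, 0],              # x1 + x2 <= S              (crisp supply cap)
]
b_ub = [z0 + p0, p[0] - b[0], p[1] - b[1], S]
res = linprog([0, 0, -1], A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None), (0, None), (0, 1)])
x1, x2, lam = res.x         # lam = 23/27 ~ 0.852 for these numbers
```

The two-phase extension would then fix λ at this level and further raise the individual constraint memberships; the interval and scenario layers of SITF repeat this solve over parameter bounds and hydrological scenarios.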
Organic matter sources and rehabilitation of the Sacramento-San Joaquin Delta (California, USA)
Jassby, A.D.; Cloern, J.E.
2000-01-01
1. The Sacramento San Joaquin River Delta, a complex mosaic of tidal freshwater habitats in California, is the focus of a major ecosystem rehabilitation effort because of significant long-term changes in critical ecosystem functions. One of these functions is the production, transport and transformation of organic matter that constitutes the primary food supply, which may be sub-optimal at trophic levels supporting fish recruitment. A long historical data set is used to define the most important organic matter sources, the factors underlying their variability, and the implications of ecosystem rehabilitation actions for these sources. 2. Tributary-borne loading is the largest organic carbon source on an average annual Delta-wide basis; phytoplankton production and agricultural drainage are secondary; wastewater treatment plant discharge, tidal marsh drainage and possibly aquatic macrophyte production are tertiary; and benthic microalgal production, urban run-off and other sources are negligible. 3. Allochthonous dissolved organic carbon must be converted to particulate form - with losses due to hydraulic flushing and to heterotroph growth inefficiency - before it becomes available to the metazoan food web. When these losses are accounted for, phytoplankton production plays a much larger role than is evident from a simple accounting of bulk organic carbon sources, especially in seasons critical for larval development and recruitment success. Phytoplankton-derived organic matter is also an important component of particulate loading to the Delta. 4. The Delta is a net producer of organic matter in critically dry years but, because of water diversion from the Delta, transport of organic matter from the Delta to important, downstream nursery areas in San Francisco Bay is always less than transport into the Delta from upstream sources. 5. Of proposed rehabilitation measures, increased use of floodplains probably offers the biggest increase in organic matter sources. 6. 
An isolated diversion facility - channelling water from the Sacramento River around the Delta to the water projects - would result in substantial loading increases during winter and autumn, but little change in spring and summer when food availability probably matters most to developing organisms. 7. Flow and fish barriers in the channel could have significant effects, especially on phytoplankton sources and in dry years, by eliminating 'short-circuits' in the transport of organic matter to diversion points. 8. Finally, productivity of intentionally flooded islands probably would exceed that of adjacent channels because of lower turbidity and shallower mean depth, although vascular plants rather than phytoplankton could dominate if depths were too shallow. Copyright (C) 2000 John Wiley and Sons, Ltd.
Descriptive epidemiology of indoor odor complaints at a large teaching institution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boswell, R.T.; DiBerardinis, L.; Ducatman, A.
1994-04-01
Investigation of indoor odor complaints consumes a substantial portion of the time and resources of many industrial hygiene offices, yet very little information has been published on the subject. We examined 3 years of data on indoor odor complaints at the Massachusetts Institute of Technology in Cambridge, Massachusetts, in order to identify factors that may trigger odor complaints. Plumbing and maintenance accounted for the majority of activities responsible for the identified sources (35% of calls), while research and teaching activities accounted for only 11 percent of calls. A larger number of calls were received during the winter months, when windows are closed and school is in session. There was generally good correlation between a complainant's description of an odor and the actual source. Offices, secretarial areas, and office support rooms accounted for almost half of the calls, while laboratory facilities accounted for 19 percent. Although the chemistry department was responsible for the largest number of calls, the odor sources in these complaints were related primarily to plumbing (dried sink and floor drains) and not to the chemicals used for research and teaching. Four types of abatement measures were used when odor sources could be identified: natural dissipation of the odor (23%), advice for prevention of future odors (11%), controlling an odor source (16%), and correction of the odor source (33%). We conclude that the majority of indoor odor sources that trigger complaints are related to maintenance of the physical plant, and that complaints are likely to be generated by unfamiliarity with certain odors. Recommendations are given to help reduce indoor odors and the time-consuming investigations into complaints about them. 10 refs., 4 figs.
The effectiveness of tobacco sales ban to minors: the case of Finland.
Rimpelä, A H; Rainio, S U
2004-06-01
To evaluate the effects of the 1977 and 1995 tobacco sales bans on tobacco acquisition by minors. Biennial nationwide postal surveys (adolescent health and lifestyle survey, AHLS) in 1977-2003; annual classroom surveys (school health promotion survey, SHPS) in 1996-2003. Entire Finland; 12, 14, 16, and 18 year olds (AHLS, n = 80 282); eighth and ninth graders (14-16 year olds) (SHPS, n = 226 681). Purchase of tobacco from commercial sources during the past month, purchase from different commercial (shop, kiosk, other outlet) and social sources, ease of buying tobacco, overall acquisition of tobacco products, daily smoking, tobacco experimenting. The decrease in tobacco purchase from commercial sources was small and short term after 1977 but large and permanent after 1995: the purchase rate among 14 year old smokers diminished from 90% to 67% by 2003, and among 16 year olds from 94% to 62%. Purchases in shops decreased most (14 year olds: from 39% to 14%; 16 year olds: from 76% to 27%); purchases in kiosks less. An increase was observed in obtaining tobacco from other outlets and from friends (social sources). Only 2-3% of 14-16 year old smokers used commercial sources exclusively when obtaining tobacco. Daily smoking began to decrease after 2001, following an earlier decrease in experimenting. No changes were observed among age groups not targeted by the ban. Legislation appears to have permanently changed tobacco sales practices and decreased purchases from commercial sources. Social sources need to be taken into account when controlling access to tobacco. Sales bans should be accompanied by other health promotion measures.
Unsteady Flow Dynamics and Acoustics of Two-Outlet Centrifugal Fan Design
NASA Astrophysics Data System (ADS)
Wong, I. Y. W.; Leung, R. C. K.; Law, A. K. Y.
2011-09-01
In this study, a centrifugal fan design with two flow outlets is investigated. This design aims to provide high mass flow rate but low-noise performance. Two-dimensional unsteady flow simulation with the CFD code FLUENT 6.3 is carried out to analyze the fan flow dynamics and acoustics. The calculations were done using the unsteady Reynolds-averaged Navier-Stokes (URANS) approach, in which effects of turbulence were accounted for using the κ-ɛ model. This work aims to provide insight into how the dominant noise source mechanisms vary with a key fan geometric parameter, namely the ratio between the cutoff distance and the radius of curvature of the fan housing. Four new fan designs were calculated. Simulation results show that the unsteady flow-induced forces on the fan blades are the main noise sources. The blade force coefficients are then used to build the dipole source terms in the Ffowcs Williams-Hawkings (FW-H) equation for estimating their noise effects. One design is found to deliver 34% more mass flow, with a sound pressure level (SPL) 10 dB lower, than the existing design.
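The loading-noise (dipole) term of the FW-H equation reduces, for a compact source, to an acoustic pressure proportional to the time derivative of the unsteady blade force. The sinusoidal force history below is a stand-in assumption for the CFD blade-force output, not the paper's data.

```python
import numpy as np

# Compact-dipole (FW-H loading-noise) estimate of far-field sound from an
# unsteady blade force history. The 400 Hz sinusoidal lift is an assumed
# stand-in for simulated blade force coefficients.
c0, r = 343.0, 1.0                        # sound speed (m/s), observer distance (m)
fs = 20000.0                              # sampling rate of the force signal, Hz
t = np.arange(0, 0.5, 1 / fs)
F = 0.05 * np.sin(2 * np.pi * 400 * t)    # fluctuating lift, N (assumed)

dFdt = np.gradient(F, 1 / fs)             # time derivative of the force
p_ac = dFdt / (4 * np.pi * c0 * r)        # far-field acoustic pressure, Pa

p_rms = np.sqrt(np.mean(p_ac ** 2))
SPL = 20 * np.log10(p_rms / 2e-5)         # dB re 20 uPa
```

Comparing SPL values computed this way from two designs' force histories is the kind of ranking the abstract's 10 dB difference refers to.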
Beer as a Rich Source of Fluoride Delivered into the Body.
Styburski, D; Baranowska-Bosiacka, I; Goschorska, M; Chlubek, D; Gutowska, I
2017-06-01
Fluoride is an element that, in small amounts, is necessary for the proper formation of teeth and bones. In excess, however, it increases the synthesis of reactive oxygen species and inflammatory mediators and impairs the action of enzymes. Beer is the most popular alcoholic beverage in the world. Due to its prevalence and volume of consumption, it should be considered a potential source of F- and taken into account in designing a balanced diet. Therefore, the aim of this study was to analyze beer samples for F- levels. Fluoride concentrations were measured using a Thermo Scientific Orion ion-selective electrode, and statistical analysis was based on two-way ANOVA and the t test. Compared to imported beers, Polish beers were characterized by the lowest mean F- concentration (0.089 ppm). The highest mean F- concentrations were recorded in beers from Thailand (0.260 ppm), Italy (0.238 ppm), Mexico (0.210 ppm), and China (0.203 ppm). Our study shows that beer is a significant source of fluoride for humans, which is mainly associated with the quality of the water used in beer production.
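A group comparison of the kind reported here can be sketched with a Welch t test. The individual readings below are fabricated, chosen only so the group means echo the reported 0.089 vs. 0.260 ppm.

```python
import numpy as np
from scipy import stats

# Welch t test comparing mean fluoride concentrations (ppm) between two
# beer origins. Readings are fabricated; only the group means are chosen
# to echo the reported values.
polish = np.array([0.08, 0.10, 0.09, 0.085, 0.09])   # mean ~ 0.089 ppm
thai = np.array([0.25, 0.27, 0.26, 0.25, 0.27])      # mean ~ 0.260 ppm

t_stat, p_val = stats.ttest_ind(polish, thai, equal_var=False)
mean_diff = thai.mean() - polish.mean()
```

With country and brand as factors, the same data would feed the two-way ANOVA the study mentions (e.g. via `statsmodels`).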
Large Torque Variations in Two Soft Gamma Repeaters
NASA Technical Reports Server (NTRS)
Woods, Peter M.; Kouveliotou, Chryssa; Gogus, Ersin; Finger, Mark H.; Swank, Jean; Markwardt, Craig B.; Hurley, Kevin; vanderKlis, Michiel; Six, N. Frank (Technical Monitor)
2001-01-01
We have monitored the pulse frequencies of the two soft gamma repeaters SGR 1806-20 and SGR 1900+14 through the beginning of year 2001, using primarily Rossi X-ray Timing Explorer Proportional Counter Array observations. In both sources, we observe large changes in the spin-down torque, up to a factor of approximately 4, which persist for several months. Using long-baseline phase-connected timing solutions as well as the overall frequency histories, we construct torque noise power spectra for each SGR. The power spectrum of each source is very red (power-law slope approximately -3.5). These power spectra are consistent in normalization with some accreting systems, yet much steeper in slope than any known accreting source. To the best of our knowledge, torque noise power spectra with a comparably steep frequency dependence have only been seen in young, glitching radio pulsars (e.g., Vela). The observed changes in spin-down rate do not correlate with burst activity; therefore, the physical mechanisms behind the two phenomena are likely unrelated. Within the context of the magnetar model, seismic activity cannot account for both the bursts and the long-term torque changes unless the seismically active regions are decoupled from one another.
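The slope measurement reported here can be illustrated by generating a time series with a prescribed red power spectrum and recovering the slope from its periodogram. The synthetic series stands in for the SGR timing residuals; the -3.5 slope is the only value taken from the abstract.

```python
import numpy as np

# Build a synthetic torque-noise series with power-law spectrum f^-3.5
# (the slope measured for the SGRs) and recover the slope from its
# periodogram. Purely illustrative; the real analysis used RXTE timing.
rng = np.random.default_rng(1)
n, dt = 4096, 1.0
f = np.fft.rfftfreq(n, dt)
amp = np.zeros_like(f)
amp[1:] = f[1:] ** (-3.5 / 2)            # |FT| ~ f^(-slope/2)
phases = rng.uniform(0, 2 * np.pi, f.size)
phases[-1] = 0.0                         # Nyquist component must be real
spectrum = amp * np.exp(1j * phases)
series = np.fft.irfft(spectrum, n)

psd = np.abs(np.fft.rfft(series)) ** 2   # periodogram of the series
mask = f > 0
slope, _ = np.polyfit(np.log10(f[mask]), np.log10(psd[mask]), 1)
```

On real, unevenly sampled phase-connected timing data the periodogram needs leakage control (windowing or red-noise-aware fitting), which this sketch omits.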
Nitrous oxide as a function of oxygen and archaeal gene abundance in the North Pacific
NASA Astrophysics Data System (ADS)
Trimmer, Mark; Chronopoulou, Panagiota-Myrsini; Maanoja, Susanna T.; Upstill-Goddard, Robert C.; Kitidis, Vassilis; Purdy, Kevin J.
2016-12-01
Oceanic oxygen minimum zones are strong sources of the potent greenhouse gas N2O, but its microbial source is unclear. Using 15NO2-, we characterized an exponential response of N2O production to decreasing oxygen between 1 and 30 μmol O2 l-1 within and below the oxycline, a relationship that held along a 550 km offshore transect in the North Pacific. Differences in the overall magnitude of N2O production were accounted for by archaeal functional gene abundance. A one-dimensional (1D) model, parameterized with our experimentally derived exponential terms, accurately reproduces N2O profiles in the top 350 m of the water column and, together with a strong 45N2O signature, indicated neither canonical nor nitrifier-denitrification production, while statistical modelling supported production by archaea, possibly via hybrid N2O formation. Further, with archaeal N2O production alone, we could balance high-resolution estimates of sea-to-air N2O exchange. Hence, a significant source of N2O, previously described as leakage from bacterial ammonium oxidation, is better described by low-oxygen archaeal production at the oxygen minimum zone's margins.
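The "experimentally derived exponential terms" amount to fitting a rate law of the form rate = a·exp(-k·O2). The synthetic rates and parameter values below are assumptions for illustration, not the cruise measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

# Fit an exponential dependence of N2O production on dissolved O2, the
# empirical form that would parameterise a 1-D model. Data are synthetic.
def n2o_rate(o2, a, k):
    """Production rate rises exponentially as O2 falls (arbitrary units)."""
    return a * np.exp(-k * o2)

o2 = np.array([1, 2, 5, 10, 15, 20, 30], dtype=float)   # umol O2 / l
noise = 1 + 0.02 * np.random.default_rng(2).standard_normal(o2.size)
rate = 2.0 * np.exp(-0.15 * o2) * noise                  # synthetic rates

(a_fit, k_fit), _ = curve_fit(n2o_rate, o2, rate, p0=(1.0, 0.1))
```

Integrating the fitted rate over measured O2 profiles is how such terms would propagate into modeled N2O depth profiles.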
The meaning of city noises: Investigating sound quality in Paris (France)
NASA Astrophysics Data System (ADS)
Dubois, Daniele; Guastavino, Catherine; Maffiolo, Valerie
2004-05-01
The sound quality of Paris (France) was investigated using field inquiries in actual environments (open questionnaires) and recordings under laboratory conditions (free-sorting tasks). Cognitive categories of soundscapes were inferred by means of psycholinguistic analyses of verbal data and mathematical analyses of similarity judgments. Results show that auditory judgments mainly rely on source identification. The appraisal of urban noise therefore depends on the qualitative evaluation of noise sources. The salience of human sounds in public spaces was demonstrated in relation to pleasantness judgments: soundscapes with human presence tend to be perceived as more pleasant than soundscapes consisting solely of mechanical sounds. Furthermore, human sounds are qualitatively processed as indicators of human outdoor activities, such as open markets, pedestrian areas, and sidewalk cafe districts that reflect city life. In contrast, mechanical noises (mainly traffic noise) are commonly described in terms of the physical properties (temporal structure, intensity) of a permanent background noise that also characterizes urban areas. This suggests that both quantitative and qualitative descriptions are needed to account for the diversity of cognitive interpretations of urban soundscapes, since subjective evaluations depend both on the meaning attributed to noise sources and on inherent properties of the acoustic signal.
Approaches to Accountability in Long-Term Care
Berta, Whitney; Laporte, Audrey; Wodchis, Walter P.
2014-01-01
This paper discusses the array of approaches to accountability in Ontario long-term care (LTC) homes. A focus group involving key informants from the LTC industry, including both for-profit and not-for-profit nursing home owners/operators, was used to identify stakeholders involved in formulating and implementing LTC accountability approaches and the relevant regulations, policies and initiatives relating to accountability in the LTC sector. These documents were then systematically reviewed. We found that the dominant mechanisms have been financial incentives and oversight, regulations and information; professionalism has played a minor role. More recently, measurement for accountability in LTC has grown to encompass an array of fiscal, clinical and public accountability measurement mechanisms. The goals of improved quality and accountability are likely more achievable using these historical regulatory approaches, but the recent rapid increase in data and measurability could also enable judicious application of market-based approaches. PMID:25305396
Accountability: A Watchword for University Administration in Nigeria
ERIC Educational Resources Information Center
Sofoluwe, Abayomi Olumade; Oduwaiye, Rhoda Olape; Ogundele, Michael Olarewaju; Kayode, David Jimoh
2015-01-01
The term accountability means different things to different people in different organizations. In the educational setting, the term is understood as liability for one's accomplishments in the educational system. The ever-increasing needs of the universities and the dwindling resources available to them have forced university management and other…
16 CFR 603.1 - Terms defined in the Fair Credit Reporting Act.
Code of Federal Regulations, 2010 CFR
2010-01-01
... other information about the perpetrator, if known. (3) Name(s) of information furnisher(s), account numbers, or other relevant account information related to the identity theft. (4) Any other information.... (a) The term “identity theft” means a fraud committed or attempted using the identifying information...
76 FR 16856 - Proposed Collection; Comment Request for Regulation Project
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-25
... Reduction Act of 1995, Public Law 104-13 (44 U.S.C. 3506(c)(2)(A)). Currently, the IRS is soliciting comments concerning existing final regulations, REG-208156-91 (TD 8929), Accounting for Long-Term Contracts... the Internet at [email protected] . SUPPLEMENTARY INFORMATION: Title: Accounting for Long-Term...
NASA Astrophysics Data System (ADS)
Nguyen, H. L.; de Fouquet, C.; Courbet, C.; Simonucci, C. A.
2016-12-01
The effects of spatial variability of hydraulic parameters and of initial groundwater plume localization on the possible extent of groundwater pollution plumes have already been broadly studied. However, only a few studies, such as Kjeldsen et al. (1995), take into account the effect of source term spatial variability. We explore this question by modeling 90Sr migration from a shallow waste burial located in the Chernobyl Exclusion Zone into the underlying sand aquifer. Our work is based upon groundwater sampled once or twice a year from 1995 to 2015 from about 60 piezometers, and more than 3,000 137Cs soil activity measurements. These measurements were taken in 1999 from one of the trenches dug after the explosion of the Chernobyl nuclear power plant, the so-called "T22 Trench", where radioactive waste was buried in 1987. The geostatistical analysis of 137Cs activity data in soils from Bugai et al. (2005) is first reconsidered to delimit the trench borders, using georadar data as a covariable, and to perform geostatistical simulations in order to evaluate the uncertainties of this inventory. 90Sr activity in soils is derived from 137Cs/154Eu and 90Sr/154Eu activity ratios in Chernobyl hot fuel particles (Bugai et al., 2003). Meanwhile, a coupled 1D unsaturated/3D saturated transient transport model is constructed with the MELODIE software (IRSN, 2009). The previous 90Sr transport model developed by Bugai et al. (2012) did not take into account the effect of water table fluctuations highlighted by Van Meir et al. (2007), which may cause discrepancies between model predictions and field observations. These fluctuations are thus reproduced in a 1D vertical unsaturated model. Equiprobable radionuclide localization maps produced by the geostatistical simulations are selected to illustrate different heterogeneities in the radionuclide inventory and are implemented in the 1D model.
The obtained activity fluxes from all the 1D vertical models are then injected in a 3D saturated transient model to assess the extent of the radionuclide plume in the groundwater and its most likely evolution over time by taking into account uncertainties associated with the source term spatial variability.
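The saturated-zone step of such a model chain can be sketched as 1-D advection-dispersion with linear sorption (retardation) and radioactive decay, solved with an explicit upwind scheme. All parameter values below are assumptions for illustration, not the Chernobyl site's calibrated values, and the real study used the 3-D MELODIE code.

```python
import numpy as np

# Minimal 1-D saturated transport sketch for 90Sr: advection, dispersion,
# linear sorption (retardation R) and decay. Explicit upwind scheme;
# all parameters are illustrative assumptions.
L, nx = 10.0, 101                  # domain length (m), grid points
dx = L / (nx - 1)
v, D = 0.05, 0.01                  # pore velocity (m/d), dispersion (m2/d)
R = 10.0                           # retardation factor for Sr on sand
lam = np.log(2) / (28.8 * 365.25)  # 90Sr decay constant (1/d)
dt = 0.4 * dx ** 2 / D             # stable explicit time step

c = np.zeros(nx)
c[0] = 1.0                         # constant-activity source at the trench
for _ in range(2000):              # ~800 days of simulated transport
    adv = -v * (c[1:-1] - c[:-2]) / dx                     # upwind advection
    disp = D * (c[2:] - 2 * c[1:-1] + c[:-2]) / dx ** 2    # dispersion
    c[1:-1] += dt * ((adv + disp) / R - lam * c[1:-1])
    c[0], c[-1] = 1.0, c[-2]       # fixed source, open outflow boundary
```

Repeating such a run for each equiprobable source-inventory realization is what propagates source-term spatial variability into plume-extent uncertainty.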
NASA Astrophysics Data System (ADS)
Ham, Walter A.; Kleeman, Michael J.
2011-08-01
Very little is currently known about the relationship between exposure to different sources of ambient ultrafine particles (PM0.1) and human health effects. If human health effects are enhanced by PM0.1's ability to cross cell membranes, then more information is needed describing the sources of ultrafine particles that are deposited in the human respiratory system. The current study presents results for the source apportionment of airborne particulate matter in six size fractions smaller than 1.8 μm particle diameter, including ultrafine particles (PM0.1), in one of the most polluted air basins in the United States. Size-resolved source apportionment results are presented at an urban site and a rural site in central California's heavily polluted San Joaquin Valley during the winter and summer months using a molecular marker chemical mass balance (MM-CMB) method. Respiratory deposition calculations for the size-resolved source apportionment results are carried out with the Multiple Path Particle Dosimetry Model (MPPD v2.0), including calculations for ultrafine (PM0.1) source deposition. Diesel engines accounted for the majority of PM0.1 and PM1.8 EC at both the urban and rural sampling locations during both summer and winter seasons. Meat cooking accounted for 33-67% and diesel engines for 15-21% of the PM0.1 OC at Fresno. Meat cooking accounted for 22-26% of the PM0.1 OC at the rural Westside location, while diesel engines accounted for 8-9%. Wood burning contributions to PM0.1 OC increased to as much as 12% during the wintertime. The modest contribution of wood smoke reflects the success of emissions control programs over the past decade. In contrast to PM0.1, PM1.8 OC had a higher fraction of unidentified source contributions (68-85%), suggesting that this material is composed of secondary organic aerosol (SOA) or primary organic aerosol (POA) that has been processed by atmospheric chemical reactions.
Meat cooking was the largest identified source of PM1.8 organic carbon (OC) at the Fresno site (12-13%), while diesel engines were the largest identified PM1.8 OC source at the rural site (5-8%). Wood burning contributions to PM1.8 OC increased during the wintertime at both sites (6-9%) but were relatively small during the summertime (∼1%). As expected, diesel engines were the dominant source of PM0.1 EC respiratory deposition at both the urban and rural sites in both summer and winter (0.01-0.03 μg PM0.1 EC deposited per m³ of air inhaled). Meat cooking accounted for 0.01-0.025 μg PM0.1 OC deposited per m³ of air inhaled, while diesel fuel accounted for 0.005-0.013 μg. Minor contributions from wood burning, motor oil, and gasoline fuel were calculated at levels <0.005 μg PM0.1 OC deposited per m³ of air inhaled at both urban and rural locations during winter and summer seasons. If the burden of PM0.1 deposited in the respiratory system is relevant for human health effects, then future toxicology studies should be carried out at PM0.1 concentrations and source mixtures equivalent to those measured in the current study.
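A chemical mass balance solves ambient species concentrations as a non-negative linear mix of source profiles. The profiles, marker species, and contribution values below are invented for illustration; the actual MM-CMB analysis used many more molecular markers.

```python
import numpy as np
from scipy.optimize import nnls

# Chemical mass balance in miniature: ambient = profiles @ contributions,
# solved by non-negative least squares. All numbers are illustrative.
species = ["EC", "OC", "hopanes", "cholesterol"]   # row labels
profiles = np.array([     # columns: diesel, meat cooking, wood smoke
    [0.60, 0.01, 0.05],   # EC fraction of emitted PM
    [0.30, 0.70, 0.55],   # OC fraction
    [0.02, 0.00, 0.00],   # hopanes (diesel marker)
    [0.00, 0.08, 0.00],   # cholesterol (meat-cooking marker)
])
true_contrib = np.array([2.0, 1.5, 0.5])   # ug/m3 from each source
ambient = profiles @ true_contrib          # synthetic "measurement"

contrib, resid = nnls(profiles, ambient)   # recovers true_contrib here
```

Real applications add measurement uncertainty weighting and report the unexplained residual, which is the "unidentified" fraction attributed above to SOA or aged POA.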
Survey of current situation in radiation belt modeling
NASA Technical Reports Server (NTRS)
Fung, Shing F.
2004-01-01
The study of Earth's radiation belts is one of the oldest subjects in space physics. Despite the tremendous progress made in the last four decades, we still lack a complete understanding of the radiation belts in terms of their configurations, dynamics, and detailed physical accounts of their sources and sinks. The static nature of early empirical trapped radiation models, for example the NASA AP-8 and AE-8 models, renders those models inappropriate for predicting short-term radiation belt behaviors associated with geomagnetic storms and substorms. Due to incomplete data coverage, these models are also inaccurate at low altitudes (e.g., <1000 km) where many robotic and human space flights occur. The availability of radiation data from modern space missions and advancements in physical modeling and data management techniques have now allowed the development of new empirical and physical radiation belt models. In this paper, we review the status of modern radiation belt modeling. Published by Elsevier Ltd on behalf of COSPAR.
Analyzing the generality of conflict adaptation effects.
Funes, Maria Jesús; Lupiáñez, Juan; Humphreys, Glyn
2010-02-01
Conflict adaptation effects refer to the reduction of interference when an incongruent stimulus occurs immediately after an incongruent trial, compared with when it occurs after a congruent trial. The present study analyzes the key conditions that lead to adaptation effects that are specific to the type of conflict involved versus those that are conflict general. In the first 2 experiments, we combined 2 types of conflict for which compatibility arises from clearly different sources in terms of dimensional overlap, while keeping the task context constant across conflict types. We found a clear pattern of specificity of conflict adaptation across conflict types. In subsequent experiments, we tested whether this pattern could be accounted for in terms of feature integration processes contributing differently to repetition versus alternation of conflict types. The results clearly indicated that feature integration was not key to generating conflict-type specificity in conflict adaptation. The data are consistent with there being separate modes of control for different types of cognitive conflict.
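The adaptation effect described in the first sentence is a difference of differences: interference after congruent trials minus interference after incongruent trials. The mean reaction times below are fabricated values for illustration, not the experiments' data.

```python
# Conflict adaptation (Gratton) effect from condition-mean reaction times.
# Keys: (previous trial, current trial) -> mean RT in ms (assumed values).
rt = {
    ("con", "con"): 450.0,
    ("con", "inc"): 530.0,   # large interference after a congruent trial
    ("inc", "con"): 460.0,
    ("inc", "inc"): 500.0,   # reduced interference after an incongruent trial
}
interference_after_con = rt[("con", "inc")] - rt[("con", "con")]     # 80 ms
interference_after_inc = rt[("inc", "inc")] - rt[("inc", "con")]     # 40 ms
adaptation_effect = interference_after_con - interference_after_inc  # 40 ms
```

Conflict-type specificity is then tested by computing this quantity separately for same-conflict-type and different-conflict-type trial sequences.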
European Long-Term Care Programs: Lessons for Community Living Assistance Services and Supports?
Nadash, Pamela; Doty, Pamela; Mahoney, Kevin J; von Schwanenflugel, Matthias
2012-01-01
Objective To uncover lessons from abroad for Community Living Assistance Services and Supports (CLASS), a federally run voluntary public long-term care (LTC) insurance program created under the Affordable Care Act of 2010. Data Sources Program administrators and policy researchers from Austria, England, France, Germany, and the Netherlands. Study Design Qualitative methods focused on key parameters of cash-for-care programs: how they set benefit levels; project expenditures; control administrative costs; regulate the use of benefits; and protect workers. Data Collection/Extraction Methods Structured discussions were conducted during an international conference of LTC experts, followed by personal meetings and individual correspondence. Principal Findings Germany's self-financing mandate and tight targeting of benefits have resulted in a solvent program with low premiums. Black markets for care are likely in the absence of regulation; France addresses this via a unique system ensuring the legal payment of workers. Conclusions Programs in the five countries studied offer lessons, both positive and negative, relevant to CLASS design. PMID:22091672
Laboratory investigations: Low Earth orbit environment chemistry with spacecraft surfaces
NASA Technical Reports Server (NTRS)
Cross, Jon B.
1990-01-01
Long-term space operations that require exposure of material to the low earth orbit (LEO) environment must take into account the effects of this highly oxidative atmosphere on material properties and the possible contamination of the spacecraft surroundings. Ground-based laboratory experiments at Los Alamos using a newly developed hyperthermal atomic oxygen (AO) source have shown not only that hydrocarbon-based materials are affected but that inorganic materials such as MoS2 are also oxidized and that thin protective coatings such as Al2O3 can be breached, producing oxidation of the underlying substrate material. Gas-phase reaction products, such as SO2 from oxidation of MoS2 and CO and CO2 from hydrocarbon materials, have been detected and have consequences in terms of spacecraft contamination. Energy loss through gas-surface collisions causing spacecraft drag has been measured for a few select surfaces and has been found to be highly dependent on the surface reactivity.
12 CFR 390.314 - United States Treasury General Account.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 12 Banks and Banking 5 2012-01-01 2012-01-01 false United States Treasury General Account. 390.314... Affecting All State Savings Associations § 390.314 United States Treasury General Account. The term United States Treasury General Account means an account maintained in the name of the United States Treasury the...
12 CFR 390.314 - United States Treasury General Account.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 12 Banks and Banking 5 2014-01-01 2014-01-01 false United States Treasury General Account. 390.314... Affecting All State Savings Associations § 390.314 United States Treasury General Account. The term United States Treasury General Account means an account maintained in the name of the United States Treasury the...
12 CFR 390.314 - United States Treasury General Account.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 12 Banks and Banking 5 2013-01-01 2013-01-01 false United States Treasury General Account. 390.314... Affecting All State Savings Associations § 390.314 United States Treasury General Account. The term United States Treasury General Account means an account maintained in the name of the United States Treasury the...
Minella, Marco; Rogora, Michela; Vione, Davide; Maurino, Valter; Minero, Claudio
2011-08-15
A model-based approach is here developed and applied to predict the long-term trends of indirect photochemical processes in the surface layer (5 m water depth) of Lake Maggiore, NW Italy. For this lake, time series of the main parameters of photochemical importance that cover almost two decades are available. As a way to assess the relevant photochemical reactions, the modelled steady-state concentrations of important photogenerated transients ((•)OH, ³CDOM* and CO₃(-•)) were taken into account. A multivariate analysis approach was adopted to have an overview of the system, to emphasise relationships among chemical, photochemical and seasonal variables, and to highlight annual and long-term trends. Over the considered time period, because of the decrease of the dissolved organic carbon (DOC) content of water and of the increase of alkalinity, a significant increase is predicted for the steady-state concentrations of the radicals (•)OH and CO₃(-•). Therefore, the photochemical degradation processes that involve the two radical species would be enhanced. Another issue of potential photochemical importance is related to the winter maxima of nitrate (a photochemical (•)OH source) and the summer maxima of DOC ((•)OH sink and ³CDOM* source) in the lake water under consideration. From the combination of sunlight irradiance and chemical composition data, one predicts that the processes involving (•)OH and CO₃(-•) would be most important in spring, while the reactions involving ³CDOM* would be most important in summer. Copyright © 2011 Elsevier B.V. All rights reserved.
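The steady-state treatment behind such predictions can be sketched as a formation rate divided by the summed scavenging rates. The rate constants and scavenger levels below are illustrative placeholders, not the Lake Maggiore data; the point is the predicted direction: lower DOC means fewer (•)OH sinks and a higher steady-state (•)OH concentration.

```python
# Steady-state transient concentration: [X]ss = R_formation / sum(k_i * [S_i]).
def steady_state_conc(formation_rate, sinks):
    """formation_rate in M s^-1; sinks as (rate constant, scavenger level) pairs."""
    return formation_rate / sum(k * c for k, c in sinks)

# Illustrative values only: hydroxyl radical formed at 1e-10 M s^-1, scavenged
# by DOC (k in L mgC^-1 s^-1, level in mgC/L) and carbonate (k in M^-1 s^-1).
oh_high_doc = steady_state_conc(1e-10, [(2.5e4, 3.0), (3.9e8, 1e-5)])
oh_low_doc = steady_state_conc(1e-10, [(2.5e4, 1.0), (3.9e8, 1e-5)])
```

With these placeholder numbers the DOC term dominates the sink budget, so halving DOC roughly doubles the predicted steady-state radical level.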
Sanchez, Marciano; Karnae, Saritha; John, Kuruvilla
2008-01-01
Selected Volatile Organic Compounds (VOC) emitted from various anthropogenic sources including industries and motor vehicles act as primary precursors of ozone, while some VOC are classified as air toxic compounds. Significantly large VOC emission sources impact the air quality in Corpus Christi, Texas. This urban area is located in a semi-arid region of South Texas and is home to several large petrochemical refineries and industrial facilities along a busy ship-channel. The Texas Commission on Environmental Quality (TCEQ) has set up two continuous ambient monitoring stations (CAMS 633 and 634) along the ship channel to monitor VOC concentrations in the urban atmosphere. The hourly concentrations of 46 VOC compounds were acquired from TCEQ for a comprehensive source apportionment study. The primary objective of this study was to identify and quantify the sources affecting the ambient air quality within this urban airshed. Principal Component Analysis/Absolute Principal Component Scores (PCA/APCS) was applied to the dataset. PCA identified five possible sources accounting for 69% of the total variance affecting the VOC levels measured at CAMS 633 and six possible sources affecting CAMS 634 accounting for 75% of the total variance. APCS identified natural gas emissions to be the major source contributor at CAMS 633 and it accounted for 70% of the measured VOC concentrations. The other major sources identified at CAMS 633 included flare emissions (12%), fugitive gasoline emissions (9%), refinery operations (7%), and vehicle exhaust (2%). At CAMS 634, natural gas sources were identified as the major source category contributing to 31% of the observed VOC. The other sources affecting this site included: refinery operations (24%), flare emissions (22%), secondary industrial processes (12%), fugitive gasoline emissions (8%) and vehicle exhaust (3%). PMID:19139530
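The PCA step of a PCA/APCS analysis can be sketched as follows. This is a minimal illustration on synthetic data (not the CAMS 633/634 measurements), assuming scikit-learn is available: hidden "sources" generate species concentrations, and the leading principal components of the standardized data recover the low-dimensional source structure and its share of total variance.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
profiles = rng.random((3, 10))              # hidden source fingerprints (3 sources x 10 species)
activity = rng.lognormal(size=(500, 3))     # hourly source strengths
X = activity @ profiles + 0.05 * rng.random((500, 10))  # synthetic "measured" VOCs

pca = PCA(n_components=3)
pca.fit(StandardScaler().fit_transform(X))
explained = pca.explained_variance_ratio_.sum()  # variance captured by 3 PCs
```

In the full APCS procedure, absolute component scores are then regressed against measured concentrations to apportion mass to each identified source.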
Radiological analysis of plutonium glass batches with natural/enriched boron
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rainisch, R.
2000-06-22
The disposition of surplus plutonium inventories by the US Department of Energy (DOE) includes the immobilization of certain plutonium materials in a borosilicate glass matrix, also referred to as vitrification. This paper addresses source terms of plutonium masses immobilized in a borosilicate glass matrix where the glass components include both natural boron and enriched boron. The calculated source terms pertain to neutron and gamma source strength (particles per second), and source spectrum changes. The calculated source terms corresponding to natural boron and enriched boron are compared to determine the benefits (decrease in radiation source terms) of using enriched boron. The analysis of plutonium glass source terms shows that a large component of the neutron source terms is due to (α,n) reactions. The americium-241 and plutonium present in the glass emit alpha particles (α). These alpha particles interact with low-Z nuclides like B-11, B-10, and O-17 in the glass to produce neutrons. The low-Z nuclides are referred to as target particles. The reference glass contains 9.4 wt percent B2O3. Boron-11 was found to strongly support the (α,n) reactions in the glass matrix. B-11 has a natural abundance of over 80 percent. The (α,n) reaction rates for B-10 are lower than for B-11, and the analysis shows that the plutonium glass neutron source terms can be reduced by artificially enriching natural boron with B-10. The natural abundance of B-10 is 19.9 percent. Boron enriched to 96 wt percent B-10 or above can be obtained commercially. Since lower source terms imply lower dose rates to radiation workers handling the plutonium glass materials, it is important to know the achievable decrease in source terms as a result of boron enrichment. Plutonium materials are normally handled in glove boxes with shielded glass windows, and the work entails both extremity and whole-body exposures. Lowering the source terms of the plutonium batches will make the handling of these materials less difficult and will reduce radiation exposure to operating workers.
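The (α,n) contribution to the neutron source term is, schematically, the sum over alpha emitters of their alpha emission rate times the neutron yield per alpha in the glass matrix. The activities and yields below are placeholders, not the report's values; the sketch only illustrates that lowering the per-alpha yield (as B-10 enrichment does by displacing B-11) lowers the total source.

```python
# Hedged sketch: total (alpha,n) neutron source strength as
# sum(alpha emission rate * neutron yield per alpha). Placeholder numbers.
def neutron_source(alpha_rates, yield_per_alpha):
    """alpha_rates in alpha/s per emitter; yield_per_alpha in neutrons/alpha."""
    return sum(a * y for a, y in zip(alpha_rates, yield_per_alpha))

alphas = [1.0e9, 5.0e8]                              # e.g. Am-241, Pu (illustrative)
natural_b = neutron_source(alphas, [2e-6, 1e-6])     # B-11-rich glass (higher yield)
enriched_b = neutron_source(alphas, [8e-7, 4e-7])    # B-10-enriched glass (lower yield)
```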
Improved tomographic reconstructions using adaptive time-dependent intensity normalization.
Titarenko, Valeriy; Titarenko, Sofya; Withers, Philip J; De Carlo, Francesco; Xiao, Xianghui
2010-09-01
The first processing step in synchrotron-based micro-tomography is the normalization of the projection images against the background, also referred to as a white field. Owing to time-dependent variations in illumination and defects in detection sensitivity, the white field is different from the projection background. In this case standard normalization methods introduce ring and wave artefacts into the resulting three-dimensional reconstruction. In this paper the authors propose a new adaptive technique accounting for these variations and allowing one to obtain cleaner normalized data and to suppress ring and wave artefacts. The background is modelled by the product of two time-dependent terms representing the illumination and detection stages. These terms are written as unknown functions, one scaled and shifted along a fixed direction (describing the illumination term) and one translated by an unknown two-dimensional vector (describing the detection term). The proposed method is applied to two sets (a stem Salix variegata and a zebrafish Danio rerio) acquired at the parallel beam of the micro-tomography station 2-BM at the Advanced Photon Source showing significant reductions in both ring and wave artefacts. In principle the method could be used to correct for time-dependent phenomena that affect other tomographic imaging geometries such as cone beam laboratory X-ray computed tomography.
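The standard normalization step that the paper improves on can be sketched in a few lines: each projection is divided by the white field (after dark-field subtraction), so any time-dependent mismatch between the white field and the actual background propagates into the reconstruction as ring and wave artefacts. The arrays below are toy values for illustration.

```python
import numpy as np

def normalize(projection, white, dark):
    """Standard white-field normalization. The paper's adaptive method
    replaces the static `white` with a time-dependent model built from
    separate illumination and detection terms."""
    return (projection - dark) / np.clip(white - dark, 1e-6, None)

proj = np.full((4, 4), 80.0)    # raw projection counts (toy values)
white = np.full((4, 4), 100.0)  # white-field (background) image
dark = np.full((4, 4), 10.0)    # dark-field (no-beam) image
out = normalize(proj, white, dark)
```

The `np.clip` guards against division by zero in dead detector pixels; real pipelines typically also average several white and dark frames.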
NASA Astrophysics Data System (ADS)
Ragon, T.; Sladen, A.; Bletery, Q.; Simons, M.; Magnoni, F.; Avallone, A.; Cavalié, O.; Vergnolle, M.
2016-12-01
Despite the diversity of available data for the Mw 6.1 2009 earthquake in L'Aquila, Italy, published finite fault slip models are surprisingly different. For instance, the amplitude of the maximum coseismic slip patch varies from 80 cm to 225 cm, and its depth oscillates between 5 and 15 km. Discrepancies between proposed source parameters are believed to result from three sources: observational uncertainties, epistemic uncertainties, and the inherent non-uniqueness of inverse problems. We explore the whole solution space of fault-slip models compatible with the data within the range of both observational and epistemic uncertainties by performing a fully Bayesian analysis. In this initial stage, we restrict our analysis to the static problem. In terms of observation uncertainty, we must take into account the difference in time span associated with the different data types: InSAR images provide excellent spatial coverage but usually correspond to a period of a few days to weeks after the mainshock and can thus be potentially biased by significant afterslip. Continuous GPS stations do not have the same shortcoming, but in contrast do not have the desired spatial coverage near the fault. In the case of the L'Aquila earthquake, InSAR images include a minimum of 6 days of afterslip. Here, we explicitly account for these different time windows in the inversion by jointly inverting for coseismic and post-seismic fault slip. Regarding epistemic or modeling uncertainties, we focus on the impact of uncertain fault geometry and elastic structure. Modeling errors, which result from inaccurate model predictions and are generally neglected, are estimated for both earth model and fault geometry as non-diagonal covariance matrices. The L'Aquila earthquake is particularly suited to investigation of these effects given the availability of a detailed aftershock catalog and 3D velocity models.
This work aims at improving our knowledge of the L'Aquila earthquake as well as at providing a more general perspective on which uncertainties are the most critical in finite-fault source studies.
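A linear Bayesian inversion with a prediction-error covariance added to the data covariance, the general idea described above, can be sketched as follows. This is a toy Gaussian example with an arbitrary random forward operator, not the L'Aquila fault model; the key line is combining observational covariance `C_d` and modeling (prediction) covariance `C_p` into one misfit covariance.

```python
import numpy as np

rng = np.random.default_rng(42)
n_data, n_param = 30, 5
G = rng.normal(size=(n_data, n_param))          # linearized forward operator (toy)
m_true = rng.normal(size=n_param)               # "true" model parameters
d = G @ m_true + 0.01 * rng.normal(size=n_data) # noisy observations

C_d = 0.01**2 * np.eye(n_data)   # observational uncertainty
C_p = 0.05**2 * np.eye(n_data)   # prediction (epistemic/modeling) uncertainty
C = C_d + C_p                    # misfit covariance combines both terms
C_m = np.eye(n_param)            # Gaussian prior covariance on parameters

Ci = np.linalg.inv(C)
post_cov = np.linalg.inv(G.T @ Ci @ G + np.linalg.inv(C_m))
m_post = post_cov @ G.T @ Ci @ d  # posterior mean under the Gaussian model
```

In the study itself the modeling covariances are non-diagonal and the posterior is explored by sampling rather than this closed-form Gaussian solution; the sketch only shows where the epistemic term enters.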
Options for accounting carbon sequestration in German forests
Krug, Joachim; Koehl, Michael; Riedel, Thomas; Bormann, Kristin; Rueter, Sebastian; Elsasser, Peter
2009-01-01
Background The Accra climate change talks held from 21–27 August 2008 in Accra, Ghana, were part of an ongoing series of meetings leading up to the Copenhagen meeting in December 2009. During the meeting a set of options for accounting carbon sequestration in forestry on a post-2012 framework was presented. The options include gross-net and net-net accounting and approaches for establishing baselines. Results This article demonstrates the embedded consequences of Accra Accounting Options for the case study of German national GHG accounting. It presents the most current assessment of sequestration rates by forest management for the period 1990 – 2007, provides an outlook of future emissions and removals (up to the year 2042) as related to three different management scenarios, and shows that implementation of some Accra options may reverse sources to sinks, or sinks to sources. Conclusion The results of the study highlight the importance of elaborating an accounting system that would prioritize the climate convention goals, not national preferences. PMID:19650896
12 CFR 707.4 - Account disclosures.
Code of Federal Regulations, 2013 CFR
2013-01-01
... calculated, and the conditions for its assessment. (iii) Withdrawal of dividends prior to maturity. If... request, the credit union may: (A) Specify rates as follows: (1) For dividend-bearing accounts other than...) For interest bearing accounts and for dividend-bearing term share accounts, specify an interest rate...
12 CFR 707.4 - Account disclosures.
Code of Federal Regulations, 2011 CFR
2011-01-01
... calculated, and the conditions for its assessment. (iii) Withdrawal of dividends prior to maturity. If... request, the credit union may: (A) Specify rates as follows: (1) For dividend-bearing accounts other than...) For interest bearing accounts and for dividend-bearing term share accounts, specify an interest rate...
12 CFR 707.4 - Account disclosures.
Code of Federal Regulations, 2014 CFR
2014-01-01
... calculated, and the conditions for its assessment. (iii) Withdrawal of dividends prior to maturity. If... request, the credit union may: (A) Specify rates as follows: (1) For dividend-bearing accounts other than...) For interest bearing accounts and for dividend-bearing term share accounts, specify an interest rate...
12 CFR 707.4 - Account disclosures.
Code of Federal Regulations, 2012 CFR
2012-01-01
... calculated, and the conditions for its assessment. (iii) Withdrawal of dividends prior to maturity. If... request, the credit union may: (A) Specify rates as follows: (1) For dividend-bearing accounts other than...) For interest bearing accounts and for dividend-bearing term share accounts, specify an interest rate...
NASA Astrophysics Data System (ADS)
Perez, Pedro B.; Hamawi, John N.
2017-09-01
Nuclear power plant radiation protection design features are based on radionuclide source terms derived from conservative assumptions that envelope expected operating experience. Two parameters that significantly affect the radionuclide concentrations in the source term are failed fuel fraction and effective fission product appearance rate coefficients. Failed fuel fraction may be a regulatory based assumption such as in the U.S. Appearance rate coefficients are not specified in regulatory requirements, but have been referenced to experimental data that is over 50 years old. No doubt the source terms are conservative as demonstrated by operating experience that has included failed fuel, but they may be too conservative, leading, for example, to over-designed shielding for normal operations. Design basis source term methodologies for normal operations had not advanced until EPRI published in 2015 an updated ANSI/ANS 18.1 source term basis document. Our paper revisits the fission product appearance rate coefficients as applied in the derivation of source terms following the original U.S. NRC NUREG-0017 methodology. New coefficients have been calculated based on recent EPRI results which demonstrate the conservatism in nuclear power plant shielding design.
Xia, Xinghui; Wu, Qiong; Zhu, Baotong; Zhao, Pujun; Zhang, Shangwei; Yang, Lingyan
2015-08-01
We applied a mixing model based on stable isotopic δ(13)C, δ(15)N, and C:N ratios to estimate the contributions of multiple sources to sediment nitrogen. We also developed a conceptual model describing and analyzing the impacts of climate change on nitrogen enrichment. These two models were conducted in Miyun Reservoir to analyze the contribution of climate change to the variations in sediment nitrogen sources based on two (210)Pb and (137)Cs dated sediment cores. The results showed that during the past 50 years, average contributions of soil and fertilizer, submerged macrophytes, N2-fixing phytoplankton, and non-N2-fixing phytoplankton were 40.7%, 40.3%, 11.8%, and 7.2%, respectively. In addition, total nitrogen (TN) contents in sediment showed significant increasing trends from 1960 to 2010, and sediment nitrogen of both submerged macrophytes and phytoplankton sources exhibited significant increasing trends during the past 50 years. In contrast, soil and fertilizer sources showed a significant decreasing trend from 1990 to 2010. According to the changing trend of N2-fixing phytoplankton, changes of temperature and sunshine duration accounted for at least 43% of the trend in the sediment nitrogen enrichment over the past 50 years. Regression analysis of the climatic factors on nitrogen sources showed that the contributions of precipitation, temperature, and sunshine duration to the variations in sediment nitrogen sources ranged from 18.5% to 60.3%. The study demonstrates that the mixing model provides a robust method for calculating the contribution of multiple nitrogen sources in sediment, and this study also suggests that N2-fixing phytoplankton could be regarded as an important response factor for assessing the impacts of climate change on nitrogen enrichment. Copyright © 2015 Elsevier B.V. All rights reserved.
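A three-tracer, four-source mixing model of this kind reduces to a small linear system: three tracer mass balances (δ13C, δ15N, C:N) plus the constraint that source fractions sum to one. The endmember values and mixture below are invented for illustration, not the Miyun Reservoir data; for the demo the "measured" mixture is built from known fractions so the solve can be checked.

```python
import numpy as np

# Columns: four sources (e.g. soil/fertilizer, macrophytes, N2-fixers, non-N2-fixers).
endmembers = np.array([
    [-25.0, -27.0, -30.0, -22.0],   # delta13C of each source (illustrative)
    [  4.0,   8.0,  -1.0,   6.0],   # delta15N of each source (illustrative)
    [ 12.0,  15.0,   7.0,   9.0],   # C:N of each source (illustrative)
    [  1.0,   1.0,   1.0,   1.0],   # source fractions sum to 1
])
true_f = np.array([0.4, 0.3, 0.2, 0.1])   # known mixture for the demo
sediment = endmembers @ true_f            # "measured" tracer values of the mixture

fractions = np.linalg.solve(endmembers, sediment)
```

Real applications additionally constrain fractions to be non-negative and propagate endmember uncertainty (e.g. with a Bayesian mixing model); the plain solve above is only the noiseless core of the calculation.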
NASA Astrophysics Data System (ADS)
Ou, Jiamin; Guo, Hai; Zheng, Junyu; Cheung, Kalam; Louie, Peter K. K.; Ling, Zhenhao; Wang, Dawei
2015-02-01
To understand the long-term variations of nonmethane hydrocarbons (NMHCs) and their emission sources, real-time speciated NMHCs have been monitored in Hong Kong since 2005. Data analysis showed that the concentrations of C3-C5 and C6-C7 alkanes slightly increased from 2005 to 2013 at a rate of 0.0015 and 0.0005 μg m-3 yr-1 (p < 0.05), respectively, while aromatics decreased at a rate of 0.006 μg m-3 yr-1 (p < 0.05). A Positive Matrix Factorization (PMF) model was applied to identify and quantify the NMHC sources. Vehicular exhaust, gasoline evaporation and liquefied petroleum gas (LPG) usage, consumer product and printing, architectural paints, and biogenic emissions were identified and on average accounted for 20.2 ± 6.2%, 25.4 ± 6.3%, 32.6 ± 5.8%, 21.5 ± 4.5%, and 3.3 ± 1.5% of the ambient NMHC concentrations, respectively. From 2005 to 2013, the contributions of both traffic-related sources and solvent-related sources showed no significant changes, different from the trends in emission inventory. On O3 episode days dominated by local air masses, the increase ratio of NMHC species from non-episode to episode days was found to be a natural function of the reactivity of NMHC species, suggesting that photochemical reactions would significantly change the NMHC composition between emission sources and the receptors. The effect of photochemical reaction loss on receptor-oriented source apportionment analysis needs to be quantified in order to identify the NMHC emission sources on O3 episode days.
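The factorization at the heart of PMF can be sketched with a non-negative matrix factorization as a stand-in: both decompose the sample-by-species matrix into non-negative source contributions times non-negative source profiles, though PMF proper additionally weights residuals by measurement uncertainty. The data below are synthetic, not the Hong Kong NMHC record, and scikit-learn's NMF is used as an assumed available tool.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(1)
profiles = rng.random((5, 20))                 # 5 source profiles x 20 NMHC species
contributions = rng.lognormal(size=(300, 5))   # sample-by-source strengths
X = contributions @ profiles                   # synthetic ambient concentrations

model = NMF(n_components=5, init="nndsvda", max_iter=1000, random_state=0)
G = model.fit_transform(X)     # estimated source contributions (non-negative)
F = model.components_          # estimated source profiles (non-negative)
rel_err = np.linalg.norm(X - G @ F) / np.linalg.norm(X)
```

In practice the recovered factors are matched to physical sources by comparing the profile rows against known emission fingerprints (e.g. LPG tracers versus vehicle-exhaust tracers).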
Fratellone, Patrick M; Holowecki, Melissa A
2009-01-01
Sister Mary Joseph nodule or node refers to a palpable nodule bulging into the umbilicus and is usually a result of a malignant cancer in the pelvis or abdomen. Traditionally it has been considered a sign of ominous prognosis. Gastrointestinal malignancies, most commonly gastric, colon and pancreatic cancer account for about 52% of the underlying sources. Gynecological cancers, most commonly ovarian and uterine cancers account for about 28% of the sources. PMID:19842232
NASA Astrophysics Data System (ADS)
Yakir, Dan; DeNiro, Michael J.; Rundel, Philip W.
1989-10-01
Variations as large as 11‰ in δ18O values and 50‰ in δD values were observed among different fractions of water in leaves of ivy (Hedera helix) and sunflower (Helianthus annuus). This observation contradicts previous experimental approaches that treated leaf water as an isotopically uniform pool. Using ion analysis of the water fractions to identify sources within the leaf, we conclude that the isotopic composition of the water within cells, which is involved in biosynthesis and therefore recorded in the plant organic matter, differs substantially from that of total leaf water. This conclusion must be taken into account in studies in which isotope ratios of fossil plant cellulose are interpreted in paleoclimatic terms. In addition, our results have implications for attempts to explain the Dole effect and to account for the variations of 18O/16O ratios in atmospheric carbon dioxide, since the isotopic composition of cell water, not of total leaf water, influences the δ18O values of O2 and CO2 released from plants into the atmosphere.
Intentional Forgetting of Emotional Words after Trauma: A Study with Victims of Sexual Assault
Blix, Ines; Brennen, Tim
2011-01-01
Following exposure to a trauma, people tend to experience intrusive thoughts and memories about the event. In order to investigate whether intrusive memories in the aftermath of trauma might be accounted for by an impaired ability to intentionally forget disturbing material, the present study used a modified Directed Forgetting task to examine intentional forgetting and intrusive recall of words in sexual assault victims and controls. By including words related to the trauma in addition to neutral, positive, and threat-related stimuli it was possible to test for trauma-specific effects. No difference between the Trauma and the Control group was found for correct recall of to-be-forgotten (F) words or to-be-remembered (R) words. However, when recalling words from R-list, the Trauma group mistakenly recalled significantly more trauma-specific words from F-list. “Intrusive” recall of F-trauma words when asked to recall R-words was related to symptoms of post-traumatic stress disorder reported on the Impact of Event Scale and the Post-traumatic Diagnostic Scale. The results are discussed in terms of a source-monitoring account. PMID:21994497
26 CFR 1.734-1 - Optional adjustment to basis of undistributed partnership property.
Code of Federal Regulations, 2012 CFR
2012-04-01
... a long-term contract accounted for under a long-term contract method of accounting. The provisions... allocated. (e) Recovery of adjustments to basis of partnership property—(1) Increases in basis. For purposes of section 168, if the basis of a partnership's recovery property is increased as a result of the...
26 CFR 1.734-1 - Optional adjustment to basis of undistributed partnership property.
Code of Federal Regulations, 2014 CFR
2014-04-01
... a long-term contract accounted for under a long-term contract method of accounting. The provisions... allocated. (e) Recovery of adjustments to basis of partnership property—(1) Increases in basis. For purposes of section 168, if the basis of a partnership's recovery property is increased as a result of the...
26 CFR 1.734-1 - Optional adjustment to basis of undistributed partnership property.
Code of Federal Regulations, 2011 CFR
2011-04-01
... a long-term contract accounted for under a long-term contract method of accounting. The provisions... allocated. (e) Recovery of adjustments to basis of partnership property—(1) Increases in basis. For purposes of section 168, if the basis of a partnership's recovery property is increased as a result of the...
26 CFR 1.734-1 - Optional adjustment to basis of undistributed partnership property.
Code of Federal Regulations, 2013 CFR
2013-04-01
... a long-term contract accounted for under a long-term contract method of accounting. The provisions... allocated. (e) Recovery of adjustments to basis of partnership property—(1) Increases in basis. For purposes of section 168, if the basis of a partnership's recovery property is increased as a result of the...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-26
... for Reinstatement, With Change, of a Previously Approved Collection; Comment Request AGENCY: National... institutions to disclose to consumers certain information, including interest rates, bonuses, and fees... specific disclosures when an account is opened, when a disclosed term changes or a term account is close to...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-05
... for Reinstatement, With Change, of a Previously Approved Collection; Comment Request AGENCY: National... institutions to disclose to consumers certain information, including interest rates, bonuses, and fees... specific disclosures when an account is opened, when a disclosed term changes or a term account is close to...
12 CFR 204.130 - Eligibility for NOW accounts.
Code of Federal Regulations, 2011 CFR
2011-01-01
... a trustee in bankruptcy), including those awaiting distribution or investment, may be held in the... duties. The availability of NOW accounts provides a convenient vehicle for providing a short-term return...) Grandfather provision. In order to avoid unduly disrupting account relationships, a NOW account established at...
48 CFR 9904.416-50 - Techniques for application.
Code of Federal Regulations, 2010 CFR
2010-10-01
... ACCOUNTING STANDARDS COST ACCOUNTING STANDARDS 9904.416-50 Techniques for application. (a) Measurement of.... 9904.416-50 Section 9904.416-50 Federal Acquisition Regulations System COST ACCOUNTING STANDARDS BOARD... be assigned pro rata among the cost accounting periods covered by the policy term, except as provided...
26 CFR 1.737-3 - Basis adjustments; Recovery rules.
Code of Federal Regulations, 2012 CFR
2012-04-01
... Properties A1, A2, and A3 is long-term, U.S.-source capital gain or loss. The character of gain on Property A4 is long-term, foreign-source capital gain. B contributes Property B, nondepreciable real property...-term, foreign-source capital gain ($3,000 total gain under section 737 × $2,000 net long-term, foreign...
26 CFR 1.737-3 - Basis adjustments; Recovery rules.
Code of Federal Regulations, 2013 CFR
2013-04-01
... Properties A1, A2, and A3 is long-term, U.S.-source capital gain or loss. The character of gain on Property A4 is long-term, foreign-source capital gain. B contributes Property B, nondepreciable real property...-term, foreign-source capital gain ($3,000 total gain under section 737 × $2,000 net long-term, foreign...
26 CFR 1.737-3 - Basis adjustments; Recovery rules.
Code of Federal Regulations, 2011 CFR
2011-04-01
... Properties A1, A2, and A3 is long-term, U.S.-source capital gain or loss. The character of gain on Property A4 is long-term, foreign-source capital gain. B contributes Property B, nondepreciable real property...-term, foreign-source capital gain ($3,000 total gain under section 737 × $2,000 net long-term, foreign...
26 CFR 1.737-3 - Basis adjustments; Recovery rules.
Code of Federal Regulations, 2014 CFR
2014-04-01
... Properties A1, A2, and A3 is long-term, U.S.-source capital gain or loss. The character of gain on Property A4 is long-term, foreign-source capital gain. B contributes Property B, nondepreciable real property...-term, foreign-source capital gain ($3,000 total gain under section 737 × $2,000 net long-term, foreign...
26 CFR 1.737-3 - Basis adjustments; Recovery rules.
Code of Federal Regulations, 2010 CFR
2010-04-01
... Properties A1, A2, and A3 is long-term, U.S.-source capital gain or loss. The character of gain on Property A4 is long-term, foreign-source capital gain. B contributes Property B, nondepreciable real property...-term, foreign-source capital gain ($3,000 total gain under section 737 × $2,000 net long-term, foreign...
Accurate finite difference methods for time-harmonic wave propagation
NASA Technical Reports Server (NTRS)
Harari, Isaac; Turkel, Eli
1994-01-01
Finite difference methods for solving problems of time-harmonic acoustics are developed and analyzed. Multidimensional inhomogeneous problems with variable, possibly discontinuous, coefficients are considered, accounting for the effects of employing nonuniform grids. A weighted-average representation is less sensitive to transition in wave resolution (due to variable wave numbers or nonuniform grids) than the standard pointwise representation. Further enhancement in method performance is obtained by basing the stencils on generalizations of Pade approximation, or generalized definitions of the derivative, reducing spurious dispersion, anisotropy and reflection, and by improving the representation of source terms. The resulting schemes have fourth-order accurate local truncation error on uniform grids and third order in the nonuniform case. Guidelines for discretization pertaining to grid orientation and resolution are presented.
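The baseline pointwise scheme that such higher-order stencils improve on can be sketched for the 1-D Helmholtz equation u'' + k²u = f on a uniform grid with Dirichlet boundaries. The wavenumber, grid size, and manufactured solution below are arbitrary illustration choices, not the paper's test problems.

```python
import numpy as np

k = 10.0                 # wavenumber, chosen away from Dirichlet resonances m*pi
n = 200                  # interior grid points on (0, 1)
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)

# Manufactured solution u = sin(pi x) with u(0) = u(1) = 0, so that
# u'' + k^2 u = (k^2 - pi^2) sin(pi x) = f.
f = (k**2 - np.pi**2) * np.sin(np.pi * x)

# Standard second-order stencil: (u_{i-1} - 2 u_i + u_{i+1})/h^2 + k^2 u_i = f_i.
A = (np.diag(np.full(n, -2.0 / h**2 + k**2))
     + np.diag(np.full(n - 1, 1.0 / h**2), 1)
     + np.diag(np.full(n - 1, 1.0 / h**2), -1))
u = np.linalg.solve(A, f)

err = np.max(np.abs(u - np.sin(np.pi * x)))   # O(h^2) discretization error
```

The Padé-type generalizations discussed in the paper modify this stencil (and the source-term representation) to push the accuracy to fourth order on uniform grids while reducing dispersion error.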
An analytical and experimental investigation of resistojet plumes
NASA Technical Reports Server (NTRS)
Zana, L. M.; Hoffman, D. J.; Breyley, L. R.; Serafini, J. S.
1987-01-01
As a part of the electrothermal propulsion plume research program at the NASA Lewis Research Center, efforts have been initiated to analytically and experimentally investigate the plumes of resistojet thrusters. The method of G.A. Simons for the prediction of rocket exhaust plumes is developed for the resistojet. Modifications are made to the source flow equations to account for the increased effects of the relatively large nozzle boundary layer. Additionally, preliminary mass flux measurements of a laboratory resistojet using CO2 propellant at 298 K have been obtained with a cryogenically cooled quartz crystal microbalance (QCM). There is qualitative agreement between analysis and experiment, at least in terms of the overall number density shape functions in the forward flux region.
Accountability and Sanctions in English Schools
ERIC Educational Resources Information Center
West, Anne; Mattei, Paola; Roberts, Jonathan
2011-01-01
This paper focuses on accountability in school-based education in England. It explores notions of accountability and proposes a new framework for its analysis. It then identifies a number of types of accountability which are present in school-based education, and discusses each in terms of who is accountable to whom and for what. It goes on to…
ERIC Educational Resources Information Center
Cliatt, Katherine H.
This learning activity guide and instructor's manual provide information and exercises for an exploratory activity in accounting. Instructional objectives covered in the guide are for the students to learn (1) reasons for studying accounting and related job descriptions, (2) definitions for accounting terms, (3) the accounting equation, (4) how to…
Leshuk, Tim; de Oliveira Livera, Diogo; Peru, Kerry M; Headley, John V; Vijayaraghavan, Sucharita; Wong, Timothy; Gu, Frank
2016-12-01
Oil sands process-affected water (OSPW) is generated as a byproduct of bitumen extraction in Canada's oil sands. Due to the water's toxicity, associated with dissolved acid extractable organics (AEO), especially naphthenic acids (NAs), along with base-neutral organics, OSPW may require treatment to enable safe discharge to the environment. Heterogeneous photocatalysis is a promising advanced oxidation process (AOP) for OSPW remediation, however, predicting treatment efficacy can be challenging due to the unique water chemistry of OSPW from different tailings ponds. The objective of this work was to study various factors affecting the kinetics of photocatalytic AEO degradation in OSPW. The rate of photocatalytic treatment varied significantly in two different OSPW sources, which could not be accounted for by differences in AEO composition, as studied by high resolution mass spectrometry (HRMS). The effects of inorganic water constituents were investigated using factorial and response surface experiments, which revealed that hydroxyl (HO) radical scavenging by iron (Fe 3+ ) and bicarbonate (HCO 3 - ) inhibited the NA degradation rate. The effects of NA concentration and temperature on the treatment kinetics were also evaluated in terms of Langmuir-Hinshelwood and Arrhenius models; pH and temperature were identified as weak factors, while dissolved oxygen (DO) was critical to the photo-oxidation reaction. Accounting for all of these variables, a general empirical kinetic expression is proposed, enabling prediction of photocatalytic treatment performance in diverse sources of OSPW. Copyright © 2016 Elsevier Ltd. All rights reserved.
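The Langmuir-Hinshelwood and Arrhenius framework named above can be sketched as a single combined rate law. This is an illustrative stand-in, not the paper's fitted OSPW model; all parameter values below are placeholders.

```python
import numpy as np

R_GAS = 8.314  # gas constant, J/(mol K)

def lh_arrhenius_rate(C, T, A=1.0e6, Ea=2.0e4, K=0.05):
    """Langmuir-Hinshelwood degradation rate with an Arrhenius
    temperature dependence of the surface rate constant:

        r = k(T) * K * C / (1 + K * C),   k(T) = A * exp(-Ea / (R T))

    C: pollutant concentration (e.g. mg/L), T: temperature (K).
    A, Ea, K are illustrative placeholders, not fitted values."""
    k = A * np.exp(-Ea / (R_GAS * T))
    return k * K * C / (1.0 + K * C)
```

The hyperbolic form captures the saturation behaviour that makes apparent reaction order fall from first toward zeroth order as concentration rises, which is why a simple first-order fit can mislead when comparing OSPW sources.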
Ngwenya, Solwayo
2017-07-06
Stillbirths are distressing to parents and healthcare workers. Globally, large numbers of babies are stillborn, and a number of strategies have been implemented to try to reduce stillbirths worldwide. The objective of this study was to assess the impact of leadership and accountability changes on reducing full-term intrapartum stillbirths. Leadership and accountability changes were implemented in January 2016. This retrospective cohort study assessed the impact of these changes on fresh full-term intrapartum stillbirths, covering the period 6 months prior to the implementation date and 12 months after it. Fresh full-term stillbirths (>37 weeks gestation) occurring during the intrapartum stage of labour were analysed to see whether numbers fell after the measures were put in place. There was a reduction in the number of fresh full-term intrapartum stillbirths after the introduction of the measures, with a statistically significant difference before and after implementation (50% vs 0%, P = 0.025). There was also a reduction in the time taken to perform an emergency caesarean section, from a mean of 30 to 15 min by the end of the study, a 50% reduction. Clear and consistent clinical leadership and accountability can help in the global attempt to reduce stillbirth figures. Simple measures can contribute to improving perinatal outcomes.
17 CFR 242.406 - Undermargined accounts.
Code of Federal Regulations, 2010 CFR
2010-04-01
...) REGULATIONS M, SHO, ATS, AC, AND NMS AND CUSTOMER MARGIN REQUIREMENTS FOR SECURITY FUTURES Customer Margin... securities account. Regulation AC—Analyst Certification Source: 68 FR 9492, February 27, 2003, unless...
NASA Astrophysics Data System (ADS)
Labahn, Jeffrey William; Devaud, Cecile
2017-05-01
A Reynolds-Averaged Navier-Stokes (RANS) simulation of the semi-industrial International Flame Research Foundation (IFRF) furnace is performed using a non-adiabatic Conditional Source-term Estimation (CSE) formulation. This represents the first time that a CSE formulation, which accounts for the effect of radiation on the conditional reaction rates, has been applied to a large scale semi-industrial furnace. The objective of the current study is to assess the capabilities of CSE to accurately reproduce the velocity field, temperature, species concentration and nitrogen oxides (NOx) emission for the IFRF furnace. The flow field is solved using the standard k-ε turbulence model and detailed chemistry is included. NOx emissions are calculated using two different methods. Predicted velocity profiles are in good agreement with the experimental data. The predicted peak temperature occurs closer to the centreline, as compared to the experimental observations, suggesting that the mixing between the fuel jet and vitiated air jet may be overestimated. Good agreement between the species concentrations, including NOx, and the experimental data is observed near the burner exit. Farther downstream, the centreline oxygen concentration is found to be underpredicted. Predicted NOx concentrations are in good agreement with experimental data when calculated using the method of Peters and Weber. The current study indicates that RANS-CSE can accurately predict the main characteristics seen in a semi-industrial IFRF furnace.
Design and realization of disaster assessment algorithm after forest fire
NASA Astrophysics Data System (ADS)
Xu, Aijun; Wang, Danfeng; Tang, Lihua
2008-10-01
Based on GIS technology, this paper focuses on the application of a disaster assessment algorithm after forest fire and studies the design and realization of GIS-based disaster assessment. Through the analysis and processing of multi-source, heterogeneous data, the paper integrates the foundation laid by domestic and foreign research on forest fire loss assessment with related knowledge of assessment, accounting, and forest resources appraisal, in order to develop a theoretical framework and assessment indices for forest fire loss assessment. Technologies for boundary extraction, overlay analysis, and division processing of multi-source spatial data are used to implement the investigation of the burnt forest area and the computation of the fire area. The assessment provides evidence for fire cleaning in burnt areas and for new restoration policies, in terms of the direct and indirect economic loss and the ecological and environmental damage caused by forest fire under different fire danger classes and different amounts of forest accumulation, thus making forest resources protection faster, more efficient, and more economical. Finally, the paper takes Lin'an city of Zhejiang province as a test area to validate the key technologies of the proposed method.
Weathering and carbon fluxes of the Irrawaddy-Salween-Mekong river system
NASA Astrophysics Data System (ADS)
Baronas, J. J.; Tipper, E.; Hilton, R. G.; Bickle, M.; Relph, K.; Parsons, D. R.
2017-12-01
The Irrawaddy-Salween-Mekong (ISM) rivers, with their source regions draining the eastern Tibetan Plateau, account for a significant portion of the global solute and sediment flux to the ocean, and appear to exhibit some of the highest chemical weathering rates in the world. However, they remain greatly understudied despite their significance. We will present data from the first part of a recently started multi-year study of these monsoon-controlled river systems. Our aim is to fully deconvolve and quantify the multiple processes and fluxes which play a role in the long-term feedback loop between tectonics, climate, and the critical zone. The long-term goals of the project are to accurately partition the silicate and carbonate weathering rates, acidity sources, and various organic and inorganic carbon fluxes, using a large range of geochemical and isotopic analyses. In addition, we have begun to collect extensive suspended sediment depth profiles to assess changes in sediment chemistry from the Himalayan headwaters to the river mouths, in an attempt to quantify whole-catchment silicate weathering rates over millennial timescales. Finally, bi-weekly multi-annual time-series data are being used to assess the catchment biogeochemical response to the strong hydrological seasonality imposed by the monsoonal climate. Here, we will present some of our preliminary findings from dissolved and sediment data from the main stems and major tributaries of the ISM rivers.
Deterministic Stress Modeling of Hot Gas Segregation in a Turbine
NASA Technical Reports Server (NTRS)
Busby, Judy; Sondak, Doug; Staubach, Brent; Davis, Roger
1998-01-01
Simulation of unsteady viscous turbomachinery flowfields is presently impractical as a design tool due to the long run times required. Designers rely predominantly on steady-state simulations, but these simulations do not account for some of the important unsteady flow physics. Unsteady flow effects can be modeled as source terms in the steady flow equations. These source terms, referred to as Lumped Deterministic Stresses (LDS), can be used to drive steady flow solution procedures to reproduce the time-average of an unsteady flow solution. The goal of this work is to investigate the feasibility of using inviscid lumped deterministic stresses to model unsteady combustion hot streak migration effects on the turbine blade tip and outer air seal heat loads using a steady computational approach. The LDS model is obtained from an unsteady inviscid calculation. The LDS model is then used with a steady viscous computation to simulate the time-averaged viscous solution. Both two-dimensional and three-dimensional applications are examined. The inviscid LDS model produces good results for the two-dimensional case and requires less than 10% of the CPU time of the unsteady viscous run. For the three-dimensional case, the LDS model does a good job of reproducing the time-averaged viscous temperature migration and separation as well as heat load on the outer air seal at a CPU cost that is 25% of that of an unsteady viscous computation.
NASA Astrophysics Data System (ADS)
Wang, Jiandong; Wang, Shuxiao; Voorhees, A. Scott; Zhao, Bin; Jang, Carey; Jiang, Jingkun; Fu, Joshua S.; Ding, Dian; Zhu, Yun; Hao, Jiming
2015-12-01
Air pollution is a major environmental risk to health. In this study, short-term premature mortality due to particulate matter equal to or less than 2.5 μm in aerodynamic diameter (PM2.5) in the Yangtze River Delta (YRD) is estimated by using PC-based human health benefits software. The economic loss is assessed by using the willingness to pay (WTP) method. The contributions of each region, sector and gaseous precursor are also determined by employing the brute-force method. The results show that, in the YRD in 2010, the short-term premature deaths caused by PM2.5 are estimated to be 13,162 (95% confidence interval (CI): 10,761-15,554), while the economic loss is 22.1 (95% CI: 18.1-26.1) billion Chinese Yuan. The industrial and residential sectors contributed the most, accounting for more than 50% of the total economic loss. Emissions of primary PM2.5 and NH3 are major contributors to the health-related loss in winter, while the contribution of gaseous precursors such as SO2 and NOx is higher than primary PM2.5 in summer.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-30
... the telemarketing transaction involves preacquired account information \\20\\ and a free-to-pay... preacquired account information, the dealer would have to: (1) Identify the account to be charged with..., which includes similar provisions.\\23\\ \\20\\ The term ``preacquired account information'' would mean any...
31 CFR 598.301 - Blocked account; blocked property.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 31 Money and Finance:Treasury 3 2013-07-01 2013-07-01 false Blocked account; blocked property. 598.301 Section 598.301 Money and Finance: Treasury Regulations Relating to Money and Finance (Continued... REGULATIONS General Definitions § 598.301 Blocked account; blocked property. The terms blocked account and...
31 CFR 598.301 - Blocked account; blocked property.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 31 Money and Finance:Treasury 3 2014-07-01 2014-07-01 false Blocked account; blocked property. 598.301 Section 598.301 Money and Finance: Treasury Regulations Relating to Money and Finance (Continued... REGULATIONS General Definitions § 598.301 Blocked account; blocked property. The terms blocked account and...
31 CFR 598.301 - Blocked account; blocked property.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 31 Money and Finance:Treasury 3 2012-07-01 2012-07-01 false Blocked account; blocked property. 598.301 Section 598.301 Money and Finance: Treasury Regulations Relating to Money and Finance (Continued... REGULATIONS General Definitions § 598.301 Blocked account; blocked property. The terms blocked account and...
31 CFR 598.301 - Blocked account; blocked property.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 31 Money and Finance:Treasury 3 2011-07-01 2011-07-01 false Blocked account; blocked property. 598.301 Section 598.301 Money and Finance: Treasury Regulations Relating to Money and Finance (Continued... REGULATIONS General Definitions § 598.301 Blocked account; blocked property. The terms blocked account and...
31 CFR 598.301 - Blocked account; blocked property.
Code of Federal Regulations, 2010 CFR
2010-07-01
.... 598.301 Section 598.301 Money and Finance: Treasury Regulations Relating to Money and Finance... REGULATIONS General Definitions § 598.301 Blocked account; blocked property. The terms blocked account and blocked property mean any account or property subject to § 598.202 held in the name of a specially...
7 CFR 1767.19 - Liabilities and other credits.
Code of Federal Regulations, 2010 CFR
2010-01-01
... furnish complete information concerning each note and open account. 224Other Long-Term Debt A. This... this account shall be kept in such a manner that the utility can furnish full information as to the... Accounts § 1767.19 Liabilities and other credits. The liabilities and other credit accounts identified in...
26 CFR 1.815-5 - Other accounts defined.
Code of Federal Regulations, 2010 CFR
2010-04-01
... Internal Revenue INTERNAL REVENUE SERVICE, DEPARTMENT OF THE TREASURY (CONTINUED) INCOME TAX (CONTINUED) INCOME TAXES Distributions to Shareholders § 1.815-5 Other accounts defined. The term other accounts, as... accounts includes amounts representing the increase in tax due to the operation of section 802(b)(3) which...
31 CFR 510.301 - Blocked account; blocked property.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 31 Money and Finance:Treasury 3 2013-07-01 2013-07-01 false Blocked account; blocked property. 510.301 Section 510.301 Money and Finance: Treasury Regulations Relating to Money and Finance (Continued... Definitions § 510.301 Blocked account; blocked property. The terms blocked account and blocked property shall...
31 CFR 510.301 - Blocked account; blocked property.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 31 Money and Finance:Treasury 3 2014-07-01 2014-07-01 false Blocked account; blocked property. 510.301 Section 510.301 Money and Finance: Treasury Regulations Relating to Money and Finance (Continued... Definitions § 510.301 Blocked account; blocked property. The terms blocked account and blocked property shall...
31 CFR 510.301 - Blocked account; blocked property.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 31 Money and Finance:Treasury 3 2011-07-01 2011-07-01 false Blocked account; blocked property. 510.301 Section 510.301 Money and Finance: Treasury Regulations Relating to Money and Finance (Continued... Definitions § 510.301 Blocked account; blocked property. The terms blocked account and blocked property shall...
30 CFR 206.155 - Accounting for comparison.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 30 Mineral Resources 2 2010-07-01 2010-07-01 false Accounting for comparison. 206.155 Section 206... MANAGEMENT PRODUCT VALUATION Federal Gas § 206.155 Accounting for comparison. (a) Except as provided in... subpart. (b) The requirement for accounting for comparison contained in the terms of leases will govern as...
47 CFR 32.1410 - Other noncurrent assets.
Code of Federal Regulations, 2010 CFR
2010-10-01
... the cost method, shall be charged to Account 4540, Other capital, if temporary and as a current period... outstanding long-term debt. Amounts included in this account shall be amortized monthly and charged to account... be charged to this account. A subsidiary record shall be kept for each sinking fund which shall...
Integrating Systems into Accounting Instruction.
ERIC Educational Resources Information Center
Heatherington, Ralph
1980-01-01
By incorporating a discussion of systems into the beginning accounting class, students will have a more accurate picture of business and the role accounting plays in it. Students should understand the purpose of forms, have a basic knowledge of flowcharting principles and symbols, and know how source documents are created. (CT)
Code of Federal Regulations, 2010 CFR
2010-10-01
... 45 Public Welfare 4 2010-10-01 2010-10-01 false Accounting. 1610.9 Section 1610.9 Public Welfare Regulations Relating to Public Welfare (Continued) LEGAL SERVICES CORPORATION USE OF NON-LSC FUNDS, TRANSFERS OF LSC FUNDS, PROGRAM INTEGRITY § 1610.9 Accounting. Funds received by a recipient from a source...
Representation of Knowledge on Some Management Accounting Techniques in Textbooks
ERIC Educational Resources Information Center
Golyagina, Alena; Valuckas, Danielius
2016-01-01
This paper examines the coverage of management accounting techniques in several popular management accounting texts, assessing each technique's claimed position within practice, its benefits and limitations, and the information sources substantiating these claims. Employing the notion of research genres, the study reveals that textbooks in their…
Bayesian estimation of a source term of radiation release with approximately known nuclide ratios
NASA Astrophysics Data System (ADS)
Tichý, Ondřej; Šmídl, Václav; Hofman, Radek
2016-04-01
We are concerned with estimation of a source term in the case of an accidental release from a known location, e.g. a power plant. Usually, the source term of an accidental release of radiation comprises a mixture of nuclides. Gamma dose rate measurements do not provide direct information on the source term composition. However, physical properties of the respective nuclides (deposition properties, decay half-life) can be used when uncertain information on nuclide ratios is available, e.g. from a known reactor inventory. The proposed method is based on a linear inverse model in which the observation vector y arises as a linear combination y = Mx of a source-receptor-sensitivity (SRS) matrix M and the source term x. The task is to estimate the unknown source term x. The problem is ill-conditioned, and regularization is needed to obtain a reasonable solution. In this contribution, we assume that the nuclide ratios of the release are known with some degree of uncertainty. This knowledge is used to form the prior covariance matrix of the source term x. Due to uncertainty in the ratios, the diagonal elements of the covariance matrix are considered unknown. Positivity of the source term estimate is guaranteed by using a multivariate truncated Gaussian distribution. Following a Bayesian approach, we estimate all parameters of the model from the data, so that y, M, and the known ratios are the only inputs of the method. Since exact inference in the model is intractable, we follow the Variational Bayes method, yielding an iterative algorithm for estimation of all model parameters. Performance of the method is studied on a simulated 6-hour power plant release in which 3 nuclides are released and 2 nuclide ratios are approximately known. A comparison with a method using unknown nuclide ratios is given to demonstrate the usefulness of the proposed approach.
This research is supported by EEA/Norwegian Financial Mechanism under project MSMT-28477/2014 Source-Term Determination of Radionuclide Releases by Inverse Atmospheric Dispersion Modelling (STRADI).
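The linear inverse model y = Mx with a ratio-informed prior can be sketched in a few lines. The code below is a simplified Gaussian MAP estimate, not the paper's Variational Bayes algorithm: the prior mean is built from approximately known nuclide ratios, the prior covariance encodes their uncertainty, and clipping at zero is a crude stand-in for the truncated-Gaussian positivity constraint. All numbers are a toy example.

```python
import numpy as np

def map_source_term(y, M, mu0, Sigma0, sigma2=1e-2):
    """Gaussian MAP estimate of x in y = M x + noise:
        x_hat = mu0 + Sigma0 M^T (M Sigma0 M^T + sigma2 I)^(-1) (y - M mu0)
    followed by clipping to enforce nonnegativity (a rough proxy for
    the multivariate truncated Gaussian used in the paper)."""
    S = M @ Sigma0 @ M.T + sigma2 * np.eye(len(y))
    gain = Sigma0 @ M.T @ np.linalg.solve(S, y - M @ mu0)
    return np.clip(mu0 + gain, 0.0, None)

# toy example: 3 nuclides released with true magnitudes 10 : 5 : 1
rng = np.random.default_rng(0)
x_true = np.array([10.0, 5.0, 1.0])
M = rng.uniform(0.1, 1.0, size=(8, 3))     # toy SRS matrix (8 detectors)
y = M @ x_true + rng.normal(0.0, 0.05, size=8)
mu0 = 8.0 * np.array([1.0, 0.5, 0.1])      # prior from approximate ratios
Sigma0 = np.diag((2.0 * mu0) ** 2)         # loose prior covariance
x_hat = map_source_term(y, M, mu0, Sigma0)
```

Even with a deliberately loose prior, the ratio information regularizes the ill-conditioned inversion; the full method additionally learns the covariance diagonal from the data.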
River Export of Plastic from Land to Sea: A Global Modeling Approach
NASA Astrophysics Data System (ADS)
Siegfried, Max; Gabbert, Silke; Koelmans, Albert A.; Kroeze, Carolien; Löhr, Ansje; Verburg, Charlotte
2016-04-01
Plastic is increasingly considered a serious cause of water pollution. It is a threat to aquatic ecosystems, including rivers, coastal waters and oceans. Rivers transport considerable amounts of plastic from land to sea, but the quantity and its main sources are not well known. Assessing the amount of macro- and microplastic transport from river to sea is therefore important for understanding the dimension and the patterns of plastic pollution of aquatic ecosystems. In addition, it is crucial for assessing short- and long-term impacts caused by plastic pollution. Here we present a global modelling approach to quantify river export of plastic from land to sea. Our approach accounts for different types of plastic, including both macro- and micro-plastics. Moreover, we distinguish point sources and diffuse sources of plastic in rivers. Our modelling approach is inspired by global nutrient models, which include more than 6000 river basins. In this paper, we present our modelling approach, as well as first model results for micro-plastic pollution in European rivers. Important sources of micro-plastics include personal care products, laundry, household dust and car tyre wear. We combine information on these sources with information on sewage management and plastic retention during river transport for the largest European rivers. Our modelling approach may help to better understand and prevent water pollution by plastic, and at the same time serves as a 'proof of concept' for future application on a global scale.
49 CFR 193.2613 - Auxiliary power sources.
Code of Federal Regulations, 2011 CFR
2011-10-01
... test must take into account the power needed to start up and simultaneously operate equipment that... 49 Transportation 3 2011-10-01 2011-10-01 false Auxiliary power sources. 193.2613 Section 193.2613...: FEDERAL SAFETY STANDARDS Maintenance § 193.2613 Auxiliary power sources. Each auxiliary power source must...
10 CFR 50.67 - Accident source term.
Code of Federal Regulations, 2014 CFR
2014-01-01
... occupancy of the control room under accident conditions without personnel receiving radiation exposures in... 10 Energy 1 2014-01-01 2014-01-01 false Accident source term. 50.67 Section 50.67 Energy NUCLEAR... Conditions of Licenses and Construction Permits § 50.67 Accident source term. (a) Applicability. The...
10 CFR 50.67 - Accident source term.
Code of Federal Regulations, 2012 CFR
2012-01-01
... occupancy of the control room under accident conditions without personnel receiving radiation exposures in... 10 Energy 1 2012-01-01 2012-01-01 false Accident source term. 50.67 Section 50.67 Energy NUCLEAR... Conditions of Licenses and Construction Permits § 50.67 Accident source term. (a) Applicability. The...
10 CFR 50.67 - Accident source term.
Code of Federal Regulations, 2010 CFR
2010-01-01
... occupancy of the control room under accident conditions without personnel receiving radiation exposures in... 10 Energy 1 2010-01-01 2010-01-01 false Accident source term. 50.67 Section 50.67 Energy NUCLEAR... Conditions of Licenses and Construction Permits § 50.67 Accident source term. (a) Applicability. The...
10 CFR 50.67 - Accident source term.
Code of Federal Regulations, 2013 CFR
2013-01-01
... occupancy of the control room under accident conditions without personnel receiving radiation exposures in... 10 Energy 1 2013-01-01 2013-01-01 false Accident source term. 50.67 Section 50.67 Energy NUCLEAR... Conditions of Licenses and Construction Permits § 50.67 Accident source term. (a) Applicability. The...
10 CFR 50.67 - Accident source term.
Code of Federal Regulations, 2011 CFR
2011-01-01
... occupancy of the control room under accident conditions without personnel receiving radiation exposures in... 10 Energy 1 2011-01-01 2011-01-01 false Accident source term. 50.67 Section 50.67 Energy NUCLEAR... Conditions of Licenses and Construction Permits § 50.67 Accident source term. (a) Applicability. The...
Image Reconstruction in Radio Astronomy with Non-Coplanar Synthesis Arrays
NASA Astrophysics Data System (ADS)
Goodrick, L.
2015-03-01
Traditional radio astronomy imaging techniques assume that the interferometric array is coplanar, with a small field of view, so that the two-dimensional Fourier relationship between brightness and visibility remains valid, allowing the Fast Fourier Transform to be used. In practice, to acquire more accurate data, the non-coplanar baseline effects need to be incorporated, as small height variations in the array plane introduce the w spatial frequency component. This component adds an additional phase shift to the incoming signals. There are two approaches to account for the non-coplanar baseline effects: either the full three-dimensional brightness and visibility model can be used to reconstruct an image, or the non-coplanar effects can be removed, reducing the three-dimensional relationship to the two-dimensional one. This thesis describes and implements the w-projection and w-stacking algorithms. The aim of these algorithms is to account for the phase error introduced by non-coplanar synthesis array configurations, making the recovered visibilities more true to the actual brightness distribution model. This is done by reducing the 3D visibilities to a 2D visibility model. The algorithms also have the added benefit of wide-field imaging, although w-stacking supports a wider field of view at the cost of more FFT bin support. For w-projection, the w-term is accounted for in the visibility domain by convolving it out of the problem with a convolution kernel, allowing the use of the two-dimensional Fast Fourier Transform. Similarly, the w-stacking algorithm applies a phase correction in the image domain to image layers to produce an intensity model that accounts for the non-coplanar baseline effects. This project considers the KAT7 array for simulation and analysis of the limitations and advantages of both algorithms. Additionally, a variant of the Högbom CLEAN algorithm was used which employs contour trimming for extended source emission flagging.
The CLEAN algorithm is an iterative two-dimensional deconvolution method that can further improve image fidelity by removing the effects of the point spread function which can obscure source data.
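The non-coplanar phase term at the heart of both w-projection and w-stacking can be written down directly. The sketch below (function name ours, not code from the thesis) evaluates the standard w-dependent phase screen; w-stacking multiplies each image layer by the conjugate of this screen, while w-projection convolves its Fourier transform into the gridding kernel.

```python
import numpy as np

def w_phase_screen(l, m, w):
    """Phase term introduced by a non-coplanar baseline:

        exp(-2j * pi * w * (sqrt(1 - l^2 - m^2) - 1))

    l, m: direction cosines on the sky (scalars or arrays),
    w: the baseline's w-component in wavelengths. The term vanishes
    (screen = 1) at the phase centre l = m = 0 and grows toward the
    edge of the field, which is why wide fields need the correction."""
    n_minus_1 = np.sqrt(1.0 - l**2 - m**2) - 1.0
    return np.exp(-2j * np.pi * w * n_minus_1)
```

Because the screen is a pure phase (unit modulus everywhere), applying its conjugate exactly undoes the distortion for a given w, which is what makes layering over discrete w values viable.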
NASA Astrophysics Data System (ADS)
Hazenberg, P.; Uijlenhoet, R.; Leijnse, H.
2015-12-01
Volumetric weather radars provide information on the characteristics of precipitation at high spatial and temporal resolution. Unfortunately, rainfall measurements by radar are affected by multiple error sources, which can be subdivided into two main groups: 1) errors affecting the volumetric reflectivity measurements (e.g. ground clutter, vertical profile of reflectivity, attenuation, etc.), and 2) errors related to the conversion of the observed reflectivity (Z) values into rainfall intensity (R) and specific attenuation (k). Until the recent wide-scale implementation of dual-polarimetric radar, this second group of errors received relatively little attention, focusing predominantly on precipitation type-dependent Z-R and Z-k relations. The current work accounts for the impact of variations of the drop size distribution (DSD) on the radar QPE performance. We propose to link the parameters of the Z-R and Z-k relations directly to those of the normalized gamma DSD. The benefit of this procedure is that it reduces the number of unknown parameters. In this work, the DSD parameters are obtained using 1) surface observations from a Parsivel and Thies LPM disdrometer, and 2) a Monte Carlo optimization procedure using surface rain gauge observations. The impact of both approaches for a given precipitation type is assessed for 45 days of summertime precipitation observed within The Netherlands. Accounting for DSD variations using disdrometer observations leads to an improved radar QPE product as compared to applying climatological Z-R and Z-k relations. However, overall precipitation intensities are still underestimated. This underestimation is expected to result from unaccounted errors (e.g. transmitter calibration, erroneous identification of precipitation as clutter, overshooting and small-scale variability). When the DSD parameters are optimized, the performance of the radar is further improved, resulting in the best performance of the radar QPE product.
However, the resulting optimal Z-R and Z-k relations are considerably different from those obtained from disdrometer observations. As such, the best microphysical parameter set results in a minimization of the overall bias, which besides accounting for DSD variations also corrects for the impact of additional error sources.
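The link between DSD parameters and the Z-R relation can be illustrated by integrating moments of a gamma DSD numerically. This is a generic sketch, not the authors' normalized-gamma formulation: it uses a common power-law fall-speed approximation (v(D) ≈ 3.78 D^0.67 m/s), and all parameter values are illustrative.

```python
import numpy as np

def gamma_dsd_z_r(n0, mu, lam, d_max=8.0, nd=2000):
    """Reflectivity Z (mm^6 m^-3) and rain rate R (mm/h) implied by a
    gamma drop size distribution

        N(D) = N0 * D^mu * exp(-lam * D),   D in mm,

    integrated numerically over drop diameter. Z is the 6th moment of
    the DSD; R weights the drop volume by a power-law fall speed."""
    d = np.linspace(1e-3, d_max, nd)
    dd = d[1] - d[0]
    n_d = n0 * d**mu * np.exp(-lam * d)
    z = np.sum(n_d * d**6) * dd
    # R [mm/h] = 6*pi*1e-4 * integral of N(D) D^3 v(D) dD, v in m/s
    r = 6.0 * np.pi * 1e-4 * np.sum(n_d * d**3 * 3.78 * d**0.67) * dd
    return z, r
```

Sweeping (mu, lam) while recomputing (Z, R) pairs shows how the effective Z-R prefactor and exponent drift with DSD shape, which is exactly the variability the radar QPE procedure has to absorb.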
Brown, Kevin A; Jones, Makoto; Daneman, Nick; Adler, Frederick R; Stevens, Vanessa; Nechodom, Kevin E; Goetz, Matthew B; Samore, Matthew H; Mayer, Jeanmarie
2016-06-21
Although clinical factors affecting a person's susceptibility to Clostridium difficile infection are well-understood, little is known about what drives differences in incidence across long-term care settings. To obtain a comprehensive picture of individual and regional factors that affect C difficile incidence. Multilevel longitudinal nested case-control study. Veterans Health Administration health care regions, from 2006 through 2012. Long-term care residents. Individual-level risk factors included age, number of comorbid conditions, and antibiotic exposure. Regional risk factors included importation of cases of acute care C difficile infection per 10 000 resident-days and antibiotic use per 1000 resident-days. The outcome was defined as a positive result on a long-term care C difficile test without a positive result in the prior 8 weeks. 6012 cases (incidence, 3.7 cases per 10 000 resident-days) were identified in 86 regions. Long-term care C difficile incidence (minimum, 0.6 case per 10 000 resident-days; maximum, 31.0 cases per 10 000 resident-days), antibiotic use (minimum, 61.0 days with therapy per 1000 resident-days; maximum, 370.2 days with therapy per 1000 resident-days), and importation (minimum, 2.9 cases per 10 000 resident-days; maximum, 341.3 cases per 10 000 resident-days) varied substantially across regions. Together, antibiotic use and importation accounted for 75% of the regional variation in C difficile incidence (R2 = 0.75). Multilevel analyses showed that regional factors affected risk together with individual-level exposures (relative risk of regional antibiotic use, 1.36 per doubling [95% CI, 1.15 to 1.60]; relative risk of importation, 1.23 per doubling [CI, 1.14 to 1.33]). Case identification was based on laboratory criteria. Admission of residents with recent C difficile infection from non-Veterans Health Administration acute care sources was not considered. 
Only 25% of the variation in regional C difficile incidence in long-term care remained unexplained after importation from acute care facilities and antibiotic use were accounted for, which suggests that improved infection control and antimicrobial stewardship may help reduce the incidence of C difficile in long-term care settings. U.S. Department of Veterans Affairs and Centers for Disease Control and Prevention.
Park, Eun Sug; Hopke, Philip K; Oh, Man-Suk; Symanski, Elaine; Han, Daikwon; Spiegelman, Clifford H
2014-07-01
There has been increasing interest in assessing health effects associated with multiple air pollutants emitted by specific sources. A major difficulty with achieving this goal is that the pollution source profiles are unknown and source-specific exposures cannot be measured directly; rather, they need to be estimated by decomposing ambient measurements of multiple air pollutants. This estimation process, called multivariate receptor modeling, is challenging because of the unknown number of sources and unknown identifiability conditions (model uncertainty). The uncertainty in source-specific exposures (source contributions) as well as uncertainty in the number of major pollution sources and identifiability conditions have been largely ignored in previous studies. A multipollutant approach that can deal with model uncertainty in multivariate receptor models while simultaneously accounting for parameter uncertainty in estimated source-specific exposures in assessment of source-specific health effects is presented in this paper. The methods are applied to daily ambient air measurements of the chemical composition of fine particulate matter (PM2.5), weather data, and counts of cardiovascular deaths from 1995 to 1997 for Phoenix, AZ, USA. Our approach for evaluating source-specific health effects yields not only estimates of source contributions along with their uncertainties and associated health effects estimates but also estimates of model uncertainty (posterior model probabilities) that have been ignored in previous studies. The results from our methods agreed in general with those from the previously conducted workshop/studies on the source apportionment of PM health effects in terms of number of major contributing sources, estimated source profiles, and contributions. 
However, some of the adverse source-specific health effects identified in the previous studies were not statistically significant in our analysis, which probably resulted because we incorporated parameter uncertainty in estimated source contributions that has been ignored in the previous studies into the estimation of health effects parameters. © The Author 2014. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
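The bilinear decomposition at the heart of multivariate receptor modeling (ambient concentrations = contributions x profiles) can be sketched with a plain non-negative matrix factorization. The multiplicative-update solver below is a stand-in for the paper's Bayesian machinery, and all data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic ambient data: 200 days x 8 species, generated by 3 sources.
G_true = rng.uniform(0, 2, (200, 3))   # source contributions (days x sources)
F_true = rng.uniform(0, 1, (3, 8))     # source profiles (sources x species)
X = G_true @ F_true + 0.01 * rng.uniform(size=(200, 8))  # small noise

def receptor_fit(X, k, iters=500):
    """Fit X ~ G @ F with non-negativity via Lee-Seung multiplicative
    updates; a simple surrogate for the receptor-model decomposition."""
    n, p = X.shape
    G = rng.uniform(0.1, 1, (n, k))
    F = rng.uniform(0.1, 1, (k, p))
    for _ in range(iters):
        F *= (G.T @ X) / (G.T @ G @ F + 1e-12)
        G *= (X @ F.T) / (G @ F @ F.T + 1e-12)
    return G, F

G, F = receptor_fit(X, 3)
rel_err = np.linalg.norm(X - G @ F) / np.linalg.norm(X)
```

Note that the factorization is only identifiable up to scaling and permutation of the sources, which is exactly the identifiability ambiguity the paper treats as model uncertainty.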
NASA Astrophysics Data System (ADS)
Eremin, Yu. A.; Sveshnikov, A. G.
2018-04-01
The discrete source method is used to develop and implement a mathematical model for solving the problem of scattering electromagnetic waves by a three-dimensional plasmonic scatterer with nonlocal effects taken into account. Numerical results are presented whereby the features of the scattering properties of plasmonic particles with allowance for nonlocal effects are demonstrated depending on the direction and polarization of the incident wave.
Combination of acoustical radiosity and the image source method.
Koutsouris, Georgios I; Brunskog, Jonas; Jeong, Cheol-Ho; Jacobsen, Finn
2013-06-01
A combined model for room acoustic predictions is developed, aiming to treat both diffuse and specular reflections in a unified way. Two established methods are incorporated: acoustical radiosity, accounting for the diffuse part, and the image source method, accounting for the specular part. The model is based on conservation of acoustical energy. Losses are taken into account by the energy absorption coefficient, and the diffuse reflections are controlled via the scattering coefficient, which defines the portion of energy that has been diffusely reflected. The way the model is formulated allows for a dynamic control of the image source production, so that no fixed maximum reflection order is required. The model is optimized for energy impulse response predictions in arbitrary polyhedral rooms. The predictions are validated by comparison with published measured data for a real music studio hall. The proposed model turns out to be promising for acoustic predictions providing a high level of detail and accuracy.
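The specular half of such a combined model starts from image sources. A minimal sketch of the first-order image construction for a shoebox room (geometry and absorption values are illustrative; the paper's model additionally routes the scattered energy fraction into radiosity):

```python
import math

def first_order_images(src, room):
    """First-order image sources for a shoebox room [0,Lx]x[0,Ly]x[0,Lz]:
    each of the six walls mirrors one coordinate of the source."""
    images = []
    for axis, L in enumerate(room):
        for plane in (0.0, L):
            img = list(src)
            img[axis] = 2 * plane - src[axis]
            images.append(tuple(img))
    return images

def specular_arrivals(src, rcv, room, alpha, scatter, c=343.0):
    """(delay_s, energy) per first-order specular path. One reflection
    keeps (1 - alpha)(1 - s) of the energy: absorption removes alpha and
    the scattering coefficient s diverts energy to the diffuse part."""
    out = []
    for img in first_order_images(src, room):
        d = math.dist(img, rcv)
        out.append((d / c, (1 - alpha) * (1 - scatter) / d**2))
    return out

paths = specular_arrivals((1, 1, 1), (3, 2, 1.5), (5, 4, 3),
                          alpha=0.3, scatter=0.2)
```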
Code of Federal Regulations, 2010 CFR
2010-04-01
... from, for, or on behalf of foreign futures or foreign options customers as defined in § 30.1 of this.... This term means a customer that is an eligible contract participant, as defined in section 1a(12) of... customer shall be a customer as defined in § 1.3(k). (vv) Futures account. This term means an account that...
NASA Astrophysics Data System (ADS)
Zander, C.; Plastino, A. R.; Díaz-Alonso, J.
2015-11-01
We investigate time-dependent solutions for a non-linear Schrödinger equation recently proposed by Nassar and Miret-Artés (NM) to describe the continuous measurement of the position of a quantum particle (Nassar, 2013; Nassar and Miret-Artés, 2013). Here we extend these previous studies in two different directions. On the one hand, we incorporate a potential energy term in the NM equation and explore the corresponding wave packet dynamics, while in the previous works the analysis was restricted to the free-particle case. On the other hand, we investigate time-dependent solutions while previous studies focused on a stationary one. We obtain exact wave packet solutions for linear and quadratic potentials, and approximate solutions for the Morse potential. The free-particle case is also revisited from a time-dependent point of view. Our analysis of time-dependent solutions allows us to determine the stability properties of the stationary solution considered in Nassar (2013), Nassar and Miret-Artés (2013). On the basis of these results we reconsider the Bohmian approach to the NM equation, taking into account the fact that the evolution equation for the probability density ρ = |ψ|² is not a continuity equation. We show that the effect of the source term appearing in the evolution equation for ρ has to be explicitly taken into account when interpreting the NM equation from a Bohmian point of view.
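Schematically (generic form only; the specific source generated by the NM measurement and friction terms is given in the cited papers), an ordinary continuity equation reads

```latex
\partial_t \rho + \partial_x (\rho v) = 0, \qquad \rho = |\psi|^2 ,
```

whereas for the NM equation the density evolves with a non-vanishing right-hand side,

```latex
\partial_t \rho + \partial_x (\rho v) = S(x,t), \qquad S \not\equiv 0 ,
```

and it is this source S that a Bohmian reading of the dynamics must take into account.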
NASA Astrophysics Data System (ADS)
Ferrero, Pietro
The main objective of this work is to investigate the effects of the coupling between the turbulent fluctuations and the highly non-linear chemical source terms in the context of large-eddy simulations of turbulent reacting flows. To this aim, we implement the filtered mass density function (FMDF) methodology on an existing finite volume (FV) fluid dynamics solver. The FMDF provides additional statistical sub-grid scale (SGS) information about the thermochemical state of the flow - species mass fractions and enthalpy - which would not be available otherwise. The core of the methodology involves solving a transport equation for the FMDF by means of a stochastic, grid-free, Lagrangian particle procedure. Any moments of the distribution can be obtained by taking ensemble averages of the particles. The main advantage of this strategy is that the chemical source terms appear in closed form, so that the effects of turbulent fluctuations on these terms are already accounted for and do not need to be modeled. We first validate and demonstrate the consistency of our implementation by comparing the results of the hybrid FV/FMDF procedure against model-free LES for temporally developing, non-reacting mixing layers. Consistency requires that, for non-reacting cases, the two solvers should yield identical solutions. We investigate the sensitivity of the FMDF solution to the most relevant numerical parameters, such as the number of particles per cell and the size of the ensemble domain. Next, we apply the FMDF modeling strategy to the simulation of chemically reacting, two- and three-dimensional temporally developing mixing layers and compare the results against both DNS and model-free LES. We clearly show that, when the turbulence/chemistry interaction is accounted for with the FMDF methodology, the results are in much better agreement with the DNS data. 
Finally, we perform two- and three-dimensional simulations of high Reynolds number, spatially developing, chemically reacting mixing layers, with the intent of reproducing a set of experimental results obtained at the California Institute of Technology. The mean temperature rise calculated by the hybrid FV/FMDF solver, which is associated with the amount of product formed, lies very close to the experimental profile. Conversely, when the effects of turbulence/chemistry coupling are ignored, the simulations clearly overpredict the amount of product that is formed.
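The closed-form advantage of evaluating the source per particle can be illustrated with a toy ensemble. Everything below is notional (one scalar per particle, an Arrhenius-like source picked for illustration), not the solver's actual implementation:

```python
import math
import random

random.seed(1)

# Notional particle ensemble: each particle carries a position x in [0,1)
# and one scalar (e.g. a mixture fraction), mimicking the FMDF particles.
particles = [(random.random(), random.random()) for _ in range(10_000)]

def cell_mean(particles, ncells, f):
    """Per-cell ensemble average of f(scalar). Because f is evaluated
    particle by particle, a highly non-linear 'chemical source' f needs
    no closure model."""
    sums = [0.0] * ncells
    counts = [0] * ncells
    for x, phi in particles:
        i = min(int(x * ncells), ncells - 1)
        sums[i] += f(phi)
        counts[i] += 1
    return [s / c for s, c in zip(sums, counts)]

# Arrhenius-like non-linear source, evaluated two inequivalent ways:
mean_src = cell_mean(particles, 8, lambda phi: math.exp(-2.0 / max(phi, 1e-9)))
mean_phi = cell_mean(particles, 8, lambda phi: phi)
# <exp(-2/phi)> differs from exp(-2/<phi>): evaluating the source at the
# filtered mean (i.e. without SGS information) mis-predicts the filtered
# source, which is why ignoring the coupling overpredicts the product.
```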
Probing the X-Ray Binary Populations of the Ring Galaxy NGC 1291
NASA Technical Reports Server (NTRS)
Luo, B.; Fabbiano, G.; Fragos, T.; Kim, D. W.; Belczynski, K.; Brassington, N. J.; Pellegrini, S.; Tzanavaris, P.; Wang, J.; Zezas, A.
2012-01-01
We present Chandra studies of the X-ray binary (XRB) populations in the bulge and ring regions of the ring galaxy NGC 1291. We detect 169 X-ray point sources in the galaxy, 75 in the bulge and 71 in the ring, utilizing the four available Chandra observations totaling an effective exposure of 179 ks. We report photometric properties of these sources in a point-source catalog. Approximately 40% of the bulge sources and 25% of the ring sources show >3σ long-term variability in their X-ray count rates. The X-ray colors suggest that a significant fraction of the bulge (approx. 75%) and ring (approx. 65%) sources are likely low-mass X-ray binaries (LMXBs). The spectra of the nuclear source indicate that it is a low-luminosity AGN with moderate obscuration; spectral variability is observed between individual observations. We construct 0.3-8.0 keV X-ray luminosity functions (XLFs) for the bulge and ring XRB populations, taking into account the detection incompleteness and background AGN contamination. We reach 90% completeness limits of approx. 1.5 x 10^37 and approx. 2.2 x 10^37 erg/s for the bulge and ring populations, respectively. Both XLFs can be fit with a broken power-law model, and the shapes are consistent with those expected for populations dominated by LMXBs. We perform detailed population synthesis modeling of the XRB populations in NGC 1291, which suggests that the observed combined XLF is dominated by an old LMXB population. We compare the bulge and ring XRB populations, and argue that the ring XRBs are associated with a younger stellar population than the bulge sources, based on the relative over-density of X-ray sources in the ring, the generally harder X-ray colors of the ring sources, the overabundance of luminous sources in the combined XLF, and the flatter shape of the ring XLF.
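The broken power-law form used for the XLFs can be sketched directly; the break luminosity and slopes below are illustrative placeholders, not the fitted values from the paper:

```python
def broken_power_law(L, L_b, a1, a2, norm=1.0):
    """Differential luminosity function dN/dL with slope a1 below the
    break L_b and a2 above it, matched to be continuous at the break."""
    if L <= L_b:
        return norm * (L / L_b) ** (-a1)
    return norm * (L / L_b) ** (-a2)

# Illustrative parameters only: break at 2e38 erg/s, steepening above it
# (a flatter slope below the break is typical of LMXB-dominated XLFs).
Lb = 2e38
below = broken_power_law(1.9999e38, Lb, 1.0, 2.5)
above = broken_power_law(2.0001e38, Lb, 1.0, 2.5)
```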
A Review of Navy Stock Fund Accounting Practices for Procurements from Commercial Sources.
1987-06-01
determine if these systems can be improved through the application of accounting procedures currently in ... system(s) and the resultant need for reconciliation between levels. The various management levels of the NSF use different accounting systems to ... purposes. 3. The cost of maintaining the inventory is reflected in a single account, the Navy Stock Account (NSA), rather than in costs associated
31 CFR 585.520 - Entries in certain accounts for normal service charges authorized.
Code of Federal Regulations, 2010 CFR
2010-07-01
... account; and (2) Make book entries against any foreign currency account maintained by it with a financial... charges in connection therewith. (b) As used in this section, the term normal service charge shall include...
NASA Astrophysics Data System (ADS)
Tuttenuj, Daniel; Wetter, Oliver
2016-04-01
The methodology developed by Wetter et al. (2011) combines different documentary and instrumental sources, retaining relevant information for the reconstruction of extreme pre-instrumental flood events. These include hydrological measurements (gauges), historic river profiles (cross and longitudinal profiles), flood marks, historic city maps, documentary flood evidence (reports in chronicles and newspapers) as well as paintings and drawings. It has been shown that extreme river Rhine flood events of the pre-instrumental period can be reconstructed in terms of peak discharges for the last 750 years by applying this methodology to the site of Basel. Pfister & Wetter (2011) furthermore demonstrated that this methodology is also principally transferable to other locations and rivers in Switzerland. Institutional documentary evidence has not been systematically analysed in the context of historical hydrology in Switzerland so far. The term institutional documentary evidence generally outlines sources that were produced by governments or other (public) bodies including the church, hospitals, and the office of the bridge master. Institutional bodies were typically not directly interested in describing climate or hydrological events but they were obliged to document their activities, especially if they generated financial costs (bookkeeping), and in doing so they often indirectly recorded climatologic or hydrological events. The books of weekly expenditures of Basel ("Wochenausgabenbücher der Stadt Basel") were first analysed by Fouquet (1999). He found recurring records of wage expenditures for a squad of craftsmen that was called up onto the bridge with the task of preventing the bridge from being damaged by fishing out drifting logs from the flood waters. Fouquet systematically analysed the period from 1446-1542 and could prove a large number of pre-instrumental flood events of river Rhine, Birs, Birsig and Wiese in Basel. 
All in all, the weekly account books contained 54 Rhine flood events, whereas chroniclers and annalists only recorded seven floods during the same period. This is a ratio of almost eight to one. This large difference points to the significantly sharper "observation skills" of the account books towards smaller floods, which may be explained by the fact that bridges can be endangered by relatively small floods because of driftwood, whereas it is known that chroniclers or annalists were predominantly focussing on spectacular (extreme) flood events. We [Oliver Wetter and Daniel Tuttenuj] are now able to present first preliminary results of reconstructed peak water levels and peak discharges of pre-instrumental river Aare-, Emme-, Limmat-, Reuss-, Rhine- and Saane floods. These first results clearly show the strengths as well as the limits of the data and method used, depending mainly on the river types. Of the above-mentioned rivers, only the floods of river Emme could not be reconstructed, whereas the long-term development of peak water levels and peak discharges of the other rivers clearly correlates with major local and supra-regional Swiss flood corrections over time. PhD student Daniel Tuttenuj is going to present the results of river Emme and Saane, whereas Dr Oliver Wetter is going to present the results for the other rivers and gives a first insight into long-term recurrence periods of smaller river Birs, Birsig, Rhine and Wiese flood events based on the analysis of the weekly account books "Wochenausgabenbücher der Stadt Basel" (see Abstract Oliver Wetter).
12 CFR 563c.3 - Qualification of public accountant.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 12 Banks and Banking 5 2010-01-01 2010-01-01 false Qualification of public accountant. 563c.3... REQUIREMENTS Form and Content of Financial Statements § 563c.3 Qualification of public accountant. (See also 17 CFR 210.2-01.) The term “qualified public accountant” means a certified public accountant or licensed...
12 CFR 561.54 - United States Treasury Time Deposit Open Account.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 12 Banks and Banking 6 2012-01-01 2012-01-01 false United States Treasury Time Deposit Open Account. 561.54 Section 561.54 Banks and Banking OFFICE OF THRIFT SUPERVISION, DEPARTMENT OF THE TREASURY... Open Account. The term United States Treasury Time Deposit Open Account means a non-interest-bearing...
ERIC Educational Resources Information Center
Armitage, Andrew
2011-01-01
Mainstream accounting historians study accounting in terms of its progressive development of instrumental techniques and practices, this being counterpoised to critical accounting that sees the world as socially constructed, and intrinsically linked to organisational, social and political contexts. This is exemplified by the notion of the…
ERIC Educational Resources Information Center
Carnegie, Garry D.; West, Brian
2011-01-01
Accounting is a practical discipline, existing to satisfy particular human needs which are usually depicted in terms of decision-making processes and accountability evaluations. Proposals for how accounting education may be infused with learning from the "real-world" contexts in which it operates are always welcome. However, as the…
12 CFR 563c.3 - Qualification of public accountant.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 12 Banks and Banking 5 2011-01-01 2011-01-01 false Qualification of public accountant. 563c.3... REQUIREMENTS Form and Content of Financial Statements § 563c.3 Qualification of public accountant. (See also 17 CFR 210.2-01.) The term “qualified public accountant” means a certified public accountant or licensed...
12 CFR 563c.3 - Qualification of public accountant.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 12 Banks and Banking 6 2013-01-01 2012-01-01 true Qualification of public accountant. 563c.3... REQUIREMENTS Form and Content of Financial Statements § 563c.3 Qualification of public accountant. (See also 17 CFR 210.2-01.) The term “qualified public accountant” means a certified public accountant or licensed...
12 CFR 563c.3 - Qualification of public accountant.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 12 Banks and Banking 6 2014-01-01 2012-01-01 true Qualification of public accountant. 563c.3... REQUIREMENTS Form and Content of Financial Statements § 563c.3 Qualification of public accountant. (See also 17 CFR 210.2-01.) The term “qualified public accountant” means a certified public accountant or licensed...
12 CFR 563c.3 - Qualification of public accountant.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 12 Banks and Banking 6 2012-01-01 2012-01-01 false Qualification of public accountant. 563c.3... REQUIREMENTS Form and Content of Financial Statements § 563c.3 Qualification of public accountant. (See also 17 CFR 210.2-01.) The term “qualified public accountant” means a certified public accountant or licensed...
Hopkins, Jim
2016-01-01
The main concepts of the free energy (FE) neuroscience developed by Karl Friston and colleagues parallel those of Freud's Project for a Scientific Psychology. In Hobson et al. (2014) these include an innate virtual reality generator that produces the fictive prior beliefs that Freud described as the primary process. This enables Friston's account to encompass a unified treatment—a complexity theory—of the role of virtual reality in both dreaming and mental disorder. In both accounts the brain operates to minimize FE aroused by sensory impingements—including interoceptive impingements that report compliance with biological imperatives—and constructs a representation/model of the causes of impingement that enables this minimization. In Friston's account (variational) FE equals complexity minus accuracy, and is minimized by increasing accuracy and decreasing complexity. Roughly the brain (or model) increases accuracy together with complexity in waking. This is mediated by consciousness-creating active inference—by which it explains sensory impingements in terms of perceptual experiences of their causes. In sleep it reduces complexity by processes that include both synaptic pruning and consciousness/virtual reality/dreaming in REM. The consciousness-creating active inference that effects complexity-reduction in REM dreaming must operate on FE-arousing data distinct from sensory impingement. The most relevant source is remembered arousals of emotion, both recent and remote, as processed in SWS and REM on “active systems” accounts of memory consolidation/reconsolidation. Freud describes these remembered arousals as condensed in the dreamwork for use in the conscious contents of dreams, and similar condensation can be seen in symptoms. Complexity partly reflects emotional conflict and trauma. This indicates that dreams and symptoms are both produced to reduce complexity in the form of potentially adverse (traumatic or conflicting) arousals of amygdala-related emotions. 
Mental disorder is thus caused by computational complexity together with mechanisms like synaptic pruning that have evolved for complexity-reduction; and important features of disorder can be understood in these terms. Details of the consilience among Freudian, systems consolidation, and complexity-reduction accounts appear clearly in the analysis of a single fragment of a dream, indicating also how complexity reduction proceeds by a process resembling Bayesian model selection. PMID:27471478
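The decomposition invoked above is the standard variational form, written here for a generic generative model p(o, s) over outcomes o and hidden states s with recognition density q(s) (this is textbook material, not a result of the paper):

```latex
F \;=\; \underbrace{D_{\mathrm{KL}}\!\left[\,q(s)\,\middle\|\,p(s)\,\right]}_{\text{complexity}}
\;-\; \underbrace{\mathbb{E}_{q(s)}\!\left[\ln p(o \mid s)\right]}_{\text{accuracy}} ,
```

so free energy falls either by making predictions more accurate or by keeping the posterior model no more complex than the data warrant; the waking/sleep asymmetry described above maps onto these two routes.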
Li, Lei; Wang, Tie-yu; Wang, Xiaojun; Xiao, Rong-bo; Li, Qi-feng; Peng, Chi; Han, Cun-liang
2016-04-15
Based on comprehensive consideration of soil environmental quality, the pollution status of the river, environmental vulnerability, and the stress of pollution sources, a technical method was established for classifying priority areas for soil environmental protection around river-style water sources. The Shunde channel, an important drinking water source for Foshan City, Guangdong Province, was studied as a case, and a classification evaluation system was set up for it. In detail, several evaluation factors were selected according to the local natural, social and economic conditions, including the degree of heavy metal pollution in soil and sediment, soil characteristics, groundwater sensitivity, vegetation coverage, and the type and location of pollution sources. Data were mainly obtained by means of field survey, sampling analysis, and remote sensing interpretation. Afterwards, the Analytic Hierarchy Process (AHP) was adopted to decide the weight of each factor. The basic spatial data layers were set up and overlaid based on a weighted summation assessment model in a Geographical Information System (GIS), resulting in a classification map of soil environmental protection levels in the priority area of the Shunde channel. Accordingly, the area was classified into three levels, named the polluted zone, risky zone and safe zone, which accounted for 6.37%, 60.90% and 32.73% of the whole study area, respectively. The polluted and risky zones were mainly distributed in Lecong, Longjiang and Leliu towns, with pollution mainly resulting from the long-term development of aquaculture and from industries including furniture, plastic construction materials, and textiles and clothing. In accordance with the main pollution sources of the soil, targeted and differentiated strategies were put forward. The newly established evaluation method can serve as a reference for the protection and sustainable utilization of the soil environment around water sources.
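The weighted-overlay step can be sketched for a single grid cell; the weights, factor names and class thresholds below are illustrative stand-ins, not the study's AHP-derived values:

```python
# Hypothetical AHP weights over normalized (0-1) factor layers.
weights = {"heavy_metals": 0.4, "groundwater_sensitivity": 0.3,
           "pollution_source_proximity": 0.2, "vegetation_cover": 0.1}

def classify_cell(factors, weights):
    """Weighted summation of normalized factor scores for one GIS cell,
    then a three-way split into safe / risky / polluted zones
    (thresholds chosen arbitrarily for this sketch)."""
    score = sum(weights[k] * factors[k] for k in weights)
    if score < 0.33:
        return "safe"
    if score < 0.66:
        return "risky"
    return "polluted"

cell = {"heavy_metals": 0.9, "groundwater_sensitivity": 0.8,
        "pollution_source_proximity": 0.7, "vegetation_cover": 0.2}
zone = classify_cell(cell, weights)
```

In a real GIS workflow the same summation runs per raster cell, producing the classification map described above.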
The effectiveness of tobacco sales ban to minors: the case of Finland
Rimpela, A; Rainio, S
2004-01-01
Objective: To evaluate the effects of the 1977 and 1995 tobacco sales bans on tobacco acquisition of minors. Design: Biennial nationwide postal surveys (adolescent health and lifestyle survey, AHLS) in 1977–2003; annual classroom surveys (school health promotion survey, SHPS) in 1996–2003. Setting and participants: Entire Finland—12, 14, 16, and 18 year olds (AHLS, n = 80 282); eighth and ninth graders (14–16 year olds) (SHPS, n = 226 681). Main outcome measures: Purchase of tobacco from commercial sources during the past month, purchase from different commercial (shop, kiosk, other outlet) and social sources, ease of buying tobacco, overall acquisition of tobacco products, daily smoking, tobacco experimenting. Results: Decrease in tobacco purchase from commercial sources was small and short term after 1977 but large and permanent after 1995: purchase rate among 14 year old smokers diminished from 90% to 67% in 2003, 16 year olds from 94% to 62%. Purchases in shops decreased most (14 year olds: from 39% to 14%; 16 year olds: from 76% to 27%); purchases in kiosks less. An increase was observed in obtaining tobacco from other outlets and friends (social sources). Only 2–3% of 14–16 year old smokers used commercial sources exclusively when obtaining tobacco. Daily smoking began to decrease after 2001, following an earlier decrease in those experimenting. No changes were observed among age groups not targeted by the ban. Conclusions: Legislation appears to have permanently changed tobacco sales practices and decreased purchases from commercial sources. Social sources need to be taken into account when controlling access to tobacco. Sales bans should be accompanied by other health promotion measures. PMID:15175535
40 CFR 97.251 - Establishment of accounts.
Code of Federal Regulations, 2010 CFR
2010-07-01
... (CONTINUED) FEDERAL NOX BUDGET TRADING PROGRAM AND CAIR NOX AND SO2 TRADING PROGRAMS CAIR SO2 Allowance... establish a compliance account for the CAIR SO2 source for which the certificate of representation was... transferring CAIR SO2 allowances. An application for a general account may designate one and only one CAIR...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-12-23
... Accounting Framework for Biogenic CO2 Emissions from Stationary Sources (September 2011). DATES: The... teleconferences to discuss draft responses to charge questions on EPA's draft Accounting Framework for Biogenic CO... (OAP) in EPA's Office of Air and Radiation requested SAB review of the draft report and accounting...
Piecewise synonyms for enhanced UMLS source terminology integration.
Huang, Kuo-Chuan; Geller, James; Halper, Michael; Cimino, James J
2007-10-11
The UMLS contains more than 100 source vocabularies and is growing via the integration of others. When integrating a new source, the source terms already in the UMLS must first be found. The easiest approach to this is simple string matching. However, string matching usually does not find all concepts that should be found. A new methodology, based on the notion of piecewise synonyms, for enhancing the process of concept discovery in the UMLS is presented. This methodology is supported by first creating a general synonym dictionary based on the UMLS. Each multi-word source term is decomposed into its component words, allowing for the generation of separate synonyms for each word from the general synonym dictionary. The recombination of these synonyms into new terms creates an expanded pool of matching candidates for terms from the source. The methodology is demonstrated with respect to an existing UMLS source. It shows a 34% improvement over simple string matching.
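The decompose-substitute-recombine idea can be sketched in a few lines; the synonym dictionary here is a toy example, not UMLS content:

```python
from itertools import product

# Toy general synonym dictionary (illustrative entries only).
SYN = {
    "kidney": {"kidney", "renal"},
    "failure": {"failure", "insufficiency"},
}

def piecewise_synonyms(term):
    """Decompose a multi-word source term into words, expand each word
    with its synonyms, and recombine -- yielding an expanded pool of
    matching candidates beyond the exact string."""
    words = term.lower().split()
    choices = [sorted(SYN.get(w, {w})) for w in words]
    return {" ".join(combo) for combo in product(*choices)}

candidates = piecewise_synonyms("kidney failure")
# Each candidate is then looked up in the UMLS alongside the original
# string, which is how the method finds concepts plain matching misses.
```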
Comparisons of thermospheric density data sets and models
NASA Astrophysics Data System (ADS)
Doornbos, Eelco; van Helleputte, Tom; Emmert, John; Drob, Douglas; Bowman, Bruce R.; Pilinski, Marcin
During the past decade, continuous long-term data sets of thermospheric density have become available to researchers. These data sets have been derived from accelerometer measurements made by the CHAMP and GRACE satellites and from Space Surveillance Network (SSN) tracking data and related Two-Line Element (TLE) sets. These data have already resulted in a large number of publications on physical interpretation and improvement of empirical density modelling. This study compares four different density data sets and two empirical density models, for the period 2002-2009. These data sources are the CHAMP (1) and GRACE (2) accelerometer measurements, the long-term database of densities derived from TLE data (3), the High Accuracy Satellite Drag Model (4) run by Air Force Space Command, calibrated using SSN data, and the NRLMSISE-00 (5) and Jacchia-Bowman 2008 (6) empirical models. In describing these data sets and models, specific attention is given to differences in the geometrical and aerodynamic satellite modelling, applied in the conversion from drag to density measurements, which are main sources of density biases. The differences in temporal and spatial resolution of the density data sources are also described and taken into account. With these aspects in mind, statistics of density comparisons have been computed, both as a function of solar and geomagnetic activity levels, and as a function of latitude and local solar time. These statistics give a detailed view of the relative accuracy of the different data sets and of the biases between them. The differences are analysed with the aim of providing rough error bars on the data and models and pinpointing issues which could receive attention in future iterations of data processing algorithms and in future model development.
NASA Astrophysics Data System (ADS)
Crochet, M. W.; Gonthier, K. A.
2013-12-01
Systems of hyperbolic partial differential equations are frequently used to model the flow of multiphase mixtures. These equations often contain sources, referred to as nozzling terms, that cannot be posed in divergence form, and have proven to be particularly challenging in the development of finite-volume methods. Upwind schemes have recently shown promise in properly resolving the steady wave solution of the associated multiphase Riemann problem. However, these methods require a full characteristic decomposition of the system eigenstructure, which may be either unavailable or computationally expensive. Central schemes, such as the Kurganov-Tadmor (KT) family of methods, require minimal characteristic information, which makes them easily applicable to systems with an arbitrary number of phases. However, the proper implementation of nozzling terms in these schemes has been mathematically ambiguous. The primary objectives of this work are twofold: first, an extension of the KT family of schemes is proposed that formally accounts for the nonconservative nozzling sources. This modification results in a semidiscrete form that retains the simplicity of its predecessor and introduces little additional computational expense. Second, this modified method is applied to multiple, but equivalent, forms of the multiphase equations to perform a numerical study by solving several one-dimensional test problems. Both ideal and Mie-Grüneisen equations of state are used, with the results compared to an analytical solution. This study demonstrates that the magnitudes of the resulting numerical errors are sensitive to the form of the equations considered, and suggests an optimal form to minimize these errors. Finally, a separate modification of the wave propagation speeds used in the KT family is also suggested that can reduce the extent of numerical diffusion in multiphase flows.
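The homogeneous scalar backbone of the KT family can be sketched as follows: a semidiscrete Kurganov-Tadmor scheme for Burgers' equation with minmod-limited reconstruction and forward-Euler time stepping. This is a minimal single-phase sketch, without the nonconservative nozzling-source extension the paper develops:

```python
import numpy as np

def minmod(a, b):
    """Minmod slope limiter, applied componentwise."""
    return np.where(a * b > 0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def kt_rhs(u, dx, f=lambda u: 0.5 * u**2, fprime=np.abs):
    """Semidiscrete KT right-hand side for u_t + f(u)_x = 0 on a
    periodic grid. Only the maximal local wave speed is needed -- the
    minimal characteristic information the abstract refers to."""
    up = np.roll(u, -1)                                 # u_{j+1}
    um = np.roll(u, 1)                                  # u_{j-1}
    s = minmod((u - um) / dx, (up - u) / dx)            # limited slopes
    uL = u + 0.5 * dx * s                               # left state at j+1/2
    uR = np.roll(u - 0.5 * dx * s, -1)                  # right state at j+1/2
    a = np.maximum(fprime(uL), fprime(uR))              # local speeds
    H = 0.5 * (f(uL) + f(uR)) - 0.5 * a * (uR - uL)     # KT numerical flux
    return -(H - np.roll(H, 1)) / dx

# Burgers' equation on a periodic domain; a shock forms from smooth data.
N = 200
dx, dt = 1.0 / N, 2e-4                                  # CFL ~ 0.1
u = np.sin(2 * np.pi * np.linspace(0, 1, N, endpoint=False)) + 1.5
for _ in range(1000):
    u = u + dt * kt_rhs(u, dx)
```

The nozzling extension would add the (discretized) nonconservative source to this right-hand side; how that source is discretized is exactly the point the paper formalizes.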
46 CFR 403.400 - Uniform pilot's source form.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 46 Shipping 8 2011-10-01 2011-10-01 false Uniform pilot's source form. 403.400 Section 403.400... ACCOUNTING SYSTEM Source Forms § 403.400 Uniform pilot's source form. (a) Each Association shall record pilotage transactions on a form approved by the Director. The approved form shall be issued to pilots by...
48 CFR 9904.412-30 - Definitions.
Code of Federal Regulations, 2010 CFR
2010-10-01
... Section 9904.412-30 Federal Acquisition Regulations System COST ACCOUNTING STANDARDS BOARD, OFFICE OF FEDERAL PROCUREMENT POLICY, OFFICE OF MANAGEMENT AND BUDGET PROCUREMENT PRACTICES AND COST ACCOUNTING STANDARDS COST ACCOUNTING STANDARDS 9904.412-30 Definitions. (a) The following are definitions of terms...
48 CFR 9904.413-30 - Definitions.
Code of Federal Regulations, 2010 CFR
2010-10-01
... Section 9904.413-30 Federal Acquisition Regulations System COST ACCOUNTING STANDARDS BOARD, OFFICE OF FEDERAL PROCUREMENT POLICY, OFFICE OF MANAGEMENT AND BUDGET PROCUREMENT PRACTICES AND COST ACCOUNTING STANDARDS COST ACCOUNTING STANDARDS 9904.413-30 Definitions. (a) The following are definitions of terms...
Duan, Baoling; Liu, Fenwu; Zhang, Wuping; Zheng, Haixia; Zhang, Qiang; Li, Xiaomei; Bu, Yushan
2015-01-01
Heavy metals (HMs) in sewage sludge have become a crucial limiting factor for its land application. Samples were collected and analyzed from 32 wastewater treatment plants (WWTPs) in Shanxi Province, China, and HM levels in the sludge were assessed. Principal component analysis (PCA), a multivariate statistical method, was applied to identify the sources of HMs in the sludge; pollution classes based on the geochemical accumulation index Igeo and correlation analyses between HMs were also computed. Mean HM concentrations decreased in the order Zn > Cu > Cr > Pb > As > Hg > Cd, and the maximum concentrations of all HMs were within the limits permitted by the Chinese discharge standard. By Igeo class, Cu and Hg pollution were the highest, Cd and Cr were moderate, and Zn, As, and Pb were the lowest. PCA resolved the sources of HM contamination into three components: the primary source, accounting for 35.7% of the total variance, was identified as smelting industry, coking plants, and traffic; the second, accounting for 29.0% of the total variance, was household and water supply pollution; the smallest, accounting for 16.2% of the total variance, was special industries such as leather tanning, textile manufacturing, and chemical processing. Source apportionment of HMs in sewage sludge can help control HM contamination by suggesting improvements in government policies and industrial processes. PMID:26690464
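As a rough illustration of the two analysis tools named in the abstract, the sketch below computes the geoaccumulation index (the standard Müller formulation, assumed here) and a correlation-matrix PCA on synthetic stand-in data; the variance fractions and metal loadings are what one inspects to attribute metals to sources. All numbers are synthetic, not the study's data.

```python
import numpy as np

def igeo(conc, background):
    # Mueller geoaccumulation index: Igeo = log2(Cn / (1.5 * Bn)),
    # where the factor 1.5 corrects for natural background fluctuation
    return np.log2(conc / (1.5 * background))

# Synthetic stand-in for the study's data: rows = 32 WWTP sludge samples,
# columns = 7 heavy-metal concentrations (Zn, Cu, Cr, Pb, As, Hg, Cd).
rng = np.random.default_rng(0)
X = rng.lognormal(mean=1.0, sigma=0.5, size=(32, 7))

# PCA on standardized data (correlation matrix), the usual choice when
# variables span very different concentration scales.
Z = (X - X.mean(axis=0)) / X.std(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.cov(Z, rowvar=False))
order = np.argsort(eigvals)[::-1]              # eigh returns ascending order
eigvals, loadings = eigvals[order], eigvecs[:, order]

explained = eigvals / eigvals.sum()            # variance fraction per component
# Components with large `explained` values and coherent metal loadings are
# read off as putative emission sources, as in the abstract's three components.
```

On the real data, the first three `explained` entries would correspond to the 35.7%, 29.0%, and 16.2% variance fractions reported above.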
SINQ layout, operation, applications and R&D to high power
NASA Astrophysics Data System (ADS)
Bauer, G. S.; Dai, Y.; Wagner, W.
2002-09-01
Since 1997, the Paul Scherrer Institut (PSI) has been operating a 1 MW class research spallation neutron source, SINQ. SINQ is driven by a cascade of three accelerators, the final stage being a 590 MeV isochronous ring cyclotron that delivers a beam current of 1.8 mA at an rf frequency of 51 MHz. Because for neutron production this is essentially a dc device, SINQ is a continuous neutron source and its design is optimized for high time-averaged neutron flux. This makes the facility similar to a research reactor in terms of utilization, but, in terms of beam power, it is by a large margin the most powerful spallation neutron source currently in operation worldwide. As a consequence, target load levels in SINQ are beyond the realm of existing experience, demanding a careful approach to the design and operation of a high-power target. While the best neutronic performance of the source is expected for a liquid lead-bismuth eutectic target, no experience with such systems exists. For this reason a staged approach has been adopted, starting with a heavy-water-cooled rod target of Zircaloy-2 and proceeding via steel-clad lead rods toward the final goal of a target optimized in both neutronic performance and service lifetime. Experience currently accruing with a test target containing sample rods of different materials will help select the proper structural material and make dependable lifetime estimates that account for the real operating conditions in the facility. In parallel, theoretical and experimental work is under way within the MEGAPIE (MEGAwatt Pilot Experiment) project, a joint initiative of six European research institutions together with JAERI (Japan), DOE (USA) and KAERI (Korea), to design, build, operate and explore a liquid lead-bismuth spallation target for 1 MW of beam power, taking advantage of the existing spallation neutron facility SINQ.
Long-term financing needs for HIV control in sub-Saharan Africa in 2015–2050: a modelling study
Atun, Rifat; Chang, Angela Y; Ogbuoji, Osondu; Silva, Sachin; Resch, Stephen; Hontelez, Jan; Bärnighausen, Till
2016-01-01
Objectives To estimate the present value of current and future funding needed for HIV treatment and prevention in 9 sub-Saharan African (SSA) countries that account for 70% of the HIV burden in Africa under different scenarios of intervention scale-up; to analyse the gaps between current expenditures and funding obligations; and to discuss the policy implications of future financing needs. Design We used the Goals module from Spectrum and applied the most up-to-date cost and coverage data to provide a range of estimates for future financing obligations. The four scale-up scenarios vary by treatment initiation threshold and service coverage level. We compared the model projections to current domestic and international financial sources available in the selected SSA countries. Results In the 9 SSA countries, the estimated resources required for HIV prevention and treatment in 2015–2050 range from US$98 billion, to maintain current coverage levels for treatment and prevention with eligibility for treatment initiation at a CD4 count of <500/mm3, to US$261 billion if treatment were extended to all HIV-positive individuals and prevention scaled up. With the addition of new funding obligations for HIV, which arise implicitly through the commitment to achieve higher than current treatment coverage levels, overall financial obligations (the sum of debt levels and the present value of the stock of future HIV funding obligations) would rise substantially. Conclusions Investing upfront in the scale-up of HIV services to achieve high coverage levels will reduce HIV incidence and future prevention and treatment expenditures by realising the long-term preventive effects of ART on HIV transmission. Future obligations are too substantial for most SSA countries to meet from domestic sources alone. New sources of funding, in addition to domestic sources, include innovative financing. Debt sustainability for a sustained HIV response is an urgent imperative for affected countries and donors.
PMID:26948960
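A minimal sketch of the present-value arithmetic underlying "funding obligations" above: discounting a projected stream of annual expenditures to today at a constant rate. The cost figures and the 3% discount rate below are assumptions for the sketch, not values taken from the study.

```python
def present_value(annual_costs, rate):
    """PV of a cost stream, where annual_costs[t] is incurred t years from now."""
    return sum(c / (1.0 + rate) ** t for t, c in enumerate(annual_costs))

# e.g. a flat US$5 bn/year programme over 2015-2050 (36 years), 3% discounting
costs = [5.0] * 36                      # US$ billions per year (illustrative)
pv = present_value(costs, 0.03)         # present value in US$ billions
```

Because later years are discounted more heavily, scenarios that front-load prevention spending (and thereby cut future treatment costs) can lower the present value of total obligations even when nominal outlays rise, which is the paper's core argument for upfront investment.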
NASA Astrophysics Data System (ADS)
Destouni, G.
2008-12-01
Continental fresh water transports and loads excess nutrients and pollutants from various land surface sources, through the landscape, into downstream inland and coastal water environments. Our ability to understand, predict and control the eutrophication and pollution pressures on inland, coastal and marine water ecosystems relies on our ability to quantify these mass flows. This paper synthesizes a series of hydro-biogeochemical studies of nutrient and pollutant sources, transport-transformations and mass flows in catchment areas across a range of scales, from continental, through regional and national, to individual drainage basin scales. Main findings on continental scales include correlations between country/catchment area, population and GDP and the associated pollutant and nutrient loading, which differ significantly between world regions at different development levels. On regional scales, essential systematic near-coastal gaps are identified in the national monitoring of nutrient and pollutant loads from land to the sea. Combining the unmonitored near-coastal area characteristics with the relevant regional correlations of nutrient and pollutant loads with these characteristics shows that the unmonitored nutrient and pollutant mass loads to the sea may often be as large as, or greater than, the monitored river loads. Process studies on individual basin scales show long-term nutrient and pollutant memories in the soil-groundwater systems of the basins, which may continue to uphold large mass loading to inland and coastal waters long after mitigation of the sources.
Finally, linked hydro-biogeochemical-economic model studies demonstrate significant comparative advantages for policies that explicitly and quantitatively account for the uncertainties implied by these monitoring gaps, the long-term nutrient-pollution memories and time lags, and other knowledge, data and model limitations, over the now-common neglect or subjective, implicit handling of such uncertainties in strategies and practices for combating water pollution and eutrophication.
Framework for Assessing Biogenic CO2 Emissions from Stationary Sources
This revision of the 2011 report, Accounting Framework for Biogenic CO2 Emissions from Stationary Sources, evaluates biogenic CO2 emissions from stationary sources, including a detailed study of the scientific and technical issues associated with assessing biogenic carbon dioxide...
Low birth weight and air pollution in California: Which sources and components drive the risk?
Laurent, Olivier; Hu, Jianlin; Li, Lianfa; Kleeman, Michael J; Bartell, Scott M; Cockburn, Myles; Escobedo, Loraine; Wu, Jun
2016-01-01
Intrauterine growth restriction has been associated with exposure to air pollution, but there is a need to clarify which sources and components are most likely responsible. This study investigated the associations between low birth weight (LBW, <2500 g) in term-born infants (≥37 gestational weeks) and air pollution by source and composition in California over the period 2001-2008. Complementary exposure models were used: an empirical Bayesian kriging model for the interpolation of ambient pollutant measurements; a source-oriented chemical transport model (using California emission inventories) that estimated fine and ultrafine particulate matter (PM2.5 and PM0.1, respectively) mass concentrations (4 km × 4 km) by source and composition; a line-source roadway dispersion model at fine resolution; and traffic index estimates. Birth weight was obtained from California birth certificate records. A case-cohort design was used: five controls per term LBW case were randomly selected (without covariate matching or stratification) from among term births. The resulting datasets were analyzed by logistic regression with a random effect by hospital, using generalized additive mixed models adjusted for race/ethnicity, education, maternal age, and household income. In total, 72,632 singleton term LBW cases were included. Term LBW was positively and significantly associated with interpolated measurements of ozone but not with total fine PM or nitrogen dioxide. No significant association was observed between term LBW and primary PM from all sources grouped together, whereas a significant positive association was observed for secondary organic aerosols. Exposures to elemental carbon (EC), nitrates, and ammonium were also positively and significantly associated with term LBW, but only for exposure during the third trimester of pregnancy. Significant positive associations were observed between term LBW risk and primary PM emitted by on-road gasoline and diesel or by commercial meat cooking sources.
Primary PM from wood burning was inversely associated with term LBW. Significant positive associations were also observed between term LBW and ultrafine particle numbers modeled with the line-source roadway dispersion model, traffic density and proximity to roadways. This large study based on complementary exposure metrics suggests that not only primary pollution sources (traffic and commercial meat cooking) but also EC and secondary pollutants are risk factors for term LBW.
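As a hedged illustration of the modelling register above (not the study's actual generalized additive mixed model, which adds a random intercept by hospital and smooth covariate terms), the sketch below fits a plain logistic regression to synthetic exposure/outcome data by gradient ascent and reports the odds ratio per unit exposure. All data and effect sizes are invented for the example.

```python
import numpy as np

# Synthetic data: the "true" exposure effect is 0.4 on the log-odds scale,
# i.e. an odds ratio of about 1.49 per 1-SD increase in exposure.
rng = np.random.default_rng(1)
n = 2000
exposure = rng.normal(size=n)                    # standardized exposure, e.g. EC
logit = -2.0 + 0.4 * exposure
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(float)

# Plain logistic regression, P(LBW=1) = sigmoid(b0 + b1*exposure),
# fit by gradient ascent on the log-likelihood (the score function).
X = np.column_stack([np.ones(n), exposure])
beta = np.zeros(2)
for _ in range(10000):
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    beta += 0.05 * X.T @ (y - p) / n             # ascend the likelihood

odds_ratio = float(np.exp(beta[1]))              # OR per unit exposure
```

A "positive and significant association" in the abstract's sense corresponds to an estimated odds ratio above 1 whose confidence interval excludes 1.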
Modeling the X-Ray Process, and X-Ray Flaw Size Parameter for POD Studies
NASA Technical Reports Server (NTRS)
Koshti, Ajay M.
2014-01-01
Nondestructive evaluation (NDE) method reliability can be determined by a statistical flaw-detection study called a probability of detection (POD) study. In many instances the NDE flaw detectability is given as a flaw size, such as crack length, where the flaw is either a crack or behaves like a crack in terms of affecting the structural integrity of the material. An alternate approach is to use a more complex flaw size parameter. The X-ray flaw size parameter given here takes into account many setup and geometric factors. The flaw size parameter relates to X-ray image contrast and is intended to have a monotonic correlation with the POD. Some factors that are not accounted for in the flaw size parameter, such as set-up parameters including X-ray energy, exposure, detector sensitivity, and material type, may be accounted for in the technique calibration and controlled to meet certain quality requirements. The proposed flaw size parameter and the computer application described here give an alternate approach to conducting POD studies. Results of the POD study can be applied to reliably detect small flaws through better assessment of the effect of interactions between various geometric parameters on flaw detectability. Moreover, a contrast simulation algorithm for a simple part-source-detector geometry using calibration data is also provided for the POD estimation.
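The link between flaw size and X-ray image contrast can be sketched with Beer-Lambert attenuation. The function below is a toy subject-contrast model for a void-like flaw; it ignores the many additional setup and geometric factors (magnification, scatter, detector response) that the paper's flaw size parameter folds in, and is an assumption-laden illustration rather than the paper's algorithm.

```python
import math

def xray_contrast(mu, thickness, flaw_depth):
    # Beer-Lambert subject contrast for a void-like flaw of depth
    # `flaw_depth` (same length units as 1/mu) in a part of total
    # `thickness`: less material along the flaw path raises intensity.
    i_sound = math.exp(-mu * thickness)                 # through sound material
    i_flaw = math.exp(-mu * (thickness - flaw_depth))   # through the flaw
    return (i_flaw - i_sound) / i_sound                 # equals exp(mu*d) - 1

# Contrast grows monotonically with flaw depth, which is the property a
# contrast-based flaw size parameter must share with the POD.
```

This monotonicity is what lets a contrast-like quantity stand in for crack length as the abscissa of a POD curve.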